#StackBounty: #bash #shell #scripting #fzf Dynamically update fzf items

Bounty: 50

I am writing a script that searches the system for files and runs some sanity checks on each one; the files that pass should be displayed in fzf. When an item is selected in fzf, I want to open that file with a program.

So far I have:

dir="/path/to/dir"

fd . "$dir" --size +1MB | while read -r line; do
    file_type=$(file -b "$line")
    echo "$line" | fzf
    if [[ "$file_type" == "data" ]]; then
        echo "$file_type"
    fi
done

Basically, I search for files bigger than 1MB in the specified dir. For each file I run the file command and check if the command output returns "data". If it does, I want to add it to the fzf list.
Then, when I select an item, I want to open the file, passing its full path to an app: e.g. myapp /path/to/file/selected/in/fzf.

The problem so far is that fzf is blocking, and I can’t populate the list from inside the loop. Do I really have to collect everything into an array first and then pipe that into fzf? Ideally the search and the display should run in parallel: new items should appear in fzf on the fly while the search is still going, instead of waiting for it to complete.

I also don’t know how to run the selected file afterwards.
Can someone help me with this?
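One possible answer, as a hedged sketch: fzf reads its stdin incrementally, so piping the filter loop straight into it already shows matches while fd is still searching; no array is needed. Here `myapp` stands in for the real program:

```shell
#!/bin/bash
# Sketch, assuming fd, file, and fzf are installed; "myapp" is a
# placeholder for the program that should open the selection.
dir="${1:-/path/to/dir}"

# keep only files whose `file -b` output reports "data"
filter_data_files() {
    while IFS= read -r f; do
        if [ "$(file -b "$f")" = data ]; then
            printf '%s\n' "$f"
        fi
    done
}

# interactive part, guarded so the sketch is safe to source elsewhere;
# fzf displays items as they arrive, while fd is still running
if command -v fd > /dev/null 2>&1 && command -v fzf > /dev/null 2>&1 && [ -t 1 ]; then
    selected=$(fd . "$dir" --size +1MB | filter_data_files | fzf)
    [ -n "$selected" ] && myapp "$selected"
fi
```

The key point is that the whole pipeline streams: fzf starts immediately and keeps accepting new lines until fd's side of the pipe closes.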


Get this bounty!!!

#StackBounty: #linux #bash #package-management Distribution-agnostic package search and installation

Bounty: 50

I’m trying to figure out a distribution-agnostic way to install packages providing specific executables or files. I know that it is impossible to get this perfect, but I think I have something that is almost good enough and I was hoping that maybe someone has an idea on how to improve upon it.

Basically I’ve written a script that provides an abstraction layer around the most common package managers:

commandAvailable() { command -v $1 &> /dev/null; }

if commandAvailable dnf; then
    updatePackageInfo() { dnf check-update; }
    searchPath() { dnf provides $1 2> /dev/null | grep ' : ' | head -1 | cut -d'.' -f1 | rev | cut -d'-' -f 2- | rev; }
    searchBin() { searchPath {/bin,/sbin,/usr/bin,/usr/sbin}/$1; }
    install() { dnf install -y $@; }
elif commandAvailable yum; then
    updatePackageInfo() { yum check-update; }
    searchPath() { yum provides $1 2> /dev/null | grep ' : ' | head -1 | cut -d'.' -f1 | rev | cut -d'-' -f 2- | rev; }
    searchBin() { searchPath {/bin,/sbin,/usr/bin,/usr/sbin}/$1; }
    install() { yum install -y $@; }
elif commandAvailable apt-get; then
    updatePackageInfo() { apt-get update && if ! commandAvailable apt-file; then install apt-file; fi && apt-file update; }
    searchPath() { apt-file search $1 | head -1 | cut -d':' -f1; }
    searchBin() { searchPath {/bin,/sbin,/usr/bin,/usr/sbin}/$1; }
    install() { apt-get install -y $@; }
elif commandAvailable pacman; then
    updatePackageInfo() { pacman -Sy && pacman -Fy; }
    searchPath() { pacman -F $1 | head -1 | rev | cut -d' ' -f2 | rev; }
    searchBin() { pacman -F $1 | grep -B 1 -P "    (usr/bin|usr/sbin|bin|sbin)/$1" | head -1 | cut -d' ' -f1; }
    install() { pacman -S --noconfirm $@; }
elif commandAvailable zypper; then
    updatePackageInfo() { zypper refresh; }
    searchPath() { zypper search -f $1 | grep " | package" | head -1 | tr -d ' ' | cut -d'|' -f2; }
    searchBin() { searchPath {/bin,/sbin,/usr/bin,/usr/sbin}/$1; }
    install() { zypper --non-interactive install "$@"; }
elif commandAvailable emerge; then
    updatePackageInfo() { emerge-webrsync -v && if ! commandAvailable e-file; then install app-portage/pfl; fi; }
    searchPath() { e-file $1 | grep -P "([I]| * )" | sed 's/*//g' | sed 's/[I]//g' | tr -d ' '; }
    searchBin() { searchPath /usr/bin/$1; searchPath /usr/sbin/$1; searchPath /bin/$1; searchPath /sbin/$1; }
    install() { emerge $@; }
fi

searchBins() { for executable in "$@"; do searchBin "$executable"; done | tr "\n" " "; echo; }
searchPaths() { for path in "$@"; do searchPath "$path"; done | tr "\n" " "; echo; }
installPkgWithPath() { install $(searchPath "$1"); }
installPkgsWithPaths() { install $(searchPaths $@); }
installPkgWithExecutable() { install $(searchBin $1); }
installPkgsWithExecutables() { install $(searchBins $@); }

The functions it creates can be used like this:

updatePackageInfo                                  # Equivalent to apt-get update

installPkgWithPath "curl/curl.h"                   # Installs the package containing the header file curl/curl.h

installPkgsWithPaths "curl/curl.h" "/usr/bin/wget" # Installs multiple packages by file paths

installPkgWithExecutable curl                      # Install the package that provides the `curl` executable

installPkgsWithExecutables curl wget make          # Install all packages required to get these 3 executables

This basically works fine on Fedora, RHEL, Debian, Arch, Gentoo, openSUSE (and presumably many more). But it does not work on Ubuntu, for example, because Ubuntu doesn’t ship the required apt-file package unless you enable the Universe repository that provides it.
Another thing that doesn’t work is installing things like vlc in Fedora. On Fedora you would usually enable the RPM Fusion repositories for that.
And I’m sure other distributions have similar situations.

What I have considered is maybe adding a --force flag that, when set, causes these repositories to be searched and added if needed. But I would hate to end up in a situation where I have to maintain lists of repositories. My hope is that most distributions somehow reference these semi-trusted repositories in a way that I don’t have to maintain a list of repositories for every distribution.
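For prototyping that --force idea, the hook could live next to the existing per-manager branches. A minimal sketch, assuming add-apt-repository (from software-properties-common) on Ubuntu and the documented RPM Fusion release package on Fedora; both repo names and URLs would still need per-release testing:

```shell
#!/bin/sh
# Sketch of an optional enableExtraRepos() per backend; the repositories
# wired in here are assumptions, picked as the "semi-trusted" extras
# each distribution documents, not an exhaustive or verified list.
commandAvailable() { command -v "$1" > /dev/null 2>&1; }

if commandAvailable apt-get; then
    enableExtraRepos() {
        # Universe ships apt-file on Ubuntu; needs software-properties-common
        commandAvailable add-apt-repository && add-apt-repository -y universe
        apt-get update
    }
elif commandAvailable dnf; then
    enableExtraRepos() {
        # RPM Fusion free repo, installed as documented on rpmfusion.org
        dnf install -y \
            "https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm"
    }
fi
```

A --force flag in the script would then just call enableExtraRepos (if defined) before retrying the search, keeping the repository knowledge in one place per backend.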

Any ideas?



#StackBounty: #apt #bash #symbolic-link #alias #apt-file How do I find the package providing an alias?

Bounty: 50

Using apt-file I am able to find the packages providing certain executables like this for example:

sudo apt-file search {/bin,/sbin,/usr/bin/,/usr/sbin}/wget

Well actually:

sudo apt-file search {/bin,/sbin,/usr/bin/,/usr/sbin}/wget | grep "/wget$"

(Because otherwise it would just return all packages containing executables starting with wget.)

Now I was running:

EXEC_NAME="x86_64-w64-mingw32-g++"
sudo apt-file search {/bin,/sbin,/usr/bin/,/usr/sbin}/${EXEC_NAME} | grep "${EXEC_NAME}$"

And surprisingly it doesn’t return anything. Why? Because no package provides a file with that name.

If I run:

EXEC_NAME="x86_64-w64-mingw32-g++"
sudo apt-file search {/bin,/sbin,/usr/bin/,/usr/sbin}/${EXEC_NAME}

I get the following result:

g++-mingw-w64-x86-64-posix: /usr/bin/x86_64-w64-mingw32-g++-posix
g++-mingw-w64-x86-64-win32: /usr/bin/x86_64-w64-mingw32-g++-win32

implying there is no package providing x86_64-w64-mingw32-g++.

But after a while I found that g++-mingw-w64-x86-64-posix doesn’t just provide the executable x86_64-w64-mingw32-g++-posix, but also an alias or symlink called x86_64-w64-mingw32-g++.

In this case it was easy to figure out because the package happened to contain another binary with a very similar name. Now my issue is that I need to automate this in a way that works for any alias/symlink, even for ones that have a completely different name.

How can I do this?

Edit:

The alias is created in the file g++-mingw-w64-x86-64-posix.postinst of the g++-mingw-w64-x86-64-posix package, in case that helps:

update-alternatives --install /usr/bin/x86_64-w64-mingw32-g++ x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix 30 \
  --slave /usr/bin/x86_64-w64-mingw32-c++ x86_64-w64-mingw32-c++ /usr/bin/x86_64-w64-mingw32-c++-posix
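Since the alias is registered by update-alternatives in the maintainer scripts, one hedged approach is to download a candidate package and grep its control scripts for an --install line registering the wanted name. A sketch, assuming apt-get download and dpkg-deb are available; the match is purely textual, so treat it as a heuristic:

```shell
#!/bin/sh
# has_alias: read maintainer-script text on stdin; succeed if it
# registers $1 as an update-alternatives name (textual heuristic).
has_alias() {
    grep -q "update-alternatives .*--install .*[ /]$1 "
}

# driver sketch; set FETCH=1 to actually download (needs network and
# apt tooling). PKG/ALIAS are the illustrative names from this question.
if [ "${FETCH:-0}" = 1 ]; then
    PKG="g++-mingw-w64-x86-64-posix"
    ALIAS="x86_64-w64-mingw32-g++"
    tmp=$(mktemp -d)
    ( cd "$tmp" && apt-get download "$PKG" ) &&
        dpkg-deb --ctrl-tarfile "$tmp"/*.deb |
        tar -xO ./postinst 2> /dev/null |
        has_alias "$ALIAS" && echo "$PKG registers $ALIAS"
    rm -rf "$tmp"
fi
```

The download step is the expensive part; it would have to run once per candidate package returned by apt-file, so this only makes sense as a fallback after the direct path search fails.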



#StackBounty: #bash #shell-script #chroot chroot fails to be executed more than once in a while loop

Bounty: 100

Description

The very same loop containing a chroot command can be executed in a terminal, but cannot be executed from within a shell script.

Reproduction

  1. Create a basic (or copy of your) rootfs in /mnt/myrootfs
  2. Create a file in /mnt/myrootfs/tmp/hello.sh (and make it executable) with the following contents:
    #!/bin/bash
    echo "i am exiting."
    exit 
    
  3. Create the following script (./chroot-poll.sh):
    #!/bin/bash
    while sleep 1; do 
        echo "chrooting into the target..."
        sleep 1    
        sudo chroot /mnt/myrootfs /bin/bash --rcfile /tmp/hello.sh
    done
    

Result

Console output is as follows:

$ ./chroot-poll.sh 
chrooting into the target...
i am exiting
chrooting into the target...

[1]+  Stopped                 ./chroot-poll.sh

Why is this stopping? Bringing it to the foreground with fg makes it iterate once more, then it stops again.

Running within a terminal works:

Copying the contents of ./chroot-poll.sh and pasting directly into the terminal works as expected:

$ while sleep 1; do      echo "chrooting into the target...";     sleep 1    ;     sudo chroot /mnt/myrootfs /bin/bash --rcfile /tmp/hello.sh; done
chrooting into the target...
i am exiting
chrooting into the target...
i am exiting
chrooting into the target...
i am exiting
chrooting into the target...
i am exiting
chrooting into the target...
^C

Question

Why do the contents of a script work when pasted into a terminal, while the script itself fails to execute?
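A workaround sketch, assuming the stops are job-control related: `--rcfile` makes the chrooted bash *interactive*, so it competes with the script's process group for terminal ownership and gets suspended with SIGTTIN/SIGTTOU; pasted into a terminal, the loop itself owns the terminal, so the conflict never arises. Running the file as a plain non-interactive script avoids the interactive shell entirely:

```shell
#!/bin/bash
# chroot_once runs a script inside the rootfs with a non-interactive
# shell, so no terminal-ownership fight can suspend the outer loop.
chroot_once() {
    # $1 = rootfs path, $2 = script path inside the chroot
    sudo chroot "$1" /bin/bash "$2"
}

# intended use (needs root and the rootfs, hence the guard):
if [ -d /mnt/myrootfs ] && [ "$(id -u)" -eq 0 ]; then
    while sleep 1; do
        echo "chrooting into the target..."
        chroot_once /mnt/myrootfs /tmp/hello.sh
    done
fi
```

If an interactive shell is genuinely needed, the usual trick is to give it the terminal explicitly (e.g. run the loop under a terminal multiplexer), but the non-interactive form above is the simpler fix when hello.sh just runs commands and exits.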



#StackBounty: #bash #ssh Run an alias over ssh that runs another alias via ssh

Bounty: 50

I’m trying to run an alias over ssh that runs another alias via ssh. Is this possible?

I was looking at this question about running aliases over ssh and I got this so far:

ssh develop -t /bin/bash -ic "gotoserver"

gotoserver is an alias that runs:

ssh -o StrictHostKeyChecking=no -l user 10.10.10.10

This all works and I end up on 10.10.10.10. But I’m looking to run another alias on 10.10.10.10, so I tried this:

ssh develop -t /bin/bash -ic "gotoserver -t /bin/bash -ic 'loaddocker'"

But it’s not working. It seems to stop parsing after gotoserver.
I still end up on 10.10.10.10, but the rest of the command (-t /bin/bash -ic 'loaddocker') seems to be ignored.

But if I log into develop and run:

gotoserver -t /bin/bash -ic 'loaddocker'

It works and loaddocker is executed.

What am I doing wrong? And can I do this another way without changing anything in develop and 10.10.10.10?
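A likely cause, offered as a hedged diagnosis: the local shell strips the double quotes, so develop's shell sees /bin/bash -ic gotoserver -t /bin/bash -ic 'loaddocker', and bash takes only gotoserver as its -c command string, treating the rest as positional parameters. Each shell hop consumes one layer of quoting; this can be demonstrated locally with plain bash -c, no ssh needed:

```shell
#!/bin/bash
# Each `bash -c` strips one layer of quoting, just like each remote
# shell in a chained ssh does.
inner='printf %s inner-ran'
outer="bash -c '$inner'"
result=$(bash -c "$outer")
echo "$result"    # inner-ran

# Applied to the question (untested sketches): either add an extra
# quoting layer so develop's bash -ic receives the whole command as
# one string, e.g.
#   ssh develop -t "/bin/bash -ic 'gotoserver -t /bin/bash -ic \"loaddocker\"'"
# or bypass the aliases entirely with ProxyJump (OpenSSH 7.3+):
#   ssh -t -J develop -o StrictHostKeyChecking=no user@10.10.10.10 /bin/bash -ic loaddocker
```

The ProxyJump form avoids nesting altogether, which tends to be far easier to maintain than stacking quote layers.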



#StackBounty: #python #bash #pdf #mojibake How to identify likely broken pdf pages before extracting its text?

Bounty: 50

TL;DR

My workflow:

  1. Download PDF
  2. Split it into pages using pdftk
  3. Extract text of each page using pdftotext
  4. Classify text and add metadata
  5. Send it to client in a structured format

I need the extracted text to be consistent to go from step 3 to step 4. If the text is garbled, I have to OCR its page; but OCRing all pages is out of the question. How can I identify beforehand which pages should be OCRed? I’ve tried running pdffonts and pdftohtml on each page. Isn’t it expensive to call subprocess.run twice per page?

What do I mean by broken page?

A PDF page whose text cannot be extracted from its source, for example due to a broken or missing ToUnicode mapping.

Description

I’m building an application that relies on the extraction of text from a thousand PDF files every day. The layout of text in each PDF is somewhat structured, therefore calling pdftotext from python works well in most cases. But, some PDF files from one or two resources bring pages with problematic fonts, which results in garbled text. I think that using OCR only on problematic pages would be ok to overcome such an issue. So, my problem is how to identify, before extracting text, which pages are likely to result in gibberish.

First, I tried to identify garbled text after extracting it, using regex (\p{Cc} or unlikely chars outside the Latin alphabet), but it did not work because I also found corrupted text made of valid chars and numbers, e.g. AAAAABS12 54c] $( JJJJ Pk.

Second, I tried to identify garbled text by calling pdffonts on each page and parsing its output for the font name, encoding, embeddedness, and existence of a ToUnicode map. In my tests, that works reasonably well. But I also found it necessary to count how many chars used likely problematic fonts, and pdftohtml (which can display each text block in a p tag along with its font name) saved the day here. @LMC helped me to figure out how to do it, take a look at the answer. The bad part is that I ended up calling subprocess.run two times for each pdf page, which is super expensive. It would be cheaper if I could just bind those tools.

I’d like to know if it’s possible and feasible to look at the PDF source and validate the CMap (uni yes and not a custom font), if present, or to use other heuristics to find problematic fonts before extracting text or OCRing it.

Example of garbled text in one of my PDF files:

0n1n2n3n4n2n0n3n0n5 6n6nÿn89 ÿn4nx0en3nÿnx0fx10nx11nx12nÿn5nÿn6n6nx13nx11nx11nx146n2n2nx15nx11nx16nx12nx15nx10nx11nx0enx11nx17nx12nx18nx0enx17nx19x0enx1anx16n2 x11nx10nx1bx12nx1cnx10nx10nx15nx1d29 2nx18nx10nx16n89 x0enx14nx13nx14nx1enx14nx1fn5 x11x1fnx15nx10n! x1cn89 x1fn5n3n4n"n1n1n5 x1cn89n#x15nx1dx1fn5n5n1n3n5n$n5n1 5n2n5n%8&&#'#(8&)n*+n'#&*,nÿn(*ÿn-n./0)n1n*n*//#//8&)n*ÿn#/2#%)n*,nÿn(*/ÿn/#&3#40)n*/ÿn#50&*-n.()n%)n*)n/ÿn+nÿn*#/#n&x19nx12nÿnx1cÿn,x1dnx12nx1bx10nx15nx116nÿnx15n7nÿn8n9n4n6nÿn%x10nx15nx11nx166nÿn:x12x10;n2n*,n%#26nÿn<n$n3n0n3n+n3n8n3nÿn+nÿn=x15nx10n6nÿn>n9n0n?nÿn4n3n3n1n+n8n9n3n<n@AnBnCnDnEÿnGHnInÿnJnJnKnLnJnMnJnNnOnPnOnQnIn#x1bÿn0n1nÿnx1cnx10nÿn*x1anx16nx18nÿnx1cnx10nÿn0n3n0n5nx0en/x10nx15nx13x16nx12nÿn/x10nx16nx1dx1cx16nx12n6nÿn* x19nx15nx116nÿnx12nx19nx11nx19nx12nx16nÿnx15ÿn/*-nx0enÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿn(x10nÿx16nx1cnx10nx1bÿnx1cnx12nÿn%x13nx10n9nx10nÿnx1cnx10nÿn'x12nx1ax15nx10nx11nx10nÿnx1cnx12nÿn%x16nx16nx10nRnx10nx1cx16nx12nÿn'x10nx16nx12nx18nÿnx1cnx12nÿn-nx19x11n1nx12nÿnx1cÿn#x11nx12nx1cÿnx1cnx10nÿn*x18nx12nRx126nÿn/x16nx12nx0en& x10nx12nx15nx12nÿn%x10nx18x11nx16nx10nÿn:x12x13nx12nx1cx0enÿn*x19nx11nx19nx10n+x10nÿnx10nÿn&x10nRx11nx16nx10n+x10nÿnx15ÿn/*-n2n2'<nÿn+nÿn#Snx11nx16nx12nx17nx19nx1c x12nx18nÿn*x1cnx1bx15x11nx16nx12nx11nx1dx0enÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿnÿn*x11nx10nx15 x12nx1bx10nx15nx11nx10n6nTUnVnWUnXÿnYXÿnTUnVnWnXnXYZUn[UnT\]X\UnWnXnVDn^n_n`nÿnabnÿnXGbncnE^ndnOnPnOnQnPnenOnfnPnfnJnfnPnengnGbnh_nEGIniaAnYjTknXlm@ YjTknXlmX] ]jTk@[Yj] UnZk]UnZUn] X]noUnWnX] W@Vn\nX]nÿn89nÿn89np ÿnqn(x10x14nx12x13n8rnIOVx11x03x14n(VWHx03GRFXPHQWRx03px03FySLDx03GRx03RULJLQDOx03DVVLQDGRx03GLJLWDOPHQWHx03SRUx03(00$18(/$x030$5,$x03&$/$'2x03'(x03)$5,$6x036,/9$x11x033DUDx03FRQIHULUx03Rx03RULJLQDOx0fx03DFHVVHx03Rx03VLWHx03x0fx03LQIRUPHx03Rx03SURFHVVRx03x13x13x13x13x16x17x18x10x1ax18x11x15x13x15x14x11x1bx11x13x15x11x13x13x1ax16x03Hx03Rx03nFyGLJRx03x17(x14x14x16x14x13x11x03

The text above was extracted from page 25 of this document using pdftotext.

For that page, pdffonts outputs:

name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
[none]                               Type 3            Custom           yes no  no      13  0
DIIDPF+ArialMT                       CID TrueType      Identity-H       yes yes yes    131  0
DIIEDH+Arial                         CID TrueType      Identity-H       yes yes no     137  0
DIIEBG+TimesNewRomanPSMT             CID TrueType      Identity-H       yes yes yes    142  0
DIIEDG+Arial                         CID TrueType      Identity-H       yes yes no     148  0
Arial                                TrueType          WinAnsi          yes no  no     159  0

It’s easy to identify that [none]-named font as problematic. My take so far, given the data I’ve analysed, is to mark fonts with Custom or Identity-H encoding, no ToUnicode map, or a [none] name as likely problematic. But, as I said, I also found problematic fonts that had a ToUnicode table and a non-Custom encoding. As far as I know, it’s also possible that a single char of a broken font appears on a page without affecting its overall readability, so OCRing that page might be unnecessary. In other words, if a font on a given page has no ToUnicode conversion, it does not mean that the whole text of the page is affected.

I’m looking for a solution that is better than regexing for garbled text.
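The heuristics above can at least be packaged so the decision is made from pdffonts output alone. A sketch; the triggers ([none] name, Custom/Identity-H encoding, missing ToUnicode) are the ones discussed here, so false positives and negatives are expected:

```shell
#!/bin/sh
# page_is_risky: classify a page as worth OCRing from the output of
# `pdffonts -f N -l N file`. Fields are counted from the end of each
# row because font type names ("CID TrueType", "Type 3") contain spaces.
page_is_risky() {
    awk '
        NR > 2 && NF >= 7 {
            enc = $(NF - 5)           # encoding column
            uni = $(NF - 2)           # "uni" (ToUnicode) column
            if ($1 == "[none]" || uni == "no" || enc == "Custom" || enc == "Identity-H")
                risky = 1
        }
        END { exit !risky }
    '
}

# driver sketch (assumes pdfinfo/pdffonts from poppler-utils):
#   pages=$(pdfinfo "$f" | awk '/^Pages:/ {print $2}')
#   for p in $(seq 1 "$pages"); do
#       pdffonts -f "$p" -l "$p" "$f" | page_is_risky && echo "OCR page $p"
#   done
```

This still costs one pdffonts call per page, but drops the second subprocess entirely for pages whose font table already looks clean.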

Examples of PDF pages that I had to OCR

All the pages below contain text in Portuguese, but if you try to copy the text and paste it somewhere you will see universal gibberish.

What I’ve done so far

I’ve avoided calling subprocess twice per page by creating a bash script that iterates over the pages and merges the pdftohtml and pdffonts output for each one into a single HTML document:

#!/bin/sh

# Usage: ./font_report.sh -a 1 -b 100 -c foo.pdf


while getopts "a:b:c:" arg; do
    case $arg in
        a) FIRST_PAGE=$OPTARG;;
        b) LAST_PAGE=$OPTARG;;
        c) FILENAME=$OPTARG;;
        *)
            echo 'Error: invalid options' >&2
            exit 1
    esac
done

: ${FIRST_PAGE:?Missing -a}
: ${LAST_PAGE:?Missing -b}
: ${FILENAME:?Missing -c}

if ! [ -f "$FILENAME" ]; then
    echo "Error: $FILENAME does not exist" >&2
    exit 1
fi

echo "<html xmlns='http://www.w3.org/1999/xhtml' lang='' xml:lang=''>" ;

for page in $(seq "$FIRST_PAGE" "$LAST_PAGE")
do
   {
       echo "<page number=$page>" ;
       echo "<pdffonts>" ;
       pdffonts -f "$page" -l "$page" "$FILENAME" ;
       echo "</pdffonts>" ;
       (
           pdftohtml -f "$page" -l "$page" -s -i -fontfullname -hidden "$FILENAME" -stdout |
           tail -n +35 |  # skips head tag and its content
           head -n -1     # skips html ending tag
        ) ;
       echo "</page>"
    }
done

echo "</html>"

The code above lets me call subprocess once and parse the resulting HTML with lxml, page by page (via the <page> tag). But I still need to look at the text content to get an idea of whether it is broken.



#StackBounty: #bash #permissions #kvm #qemu #libvirt Clean way of running virt-install with an iso file that's in your home directo…

Bounty: 50

I have a script that automatically creates and runs a VM. That script is used by many people. You basically call the script giving it some information like what PCI or USB devices you want to pass through and which iso to use to install the OS and then the script runs sudo qemu-system-x86_64 with the appropriate parameters.

So if you break it down, you could currently call my script like this:

./create-vm.sh /home/me/os-images/windows10.iso

And this works fine.

But now I want to take it a step further and use sudo virt-install ... instead of sudo qemu-system-x86_64 ... and that is causing major issues because with virt-install it can’t access the iso file anymore. Presumably because it drops its root privileges and uses the qemu user even if I run it with sudo…

So now I have to make a difficult decision:

  • Do I move the iso file to /var/lib/libvirt/images? (No because the user might need that file in the exact location where it is right now.)
  • Do I copy the iso to /var/lib/libvirt/images? (No because the user might not have enough disk space and it just seems like a waste of resources.)
  • Do I set user = root or user = me in /etc/libvirt/qemu.conf? (No, because that is a global setting that might mess up other qemu stuff the user is doing. – I have tried it though and it causes libvirtd.service to crash.)
  • Do I add the group of the iso file to the qemu user? (No, because that could have unwanted side effects, potentially giving qemu more access in situations where the user wouldn’t want it. – Nevertheless, I’ve tried it and it didn’t work, presumably some SElinux magic is blocking it…)
  • Do I change the owner of the iso file to qemu? (No, because that might have unwanted side effects. – Besides that, when I try it I still get permission denied errors, probably because of SElinux.)
  • Do I mount the iso and make the mountpoint available to the qemu user? (No, because iso files can be very complex and some data will not be available in the mountpoint.)
  • Do I mount the folder containing the iso? (No because the iso file would still have the same owner/group.)

I just can’t seem to find a good solution. What am I supposed to do now? I really need some of the functionality that virt-install offers over qemu-system-x86_64.

Note: In reality there is not just one iso image, but also a floppy image, some other iso files containing drivers and an ACPI table file. I get permission errors for all of these files from virt-install.
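One more option, not in the list above: leave ownership and location alone and grant the qemu user read access through POSIX ACLs. A minimal sketch, assuming the libvirt process user is "qemu" (on Debian/Ubuntu it is "libvirt-qemu") and that the filesystem supports ACLs; SELinux systems may additionally need a readable label on the file:

```shell
#!/bin/sh
# Hedged sketch: grant the libvirt/qemu user read access via POSIX ACLs
# instead of moving the iso or changing its owner/group.
grant_read() {
    # $1 = user, $2 = file; every parent dir also needs search (x)
    d=$(dirname "$2")
    while [ "$d" != "/" ] && [ "$d" != "." ]; do
        setfacl -m "u:$1:x" "$d" || return 1
        d=$(dirname "$d")
    done
    setfacl -m "u:$1:r" "$2"
}

# usage sketch:
#   grant_read qemu /home/me/os-images/windows10.iso
# undo later with:
#   setfacl -x u:qemu /home/me/os-images/windows10.iso   # and the dirs
```

Because ACLs are additive and reversible, this avoids most of the side effects listed above; the script could grant access before virt-install and revoke it afterwards. The same treatment would apply to the floppy image, driver isos, and ACPI table file.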


