#StackBounty: #io #iotop Why are processes blocked by I/O in case of heavy system load?

Bounty: 200

I have a workstation (2× Intel Xeon CPUs and 128 GiB of RAM) running several virtual machines. Although the combined CPU usage is below 30%, the load average is between 20 and 25. For example, if I execute tar -xzvf vm_data.tgz --directory vm4/ --strip-components=1, the gzip process spends 90–99% of its time blocked on I/O and the command takes forever to complete:

[iotop screenshot: gzip shown 90–99% blocked on I/O]

On the other hand, the actual reads and writes to disk are very low compared to the hardware limits of SATA 3.0 or the SSD itself (a single Kingston SA400S37960G).

What might cause a process (gzip in my example) to wait on I/O while the actual disk reads and writes appear to be very low? My first thought was that system interrupts might be very high and that this was what was blocking the I/O, but according to /proc/interrupts this does not seem to be the case, as none of the counters are increasing rapidly.
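If the load average really is I/O-driven, the extra load should show up as tasks in uninterruptible sleep ("D" state), since Linux counts those toward the load average even though they use no CPU. A minimal, Linux-specific sketch for checking this directly from /proc (process names containing spaces may confuse the simple field parsing):

```shell
#!/bin/sh
# Number of tasks currently blocked waiting on I/O:
awk '/^procs_blocked/ {print "tasks blocked on I/O:", $2}' /proc/stat

# List the blocked tasks themselves: in /proc/<pid>/stat, field 2 is
# the command name in parentheses and field 3 is the state.
for st in /proc/[0-9]*/stat; do
    awk '$3 == "D" {print "pid " $1, $2}' "$st" 2>/dev/null
done
```

If these show many tasks stuck in "D" state while iostat reports low throughput, the bottleneck is per-request latency on the device rather than bandwidth.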


Get this bounty!!!

#StackBounty: #android #ios #file #io #operating-system How can I read & write one file at same time by different threads on Androi…

Bounty: 50

I have lots of small files. To save file handles and improve I/O efficiency, these files are packed into one big file. However, for some reason, the small files must also be updatable at runtime, so different threads need to update and read different parts of the single big file at the same time.

Because of the memory limit, mmap is not a good choice, so I have to implement it myself. But I’m concerned about whether it is safe to read and write different parts of a single file at the same time on iOS/Android. How can I make sure that a block which is being written is not read by another thread?

Should I implement the whole feature with thread locks, or is there a mature technique that does the same job?
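The OS primitive that usually underlies this pattern is positional I/O — pread/pwrite on POSIX systems, or the offset-taking FileChannel.read(ByteBuffer, long) in Java — where each operation names its own offset instead of sharing a file position. A rough shell illustration of touching one region of a pack file without disturbing the rest (pack.bin and the offsets are made up for the demo; coordinating readers and in-progress writers still needs application-level locking, e.g. one lock per block):

```shell
#!/bin/sh
# Create a 4 KiB sparse "pack" file standing in for the big single file.
dd if=/dev/zero of=pack.bin bs=1 count=0 seek=4096 2>/dev/null

# Write 4 bytes at offset 1024 without truncating the rest of the file
# (conv=notrunc is what makes this a positional write).
printf 'DATA' | dd of=pack.bin bs=1 seek=1024 conv=notrunc 2>/dev/null

# Read the same region back by offset; prints DATA.
dd if=pack.bin bs=1 skip=1024 count=4 2>/dev/null
echo
```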


#StackBounty: #performance #io #scheduler How to get I/O priority to work on Ubuntu?

Bounty: 50

Ubuntu has ionice, but as far as I can tell, it does absolutely nothing.

I suspect this is because Ubuntu replaced cfq with deadline and deadline doesn’t support priorities.

Is there any possible way to have prioritized I/O on Ubuntu anymore?

EDIT: The context is that I have a database restore that easily consumes all my I/O and renders my system unusable until it has finished. I’d like it to remain usable for other tasks.
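One way to approach this, sketched below: ionice priorities are only honoured by a priority-aware scheduler (cfq, or bfq on newer kernels), so the first step is to check what each disk is actually using. Device names, the scheduler available on your kernel, and the restore command are all placeholders here:

```shell
#!/bin/sh
# Which scheduler is each block device using?  The bracketed entry is
# the active one; deadline / mq-deadline / none ignore ionice entirely.
grep . /sys/block/*/queue/scheduler 2>/dev/null || true  # glob may be empty in containers

# To switch a disk back to a priority-aware scheduler (as root):
#   echo cfq > /sys/block/sda/queue/scheduler   # sda is an example name

# Then run the heavy job in the "idle" I/O class so it only gets disk
# time when nobody else is waiting (demonstrated on a harmless command):
ionice -c 3 echo "restore would run here"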


#StackBounty: #lvm #performance #io #ssd #iostat NVMe disk shows 80% io utilization, partitions show 0% io utilization

Bounty: 50

I have a CentOS 7 server (kernel 3.10.0-957.12.1.el7.x86_64) with 2 NVMe disks with the following setup:

# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1       259:0    0   477G  0 disk
├─nvme0n1p1   259:2    0   511M  0 part  /boot/efi
├─nvme0n1p2   259:4    0  19.5G  0 part
│ └─md2         9:2    0  19.5G  0 raid1 /
├─nvme0n1p3   259:7    0   511M  0 part  [SWAP]
└─nvme0n1p4   259:9    0 456.4G  0 part
  └─data-data 253:0    0 912.8G  0 lvm   /data
nvme1n1       259:1    0   477G  0 disk
├─nvme1n1p1   259:3    0   511M  0 part
├─nvme1n1p2   259:5    0  19.5G  0 part
│ └─md2         9:2    0  19.5G  0 raid1 /
├─nvme1n1p3   259:6    0   511M  0 part  [SWAP]
└─nvme1n1p4   259:8    0 456.4G  0 part
  └─data-data 253:0    0 912.8G  0 lvm   /data

Our monitoring and iostat continually show nvme0n1 and nvme1n1 at 80%+ I/O utilization, while the individual partitions show 0% utilization and are nowhere near their limits (roughly 250k IOPS and 1 GB/s of reads/writes).

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7.14    0.00    3.51    0.00    0.00   89.36

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme1n1           0.00     0.00    0.00   50.50     0.00   222.00     8.79     0.73    0.02    0.00    0.02  14.48  73.10
nvme1n1p1         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme1n1p2         0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.00    0.02    0.00    0.02   0.01   0.05
nvme1n1p3         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme1n1p4         0.00     0.00    0.00    1.00     0.00     4.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1           0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.73    0.02    0.00    0.02  14.77  73.10
nvme0n1p1         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1p2         0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.00    0.02    0.00    0.02   0.01   0.05
nvme0n1p3         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1p4         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md2               0.00     0.00    0.00   48.50     0.00   214.00     8.82     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    1.00     0.00     4.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00

Any idea what the root cause of this behavior might be? Everything seems to be working fine, except that monitoring keeps triggering high-I/O alerts.
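For context on where that number comes from: iostat derives %util from the io_ticks counter (field 13 of /proc/diskstats), the milliseconds during which the device had at least one request in flight. How that counter is maintained can differ between a whole device and its partitions depending on the kernel, and for NVMe drives that serve many requests in parallel, a high %util does not by itself mean the device is saturated. A quick look at the raw counters (the nvme pattern is taken from the question and simply matches nothing on machines without NVMe):

```shell
#!/bin/sh
# Print the io_ticks counter (field 13) for the devices in question;
# field 3 of /proc/diskstats is the device name.
awk '$3 ~ /^nvme[01]n1/ {print $3, "io_ticks_ms=" $13}' /proc/diskstats
```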


How to read a file in Java

A function like this is usually not recommended for reading huge files, because the JVM may not be able to allocate that much contiguous memory for the resulting string.

Avoid using it for large inputs where possible.

// Needs: import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException;
public static String getFile(String filepath)
{
    StringBuilder output = new StringBuilder();
    // try-with-resources closes the reader even if readLine() throws
    try (BufferedReader bfr = new BufferedReader(new FileReader(filepath)))
    {
        String line;
        while ((line = bfr.readLine()) != null)
        {
            output.append(line).append('\n');  // was "n": a missing backslash
        }
    }
    catch (IOException e)  // also covers FileNotFoundException
    {
        e.printStackTrace();
    }
    return output.toString();
}