#StackBounty: #iscsi #software-raid #mdadm #raid5 #open-iscsi Why does open-iscsi have 2x slower writes than Samba over 10G Ethernet?

Bounty: 50

On my local file server I have a RAID-6 array across 7 HDDs.

```
dd if=/dev/zero of=tempfile bs=1M count=2048 conv=fdatasync
```

This local speed test gives me 349 MB/s sustained write speed.

Remote writes to a Samba share from an SSD (>2 Gb/s read speed) reach 259 MB/s.
But remote writes to an iSCSI drive (using the Windows 10 iSCSI initiator) reach a mere 151 MB/s.

RAID-6 config: 128K chunk size, stripe_cache_size = 8191. The write-intent bitmap is on an SSD (Samsung 860 PRO, 4096K bitmap chunk).

The array is mounted with options: `rw,noatime,nobarrier,commit=999,stripe=128,data=writeback`
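For reference, the stripe cache setting mentioned above is applied through sysfs; `/dev/md0` below is a placeholder for the actual md device:

```shell
# Raise the RAID-5/6 stripe cache (value is in pages per member device).
# md0 is a placeholder - substitute your array's name.
echo 8191 > /sys/block/md0/md/stripe_cache_size

# Verify the current value.
cat /sys/block/md0/md/stripe_cache_size
```

A larger stripe cache lets md coalesce partial-stripe writes into full-stripe writes, which reduces the parity read-modify-write cycles discussed below.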

open-iscsi setup: the target is backed by a 4 TB file.

Any hints as to why iSCSI is slower than Samba on writes?
Any hints on how to improve iSCSI write speed?

I assume it has something to do with the iSCSI target flushing writes to disk after each operation, which increases write amplification on RAID-6 due to excessive parity rewrites. But I am not sure how to fix it. Speed is more important than the safety of in-flight data in case of a power outage.
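If the target side is LIO (the in-kernel target most current distributions pair with an open-iscsi initiator; this is an assumption about the setup), a fileio backstore can be created in write-back mode via targetcli. The backstore name and image path below are placeholders:

```shell
# Create a fileio backstore with write-back caching enabled.
# "tank" and /srv/iscsi/disk.img are placeholder names.
targetcli /backstores/fileio create name=tank \
    file_or_dev=/srv/iscsi/disk.img size=4T write_back=true

# For an existing backstore, caching is exposed as an attribute:
targetcli /backstores/fileio/tank set attribute emulate_write_cache=1
```

Note that write-back mode trades durability for speed: writes are acknowledged before they reach the disk, matching the stated preference for speed over power-loss safety.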

As a side note, the older ietd iSCSI target could enable write-back mode (using IOMode=wb), and sustained write speed was much faster. Unfortunately, it appears to be unmaintained at present.
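For comparison, the old IET configuration that enabled write-back looked roughly like this in `/etc/iet/ietd.conf` (the target IQN and path are illustrative):

```
Target iqn.2019-01.local.fileserver:storage
    Lun 0 Path=/srv/iscsi/disk.img,Type=fileio,IOMode=wb
```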

