#StackBounty: #windows-10 #hard-drive #ssd #performance How do I measure the time taken for Samsung's Intelligent TurboWrite buffer…

Bounty: 50

I have a 1 TB Samsung 860 QVO SSD, which is Samsung’s QLC NAND offering.

It has an SLC cache that is used as a buffer during writes, and I observed speeds of up to 520 MB/s for roughly the first 10 gigabytes of a sequential transfer before the write speed tanked to 75-80 MB/s.

I tried writing another file to the drive after letting it idle for more than an hour, but I’m still seeing 80 MB/s write speeds. I also rebooted my notebook, although I’m not sure that helped with anything, as write speeds are still 80 MB/s.

I should mention that I’m running Windows 10 and this drive is used for nothing but storing data. It’s not the boot drive and it doesn’t store the pagefile or hiberfil.

Is there a way for me to see how much of the buffer is utilized at any given point, so that I only start transferring files when it’s empty?
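
Absent a vendor-provided counter, one indirect way is to time fixed-size writes and watch for the throughput cliff. Below is a minimal sketch, assuming the drive is mounted at a writable path (/mnt/qvo is a hypothetical name; on Windows this could run under WSL or Git Bash):

for i in $(seq 1 20); do
    # conv=fsync flushes before dd exits, so the page cache doesn't inflate the number;
    # the last line dd prints on stderr reports this chunk's throughput
    dd if=/dev/zero of=/mnt/qvo/chunk$i bs=1M count=1024 conv=fsync 2>&1 | tail -n 1
done
rm -f /mnt/qvo/chunk*    # clean up the test files

The chunk at which the reported rate collapses from ~520 MB/s to ~80 MB/s marks the point where the TurboWrite cache filled; repeating the run after idle periods should show roughly how quickly it drains.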


Get this bounty!!!

#StackBounty: #windows-10 #hard-drive #boot #mbr Unable to recover HDD after failed formatting

Bounty: 100

Hard drive: WD Caviar Blue 640GB

I was wiping the hard drive with Darik’s Boot and Nuke (the write-zeros method) when the power in the house went out and the computer shut down. It was at about 30% of pass 1 of 3. When I booted the computer to start the process again, the hard drive was no longer listed. I tried rebuilding and deleting the MBR using an installation disk, Hiren’s BootCD, and a bunch of other software; even Data Lifeguard says the MBR table is locked by another program. Windows Disk Management shows the disk as unknown, not initialized, with an I/O error. I tried fixing it with DISKPART but had no success. Is there any other way to fix it?
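
Before fighting the MBR any further, it may be worth checking whether the drive still responds at the hardware level, since “unknown, not initialized” plus an I/O error usually means reads are failing outright. A minimal diagnostic sketch, assuming a Linux live USB with smartmontools and that the drive shows up as /dev/sdX (a hypothetical name; check dmesg):

sudo smartctl -i -H /dev/sdX                      # identify data and overall SMART health verdict
sudo dd if=/dev/sdX of=/dev/null bs=1M count=10   # can the first sectors be read at all?

If the identify data is unreadable or dd returns I/O errors, the problem sits below the partition-table level and no MBR tool will help.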


Get this bounty!!!

#StackBounty: #boot #hard-drive #sata Slow Ubuntu Boot [ata2: SRST failed (errno=-16)] and [ata2: reset failed, giving up]

Bounty: 100

This motherboard has a triple-boot setup and boots in AHCI mode.

Here are the relevant messages (pulled from dmesg); note the long delays between their timestamps:

[    6.528119] ata2: link is slow to respond, please be patient (ready=-19)
[   11.028120] ata2: SRST failed (errno=-16)
[   16.540117] ata2: link is slow to respond, please be patient (ready=-19)
[   21.040117] ata2: SRST failed (errno=-16)
[   26.552117] ata2: link is slow to respond, please be patient (ready=-19)
[   56.072118] ata2: SRST failed (errno=-16)
[   56.072125] ata2: limiting SATA link speed to 1.5 Gbps
[   61.104117] ata2: SRST failed (errno=-16)
[   61.104142] ata2: reset failed, giving up

Everything seems to work fine on Windows; it’s just an Ubuntu issue.

All my drives seem firmly connected, with no jumpers; most are SSDs, but some are IDE. I have about six drives: one is a Hackintosh (Mac GPT) drive, another is Windows, and one SSD is connected via eSATA.

I don’t know whether ata2 is the same as /dev/sdb, but if it is, this may be interesting:

$ sudo fsck /dev/sdb                                                                                         
fsck from util-linux 2.31.1
e2fsck 1.44.1 (24-Mar-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/sdb

A commentary on the cause: https://www.redhat.com/archives/rhl-list/2006-October/msg03892.html

Because it’s a GPT Hackintosh drive, here is its partition table:

$ sudo gdisk /dev/sdb -l
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Model: ST31000524AS    
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 384E6D96-A1EE-4D32-8FE5-14B63E4BF049
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 8-sector boundaries
Total free space is 262157 sectors (128.0 MiB)

Verifying the disk passes:

$ sudo gdisk /dev/sdb
Command (? for help): v
No problems found. 262157 free sectors (128.0 MiB) available in 2
segments, the largest of which is 262151 (128.0 MiB) in size.
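
As for whether ata2 really is /dev/sdb: the resolved sysfs path of each disk contains the ATA port it hangs off, so a short loop (a minimal sketch) prints the mapping:

for d in /sys/block/sd?; do
    # the canonical sysfs path of a SATA disk contains its ataN port
    echo "${d##*/}: $(readlink -f "$d" | grep -o 'ata[0-9]*' | head -n 1)"
done

If ata2 turns out to map to a different disk than /dev/sdb, the fsck and gdisk output above is a red herring.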


Get this bounty!!!

#StackBounty: #mount #hard-drive #ntfs NTFS Drobo External HDD not mounting properly

Bounty: 50

Looked around for hours, but can’t seem to find any advice better than “reformat it with ext3”:

I have a Drobo 5D with 20+ TB of usable space, most of which is filled with very important data. It has been working flawlessly on Windows, but we’re trying to migrate our work over to Ubuntu for a variety of reasons, and the Drobo simply refuses to move with us. It’s formatted with NTFS, and I’m trying to mount it on a new Ubuntu 18.04 system.

Using drobo-utils, I can verify that the system is plugged in and looks ready to go, but when I try to mount it, the command just hangs:

sudo mount -t ntfs -o force,rw /dev/sdc2 /data/drobo

When I check the command later:

ps aux | grep mount
root     19309  0.0  0.0  72716  4280 pts/3    S    17:36   0:00 sudo mount -t ntfs -o force,rw /dev/sdc2 /data/drobo
root     19310  0.0  0.0  32448  1332 pts/3    S    17:36   0:00 mount -t ntfs -o force,rw /dev/sdc2 /data/drobo
root     19311  0.0  0.0  21428  2884 pts/3    D    17:36   0:00 /sbin/mount.ntfs /dev/sdc2 /data/drobo -o rw,force

Notice that the CPU time is still 0:00, so it doesn’t seem to be doing anything (this is an hour after issuing the command). I’ve pulled the Drobo back off of Linux and checked it out on Windows, and everything seems fine. There’s too much data on it to try reformatting at this point. Is there anything particular about NTFS that would cause this? Or is it an issue with the Drobo in general when using NTFS? Any help would be appreciated.
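
Two low-risk probes might narrow this down; note that the D in the STAT column above means uninterruptible I/O sleep, i.e. the mount is stuck waiting on the device. A sketch (the PID comes from the ps output above):

dmesg | tail -n 30            # kernel messages often say what an NTFS mount is blocked on
sudo cat /proc/19311/stack    # kernel stack of the D-state mount.ntfs process
# after a reboot, a read-only mount is a lower-risk experiment, since it
# avoids any writes to the volume:
sudo mount -t ntfs-3g -o ro /dev/sdc2 /data/drobo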


Get this bounty!!!

#StackBounty: #hard-drive #centos #ext4 Getting an “is not a block special device.” error when trying to mount an 8TB disk in CentOS 7.6

Bounty: 50

I’m trying to format and mount an 8 TB external drive as ext4, and it doesn’t seem to work right.

I can create my partition using gdisk, and lsblk shows it:

sdc               8:32   0   7.3T  0 disk
└─sdc1            8:33   0   7.3T  0 part

But when I create the filesystem, I get a weird error that says “/dev/sdc1 is not a block special device.” However, it seems to complete:

# mke2fs -t ext4 /dev/sdc1
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdc1 is not a block special device.
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
246512 inodes, 984801 blocks
49240 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1008730112
31 block groups
32768 blocks per group, 32768 fragments per group
7952 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Afterwards, though, mounting doesn’t seem to work: it mounts /dev/loop0 instead (I’m not even sure what that is), and the filesystem is the wrong size.

/dev/loop0               4.0G   16M  3.7G   1% /mnt/test

I was reading that this is apparently some new type of drive with sectors not in the normal place? Any help would be appreciated.
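
One thing worth checking first (a minimal sketch, under the assumption that the symptom is what it looks like): the mke2fs warning and the loop mount both fit /dev/sdc1 being a regular file rather than a device node, and the 984801 4 KiB blocks mke2fs reports (~3.8 GiB) match the 4.0G size of /dev/loop0 above:

ls -l /dev/sdc1           # a real partition node's mode string starts with 'b' (block device)
stat -c '%F' /dev/sdc1    # should print "block special file"
# if it turns out to be a regular file, remove it and let the kernel/udev
# recreate the real node from the partition table:
# sudo rm /dev/sdc1 && sudo partprobe /dev/sdc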


Get this bounty!!!

#StackBounty: #windows-7 #hard-drive #multi-boot #bootloader Windows 7 boot loader on a new drive?

Bounty: 50

We had a SATA3 drive that was performing poorly and making noise, so we installed Windows 7 onto a new drive. The install went well but left the bootloader on the old drive, so I could not remove the old drive; Windows treated it as a multi-boot system with a new boot option on the new disk. Everything worked, so I figured I’d have time to fix it. Well, the old drive died a few weeks later, and now I have no idea how to address this.

How can I instruct the PC to boot to the new disk when there isn’t a bootloader present on it? Do I need to reinstall Windows from scratch? From looking at the board specs, it supports UEFI; I’m unsure whether this is relevant.
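
No reinstall should be needed. A commonly cited repair sequence (a sketch, not verified on this machine) is to boot the Windows 7 install DVD, choose Repair your computer > Command Prompt, and rebuild the boot code and BCD store on the new disk:

:: write fresh MBR boot code to the disk Windows 7 lives on
bootrec /fixmbr
:: write a new boot sector to the system partition
bootrec /fixboot
:: scan all disks for Windows installations and rebuild the BCD store
bootrec /rebuildbcd

If the machine actually boots in UEFI mode, the analogous tool is bcdboot instead, but for a Windows 7-era MBR install the sequence above is the usual fix.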

Thanks!


Get this bounty!!!

#StackBounty: #hard-drive #backup #raid #lvm #cloning Backup: How to mirror/clone LVM LVs (or VGs) on demand?

Bounty: 100

So far, I have a semi-automated backup approach for the whole system, which is installed across different LVs, using LVM (CoW) snapshots (which can be created upon boot to avoid data corruption). That is, when I feel I need a backup, I run a script that mounts those snapshot LVs (in the same way the live system is mounted) read-only at some location, and then run another script that performs a compressed backup of that location, finally producing a compressed backup of the whole system as a single archive file. Neat…

Now I have a new use case for which I’m trying to find a good solution. Say I have an external HDD whose purpose is to store stuff. It is encrypted and has its own VG and LVs, each dedicated to a different type of data, e.g. pictures, videos, documents, and even those system backup archives. This gives the flexibility of choosing file systems depending on the type of data, plus all the cool features of LVM. From time to time I plug in this beast and copy more stuff onto one of these LVs, depending on what I want to store.

Next, I have yet another external HDD, which (in an ideal world) I would like to act as a mirror of the first one. Ideally, the mirroring should happen automagically whenever both devices are plugged in, while I explicitly copy stuff only onto the first one (something like CoW; maybe we should call it incremental mirroring, though I’m not sure that’s possible). The situation where the first device is connected and written to while the second one is not should also be handled: the next time the second device is connected, it should pick up not only the data being written to the first one at that moment, but also whatever it missed last time. I suspect this is why I keep saying mirroring, which is probably impossible to perform incrementally in such situations, so a complete clone overwrite may be required.

Up until now, I have considered the following solutions:

  1. LVM snapshots seem useless here:
    • they are CoW, i.e. they don’t actually maintain a complete mirror on a snapshot LV (unless all of that data changes after writing to the first device);
    • they would require spanning one VG not only over separate and independent PVs but also over separate and, technically speaking, independent physical devices (I don’t like such coupling and would like to avoid it).
  2. sector cloning via dd:
    • looks solid in terms of producing the exact clone on the second device;
    • according to LVM and cloning HDs, it is dangerous because the LVM configuration is duplicated in this case, which may lead to data corruption unless some renaming black magic is done (I don’t want to script that renaming unless we find no other solution in this thread, as it looks fragile to me);
    • smells like overkill to overwrite the whole second device every time I copy a couple of files onto one of the LVs on the first device, just to maintain a mirror;
    • not really automated, since it does not detect what changed where and copy it transparently based simply on the fact that both devices are plugged in and the first one was written to (this is what I call “on demand”).
  3. rsync of files between corresponding LVs of the two devices (see the sketch after this list):
    • the good thing is that it synchronizes only the changes and even covers the case where I missed plugging in the second device once or twice (and that’s a big plus);
    • looks convoluted;
    • requires scripting for simultaneous mounting of corresponding LVs;
    • not automated (has to be triggered manually), thus error-prone;
    • requires manual recreation of the same partitioning scheme and LVM on the second device based on the first device (yes, this could be automated, but still).
  4. LVM RAID1? I have no idea how reliable it is, or whether it is even possible to set up, with two plug-and-play devices that may or may not be plugged in and/or mounted simultaneously, and thus how it compares to the above options. Any shared experiences are much appreciated.
  5. Some other solution, maybe utilizing vgimportclone?
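
For reference, here is the kind of script I have in mind for option 3. A minimal sketch with hypothetical names (a primary VG vg_main and a backup VG vg_mirror, each carrying same-named LVs; adjust to the real layout):

#!/bin/sh
set -e
for lv in pictures videos documents; do
    src=/mnt/main/$lv dst=/mnt/mirror/$lv
    mkdir -p "$src" "$dst"
    mount "/dev/vg_main/$lv"   "$src"
    mount "/dev/vg_mirror/$lv" "$dst"
    # -aHAX preserves permissions, hard links, ACLs and xattrs;
    # --delete removes files gone from the primary, keeping a true mirror
    rsync -aHAX --delete "$src/" "$dst/"
    umount "$src" "$dst"
done

Since rsync only transfers differences, a run after a missed session simply catches the second disk up; what remain are the scripting and manual-trigger drawbacks listed above.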

I appreciate any input on this, especially solutions you have already tested or used, whether self-designed and customized by you or existing tools you have adopted for this scenario. The goal is to make it as simple, as error-proof, and as automated as possible. Thank you!


Get this bounty!!!