#StackBounty: #data-recovery #trim Is it possible to recover data from a Western Digital TRIM-supporting disk (not SSD) after quick fo…

Bounty: 50

I have a Western Digital Blue disk, which is one of the SMR variants, and I’ve heard that these drives support the TRIM command. The disk was accidentally quick-formatted on Windows 10 and now appears to be all zeros. But I’m wondering: for a hard disk, it would take an unreasonable amount of effort to actually set all sectors (1 TB) to zero. I don’t really understand how TRIM applies to such disks, and I would expect to see something like a list of safely deletable sectors kept in the disk’s firmware instead.
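
For example, something like the following (a minimal sketch, assuming the drive shows up as /dev/sdX on a Linux machine; the device name, the skip offset, and hdparm/xxd being installed are all assumptions) should show whether the drive advertises TRIM and whether its sectors really read back as zeros:

# check whether the drive advertises TRIM support (SATA Data Set Management)
sudo hdparm -I /dev/sdX | grep -i trim

# spot-check a region of the disk: xxd -a collapses runs of identical (all-zero) lines
sudo dd if=/dev/sdX bs=1M skip=1000 count=16 status=none | xxd -a | head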

So the question is: is there any way to recover the data from my disk, including firmware or hardware tweaks?


Get this bounty!!!

#StackBounty: #data-recovery #mv #archive #corruption Failed to move a 7z Archive, now it can’t be read, but there is a fragment left i…

Bounty: 100

I was trying to move my old 7z archive from an older ext4 partition to my current one, but the move seems to have been cancelled partway through. Not only does the new location hold an incomplete archive, which can’t be read, but the old location has a 7z.part file. Is it possible to somehow continue the moving process? I can’t figure it out from the file explorer’s UI, but I was wondering if there might be a terminal command.

Using ls in the old directory shows the old 7z file, but its name appears in red, and ls claims it cannot be read (I/O error). Using ls in the new directory also shows the file in red, but without the error.

The old directory was ~/Documents/Archive/backup (on a different partition, as this was from an older installation of Linux)

[frontear@frontear-net backup]$ ls
ls: cannot access 'OneDrive.7z': Input/output error
OneDrive.7z  OneDrive.7z.part

Although ls claims a OneDrive.7z exists here, it’s actually not visible via the file explorer, even with hidden files enabled.

The current directory is ~/Desktop/Archive/backup (my current Linux install)

[frontear@frontear-net backup]$ ls
OneDrive.7z

In both listings, OneDrive.7z is shown in red, which I take to mean something, probably that it is corrupted.
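
To check whether the copy at the new location is a usable archive, something like this could be tried (only a sketch; it assumes the p7zip package, which provides the 7z command, is installed):

# test the integrity of the (possibly truncated) copy without extracting it
7z t ~/Desktop/Archive/backup/OneDrive.7z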

Running fsck from a Manjaro Live ISO yields no obvious signs of corruption. /dev/sda2 is my current partition, whereas /dev/sda3 is the old partition.

[manjaro manjaro]$ sudo fsck /dev/sda2
fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
/dev/sda2: clean, 738853/39223296 files, 76011466/156883968 blocks
[manjaro manjaro]# fsck /dev/sda3
fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
Superblock last write time is in the future.
        (by less than a day, probably due to the hardware clock being incorrectly set)
/dev/sda3: clean, 4362438/9715712 files, 18432430/38835968 blocks

Edit: Re-ran fsck with -f at the request of Timothy Baldwin:

[manjaro manjaro]# fsck -f /dev/sda2
fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
Pass 1: Checking inodes, blocks, and sizes
Inode 19400943 extent tree (at level 2) could be narrower.  Optimize<y>? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 757397/39223296 files (0.5% non-contiguous), 80084462/156883968 blocks
[manjaro manjaro]# fsck -f /dev/sda3
fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
Superblock last mount time is in the future.
        (by less than a day, probably due to the hardware clock being incorrectly set)
Superblock last write time is in the future.
        (by less than a day, probably due to the hardware clock being incorrectly set)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sda3: 4362438/9715712 files (0.1% non-contiguous), 18432430/38835968 blocks

Edit 2: Re-ran the listings with ls -l:

[frontear@frontear-net ~]$ cd ~/Documents/Archive/backup/
[frontear@frontear-net backup]$ ls -l
ls: cannot access 'OneDrive.7z': Input/output error
total 2168832
-????????? ? ?        ?                 ?            ? OneDrive.7z
-rw------- 1 frontear frontear 2220883968 Apr 30 06:46 OneDrive.7z.part
[frontear@frontear-net backup]$ cd ~/Desktop/Archive/backup/
[frontear@frontear-net backup]$ ls -l
total 2446952
-rw------- 1 frontear frontear 2220883968 Apr 29 23:13  OneDrive.7z

Edit 3: Added a smartctl check:

[manjaro@manjaro ~]$ sudo smartctl -a /dev/sda | sed -n '/Threshold/,/^$/p'
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0004   142   142   000    Old_age   Offline      -       70
  3 Spin_Up_Time            0x0007   128   128   024    Pre-fail  Always       -       177 (Average 180)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       2450
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000a   100   100   000    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0004   118   118   000    Old_age   Offline      -       33
  9 Power_On_Hours          0x0012   097   097   000    Old_age   Always       -       23216
 10 Spin_Retry_Count        0x0012   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       2388
192 Power-Off_Retract_Count 0x0032   098   098   000    Old_age   Always       -       2461
193 Load_Cycle_Count        0x0012   098   098   000    Old_age   Always       -       2461
194 Temperature_Celsius     0x0002   119   119   000    Old_age   Always       -       46 (Min/Max 20/53)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0012   097   097   000    Old_age   Always       -       23203
241 Total_LBAs_Written      0x0012   100   100   000    Old_age   Always       -       171426971200
242 Total_LBAs_Read         0x0012   100   100   000    Old_age   Always       -       228792438899

Edit 4: Added a badblocks check:

[manjaro@manjaro ~]$ sudo badblocks -sv /dev/sda
Checking blocks 0 to 976762583
Checking for bad blocks (read-only test): done                                                 
Pass completed, 0 bad blocks found. (0/0/0 errors)
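
Given the I/O error on the source file, one idea would be to salvage whatever is still readable from it and then test the result. This is only a sketch (it assumes GNU ddrescue is installed and that there is enough free space under /tmp), not something I have actually run:

# copy the readable parts of the failing source file, recording unreadable areas in a map file
sudo ddrescue ~/Documents/Archive/backup/OneDrive.7z /tmp/OneDrive.rescued.7z /tmp/OneDrive.map

# then see how much of the rescued archive 7z can still read
7z t /tmp/OneDrive.rescued.7z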


Get this bounty!!!

#StackBounty: #windows-7 #windows #ssd #data-recovery #format Recover files after deleting and merging partitions on SSD

Bounty: 50

In the Windows Installer’s Partition Manager I hit “delete” on the two logical partitions reserved for the system (the first was the ~100 MB partition created by Windows and the other was the rest of the physical drive, allocated for Windows files and other stuff).

After that, I hit “new” on the unallocated space (the whole physical capacity of the SSD). Then, after the new OS was booted, I ran a quick format on it from File Explorer.

Before doing all this, I selected everything on my Desktop (on the old OS install) and copied it to a backup folder. After I installed the new OS I realized that it hadn’t copied the files themselves, only shortcuts to the original files on the Desktop.

I haven’t touched the drive since, and as far as I know a quick format only unassigns the files while the data remains. So I have run a dozen undelete applications so far, but none of them seems to find the files from the Desktop.
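
One thing that might explain this (just a guess): on an SSD, a quick format in recent versions of Windows can also send TRIM for the whole volume, in which case the freed blocks read back as zeros and undelete tools find nothing. Whether Windows is issuing TRIM can be checked with a built-in command (a sketch, run from an elevated prompt on the new install):

rem DisableDeleteNotify = 0 means delete notifications (TRIM) are enabled
fsutil behavior query DisableDeleteNotify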

I have managed to recover files deleted like this before. What should I check to get my files back?

The old OS was Win7 and the new one is Win10, if that matters.

EDIT: Could it be that the undelete apps only find the MFT of the first partition?


Get this bounty!!!

#StackBounty: #linux #data-recovery #zfs Recover/import ZFS pool from single old mirror

Bounty: 100

I made a couple of serious errors dealing with my ZFS pools the other day (while misreading some online advice while trying to fix an error) and accidentally “created over” an existing 2-drive mirrored pool named backup. (Yes, I used the -f option after it complained. And now I know never to do this again.)

In any case, I happen to have taken out a 3rd mirrored drive from the same pool a few months back, as it was getting old and I didn’t want to wait for it to start to fail. So, I thought that I could swap this drive in and use it to restore the pool. (I’d just be missing out on the past few months of backups, which is mostly what this pool is used for.)

However, I don’t seem to be able to import the pool with this single old drive. At first, I thought it might have had to do with the name conflict with the new backup pool I accidentally created (and then destroyed). But even when trying to import via GUID, I get nothing.

Here’s the output from zdb -l /dev/sdb1 (which is the third drive):

------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'backup'
    state: 0
    txg: 0
    pool_guid: 3936176493905234028
    errata: 0
    hostid: 8323329
    hostname: [omitted]
    top_guid: 14695910886267065742
    guid: 17986383713788026938
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 14695910886267065742
        whole_disk: 0
        metaslab_array: 34
        metaslab_shift: 33
        ashift: 12
        asize: 1000197324800
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17914838236907067293
            path: '/dev/sdd1'
            whole_disk: 0
            DTL: 143
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 17986383713788026938
            path: '/dev/sdb1'
            whole_disk: 0
            DTL: 141
        children[2]:
            type: 'disk'
            id: 2
            guid: 1683783279473519399
            path: '/dev/sdc1'
            whole_disk: 0
            DTL: 145
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    create_txg: 0
    labels = 0 1 2 3 

Thus, the drive and the pool data on it seem to be intact, according to zdb. However, importing the pool (even with -f and/or -F) just gets a “cannot import… no such pool available” error. I tried using the various GUIDs in the above info too (since I wasn’t sure which GUID was the relevant one), but none of those commands (e.g., zpool import 3936176493905234028) gets anything other than the “no such pool available” message.
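
One variation I have seen suggested elsewhere is to tell zpool import exactly where to look for devices, and to ask it to list destroyed pools as well. A sketch (the pool GUID is taken from the zdb output above; the directories to scan are whatever applies on the system):

# scan a specific device directory instead of the default search path
sudo zpool import -d /dev -f 3936176493905234028

# or scan by stable device ids
sudo zpool import -d /dev/disk/by-id

# also list pools that were marked as destroyed
sudo zpool import -D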

I have installed a new version of my Linux OS since I removed that drive, so I thought using the old zpool.cache file I managed to recover from the old OS might do something. But the command zpool import -c zpool.cache just gives:

  pool: backup
     id: 3936176493905234028
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
 config:

    backup      UNAVAIL  insufficient replicas
      mirror-0  UNAVAIL  insufficient replicas
        sdd1    FAULTED  corrupted data
        sdc1    FAULTED  corrupted data

Which is somewhat to be expected. Those are the two disks where the pool was overwritten by my create command. However, sdb1 isn’t listed as a potential drive there — probably because I removed it from the pool after I took the disk out. Nevertheless, I think I have an intact copy of old mirrored data on sdb1, and zdb agrees. Why won’t it import?

Any suggestions on what else to try? Other diagnostic commands to run?


Note: I tried asking about this over at Server Fault (see link for more details about my situation), but I didn’t get any feedback and realized the specific Linux implementation may be important in figuring out how to resolve this. I would sincerely appreciate any advice or suggestions.


UPDATE: I think I may have found the problem. I thought that I had removed the spare drive before I had issued a detach command. And the fact that I was still seeing label information (when other online sources seem to indicate detach destroys the pool metadata) seemed to confirm that. I note that I’m able to simply type zdb -l backup and get label info (and get uberblock info with -u), so zfs seems to see the pool even without explicitly pointing to the device. It just doesn’t want to import it for some reason.

However, I’m no longer certain about the detach status. I came upon this old thread about recovering a ZFS pool from a detached mirror, and it makes a cryptic reference to txg having a value of zero. There are also references elsewhere to uberblocks being zeroed out upon a detach.

Well, the uberblock for my backup pool does list txg = 0 (while an active zpool I have elsewhere has large numbers in this field, not zero). And while there is an existing uberblock, there’s only one, with the others on backup listed as “invalid.” Unfortunately, I can’t seem to find much documentation of zdb’s output online.
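
For reference, the label and uberblock information mentioned above comes from zdb’s label dump; a sketch of the invocation (assuming the old mirror partition is still /dev/sdb1):

# -l dumps the on-disk vdev labels; adding -u also prints the uberblocks stored with them
sudo zdb -lu /dev/sdb1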

I assume that means the spare third drive was detached? Can anyone confirm my interpretation? And if the drive data is otherwise intact, is there any way to recover from it? While some advice online suggests a detached mirror is unrecoverable without resilvering, the thread I linked above has code for Solaris that appears to perform a rather simple fix, tricking the label into thinking the uberblock is fine. Further poking around turned up an updated Solaris version of this utility from only three years ago.

Assuming my understanding is correct and that my third mirror was detached, can I attempt a similar uberblock label fix on Linux? Is my only option to try to port the Solaris code to Linux? (I’m not sure I’m up to that.)

Honestly, given multiple references to scenarios like this online, I’m surprised at the lack of reasonable data recovery tools for ZFS. It seems there are finally some options for basic data recovery for common problems (including a possibility for recovering a pool that was written over by a create command; this doesn’t appear to be likely to work for me), but other than this one-off script for Solaris, I don’t see anything for dealing with detached devices. It’s very frustrating to realize that there are at least a dozen reasons why ZFS pools may fail to import (sometimes for trivial things that could be easily recoverable), and little in the way of troubleshooting, proper error codes, or documentation.

Again, any help, thoughts, or suggestions would be appreciated. Even if someone could recommend a better place to ask about this, I’d really appreciate it.


Get this bounty!!!

#StackBounty: #windows-10 #hard-drive #data-recovery #windows-10-v1803 #dynamic-disk unable to recover data on a dynamic disk

Bounty: 50

I had a computer with two 3TB HDDs that were Dynamic Disk mirrors of one another. I’m trying to recover data off of one of them without success.

So I plugged one of these 3TB HDDs into a USB3<->SATA adapter and, in Disk Management, the new disk shows up as an “Invalid” “Dynamic” disk. I right-click that disk, click “Reactivate Disk” in the resulting menu, and get this error:

This operation is not allowed on the invalid disk pack.

Does this mean the disk is bad? I found lots of paid data-recovery software that I could use, and maybe that really is the only option available to me, but I can’t help seriously wondering whether those results are just software vendors spamming Google.

I’m running Windows 10 Pro 64-bit Build 1803.
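
Before assuming the disk itself is bad, it may be worth seeing how diskpart reports it (a sketch using the built-in tool; the disk number 2 below is a placeholder for whatever number Disk Management shows for the USB-attached drive):

rem run from an elevated command prompt
diskpart
rem then, at the DISKPART> prompt:
list disk
select disk 2
detail disk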


Get this bounty!!!