#StackBounty: #linux #filesystems #btrfs How to prevent attaching a disk with a btrfs partition that has same UUID as host from corrupt…

Bounty: 50

Background: The host OS is an Azure Oracle Linux 7.8 instance with its OS disk mounted via /dev/sda entries; /dev/sda2 (/) is btrfs. I have another Azure Oracle Linux 7.8 instance that is broken, so I wanted to attach its disk to debug it. Once attached to my host, because the attached disk is built from the same Oracle Linux 7.8 image, its UUIDs are identical to my host's, and this seems to create some confusion/corruption with the mounts. Below is the output of lsblk after Azure has finished attaching the disk:

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb       8:16   0    4G  0 disk
└─sdb1    8:17   0    4G  0 part /mnt
sr0      11:0    1  628K  0 rom
fd0       2:0    1    4K  0 disk
sdc       8:32   0   50G  0 disk
├─sdc15   8:47   0  495M  0 part
├─sdc2    8:34   0   49G  0 part /      <--- isn't really sdc2; it's actually mounted from sda2
├─sdc14   8:46   0    4M  0 part
└─sdc1    8:33   0  500M  0 part
sda       8:0    0  100G  0 disk
├─sda2    8:2    0   99G  0 part
├─sda14   8:14   0    4M  0 part
├─sda15   8:15   0  495M  0 part /boot/efi
└─sda1    8:1    0  500M  0 part /boot

You can see it thinks root / is mounted via /dev/sdc2, but this disk (/dev/sdc) has literally only just been attached. I can only presume the UUID conflict is causing this (could it be anything else?), but now I can’t mount the real/attached /dev/sdc2 to debug that disk, because the system thinks it’s already mounted.

Is there any way to prevent this from happening when I attach the disk?
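
Not part of the original post, but one check and a possible workaround: since both disks come from the same image, blkid should report identical btrfs UUIDs for the two root partitions, and regenerating the UUID of the unmounted copy with btrfstune usually removes the ambiguity. A minimal sketch, assuming the attached root partition really is /dev/sdc2 and is not mounted:

# confirm both root partitions carry the same UUID
sudo blkid /dev/sda2 /dev/sdc2

# regenerate the filesystem UUID on the attached copy only, never on the live root
# (some btrfs-progs versions require adding -f to force the change)
sudo btrfstune -u /dev/sdc2

# the partition can then be mounted for inspection without colliding with /
sudo mkdir -p /mnt/debug
sudo mount /dev/sdc2 /mnt/debug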


Get this bounty!!!

#StackBounty: #linux #filesystems #btrfs btrfs "unable to zero output file"

Bounty: 50

I’m trying to create a btrfs file system with a single volume from a directory ‘root’, over mtd and nandsim.

With a few token files and directories in my root directory, I successfully created and mounted the filesystem:

sudo modprobe mtdblock
sudo flash_erase /dev/mtd0 0 0
sudo mkfs.btrfs -m single -s 2048 -r root /dev/mtdblock0

All is well in the world. Now, I add in the actual contents of my root directory: a few metadata files, and just under 128k binary files of about 2 KB each. When I try the same approach again, mkfs.btrfs fails with “ERROR: unable to zero the output file”.

In the source code, the offending method at line 407 fails if either call to pwrite64() fails. I can’t see why this would fail, unless the system call has some limit on the overall size it will allow?

That said, my device is only 256 MB, on a system with plenty of RAM and disk space, so such a limit seems unlikely.
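
A sanity check that is not in the original post but may narrow things down: with -r, mkfs.btrfs has to build an image large enough to hold the whole root directory, including metadata for roughly 128k small files, so it is worth comparing what the populated tree needs against the size of the MTD device, and seeing whether the error reproduces on a plain 256 MB image file without the MTD layer involved. A rough sketch, assuming the same names as above:

# size and file count of the source tree
du -sh root
find root | wc -l

# actual size of the target device, in bytes
sudo blockdev --getsize64 /dev/mtdblock0

# try the same mkfs against a scratch 256 MB image file to rule out mtdblock itself
truncate -s 256M /tmp/btrfs-test.img
mkfs.btrfs -m single -s 2048 -r root /tmp/btrfs-test.img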

Could anyone please point me in the right direction? Have I missed some key step?

If it matters, I’m using btrfs-progs v4.15.1 on Ubuntu 18.04 (bionic), kernel 4.15.0-99-generic.


Get this bounty!!!

#StackBounty: #linux #hard-drive #partitioning #filesystems #dynamic-disk Converting Ext4 Simple/Dynamic drive back to Basic/Partition

Bounty: 50

So basically I was trying to add a fifth partition, and in the process I converted my whole disk to a dynamic disk. The disk contains both NTFS and EXT4 volumes, with Linux on dual boot. The problem is that I can’t seem to convert the EXT4 volume back to basic; I am able to convert the NTFS volumes, but doing so would leave the EXT4 volume unallocated.

  1. So is there any software, or another way, to convert the dynamic disk back to a basic disk while keeping the EXT4 partition?

Extra Information:

  1. Windows DiskPart shows the EXT4 volume as RAW; other disk management software like AOMEI Partition Assistant shows it as UNKNOWN, whereas the disk properties in My Computer show it as EXT4.
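
As a first step (not something from the original question), diskpart's disk-level view confirms what Windows currently thinks of the drive: in the list disk output, dynamic disks are flagged with an asterisk in the Dyn column, and detail disk lists every volume the selected disk carries, including the one shown as RAW. A minimal sketch, with disk 0 as a placeholder for the affected disk:

list disk
select disk 0
detail disk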


Get this bounty!!!

#StackBounty: #boot #usb #partitioning #filesystems #cd Delete CDFS volume from USB drive in windows?

Bounty: 50

I recently installed Ubuntu onto a removable USB drive. Eventually I wanted to replace that Ubuntu version with another, but the problem is that, besides the 3.7 GB of “free space” I can write the new boot ISO onto, there is about 6 MB of a “CDFS” boot file system, which prevents any other boot system from being booted from the drive.

So basically, I need to delete this CDFS volume.

Running diskpart, I do the following commands:

list volume

to which I see volume 1 is the CDFS filesystem I am trying to delete, so then

select volume 1

so far so good

but then:

delete volume

is where the trouble is, because I get the following output:

DiskPart cannot delete volumes on removable media.

I’ve tried looking this up but couldn’t find any conclusive articles, and certainly not on Stack Exchange, so:

How do I remove a CDFS filesystem from a removable USB drive, using Windows [Vista or 7]?
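
For what it's worth, and not something from the original post: while diskpart refuses to delete individual volumes on removable media, it will normally accept a disk-level clean, which wipes every partition and filesystem signature on the stick, CDFS included, after which the drive can be repartitioned and formatted. A sketch, with disk 2 as a placeholder; double-check the disk number first, because clean destroys everything on the selected disk:

list disk
select disk 2
clean
create partition primary
format fs=fat32 quick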


Get this bounty!!!

#StackBounty: #boot #filesystems #uefi #efi What Filesystems Are Supported by Lenovo And Dell Laptops on EFI System Partition?

Bounty: 50

According to the UEFI specification (13.3.1.1 File System Format) an EFI firmware must support FAT12, FAT16 and FAT32 file systems for the EFI system partition (ESP). However, the Arch Wiki states that “any conformant vendor can optionally add support for additional file systems”.

Does anyone know of a vendor supporting additional file systems, like ext2/3/4? Or does anyone even (successfully) use a journaling file system on an ESP?

In particular I am interested in Lenovo and Dell laptops from 2017 or newer.
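
One empirical way to probe a specific machine, offered only as a suggestion and not something from the question: boot into the UEFI shell, refresh the device mappings, and see whether a partition formatted with the candidate filesystem is assigned an FSx: mapping; only filesystems the firmware (or a loaded DXE driver) exposes via the Simple File System protocol get one. A minimal shell session might look like this:

Shell> map -r
Shell> FS1:
FS1:\> ls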


Get this bounty!!!

#StackBounty: #windows #windows-10 #windows-explorer #filesystems #copy-paste When copying a file/folder, strange characters are added …

Bounty: 50

My operating system is Windows 10.

When I copy a file or folder to the same location it is in, Windows automatically adds the word “copy” to the end of the file name,
which is great. But there is a strange phenomenon: two invisible characters are added at the beginning of the file name.

I pasted these characters into a binary editor, and it turned out that each of them has the code point U+200F.

The name of the character is: RIGHT-TO-LEFT MARK

And its UTF-8 byte sequence is: e2 80 8f

What to do? How can we get rid of this strange phenomenon?
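
For diagnosis, and purely as a suggestion that is not part of the original question, the affected names can be listed and cleaned up from PowerShell by matching on the U+200F code point; this is only a workaround for existing files and does not address whatever locale setting (see the code-page values in the edit below) causes the marks to be inserted in the first place. A minimal sketch:

# list files in the current folder whose names contain U+200F (RIGHT-TO-LEFT MARK)
Get-ChildItem | Where-Object { $_.Name -match [char]0x200F }

# strip the character from those names; -WhatIf previews the renames without applying them
Get-ChildItem | Where-Object { $_.Name -match [char]0x200F } |
    Rename-Item -NewName { $_.Name -replace [char]0x200F, '' } -WhatIf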

Edit, operating system details:

Version: Microsoft Windows [Version 10.0.17134.885]

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage:

    AllowDeprecatedCP    REG_DWORD    0x42414421
    ACP                  REG_SZ       1255
    OEMCP                REG_SZ       862
    MACCP                REG_SZ       10005

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion:

ReleaseId    REG_SZ    1803


Get this bounty!!!

#StackBounty: #arch-linux #filesystems #mount #nvme Operation Not Supported Error Mounting NVME Drive on Arch Install

Bounty: 50

I’m trying to install Arch on a Dell XPS 15 9560.

I’ve used nomodeset to make the text legible, and pcie_aspm=off to stop the slew of PCI bus errors, as per a suggestion on the device’s Arch Wiki page.

However, when I try to mount the drive I get a slew of errors (continuing forever):

print_req_error: operation not supported error, dev nvme0n1, sector {secnum} flags 9

Where {secnum} gradually increases; presumably it is walking through the device and retrying at every block, but I digress.

Any ideas on how to fix this? I’ve tried secure-erasing the SSD to rule out any issues there, but with no luck.
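
A few checks that are not in the original post but may help localize whether the controller, the partition table, or the filesystem is the problem, assuming nvme-cli is available on the install medium:

# kernel messages from the NVMe driver
dmesg | grep -i nvme

# drive health and controller error log
nvme smart-log /dev/nvme0
nvme error-log /dev/nvme0

# partition table and filesystem signatures as the kernel currently sees them
lsblk -f /dev/nvme0n1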


Get this bounty!!!

#StackBounty: #linux #filesystems #hadoop Hadoop file system from non-hadoop machine

Bounty: 50

I’m having a difficult time finding information on this issue because so many of the results in my searches turn up basic information about copying files from a machine that’s part of a cluster.

Problem: I have a hadoop 3 node cluster that’s running hdfs. Everything is working fine. I can use files view, I can copy files to it from Windows, I can copy files from the local filesystem to hdfs, I can view directories, create, delete, etc.

I have another machine that’s not a part of the cluster. It’s running Dremio (fwiw), and it’s also the machine that processes the files that I eventually need to copy to the hdfs file system. Dremio is working fine, but I’m trying to access the hdfs file system from this machine and I’m not entirely sure how I should be doing that correctly.

Since I’m running scripts that used to run (correctly) on a machine that was part of the cluster, I installed just the hadoop client (to get access to hdfs dfs) and updated the lines to reference the hdfs cluster (rather than assuming it was local). That command looks like this:

hdfs dfs -copyFromLocal test.txt hdfs://ws-hdfs01:50070/Data/

This exact command works just fine from the ws-hdfs01 box (removing the hdfs://ws-hdfs01:50070/), but on the machine that’s not part of the cluster I get the following error:

19/06/04 12:11:09 WARN net.NetUtils: Unable to wrap exception of type class org.apache.hadoop.ipc.RpcException: it has no (String) constructor
java.lang.NoSuchMethodException: org.apache.hadoop.ipc.RpcException.<init>(java.lang.String)
        at java.lang.Class.getConstructor0(Class.java:3082)
        at java.lang.Class.getConstructor(Class.java:1825)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
        at org.apache.hadoop.ipc.Client.call(Client.java:1443)
        at org.apache.hadoop.ipc.Client.call(Client.java:1353)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1660)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:65)
        at org.apache.hadoop.fs.Globber.doGlob(Globber.java:281)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:149)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2016)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:353)
        at org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:195)
        at org.apache.hadoop.fs.shell.CopyCommands$CopyFromLocal.processOptions(CopyCommands.java:348)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
copyFromLocal: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "ws-bi01[fqdn removed]/10.0.10.37"; destination host is: "ws-hdfs01":50070;

If there’s a better command for copying files to the system, I would much rather just uninstall the hadoop client and do it that way. But as I said, I’m having a hard time finding it because of the thousands of search results about copying files from the local filesystem to hdfs on a machine that’s already part of the cluster.
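
One assumption on my part that may be worth ruling out: 50070 is conventionally the NameNode's HTTP (web UI) port, whereas hdfs dfs talks to the NameNode's RPC port (often 8020 or 9000), and pointing the client at the HTTP port can produce exactly this kind of "RPC response exceeds maximum data length" failure. The URI the client should use can be read from the cluster's own configuration, for example on ws-hdfs01:

# on a cluster node: prints the scheme://host:port clients are expected to use
hdfs getconf -confKey fs.defaultFS

# then, from the non-cluster machine, reuse that value verbatim
# (8020 below is just a placeholder for whatever the previous command reports)
hdfs dfs -copyFromLocal test.txt hdfs://ws-hdfs01:8020/Data/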


Get this bounty!!!