#StackBounty: #btrfs What does "Counts for qgroup … are different" mean and how to fix it?

Bounty: 50

I’ve been using btrfs as the primary file system for a Linux Mint 20 install I did a couple of months ago. Today I wanted to extend the partition the btrfs filesystem is on; gparted ran a filesystem check beforehand and then refused to extend the partition due to file system errors. So I ran a btrfs check on the partition (unmounted, booted from a "Live CD" USB stick) and got 13 errors saying Counts for qgroup id: 0/NNN are different (with varying ID numbers; full output below).

According to Wikipedia, quota groups (qgroups) relate to limiting the space used by snapshots. I’ve only taken snapshots indirectly via Timeshift, so I’m afraid I don’t know the details there. I can get rid of those snapshots if needed.
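
If it helps to correlate the qgroup IDs in the output below with the Timeshift snapshots: the level-0 qgroups (0/N) are created automatically per subvolume, with N being the subvolume ID. A read-only way to see that mapping on the mounted filesystem (my own sketch, not part of the check output):

sudo btrfs subvolume list /    # subvolume IDs and paths, Timeshift snapshots included
sudo btrfs qgroup show /       # qgroup IDs with their referenced/exclusive usage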

What do the errors mean, and how do I fix them? The manpage for btrfs check is really quite adamant about not using the --repair option without knowing what you’re doing…which I don’t. 🙂
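
From what I’ve read so far, the qgroup numbers stored on disk can simply drift out of sync with the real accounting, and the usual non-destructive suggestion is a quota rescan rather than check --repair. This is what I’m considering, sketched here on the assumption that the filesystem can still be mounted read-write (I have not run it yet):

sudo mount /dev/sda3 /mnt
sudo btrfs quota rescan -w /mnt    # -w waits until the rescan has finished
sudo umount /mnt
sudo btrfs check /dev/sda3         # re-check with the filesystem unmounted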

Here is the output of the btrfs check (whitespace may have been slightly mangled; I copied and pasted into a Google Docs document while running the Live CD and didn’t think about whitespace):

Opening filesystem to check...
Checking filesystem on /dev/sda3
UUID: 1cf835e0-2f64-493e-ae63-035dbd007cc3
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups
Counts for qgroup id: 0/256 are different
our:     referenced 15140220928 referenced compressed 15140220928
disk:        referenced 15170641920 referenced compressed 15170641920
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 847220736 exclusive compressed 847220736
disk:        exclusive 847220736 exclusive compressed 847220736
Counts for qgroup id: 0/257 are different
our:     referenced 1181160329216 referenced compressed 1181160329216
disk:        referenced 1181156818944 referenced compressed 1181156818944
diff:        referenced 3510272 referenced compressed 3510272
our:     exclusive 1181160329216 exclusive compressed 1181160329216
disk:        exclusive 1181156818944 exclusive compressed 1181156818944
diff:        exclusive 3510272 exclusive compressed 3510272
Counts for qgroup id: 0/1026 are different
our:     referenced 9243115520 referenced compressed 9243115520
disk:        referenced 9243115520 referenced compressed 9243115520
our:     exclusive 696569856 exclusive compressed 696569856
disk:        exclusive 682135552 exclusive compressed 682135552
diff:        exclusive 14434304 exclusive compressed 14434304
Counts for qgroup id: 0/2848 are different
our:     referenced 13068500992 referenced compressed 13068500992
disk:        referenced 13098921984 referenced compressed 13098921984
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 1556750336 exclusive compressed 1556750336
disk:        exclusive 1556750336 exclusive compressed 1556750336
Counts for qgroup id: 0/2882 are different
our:     referenced 14523535360 referenced compressed 14523535360
disk:        referenced 14553956352 referenced compressed 14553956352
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 1373368320 exclusive compressed 1373368320
disk:        exclusive 1373368320 exclusive compressed 1373368320
Counts for qgroup id: 0/2935 are different
our:     referenced 14761443328 referenced compressed 14761443328
disk:        referenced 14791864320 referenced compressed 14791864320
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 232054784 exclusive compressed 232054784
disk:        exclusive 232054784 exclusive compressed 232054784
Counts for qgroup id: 0/2937 are different
our:     referenced 14889074688 referenced compressed 14889074688
disk:        referenced 14919495680 referenced compressed 14919495680
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 244756480 exclusive compressed 244756480
disk:        exclusive 244756480 exclusive compressed 244756480
Counts for qgroup id: 0/2951 are different
our:     referenced 15147077632 referenced compressed 15147077632
disk:        referenced 15177498624 referenced compressed 15177498624
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 239132672 exclusive compressed 239132672
disk:        exclusive 239132672 exclusive compressed 239132672
Counts for qgroup id: 0/2953 are different
our:     referenced 15282089984 referenced compressed 15282089984
disk:        referenced 15312510976 referenced compressed 15312510976
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 229437440 exclusive compressed 229437440
disk:        exclusive 229437440 exclusive compressed 229437440
Counts for qgroup id: 0/2965 are different
our:     referenced 14960881664 referenced compressed 14960881664
disk:        referenced 14991302656 referenced compressed 14991302656
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 221765632 exclusive compressed 221765632
disk:        exclusive 221765632 exclusive compressed 221765632
Counts for qgroup id: 0/2970 are different
our:     referenced 15028105216 referenced compressed 15028105216
disk:        referenced 15058526208 referenced compressed 15058526208
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 226172928 exclusive compressed 226172928
disk:        exclusive 226172928 exclusive compressed 226172928
Counts for qgroup id: 0/2971 are different
our:     referenced 15051378688 referenced compressed 15051378688
disk:        referenced 15081799680 referenced compressed 15081799680
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 221945856 exclusive compressed 221945856
disk:        exclusive 221945856 exclusive compressed 221945856
Counts for qgroup id: 0/2972 are different
our:     referenced 15066845184 referenced compressed 15066845184
disk:        referenced 15097266176 referenced compressed 15097266176
diff:        referenced -30420992 referenced compressed -30420992
our:     exclusive 244789248 exclusive compressed 244789248
disk:        exclusive 244789248 exclusive compressed 244789248
found 1210188607488 bytes used, error(s) found
total csum bytes: 1175644964
total tree bytes: 5449875456
total fs tree bytes: 3953197056
total extent tree bytes: 239271936
btree space waste bytes: 804352655
file data blocks allocated: 7189404340224
 referenced 1204569280512



#StackBounty: #linux #filesystems #btrfs How to prevent attaching a disk with a btrfs partition that has same UUID as host from corrupt…

Bounty: 50

Background: The host OS is an Azure Oracle Linux 7.8 instance with its OS disk mounted via /dev/sda entries; /dev/sda2 (/) is btrfs. I have another Azure Oracle Linux 7.8 instance that is broken, so I wanted to attach its disk to debug it. Because the attached disk is from the same Oracle Linux 7.8 image, its UUIDs are the same as my host’s, and once it is attached this seems to create some confusion/corruption with the mounts. Below is the output of lsblk after Azure has finished attaching the disk:

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb       8:16   0    4G  0 disk
└─sdb1    8:17   0    4G  0 part /mnt
sr0      11:0    1  628K  0 rom
fd0       2:0    1    4K  0 disk
sdc       8:32   0   50G  0 disk
├─sdc15   8:47   0  495M  0 part
├─sdc2    8:34   0   49G  0 part /      <--- isn't really sdc2, it's mounted from sda2
├─sdc14   8:46   0    4M  0 part
└─sdc1    8:33   0  500M  0 part
sda       8:0    0  100G  0 disk
├─sda2    8:2    0   99G  0 part
├─sda14   8:14   0    4M  0 part
├─sda15   8:15   0  495M  0 part /boot/efi
└─sda1    8:1    0  500M  0 part /boot

You can see it thinks root (/) is mounted via /dev/sdc2, but this disk (/dev/sdc) has literally only just been attached. I can only presume the UUID conflict is causing this (could it be anything else?), but now I can’t mount the real, newly attached /dev/sdc2 to debug that disk, because the system thinks it’s already mounted.
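
For what it’s worth, the duplicate filesystem UUID is easy to confirm with read-only commands; a quick sketch, using the device names from the lsblk output above:

sudo blkid /dev/sda2 /dev/sdc2    # both should report the same btrfs UUID
sudo btrfs filesystem show        # shows which devices btrfs has registered for each fsid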

Is there any way to prevent this from happening when I attach the disk?
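
One workaround I have seen suggested (but have not verified on this setup) is to give the attached copy a new filesystem UUID before doing anything else with it, using btrfstune on the unmounted device. This rewrites metadata on that disk, so it must only ever be run against the broken instance’s disk, never the host’s:

sudo btrfstune -u /dev/sdc2       # generate a new random fsid for the attached filesystem
sudo mkdir -p /mnt/debug          # example mount point
sudo mount /dev/sdc2 /mnt/debug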



#StackBounty: #linux #fedora #hibernate #btrfs #luks btrfs, LUKS, swapfile: How to hibernate on swapfile?

Bounty: 50

I’m using btrfs encrypted with LUKS on Fedora 32 Silverblue, kernel version 5.7.7, installed with the Anaconda installer’s default settings.

Because the Fedora installer’s automatic partitioning does not add a swap partition or swapfile (or I’ve done something wrong), I added a swapfile myself for hibernation, like this:

$ # swapfile under /var because that is the only location the user can modify on Fedora Silverblue
$ touch /var/swapfile
$ chattr +C /var/swapfile 
$ fallocate --length 10GiB /var/swapfile
$ sudo chown root /var/swapfile 
$ sudo chmod 600 /var/swapfile 
$ sudo mkswap /var/swapfile 
$ sudo swapon /var/swapfile
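
A quick way to confirm the swapfile is actually active (standard util-linux tools, nothing btrfs-specific):

$ swapon --show    # should list /var/swapfile with its size and priority
$ free -h          # the Swap line should show the 10 GiB swapfile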

and I added the swapfile_t label for SELinux:

$ ls -Z /var/swapfile
unconfined_u:object_r:swapfile_t:s0 /var/swapfile

Then I followed the Arch Wiki instructions (https://wiki.archlinux.org/index.php/Power_management/Suspend_and_hibernate#Hibernation_into_swap_file_on_Btrfs).

My /var/swapfile’s physical offset is 19793240064 and the page size is 4096, so I added the kernel parameters via GRUB. Here’s the relevant part of my /etc/default/grub kernel parameters now:

GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-572bfd87-6fa7-4be1-8c73-4759ac9af3cd rhgb quiet resume=UUID=572bfd87-6fa7-4be1-8c73-4759ac9af3cd resume_offset=4832334"
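
For reference, the resume_offset value is just the physical offset divided by the page size; a quick check of the arithmetic:

$ echo $((19793240064 / 4096))
4832334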

here’s my blkid:

$ sudo blkid
/dev/nvme0n1p1: UUID="5490-E733" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="46ecd0d1-6722-4b92-af73-9574a58eb332"
/dev/nvme0n1p2: UUID="c9294f4d-9c92-4c08-a037-715223443f2b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="731852d5-26cd-43bb-8904-c4256247f97d"
/dev/nvme0n1p3: UUID="572bfd87-6fa7-4be1-8c73-4759ac9af3cd" TYPE="crypto_LUKS" PARTUUID="e74de89a-fe5f-402f-a3bf-e398ad069b5b"
/dev/sda: BLOCK_SIZE="512" UUID="C602B4D602B4CD25" TYPE="ntfs"
/dev/mapper/luks-572bfd87-6fa7-4be1-8c73-4759ac9af3cd: LABEL="fedora_fedora" UUID="337b2fcb-a61b-4976-89ac-2b3feee02963" UUID_SUB="932cfe1c-9713-4063-bda0-a8a792654c39" BLOCK_SIZE="4096" TYPE="btrfs"

And hibernation failed; it seems to be a resume parameter problem. I tried UUID=572bfd87-6fa7-4be1-8c73-4759ac9af3cd and UUID=337b2fcb-a61b-4976-89ac-2b3feee02963, and both failed. What is wrong? How can I set up swapfile hibernation properly?

I’ve checked journalctl -u systemd-logind and found this message, but it didn’t help:

...
 localhost.localdomain systemd-logind[936]: Failed to open swap file /var/swapfile to determine on-disk offset: Permission denied
...



#StackBounty: #hard-drive #mount #raid #nas #btrfs btrfs ls gets invalid root flags

Bounty: 50

Three months ago, after a power cut, our Synology NAS DS415+ started rebooting endlessly, staying up for five minutes and then rebooting again.
We recently tried to recover the data from it: 2 × 3 TB drives in SHR (RAID 1).

After a long way through the Synology documentation, I mounted the RAID without any problem, but when I run a plain ls in the mounted folder, I get the following error:

ls: cannot access '/mnt/FOLDERA': Input/output error
FOLDERA
FOLDERB

I tried running ls /mnt/FOLDERA and got the same I/O error.
I also tried FOLDERB, and I have no problem accessing that folder.

I also tried mounting the disks individually, outside the RAID, and the same problem occurs on both disks.
Looking at dmesg, every time I try to access one of the I/O-error folders, I get the following logs:

[109433.244496] BTRFS error (device dm-0): block=133562368 read time tree block corruption detected
[109433.301567] BTRFS critical (device dm-0): corrupt leaf: root=1 block=133070848 slot=30, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109433.301575] BTRFS error (device dm-0): block=133070848 read time tree block corruption detected
[109433.301876] BTRFS critical (device dm-0): corrupt leaf: root=1 block=133070848 slot=30, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109433.301883] BTRFS error (device dm-0): block=133070848 read time tree block corruption detected
[109441.972111] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109441.972121] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109441.972356] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109441.972362] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109449.284056] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109449.284066] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected
[109449.284318] BTRFS critical (device dm-0): corrupt leaf: root=1 block=132923392 slot=17, invalid root flags, have 0x200000000 expect mask 0x1000000000001
[109449.284323] BTRFS error (device dm-0): block=132923392 read time tree block corruption detected

So my problem seems to be with btrfs, and I tried a scrub:

UUID:             f9995e41-6f97-41e8-bf0a-31f83c9e8314
Scrub started:    Wed Aug  5 08:53:01 2020
Status:           finished
Duration:         3:10:00
Total to scrub:   1.20TiB
Rate:             110.51MiB/s
Error summary:    no errors found

I also tried a recovery, but it seems unable to find the root block:

btrfs-find-root /dev/vg1000/lv
Superblock thinks the generation is 3391864
Superblock thinks the level is 1
Found tree root at 656900096 gen 3391864 level 1
Well block 642072576(gen: 3391863 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 341000192(gen: 3391845 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 299876352(gen: 3391844 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 190988288(gen: 3391842 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 159334400(gen: 3391841 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 88440832(gen: 3391840 level: 1) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
........................................... ~50 lines ...
Well block 4243456(gen: 3 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1
Well block 4194304(gen: 2 level: 0) seems good, but generation/level doesn't match, want gen: 3391864 level: 1

The only thing I didn’t try is btrfs check, as it seems to be a very dangerous command.
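
If it is relevant: my understanding is that btrfs check only writes to the filesystem when --repair is passed, so a plain inspection pass like this should be safe (I have not run it yet):

sudo btrfs check --readonly /dev/vg1000/lv    # read-only, no --repair, nothing is written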

How can I mount all my data to make it accessible? I’m currently running an Ubuntu 20.04 live USB in order to mount the disks in the machine.
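
In case it helps frame an answer, these are the read-only recovery options I am considering next (a sketch, not yet tried; /mnt/rescue stands in for a large enough destination disk):

sudo mount -o ro,usebackuproot /dev/vg1000/lv /mnt    # try an older tree root, read-only
sudo btrfs restore -v /dev/vg1000/lv /mnt/rescue      # copy out whatever is reachable, without mounting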



#StackBounty: #linux #filesystems #btrfs btrfs "unable to zero output file"

Bounty: 50

I’m trying to create a btrfs file system with a single volume from a directory ‘root’, over mtd and nandsim.

With a few token files and directories in my root directory, I successfully created and mounted the filesystem:

sudo modprobe mtdblock
sudo flash_erase /dev/mtd0 0 0
sudo mkfs.btrfs -m single -s 2048 -r root /dev/mtdblock0

All is well in the world. Now, I add in the actual contents of my root directory: a few metadata files, and just under 128k binary files at 2k each. When I try the same approach again, mkfs.btrfs fails with “ERROR: unable to zero the output file”.

In the source code, the offending method at line 407 fails if either call to pwrite64() fails. I can’t see why this would fail, unless the system call has some limit on the overall size it will allow?

That said, my device is only 256MB, on a system with plenty of RAM and disk space — it seems unlikely.
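
A debugging step that might narrow this down (a sketch, not a fix): trace the pwrite64 calls that mkfs.btrfs makes, so the failing call’s offset, size and errno become visible. For example, with the same invocation as above:

sudo strace -f -e trace=pwrite64 mkfs.btrfs -m single -s 2048 -r root /dev/mtdblock0 2>&1 | tail -n 20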

Could anyone please point me in the right direction? Have I missed some key step?

If it matters, I’m using btrfs-progs v4.15.1 on Ubuntu 18.04 (bionic), kernel 4.15.0-99-generic.



#StackBounty: #scripts #updates #automation #btrfs #snapshot How can I automatically create a Btrfs Snapshot before updating my system?

Bounty: 100

I’ve installed Ubuntu 20.04 on a Btrfs root-partition for its snapshot functionality.

To keep it as simple as possible, I would like to integrate the creation of a Btrfs snapshot into my upgrade-alias command, which currently looks like this:

sudo apt update && sudo apt upgrade -y && sudo flatpak update -y && sudo snap refresh

How would I best add a snapshot before the updates so I can roll back if anything goes wrong?
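
In case it clarifies what I’m after, this is roughly the shape I had in mind: a read-only snapshot taken first, with the updates only running if the snapshot succeeds (a sketch; it assumes / is itself a Btrfs subvolume and uses /.snapshots as an example destination):

sudo mkdir -p /.snapshots
sudo btrfs subvolume snapshot -r / "/.snapshots/pre-update-$(date +%Y%m%d-%H%M%S)" \
  && sudo apt update && sudo apt upgrade -y \
  && sudo flatpak update -y && sudo snap refresh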

Is there also a possibility to remove older snapshots at the same time? (My root partition is less than 10% full, so I could hold several copies of my entire system, but I suppose it will fill up quickly with weekly updates?)
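
For the cleanup part, something like the following is what I have in mind, assuming the snapshots are named with sortable timestamps as in the sketch above so that lexical order matches age (keeping the newest five is an arbitrary example):

ls -1d /.snapshots/pre-update-* | head -n -5 | while read -r snap; do
    sudo btrfs subvolume delete "$snap"
done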

