FYI, I don't think Unraid tracks BTRFS filesystem errors in its error count.

I had this issue recently. As another commenter mentioned, for me it happened because my cache was full, but I still went through a lot of pain to diagnose and fix it, so I'll share what I learned.

Assuming it's not a hardware issue (SATA cable or drive), you basically have three options:

  1. btrfs scrub

  2. btrfs check --repair (NOTE: RISKY!!)

  3. backup your data and reformat the drive

Because my issue was storage size, reformatting was the path I took, but scrub/repair might work for you. Just read the documentation first so you understand what you're signing up for.
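Since a full cache pool was the root cause in my case, it's worth ruling that out before reaching for any repair tool. A minimal sketch of a fullness check (assumptions: `/mnt/cache` is Unraid's usual cache mount; the script defaults to `/` so it runs on any system):

```shell
#!/bin/sh
# Sketch: warn when a pool is nearly full, since btrfs misbehaves badly
# when it runs out of space. POOL defaults to / so this runs anywhere;
# on Unraid the cache pool is typically mounted at /mnt/cache.
POOL="${1:-/}"
THRESHOLD=90   # percent used before warning

# df -P guarantees one line per filesystem; column 5 is "Use%".
used=$(df -P "$POOL" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$used" -ge "$THRESHOLD" ]; then
    echo "WARNING: $POOL is ${used}% full"
else
    echo "OK: $POOL is ${used}% full"
fi
```

Note that `df` numbers on btrfs can mislead (data vs. metadata allocation); on the server itself, `btrfs filesystem usage /mnt/cache` gives the authoritative picture.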

https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-scrub

btrfs scrub is used to scrub a mounted btrfs filesystem, which will read all data and metadata blocks from all devices and verify checksums. Automatically repair corrupted blocks if there’s a correct copy available.

Note: Scrub is not a filesystem checker (fsck) and does not verify nor repair structural damage in the filesystem. It really only checks checksums of data and tree blocks, it doesn’t ensure the content of tree blocks is valid and consistent. There’s some validation performed when metadata blocks are read from disk but it’s not extensive and cannot substitute full btrfs check run.
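For reference, a scrub run on a mounted pool looks roughly like this (a sketch, assuming Unraid's default `/mnt/cache` mount; guarded so it degrades gracefully on a machine without that pool):

```shell
#!/bin/sh
# Sketch: run a scrub on a mounted btrfs pool. /mnt/cache is Unraid's
# usual cache mount -- adjust for your system. Guarded so it degrades
# gracefully on machines without that pool.
POOL="${1:-/mnt/cache}"

if command -v btrfs >/dev/null 2>&1 && btrfs filesystem show "$POOL" >/dev/null 2>&1; then
    # -B runs the scrub in the foreground and prints statistics on exit;
    # omit it to scrub in the background and poll with `scrub status`.
    btrfs scrub start -B "$POOL"
    btrfs scrub status "$POOL"     # totals, error counts, last-run time
    result="scrubbed"
else
    echo "no btrfs pool at $POOL -- run this on the Unraid console"
    result="skipped"
fi
```

Afterwards, `btrfs dev stats /mnt/cache` shows the per-device error counters (the same command the Unraid forum thread below runs against /dev/md1).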

https://btrfs.wiki.kernel.org/index.php/Btrfsck

Warning: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. E.g. some other software or hardware bugs can fatally damage a volume.

Other Notes

https://forums.unraid.net/topic/57426-btrfs-problem-corrupt-leaf/

  • btrfs dev stats /dev/md1

  • btrfs check

https://www.reddit.com/r/btrfs/comments/lcbwa0/btrfs_check_errors_fixed_by_scrub/ "You could attempt a btrfs scrub, on that drive only, but... you have a bad tree block. Afaik a btrfs scrub does not ensure/fix tree block consistency. So I do think you need a btrfs check --repair in this particular case. I don't think a scrub can hurt tho."

https://forums.unraid.net/topic/46348-how-to-rebuild-dockerimg/ "You don't have to take the array offline.  Settings, docker, disable docker, delete the image, reenable docker, add the containers"

https://stackoverflow.com/questions/30884044/how-to-fix-btrfs-root-inode-errors "Provided that the broken inodes are the only problem present, the solution is to simply remove them. There may be a quicker way to do this, but here is what worked for me. From here I gleaned that you can use the find command to search for an inode like so: find / -inum XXXXXX -print"

https://unix.stackexchange.com/questions/436642/directory-entry-without-inode "btrfs check will only show the problems. You need to use btrfs check --repair to (attempt to) fix the problems it finds. However, the vocal majority of documentation advises against using --repair. However, since the btrfsck does expose the inodes of the broken files, they can be deleted with e. g. find / -inum XXXX -delete (replace XXXX with the actual broken inode)."
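The find-by-inode trick from that answer is a plain find feature (supported by GNU and BSD find), not btrfs-specific, so it can be sketched safely against a scratch directory. In a real repair the inode numbers come from `btrfs check` output; here a throwaway file stands in for the broken one:

```shell
#!/bin/sh
# Sketch: locate (and delete) a file by inode number. In a real btrfs
# repair the inode numbers come from `btrfs check` output; here a
# throwaway file in a scratch directory stands in for the broken one.
dir=$(mktemp -d)
touch "$dir/victim"

# POSIX `ls -i` prints "inode filename"; take the first field.
inum=$(ls -i "$dir/victim" | awk '{print $1}')

# Scope the search to the affected mount (not /) and use -xdev so
# find does not cross into other filesystems, where inode numbers repeat.
found=$(find "$dir" -xdev -inum "$inum" -print)
echo "inode $inum -> $found"

find "$dir" -xdev -inum "$inum" -delete    # the actual removal step
rm -rf "$dir"
```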

https://linuxhint.com/how-to-use-btrfs-balance/

https://wiki.unraid.net/index.php/Check_Disk_Filesystems#Drives_formatted_with_BTRFS

FAQ https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-543490

https://forums.unraid.net/topic/33678-btrfs-scrub-discussion/

Great write up. Thanks for this.


This happens to me when the cache gets 100% filled to the brim, so make sure it's not getting filled up completely. Maybe run Mover more often.

I'm not using the cache as a transfer buffer for my array. It only has my Home Assistant VM disk, appdata, and docker.img on it; the other shares don't use the cache at all. It's only filled to ~120GB with the stuff that's on it, and I've never seen it much above that. Mover is also set up to run every hour.



I've been having pretty much the same issue lately, but with 2 × M.2 NVMe drives (meaning it's not a cable issue). It's really annoying.

Now I just have 2 separate cache drives running xfs.

Do you keep your data mirrored between those 2 drives?
If so, how?

I'm also thinking about doing this, but no parity seems risky to me.


I have had a few Samsung SSDs fail on me. I know they are supposed to be good, but mine developed problems.
I have two Samsungs in the cache on one unRaid server, and I get the same message you are reporting. I have another unRaid server with Adata SSDs and no problems reported. I know my comparison isn't exactly fair, but my Adata drives were bought around the same time and have no issues yet.

Do your Samsung SSDs show any errors when you run SMART checks or anything?
Mine don't show errors anywhere when testing, only this specific line: "write time tree block corruption detected".

Are there any other differences between your 2 systems?
I'm using my old PC for UnRaid, so it's all desktop components:
Ryzen 2700X
4 × 16GB DDR4 @ 1866MHz (could run at 2400MHz, but I lowered it to spec because of this error; not that it helped)
Gigabyte X470 Gaming 7
1060ti (for transcoding)
2 × 4TB WD Blue (1-parity XFS array)
2 × 500GB Samsung 860 (btrfs RAID1)
