I ran my monthly btrfs scrub overnight - a RAID1 array across three 8 TB disks - which generally takes all night to complete. I had to interrupt it briefly to copy over some files and then resumed it, as I often do.
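For context, by "interrupt" I mean the usual cancel/resume sequence (scrub progress is checkpointed, so it can pick up where it left off), roughly:

# pause the running scrub; progress is saved so it can be resumed later
btrfs scrub cancel /srv/dev-disk-by-label-d1/
# ...copy the files...
# continue the scrub from the saved checkpoint
btrfs scrub resume /srv/dev-disk-by-label-d1/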
This morning I checked the status and looked at dmesg to see if anything had crapped out. btrfs scrub status tells me it scrubbed just about everything and found no errors, but the dmesg output is strange:
[1321461.097501] BTRFS info (device sdb1): scrub: started on devid 1
[1321461.097912] BTRFS info (device sdb1): scrub: started on devid 2
[1321461.097915] BTRFS info (device sdb1): scrub: started on devid 3
[1357979.019433] BTRFS info (device sdb1): scrub: finished on devid 1 with status: 0
[1359053.862388] BTRFS info (device sdb1): scrub: finished on devid 2 with status: 0
And that's it. In other words: the scrub finished on devid 1 and 2, but not on devid 3. If I run ps a | grep scrub, it shows the resume is still running:
7544 pts/1 Sl 68:44 btrfs scrub resume /srv/dev-disk-by-label-d1/
The "running for" timestamp in btrfs scrub status
no longer updates, so it seems to be finished... but there's this process still running and a missing finish status for devid 3.
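If it's useful for diagnosing this, per-device statistics can be pulled with the -d flag, something like:

# print separate statistics for each device instead of the aggregate
btrfs scrub status -d /srv/dev-disk-by-label-d1/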
I've never seen this before. Does anyone know what could cause this and how to resolve it? I don't want to blindly kill the scrub, and I'd prefer not to have to run it again.