#1 2016-04-18 04:20:22

keegano
Member
Registered: 2016-03-04
Posts: 4

How do I fix my btrfs partition?

I have four 4TB spinning-disk hard drives in an external enclosure, which I've converted to a RAID5 btrfs volume. Initially I had only two drives in a non-redundant configuration (just acting like one big 8TB volume), which I then extended to include all four drives for ~12TB of redundant storage. All was well until I ran out of disk space well before the expected 12TB. After some investigation, it's clear that somewhere, something went wrong.
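
(Sanity check on the expected capacity: RAID5 across n equal disks gives n-1 disks' worth of usable space, so 4 × 4TB should yield (4-1) × 4TB = 12TB, or roughly 10.9TiB.)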

Some investigation:

[foo@bar /] # btrfs fi df /mnt/
Data, single: total=830.00GiB, used=828.73GiB
Data, RAID5: total=7.35TiB, used=7.35TiB
System, single: total=32.00MiB, used=960.00KiB
Metadata, single: total=1.21TiB, used=1.17TiB
GlobalReserve, single: total=512.00MiB, used=14.31MiB
[foo@bar /] # btrfs fi show /mnt/
Label: none  uuid: 28eef0cd-<snip>
        Total devices 4 FS bytes used 9.33TiB
        devid    1 size 3.64TiB used 3.06TiB path /dev/sdb1
        devid    2 size 3.64TiB used 3.64TiB path /dev/sda
        devid    3 size 3.64TiB used 3.64TiB path /dev/sdc
        devid    4 size 3.64TiB used 3.64TiB path /dev/sdd
[foo@bar /] # btrfs fi usage /mnt/
WARNING: RAID56 detected, not implemented
Overall:
    Device size:                  14.55TiB
    Device allocated:              2.02TiB
    Device unallocated:           12.53TiB
    Device missing:                  0.00B
    Used:                          1.98TiB
    Free (estimated):            126.28TiB      (min: 12.55TiB)
    Data ratio:                       0.10
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 2.16MiB)

Data,single: Size:830.00GiB, Used:828.73GiB
   /dev/sdb1     830.00GiB

Data,RAID5: Size:7.35TiB, Used:7.35TiB
   /dev/sda        3.64TiB
   /dev/sdb1       1.03TiB
   /dev/sdc        3.64TiB
   /dev/sdd        3.64TiB

Metadata,single: Size:1.21TiB, Used:1.17TiB
   /dev/sdb1       1.21TiB

System,single: Size:32.00MiB, Used:960.00KiB
   /dev/sdb1      32.00MiB

Unallocated:
   /dev/sda        1.00MiB
   /dev/sdb1     597.01GiB
   /dev/sdc        1.02MiB
   /dev/sdd        1.02MiB

Alright: usage says RAID56 is not implemented, so I'll take its output with a grain of salt (particularly the claim that I have 126.28TiB free).

But, some notes:
* My data is split between RAID5 and single profiles, with all of the single-profile data on /dev/sdb1
* Metadata is taking up 1.21TiB (!)
* Metadata and system are not redundant, so it seems losing /dev/sdb1 would be rather catastrophic.
* 597.01GiB on /dev/sdb1 is unallocated, but the volume gives ENOSPC when I try to balance, create a file, etc.

So far, I've tried without success:
* btrfs balance start -m . (takes a long time, then reports that a bunch of relocations failed due to lack of space)
* defragmenting files (fails due to lack of space)

Even if those worked, I have no idea how I'd go about merging the single and RAID5 data profiles. And even if I succeeded, it wouldn't free up any space; it would merely make sure that the data currently on /dev/sdb1 is redundant. I'm also not sure how to get that ~600GiB of unallocated space back, even if only to get the rebalance working. Of course, what I really want is to free up some of that 1+ TiB of metadata!
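
From skimming the btrfs wiki, it sounds like the usual way to merge the two data profiles is a convert balance. A sketch, untested on my end and assuming the volume is mounted at /mnt:

btrfs balance start -dconvert=raid5 /mnt    # rewrite single-profile data chunks as RAID5
btrfs balance start -mconvert=raid1 /mnt    # make metadata redundant; RAID1 is a common choice

Presumably, though, that needs free space to write the converted chunks into, which is exactly what I don't have.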

Last edited by keegano (2016-04-19 05:43:48)

Offline

#2 2016-04-18 13:28:32

teekay
Member
Registered: 2011-10-26
Posts: 271

Re: How do I fix my btrfs partition?

It looks like you have three raw disks added to the pool as RAID5 (sda, sdc, sdd), plus the first partition of sdb (sdb1), which of course won't fit into the RAID5 stripe because it has a different size, and therefore got added to the pool as a single data stripe.
That's not a 4-disk RAID5, then.

So you could try some iterations of partial balance to get some data space back on the RAID5 part. The -dusage=N filter only rewrites data chunks that are less than N% full, so each pass needs very little free space. For example, with the volume mounted at /mnt:

btrfs balance start -dusage=2 /mnt
btrfs balance start -dusage=5 /mnt
btrfs balance start -dusage=10 /mnt
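
If those help, the iteration is easy to script; a minimal sketch, again assuming the volume is mounted at /mnt:

for pct in 2 5 10 20 40; do
    # each pass only rewrites data chunks under the given fill percentage
    btrfs balance start -dusage=$pct /mnt || break
done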

Offline

#3 2016-04-19 05:40:35

keegano
Member
Registered: 2016-03-04
Posts: 4

Re: How do I fix my btrfs partition?

Thanks for the help!


Edit 2: Disregard the section below. I deleted more files, and balancing now seems to be working. I've freed up about 10GiB on /dev/sda with -dusage=80; it now looks like this:

WARNING: RAID56 detected, not implemented
Overall:
    Device size:                  14.55TiB
    Device allocated:              1.98TiB
    Device unallocated:           12.57TiB
    Device missing:                  0.00B
    Used:                          1.93TiB
    Free (estimated):            126.60TiB      (min: 12.65TiB)
    Data ratio:                       0.10
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:830.00GiB, Used:822.71GiB
   /dev/sdb1     830.00GiB

Data,RAID5: Size:7.34TiB, Used:7.34TiB
   /dev/sda        3.63TiB
   /dev/sdb1       1.03TiB
   /dev/sdc        3.64TiB
   /dev/sdd        3.64TiB

Metadata,single: Size:1.17TiB, Used:1.13TiB
   /dev/sdb1       1.17TiB

System,single: Size:32.00MiB, Used:956.00KiB
   /dev/sdb1      32.00MiB

Unallocated:
   /dev/sda       10.00GiB
   /dev/sdb1     647.08GiB
   /dev/sdc        1.02MiB
   /dev/sdd        1.02MiB

That may be progress. I'm going to try rebalancing metadata next to see if that helps (sketch below). I'm still not sure how to convert sdb1 to be like the other drives.
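
For the metadata pass I'm thinking of something like this (a sketch; -musage=N is the metadata analogue of -dusage=N, and the trailing dot assumes the shell is sitting in the mount point):

btrfs balance start -musage=20 .
btrfs balance start -musage=50 .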

----------------------------------------------------------------------------------------------------------------
<Pre-update>

Unfortunately, it doesn't really look like that's working:

$ sudo btrfs balance start -dusage=0 .
Done, had to relocate 0 out of 6076 chunks
$ sudo btrfs balance start -dusage=2 .                    
Done, had to relocate 0 out of 6076 chunks
$ sudo btrfs balance start -dusage=5 .
Done, had to relocate 0 out of 6076 chunks
$ sudo btrfs balance start -dusage=10 .
Done, had to relocate 0 out of 6076 chunks
$ sudo btrfs balance start -dusage=20 .
ERROR: error during balancing '.': No space left on device
There may be more info in syslog - try dmesg | tail

Sadly, it looks like I'm still stuck for lack of space. I tried deleting some files, but the freed space was quickly chewed up during rebalancing, and now the volume is full again.

Your suggestion led me to the wiki page on balance filters, which had me try a -dconvert=raid5, but that also failed with no space left.

Since it seems I improperly added /dev/sdb to the array (or perhaps it was the first drive I ever added, and I added a partition instead of the whole disk), is it at all possible to switch to using the whole /dev/sdb device, and reclaim the remaining space that way?

Edit: also, it seems I misspoke before: these are actually 4TB disks, expected space is 12TB.

Last edited by keegano (2016-04-19 05:58:33)

Offline

#4 2016-04-19 08:33:07

teekay
Member
Registered: 2011-10-26
Posts: 271

Re: How do I fix my btrfs partition?

OK. That's some progress.

Admittedly, I'm no longer sure that what I thought at first is even true (sdb1 being only a single stripe): from your outputs it looks like half of it is part of the RAID5 and the other half is a single stripe. Really weird.

Anyway, if you can get enough free space, it should theoretically be possible to remove sdb1 and add it back as raw sdb, along the lines of the sketch below.
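
A sketch of that sequence, assuming the volume is mounted at /mnt. Note that the delete step migrates everything off sdb1 first, so it needs enough free space on the other three disks:

btrfs device delete /dev/sdb1 /mnt         # migrate all chunks off sdb1, then drop it from the pool
wipefs -a /dev/sdb                         # clear the leftover partition table and signatures
btrfs device add /dev/sdb /mnt             # add the whole raw disk back
btrfs balance start -dconvert=raid5 /mnt   # restripe data across all four devices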

Offline

#5 2016-05-01 19:24:14

keegano
Member
Registered: 2016-03-04
Posts: 4

Re: How do I fix my btrfs partition?

It's been a couple of weeks, so I thought I'd update with my progress.

With repeated cycles of

# btrfs balance start -dusage=95 .
# btrfs balance start -dconvert=raid5 .

I was able to get all of my data onto the RAID. The most recent balance ran for about six days before giving ENOSPC errors, but at least it appears to have moved over the last of my data:

WARNING: RAID56 detected, not implemented
Overall:
    Device size:                  14.55TiB
    Device allocated:              1.16TiB
    Device unallocated:           13.39TiB
    Device missing:                  0.00B
    Used:                          1.11TiB
    Free (estimated):                0.00B      (min: 8.00EiB)
    Data ratio:                       0.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 48.62MiB)

Data,RAID5: Size:8.17TiB, Used:8.12TiB
   /dev/sda        2.99TiB
   /dev/sdb1       1.81TiB
   /dev/sdc        3.36TiB
   /dev/sdd        3.36TiB

Metadata,single: Size:1.16TiB, Used:1.11TiB
   /dev/sdb1       1.16TiB

System,single: Size:32.00MiB, Used:856.00KiB
   /dev/sdb1      32.00MiB

Unallocated:
   /dev/sda      661.95GiB
   /dev/sdb1     682.00GiB
   /dev/sdc      285.03GiB
   /dev/sdd      285.03GiB

Sadly, I still have single-profile system and metadata, so I don't really have redundancy. I'm waiting for the current balance to finish; then I'll try to convert those as well (sketch below).
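
For that conversion I'm planning something like the following (a sketch, untested; as far as I can tell, balance refuses to explicitly operate on system chunks unless -f/--force is passed):

btrfs balance start -mconvert=raid5 -sconvert=raid5 -f .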

Unfortunately, I still have more than 1TiB of metadata! Is that normal? Is there any way to diagnose metadata usage on a btrfs filesystem?

Last edited by keegano (2016-05-01 19:27:59)

Offline
