#1 2015-03-02 13:06:44

Puyb
Member
Registered: 2015-03-02
Posts: 2

BTRFS RAID10 full but space still available on devices

Hi, I'm new to this forum. I'm not a native English speaker, so please excuse me if my grammar is incorrect. I hope you'll understand me ;-).
I searched for anyone who'd had a similar issue with BTRFS, without success, so I'm asking the community directly.

I have a BTRFS array that was initially composed of four 1.5T disks. Some time ago the array was close to full, so I decided to replace two of the 1.5T disks with two new 4T disks.
To do that, I plugged in the new disks, did a "btrfs device add" for each one, waited for completion, then did a "btrfs device remove" on the two oldest 1.5T disks. After that task completed, I ended up with an array that looked like this:

Label: none  uuid: 8dec4ac7-1160-4610-9b45-66c4466fc8b3
	Total devices 4 FS bytes used 2.73TiB
	devid    5 size 1.36TiB used 1.30TiB path /dev/sdb1
	devid    6 size 1.36TiB used 1.30TiB path /dev/sdd1
	devid    7 size 3.64TiB used 1.31TiB path /dev/sde1
	devid    8 size 3.64TiB used 1.31TiB path /dev/sdc1

Btrfs v3.17.3

Note that this is not the exact output from the time; it's from memory, as I didn't keep a log of that operation.
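
In other words, I ran something like this (device names are placeholders, since I don't have the exact log):

sudo btrfs device add /dev/sdX1 /media/data     # first new 4T disk
sudo btrfs device add /dev/sdY1 /media/data     # second new 4T disk
sudo btrfs device remove /dev/sdV1 /media/data  # first old 1.5T disk
sudo btrfs device remove /dev/sdW1 /media/data  # second old 1.5T disk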

A few weeks later, sdd failed (unrecoverable I/O read AND write errors in the kernel log). I replaced it with a spare 1.5T disk I had, then did a "btrfs device add" of the new device and a "btrfs device remove missing". The new array is now:

Label: none  uuid: 8dec4ac7-1160-4610-9b45-66c4466fc8b3
	Total devices 4 FS bytes used 2.73TiB
	devid    5 size 1.36TiB used 1.36TiB path /dev/sdb1
	devid    7 size 3.64TiB used 1.37TiB path /dev/sde1
	devid    8 size 3.64TiB used 1.37TiB path /dev/sdc1
	devid    9 size 1.36TiB used 1.36TiB path /dev/sdd1

Btrfs v3.17.3
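
Again, roughly (if the failed disk is already unplugged, the volume has to be mounted with -o degraded before this works):

sudo btrfs device add /dev/sdd1 /media/data
sudo btrfs device remove missing /media/data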

But last week, when trying to add a new file to the array, the copy failed due to lack of available space, even though, as you can see above, there's about 2.3T left on each of the two big devices.

I thought BTRFS was able to manage RAID10 with disks of different sizes. Am I wrong?
What should I do to fully use the available space and still have some redundancy?
I don't care about performance (the striping part of RAID10); I just want a lot of space with mirroring.

Note: I use an up-to-date Arch Linux with the standard kernel.

Linux plonk 3.18.2-2-ARCH #1 SMP PREEMPT Fri Jan 9 07:37:51 CET 2015 x86_64 GNU/Linux

If you need more details about my configuration, don't hesitate to ask.


#2 2015-03-02 14:00:36

nstgc
Member
Registered: 2014-03-17
Posts: 393

Re: BTRFS RAID10 full but space still available on devices

The current kernel is 3.18.6, not .2.

Could you also post the output of "btrfs fi df {path/to/array}"?

[edit] While you're at it, did you run a full balance? I also believe, though I haven't tried this myself, that btrfs can handle partitions/disks of differing sizes by using different stripe widths. However, it may not know to do that until you completely re-stripe the volume by running a balance.
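
By "full balance" I mean one with no filters at all; it rewrites every chunk, so expect it to take a long time on an array that size. Something like:

 # btrfs balance start {path/to/array}
 # btrfs balance status {path/to/array}   # run from another shell to check progress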

[edit2] Also, your btrfs-progs seems to be out of date.

$ pacman -Q |grep btrfs
btrfs-progs 3.18.2-1
$ uname -a
Linux HostName 3.18.6-1-ARCH #1 SMP PREEMPT Sat Feb 7 08:44:05 CET 2015 x86_64 GNU/Linux

Last edited by nstgc (2015-03-02 14:14:48)


#3 2015-03-02 14:54:38

Puyb
Member
Registered: 2015-03-02
Posts: 2

Re: BTRFS RAID10 full but space still available on devices

My bad. Upgrade in progress...

What do you mean by a full balance? I tried things like this:

sudo btrfs fi balance start -dusage=5 /media/data
Done, had to relocate 0 out of 1409 chunks

But without much success...

The df command clearly shows that the space allocated for data is full.

sudo btrfs fi df /media/data
Data, RAID10: total=2.72TiB, used=2.72TiB
System, RAID10: total=64.00MiB, used=304.00KiB
Metadata, RAID1: total=13.00GiB, used=9.32GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

But the reported space is far from the total capacity of the array (it should be something close to 5.5TiB).

BTW, I found a simulator online: http://carfax.org.uk/btrfs-usage/index.html
I think my error was to use RAID10. The simulator reports the same unusable space with RAID10 that I'm seeing.
But when I use the RAID1 setting, the simulator reports no unusable space.
So I think I should try to rebalance to RAID1. If I'm not wrong, something like this should do it:

btrfs fi balance start -dconvert=raid1 -mconvert=raid1 /media/data
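
If I read the man page correctly, I can then watch the conversion and check the resulting profiles with:

sudo btrfs balance status /media/data
sudo btrfs fi df /media/data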


#4 2015-03-02 17:21:22

Spider.007
Member
Registered: 2004-06-20
Posts: 1,175

Re: BTRFS RAID10 full but space still available on devices

You should realize that if you pass -dusage to the balance, it doesn't balance everything. You should run `btrfs balance start /media/data` for that.


#5 2015-03-02 17:25:20

nstgc
Member
Registered: 2014-03-17
Posts: 393

Re: BTRFS RAID10 full but space still available on devices

Spider.007 wrote:

You should realize that if you pass -dusage to the balance, it doesn't balance everything. You should run `btrfs balance start /media/data` for that.

Exactly; -dusage=5 only balances chunks that are at most 5% full. To restripe the volume you need to rewrite every chunk.

If I had my volume mounted at /btrfs/raid10, I would run the following:

 # btrfs balance start /btrfs/raid10
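
(The usage filter is still useful when you only want to compact mostly-empty chunks rather than restripe; a common pattern is to raise the threshold in steps so each pass stays cheap:

 # btrfs balance start -dusage=25 /btrfs/raid10
 # btrfs balance start -dusage=50 /btrfs/raid10

But to restripe you need the unfiltered balance above.)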

[edit] I'm seeing some inconsistent information:

https://btrfs.wiki.kernel.org/index.php/Glossary wrote:

RAID-0
A form of RAID which provides no form of error recovery, but stripes a single copy of data across multiple devices for performance purposes. The stripe size is fixed to 64KB for now.

However,

https://btrfs.wiki.kernel.org/index.php/RAID56 wrote:

The algorithm [for RAID 5/6] uses as many devices as are available: No support for a fixed-width stripe (see note, below)

I would think that the striping algorithm would be the same, though I could be wrong.

[edit3] I'm sorry. I was confusing stripe size and stripe width:
http://www.pcguide.com/ref/hdd/perf/rai … ipe-c.html

[edit4]

Okay, so I was quite curious about this and decided to try it out myself.

/dev/sdc11 and 12 are 100G, while 13 and 14 are 50G.
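
(If anyone wants to reproduce this without spare partitions, sparse files on loop devices should behave the same way; the file names here are arbitrary:

 $ truncate -s 100G disk1.img disk2.img
 $ truncate -s 50G disk3.img disk4.img
 $ for f in disk?.img; do sudo losetup --find --show "$f"; done

Then point mkfs.btrfs at the resulting /dev/loopN devices instead.)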

$ sudo mkfs.btrfs -L TEST -m raid1 -d raid10 /dev/sdc11 /dev/sdc12 /dev/sdc13 /dev/sdc14
Btrfs v3.18.2
See http://btrfs.wiki.kernel.org for more information.

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs
adding device /dev/sdc12 id 2
adding device /dev/sdc13 id 3
adding device /dev/sdc14 id 4
fs created label TEST on /dev/sdc11
        nodesize 16384 leafsize 16384 sectorsize 4096 size 300.00GiB

This results in:

$ sudo btrfs fi sh /dev/sdc11
Label: 'TEST'  uuid: d3db3fae-16d4-47bf-94e2-58d4123be271
        Total devices 4 FS bytes used 112.00KiB
        devid    1 size 100.00GiB used 2.02GiB path /dev/sdc11
        devid    2 size 100.00GiB used 2.00GiB path /dev/sdc12
        devid    3 size 50.00GiB used 1.01GiB path /dev/sdc13
        devid    4 size 50.00GiB used 1.01GiB path /dev/sdc14
$ sudo mount -L TEST /mnt/temp
$ sudo btrfs fi df /mnt/temp
Data, RAID10: total=2.00GiB, used=768.00KiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=1.00GiB, used=112.00KiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=16.00MiB, used=0.00B

Then I tried to load it up with just under 100GB of data.
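
(I copied real files over with Nemo; as a reproducible command-line stand-in, one could generate throwaway data instead, e.g. nine 10GiB files. This is a sketch, not what I actually ran:

 $ for i in $(seq 1 9); do sudo dd if=/dev/urandom of=/mnt/temp/file$i bs=1M count=10240; done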

Partway through I ran btrfs fi sh on the volume.

$ sudo btrfs fi sh /dev/sdc11                                                                                                                             
Label: 'TEST'  uuid: d3db3fae-16d4-47bf-94e2-58d4123be271
        Total devices 4 FS bytes used 30.19GiB
        devid    1 size 100.00GiB used 18.02GiB path /dev/sdc11
        devid    2 size 100.00GiB used 18.00GiB path /dev/sdc12
        devid    3 size 50.00GiB used 17.01GiB path /dev/sdc13
        devid    4 size 50.00GiB used 17.01GiB path /dev/sdc14

Btrfs v3.18.2

It looks like it's treating them as if they were four disks of the same size.
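
(The numbers bear that out: ~30GiB of data mirrored is ~60GiB of raw writes, and 60GiB striped evenly across four devices is ~15GiB each. The "used" column counts allocated chunks, so it runs a bit ahead: each device is exactly 16GiB above its allocation at mkfs time.)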

Now, after the transfer completed:

$ sudo btrfs fi sh /dev/sdc11                                                                                                                             
Label: 'TEST'  uuid: d3db3fae-16d4-47bf-94e2-58d4123be271
        Total devices 4 FS bytes used 91.76GiB
        devid    1 size 100.00GiB used 47.02GiB path /dev/sdc11
        devid    2 size 100.00GiB used 47.00GiB path /dev/sdc12
        devid    3 size 50.00GiB used 46.01GiB path /dev/sdc13
        devid    4 size 50.00GiB used 46.01GiB path /dev/sdc14

Btrfs v3.18.2
[user@Host ~]$ sudo btrfs fi df /mnt/temp                                                                                                                              
Data, RAID10: total=92.00GiB, used=91.60GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=1.00GiB, used=172.73MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=64.00MiB, used=0.00B

I then tried to copy another ~15GB and was told there wasn't enough space, even though there should have been 50GB free. I was copying with Nemo, so I clicked "copy anyway". At some point it told me there was no space left on the volume, with nothing in dmesg.
btrfs has the following to say about that:

$ sudo btrfs fi sh /dev/sdc11
Label: 'TEST'  uuid: d3db3fae-16d4-47bf-94e2-58d4123be271
        Total devices 4 FS bytes used 97.21GiB
        devid    1 size 100.00GiB used 51.01GiB path /dev/sdc11
        devid    2 size 100.00GiB used 50.99GiB path /dev/sdc12
        devid    3 size 50.00GiB used 50.00GiB path /dev/sdc13
        devid    4 size 50.00GiB used 50.00GiB path /dev/sdc14

Btrfs v3.18.2
[user@Host ~]$ sudo btrfs fi df /mnt/temp                                                                                                                              
Data, RAID10: total=99.98GiB, used=99.46GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=1.00GiB, used=181.41MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=64.00MiB, used=0.00B

So it doesn't look like restriping will help.

[edit5]

It does work for RAID1, however.
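
(I didn't paste the conversion step, but it would be along the lines of what Puyb suggested above; metadata was already RAID1 here, so only the data profile needed converting:

 # btrfs balance start -dconvert=raid1 /mnt/temp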

$ sudo btrfs fi sh /dev/sdc11
Label: 'TEST'  uuid: d3db3fae-16d4-47bf-94e2-58d4123be271
        Total devices 4 FS bytes used 149.22GiB
        devid    1 size 100.00GiB used 99.99GiB path /dev/sdc11
        devid    2 size 100.00GiB used 100.00GiB path /dev/sdc12
        devid    3 size 50.00GiB used 50.00GiB path /dev/sdc13
        devid    4 size 50.00GiB used 50.00GiB path /dev/sdc14

Btrfs v3.18.2
$ sudo btrfs fi df /mnt/temp                                                                                                                              
Data, RAID1: total=148.98GiB, used=148.97GiB
System, RAID1: total=8.00MiB, used=48.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=1.00GiB, used=261.98MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=96.00MiB, used=0.00B

Last edited by nstgc (2015-03-03 00:56:17)


#6 2015-03-03 13:47:27

coruun
Member
Registered: 2014-10-23
Posts: 26

Re: BTRFS RAID10 full but space still available on devices

You should take a look at the BTRFS disk usage calculator, which is linked from the official wiki.

Your setup (2x1.5TB + 2x4TB) with RAID10 reports 3TB usable and 5TB unusable disk space.
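
That follows from how btrfs RAID10 allocates: every chunk takes an equal slice from all four devices, so allocation stops once the 1.5TB disks are full. 4 x 1.5TB of raw space, halved for mirroring, gives 3TB usable; the remaining 2 x 2.5TB on the big disks is stranded.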

Looking through the wiki, I also found a page which states that it is necessary to use the "single" profile to use all the space on unevenly sized disks.


#7 2015-03-06 22:06:02

nstgc
Member
Registered: 2014-03-17
Posts: 393

Re: BTRFS RAID10 full but space still available on devices

coruun wrote:

Looking through the wiki, I also found a page, which states that it is necessary to use the profile "single" to use all space on uneven disks.

It isn't necessary (RAID1 also uses all the space, as I demonstrated above), though it is sufficient.

