
#1 2010-11-23 13:31:43

delerious010
Member
From: Montreal
Registered: 2008-10-07
Posts: 72

BTRFS raid 10 drive usage

I've just dumped my raid10 + lvm setup in favor of a striped/mirrored BTRFS implementation. Basically, all it took was a mkfs.btrfs -m raid10 /drive1 /drive2 /drive3 /drive4.

Strangely enough though, checking df -h reveals that I have 1.9T free (the full size) as opposed to 1T (half the size), as was the case with my old raid10 setup.

What exactly have I done wrong here? Or is it simply the way btrfs calculates disk space?


#2 2010-11-23 13:50:15

falconindy
Developer
From: New York, USA
Registered: 2009-10-22
Posts: 4,111
Website

Re: BTRFS raid 10 drive usage

I know a while back there were issues with df reporting incorrect filesystem sizes; I'm not sure they were ever resolved. btrfs comes with its own df utility, which is more likely to be accurate:

btrfs filesystem df <mountpoint>


#3 2010-11-23 13:55:22

Dieter@be
Forum Fellow
From: Belgium
Registered: 2006-11-05
Posts: 2,004
Website

Re: BTRFS raid 10 drive usage

Store one file of size x and check whether free space decreases by x or 2x wink
It will probably be 2x.
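That experiment can be sketched like this (a rough sketch; MNT is a placeholder, point it at your btrfs mountpoint):

```shell
# Sketch of the suggested test; MNT is a hypothetical mountpoint.
MNT="${MNT:-/tmp}"
# Record available bytes before writing.
before=$(df -B1 --output=avail "$MNT" | tail -n1)
# Write a 100 MiB file and flush it to disk.
dd if=/dev/zero of="$MNT/spacetest.bin" bs=1M count=100 conv=fsync 2>/dev/null
# Record available bytes after writing.
after=$(df -B1 --output=avail "$MNT" | tail -n1)
echo "free space dropped by $(( before - after )) bytes"
rm -f "$MNT/spacetest.bin"
```

If df were accounting for raid10 data, the drop would be around 2x the file size; if the data is actually striped raid0, it will be roughly x.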




#4 2010-11-23 14:04:54

delerious010
Member
From: Montreal
Registered: 2008-10-07
Posts: 72

Re: BTRFS raid 10 drive usage

Yup, df -h is glitchy for these.
You were right; either "btrfs filesystem df" or "btrfs filesystem show" is a good alternative.

Thanks a bunch


#5 2011-11-02 14:50:54

ClashTheBunny
Member
Registered: 2011-11-02
Posts: 1

Re: BTRFS raid 10 drive usage

Hey, I don't know if you ever figured this out, but your problem is that you only mirrored/striped the metadata; without -d, data across multiple devices defaults to raid0, so it is striped with no redundancy:
       -d, --data type
              Specify how the data must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.
       -m, --metadata profile
              Specify how metadata must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.

You should have run:
mkfs.btrfs -m raid10 -d raid10 /drive1 /drive2 /drive3 /drive4
Hopefully you can fix this before a disk fails...
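With newer kernels and btrfs-progs (balance filters), the data profile can also be converted in place without recreating the filesystem. A sketch, assuming the filesystem is mounted at the placeholder /mnt and run as root:

```shell
# Sketch: restripe existing data to raid10 in place.
# /mnt is a hypothetical mountpoint; requires balance-filter support.
btrfs balance start -dconvert=raid10 /mnt
# Verify that the Data line now reports RAID10.
btrfs filesystem df /mnt
```

The balance will rewrite all existing data, so it can take a long time on a full filesystem.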

Offline
