I've just dumped my raid10 + lvm in favor of a striped/mirrored BTRFS implementation. Basically, all it took was a mkfs.btrfs -m raid10 /drive1 /drive2 /drive3 /drive4.
Strangely enough, though, checking df -h reveals that I have 1.9T free (the full size) as opposed to 1T (half the size), as was the case with my old raid10 setup.
What exactly have I done wrong here? Or is it simply the way btrfs calculates disk space?
Store one file of size x and check whether free space decreases by x or by 2x.
Probably it will be 2x.
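A quick way to run that test (the mountpoint /mnt/btrfs and the file name are just placeholders for your own setup):

```shell
# Note the free space before writing anything
df -h /mnt/btrfs

# Write a 1 GiB test file and flush it to disk
dd if=/dev/zero of=/mnt/btrfs/testfile bs=1M count=1024
sync

# Check how much free space actually disappeared:
# ~1 GiB suggests the data is only striped, ~2 GiB suggests it is mirrored
df -h /mnt/btrfs
```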
Yup, df -h is misleading for these.
You were right there; either "btrfs filesystem df" or "btrfs filesystem show" is a good alternative for this.
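For anyone else landing here, this is roughly what those two commands look like (the mountpoint /mnt/btrfs is a placeholder):

```shell
# Per-profile usage breakdown (Data, Metadata, System) for a mounted filesystem
btrfs filesystem df /mnt/btrfs

# Per-device allocation for all detected btrfs filesystems
btrfs filesystem show
```

The first one is the interesting one here: it reports each allocation profile separately, so you can see directly whether your data is raid10 or single.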
Thanks a bunch
Hey, don't know if you ever figured this out, but your problem is that you only mirrored/striped the metadata:
-d, --data type
Specify how the data must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.
-m, --metadata profile
Specify how metadata must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.
You should have run:
mkfs.btrfs -m raid10 -d raid10 /drive1 /drive2 /drive3 /drive4
Hopefully you can fix this before a disk fails....
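You shouldn't need to reformat, though: btrfs can convert allocation profiles in place with a balance. A sketch, assuming a recent kernel and btrfs-progs with the convert filters, and /mnt/btrfs as a placeholder mountpoint:

```shell
# Rewrite existing data (and metadata) chunks using the raid10 profile
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/btrfs

# Verify the new profiles afterwards
btrfs filesystem df /mnt/btrfs
```

The balance rewrites every chunk, so it can take a long time on a full filesystem, but the volume stays online while it runs.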