I have a multi-device btrfs partition for my root filesystem.
I would like to try different RAID levels (I have 6 disks).
I have:
$ btrfs fi df /
Data, RAID0: total=18.00GiB, used=17.08GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=3.00GiB, used=1.28GiB
GlobalReserve, single: total=416.00MiB, used=0.00B
I run:
$ sudo btrfs balance start -dconvert=raid10 -mconvert=raid10 /
And I don't understand why I still have:
$ btrfs fi df /
Data, RAID0: total=18.00GiB, used=16.91GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=1.28GiB
GlobalReserve, single: total=416.00MiB, used=0.00B
Why can't I change my RAID level?
Am I doing something wrong?
EDIT - SOLVED:
No, I wasn't doing anything wrong;
it was due to a bug in the Linux 4.0 kernel.
As far as I can tell you're not doing anything wrong. Is there anything in dmesg?
Could you post the output of both "$ sudo btrfs balance start -dconvert=raid10 -mconvert=raid10 /" and "dmesg | tail"? That should have worked; I've done it several times in the last 3 months myself.
It may also help to post "journalctl | grep BTRFS", or at least some of it.
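If it helps, another quick check is the balance status subcommand (I'm assuming your btrfs-progs has it; it's been around for a while):
$ sudo btrfs balance status -v /
While the conversion is running it reports how many chunks are left; once it's finished it should say something like "No balance found", and at that point "btrfs fi df /" should show the new profiles.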
Looking at journalctl -b | grep BTRFS:
At boot time, I see normal things:
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 5 transid 20175 /dev/sde5
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 4 transid 20175 /dev/sdd5
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 6 transid 20175 /dev/sdf5
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 2 transid 20175 /dev/sdb5
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 3 transid 20175 /dev/sdc5
mai 04 20:26:38 Grand-PC kernel: BTRFS: device fsid 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9 devid 1 transid 20175 /dev/sda5
mai 04 20:26:38 Grand-PC kernel: BTRFS info (device sda5): disk space caching is enabled
mai 04 20:26:38 Grand-PC kernel: BTRFS info (device sda5): disk space caching is enabled
mai 04 20:26:38 Grand-PC kernel: BTRFS: has skinny extents
During the balance, it seems to be doing something, with lines like:
mai 05 23:23:02 Grand-PC kernel: BTRFS info (device sda5): found 10716 extents
mai 05 23:23:09 Grand-PC kernel: BTRFS info (device sda5): found 10716 extents
mai 05 23:23:09 Grand-PC kernel: BTRFS info (device sda5): relocating block group 199812448256 flags
Same thing in dmesg.
That's actually normal, you want to be seeing that.
Hmm, so it is doing a balance, but the end result is the same RAID level. Weird.
[edit] How about after the balance has finished? What sort of dmesg/journalctl messages are you seeing?
Also, could you post the output of "# btrfs fi sh"? I don't think this is the cause, but sometimes I've found that if there isn't enough free space on the volume, converts can go wrong.
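For the free-space question, "btrfs fi show" gives per-device allocation, and if your btrfs-progs is new enough (I think "filesystem usage" appeared around 3.18, so v4.0 should have it) you can get a more detailed breakdown including unallocated space:
# btrfs fi show /
# btrfs fi usage /
RAID10 needs unallocated space on at least four devices, so the per-device unallocated numbers are what matter for the convert.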
There is nothing else in dmesg/journalctl.
And I think I have enough free space:
Label: 'root' uuid: 60fc2276-29c5-40f8-bc3e-15b2d1fb9ac9
Total devices 6 FS bytes used 23.86GiB
devid 1 size 100.00GiB used 5.03GiB path /dev/sda5
devid 2 size 100.00GiB used 5.00GiB path /dev/sdb5
devid 3 size 100.00GiB used 6.00GiB path /dev/sdc5
devid 4 size 100.00GiB used 7.00GiB path /dev/sdd5
devid 5 size 100.00GiB used 6.00GiB path /dev/sde5
devid 6 size 100.00GiB used 5.03GiB path /dev/sdf5
df -h
/dev/sda5 600G 26G 574G 5% /
Strange, I can convert a cobbled-together array of loop devices with no problems. I haven't seen anyone mention problems with convert on the mailing list recently, but there was a similar topic back in February.
Can you post the output of:
btrfs --version
uname -a
Also, what mount options are you using for this array?
I just had a revelation: my / is on a subvolume, so I'm realizing that I'm not balancing the whole filesystem...
So everything is normal, I'm just an idiot...
I will retry on the real / now and it should be OK.
To answer your questions anyway:
btrfs --version
btrfs-progs v4.0
uname -a
Linux Grand-PC 4.0.1-1-ARCH #1 SMP PREEMPT Wed Apr 29 12:00:26 CEST 2015 x86_64 GNU/Linux
/dev/sda5 on / type btrfs (rw,relatime,thread_pool=9,compress=lzo,space_cache,autodefrag)
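(Side note: a quick way to double-check that / really is a subvolume, assuming btrfs-progs' subvolume subcommand behaves the way I expect:
$ sudo btrfs subvolume show /
It should print the subvolume's name and ID; the top-level volume is always ID 5.)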
It shouldn't matter that the target is the mount point for a subvolume, but yeah, you may want to mount the actual volume. Something like "# mount /dev/sda5 /mnt" then "# btrfs bal start -dconvert=raid10 -mconvert=raid10 /mnt".
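If you want to be explicit about it, the top level can also be mounted by ID (it is always subvolid 5), roughly:
# mount -o subvolid=5 /dev/sda5 /mnt
# btrfs bal start -dconvert=raid10 -mconvert=raid10 /mnt
That said, balance works on the whole filesystem no matter which subvolume the given path lives on, so it really shouldn't matter.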
You are right, it didn't change anything.
But I found the explanation: it is a bug in the Linux 4.0 kernel.
http://unix.stackexchange.com/questions … a-to-raid1
I don't have time to patch the kernel right now, so I guess I will wait and try again later.
If you don't have time to patch, a quick workaround would be to downgrade your kernel (+headers, +proprietary drivers) to the 3.19 series.
Or install linux-ck, which is still on 3.19 due to other problems, and is what I'm using (which is probably why I had no problems).
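If you go the downgrade route, the old packages are usually still in the pacman cache; roughly (adjust the filenames to whatever versions you actually have cached):
# pacman -U /var/cache/pacman/pkg/linux-3.19*.pkg.tar.xz /var/cache/pacman/pkg/linux-headers-3.19*.pkg.tar.xz
Then add linux to IgnorePkg in /etc/pacman.conf so it doesn't get upgraded again on the next -Syu, and remember to remove that once the fix lands.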
That sucks. Good to know though. Thank you.
Make sure you mark your thread as solved if you are satisfied.
Is this still broken in 4.0.4-2? I'm trying to convert something from within the June 2015 image without luck. Or is it just not reporting it correctly?
Ah, thanks.
[edit] The fix actually appears in 4.0.6. I just converted a RAID 1 to a RAID 10.
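For anyone landing here later, a quick sanity check before retrying the conversion (assuming the stock Arch packages):
$ uname -r
$ pacman -Q linux btrfs-progs
Make sure you're actually booted into 4.0.6 or newer, then re-run the balance and check with "btrfs fi df /" that the Data and Metadata lines now say RAID10.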