I know a lot of topics on this already exist, and I've tried most of the suggestions:
- https://wiki.archlinux.org/index.php/Bt … ree_failed
- mount flags: recovery,degraded
- btrfs check --repair
- btrfs scrub start
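For reference, here is roughly what I ran (device names are examples from my setup; `btrfs check` without `--repair` is read-only, and I only tried `--repair` as a last resort since it is widely documented as risky):

```shell
#!/bin/sh
# Sketch of the recovery attempts; each step is printed rather than
# executed so the sequence can be reviewed first.
attempted_recovery() {
    echo "mount -o recovery,degraded /dev/mapper/hdd2 /mnt"
    echo "btrfs check /dev/mapper/hdd2"           # read-only diagnosis
    echo "btrfs check --repair /dev/mapper/hdd2"  # last resort, can worsen damage
    echo "btrfs scrub start -B /mnt"              # needs a mounted filesystem
}
attempted_recovery
```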
This is my current setup RAID-1:
- HDD1 (luks + btrfs)
- HDD2 (luks + btrfs)
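For clarity, the stack is assembled at boot roughly like this (underlying partition paths `/dev/sda1`/`/dev/sdb1` are placeholders; a btrfs RAID-1 is one filesystem spanning both devices, so mounting either mapper node mounts the whole array once both are unlocked):

```shell
#!/bin/sh
# Print the boot-time unlock/mount sequence for review.
unlock_and_mount() {
    echo "cryptsetup open /dev/sda1 hdd1"   # unlock first LUKS container
    echo "cryptsetup open /dev/sdb1 hdd2"   # unlock second LUKS container
    # Either mapper node works here; btrfs finds the other member itself.
    echo "mount -t btrfs /dev/mapper/hdd1 /mnt"
}
unlock_and_mount
```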
When trying to mount /dev/mapper/hdd1 on boot I'm getting the famous `BTRFS: open_ctree failed` message.
When trying to mount /dev/mapper/hdd2 afterwards it states `Unable to find block group for 0`.
I'm unable to mount and access my data.
Doing the same on a live USB (arch_201711), I'm able to mount /dev/mapper/hdd1 fine and can access my data.
Mounting /dev/mapper/hdd2 results in the same error as above, so I need to mount /dev/mapper/hdd1 with the degraded flag.
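On the live USB, the working mount looks roughly like this (mount point assumed; note that on kernels >= 4.6 the `recovery` option is spelled `usebackuproot`, per btrfs(5)):

```shell
#!/bin/sh
# Print the degraded-mount variants for review.
degraded_mount() {
    # Read-only and degraded: btrfs tolerates the missing/bad device.
    echo "mount -o ro,degraded /dev/mapper/hdd1 /mnt"
    # Additionally fall back to an older tree root if the current
    # one is damaged ('usebackuproot' replaced 'recovery' in 4.6+).
    echo "mount -o ro,degraded,usebackuproot /dev/mapper/hdd1 /mnt"
}
degraded_mount
```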
At the moment I'm copying all my stuff to a different drive, so hopefully nothing has been corrupted.
Is there anything else I could try? It looks like HDD2 has a corrupted data/metadata tree; what should I try to recover the drive?
Would it be possible to remove HDD2 from the RAID array, format/clean the disk, and then put it back, or is my complete Btrfs RAID broken?
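From what I've read, wiping HDD2 and rebuilding onto it would look roughly like the following sketch (the devid `2` is an assumption — it should be checked with `btrfs filesystem show` — and every step here is destructive to HDD2):

```shell
#!/bin/sh
# Print the device-replacement sequence for review.
replace_bad_device() {
    echo "wipefs -a /dev/mapper/hdd2"               # destroy corrupt signatures
    echo "mount -o degraded /dev/mapper/hdd1 /mnt"  # bring the array up on HDD1 only
    # Rebuild onto the wiped device; '2' is the old devid (assumed),
    # -f overwrites the target even though it held a filesystem before.
    echo "btrfs replace start -f 2 /dev/mapper/hdd2 /mnt"
    echo "btrfs replace status /mnt"                # watch rebuild progress
}
replace_bad_device
```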
Thanks.
SOLVED:
Created a backup, re-formatted the disk with ZFS.
Last edited by beta990 (2017-12-09 11:44:53)