
#1 2017-11-05 13:06:16

beta990
Member
Registered: 2011-07-10
Posts: 207

[SOLVED] BTRFS: open_ctree failed

I know that a lot of topics on this already exist and I've tried most of the suggested fixes (the exact commands are sketched after this list):
- https://wiki.archlinux.org/index.php/Bt … ree_failed
- mount flags: recovery,degraded
- btrfs check --repair
- btrfs scrub start
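
Concretely, these are the kinds of invocations I mean (a sketch; /mnt is just an example mount point, and `recovery` was renamed `usebackuproot` in newer kernels):

```
# roughly what I tried
mount -o recovery,degraded /dev/mapper/hdd1 /mnt
# --repair is destructive and documented as a last resort
btrfs check --repair /dev/mapper/hdd1
# scrub needs the filesystem mounted
btrfs scrub start /mnt
```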

This is my current RAID-1 setup (a creation sketch follows the list):
- HDD1 (luks + btrfs)
- HDD2 (luks + btrfs)
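
For context, the pool was created along these lines (from memory; the device names and key setup here are illustrative):

```
# open both LUKS containers (real devices/keys differ)
cryptsetup open /dev/sda1 hdd1
cryptsetup open /dev/sdb1 hdd2
# Btrfs RAID-1 for both data and metadata across the two mappings
mkfs.btrfs -m raid1 -d raid1 /dev/mapper/hdd1 /dev/mapper/hdd2
```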

When trying to mount /dev/mapper/hdd1 on boot, I get the famous `BTRFS: open_ctree failed` message.
When trying to mount /dev/mapper/hdd2 afterwards, it states `Unable to find block group for 0`.
I'm unable to mount and access my data.

Doing the same on a live USB (arch_201711), I'm able to mount /dev/mapper/hdd1 fine and can access my data.
Mounting /dev/mapper/hdd2 results in the same error as shown above, so I need to use the degraded mount flag for /dev/mapper/hdd1.
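
On the live USB, that boils down to something like this (sketch; /dev/sdX1 and /dev/sdY1 stand in for the real partitions):

```
cryptsetup open /dev/sdX1 hdd1
cryptsetup open /dev/sdY1 hdd2
# hdd1 only comes up with the degraded flag
mount -o degraded /dev/mapper/hdd1 /mnt
# useful diagnostics while poking around
btrfs filesystem show
dmesg | grep -i btrfs
```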

At the moment I'm copying all my stuff to a different drive, so hopefully nothing has been corrupted.
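
I'm copying off the degraded mount with plain rsync; if even that mount had failed, `btrfs restore` can pull files straight off an unmountable filesystem (sketch; the destination path is an example):

```
# copy from the degraded mount
rsync -aHAX /mnt/ /run/media/backup/
# fallback if nothing mounts at all
btrfs restore -v /dev/mapper/hdd1 /run/media/backup/
```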

Is there something else I could try? It looks like HDD2 has a corrupted data/tree; what should I try to recover the drive?
Would it be possible to remove HDD2 from the RAID array, format/clean the disk, and then put it back, or is my whole Btrfs RAID broken?
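
In other words, something like the following, if that is even a safe path (untested sketch; I'd only run it against a full backup):

```
# keep the surviving member mounted degraded
mount -o degraded /dev/mapper/hdd1 /mnt
# clear the stale Btrfs superblocks on the bad member
wipefs -a /dev/mapper/hdd2
# re-add it, drop the broken entry, and rebuild the mirror
btrfs device add /dev/mapper/hdd2 /mnt
btrfs device delete missing /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```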

Thanks.

SOLVED:
Created a backup, reformatted the disk with ZFS.

Last edited by beta990 (2017-12-09 11:44:53)


#2 2021-07-20 10:36:33

unfa
Member
From: Warsaw/Poland
Registered: 2016-11-07
Posts: 18
Website

Re: [SOLVED] BTRFS: open_ctree failed

That doesn't sound like [SOLVED] to me...


I make videos about audio production using Linux & FOSS: YouTube · Odysee · PeerTube

