I have a single-disk btrfs root partition which is currently unmountable. When I boot to a live CD and attempt to mount it manually:
% sudo mount -o discard,compress=lzo /dev/sda3 /newarch
mount: wrong fs type, bad option, bad superblock on /dev/sda3,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
dmesg tells me:
BTRFS: device label arch64 devid 1 transid 1960 /dev/sda3
BTRFS info (device sda3): enabling auto recovery
BTRFS info (device sda3): disabling disk space caching
BTRFS: detected SSD devices, enabling SSD mode
parent transid verify failed on 23298048 wanted 1961 found 1957
BTRFS: failed to read log tree
BTRFS: open_ctree failed
I get the same if I try to mount with the recovery option, so no help there.
Running btrfsck in repair mode outputs the following, but again does not change the state of the disk; it still cannot be mounted. I am backed up but would like to learn how to handle this.
% sudo btrfsck --repair /dev/sda3 :(
enabling repair mode
parent transid verify failed on 23298048 wanted 1961 found 1957
parent transid verify failed on 23298048 wanted 1961 found 1957
Ignoring transid failure
Checking filesystem on /dev/sda3
UUID: 6dab04e4-f277-416a-b9d5-7c471815f2f0
checking extents
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
checking csums
checking root refs
Recowing metadata block 23298048
Couldn't find owner root 18446744073709551610
Transid errors in file system
found 1404572718 bytes used err is 1
total csum bytes: 3511100
total tree bytes: 211009536
total fs tree bytes: 198295552
total extent tree bytes: 7733248
btree space waste bytes: 37341046
file data blocks allocated: 4195098624
referenced 4769267712
Btrfs v3.14.2-dirty
Last edited by graysky (2014-06-06 21:03:23)
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
btrfs is still beta if I recall correctly? As in, it is not guaranteed to work correctly in a production environment? I've never used it because of that impression; I stick with good old ext4.
Sorry I don't have anything constructive to contribute - just pointing out that I believe that it isn't production-ready yet.
Your comments are fine. I am about to nuke the drive and do just that [ext4]. I did manage to fix the mount error by running btrfs-zero-log /dev/sda3, so I can mount the drive now. BUT the data on it seems to be f*cked. Running the btrfsck utility gave thousands of lines like this:
checking extents
checking free space cache
checking fs roots
root 5 inode 172963 errors 1000, some csum missing
root 5 inode 172966 errors 1000, some csum missing
root 5 inode 172967 errors 1000, some csum missing
root 5 inode 172968 errors 1000, some csum missing
root 5 inode 172969 errors 1000, some csum missing
root 5 inode 172971 errors 1000, some csum missing
root 5 inode 172993 errors 1000, some csum missing
root 5 inode 173001 errors 1000, some csum missing
...
root 5 inode 518745 errors 1000, some csum missing
root 5 inode 518756 errors 1000, some csum missing
If I mount with the -o recovery option, I can see files, but if I attempt to tar up a directory tree, I get tons of bullshit errors like:
filename: Read error at byte 0, while reading 10240 bytes: Input/output error
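A generic pattern for getting past these I/O errors is to copy file by file rather than in one archive, logging failures instead of letting a single bad file abort the run. This is a sketch only: SRC would be the ro,recovery mount point and DST the rescue disk, but here temp directories stand in for them so the snippet is self-contained and safe to run anywhere.

```shell
# File-by-file salvage loop: copy what is readable, record what is not,
# and keep going. SRC/DST are placeholders backed by temp dirs here.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo 'important data' > "$SRC/ok.txt"   # stand-in for a readable file

find "$SRC" -type f | while IFS= read -r f; do
    rel=${f#"$SRC"/}
    mkdir -p "$DST/$(dirname "$rel")"
    cp "$f" "$DST/$rel" 2>/dev/null || echo "FAILED: $rel" >> "$DST/failed.log"
done

cat "$DST/ok.txt"
```

On a real broken filesystem, anything that hits "Input/output error" ends up listed in failed.log instead of killing the whole copy the way tar's exit does.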
Last edited by graysky (2014-06-06 20:05:06)
Yeah, btrfs has tons of cool features but they come to naught if it ends up like this situation. Good luck graysky, hope you didn't lose too much data.
Maybe I will let my temper cool down and wait to see whether other Archers have suggestions for data recovery. I would prefer not to blow away the disk and restore a week-old backup. Plus, I might learn something.
EDIT:
% sudo btrfsck --repair --init-csum-tree /dev/sda3
enabling repair mode
Creating a new CRC tree
Checking filesystem on /dev/sda3
Reinit crc root
checking extents
ref mismatch on [4440064 16384] extent item 1, found 0
attempting to repair backref discrepency for bytenr 4440064
Backrefs don't agree with each other and extent record doesn't agree with anybody, so we can't fix bytenr 4440064 bytes 16384
failed to repair damaged filesystem, aborting
Last edited by graysky (2014-06-06 20:19:08)
!!!!!MAKE BACKUPS before experimenting with btrfs!!!!
Be careful with the --init-csum-tree flag. As far as I remember, this flag just zeroes all csums and effectively makes all files unreadable. And yes, it sounds completely unintuitive and useless.
About a year ago I had an error similar to yours. The data was still on the disk but the checksums were corrupted. After reading the internets and the btrfs mailing list I found that the situation with btrfs recovery tools was very poor. Maybe it has changed since then, though.
I ended up using the 'filefrag' tool, which reports the physical location of data blocks, and then used 'dd' to copy those blocks from the broken filesystem to a new partition. I even wrote a ruby script that recursively copies all files: https://gist.github.com/anatol/ffad3a59c02bfe917fd5 The only problem is that small files do not use extents (probably because they are stored inline in a metadata block?), so I was not able to recover them.
Here is additional information about these tools https://blogs.oracle.com/wim/entry/btrf … orruptions
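The core of that filefrag+dd idea can be sketched in a few lines: turn each extent line of `filefrag -v` output into a dd command that copies the extent off the broken device. The sample line below is hardcoded for illustration and /dev/sdXN is a placeholder; a real run would feed the whole `filefrag -v FILE` output through the awk and use the block size filefrag actually reports (4096 is assumed here).

```shell
# Translate one `filefrag -v` extent line into a dd command.
# Fields: ext:  logical_offset:  physical_offset:  length: ...
DEV=/dev/sdXN
sample='   0:        0..    1023:      88064..     89087:   1024:             eof'

cmd=$(printf '%s\n' "$sample" | awk -v dev="$DEV" -v out=foo.rec '
    /^ *[0-9]+:/ {
        gsub(/\.\./, "", $2)   # logical start block
        gsub(/\.\./, "", $4)   # physical start block
        gsub(/:/,    "", $6)   # extent length in blocks
        printf "dd if=%s of=%s bs=4096 skip=%s seek=%s count=%s conv=notrunc\n", dev, out, $4, $2, $6
    }')
echo "$cmd"
```

skip= reads from the physical block on the broken device, seek= writes at the file's logical offset, so multi-extent files reassemble correctly; inline-stored small files never show up in filefrag output, which matches anatolik's experience.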
Last edited by anatolik (2014-06-06 21:00:21)
Read it before posting http://www.catb.org/esr/faqs/smart-questions.html
Ruby gems repository done right https://bbs.archlinux.org/viewtopic.php?id=182729
Fast initramfs generator with security in mind https://wiki.archlinux.org/index.php/Booster
I had a similar problem with btrfs today. After running a pacman -Syu, I noticed that my root partition had suddenly become read-only (interrupting the upgrade process). I rebooted to find the partition was unmountable. Running btrfsck --repair didn't do anything, and I'm now restoring from a backup. The errors I got are extremely similar to the errors you posted.
Cleared the memory cells, repartitioned to ext4, restored my backup and only lost 1 day of data (nothing). F*ck btrfs and its do-nothing recovery tools...
Last edited by graysky (2014-06-06 21:03:05)
If you ever decide to go back to btrfs (which doesn't sound likely), you should not run btrfsck with the --repair flag unless told to do so by the developers. This of course isn't very intuitive, but at the moment that is just the way it is.
Not too long ago, a user inquired about what one should do if they experience issues. This was Hugo Mills' response:
On Mon, Aug 26, 2013 at 01:10:54PM -0600, Chris Murphy wrote:
>
> On Aug 26, 2013, at 11:41 AM, Nick Lee <email@nickle.es> wrote:
>
> > There was a discussion on IRC a few days ago that the problem with the tree root's block was likely the result of either an issue with the disk itself, or the chunk tree/logical mappings. I ran the chunk recover, looked over the errors it found, and hit write. (If it failed, I was going to run something like photorec, with loss of organization as a side effect.)
> >
> > I can write something more clear after my flight lands tomorrow if you want.
>
> I'm just curious about when to use various techniques: -o recovery,
> btrfsck, chunk-recover, zero log.

Let's assume that you don't have a physical device failure (which is a different set of tools -- mount -odegraded, btrfs dev del missing).

First thing to do is to take a btrfs-image -c9 -t4 of the filesystem, and keep a copy of the output to show josef.

Then start with -orecovery and -oro,recovery for pretty much anything.

If those fail, then look in dmesg for errors relating to the log tree -- if that's corrupt and can't be read (or causes a crash), use btrfs-zero-log.

If there's problems with the chunk tree -- the only one I've seen recently was reporting something like "can't map address" -- then chunk-recover may be of use.

After that, btrfsck is probably the next thing to try. If options -s1, -s2, -s3 have any success, then btrfs-select-super will help by replacing the superblock with one that works. If that's not going to be useful, fall back to btrfsck --repair.

Finally, btrfsck --repair --init-extent-tree may be necessary if there's a damaged extent tree. And if you've got corruption in the checksums, there's --init-csum-tree.

Hugo.
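Hugo's ordering can be written down as a checklist script. Everything below is a paraphrase of his list; /dev/sdXN and /mnt are placeholders, and DRY_RUN defaults to 1 so the script only prints what it would do. In practice you would run the steps by hand, one at a time, stopping at the first that succeeds. (The modern spelling of the chunk recovery step is `btrfs rescue chunk-recover`; older btrfs-progs shipped it as a standalone tool.)

```shell
# Hugo Mills' btrfs triage order, as a dry-run checklist.
DEV=${DEV:-/dev/sdXN}
MNT=${MNT:-/mnt}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run btrfs-image -c9 -t4 "$DEV" /tmp/btrfs.img   # 1. image first, to show the devs
run mount -o recovery "$DEV" "$MNT"             # 2. recovery mounts, rw then ro
run mount -o ro,recovery "$DEV" "$MNT"
run btrfs-zero-log "$DEV"                       # 3. only if dmesg blames the log tree
run btrfs rescue chunk-recover "$DEV"           # 4. only for chunk tree errors
run btrfsck "$DEV"                              # 5. read-only check before any --repair
```

Note that btrfsck --repair and the --init-* flags are deliberately absent; per the thread, those are last resorts to be used only on developer advice.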
@WonderWoofy thanks for that info
When I converted my root partition from ext4, everything worked fine for a while, until the drive suddenly became similarly unmountable. I tried btrfsck --repair, which only seemed to make things worse.
I'm still using btrfs on both / and /home, although posts like this make me think that I really should improve my backup routine a little.
You should definitely have a backup. Btrfs is awesome, but it is still in pretty heavy development. There don't seem to be a plethora of data eating bugs anymore, but things are being found and fixed all the time.
@WW - I found a post where you quoted Hugo's checklist before I nuked my partition, but I was unable to recover. As I said, I only lost a day's worth of data thanks to backintime.
For future reference, there is the "Restore" function, which actually works quite well in getting files off an unmountable FS. I've used it twice with good results: https://btrfs.wiki.kernel.org/index.php/Restore
@megagram - I went through that wiki link (extensive googling to avoid the repartitioning) but none of the steps therein recovered my data. Best advice I can offer is not to use btrfs without a robust backup scheme. I do weekly tars of my system + daily backintime snapshots of key files. The way I have mine set up is to run backintime via anacron, so it ends up running once per day, but only at the beginning of the day. In my case, I lost a day's worth of changes because I don't want it running on shutdown.
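A scheme like that can be wired up with two anacron entries along these lines. This is illustrative only: the job ids, delays, and backup path are made up, and the backintime flag assumes its CLI's scheduled-run mode.

```
# /etc/anacrontab (hypothetical entries -- job ids and paths are examples)
# period(days)  delay(min)  job-id      command
1               10          daily-snap  /usr/bin/backintime --backup-job
7               20          weekly-tar  tar -czpf /backup/system-$(date +%F).tar.gz --one-file-system /
```

Because anacron catches up on missed periods at the next boot, the daily job runs once near the start of the day rather than on shutdown, matching the behavior described above.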
Last edited by graysky (2014-06-08 11:39:41)
Best advice I can offer is not to use btrfs without a robust backup scheme.
I love btrfs. But I just want to make sure anyone reading this thread sees this sentence. It is not production ready at the moment.
But I just want to make sure anyone reading this thread sees this sentence. It is not production ready at the moment.
To quote Hartman, from Full Metal Jacket (1987), "Well, no shit!"
WonderWoofy wrote: But I just want to make sure anyone reading this thread sees this sentence. It is not production ready at the moment.
To quote Hartman, from Full Metal Jacket (1987), "Well, no shit!"
Indeed, but you would be amazed at how many people email linux-btrfs who don't understand this.
I know you have already restored from backup, but may I suggest LVM + ext4/XFS if you miss the snapshot functionality from Btrfs? I've found it works quite well, although you only get full-size snapshots. Cheers!
Last edited by Pse (2014-06-09 03:07:40)
It's too late as well, but I recommend #btrfs on freenode. The devs are rather active in the channel, and I've had some odd bugs that they've helped me quash right away. Moreover, it means fast upstream fixes if there is a bug in btrfs. GL!
But I just want to make sure anyone reading this thread sees this sentence. It is not production ready at the moment.
but it is very convenient for things like building & testing custom packages
generally, i don’t use btrfs on my storage partitions (/home), but i’m a huge fan of using btrfs on system partitions, since i always can either roll things back or (in worst case) restore my system from backup (never actually had to, but who knows…)
Best advice I can offer is not to use btrfs without a robust backup scheme.
i can offer even better advice to use anything with a robust backup scheme
— love is the law, love under wheel, — said aleister crowley and typed in his terminal:
usermod -a -G wheel love
It's been mentioned, but don't use the repair utility in write mode unless you know what you are doing. For anyone else reading who comes across a similar problem with btrfs, I would suggest always going to the mailing list and wiki, reading up a bit, and then asking there what to do. That is probably the best approach, because if you do the wrong thing you can seriously mess up your filesystem, or at the very least make things far worse. They need to work on making recovery more intuitive. Most people assume you just run the fsck utility right away, as with other filesystems, and if you do that you can often end up pretty screwed.
Also, I had some problems when using the single profile. My opinion is that everyone using btrfs should at least use RAID1 for metadata and data at this point. I had a filesystem get borked pretty badly (though with 90%+ recovery) when using single last year. I switched to RAID1 and it's been very solid since. You also want to do scrubs regularly, and run one after a power failure or the like too. That is what took out my first filesystem -- a bad power supply.
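In concrete terms, that setup is roughly `mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY` for a new filesystem (or adding a second device and converting an existing one with `btrfs balance start -mconvert=raid1 -dconvert=raid1`), plus a scheduled scrub. A systemd timer pair is one way to schedule it; the unit names and the scrubbed mount point below are illustrative, not a shipped unit:

```
# /etc/systemd/system/btrfs-scrub.service (hypothetical unit)
[Unit]
Description=btrfs scrub of /

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs scrub start -B /

# /etc/systemd/system/btrfs-scrub.timer
[Unit]
Description=Monthly btrfs scrub

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now btrfs-scrub.timer`; after a power failure, a manual `btrfs scrub start /` is still worth running right away rather than waiting for the timer.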
Last edited by davidm (2014-06-26 00:17:49)
btrfs works just fine for me, but yes, it can be sort of unstable.
You should always have backups. It doesn't matter if you're using btrfs or anything else; always have backups.
Last edited by kahrkunne (2014-06-29 18:10:02)
I don't know if 3 months is necro-bump territory, but I just wanted to add on to this thread.
I just had the same problem, and nothing I did would fix it either. The strange thing is, my Arch partition and EVERY OTHER Linux partition had the "msftdata" flag set according to GParted. I know my setup well, and I have never set such a flag, nor have I noticed it before.
I went to update and noticed I had a read-only filesystem. But it was worse than that: I couldn't pull anything off the disk (a Samsung 840 Pro SSD, only a year old and good according to SMART, including no reallocated sectors). I had no choice but to hard power down (systemctl didn't work, mount didn't work, etc.).
The Arch partition wouldn't mount, even in a live CD. Same error as the OP, and I tried all the steps in this thread to fix it. The strange thing is, both my Debian and Gentoo partitions (also btrfs, but not mounted) failed as well. Gentoo worked once I used GParted to remove the msftdata flag, while the Debian partition mentioned "fsck.fat marked partition as dirty" (why is fsck.fat running?) and gave me a c_tree btrfs error; I couldn't get it working either.
I have Windows 7, which I never use, and it was/is fine. I have backups, so I decided to partition ext4 and run dd for about 10 GB worth of data (via live CD) with no issues (I did this in case my SSD controller was going bad). Formatted btrfs, restored Arch and Debian, and now everything is fine?
I'm pretty confident with filesystems/partitioning, but I honestly have no idea wtf happened! I put this here just in case someone in the future experiences the "msftdata" flag being tripped and figures out what happened.
@GSF1200S did you try -orecovery and -oro,recovery followed by btrfs-zero-log when it didn't work?
Or did you use btrfsck --repair or another last resort first?
I am a total noob, but most failures I see in forums seem to come from using something other than those priority tools first, as Hugo Mills suggested. I'm really just curious for myself, in case I end up with the same problem.
More info seems to be here:
https://btrfs.wiki.kernel.org/index.php … el_oops.21
https://btrfs.wiki.kernel.org/index.php/Btrfs-zero-log
Please share the order in which you did things.