Hey, firstly, I'm a noob to Arch Linux, and honestly I'm pretty much a noob to Linux in general, since I only started with Ubuntu a month ago. Anyway...
I backed up 4.7 GB of data with fwbackups a few days ago before moving over to Arch, and didn't verify it (because I didn't know about that), and now it's corrupt somewhere. I've tried bzip2recover, which gave me a lot of chunks of data (over 5000 blocks), but following the instructions here, I tested all of them, and the tests showed that not a single block was corrupt...
Anyway, I joined them all back together following another tutorial here, and when I tried to extract the tar archive, it came up with another error.
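For reference, the recover / test / reassemble steps look roughly like this (a sketch; `backup.tar.bz2` is an assumed name, and the `rec*` pattern matches the files bzip2recover writes out):

```shell
# Split the damaged archive into its individual compressed blocks.
bzip2recover backup.tar.bz2              # writes rec00001backup.tar.bz2, rec00002..., one file per block
# Test every recovered block individually.
for f in rec*backup.tar.bz2; do
    bzip2 -t "$f" || echo "corrupt block: $f"
done
# bzip2 streams can simply be concatenated back into one archive.
cat rec*backup.tar.bz2 > repaired.tar.bz2
bzip2 -t repaired.tar.bz2                # integrity-test the reassembled file
tar tjf repaired.tar.bz2 > /dev/null     # can tar at least list its contents?
```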
So currently I have an archive that is corrupt, yet isn't corrupt. I have noticed two things, though:
Firstly, a lot of the errors say that the file 'ends unexpectedly'.
Secondly, when I try to use head or tail to cut a chunk off the big archive, I end up with a file that is the EXACT same size, which implies head and tail are having no effect (I tried head with -2000, i.e. the first 2000 lines, and still got a 4.7 GB archive).
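(A side note on head/tail semantics, for anyone reading along: with GNU coreutils, `head -2000` / `head -n 2000` counts lines, which is close to meaningless on binary data like a bzip2 archive; to cut by bytes you need `-c`. A tiny demonstration, with an assumed throwaway file name:)

```shell
# Make a 102400-byte test file of zeros.
dd if=/dev/zero of=big.bin bs=1024 count=100 2>/dev/null
# -c cuts by bytes, so this really does produce a 2000-byte file.
head -c 2000 big.bin > first.bin
wc -c < first.bin     # 2000
```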
I've currently got the latest version of bzip2 and tar.
What exactly have you done with tail and head?
When using bzip2recover and following the first link you mentioned, how many of the files could you untar/bunzip? If bzip2 -t doesn't find any corrupt parts, it should be possible to unpack those pieces.
Further, if you still have the original set of data, just create a backup using
tar cvJpf backup.tar.xz --exclude=/dev --exclude=/proc --exclude=/sys /
and extract the tarball via:
tar xf backup.tar.xz
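And whatever tool creates the backup, it's worth testing the archive before deleting the originals. A minimal check might look like this (a sketch; the `docs` directory and file names are assumptions):

```shell
# Create an archive, then verify it before trusting it.
tar cJf backup.tar.xz docs
xz -t backup.tar.xz                  # test the compressed xz stream
tar tf backup.tar.xz > /dev/null     # make sure tar can read every entry
sha256sum backup.tar.xz > backup.tar.xz.sha256   # checksum, to re-verify after copying elsewhere
```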
With tail, I followed the instructions on the second website I mentioned in my first post, where I did something like:
tail -c +17185 before.tar > after.tar
and for head, I followed the man page (I think) and did roughly the opposite of the above, with:
head -c -2000 before.tar > after.tar
At least, I think I did that. I can't remember whether it was exactly like that, but I'm almost certain I used the 'head -c -2000' bit.
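For the record, those byte-offset forms behave like this on GNU coreutils (note that a negative count to `head -c` is a GNU extension; the file name here is just a throwaway example):

```shell
printf 'abcdefghij' > sample.bin
tail -c +4 sample.bin    # from byte 4 to the end: defghij
head -c 3  sample.bin    # first 3 bytes: abc
head -c -3 sample.bin    # everything except the last 3 bytes: abcdefg (GNU extension)
```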
With bzip2recover, it produced 5368 blocks, and after following the instructions of both websites (I had multiple attempts at this), I ended up not being able to extract anything. It produced an error every time. I even tried cpio, and that also reported 'file ends unexpectedly'.
I suspect either the archive really does end abruptly (maybe the backup process somehow stopped before finishing?), or there is a bad header. Either way, my head and tail commands aren't working for me.
Looks like no one knows what my problem is. I do have another backup of my most important document files, so I guess I'll just leave the archive.