So... I just rebooted my machine after doing some stuff from a USB drive onto which I had just installed the Arch ISO using dd.
After boot I went into screen and ran dhcpcd with sudo to connect to my router/the internet. After that I wanted to run openvpn, again with sudo. I thought that should be the second-to-last command in my history, so I quickly pressed the up key twice and hit enter..... followed by lots of frantic CTRL+C presses because, yes, I had accidentally run `sudo dd bs=4M if=/path/to/arch.iso of=/dev/sdb`.
It was running for... I'm not exactly sure. I hit CTRL+C after around 200ms, but it took dd around 1-2 seconds to actually stop.
It's a 3TB HDD which had a single NTFS partition using every sector available on the disk. The partition table was the default one; I think it's msdos. I do not have a backup of the partition table. Here's the output of the current table from fdisk:
Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x4c840490
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 1533951 1533952 749M 0 Empty
/dev/sdb2 172 82091 81920 40M ef EFI (FAT-12/16/32)
Partition 2 does not start on physical sector boundary.
I haven't done anything else so far, and I didn't try mounting it either. I would usually make a dd copy of the drive and 'play around' with that, but I do not have another 3TB drive besides this one. The drive was bought just a few days ago so there's not much on it: a system backup of Arch done with rsync (not an archive), one made with tar (and gzip), and a Dropbox backup (also not archived) with around 7000 files in total.
The most (and pretty much only) important thing is the Arch backup. If that matters: the .tar.gz archive was the last thing written to the disk.
Since I think this "issue" could possibly be solved by just changing the partition table and the starting sector/type of the partition or something like that (I don't have much experience when it comes to that kind of hard disk stuff), I decided to look into that before trying recovery tools like foremost/photorec/testdisk to get at that archive. I've searched for this kind of issue but only found a thread where the overwritten partition was LVM/LUKS, which is luckily not the case for me (I have a LUKS-encrypted hard disk connected to this machine too, which is usually located at /dev/sdb).
I also started a testdisk quick search; it's already at cylinder 7000 and has only found the newly created FAT16 partition (ARCHISO_EFI) so far. I'll keep it running, but I don't think it will find anything else since there was only one partition starting at sector 0.
Offline
Please tell me you have backups.
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
Offline
It was the backup...
Offline
If that was a backup, you still have the original data and can just re-do the backup...? If there was no other copy, it was not a backup anymore either. A backup means more than one copy at all times.
dd tells you exactly how much it copied, even if you CTRL+C'd it. If you don't know and haven't done anything else yet, you can find out with:
cmp /path/to/arch.iso /dev/sdb
If it tells you "EOF on arch.iso" then the ISO was completely copied; if it says "arch.iso sdb differ: byte X" then the first X-1 bytes match, i.e. X-1 bytes were copied. You can then proceed to zero out those X-1 bytes (or the size of arch.iso), as the ISO data may confuse recovery tools such as testdisk otherwise. And overwriting already-overwritten data does no harm, as long as you don't mess it up by overwriting more than that.
If it was a GPT partition table, there is a backup table at the end of the disk, so the original partitions should actually be visible to a GPT-capable program (gdisk? parted?). With an msdos partition table it's gone.
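The cmp arithmetic above can be sketched in shell. Note this uses a hard-coded sample "differ: byte X" message as a stand-in for real cmp output (cmp counts bytes starting at 1, so the first X-1 bytes matched):

```shell
# Hedged sketch: extract the first differing byte offset X from cmp's
# message and derive how many bytes dd actually overwrote (X - 1).
# The message string below is a stand-in for a real cmp run.
msg='arch.iso /dev/sdb differ: byte 167772161, line 620657'
x=$(printf '%s\n' "$msg" | sed -n 's/.*differ: byte \([0-9]*\).*/\1/p')
copied=$(( x - 1 ))
printf '%s bytes (= %s MiB) were copied\n' "$copied" "$(( copied / 1024 / 1024 ))"
# -> 167772160 bytes (= 160 MiB) were copied
```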
Last edited by frostschutz (2016-07-24 18:22:48)
Offline
I do have the original data for most of the things. After the Arch backup I made a system update though, and I still have some issues with it. I hope I won't need anything from that backup again, but since I stopped dd after a very short runtime, I thought this could be solved rather easily, and even faster than creating new backups.
This is the result of cmp:
Downloads/archlinux-2016.07.01-dual.iso /dev/sdb differ: byte 167772161, line 620657
GParted shows the same result as fdisk did, saying that there's a 40MiB (31.83MiB used) fat16 partition and that the other 2.73TiB are "unallocated".
Edit: By the way, dd didn't show any message at all after I hit CTRL+C; it took around twenty presses until it finally stopped.
Last edited by randomarchuser (2016-07-24 18:33:50)
Offline
The real problem, unfortunately, is NTFS. It is a black-box closed standard. Things are hit and (mostly) miss. Also, I believe that a lot of important structural stuff is at the front of an NTFS volume.
I really am sorry to harsh your mellow.
Offline
So if you didn't do anything else you copied 167772160 bytes = 160MiB.
You can zero those out:
dd bs=1M count=160 if=/dev/zero of=/dev/sdb
dd bs=1 count=167772160 if=/dev/zero of=/dev/sdb # mathematically challenged alternative
And then see if gdisk or parted give you partitions (pulled from presumed GPT backup at the end of the disk).
Otherwise run testdisk again...
If the kernel still knows the old partitions according to `cat /proc/partitions`, you can get the full list using `head /sys/block/sdb/sdb*/{start,size}` (sysfs exposes each partition's start and size in 512-byte sectors).
randomarchuser wrote: It's a 3TB HDD which had a single NTFS partition using every sector available on the disk.
Chances of survival are dwindling into the single digits now... too bad; if it was a disk with several partitions, with the filesystem of interest beyond the 160MiB mark, you'd have a better chance at recovery. If it was a single partition, it most likely started at 1MiB.
photorec will grab stuff that isn't fragmented.
Last edited by frostschutz (2016-07-24 18:44:24)
Offline
What I just did before anything else was install gdisk. I didn't have it and thought "maybe it shows something different". I executed `sudo gdisk /dev/sdb` and this is what it told me:
GPT fdisk (gdisk) version 1.0.1
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: damaged
Found valid MBR and corrupt GPT. Which do you want to use? (Using the
GPT MAY permit recovery of GPT data.)
1 - MBR
2 - GPT
3 - Create blank GPT
Your answer:
I haven't answered anything so far. What should I do? (The testdisk quick search is still running; it's at cylinder 60000/364800 at the moment.)
The "invalid backup GPT header" message confuses me a bit. If it was GPT and there is a backup header, shouldn't that one be at the end of the disk (and therefore untouched by my dd operation)?
Last edited by randomarchuser (2016-07-24 18:51:10)
Offline
I think you should answer "2" there, and then `x extra functionality (experts only)`, `r recovery and transformation options (experts only)`, `b use backup GPT header (rebuilding main)`.
`p print the partition table` to check what it did find if anything.
Take it with a grain of salt, I'm not too familiar with gdisk, myself.
If you decide to zero out the 160M, consider restarting testdisk from scratch. testdisk can get confused if there is wrong but valid content on a disk (like when it finds an ISO/UDF filesystem) and be sent off on the wrong track (it then tries to find stuff as it would look on an actual ISO and doesn't check for anything else).
Last edited by frostschutz (2016-07-24 19:00:40)
Offline
I got this result:
Your answer: 2
Warning! Main partition table overlaps the first partition by 64 blocks!
You will need to delete this partition or resize it in another utility.
Command (? for help): x
Expert command (? for help): r
Recovery/transformation command (? for help): b
Recovery/transformation command (? for help): p
Disk /dev/sdb: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 34A428E1-7AA1-4FDE-9D1A-E70210E35B69
Partition table holds up to 248 entries
First usable sector is 64, last usable sector is 1533888
Partitions will be aligned on 8-sector boundaries
Total free space is 1 sectors (512 bytes)
Number Start (sector) End (sector) Size Code Name
2 172 82091 40.0 MiB 0700 ISOHybrid1
I also hit `v verify disk` at the end which caused the following output:
Caution: The CRC for the backup partition table is invalid. This table may
be corrupt. This program will automatically create a new backup partition
table when you save your partitions.
Problem: The secondary header's self-pointer indicates that it doesn't reside
at the end of the disk. If you've added a disk to a RAID array, use the 'e'
option on the experts' menu to adjust the secondary header's and partition
table's locations.
Warning! Main partition table overlaps the first partition by 64 blocks!
You will need to delete this partition or resize it in another utility.
Caution: Partition 2 doesn't begin on a 8-sector boundary. This may
result in degraded performance on some modern (2009 and later) hard disks.
Offline
That's still seeing the ISO stuff. So this could mean two things: a) gdisk sees the ISO stuff and gets just as confused as everything else, so you have to zero it out first, or b) there wasn't a [GPT] partition table on that disk before.
Last edited by frostschutz (2016-07-24 19:15:53)
Offline
I zeroed out the first 168MB. gdisk `p` result:
Disk /dev/sdb: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EF4515BA-C87C-4A0B-8FD6-83DA84E8EFFD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2925 sectors (1.4 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 5860532223 2.7 TiB 0700
Testdisk result (without doing any search at all):
Disk /dev/sdb - 3000 GB / 2794 GiB - CHS 364801 255 63
Current partition structure:
Partition Start End Size in sectors
Bad GPT partition, invalid signature.
Trying alternate GPT
No FAT, NTFS, ext2, JFS, Reiser, cramfs or XFS marker
1 P MS Data 2048 5860532223 5860530176
1 P MS Data 2048 5860532223 5860530176
That doesn't look too bad in my opinion.... But I could be completely wrong.
Offline
That looks correct. The partition starts at 1MiB, so you overwrote the first 159MiB of your filesystem.
testdisk complains about missing filesystem markers, which is to be expected as those were overwritten by your ISO.
I'm not familiar enough with NTFS to judge whether there is any way to recover with the first 159MiB missing.
Offline
Disk /dev/sdb - 3000 GB / 2794 GiB - CHS 364801 255 63
Partition Start End Size in sectors
>D MS Data 2048 5860532223 5860530176
D MS Data 2392198936 2392201815 2880
D MS Data 4799482184 4799486279 4096
D MS Data 5468681607 5719815558 251133952
This is what testdisk found. I tried `P list files` for all of these; it worked only for the third one, which contains the files of the Arch ISO. (Which is very strange to me, because according to testdisk that partition starts at a very high sector... shouldn't these files be at the very beginning of the disk?)
I have now started photorec on the one partition it found, setting the file types to .gz and .tar only, as there is 100% no need for any other files. That will take a while. I'll report back. Thanks for all the assistance!
P.S.: photorec asked `Try to unformat a FAT filesystem (Y/N)` at the beginning; I hit `N`. I hope that wasn't wrong.
Offline
If you let testdisk scan the entire disk, then it might simply have found an ISO file that was stored within your NTFS filesystem.
Offline
That could be it, yeah. photorec found a 4.3GB archive (still 28GB missing, but I think that must be it), but it was broken. I tried a testdisk deep scan, but nothing.
I ended up re-formatting the whole hard disk and creating the backups again. I could have tried foremost, but it doesn't come with tar/gz support, so I would have had to add that on my own, and since I'm not familiar with file-type headers I thought it'd be easier and faster to just re-create everything.
Kinda off-topic, but is there a way to make dd ask for confirmation before it starts doing its job?
Offline
Create an alias??
Offline
Something like this could maybe work (not tested).
function confirm()
{
read -r -p "Are you sure? [y/N] " response
if [[ "${response,,}" =~ ^(yes|y)$ ]]; then
return 0
else
return 1
fi
}
alias dd='confirm && dd'
Or maybe enable passworded sudo if you haven't. That way you may be more inclined to think twice just in case (it is a sudo command after all).
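One caveat with the alias, worth noting since the accident above happened via sudo: bash only expands aliases on the first word of a simple command, so `sudo dd ...` would bypass the `dd` alias entirely. A wrapper function is more robust; `ddsafe` below is a made-up name and only a sketch:

```shell
# Hedged sketch: a wrapper you call instead of dd. It echoes the
# arguments, asks once, and runs the real dd only on an explicit yes.
ddsafe() {
    read -r -p "About to run: dd $* - continue? [y/N] " response
    if [[ "${response,,}" =~ ^(y|yes)$ ]]; then
        command dd "$@"    # or: sudo dd "$@"
    else
        echo "aborted" >&2
        return 1
    fi
}
```

You'd then type e.g. `ddsafe bs=4M if=arch.iso of=/dev/sdX`; the obvious catch is that nothing forces you to use the wrapper instead of plain dd.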
Last edited by Omar007 (2016-07-26 12:51:52)
Offline
Maybe you should parse some of dd's options (if=, of=) and display info about the device about to be read from/written to (size/name/type/serial of the device).
Otherwise even such a confirmation question won't prevent many accidents.
Maybe you should just stop using /dev/sdX and go full path via /dev/disk/by-id/... (or one of the other by-*).
But even if someone made the effort of building such fancy confirmation dialogs, it wouldn't help, since people won't think of using them until after the accident has already happened.
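The parse-the-options idea above could be sketched like this. `ddcheck` is a made-up name, and this is an illustration rather than a finished tool (the lsblk output columns are standard util-linux ones):

```shell
# Hedged sketch: pull the of= target out of dd's arguments and show
# what lsblk knows about it (size, model, serial, mountpoints)
# before asking for confirmation.
ddcheck() {
    local arg target=
    for arg in "$@"; do
        case $arg in
            of=*) target=${arg#of=} ;;
        esac
    done
    if [ -b "$target" ]; then
        echo "You are about to WRITE to this device:" >&2
        lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT "$target" >&2
    fi
    read -r -p "Run dd $*? [y/N] " response
    [[ "${response,,}" =~ ^(y|yes)$ ]] && command dd "$@"
}
```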
Last edited by frostschutz (2016-07-26 12:59:56)
Offline
Thanks! I'm going to implement this.
I have passworded sudo, but since I had used it for another command right before, it didn't ask for a password again, which is behaviour I wouldn't like to change.
I am sorry. I just noticed the title I gave this thread. I didn't run `dd` on the wrong hard disk by accident; I ran `dd` completely by accident. (I thought the second-to-last command I had used was openvpn, so I just hit the up key twice followed by enter without looking first.)
So all I really need is something that tells me "Hey, you're about to run dd, are you aware of that?" and the above code should work perfectly fine for this.
Thank you very much for all the help, everyone. At least I've learned a few things about partition tables and GPT that I didn't know before.
Last edited by randomarchuser (2016-07-26 15:16:11)
Offline
randomarchuser wrote: I didn't run `dd` on the wrong hard disk by accident, I ran `dd` completely by accident.
I have a couple suggestions based on that sentence alone. The most practical is to use HISTIGNORE (assuming your shell is bash). I only tested this briefly, but `HISTIGNORE='dd*'` should prevent any command beginning with 'dd' from being recorded in history. Note that HISTIGNORE should be set in your .bashrc:
export HISTIGNORE='dd*'
You could also try to get in the habit of putting an extra space before sensitive or dangerous commands:
$ ls ~/funny_cat_pics
$ ls ~/.secret_Pr0n_stash
The second command won't be recorded in history [edit: if HISTCONTROL is set appropriately]. Again, this presumes bash.
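Both guards together, as they might look in ~/.bashrc (bash-specific; `ignoreboth` is shorthand for `ignorespace` plus `ignoredups`):

```shell
# ~/.bashrc - history guards discussed above (bash only).
export HISTIGNORE='dd*'          # never record commands starting with "dd"
export HISTCONTROL=ignoreboth    # skip duplicates and space-prefixed commands
```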
Last edited by alphaniner (2016-07-26 15:53:27)
But whether the Constitution really be one thing, or another, this much is certain - that it has either authorized such a government as we have had, or has been powerless to prevent it. In either case, it is unfit to exist.
-Lysander Spooner
Offline
Damn, what a great idea! One just never stops learning when using Linux.
Offline
Oops, I forgot that the leading-space trick only works if HISTCONTROL is set appropriately.
Offline
This is an awesome idea! Not only for dangerous commands but for everything you just don't want in your history (as your example shows, heh). Thanks a lot for this suggestion.
Offline