I currently have an 80GB hard drive on which I have Arch installed along with Mandrake 10 Community and XP. I found a pretty good deal on a 160GB drive that I'll want to install in the machine as well. My question is, what is the best setup for this? Is there going to be an issue with Linux recognizing the full 160GB or are there some settings to change?
I was kicking around a couple of options. Install the new drive (as slave?), then somehow copy the contents of the first drive to it so that eventually the newer, larger drive is the primary.
Or maybe some type of RAID option (although I don't know how well that would work on drives of different sizes like this).
Looking for some insight from this knowledgeable group.
Size: Adding a 160GB hard disk shouldn't present any problems for Linux.
Installation: The main issues, if you don't already know or understand them, are: (a) creating a file system on the drive; (b) creating a mount point for the drive; (c) mounting the drive (to test that it works, that you can access it properly); and (d) creating an entry in /etc/fstab for the hard disk. This process is described in many, many places (e.g., http://www.storm.ca/~yan/Hard-Disk-Upgrade.html or http://www.linuxplanet.com/linuxplanet/ … s/4232/1/).
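Steps (a) through (d) can be sketched roughly as follows. This assumes the new drive shows up as /dev/hdb (secondary drive on the first IDE channel) with a single partition /dev/hdb1, and that you want ext3 and a mount point called /mnt/bigdisk; substitute your actual device names and preferences:

```shell
# (a) Partition the disk (interactively), then create a filesystem.
fdisk /dev/hdb                   # create a partition, e.g. /dev/hdb1
mkfs.ext3 /dev/hdb1

# (b) Create a mount point.
mkdir /mnt/bigdisk

# (c) Mount it and confirm the full capacity is visible.
mount /dev/hdb1 /mnt/bigdisk
df -h /mnt/bigdisk               # should show roughly 160GB

# (d) Add an /etc/fstab entry so it mounts automatically at boot.
echo '/dev/hdb1  /mnt/bigdisk  ext3  defaults  0 2' >> /etc/fstab
```

The six fstab fields are device, mount point, filesystem type, options, dump flag, and fsck order.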
RAID: Please read up on RAID thoroughly. You should carefully think through the type of RAID you want to set up. I'm guessing that you want to set up a software RAID, and the following comments reflect that.
Please note that you will see the expression "hard disks of roughly equal size" sprinkled liberally through the RAID how-tos and descriptions. A combination of a massive additional hard disk added to a system with a much smaller hard disk isn't really appropriate for RAID. It's not that it won't work; you just won't get access to the full capacity of the larger hard disk (since you'll have to base the design on the smallest drive you have available).
My preferences: RAID5 and RAID1.
Here is text describing RAID5 from the "Software RAID HOWTO":
"This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks, and still maintain some redundancy. RAID-5 can be used on three or more disks, with zero or more spare-disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and -4 is, that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4.
If one of the disks fail, all data are still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data are lost. RAID-5 can survive one disk failure, but not two or more.
Both read and write performance usually increase, but can be hard to predict how much. Reads are similar to RAID-0 reads, writes can be either rather expensive (requiring read-in prior to write, in order to be able to calculate the correct parity information), or similar to RAID-1 writes. The write efficiency depends heavily on the amount of memory in the machine, and the usage pattern of the array. Heavily scattered writes are bound to be more expensive."
And here is its description of RAID1:
"This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare-disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size. If one disk is larger than another, your RAID device will be the size of the smallest disk.
If up to N-1 disks are removed (or crashes), all data are still intact. If there are spare disks available, and if the system (eg. SCSI drivers or IDE chipset etc.) survived the crash, reconstruction of the mirror will immediately begin on one of the spare disks, after detection of the drive fault.
Write performance is often worse than on a single device, because identical copies of the data written must be sent to every disk in the array. With large RAID-1 arrays this can be a real problem, as you may saturate the PCI bus with these extra copies. This is in fact one of the very few places where Hardware RAID solutions can have an edge over Software solutions - if you use a hardware RAID card, the extra write copies of the data will not have to go over the PCI bus, since it is the RAID controller that will generate the extra copy. Read performance is good, especially if you have multiple readers or seek-intensive workloads. The RAID code employs a rather good read-balancing algorithm, that will simply let the disk whose heads are closest to the wanted disk position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 6 ms equals a read of 123 kB at 20 MB/sec), picking the disk that will have the shortest seek time does actually give a noticeable performance improvement."
It's not that I necessarily wanted RAID, just something I wanted some input about. My main concern with the setup was, as mentioned above, the size difference between the drives (80GB vs. 160GB). So it sounds like it would be best not to set up a RAID environment and instead use the disk space 'as-is'.
When the second drive is installed, should it be set as slave? Then I guess I could use a command like dd to copy the 80GB drive to the 160GB drive and then make it the master?
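A whole-disk dd copy would look something like the sketch below. This assumes the old 80GB drive is /dev/hda and the new one is /dev/hdb; double-check with fdisk -l first, because getting if= and of= backwards destroys your data:

```shell
# Clone the entire old disk -- partition table, boot loader, and all --
# onto the start of the new disk.
dd if=/dev/hda of=/dev/hdb bs=1M
```

Note that this only writes to the first 80GB of the 160GB disk; afterwards you'd create an additional partition in the remaining space with fdisk, or resize the copied partitions.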
I moved my files to a new, larger, and faster hard disk a few months ago, so I can feel your pain.
You should be able to do what you suggest, but I'd probably do something more like:
1. Transfer all relevant files as coherently as possible to your new hard disk. However, you will run into a problem with mounting your new drive while transferring your data (i.e., while still running from the original drive), because you can't mount identically named mount points simultaneously. You might find this process easier if you do it from a live CD such as Knoppix (though you've got to be very vigilant, because you end up with two drives that look identical from a file-structure point of view). That way you can mount and unmount drives at will and still have a working environment.
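From the live CD, the copy itself might look something like this. The device names are assumptions (old root on /dev/hda1, new partition /dev/hdb1 already created and formatted); adjust to your layout:

```shell
# Mount both disks under distinct mount points.
mkdir -p /mnt/old /mnt/new
mount /dev/hda1 /mnt/old
mount /dev/hdb1 /mnt/new

# Archive-mode copy preserves permissions, ownership, timestamps,
# and symbolic links.
cp -a /mnt/old/. /mnt/new/

# Alternatively, rsync can do the same and be re-run to catch stragglers:
# rsync -aHx /mnt/old/ /mnt/new/

umount /mnt/old /mnt/new
```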
2. Make a change to /etc/lilo.conf or /boot/grub/menu.lst ON YOUR ORIGINAL DISK, as appropriate, that allows you to boot to one or the other partition/disk. This way you can try out the new drive, but still be able to return to the original if your configuration is wrong.
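If you're using GRUB, the menu.lst change could be a pair of entries like the fragment below. Everything here is illustrative -- the kernel path, root devices, and drive numbering are assumptions that must match your actual install (GRUB counts drives as hd0, hd1 regardless of master/slave naming):

```
# /boot/grub/menu.lst on the ORIGINAL disk

title  Arch Linux (original 80GB drive)
root   (hd0,0)
kernel /boot/vmlinuz root=/dev/hda1 ro

title  Arch Linux (new 160GB drive)
root   (hd1,0)
kernel /boot/vmlinuz root=/dev/hdb1 ro
```

At boot you can then pick either entry from the GRUB menu and fall back to the original disk if the new one doesn't come up cleanly.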
3. Make certain (i.e., actually do this) you know how to boot to your original partition/disk using a rescue floppy or CD-ROM.