Hey all, I've been trying for the past two days to install Arch Linux onto my first-generation OCZ Revodrive 120 GB SSD. I am able to install GRUB with grub-install /dev/mapper/sil_raidassignedname, but whenever I reboot, GRUB gives the "error: no such device: long UID here."
I've read the Fake RAID wiki and used:
"modprobe dm_mod"
"dmraid -ay"
"ls -la /dev/mapper/"
"modprobe sata_sil"
"dmraid -tay"
Once I run those commands I am able to use "cfdisk /dev/mapper/sil_raidassignedname" and create a 15 GB root partition, with the rest of the space for the home partition. When I have created the partitions with cfdisk, I run "dmsetup remove_all" followed by "dmraid -ay" and "ls -la /dev/mapper" to refresh the array so the newly created partitions show up under "lsblk".
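To be explicit, the partition-and-refresh sequence I run is (sil_raidassignedname being whatever name dmraid assigned to my array):
"cfdisk /dev/mapper/sil_raidassignedname"
"dmsetup remove_all"
"dmraid -ay"
"ls -la /dev/mapper"
"lsblk"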
Once the partitions are created I format them both with these commands:
"mkfs.ext4 /dev/mapper/sil_raidassignedname1" - root
"mkfs.ext4 /dev/mapper/sil_raidassignedname2 " - home
Then I mount both partitions at the root and home mount points:
mount /dev/mapper/sil_raidassignedname1 /mnt
mkdir /mnt/home
mount /dev/mapper/sil_raidassignedname2 /mnt/home
Once I have mounted both partitions, I use "pacstrap -i /mnt base base-devel" to install the system, then generate the fstab with "genfstab -U -p /mnt >> /mnt/etc/fstab".
I have added "dm_mod" to the MODULES line in "mkinitcpio.conf" and have added "dmraid" to the HOOKS line, as instructed in the Fake RAID wiki.
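For reference, the relevant lines in /etc/mkinitcpio.conf end up looking roughly like this (the other hooks shown are just the stock defaults and may differ on your install; the important parts are dm_mod in MODULES and dmraid somewhere before filesystems in HOOKS):
MODULES="dm_mod"
HOOKS="base udev autodetect modconf block dmraid filesystems keyboard fsck"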
After that I use "arch-chroot /mnt" and install GRUB with "pacman -S grub", then finally "grub-install /dev/mapper/sil_raidassignedname".
Once I reboot into GRUB, it will display the Arch Linux boot options but always fails to boot. Does anyone else have a Revodrive or RAID 0 drives and gotten Arch Linux and dmraid to boot correctly with GRUB? I installed Fedora 19 and it installs out of the box to the Revodrive with GRUB working perfectly, all on one partition. I then tried Linux Mint 15; it recognizes the array out of the box but fails at a certain percentage when copying files. Does anyone know how to get Arch Linux to work with dmraid? I've read around but still can't find the answer. I think the Fake RAID wiki might be outdated too, as it refers to using /arch/setup.
I've run low on ideas and can't seem to find much information on this; any advice is greatly appreciated.
Last edited by itzmeluigi (2013-07-16 18:08:14)
I ended up getting the Revodrive working by following the Software RAID and LVM wiki.
https://wiki.archlinux.org/index.php/So … ID_and_LVM
Here's the installation method I used on the Revodrive; I summarized the wiki guide for installing on the Revodrive.
Hope this helps other Revodrive users.
First activate the required modules:
# modprobe raid0
# modprobe dm-mod
Create an 8000 MB root partition and a home partition using the rest of the space on each of the Revodrive's two individual disks. They will be merged together later.
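A rough sketch of that partitioning step with cfdisk, assuming the two Revodrive disks show up as /dev/sdb and /dev/sdc (the same names used in the mdadm commands below); marking the partitions as type fd (Linux raid autodetect) is optional with mdadm but doesn't hurt:
# cfdisk /dev/sdb - create sdb1 (8000 MB) and sdb2 (rest of the disk)
# cfdisk /dev/sdc - same layout: sdc1 and sdc2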
The boot partition and GRUB have to be installed on another device, as they cannot be part of the LVM RAID.
________RAID Installation:
Create the / array at /dev/md0:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sd[bc]1
Create the /home array at /dev/md1:
# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sd[bc]2
The boot loader must go on a device that is not part of the LVM RAID.
________LVM Installation:
This section will convert the two RAIDs into physical volumes (PVs), then combine those PVs into a volume group (VG). The VG will then be divided into logical volumes (LVs) that will act like physical partitions (e.g. /, /var, /home). If you did not understand that, make sure you read the LVM Introduction section.
Make the RAIDs accessible to LVM by converting them into physical volumes (PVs) using the following command. Repeat this action for each of the RAID arrays created above.
# pvcreate /dev/md0
# pvcreate /dev/md1
Confirm that LVM has added the PVs with:
# pvdisplay
________Create the Volume Group
The next step is to create a volume group (VG) with the first PV:
# vgcreate VolGroupArray /dev/md0
# vgextend VolGroupArray /dev/md1
Confirm that LVM has added the VG with:
# vgdisplay
________Create logical volumes
Now we need to create logical volumes (LVs) on the VG, much like we would normally prepare a hard drive. In this example we will create separate / and /home LVs. The LVs will be accessible as /dev/mapper/VolGroupArray-<lvname> or /dev/VolGroupArray/<lvname>.
Create a / LV:
# lvcreate -L 15g VolGroupArray -n lvroot
Create a /home LV:
# lvcreate -l +100%FREE VolGroupArray -n lvhome
Confirm that LVM has created the LVs with:
# lvdisplay
________Update the RAID configuration
Since the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be overwritten, because it only contains comments on how to fill it in correctly, and that is something mdadm can do automatically for you. So let us replace the original and have mdadm create a new one with the current setup:
# mdadm --examine --scan > /etc/mdadm.conf
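The resulting /etc/mdadm.conf should contain one ARRAY line per array, roughly of this form (the UUIDs below are just placeholders for whatever mdadm prints on your system):
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx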
________Configure System
mkinitcpio.conf can use a hook to assemble the arrays on boot. For more information see mkinitcpio Using RAID.
1. Add the dm_mod module to the MODULES list in /etc/mkinitcpio.conf.
2. Add mdadm_udev to HOOKS after udev, and the lvm2 hook between block and filesystems.
3. Add mdmon to BINARIES and shutdown to HOOKS in /etc/mkinitcpio.conf - this will prevent the array from being rebuilt/resynced on every boot.
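Putting steps 1-3 together, the relevant lines in /etc/mkinitcpio.conf end up looking roughly like this (the hooks other than mdadm_udev, lvm2 and shutdown are just the stock defaults and may differ on your system):
MODULES="dm_mod"
BINARIES="mdmon"
HOOKS="base udev mdadm_udev autodetect modconf block lvm2 filesystems keyboard fsck shutdown"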
________Create filesystems and mount logical volumes
Your logical volumes should now be located in /dev/mapper/. Now you can create filesystems on the logical volumes and mount them as normal partitions (if you are installing Arch Linux, refer to Mounting the partitions for additional details):
# mkfs.ext4 /dev/mapper/VolGroupArray-lvroot
# mount /dev/mapper/VolGroupArray-lvroot /mnt
# mkfs.ext4 /dev/mapper/VolGroupArray-lvhome
# mount /dev/mapper/VolGroupArray-lvhome /mnt/home
# mkfs.ext4 /dev/sd**
# mkdir /mnt/boot
# mount /dev/sd** /mnt/boot - use a partition on a device outside the LVM as /boot.
________Installing System
# pacstrap -i /mnt base base-devel
Then continue with the normal installation.
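For completeness, a sketch of how the rest of the installation can proceed for this setup (replace /dev/sdX with the non-LVM disk that holds /boot; these are the standard steps from the installation guide):
# genfstab -U -p /mnt >> /mnt/etc/fstab
# arch-chroot /mnt
# mdadm --examine --scan > /etc/mdadm.conf - re-run inside the chroot so the installed system's copy is up to date
# nano /etc/mkinitcpio.conf - apply the MODULES/BINARIES/HOOKS changes from above
# mkinitcpio -p linux
# pacman -S grub
# grub-install /dev/sdX
# grub-mkconfig -o /boot/grub/grub.cfg
# exit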
Last edited by itzmeluigi (2013-07-16 18:37:51)
Thanks, I just added it to my mkinitcpio.conf file and noticed it boots up faster now. I updated the guide with it also.