I'm currently trying to install ArchLinux on software RAID10 with LVM on top of it, on GPT disks using EXT4; however, I cannot get syslinux to install, much less boot the new system. I have been reading up on the installation for the last week or so, but I haven't really made any progress. I've outlined the basic setup below in hopes that someone might have an idea as to what I'm doing wrong. Keep in mind I am fairly new to ArchLinux.
Step #1 - Load all required modules
modprobe ahci
modprobe dm-mod
modprobe raid10
modprobe raid0
Step #2 - Creating the Partition Table on the Primary Hard-Drive.
$> gdisk /dev/sda
- Select Option "o" from menu to make a new GPT partition table
- Select Option "n" to make the base partition
- Select Option "n" to make the swap partition
- Select Option "x" for expert mode
- Select Option "a" for attributes
- Select Option "1" for first partition
- Select Option "2" for legacy boot
- Select Option "w" to write the changes
Step #3 - Copying the Partition Table to the other three.
$> sgdisk --backup=table /dev/sda
$> sgdisk --load-backup=table /dev/sdb
$> sgdisk --load-backup=table /dev/sdc
$> sgdisk --load-backup=table /dev/sdd
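One caveat worth mentioning: --load-backup clones the disk and partition GUIDs along with the layout, so it may be worth randomizing them on the copies (not part of the original steps):
$> sgdisk -G /dev/sdb   # --randomize-guids gives the clone fresh GUIDs
$> sgdisk -G /dev/sdc
$> sgdisk -G /dev/sdd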
Step #4 - Create the Raid Arrays
$> mdadm --create /dev/md0 --level=10 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1
$> mdadm --create /dev/md1 --level=0 --raid-devices=4 --metadata=1.2 /dev/sd[abcd]2
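Before layering LVM on top, it doesn't hurt to confirm both arrays assembled cleanly; these are standard mdadm/proc checks, not part of the original steps:
$> cat /proc/mdstat         # both md0 and md1 should be listed (md0 may still be resyncing)
$> mdadm --detail /dev/md0
$> mdadm --detail /dev/md1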
Step #5 - Make the Primary Array available to LVM & Create the Volume Group
$> pvcreate /dev/md0
$> pvdisplay
$> vgcreate ArchLinux /dev/md0
$> vgdisplay
$> lvcreate -L 5G ArchLinux -n 001 # /
$> lvcreate -L 10G ArchLinux -n 002 # /home
$> lvcreate -L 10G ArchLinux -n 003 # /srv
$> lvcreate -L 10G ArchLinux -n 004 # /usr/local
$> lvcreate -L 5G ArchLinux -n 005 # /var/log
$> lvcreate -L 5G ArchLinux -n 006 # /var/spool
$> lvdisplay
Step #6 - Activate LVM Group, set EXT4, & Mount the Partitions.
$> vgscan
$> vgchange -ay
$> mkfs.ext4 /dev/mapper/ArchLinux-001
$> mkfs.ext4 /dev/mapper/ArchLinux-002
$> mkfs.ext4 /dev/mapper/ArchLinux-003
$> mkfs.ext4 /dev/mapper/ArchLinux-004
$> mkfs.ext4 /dev/mapper/ArchLinux-005
$> mkfs.ext4 /dev/mapper/ArchLinux-006
$> mount /dev/mapper/ArchLinux-001 /mnt
$> mkdir /mnt/home
$> mkdir /mnt/srv
$> mkdir /mnt/usr
$> mkdir /mnt/usr/local
$> mkdir /mnt/var
$> mkdir /mnt/var/log
$> mkdir /mnt/var/spool
$> mount /dev/mapper/ArchLinux-002 /mnt/home
$> mount /dev/mapper/ArchLinux-003 /mnt/srv
$> mount /dev/mapper/ArchLinux-004 /mnt/usr/local
$> mount /dev/mapper/ArchLinux-005 /mnt/var/log
$> mount /dev/mapper/ArchLinux-006 /mnt/var/spool
Step #7 - Create Swap and Turn it on (Swap on Raid0 with no LVM)
$> mkswap /dev/md1
$> swapon /dev/md1
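A quick way to confirm the swap actually came up (standard tools, just a suggested check):
$> swapon -s   # lists active swap devices; /dev/md1 should appear
$> free -m     # total swap should match the size of md1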
Step #8 - Generate the FSTAB
$> genfstab -U -p /mnt >> /mnt/etc/fstab
Step #9 - Update /etc/mdadm.conf (Array Information)
$> mdadm --examine --scan > /etc/mdadm.conf
Step #10 - Update mkinitcpio.conf
$> nano /etc/mkinitcpio.conf
MODULES="ahci raid0 raid10 dm_mod"
HOOKS=".. mdadm_udev lvm2 filesystem .."
Step #11 - Install the Syslinux (Boot Loader)
$> pacman -Sy
$> pacman -S syslinux
$> pacman -S gptfdisk
$> nano /boot/syslinux/syslinux.cfg
change "APPEND root=/dev/sda3 ro" to "APPEND root=/dev/mapper/ArchLinux-001 ro"
$> syslinux-install_update -iam
FAILED to SET boot flag on /dev/sda <-- WTF!?!
That about sums it up; however, I omitted the obvious installation stuff like installing the base system. I was able to successfully set up on just LVM with GPT.
Anyone have a clue what is preventing syslinux from installing correctly? I already wiped the system just to start over, but I didn't really see anything off in the logs. On a side note, I was wondering what the performance difference would be between RAID1 + LVM and RAID10 + LVM. I thought I read that LVM can stripe, so using RAID10 with LVM is almost a waste, but I'm not sure if there are any other benefits to running RAID10 vs. RAID1 with LVM. Any input is much appreciated!
Thanks!
- Aaron
** UPDATE **
Still having issues booting with a new install (RAID10 + LVM on GPT). All I'm currently getting is a blinking cursor in the upper left hand corner of the screen.
This is just an observation! Have you taken btrfs into consideration? I found that it's incredibly easy to set up a RAID system with it; it manages the RAID without the need for any external tools, and as a plus, it can be fully optimized for SSD drives.
I know that it is considered unstable, but I'm using it as my root and storage filesystem and it hasn't broken anything, and it already has the btrfsck tool to fix filesystem errors.
Just a reminder: be careful with the /boot partition. If you decide to add compression to your filesystem, you will need a separate partition for /boot, because I found that the bootloader, at least syslinux, doesn't boot out of the box from compressed filesystems.
$> pacman -Sy
$> pacman -S syslinux
$> nano /boot/syslinux/syslinux.cfg
change "APPEND root=/dev/sda3 ro" to "APPEND root=/dev/mapper/ArchLinux-001 ro"
$> syslinux-install_update -iam
FAILED to SET boot flag on /dev/sda <-- WTF!?!
Don't forget to install the package "gptfdisk", or else syslinux won't support GPT disks. Then run "syslinux-install_update -iam" and after that edit syslinux.cfg to your needs.
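If syslinux-install_update still refuses to set it, the "boot flag" on GPT is just the legacy BIOS bootable attribute (bit 2), and it can also be set by hand with sgdisk. A hedged fallback, assuming partition 1 on each disk is the one holding the boot files:
$> sgdisk /dev/sda --attributes=1:set:2   # set legacy BIOS bootable on partition 1
$> sgdisk /dev/sda --attributes=1:show    # verify the attribute stuck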
This is just an observation! Have you taken btrfs into consideration? I found that it's incredibly easy to set up a RAID system with it; it manages the RAID without the need for any external tools, and as a plus, it can be fully optimized for SSD drives.
I know that it is considered unstable, but I'm using it as my root and storage filesystem and it hasn't broken anything, and it already has the btrfsck tool to fix filesystem errors.
Just a reminder: be careful with the /boot partition. If you decide to add compression to your filesystem, you will need a separate partition for /boot, because I found that the bootloader, at least syslinux, doesn't boot out of the box from compressed filesystems.
I'm still fairly new to ArchLinux and Linux in general, so I would like to get this setup working first before I start dipping into stuff that I have never heard of, but I will definitely put that on a list of things to check out later down the road once I get a bit more familiar with this.
Don't forget to install the package "gptfdisk", or else syslinux won't support GPT disks. Then run "syslinux-install_update -iam" and after that edit syslinux.cfg to your needs.
I did install gptfdisk; I just didn't outline it above. I will amend my original post to reflect this.
I read somewhere that with syslinux you can't put your boot partition on RAID10 or any other form of RAID that stripes. I think I'm going to give GRUB2 a stab (I'm not really excited about GRUB2) at a reinstall tomorrow and drop /boot on RAID1, although this is 2013 and I don't really like the idea of splitting /boot off from the root of the operating system. Which brings me to my other question:
I thought I read that LVM can stripe, so using RAID10 with LVM is almost a waste, but I'm not sure if there are any other benefits to running RAID10 vs. RAID1 with LVM. Any input is much appreciated!
If LVM is capable of striping the data across the drives, and there is no performance hit between RAID10 and RAID1 + LVM, then I may consider running the entire system from RAID1. I just haven't really found any benchmarks or concrete 'YES' or 'NO' answers.
Thank you for your reply! I appreciate it!
Figured I would post my FSTAB, SYSLINUX.CFG, MKINITCPIO.CONF, and MDADM.CONF
$> nano /etc/fstab
# /dev/md1
UUID=28ca7c4c-6e3b-4f7f-8859-d8ee2560cee9 none swap defaults 0 0
# /dev/mapper/ArchLinux-001
UUID=46b99467-ea15-410d-9991-1833e49d6a64 / ext4 rw,relatime,stripe=256,data=ordered 0 1
# /dev/mapper/ArchLinux-002
UUID=2c93e5cf-0752-48a9-b24d-b737583a52f8 /home ext4 rw,relatime,stripe=256,data=ordered 0 2
# /dev/mapper/ArchLinux-003
UUID=89dca462-9b57-4cd6-88c2-6de849938c60 /srv ext4 rw,relatime,stripe=256,data=ordered 0 2
# /dev/mapper/ArchLinux-004
UUID=d99b3311-ffdd-4eb8-8344-9887a51ae5ff /usr/local ext4 rw,relatime,stripe=256,data=ordered 0 2
# /dev/mapper/ArchLinux-005
UUID=d85c9092-6b7a-408a-9153-c3bd50887c4e /var/log ext4 rw,relatime,stripe=256,data=ordered 0 2
# /dev/mapper/ArchLinux-006
UUID=d36501b5-573b-4750-9cd0-6a76b6a4b624 /var/spool ext4 rw,relatime,stripe=256,data=ordered 0 2
$> nano /boot/syslinux/syslinux.cfg
LABEL arch
MENU LABEL Arch Linux (Normal)
LINUX ../vmlinuz-linux
APPEND root=/dev/mapper/ArchLinux-001 ro
INITRD ../initramfs-linux.img
LABEL archfallback
MENU LABEL Arch Linux (Fallback)
LINUX ../vmlinuz-linux
APPEND root=/dev/mapper/ArchLinux-001 ro
INITRD ../initramfs-linux-fallback.img
$> syslinux-install_update -iam
Syslinux install successful
FAILED to Set the boot flag on /dev/mapper/ArchLinux-001
$> nano /etc/mkinitcpio.conf
MODULES="ahci raid0 raid10 dm_mod"
HOOKS="base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck"
$> nano /etc/mdadm.conf
ARRAY /dev/md/0 metadata=1.0 UUID=f7fae7b2:c92c8127:63b20cbc:0785b1ed
ARRAY /dev/md/1 metadata=1.2 UUID=14bcec8a:9c945cec:818465f9:f73103d0
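One more thing that might be worth verifying with a config like this is whether the lvm2 and mdadm_udev hooks actually ended up in the generated image; lsinitcpio ships with mkinitcpio and can show that (just a suggested check, not something done above):
$> lsinitcpio -a /boot/initramfs-linux.img   # the analysis lists the hooks built into the image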
Well, I decided to create 4 virtual disks in VirtualBox to test this, to see if I can help you.
Well, I decided to create 4 virtual disks in VirtualBox to test this, to see if I can help you.
Much appreciated! I'm interested in seeing what the outcome is!
Well, this is the first thing I've discovered: it seems syslinux isn't able to boot from LVM2 volumes yet, from what I read in the Syslinux wiki. One of the conditions for syslinux is:
partition containing /boot is a real partition, not an LVM partition.
What I'm going to try is to create two separate arrays, one for /boot (non-LVM) and another for the rest of the system (LVM), and see if it works.
Another thing: it seems GRUB2 can boot from LVM.
Well my friend, you are in luck!
I was able to create a working system!
To use syslinux you really need to make a non-LVM /boot partition, and it must be RAID 1 so that the boot code and the contents of /boot are mirrored across all disks.
I'm going to write a complete walkthrough for you and for anyone else who needs it.
I was able to create a working system!
To use syslinux you really need to make a non-LVM /boot partition, and it must be RAID 1 so that the boot code and the contents of /boot are mirrored across all disks.
I'm going to write a complete walkthrough for you and for anyone else who needs it.
SWEET! However, I don't understand why syslinux can boot plain non-RAID LVM just fine but struggles with RAID + LVM. I tried putting /boot on RAID1 + LVM and it failed pretty badly. The Software RAID and LVM guide seemed to get it working just fine, but when I attempted that walkthrough I was met with failure as well. Could running my drives in AHCI mode be causing the issue? I think I will try switching AHCI off, though I don't really see that making a huge difference. I look forward to seeing your write-up. Thanks for taking the time to do this.
Install ArchLinux - RAID 10 + LVM + GPT + SYSLINUX
NOTE: Tested with 4 hard drives - using disks in multiples of 2 is recommended
1º Boot the live CD
2º Delete any LVM groups and arrays already created (ALL DATA WILL BE DELETED)
List lvm groups:
pvs
Delete lvm groups:
vgchange -an my_volume_group
vgremove my_volume_group
See array status:
mdadm --detail /dev/mdX
Remove array drives:
mdadm --fail /dev/mdX /dev/sdXY
mdadm -r /dev/mdX /dev/sdXY
mdadm --stop /dev/mdX
mdadm --zero-superblock /dev/sdXY
Make sure kernel clears old entries:
partprobe -s
3º Reboot live CD
4º Make new gpt partition table for all disks:
sgdisk -o /dev/sda
sgdisk -o /dev/sdb
sgdisk -o /dev/sdc
sgdisk -o /dev/sdd
cgdisk /dev/sda
sda1 - xxMB type linuxRAID: fd00 /boot - at least 100MB, since RAID 1 does not add capacity [RAID 1]
sda2 - xxMB type linuxRAID: fd00 swap - optional (only if you want swap); total swap is the sum of all 4 swap partitions [RAID 0]
sda3 - xxxMB type linuxRAID: fd00 / [RAID 10]
Copy the partition table to the other disks:
sgdisk --backup=table /dev/sda
sgdisk --load-backup=table /dev/sdb
sgdisk --load-backup=table /dev/sdc
sgdisk --load-backup=table /dev/sdd
Create the RAID 1 for /boot (needs --metadata=1.0):
mdadm -v --create /dev/md0 --level=raid1 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1
mkfs.ext4 /dev/md0
Create the RAID 0 for swap (only if needed):
mdadm -v --create /dev/md1 --level=raid0 --raid-devices=4 /dev/sd[abcd]2
mkswap /dev/md1
swapon /dev/md1
Create the RAID 10 for root
mdadm -v --create /dev/md2 --level=raid10 --raid-devices=4 /dev/sd[abcd]3
Create lvm volume on root:
pvcreate /dev/md2
NOTE: Use "pvs" to list physical volume
vgcreate root-vg /dev/md2
NOTE: Use "vgs" to list volume group
lvcreate -L XXXm -n lv_XXXXXXX root-vg
NOTE: you can create as many volumes as you wish!
EXAMPLES:
lvcreate -L 20g -n lv_root root-vg - /
lvcreate -L 20g -n lv_home root-vg - /home
lvcreate -L 5g -n lv_log root-vg - /var/log
lvcreate -L 5g -n lv_spool root-vg - /var/spool
Activate lvm group:
vgchange -ay root-vg
Format volumes:
EXAMPLES:
mkfs.ext4 /dev/root-vg/lv_root - /
mkfs.ext4 /dev/root-vg/lv_home - /home
mkfs.ext4 /dev/root-vg/lv_log - /var/log
mkfs.ext4 /dev/root-vg/lv_spool - /var/spool
Mount the system:
mount /dev/root-vg/lv_root /mnt - /
mkdir /mnt/boot
mount /dev/md0 /mnt/boot - /boot
mkdir /mnt/home
mount /dev/root-vg/lv_home /mnt/home - /home
etc ...
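Before running pacstrap, lsblk gives a quick picture of whether everything ended up mounted where intended (not part of the original walkthrough, just a sanity check):
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT   # md0 on /mnt/boot, the LVs on /mnt, /mnt/home, ...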
Install Base System:
pacstrap /mnt base base-devel
pacstrap /mnt syslinux gptfdisk sudo alsa-utils btrfs-progs dosfstools ntfsprogs wireless_tools wpa_supplicant rfkill linux-headers
Create FSTAB:
genfstab -U -p /mnt > /mnt/etc/fstab
Enter System:
arch-chroot /mnt
Build kernel:
sed -i 's/HOOKS="base udev autodetect modconf block filesystems keyboard fsck"/HOOKS="base udev block mdadm_udev lvm2 filesystems"/' '/etc/mkinitcpio.conf'
sed -i 's/MODULES=""/MODULES="dm_mod"/' '/etc/mkinitcpio.conf'
mkinitcpio -p linux
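Because those sed calls only replace exact-match strings, they silently do nothing if the installed defaults differ; it's worth confirming the edits actually took (and re-running "mkinitcpio -p linux" if they didn't):
grep -E '^(MODULES|HOOKS)=' /etc/mkinitcpio.conf   # should show dm_mod, mdadm_udev and lvm2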
Add RAID arrays for mdadm auto-detection:
mdadm --examine --scan > /etc/mdadm.conf
Setup Syslinux:
syslinux-install_update -iam
Edit the boot lines in /boot/syslinux/syslinux.cfg:
APPEND root=/dev/mapper/xxxxxxxxxxxxxxxxxxx ro
NOTE: check /etc/fstab for the name of your root volume [/] and replace xxxxxxxxxxxxxxxxxxx with it
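For example, with the volume names used earlier in this walkthrough (VG root-vg, LV lv_root), the device-mapper path would presumably be the line below; note that LVM escapes the dash in the VG name as a double dash. The /dev/root-vg/lv_root form used in the mount step should work as well.
APPEND root=/dev/mapper/root--vg-lv_root ro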
When you're done setting your system to your needs, just "exit" the chroot shell and "reboot" the system.
THE END
In answer to your question, it seems that in the Arch Linux wiki they skipped something; it's not complete.
It seems that many of the wiki pages aren't up to date, I think, because some things are broken in their walkthroughs!
Well, that's also the beauty of it: it makes us think for ourselves. I've learned a lot by trying to fix things myself!
NOTE: My walkthrough is a little long; let's hope I'm not forgetting something!
Activate lvm group:
vgchange -ay root-vg
Don't you have to "vgscan" first to find ALL the Volume Groups? and if I'm not mistaken you should beable to "vgchane -ay" instead of specifying the volume group which should activate all available Volume Groups and Logical Partitions. I agree that while some of the information on the Wiki is still good, there is a good amount that is outdated. I did notice that people usually try to warn you that one of the Wiki's is out dated, but it is hard to catch everything.
Your walkthrough isn't much different from what I did in terms of setup. Even when I put /boot on RAID1 I had the same outcome. I'm going to try disabling AHCI and see if that helps, in combination with following your walkthrough. Thanks again for taking the time to do this.
The difference is that I created a separate array for the /boot partition, without LVM; I don't see that in the code you posted.
Then I mounted it at /boot of the LVM root.
s1ln7m4s7r wrote:Activate lvm group:
vgchange -ay root-vg
Don't you have to "vgscan" first to find ALL the Volume Groups? and if I'm not mistaken you should beable to "vgchane -ay" instead of specifying the volume group which should activate all available Volume Groups and Logical Partitions. I agree that while some of the information on the Wiki is still good, there is a good amount that is outdated. I did notice that people usually try to warn you that one of the Wiki's is out dated, but it is hard to catch everything.
I didn't run vgscan, because it only serves to show you what you have done (information).
I did "vgchange -ay root-vg" because I like to see what I'm doing; sometimes if we are not specific it can backfire by not doing what we want. "vgchange -ay root-vg" will activate all logical volumes in root-vg.
The difference is that I created a separate array for the /boot partition, without LVM; I don't see that in the code you posted.
Then I mounted it at /boot of the LVM root.
You're correct there! I did put /boot on RAID1 + LVM.
For now this is the only way to work with syslinux; there is already work being done, or at least there are some requested features in Syslinux's own wiki, along with EFI support!
If you wish to have every volume in LVM you should try GRUB2, but I find that its setup is not as straightforward as syslinux's.