
#1 2004-02-19 04:13:27

ubermartian
Member
From: Edinburgh
Registered: 2004-02-06
Posts: 32

LVM2 installation with a 2.6 Kernel HowTO

First, and very rough, draft.  Feel free to criticise, correct mistakes and ask questions; maybe I'll even be able to answer them.
------------------------------------------------------------------------
<B>INTRODUCTION</B>

Many Linux users will eventually use multiple partitions for a variety of reasons.  Choosing good partition sizes is a task of trial, error and often luck.

LVM (logical volume management) is a nice little system that makes looking after your hard drive space a breeze.  To put it simply, LVM takes over the space you allocate to it and manages the partition data on there for you.  Using LVM you can dynamically change the size of your partitions (though it is easier to grow a partition than to shrink it).

This document will hopefully show you how to get a working LVM2 system with a 2.6 kernel.  Things are different with LVM1 and/or pre-2.6 kernels; there is a lot of information out there on those, so I suggest you look elsewhere if you plan on using an older kernel.

An example will give a clearer picture than a dry discussion.  To start with we need a working AL (Arch Linux) system.  For this example we have two SCSI discs, sda and sdb.  Firstly, three partitions were created on sdb: a 75MB ext2 partition for /boot, a 150MB reiserfs partition for /, with the rest of the drive given over to an LVM partition (set the type to 8E in cfdisk).  Swap was put on sda along with two reiserfs partitions, each 2GB, to use as /usr and /var until they can be moved to LVM partitions.  It is intended that /usr, /var, /tmp, /home and /opt shall be managed by LVM.
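If you use plain fdisk rather than cfdisk, the type is set with the t command.  A quick sketch, assuming the LVM partition is the third one on sdb (the exact prompts may vary with your fdisk version):

#fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Command (m for help): w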

<B>INSTALLING LVM</B>

<B><U>Firstly, back up your important data.  </U></B>

AL was installed, and then upgraded to a 2.6-based system with the latest packages in current.  The kernel was compiled by hand, ensuring that the device-mapper option was enabled.  Then the system was rebooted and tested to make sure the upgrade was successful.
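For reference, the option in menuconfig lives under Device Drivers -> Multi-device support (RAID and LVM) -> Device mapper support.  Building it into the kernel rather than as a module saves loading dm-mod by hand before vgscan runs; the relevant .config lines should look like this:

CONFIG_MD=y
CONFIG_BLK_DEV_DM=y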

Now the system is ready to be moved over to LVM.  First, install LVM using pacman (note we install lvm2, not lvm, which is also available):

#pacman -S lvm2

This will install device-mapper and lvm2.
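Before touching any discs you can check that the kernel and the new userspace tools can talk to each other.  Something like this should work (dmsetup ships with device-mapper, and on a 2.6 kernel a device-mapper line shows up in /proc/misc once the driver is available):

#dmsetup version
#grep device-mapper /proc/misc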

<B>SETTING UP LVM</B>

Now we should be ready to set up our LVM partition on sdb.  It is listed in /dev as sdb3.  Use ls -l to show what it is linked to:

lr-xr-xr-x    1 root     root           34 Feb 17 00:13 /dev/sdb3 -> scsi/host0/bus0/target1/lun0/part3

Now we can use the pvcreate command to initialise the partition as a
physical volume:

#pvcreate /dev/scsi/host0/bus0/target1/lun0/part3

Physical volume "/dev/scsi/host0/bus0/target1/lun0/part3" successfully created
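If you want to double-check, pvscan and pvdisplay will show the new physical volume:

#pvscan
#pvdisplay /dev/scsi/host0/bus0/target1/lun0/part3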

Now we need to create volume groups.  The first group will be called vg1 and will be used to control the LVM partition on sdb.  Eventually a second group, vg0, will be created and used to control LVM on sda.  You can manage both drives under one group, but I like the extra control this arrangement gives me.  Use the vgcreate command to create groups:

#vgcreate vg1 /dev/scsi/host0/bus0/target1/lun0/part3
  Volume group "vg1" successfully created

To display information about a group, use the vgdisplay command:

#vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                255
  Cur LV                0
  Open LV               0
  Max PV                255
  Cur PV                1
  Act PV                1
  VG Size               8.34 GB
  PE Size               4.00 MB
  Total PE              2135
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2135 / 8.34 GB
  VG UUID               PMOmxM-CrhW-UIMd-P1L6-NMbS-UwCy-5CSeje

Now we can add logical volumes to the volume group using the lvcreate command:

#lvcreate -L1G -nlvopt vg1
  Logical volume "lvopt" created

This created a 1GB logical volume called lvopt in the vg1 group.  lvdisplay will show you information about logical volumes:

#lvdisplay

  --- Logical volume ---
  LV Name                /dev/vg1/lvopt
  VG Name                vg1
  LV UUID                YsDZsm-goB3-BGGG-J4Sg-Z0rZ-Cobf-JcP2Py
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             next free (default)
  Read ahead sectors     0
  Block device           254:0

   
Running ls /dev/vg1 should show lvopt too.

Now we need to create a filesystem on the partition.  This is exactly the same as for standard partitions.

#mkreiserfs /dev/vg1/lvopt

LVM partitions are mounted in the normal way, i.e.:

#mount /dev/vg1/lvopt /opt 

will mount our new partition under /opt.  The new /opt partition was then entered into /etc/fstab:

/dev/vg1/lvopt /opt reiserfs defaults 0 0

If you have problems, you could try /dev/mapper/vg1-lvopt, which is the raw device entry behind /dev/vg1/lvopt.
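The reason both names exist is that LVM2 is built on top of the kernel's device-mapper: the real block device it creates is /dev/mapper/vg1-lvopt, and /dev/vg1/lvopt is just a convenient alias for it.  You can see the mapping with dmsetup; the (254, 0) pair matches the Block device line in the lvdisplay output above:

#dmsetup ls
vg1-lvopt       (254, 0)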

Before trusting /usr or /var to LVM, we should check that the system will work properly.  First we need to edit rc.sysinit.  Find the section for mounting partitions and change it to read:

stat_busy "Mounting Local Filesystems"
/bin/mount -n -o remount,rw /
/bin/rm -f /etc/mtab*
/bin/mount /proc
/sbin/vgscan
/sbin/vgchange -a y
/bin/mount -a -t nonfs,nosmbfs,noncpfs
stat_done

This will start LVM up so that any LVM partitions in /etc/fstab will mount.
Next we need to edit rc.shutdown so that LVM will stop properly.  Find the part that unmounts the filesystems and edit it to read:

stat_busy "Unmounting Filesystems"
/bin/umount -a
/sbin/vgchange --ignorelockingfailure -a n
stat_done

LVM2 keeps its locks in /var, so once /var is unmounted the lock directory is no longer accessible; the --ignorelockingfailure option overcomes this.
Now the system should be ready for using LVM.  To be safe, run the commands <B>vgscan</B> and <B>vgchange -a y</B> before rebooting.
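That is:

#vgscan
#vgchange -a y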

If the system boots without errors, we can start moving our /usr and /var partitions over to LVM.  Using the lvcreate command, two volumes, lvusr and lvvar, were created in vg1 and formatted with reiserfs.  Each was mounted under /mnt/tmp in turn and the data from /var and /usr copied over:

#mkdir /mnt/tmp
#mount /dev/vg1/lvvar /mnt/tmp
#cp -av /var/* /mnt/tmp
#umount /mnt/tmp
#mount /dev/vg1/lvusr /mnt/tmp
#cp -av /usr/* /mnt/tmp

The entries for /usr and /var in /etc/fstab were modified to match and the system rebooted to check that it worked.  Once the system was working properly, the temporary partitions used for /var and /usr were deleted and the rest of sda partitioned for LVM.  A new LVM group, vg0, was created for managing this drive.  Partitions were created for /home and /tmp, and the rest of the system could be installed.
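For reference, the new fstab entries would look something like this, following the same pattern as the /opt line earlier:

/dev/vg1/lvusr /usr reiserfs defaults 0 0
/dev/vg1/lvvar /var reiserfs defaults 0 0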

<u>Note:</u> Since installation, / has stayed at around 50% usage, so /tmp hasn't been moved to LVM yet.  As Arch uses tmpfs, /tmp may not need moving at all.

<B>OTHER IDEAS FOR INSTALLATION</B>

If you don't have two hard drives, several other options may be available to you.

A minimal AL install needs around 550MB for / (if compiling the kernel yourself).  If you don't mind having a / this big, you could follow this method, ignoring the temporary /usr and /var partitions.  After copying over the files and editing fstab, typing "init 1" into the shell should put you in single user mode, where you can safely delete the files in your old /usr and /var (make sure the LVM ones are unmounted).

Alternatively, create temporary /usr and /var partitions in the space you want to use for LVM, and install LVM, but don't create any groups etc. yet.  Tarball /usr and /var and store them somewhere, e.g. on a CD-RW (use the -p option when extracting to preserve permissions).  Set the system into single user mode and create the 8E partition.  Reboot into single user mode, create your LVM setup, and untar /usr and /var onto the volumes once they are ready.
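A sketch of the tarball step, assuming the archives go to a hypothetical /mnt/storage (GNU tar strips the leading / from member names, so extracting under / puts everything back in place):

#tar -cvf /mnt/storage/usr.tar /usr
#tar -cvf /mnt/storage/var.tar /var

Then, once the logical volumes are created, formatted and mounted, extract with -p to restore the permissions:

#tar -xvpf /mnt/storage/usr.tar -C /
#tar -xvpf /mnt/storage/var.tar -C /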

<B>Resizing LVM partitions</B>

The commands lvextend and lvreduce are used to resize LVM volumes.  After resizing, it is necessary to let the filesystem know.  As an example, /dev/vg1/lvopt is initially 1GB and a further 1GB is needed.

#lvextend -L+1G /dev/vg1/lvopt

will add 1GB to the volume.  Or you can give the new size directly:

#lvextend -L2G /dev/vg1/lvopt

Read the man page for more information.

Afterwards you need to resize the filesystem.  For reiserfs there is no need to unmount when growing.

#resize_reiserfs -f /dev/vg1/lvopt 
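Shrinking works the other way round and is more dangerous: the filesystem must be shrunk before the volume, or the end of the filesystem gets cut off.  A sketch, shrinking by 500MB (sizes are examples only, and reiserfs can only be shrunk while unmounted):

#umount /opt
#resize_reiserfs -s -500M /dev/vg1/lvopt
#lvreduce -L-500M /dev/vg1/lvopt
#mount /opt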

Offline

#2 2004-02-20 20:05:55

apeiro
Daddy
From: Victoria, BC, Canada
Registered: 2002-08-12
Posts: 771
Website

Re: LVM2 installation with a 2.6 Kernel HowTO

Nice doc.  Thanks, ubermartian!

Offline

#3 2004-02-20 21:47:12

ubermartian
Member
From: Edinburgh
Registered: 2004-02-06
Posts: 32

Re: LVM2 installation with a 2.6 Kernel HowTO

It's a 3am-in-the-morning draft though; I think it needs work.  I need to add things about lvm.conf, clean up grammar and clarify some parts.  I'd like to try to get the rc scripts handled better and eventually make the changes official.  Maybe I'll even work on adding LVM to the AL installer so that AL can run LVM from scratch.  It's all time, time, time.... 

I ran my system for a week or so before posting.  It seems ok so far.

Offline

#4 2004-02-20 23:23:26

apeiro
Daddy
From: Victoria, BC, Canada
Registered: 2002-08-12
Posts: 771
Website

Re: LVM2 installation with a 2.6 Kernel HowTO

Perhaps explain the role of the device-mapper a little more - why the fstab device is different from the mkfs call.

Offline

#5 2004-02-26 10:00:43

Guest
Guest

Re: LVM2 installation with a 2.6 Kernel HowTO

uber -

1st, thx for a very fine "3 AM" rough draft. I just installed from the new 0.6BETA ISO. I'd previously used 0.5 and then the "0.591 pre-Widget" set using the 2.6 kernel. (Big fan of the 2.6 kernel here).

In one sitting, I was able to install Arch and LVM-ize it. Not bad, IMO.

Notes on the draft:

1.  *At this point*, the "make modules modules_install" doesn't appear to be required. If I figure out differently, I'll post asap.

2.  Followed blindly, these instructions will, at the point where you suggest rebooting to ensure LVM is working, reboot to empty /dev/mapper/* partitions that you'd previously suggested be entered in fstab. An obvious 3AM mistake; those steps need to be swapped so that the reboot to test LVM (I assume, to test the initscripts and the validity of the VGs) is clearly done before making any fstab changes. I caught it and did the reboot as suggested, using my saved (original) fstab. The test reboot did as expected, the initscripts reported all was OK, and I had no problems.

3.  It was originally stated that /tmp was going to be LVM-ized, then later it's stated that since / usage wasn't high, it wasn't done. Since Arch uses tmpfs, I (and likely others) don't know if making /tmp an LVM volume is possible and/or suggested. I also didn't move it over; I'll watch usage (I have 512MB + 390MB of swap).

4. Note that there is no real need to have multiple drives to follow this procedure - just adequate space. I have a single 120GB drive - and I imagine most have something larger than 8.4GB today... I made an empty slot (typed as LVM) right behind /, of almost 20GB, and made 2GB partitions for the initial /usr, /opt, and /var behind the LVM slot. Now I have that space available (and an LVM partition that should last longer than I do...).

FWIW, my / is 195MB; it's reiserfs, so 32MB is eaten by the journal, and (this is base only) it's using 30MB (+ the journal) right now - 32%. I don't run a particularly 'lean' system, once filled out (but prior experience with Arch says 200MB should be adequate). Will post later as to whether that's adequate, or not. For me, anyway.

What about /tmp ?

Thanks again for a fine document. I know Arch OK, but I'm certainly no expert, and I'd only used LVM for a short period when I had SuSE on my wife's machine. Didn't really get to know it (guess I'm gonna now! LMAO and looking forward to it). That I was able to get through both the BETA install, AND the LVM 'upgrade', successfully, on first try and in one sitting, impressed me, anyway.  smile

K

#6 2004-02-26 12:22:04

ubermartian
Member
From: Edinburgh
Registered: 2004-02-06
Posts: 32

Re: LVM2 installation with a 2.6 Kernel HowTO

anonycoward wrote:


Notes on the draft:

1.  *At this point*, the "make modules modules_install" doesn't appear to be required. If I figure out differently, I'll post asap.

I was just being ultra safe.  I had a few problems with my initial installs, this is one thing that I found on a mailing list.  The problems were solved shortly after that so I kept the step.  I've removed it for now, if anyone has any problems and this sorts them out I'll put it back in.

2.  Followed blindly, these instructions will, at the point where you suggest rebooting to ensure LVM is working, reboot to empty /dev/mapper/* partitions that you'd previously suggested be entered in fstab. An obvious 3AM mistake; those steps need to be swapped so that the reboot to test LVM (I assume, to test the initscripts and the validity of the VGs) is clearly done before making any fstab changes. I caught it and did the reboot as suggested, using my saved (original) fstab. The test reboot did as expected, the initscripts reported all was OK, and I had no problems.

Strange - it worked for me, but then the /dev/mapper/ settings were put in when I was having problems and were left in as they worked.  Did you run vgscan and vgchange first?  I've edited that part slightly to try to make it clearer, and changed the fstab entry.

3.  It was originally stated that /tmp was going to be LVM-ized, then later it's stated that since / usage wasn't high, it wasn't done. Since Arch uses tmpfs, I (and likely others) don't know if making /tmp an LVM volume is possible and/or suggested. I also didn't move it over; I'll watch usage (I have 512MB + 390MB of swap).

Maybe someone who knows about tmpfs can enlighten us.  I know very little about it.

4. Note that there is no real need to have multiple drives to follow this procedure - just adequate space. I have a single 120GB drive - and I imagine most have something larger than 8.4GB today... I made an empty slot (typed as LVM) right behind /, of almost 20GB, and made 2GB partitions for the initial /usr, /opt, and /var behind the LVM slot. Now I have that space available (and an LVM partition that should last longer than I do...).

I completely agree.  It is most likely possible to reclaim those by resizing the LVM partition, though that's not something I can try.  sdb is a 9GB SCSI drive and sda is an 18GB with the first 10GB given over to windoze, so for me getting that space back was necessary.

FWIW, my / is 195MB; it's reiserfs, so 32MB is eaten by the journal, and (this is base only) it's using 30MB (+ the journal) right now - 32%. I don't run a particularly 'lean' system, once filled out (but prior experience with Arch says 200MB should be adequate). Will post later as to whether that's adequate, or not. For me, anyway.

My / is 150MB and only 50% full.  I did use the method of putting everything over to a 500MB partition.  With compiling the kernel by hand and cleaning /var as necessary, I found that it was very close; at points I was down to under 10MB.  If I screwed up the kernel compile (I forgot to add device-mapper at one point) I would run out of space on the next compile, despite make clean and make mrproper in /usr/src/linux.

What about /tmp ?

For me it's still on /; I still haven't needed to change it.  / has constantly sat at around 50% as far as I can see.

Thanks again for a fine document. I know Arch OK, but I'm certainly no expert, and I'd only used LVM for a short period when I had SuSE on my wife's machine. Didn't really get to know it (guess I'm gonna now! LMAO and looking forward to it). That I was able to get through both the BETA install, AND the LVM 'upgrade', successfully, on first try and in one sitting, impressed me, anyway.  smile

K

This is only the second (working) install of LVM I have ever had.  I mainly wrote the how-to (my first ever) as everything I could find was about LVM1 and 2.4 kernels.  Things are quite different with LVM2 and 2.6, so I made a few mistakes and just thought I'd try to help other people avoid them.  I'm really happy with my LVM, resizing as and when I need it.  Hopefully I'll get some time soon to write a brief bit on lvm.conf.  There's a default one in the lvm2 tarball.

Thanks for the comments.

Offline

#7 2004-05-07 10:58:43

jf/
Member
Registered: 2003-10-26
Posts: 79

Re: LVM2 installation with a 2.6 Kernel HowTO

Hey guys, er, do any of you encounter a "segmentation fault" message printed on your console when you boot up with these instructions? (Not to say that they are wrong - I don't think it's the doc's fault!)

I get a

/etc/rc.sysinit: Line 74:  1332 Segmentation fault      /sbin/vgscan

Nothing wrong ~seems~ to happen with my drive though - even with this funny "error", my volume seems to load fine, and I can still access the data on there?!! I even get the confirmation message '1 logical volume(s) in volume group "big-space" now active' after this "segmentation fault" error!!

The weird thing is, I tried to capture this error using

/sbin/vgscan >/root/vgscanlog 2>&1

instead of the plain '/sbin/vgscan' - but still, no luck!

/sbin/vgscan 2>&1 | /usr/bin/tee /root/vgscanlog

(well, basically the same thing, but...) just doesn't cut it either. The 'vgscanlog' file is created - but it's empty???

Even worse - '>/root/vgscanlog 2>&1' gives me the error on the console, but '2>&1 | /usr/bin/tee /root/vgscanlog' doesn't. In both cases, though, /root/vgscanlog is empty...

So this is really more of an "it bugs me a bit - but it isn't killing me" kind of thing. Does anybody else see this error at boot? What's your opinion of it? At this point in time, I'm leaning (given my failed attempts at capturing any output from vgscan) towards thinking that perhaps it's the bash process that calls vgscan that somehow gave a 'segmentation fault'.

But really - I am at a loss, and would really like to know what's going on with that message. I'm mounting the logical volume at /home, by the way, if that helps. Thanks.

Offline

#8 2004-08-10 03:53:52

kleptophobiac
Member
From: Sunnyvale, CA
Registered: 2004-04-25
Posts: 489

Re: LVM2 installation with a 2.6 Kernel HowTO

I'm running a home file server under Arch, but it has grown inadequate.

I have a dual-processor P3 600 machine with a 9GB SCSI drive. I'm going to install AL on that, and have an LVM for the actual data storage.

I currently have two drives, a 200GB and a 250GB, with stuff on them. Is there a way to migrate to an LVM scheme without wiping the disks? I'm a little hard pressed for backup space... I can't just move everything.

Also, how hard is it to add another HDD to the LVM - not just resizing existing partitions? My needs for storage grow rather rapidly (having an HTPC with a remote file dump does that).

Is there a good way to build in redundancy in case of HDD failure?

This system will likely have between 1.5 and 2 TB in it when all is said and done.

Offline

#9 2004-12-21 20:08:58

darose
Member
Registered: 2004-04-13
Posts: 158

Re: LVM2 installation with a 2.6 Kernel HowTO

Agreed - very helpful post!

Just wondering, though:
Is it possible to also put the file system root (/) on a logical volume under Arch, or will that not work?

The LVM howto mentions that an initrd might be needed for something like this.  Is that correct?

Thanks,

DR

Offline

#10 2004-12-31 17:26:36

kleptophobiac
Member
From: Sunnyvale, CA
Registered: 2004-04-25
Posts: 489

Re: LVM2 installation with a 2.6 Kernel HowTO

My /dev/mapper/vg0-lv0 seems to be completely empty. I can't successfully reiserfsck it, nor mount it anymore. I used to have it working fine. I really don't want to lose my data. sad

Offline

#11 2004-12-31 17:36:21

kleptophobiac
Member
From: Sunnyvale, CA
Registered: 2004-04-25
Posts: 489

Re: LVM2 installation with a 2.6 Kernel HowTO

[root@fileserver vg0]# lvextend -L+1G /dev/vg0/lv0
  Extending logical volume lv0 to 525.70 GB
  device-mapper ioctl cmd 9 failed: Invalid argument
  Couldn't load device 'vg0-lv0'.
  Problem reactivating lv0

sad

Offline

#12 2005-03-14 00:36:11

Cotton
Member
From: Cornwall, UK
Registered: 2004-09-17
Posts: 568

Re: LVM2 installation with a 2.6 Kernel HowTO

Good work ubermartian.  Any chance this could end up in the wiki?

You have covered the (less common) case where two drives are available.  It would be nice to have a description of how to implement this on a single disc at a fresh Arch installation, ie allocation of (temporary) partition sizes and how to recover that space after the LVM is up and running.

Offline

#13 2005-10-10 05:32:01

cybler
Member
Registered: 2005-10-10
Posts: 3

Re: LVM2 installation with a 2.6 Kernel HowTO

Awesome HowTo, very easy to follow.  I used to do this in HP-UX at work and the commands are exactly the same.

Thanks

Offline

#14 2005-10-29 22:06:55

Lone_Wolf
Administrator
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 13,296

Re: LVM2 installation with a 2.6 Kernel HowTO

This howto helped me a lot, but it needs a few updates.

The code for rc.sysinit and rc.shutdown is now already present as standard with the 2.6.13 ide kernel, which also has lvm2 and raid support in it.

LVM support is activated in rc.conf.

To get rid of the segmentation fault / missing file descriptor errors, put this line in fstab:

none /sys sysfs defaults 0 0

Also, you can use /dev/hdx now; no need for the devfs-style entries anymore.


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.

clean chroot building not flexible enough ?
Try clean chroot manager by graysky

Offline

#15 2006-12-21 06:42:42

Moo-Crumpus
Member
From: Hessen / Germany
Registered: 2003-12-01
Posts: 1,488

Re: LVM2 installation with a 2.6 Kernel HowTO

Should be a wiki article, imho.


Frumpus addict
[mu'.krum.pus], [frum.pus]

Offline

#16 2007-01-21 04:04:47

Martillo1
Member
From: My kabila in Lavapiés
Registered: 2004-02-20
Posts: 66

Re: LVM2 installation with a 2.6 Kernel HowTO

Offline
