I am still new to LVM, but I have seen that it is robust if used wisely. What I really need to understand is the difference between the LVM linear and striped methods on real hardware.
From my searches, the linear method fills physical extents (PEs) one after another, so writes concentrate on one area of the disk (a "hotspot") and can wear it out sooner, while the striped method distributes data across the physical extents of several disks, which spreads the wear and increases performance.
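If I understood the documentation correctly, creating the two kinds of LV looks roughly like this (the volume group and LV names here are just placeholders):

# linear (the default): extents are taken from one PV until it fills up, then the next
lvcreate -L 100G -n linear-lv some-vg
# striped across 2 PVs with a 64 KiB stripe size
lvcreate -L 100G -i 2 -I 64 -n striped-lv some-vg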
So, I want to setup this lvm schema/model on my pc:
First, I have two drives; the first one is an HDD and the second is an SSD:
sda HDD 1TB
sdb SSD 128GB
I will set up Windows for gaming alongside Linux (Arch) with UEFI, so my schema will be like this:
sda HDD 1TB
├─sda1 /windows-data ntfs
├─sda2 /Home ext4 /dev/lvm-group/home
└─sda3 /linux-data ext4 /dev/lvm-group/stuff
sdb SSD 128GB
├─sdb1 /boot/EFI fat32
├─sdb2 /Windows ntfs
└─sdb3 /Root ext4 /dev/lvm-group/root
Does the schema I mentioned look logical to you? Any suggestions?
First question: is the LVM striped method worth setting up, or should I just stay with the default linear method?
Second question: how will the data be stored? I made one volume group containing part of sda and part of sdb, and this part is confusing to me (a rough sketch of the commands I have in mind is below). I know LVM doesn't care about the type of disk, but it is still confusing.
Third: as Arch users, have you faced any problems with LVM? Is there anything I have to keep in mind when I use LVM with Arch?
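To be concrete, this is roughly what I plan to run for the single volume group (sizes are only examples, I have not tested any of this yet):

pvcreate /dev/sda2 /dev/sda3 /dev/sdb3
vgcreate lvm-group /dev/sda2 /dev/sda3 /dev/sdb3
lvcreate -L 40G -n root lvm-group
lvcreate -L 180G -n home lvm-group
lvcreate -l 100%FREE -n stuff lvm-group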
I use separate Volume Groups for SSD vs. HDD storage. SSD-root, SSD-home, HDD-storage. Makes it obvious what's stored where without mixing things.
What about lvm snapshots?
In my case, I need to group these (the /dev/lvm-group volumes in the layout below) into one Volume Group, to be able to create snapshots (a rough example of what I mean follows at the end of this post).
sda HDD 1TB
├─sda1 /windows-data ntfs
├─sda2 /Home ext4 /dev/lvm-group/home
└─sda3 /linux-data ext4 /dev/lvm-group/stuff
sdb SSD 128GB
├─sdb1 /boot/EFI fat32
├─sdb2 /Windows ntfs
└─sdb3 /Root ext4 /dev/lvm-group/root
I will take 40% for root and 60% for Windows from my SSD.
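Something like this is what I have in mind for the snapshot itself (the name and size are just an example):

# snapshot of the root LV with 10G of copy-on-write space
lvcreate -s -L 10G -n root-snap lvm-group/root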
In my case, I need to group these (the /dev/lvm-group volumes) into one Volume Group, to be able to create snapshots.
Snapshots are made of logical volumes, so it should be irrelevant whether those lvs are in the same vg.
MohamedHany wrote: In my case, I need to group these (the /dev/lvm-group volumes) into one Volume Group, to be able to create snapshots.
Snapshots are made of logical volumes, so it should be irrelevant whether those lvs are in the same vg.
Please see: https://stackoverflow.com/questions/289 … ume-groups
Ah, I see. Nevertheless, I would create two pvs, one for SSD storage plus snapshots on HDD and one for HDD storage only.
For the first pv, you'll likely want to use what I know as linear RAID. I do not know how this manifests in the LVM world.
But what is the reason for snapshots on the HDD in the first place? They should usually not take up that much space, unless you keep them for long.
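Roughly something like this, untested, where /dev/sda4 is a hypothetical small HDD partition reserved for the snapshot space:

pvcreate /dev/sdb3 /dev/sda2 /dev/sda3 /dev/sda4
# vg spanning the SSD plus the small HDD partition for snapshots
vgcreate ssd-vg /dev/sdb3 /dev/sda4
# vg for bulk HDD storage only
vgcreate hdd-vg /dev/sda2 /dev/sda3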
Ah, I see. Nevertheless, I would create two pvs, one for SSD storage plus snapshots on HDD and one for HDD storage only.
Good idea, maybe I will try it.
For the first pv, you'll likely want to use what I know as linear RAID. I do not know how this manifests in the LVM world.
But why would I need RAID in my case? I want striped or linear LVM.
But what is the reason for snapshots on the HDD in the first place? They should usually not take up that much space, unless you keep them for long.
I don't know, maybe I need 10 GB or at most 20 GB.
If you want to use snapshots you'll have to leave some space for them.
Once you cross SSD+HDD boundaries you'll drag the SSD down to HDD performance...
That said, technically it should be possible to allocate snapshots in such a way that they use HDD space for SSD volumes. I never used LVM this way though, so I don't have the necessary commands at my fingertips...
In general you can specify where to allocate each LV on lvcreate. Might be as simple as specifying the HDD-PV as last argument to lvcreate, though I'm not sure.
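From memory it would be along these lines (untested, and the vg/LV names are placeholders):

# put a regular LV explicitly on the SSD PV
lvcreate -L 35G -n root some-vg /dev/sdb5
# put the snapshot's copy-on-write space on the HDD PV instead
lvcreate -s -L 10G -n root-snap some-vg/root /dev/sda2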
respiranto wrote:For the first pv, you'll likely want to use what I know as linear RAID. I do not know how this manifests in the LVM world.
But why would I need RAID in my case? I want striped or linear LVM.
You don't. I was just referring to the analogous concept in the RAID world, given I do not know the correct terms for LVM.
respiranto wrote:But what is the reason for snapshots on the HDD in the first place? They should usually not take up that much space, unless you keep them for long.
I don't know, maybe I need 10 GB or at most 20 GB.
Then I'd just leave that much free space on the SSD vg. Assuming that ntfs partition is not too large.
Addendum: If you place snapshots only on the HDD for one vg, new writes will incur writes to the HDD and therefore be slow. Reads should be fast, as well as second writes (i.e., writes to areas that are no longer in sync with the snapshot).
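As a rough, untested example (sizes made up): simply don't allocate the whole vg, and check what is left with vgs:

lvcreate -L 30G -n root ssd-vg   # rather than -l 100%FREE
vgs ssd-vg                       # the VFree column shows the space left for snapshots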
Once you cross SSD+HDD boundaries you'll drag the SSD down to HDD performance...
Oh, wow..
You have kicked my ass, this is news to me!
But are you sure about this?
Please, I need a reference.
That said, technically it should be possible to allocate snapshots in such a way that they use HDD space for SSD volumes. I never used LVM this way though, so I don't have the necessary commands at my fingertips...
In general you can specify where to allocate each LV on lvcreate. Might be as simple as specifying the HDD-PV as last argument to lvcreate, though I'm not sure.
Could you explain those points some more? They aren't clear to me.
You don't. I was just referring to the analogous concept in the RAID world, given I do not know the correct terms for LVM.
That's all right. You mean linear lvm.
Linear LVM = linear RAID: extents are allocated sequentially from one member drive, going to the next drive only when the first is completely filled.
Striped LVM instead interleaves the data in chunks across the physical extents (PEs) of several drives.
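If I understood correctly, you can check which method an existing LV actually uses with something like:

lvs -o +segtype,stripes,devices   # segtype shows linear vs striped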
frostschutz wrote:Once you cross SSD+HDD boundaries you'll drag the SSD down to HDD performance...
Oh, wow..
You have kicked my ass, this is news to me!
But are you sure about this?
Please, I need a reference.
No reference, but I can describe a situation that may help to clarify.
My previous system had 2 (small) fast 10k rpm Raptor HDDs and 2 (large) slow 7.2k rpm HDDs.
My /home used part of the first Raptor, all of the 2nd Raptor and both of the slow HDDs.
After my /home grew too big for the fast drives, accessing it became noticeably slower.
In my experience accessing files on a LV that uses PVs on devices with different speeds tends to degrade towards the speed of the slowest device.
frostschutz wrote:Once you cross SSD+HDD boundaries you'll drag the SSD down to HDD performance...
No reference, but I can describe a situation that may help to clarify.
My previous system had 2 (small) fast 10k rpm Raptor HDDs and 2 (large) slow 7.2k rpm HDDs.
My /home used part of the first Raptor, all of the 2nd Raptor and both of the slow HDDs.
After my /home grew too big for the fast drives, accessing it became noticeably slower. In my experience accessing files on a LV that uses PVs on devices with different speeds tends to degrade towards the speed of the slowest device.
Thanks, guys.
I have posted the same topic on Ask Fedora; the people there are really familiar with this kind of stuff (LVM).
And you are right: grouping SSD + HDD will lock the performance to the HDD in the end (the SSD speed would be wasted).
This is now my layout:
[mohamed@asusrog ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 244.1G 0 part /mnt/win-data
└─sda2 8:2 0 687.4G 0 part
├─data--group-home 254:0 0 180G 0 lvm /home
└─data--group-stuff 254:1 0 507.4G 0 lvm /mnt/stuff
sdb 8:16 0 119.2G 0 disk
├─sdb1 8:17 0 100M 0 part /boot/EFI
├─sdb2 8:18 0 16M 0 part
├─sdb3 8:19 0 77.5G 0 part /mnt/win-sys
├─sdb4 8:20 0 498M 0 part
└─sdb5 8:21 0 41.1G 0 part
└─sys--group-root 254:2 0 35G 0 lvm /
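To double-check where everything ended up, I can also run something like:

pvs -o +pv_used   # how much of each PV is allocated
vgs               # free space left in each vg (for snapshots)
lvs -o +devices   # which PV each LV actually sits on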