Hi,
I have a new installation of Arch Linux, and it's the first time I've used RAID1 with LVM on top of an mdadm RAID1.
My system refuses to boot properly: it hangs during boot and asks me to log in as root and fix the problem.
The problem is that my /home partition (an LV in a VG created on the RAID1 software RAID) is inactive.
A simple 'lvchange -ay /dev/mapper/bla-bla' fixes this, and after Ctrl-D the boot process continues and the PC starts. In fact there are two LVs on the RAID1 and both fail the same way.
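For the record, what I type at the emergency shell is roughly this, using the raid1vg/homelv names from the journal message further down (the second LV gets activated the same way):

# list the LVs and their state; the inactive ones show '-' instead of 'a' in the attribute string
lvs -o lv_name,vg_name,lv_attr

# activate the inactive LV (repeat for the second one), or activate the whole VG at once
lvchange -ay raid1vg/homelv
vgchange -ay raid1vg

# then Ctrl-D lets systemd continue the boot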
In addition, I get this error on every boot:
systemd-remount-fs.service - Remount Root and Kernel File Systems
Loaded: loaded (/usr/lib/systemd/system/systemd-remount-fs.service; static)
Active: failed (Result: exit-code) since Ne 2014-05-04 22:31:51 CEST; 14min ago
Docs: man:systemd-remount-fs.service(8)
http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
Process: 531 ExecStart=/usr/lib/systemd/systemd-remount-fs (code=exited, status=1/FAILURE)
Main PID: 531 (code=exited, status=1/FAILURE)
May 04 22:31:51 _______ systemd[1]: Starting Remount Root and Kernel File Systems...
May 04 22:31:51 _______ systemd-remount-fs[531]: mount: / not mounted or bad option
May 04 22:31:51 _______ systemd-remount-fs[531]: In some cases useful info is found in syslog - try
May 04 22:31:51 _______ systemd-remount-fs[531]: dmesg | tail or so.
May 04 22:31:51 _______ systemd[1]: systemd-remount-fs.service: main process exited, code=exited, status=1/FAILURE
May 04 22:31:51 _______ systemd[1]: Failed to start Remount Root and Kernel File Systems.
May 04 22:31:51 _______ systemd[1]: Unit systemd-remount-fs.service entered failed state.
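A quick way to compare the options the kernel actually mounted / with against the root line that systemd-remount-fs tries to re-apply (just a sketch; 'bad option' usually points at a mismatch between the two, though I can't say for sure that's what is happening here):

# how / is currently mounted: source device and options
findmnt /

# the root entry in fstab that gets re-applied at boot
awk '$2 == "/"' /etc/fstab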
Also, when booting, the boot process stops and waits for a minute and a half with "a start job is running for dev-mapper-raid1vg/homelv".
I am new to this and can't find a clue on Google as to why this is happening.
Any advice?
Tibor
UPDATE:
After some googling I downgraded lvm2 to lvm2-2.02.105-2 and the PC rebooted properly (I have only tried once so far). The problem with systemd-remount-fs.service as pasted above is still there, though.
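For anyone wanting to try the same downgrade, it is just a matter of installing the older package from the pacman cache (the exact filename depends on your architecture and on what is still in your cache, so adjust as needed):

# reinstall the cached older version of lvm2
pacman -U /var/cache/pacman/pkg/lvm2-2.02.105-2-x86_64.pkg.tar.xz

# optionally keep pacman from pulling the newer version back in on the next -Syu
# (goes in the [options] section of /etc/pacman.conf)
IgnorePkg = lvm2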
What about this? I would like to be sure that this will get fixed so that I won't have to spend so much time on this problem in the future...
UPDATE2:
So the boot failed again...
Well, I gave up and got rid of the LVM on top of the RAID1...
Yesterday, I enabled lvm logging - the log file showed nothing suspicious (to me).
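For reference, enabling the logging amounts to setting the log section in /etc/lvm/lvm.conf to something like this (the values are just an example):

# /etc/lvm/lvm.conf
log {
    verbose = 1                 # more detail on the console
    file = "/var/log/lvm2.log"  # write a debug log to this file
    level = 7                   # file log level, 7 is the most verbose
}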
Also, when I disabled the mounts of the particular filesystems in fstab, it booted and all LVs were active!
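In case anyone else hits this: rather than commenting the lines out completely, something like the fstab entry below might be enough to keep a slow or missing LV from blocking the boot. This is untested on my side; the device name is derived from the journal message above and the filesystem type is just a guess.

# nofail: don't fail the boot if the device never appears
# x-systemd.device-timeout=10s: wait 10 seconds instead of the default 90
/dev/mapper/raid1vg-homelv  /home  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2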
My conclusion is that this is some conflict between RAID and LVM, maybe bad timing or bad prioritization during boot - who knows.
This seems to be a systemd issue, or maybe a new issue with the lvm version. I'm running the same setup on Debian with no problem.
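If it helps narrow down whether it's the lvm2 or the systemd version, the versions on both sides can be compared with:

# Arch
pacman -Q lvm2 systemd

# Debian
dpkg -l lvm2 systemd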
FWIW, I experience the same issue, but unreliably. Maybe one out of every 5 boots. I haven't worked out why.