
#1 2015-07-28 20:09:04

noorac
Member
Registered: 2013-07-24
Posts: 10

[Solved] Root gets mounted read-only on fresh install

I'm having an issue that a lot of other people seem to have had; however, all of my Google searches turned up solutions that didn't work for me.

The situation:

I did a fresh install today and everything went smoothly; however, when I reboot the system, root is mounted read-only. I am able to remount it using:

mount -n -o remount,rw / 

but this is obviously going to be tedious to do every time I turn the computer on, and I'm looking for a more permanent solution.
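For reference, one way to confirm that root really is mounted read-only before remounting (a small sketch; findmnt ships with util-linux):

```shell
# Print the mount options for / and flag the read-only case.
findmnt -no OPTIONS / | grep -qw ro && echo "root is mounted read-only"

# Remount read-write for this session (same command as above):
mount -n -o remount,rw /
```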

My fstab;

# /dev/md126p2
UUID=f17ac8c8-0aa4-485a-8f54-ed37fde07ba9	/         	ext4      	rw,relatime,stripe=32,data=ordered	0 1

# /dev/md126p1
UUID=C864-9D8E      	/boot     	vfat      	rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro	0 2

# /dev/sda2
UUID=0abc48ac-34c7-4bab-b2fe-f0dc9f9aaaa2	/var      	ext4      	rw,relatime,data=ordered	0 2

# /dev/sda3
UUID=172bdfd5-1f0d-4565-8316-d9c89c230031	/home     	ext4      	rw,relatime,data=ordered	0 2

# /dev/sdb1
UUID=985ec848-b082-4634-8aa9-c8f1ee6a2962	/home/noorac/storage	ext4      	rw,relatime,data=ordered	0 2

# /dev/sda1
UUID=f6e557c3-58f5-491b-9502-5146d7b25426	none      	swap      	defaults  	0 0

and the UUIDs match the ones from lsblk -f:

NAME        FSTYPE          LABEL       UUID                                 MOUNTPOINT
sda                                                                          
├─sda1      swap                        f6e557c3-58f5-491b-9502-5146d7b25426 [SWAP]
├─sda2      ext4                        0abc48ac-34c7-4bab-b2fe-f0dc9f9aaaa2 /var
└─sda3      ext4                        172bdfd5-1f0d-4565-8316-d9c89c230031 /home
sdb                                                                          
└─sdb1      ext4                        985ec848-b082-4634-8aa9-c8f1ee6a2962 /home/noorac/storage
sdc         isw_raid_member                                                  
└─md126                                                                      
  ├─md126p1 vfat                        C864-9D8E                            /boot
  └─md126p2 ext4                        f17ac8c8-0aa4-485a-8f54-ed37fde07ba9 /
sdd         isw_raid_member                                                  
└─md126                                                                      
  ├─md126p1 vfat                        C864-9D8E                            /boot
  └─md126p2 ext4                        f17ac8c8-0aa4-485a-8f54-ed37fde07ba9 /
sde         isw_raid_member                                                  
└─md126                                                                      
  ├─md126p1 vfat                        C864-9D8E                            /boot
  └─md126p2 ext4                        f17ac8c8-0aa4-485a-8f54-ed37fde07ba9 /
sdf         isw_raid_member                                                  
└─md126                                                                      
  ├─md126p1 vfat                        C864-9D8E                            /boot
  └─md126p2 ext4                        f17ac8c8-0aa4-485a-8f54-ed37fde07ba9 /
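Since a stale UUID in fstab is a common cause of a failed remount, here is a quick cross-check (a sketch; assumes util-linux blkid) that every UUID referenced in fstab is actually known to the system:

```shell
# Pull each UUID= source field out of fstab and ask blkid to resolve it;
# blkid -U exits non-zero for a UUID that no block device carries.
awk '$1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $1 }' /etc/fstab |
while read -r uuid; do
    blkid -U "$uuid" >/dev/null || echo "not found on any device: $uuid"
done
```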

I suspected that my RAID 10 setup had something to do with it, but /boot seems to mount fine. These are the two lines generated in mdadm.conf:

ARRAY /dev/md/imsm0 metadata=imsm UUID=6cce41dc:4e79fe7d:ccc5113c:2e6fab7c
ARRAY /dev/md/Volume1_0 container=/dev/md/imsm0 member=0 UUID=e68c9438:d9b70850:10254aea:93306bac

I think the problem occurs after the hooks run, right before login. One of the [ OK ] lines is instead marked [ FAILED ], with the following message:

[FAILED] Failed to start Remount Root and Normal File Systems...
See 'systemctl status systemd-remount-fs.service' for details
[    3.016677] systemd[1]: Failed to start Remount Root and Normal File Systems.
                     Starting udev Coldplug all Devices...
                     Starting Static Device Nodes in /dev...

(I had to videotape the screen in order to catch the message and transcribe it to the best of my abilities; there might be a spelling mistake or something in there.)

I tried running systemctl status systemd-remount-fs.service, which gave this output:

● systemd-remount-fs.service - Remount Root and Kernel File Systems
   Loaded: loaded (/usr/lib/systemd/system/systemd-remount-fs.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2015-07-28 20:41:40 CEST; 1h 23min ago
     Docs: man:systemd-remount-fs.service(8)
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
  Process: 304 ExecStart=/usr/lib/systemd/systemd-remount-fs (code=exited, status=1/FAILURE)
 Main PID: 304 (code=exited, status=1/FAILURE)

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
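That rotated-journal warning means the log from the failing boot is gone; on a default install the journal lives only in memory. A possible workaround (assuming journald's default Storage=auto): create /var/log/journal so the journal persists across reboots, then read the failed unit's messages from the previous boot:

```shell
# Make the journal persistent (Storage=auto writes to disk once this exists):
mkdir -p /var/log/journal
systemctl restart systemd-journald

# After the next failing boot, inspect the unit's log from the previous boot:
journalctl -b -1 -u systemd-remount-fs.service
```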

I tried fsck and it complains (I'll reboot and run it on the unmounted root partition again after posting to get more information), but e2fsck doesn't fix the problem: it says it cleans things up and fixes them, but when I reboot, nothing has changed.
EDIT: I ran fsck again, and it complains that

Fs was not properly unmounted and some data may be corrupt

EDIT2: I tried rebooting a couple of times, but because of the speed of the shutdown, the only thing I could decipher was "Failed to finalize file systems, ignoring".

Last edited by noorac (2015-08-02 18:50:14)


#2 2015-07-28 20:40:55

Malkymder
Member
Registered: 2015-05-13
Posts: 258

Re: [Solved] Root gets mounted read-only on fresh install

The accepted answer in this post is not a solution, but a possible explanation of your problem: http://unix.stackexchange.com/questions … filesystem

Did/can you try smartctl?


#3 2015-07-28 21:10:58

noorac
Member
Registered: 2013-07-24
Posts: 10

Re: [Solved] Root gets mounted read-only on fresh install

I tried smartctl (short test) on all 4 disks in the RAID, and all of them passed. Maybe I'm just optimistic, but something being wrong with the disks seems unlikely.
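For anyone repeating this check, a sketch of the short-test sequence (device names taken from the lsblk output above; adjust to your own setup):

```shell
# Kick off a short SMART self-test on each RAID member, wait for it to
# finish, then read the overall health verdict (smartctl is part of
# smartmontools).
for dev in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    smartctl -t short "$dev"
done
sleep 120   # a short test typically finishes within a couple of minutes
for dev in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    smartctl -H "$dev"
done
```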


#4 2015-08-02 17:04:12

noorac
Member
Registered: 2013-07-24
Posts: 10

Re: [Solved] Root gets mounted read-only on fresh install

Bump. (I installed Arch on the HDDs and skipped the RAID on the SSDs entirely, but I am really annoyed to have 4 SSDs sitting useless in my computer, so I decided to reinstall and try anew. Alas, same problem, and I was hoping someone else might have a go at it.)

I tried fsck again, and both partitions on the raid are clean.

Last edited by noorac (2015-08-02 17:07:12)


#5 2015-08-02 17:12:33

falconindy
Developer
From: New York, USA
Registered: 2009-10-22
Posts: 4,111
Website

Re: [Solved] Root gets mounted read-only on fresh install

Try removing the stripe option from the root entry in fstab


#6 2015-08-02 17:19:20

noorac
Member
Registered: 2013-07-24
Posts: 10

Re: [Solved] Root gets mounted read-only on fresh install

falconindy wrote:

Try removing the stripe option from the root entry in fstab

No change.

I feel like I am overlooking something small.


#7 2015-08-02 17:29:08

noorac
Member
Registered: 2013-07-24
Posts: 10

Re: [Solved] Root gets mounted read-only on fresh install

I (think I) found some new information (however, I'm not sure how relevant it is).

On shutdown/reboot, as the last output of the process, I get this:

mdadm: Cannot get exclusive access to /dev/md126:Perhaps a running process, mounted filesystem or active volume group?

This message gets spammed a few times. I didn't get it on the last install.

I also get this, right before the spam starts:

Unmounting all devices: target is busy
         (in some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)
Detaching loop devices.
Disassembling stacked devices.

It is after this that the above message starts spamming.

Last edited by noorac (2015-08-02 18:07:47)


#8 2015-08-02 18:41:17

noorac
Member
Registered: 2013-07-24
Posts: 10

Re: [Solved] Root gets mounted read-only on fresh install

Following this thread https://bbs.archlinux.org/viewtopic.php?id=137058 I tried adding /sbin/mdmon to BINARIES in mkinitcpio.conf, and my problem seems to have disappeared. I don't get the [ OK ]s when I boot now, so I hope I didn't trade the problem for something else, but for now, it seems to be working.
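For anyone landing here later, a minimal sketch of that fix, assuming the 2015-era string-style BINARIES line in /etc/mkinitcpio.conf and the stock "linux" preset (check your own config before running):

```shell
# Insert /sbin/mdmon into the BINARIES line of mkinitcpio.conf so mdmon
# is available in early userspace to manage the IMSM RAID container.
sed -i 's|^BINARIES="|BINARIES="/sbin/mdmon |' /etc/mkinitcpio.conf

# Rebuild the initramfs so the change takes effect on the next boot.
mkinitcpio -p linux
```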

