I'm having some trouble with my RAID array. I've tried to research this, but all I can find is how to install Arch Linux ONTO an array, not how to detect Windows on one and configure GRUB for it. I have two SSDs in a RAID 0 configuration, which holds my Windows 7 installation. I also have two HDDs in JBOD: one is my mass storage drive for Windows and Linux, and the other holds my Arch Linux install.
The problem is that--with os-prober installed--GRUB isn't detecting my Windows install. I was trying Fedora when I converted my Windows install to RAID 0, and Fedora picked it up when I ran grub-mkconfig with os-prober installed. GRUB at that point pointed at /dev/mapper/<some random-seeming string> for Windows, and it worked.
Now that I'm back "home" on Arch Linux, I'm trying to get GRUB to see Windows, but it never detects it. /dev/mapper/ is empty except for 'control', and /dev contains sda and sdb, the drives that are supposed to be in the array. At this point, I'm completely lost. I'm not even sure whether this is software or hardware RAID. I thought it was hardware, because my motherboard has a RAID option ROM, which is what I used to build the array, but the /dev/mapper business makes me think it's software.
Can anyone at least get me pointed in the right direction?
Thank you,
KD0BPV
Last edited by kd0bpv (2012-12-08 10:25:12)
Offline
What kind of raid controller are you using?
Offline
Could you post your grub.cfg file please? Since it is such a large file, use the following command to upload it to Sprunge:
sudo cat /boot/grub/grub.cfg | curl -F 'sprunge=<-' http://sprunge.us
Then just post the url that your terminal displays once the command completes. What motherboard are you using? If you do not know, just post the model number of the PC you are using and hopefully from that the RAID controller may be determined.
Offline
Thanks for the replies, guys. Your help is much appreciated!
This is a custom-built machine. The motherboard is an ASUS M4A88TD-M/USB3. According to my documentation, the AMD SB850 chipset handles RAID, so if I'm not mistaken, that's the controller.
My grub.cfg is at http://sprunge.us/KDKG
As an update, I finally found a post that gave me the idea to run dmraid -ay. Now /dev/mapper is populated with pdc_bjfjjihcag{,p1,p2}, and /dev has dm-{0,1,2} devices in it. I've tried to mount each of the dm-* devices separately. dm-1 is Windows' boot partition, dm-2 is my actual Windows install. dm-0 gave "mount: /dev/mapper/pdc_bjfjjihcag is already mounted or /mnt busy". My understanding is that dm-0 is to /dev/dm-{1,2} as /dev/sda is to /dev/sda{1,2}.
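In case it saves anyone else the search, the sequence was:
# dmraid -ay
# ls /dev/mapper
control  pdc_bjfjjihcag  pdc_bjfjjihcagp1  pdc_bjfjjihcagp2
# mount /dev/dm-2 /mnt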
After unmounting the dm-* devices, I tried running os-prober on its own just to see if it would detect windows now.
First attempt:
No volume groups found
grub-probe: error: unknown filesystem.
grub-probe: error: unknown filesystem.
Second attempt:
No volume groups found
rmdir: failed to remove ‘/var/lib/os-prober/mount’: Device or resource busy
/dev/sdd1:Windows 7 (loader):Windows:chain
grub-probe: error: unknown filesystem.
grub-probe: error: unknown filesystem.
You'll notice that in the second attempt, it did find Windows 7, but that is the old boot record from my mass storage drive. I guess I never got around to deleting it after upgrading to RAID 0. I'm also concerned by the rmdir failure. When I list my mounts, I find "grub-mount on /var/lib/os-prober/mount type fuse.grub-mount (rw,nosuid,nodev,relatime,user_id=0,group_id=0)". If I manually unmount that and re-run os-prober, it goes back to the first attempt's output.
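To be explicit, that cleanup-and-retry is just:
# umount /var/lib/os-prober/mount
# os-prober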
EDIT:
I just thought I should mention that I do have the NTFS packages installed, in case that wasn't clear. The fact that I could read my Windows files when I mounted dm-2 already implied it.
Last edited by kd0bpv (2012-12-04 23:28:33)
Offline
Please run:
blkid
and paste the output here. I'm hoping you didn't somehow overwrite the MBR on one of the Windows drives (e.g. when installing GRUB).
EDIT: As far as I know, you don't need the ntfs-3g package unless you plan to read/write the NTFS partitions on your other drives. os-prober should only need to look at the part of the MBR that indicates whether a drive is bootable and, if so, what to boot.
Last edited by rogue (2012-12-05 00:12:26)
Offline
Well, I'm not at home right now, so I can't run that command, but I will when I get back. However, I highly doubt I made that mistake, because both systems are bootable. I just have to use the BIOS to change which drive is presented as the "first disk".
Offline
Read this on dual booting in GRUB2. At this point it may be simpler to configure GRUB manually, as outlined there, than to keep fighting os-prober. There is also the option of chainloading the MS bootloader.
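For example, something along these lines in /etc/grub.d/40_custom would chainload a Windows boot sector (the root device here is only a placeholder; point it at wherever your Windows boot partition actually lives, then re-run grub-mkconfig):
menuentry "Windows 7" {
	insmod part_msdos
	insmod ntfs
	set root=(hd0,msdos1)
	chainloader +1
}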
Last edited by rogue (2012-12-05 01:55:28)
Offline
Thanks for the help rogue. I'm having another problem now though... The wiki page said to use the mdraid module... When GRUB executes that insmod, it gives me "Cannot find /grub/i386-pc/mdraid.mod". I looked in that folder, and sure enough, it's not there.
# ls -l /boot/grub/i386-pc | grep -i raid
-rw-r--r-- 1 root root 1952 Nov 30 16:15 mdraid09_be.mod
-rw-r--r-- 1 root root 1924 Nov 30 16:15 mdraid09.mod
-rw-r--r-- 1 root root 1968 Nov 30 16:15 mdraid1x.mod
-rw-r--r-- 1 root root 1404 Nov 30 16:15 raid5rec.mod
-rw-r--r-- 1 root root 2168 Nov 30 16:15 raid6rec.mod
Should I be using one of those instead? If not, how can I get mdraid.mod?
Also, once this is sorted out, will I need to use mdadm to "build" my array for Linux? According to the wiki, I would access my Windows RAID drive via set root=(md1) or similar, but I don't have /dev/md* at all, even after running dmraid -ay. All I get is the dm-* devices I mentioned earlier in the thread.
Offline
You know what, I had an idea a couple minutes after making the previous post... My Windows array has a 100MB partition on just one of the drives. Obviously, this would be Windows' boot partition, so I thought, "What if I forgo the RAID stuff and just set root to hd0 and chainload it...?"
I tried it. The first attempt just chainloaded back into GRUB... then I realized that because my BIOS is set to map my Linux drive as the "first drive", Windows must be on hd1, and I edited the commands accordingly. I'm now in Windows via GRUB! I'll still need to find a way to permanently activate my RAID array so I can use fstab to mount dm-2 to /windows... but that's another fight for another day.
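For the record, the entry that worked looks roughly like this (in /etc/grub.d/40_custom; hd1 because my BIOS maps the Linux drive as the first disk, and no partition number since it chainloads the MBR of the whole disk):
menuentry "Windows 7 (RAID)" {
	set root=(hd1)
	chainloader +1
}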
Thanks again for the help rogue. Also, if you can think of any other information I can provide that would help others solve this or a similar problem, let me know what it is and I'll provide it asap.
-- KD0BPV
Last edited by kd0bpv (2012-12-06 07:45:29)
Offline
Have a look at this. There is a bit more information here as well; just scroll down to the table of hooks for information on mdadm. Also, I found this thread on the Ubuntu forums which seems to pertain to your situation. Hope that helps; and you're welcome. I'm glad to hear you're making some headway.
Offline
Well rogue, you once again got me pointed in the right direction. All I had to do to have my RAID array ready to go immediately upon boot was add dmraid to the HOOKS array in mkinitcpio.conf (where the wiki says to add mdadm) and leave MODULES empty, as it was after Arch was installed. Now /dev/mapper is populated with my array, and the dm-{0,1,2} devices exist in /dev at boot.
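For anyone who finds this later, the relevant change (assuming the stock 'linux' preset) was this line in /etc/mkinitcpio.conf:
HOOKS="base udev autodetect pata scsi sata dmraid filesystems usbinput fsck"
followed by rebuilding the initramfs:
# mkinitcpio -p linux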
I now have /dev/dm-2 in my fstab via UUID, mounted to /win64. My system is now completely built! Just one glitch with my sound system left to research... and all things considered, it's not a big deal.
Last edited by kd0bpv (2012-12-06 14:37:23)
Offline
Me and my big mouth... I should have rebooted to test before I said it was fixed. It is ALMOST fixed; only ONE issue remains. Upon rebooting, my system got to the point where it was trying to mount /dev/dm-2 by UUID to /win64. It froze there for a little while--about a minute or two--then finally dropped into emergency mode.
# journalctl -xb:
Dec 06 02:42:55 ares systemd[1]: Job dev-disk-by\x2duuid-7EC09A4FC09A0E11.device/start timed out.
Dec 06 02:42:55 ares systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-7EC09A4FC09A0E11.device.
-- Subject: Unit dev-disk-by\x2duuid-7EC09A4FC09A0E11.device has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
--
-- Unit dev-disk-by\x2duuid-7EC09A4FC09A0E11.device has failed.
--
-- The result is timeout.
Dec 06 02:42:55 ares systemd[1]: Dependency failed for /win64.
-- Subject: Unit win64.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
--
-- Unit win64.mount has failed.
--
-- The result is dependency.
Dec 06 02:42:55 ares systemd[1]: Dependency failed for Local File Systems.
-- Subject: Unit local-fs.target has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/be02cf6855d2428ba40df7e9d022f03d
--
-- Unit local-fs.target has failed.
--
-- The result is dependency.
Dec 06 02:42:55 ares systemd[1]: Job local-fs.target/start failed with result 'dependency'.
Dec 06 02:42:55 ares systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
Dec 06 02:42:55 ares systemd[1]: Job win64.mount/start failed with result 'dependency'.
Dec 06 02:42:55 ares systemd[1]: Job dev-disk-by\x2duuid-7EC09A4FC09A0E11.device/start failed with result 'timeout'.
At this point, I listed /dev, and my RAID array was sitting there ready to go. I manually mounted the array to /win64, exited, and the boot proceeded like normal. I also tried dmesg, but that was even less specific. I have no idea what is going on. It has done this twice now, so it's not just some weird glitch... Either I've misconfigured something, or there's a bug somewhere.
As for the other minor glitch, that's figured out. My stupid soundcard has an "auto-mute mode" that was enabled. I disabled it through alsamixer and now it works as expected.
Last edited by kd0bpv (2012-12-06 15:18:20)
Offline
Post your fstab please.
Offline
/etc/fstab:
# /dev/sdc3
UUID=a7bc6a38-76b5-4f64-b362-101fd5bca0ac / ext4 defaults,relatime 0 1
# Swap file
/swapfile none swap defaults 0 0
# /dev/sdc2
UUID=48269892-429a-436a-8292-29ed3b3a7089 /boot ext2 rw,relatime 0 0
# /dev/sdc4
UUID=92be1069-7e5f-46c3-9807-773522de51fe /home ext4 rw,relatime 0 2
# /dev/sdd1 -- Mass Storage
UUID=74C282B438772763 /mass ntfs defaults 0 0
# /dev/dm-2 -- Windows7
UUID=7EC09A4FC09A0E11 /win64 ntfs defaults 0 0
Offline
Have you installed ntfs-3g? That could be the dependency journalctl is referring to.
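If it is installed, one thing worth trying is pointing fstab at it explicitly, e.g. changing the /win64 line to something like:
UUID=7EC09A4FC09A0E11 /win64 ntfs-3g defaults 0 0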
Last edited by rogue (2012-12-07 16:14:00)
Offline
It's installed... Maybe I need to load it as a module in my initramfs?
Offline
Well, I did add ntfs to my initramfs, but it still dropped into an emergency mode terminal. I checked to make sure the change took effect, and the ntfs kernel module was loaded. Any ideas?
# lsmod | grep ntfs run in emergency shell:
ntfs 191626 0
/etc/mkinitcpio.conf:
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES="piix ide_disk reiserfs"
MODULES="ext4 ntfs"
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=""
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
# Some users may wish to include modprobe.conf for custom module options
# like so:
# FILES="/etc/modprobe.d/modprobe.conf"
FILES=""
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS="base"
#
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS="base udev autodetect pata scsi sata filesystems"
#
## This is identical to the above, except the old ide subsystem is
## used for IDE devices instead of the new pata subsystem.
# HOOKS="base udev autodetect ide scsi sata filesystems"
#
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS="base udev pata scsi sata usb filesystems"
#
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS="base udev pata mdadm encrypt filesystems"
#
## This setup loads an lvm2 volume group on a usb device.
# HOOKS="base udev usb lvm2 filesystems"
#
## NOTE: If you have /usr on a separate partition, you MUST include the
# usr, fsck and shutdown hooks.
HOOKS="base udev autodetect pata scsi sata dmraid filesystems usbinput fsck"
# COMPRESSION
# Use this to compress the initramfs image. By default, gzip compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=""
The systemd journal is still showing the same errors I posted earlier.
Offline