I've just installed initscripts-systemd from [community]. When I log into KDE from KDM I get an error dialog saying something like "console-kit-daemon.unit not found". Installing systemd-arch-units solved the problem.
Is that a bug or my weird config?
crabman
Offline
I've installed initscripts-systemd and systemd-arch-units and rebooted my system running linux with 'init=/bin/systemd' without any further configuration.
Right after mounting / I get an error something like:
/bin/systemd: error while loading shared libraries: libdbus-1.so.3: cannot open shared object file: No such file or directory
Kernel panic...
My /usr is on another partition than /, so I suppose systemd doesn't mount /usr early enough. How can I tell it that?
$ pacman -Ql dbus-core | grep libdbus
dbus-core /usr/lib/libdbus-1.so
dbus-core /usr/lib/libdbus-1.so.3
dbus-core /usr/lib/libdbus-1.so.3.5.3
/etc/fstab
...
/dev/sda5 / ext4 defaults,relatime 0 1
/dev/sda9 /usr ext4 defaults,relatime,comment=systemd.automount 0 1
...
Last edited by mokasin (2011-03-25 11:38:39)
Offline
mokasin wrote:I've installed initscripts-systemd and systemd-arch-units and rebooted my system running linux with 'init=/bin/systemd' without any further configuration.
Right after mounting / I get an error something like:
/bin/systemd: error while loading shared libraries: libdbus-1.so.3: cannot open shared object file: No such file or directory
Kernel panic...
My /usr is on another partition than /, so I suppose systemd doesn't mount /usr early enough. How can I tell it that?
$ pacman -Ql dbus-core | grep libdbus
dbus-core /usr/lib/libdbus-1.so
dbus-core /usr/lib/libdbus-1.so.3
dbus-core /usr/lib/libdbus-1.so.3.5.3
/etc/fstab
...
/dev/sda5 / ext4 defaults,relatime 0 1
/dev/sda9 /usr ext4 defaults,relatime,comment=systemd.automount 0 1
...
systemd does not support /usr on a separate partition. See the mailing list.
Offline
crabman wrote:I've just installed initscripts-systemd from [community]. When I log into KDE from KDM I get an error dialog saying something like "console-kit-daemon.unit not found". Installing systemd-arch-units solved the problem.
Is that a bug or my weird config?
crabman
Working as intended.
Offline
crabman wrote:I've just installed initscripts-systemd from [community]. When I log into KDE from KDM I get an error dialog saying something like "console-kit-daemon.unit not found". Installing systemd-arch-units solved the problem.
Is that a bug or my weird config?
crabman
Working as intended.
So I guess this is a case for the wiki, isn't it?
Offline
crabman wrote:So I guess this is a case for the wiki, isn't it?
Definitely!
It might be worth noting that many other things will also fail if /usr is on a separate partition (independently of systemd), but they might fail silently, so you might be affected even if you have not noticed. I think the most serious one is some of the helper programs used by udev. Many upstream packages have basically given up on supporting a separate /usr, so making this work in a stable way is a very big undertaking.
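If you want a rough idea of whether your own setup is exposed, one sketch (assuming the package is simply named "udev" and its helpers live under /lib/udev, as on current Arch) is to look for udev helpers that depend on anything in /usr:
$ pacman -Ql udev | grep '/usr/'                  # anything udev itself installs under /usr
$ ldd /lib/udev/* 2>/dev/null | grep '/usr/lib'   # helpers in /lib/udev linked against libraries in /usr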
Offline
@crabman/@mokasin: sorry, I got your comments confused. Both your issues are worthy of being in the wiki, and my comment was obviously aimed at @mokasin.
Offline
frankieboy wrote:I've been using Arch with fakeraid since December 2007 with the normal initscripts. This summer I bought an OCZ RevoDrive with a built-in RAID controller, and it only works with dmraid under the normal initscripts; with systemd I've had no luck, with or without dmraid.service, md-assemble.service, or both. With dmraid.service it stops at assembling fakeraid arrays.
I'm not personally using fakeraid, so it would be great if you could help me hunt this down. Could you try disabling all mdadm/mdraid/lvm services in systemd, rebooting, and running the relevant commands manually? Does this work? Could you tell me exactly what sequence of commands you have to use to get your drives up and running, and if possible whether any of them return any error codes?
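As a sketch of the kind of sequence I mean (assuming your arrays are the dmraid (sil) set plus the md arrays you mentioned; adjust the service names to whatever you actually have enabled):
# systemctl disable dmraid.service mdadm.service md-assemble.service   # keep systemd's hands off the arrays for the test
# reboot
# dmraid -ay                   # try to assemble the fakeraid set by hand
# mdadm --assemble --scan      # try to assemble the md arrays by hand
# echo $?                      # note the exit code of whichever command fails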
Tom
Offline
Is it possible at this time to completely replace sysvinit with systemd? Like this: http://en.gentoo-wiki.com/wiki/Systemd# … sv_support
Offline
I've got a weird problem. When I start with systemd, I've got an empty tray icon which is called org.kde.StatusNotifierItem-1702-1/StatusNotifierItem, and system notifications don't work. (I'm using KDE.)
I too have this empty tray icon in KDE. However, notifications work and I don't see any errors when I launch systemctl.
Ideas anyone?
Offline
Is it possible at this time to completely replace sysvinit with systemd? Like this: http://en.gentoo-wiki.com/wiki/Systemd# … sv_support
Depends what you mean. You can easily configure your system in such a way that initscripts and everything in /etc/rc.d/ is never used. It is also the case that sysvinit itself (i.e. /sbin/init) is never used if you run systemd. However, I believe some of the helper programs that come with sysvinit are still needed. There has been talk about moving some binaries from sysvinit to util-linux, but I don't know how much remains.
What exactly do you want to achieve by removing sysvinit? Maybe there is a way...
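As a quick sanity check (just a sketch, assuming you already boot with init=/bin/systemd on the kernel command line), you can verify that systemd really is PID 1 and that nothing from /etc/rc.d/ is being started:
$ cat /proc/1/comm                       # should print "systemd", not "init"
$ systemctl list-units --type=service    # only native .service units should appear here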
Offline
frankieboy wrote:I've been using Arch with fakeraid since December 2007 with the normal initscripts. This summer I bought an OCZ RevoDrive with a built-in RAID controller, and it only works with dmraid under the normal initscripts; with systemd I've had no luck, with or without dmraid.service, md-assemble.service, or both. With dmraid.service it stops at assembling fakeraid arrays.
I'm not personally using fakeraid, so it would be great if you could help me hunt this down. Could you try disabling all mdadm/mdraid/lvm services in systemd, rebooting, and running the relevant commands manually? Does this work? Could you tell me exactly what sequence of commands you have to use to get your drives up and running, and if possible whether any of them return any error codes?
Tom
I can't run a command if the root fs is not mounted, so please give me specific instructions. I assembled the OCZ RevoDrive with dmraid -ay during install; here is my fstab:
#
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
/dev/mapper/sil_bgbgdjacfhaep1 /boot ext4 defaults,noatime,nodiratime 0 2
/dev/mapper/sil_bgbgdjacfhaep2 / ext4 defaults,noatime,nodiratime 0 1
# Commented out by Dropbox
# /dev/mapper/sil_bgbgdjacfhaep3 /home ext4 defaults,noatime,nodiratime 0 0
/dev/mapper/sil_bgbgdjacfhaep4 swap swap defaults 0 0
/dev/md127 /var ext4 defaults,noatime,nodiratime 0 2
devpts /dev/pts devpts defaults 0 0
shm /dev/shm tmpfs defaults 0 0
#/dev/cdrom /mnt/cd iso9660 ro,user,noauto,unhide 0 0
#/dev/dvd /mnt/dvd udf ro,user,noauto,unhide 0 0
/dev/fd0 /mnt/fl vfat user,noauto 0 0
#/dev/sdb1 /mnt/usb vfat rw,user,noauto,async,iocharset=utf8 0 0
#/dev/sdc1 /mnt/usb vfat rw,user,noauto,async,iocharset=utf8 0 0
none /sys/bus/usb/drivers usbfs devgid=108,devmode=664 0 0
none /tmp tmpfs nodev,nosuid,noatime,size=1000M,mode=1777 0 0
/dev/mapper/sil_bgbgdjacfhaep3 /home ext4 defaults,noexec,noatime,nodiratime,user_xattr 0 2
/dev/md2 /home/user/tarolo xfs defaults,noexec,noatime,nodiratime 0 2
/dev/mapper/sil_.. is the SSD RevoDrive, and the /dev/md devices are /home and /var. With the normal initscripts, everything is fine.
Frankieboy
Offline
@tomegun:
I recently read about the principles of systemd and its advantages compared with sysvinit.
If I understand correctly, systemd is designed to replace sysvinit.
Thus, using sysvinit and systemd together is a temporary measure until all sysvinit scripts and daemons have been replaced.
Offline
@frankieboy: this doesn't make sense. If the dmraid device is your root, then it's not systemd that should be assembling it, but your initcpio. With the classic Arch sysvinit, by the time rc.sysinit is read, your root is already mounted read-only.
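Concretely, and only as a sketch (hook names depend on what you actually have installed; the dmraid hook ships with the dmraid package), that means making sure dmraid is in the HOOKS line of /etc/mkinitcpio.conf and regenerating the image:
HOOKS="base udev autodetect pata scsi sata dmraid mdadm filesystems"   # dmraid (and mdadm for the md arrays) before filesystems
# mkinitcpio -p kernel26   # assuming the stock kernel26 preset; use whatever matches your kernel package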
Offline
@tomegun:
I recently read about the principles of systemd and its advantages compared with sysvinit.
If I understand correctly, systemd is designed to replace sysvinit.
Thus, using sysvinit and systemd together is a temporary measure until all sysvinit scripts and daemons have been replaced.
Yes, that is correct. The recommended setup of systemd on arch completely replaces all the legacy stuff. For technical reasons we still need the sysvinit package around (some helper binaries), but this will eventually go away.
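If you are curious which helpers those are, a trivial way to peek (nothing more than a sketch) is to list what the sysvinit package actually installs:
$ pacman -Ql sysvinit    # the binaries listed here (e.g. killall5) are what still keep the package around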
Offline
@frankieboy: could you post your rc.conf as well?
Offline
@frankieboy: could you post your rc.conf as well?
Sure, here it is.
#
# /etc/rc.conf - Main Configuration for Arch Linux
#
#
# -----------------------------------------------------------------------
# LOCALIZATION
# -----------------------------------------------------------------------
#
# LOCALE: available languages can be listed with the 'locale -a' command
# HARDWARECLOCK: set to "UTC" or "localtime"
# TIMEZONE: timezones are found in /usr/share/zoneinfo
# KEYMAP: keymaps are found in /usr/share/kbd/keymaps
# CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
# CONSOLEMAP: found in /usr/share/kbd/consoletrans
# USECOLOR: use ANSI color sequences in startup messages
#
LOCALE="hu_HU.utf8"
HARDWARECLOCK="UTC"
TIMEZONE="Europe/Budapest"
KEYMAP="hu"
CONSOLEFONT="LatArCyrHeb-14"
CONSOLEMAP=
USECOLOR="yes"
#
# -----------------------------------------------------------------------
# HARDWARE
# -----------------------------------------------------------------------
#
# Scan hardware and load required modules at bootup
MOD_AUTOLOAD="yes"
# Module Blacklist - modules in this list will never be loaded by udev
MOD_BLACKLIST=()
#
# Modules to load at boot-up (in this order)
# - prefix a module with a ! to blacklist it
#
MODULES=(r8169 slhc pcspkr snd-mixer-oss !snd-pcm-oss snd-hwdep snd snd-page-alloc snd-pcm snd-timer snd-hda-codec snd-hda-intel soundcore acpi-cpufreq loop fuse floppy vboxdrv vboxnetflt)
# Scan for LVM volume groups at startup, required if you use LVM
USELVM="no"
#
# -----------------------------------------------------------------------
# NETWORKING
# -----------------------------------------------------------------------
#
HOSTNAME="leonidas"
#
# Use 'ifconfig -a' or 'ls /sys/class/net/' to see all available
# interfaces.
#
# Interfaces to start at boot-up (in this order)
# Declare each interface then list in INTERFACES
# - prefix an entry in INTERFACES with a ! to disable it
# - no hyphens in your interface names - Bash doesn't like it
#
# Note: to use DHCP, set your interface to be "dhcp" (eth0="dhcp")
#
# Don't use this for wireless interfaces, see network profiles below
#
#eth0="dhcp"
#INTERFACES=(eth0)
#
# Routes to start at boot-up (in this order)
# Declare each route then list in ROUTES
# - prefix an entry in ROUTES with a ! to disable it
#
#gateway="default gw 192.168.0.1"
#ROUTES=(!gateway)
#
# Enable these network profiles at boot-up. These are only useful
# if you happen to need multiple network configurations (ie, laptop users)
# - set to 'menu' to present a menu during boot-up (dialog package required)
# - prefix an entry with a ! to disable it
#
# Network profiles are found in /etc/network-profiles
#
#NET_PROFILES=(main)
#
# -----------------------------------------------------------------------
# DAEMONS
# -----------------------------------------------------------------------
#
# Daemons to start at boot-up (in this order)
# - prefix a daemon with a ! to disable it
# - prefix a daemon with a @ to start it up in the background
#
DAEMONS=(acpid irqbalance shorewall shorewall6 syslog-ng dbus networkmanager cpufreq alsa osspd cups crond mysqld samba xfs)
# End of file
Thanks
Frankieboy
Offline
@frankieboy:
If I understand correctly, this is the situation:
During install you assembled your arrays using dmraid. However, during boot they are assembled using "/sbin/mdadm --assemble --scan", as dmraid is not yet in a released version of initscripts (it is in git).
Your problem during boot is either that your arrays cannot be assembled or that the assembly causes fsck to hang on the root fs.
Firstly, please check that you have the right systemd service files installed. You want to disable md-assemble.service and enable mdadm.service (and make sure you have installed initscripts-systemd). If this does not fix your system, then please try the following:
Comment out all but the root fs from fstab, then turn off fsck for the root fs and disable all the md-assemble/mdadm/dmraid service files you have enabled in systemd. Reboot.
Then you should get a console with only the root fs mounted. Now verify that your arrays are not assembled, and that they do get assembled if you run "/sbin/mdadm --assemble --scan".
If that works ok, try to reboot again and this time run "systemctl start mdadm.service". If this fails, then please post the output of "systemctl status mdadm.service".
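To spell that out as a rough command sequence (a sketch only, using the service names mentioned above):
# systemctl disable md-assemble.service mdadm.service dmraid.service   # disable everything array-related for the manual test
(comment out everything but / in /etc/fstab and set its last field, the fsck pass number, to 0)
# reboot
# mdadm --assemble --scan          # verify the arrays assemble by hand
# systemctl start mdadm.service    # on a later boot, try the unit instead
# systemctl status mdadm.service   # and post this output if the unit fails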
HTH,
Tom
Offline
tomegun wrote:@frankieboy: could you post your rc.conf as well?
Sure, here it is.
Please use code-tags..... that was just LONG to scroll past.
Allan-Volunteer on the (topic being discussed) mailing lists. You never get the people who matters attention on the forums.
jasonwryan-Installing Arch is a measure of your literacy. Maintaining Arch is a measure of your diligence. Contributing to Arch is a measure of your competence.
Griemak-Bleeding edge, not bleeding flat. Edge denotes falls will occur from time to time. Bring your own parachute.
Offline
frankieboy wrote:tomegun wrote:@frankieboy: could you post your rc.conf as well?
Sure, here it is.
Please use code-tags..... that was just LONG to scroll past.
Sorry, could you be more specific about code tags? How can I use them?
Thank you
Frankieboy
Offline
Sorry, could you be more specific about code tags? How can I use them?
See this: https://bbs.archlinux.org/help.php#bbcode
Then scroll to the Code section.
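In short (this is plain forum BBCode, nothing Arch-specific): wrap the listing in [code]...[/code] when you post, for example:
[code]
DAEMONS=(acpid irqbalance shorewall shorewall6 syslog-ng dbus networkmanager cpufreq alsa osspd cups crond mysqld samba xfs)
[/code]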
ᶘ ᵒᴥᵒᶅ
Offline
@frankieboy:
If I understand correctly, this is the situation:
During install you assembled your arrays using dmraid. However, during boot they are assembled using "/sbin/mdadm --assemble --scan", as dmraid is not yet in a released version of initscripts (it is in git).
Your problem during boot is either that your arrays cannot be assembled or that the assembly causes fsck to hang on the root fs.
Firstly, please check that you have the right systemd service files installed. You want to disable md-assemble.service and enable mdadm.service (and make sure you have installed initscripts-systemd). If this does not fix your system, then please try the following:
Comment out all but the root fs from fstab, then turn off fsck for the root fs and disable all the md-assemble/mdadm/dmraid service files you have enabled in systemd. Reboot.
Then you should get a console with only the root fs mounted. Now verify that your arrays are not assembled, and that they do get assembled if you run "/sbin/mdadm --assemble --scan".
If that works ok, try to reboot again and this time run "systemctl start mdadm.service". If this fails, then please post the output of "systemctl status mdadm.service".
HTH,
Tom
I installed systemd-git, initscripts-systemd-git and systemd-arch-units-git, disabled all md* and dmraid services: no luck. Enabled mdadm.service: still no luck. Commented out all partitions but the root; I get a login prompt but am unable to log in (I don't know why). Enabled all partitions on the SSD (/dev/mapper/sil_) in fstab: no luck.
Frankieboy
Offline
On the wiki page it says that:
The initscripts-systemd package
This package contains unit files and scripts that are needed to emulate Arch's initscripts. Most people will not need all (if any) of these units, and they can be easily disabled by doing
# systemctl disable <unitfile>
This implies that by default all units are enabled, but I don't believe this is the case, right?
The reason I ask is because the 'systemctl is-enabled' output confuses me a bit. The man page says:
is-enabled [NAME...]
Checks whether any of the specified unit files is enabled (as with enable). Returns an exit code of 0 if at least one is enabled, non-zero
otherwise.
Running it:
% systemctl is-enabled blah
Unit name bla is not a valid unit name.
Cannot install unit bla: Invalid argument
% systemctl is-enabled lvm-activate.service
--no output--
This implies that lvm-activate is enabled, but I don't see it in the list of loaded services.
Any ideas how this works?
ᶘ ᵒᴥᵒᶅ
Offline
I installed systemd-git, initscripts-systemd-git and systemd-arch-units-git, disabled all md* and dmraid services: no luck. Enabled mdadm.service: still no luck. Commented out all partitions but the root; I get a login prompt but am unable to log in (I don't know why). Enabled all partitions on the SSD (/dev/mapper/sil_) in fstab: no luck.
Hmmm... we are basically shooting in the dark here, with no form of error message. Are you using the "verbose" kernel parameter? This should give you some more output, so maybe it will tell you where the problem is (when you get stuck at the login prompt, I'm assuming you are trying to log in as root, right?).
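For example (a sketch; systemd.log_level and systemd.log_target are systemd's own kernel command line options), appending something like this to the existing kernel line in your bootloader:
init=/bin/systemd verbose systemd.log_level=debug systemd.log_target=kmsg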
-t
Offline
On the wiki page it says that:
The initscripts-systemd package
This package contains unit files and scripts that are needed to emulate Arch's initscripts. Most people will not need all (if any) of these units, and they can be easily disabled by doing
# systemctl disable <unitfile>
This implies that by default all units are enabled, but I don't believe this is the case, right?
The reason I ask is because the 'systemctl is-enabled' output confuses me a bit. The man page says:
is-enabled [NAME...]
Checks whether any of the specified unit files is enabled (as with enable). Returns an exit code of 0 if at least one is enabled, non-zero
otherwise.
Running it:
% systemctl is-enabled blah
Unit name bla is not a valid unit name.
Cannot install unit bla: Invalid argument
% systemctl is-enabled lvm-activate.service
--no output--
This implies that lvm-activate is enabled, but I don't see it in the list of loaded services.
Any ideas how this works?
Nothing should be printed to stdout unless the unit isn't installed, as you discovered.
$ systemctl is-enabled lvm-activate.service && echo "lvm is enabled" || echo "lvm is not enabled"
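Equivalently, you can just look at the exit status directly:
$ systemctl is-enabled lvm-activate.service; echo $?    # 0 means enabled, non-zero means not (as the man page excerpt above says)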
Offline