hi all archers,
yesterday, after a few months off, I decided to give systemd readahead another try.
surprise!! a while ago it simply gave no benefit, but now it won't even start!
the service stops with this log message:
systemd-readahead-collect[xxx]: Failed to read event: Value too large for defined data type
this is the service status:
systemd-readahead-collect.service - Collect Read-Ahead Data
Loaded: loaded (/usr/lib/systemd/system/systemd-readahead-collect.service; enabled)
Active: active (exited) (Result: exit-code) since Tue, 26 Jun 2012 18:08:24 +0200; 42min ago
Process: 158 ExecStart=/usr/lib/systemd/systemd-readahead-collect (code=exited, status=1/FAILURE)
Status: "Collecting readahead data"
CGroup: name=systemd:/system/systemd-readahead-collect.service
googling, I've seen many of these errors, but always related to different software/packages.
the older ones typically involved 32-bit systems and missing LARGEFILE* libraries/functions;
the most recent ones all seem to be on x86_64 systems, related either to an EOVERFLOW kernel bug (buffer overflow protection) or to a compilation bug.
my system is a fully updated (26/06/2012) ArchLinux x86_64
standard core repo kernel
booted fully with systemd, sysvinit replaced/removed
intel hw (asus p5q-e with a q9550)
I compiled systemd myself, with no luck
anyone with a similar config and error?
today it works...
yesterday my /.readahead file was this:
x86_64-unknown-linux-gnu;VERSION=2
R
systemd-readahead-collect immediately exits at the first read event
today:
x86_64-unknown-linux-gnu;VERSION=2
R/var/cache/samba/brlock.tdb
/var/cache/samba/connections.tdb
/var/cache/samba/gencache_notrans.tdb
/var/cache/samba/locking.tdb
/var/cache/samba/notify.tdb
/var/cache/samba/notify_onelevel.tdb
/var/cache/samba/printer_list.tdb
/var/cache/samba/serverid.tdb
/var/cache/samba/sessionid.tdb
...
is it likely that the reading order at boot is the same as yesterday?
if yes:
my old brlock.tdb (yes, yesterday I browsed samba shares, as usual) gave the EOVERFLOW read error, while today's doesn't. was it too big?
naaa, it's modern, largefile-aware code, and I doubt the file could even reach a gigabyte
if no:
I'll never know
ideas?
p.s.: why read the samba cache first at boot????
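The "was it too big?" guess can be sanity-checked by listing the sizes of the tdb caches that show up in /.readahead (a sketch; the paths are taken from the listing above, and with 32-bit offsets EOVERFLOW would only be plausible past 2 GiB):

```shell
# Print size and name of each samba tdb cache file:
stat -c '%s %n' /var/cache/samba/*.tdb
# Anything actually over 2 GiB (2147483648 bytes) would show up here:
find /var/cache/samba -maxdepth 1 -name '*.tdb' -size +2G
```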
forget it
error is back again
same error here, did you find a solution in the meantime?
I am getting a similar error:
[(a number that increases during the boot process)] Failed to read event: Value too large for defined data type
no solution here
I still regularly get that log error
Do you use a swap file? I have the same problem and I think that's the cause. Hashing out the swapfile entry in /etc/fstab temporarily solved the problem for me.
https://bugs.archlinux.org/task/33459
Could people with a swapfile confirm the bug? Run
systemctl enable systemd-readahead-collect systemd-readahead-replay
and look at
dmesg | grep systemd-readahead
to see if the error "systemd-readahead[155]: Failed to read event: Value too large for defined data type" occurs.
Then hash out the swapfile entry in /etc/fstab, reboot, and look at the dmesg output again.
systemctl disable systemd-readahead-collect systemd-readahead-replay
disables the services and restores the earlier state.
PS: it's possible this happens when the swap file is larger than 2GB.
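Spelled out as commands, the test above could look like this (a sketch: the `/swapfile` path in the sed pattern is an assumption, so adjust it to match your fstab, and keep a backup since this edits a system file):

```shell
systemctl enable systemd-readahead-collect systemd-readahead-replay
dmesg | grep systemd-readahead        # does "Failed to read event" appear?
cp /etc/fstab /etc/fstab.bak          # back up before editing
sed -i 's|^/swapfile|#&|' /etc/fstab  # hash out the swapfile entry
# reboot, then run the dmesg check again; to undo everything:
#   cp /etc/fstab.bak /etc/fstab
#   systemctl disable systemd-readahead-collect systemd-readahead-replay
```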
Last edited by kelloco2 (2013-01-19 15:33:04)
@kelloco2 - yes!! I use a swap file and it's >2GiB. maybe this is the right track, but...
1) it's also true that, for me, the readahead service doesn't always fail. it seems nearly random: it fails 7-8 times out of 10.
2) /.readahead never contains paths to the swapfile. why would readahead-collect need to cache the swapfile?
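Since the pack file mixes a text header with binary records, a plain grep may refuse to search it; `-a` forces it to treat the file as text. A quick way to double-check that no swapfile path was recorded (a sketch, `/swapfile` path assumed):

```shell
# -a treats the binary pack file as text; -o prints only the matches:
grep -a -o '/swapfile' /.readahead || echo "no swapfile path recorded"
```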
I do not know exactly why that is. I reported the bug upstream to systemd's bug tracker and I hope they fix it. I do not want to give up the swapfile; a swap partition temporarily solved the problem.
I have the same problem and I have a 2GB swapfile. Does anybody have any news about this bug?
Hey guys. I solved it by shrinking the swap file down to 1.5GB. Now it seems to work.
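For anyone wanting to try the same fix, recreating a 1.5 GB swap file could look like this (a sketch: the `/swapfile` path is an assumption, run as root; dd is used rather than fallocate because swap files with holes are rejected by swapon on some filesystems):

```shell
swapoff /swapfile
dd if=/dev/zero of=/swapfile bs=1M count=1536   # write 1.5 GB of zeroes
chmod 600 /swapfile                             # swap must not be world-readable
mkswap /swapfile
swapon /swapfile
```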