#1 2017-01-15 16:45:25

whoops
Member
Registered: 2009-03-19
Posts: 891

[solved] systemd journal reader uses too many file descriptors?

edit: As far as I can tell, keeping at most 4 active journal readers at a time seems to be the only really viable option. Closing, re-creating and seeking is faster than I would have thought. The cursor for seeking is simply the '__CURSOR' field of each journal entry.

edit: Changed title. It seems 14 journal reader objects together use well over 1000 file descriptors (~80 per object; it's not only python - journalctl etc. use that many too)... using a single object instead needs a LOT more IO, RAM and CPU, though. So using many instances, even though they eat file descriptors, appears to be the better option? Except the soft limit is just 1024 by default, afair... so it can't really be relied on to work on different machines?

Or maybe there's something wrong with my journal / is it normal that it uses so many files?

I've been playing around with python and the journal... no matter how I filter, where I seek or how much I read, 13 readers always seem to work fine... the rest have a "get_usage()" of 0 and return empty dicts without errors. Some variations on the same test code...

With the following, the first 13 results look fine, the last 2 are empty:

#!/usr/bin/python
from systemd import journal

testarray = []
for i in range(15):
    testarray.append(journal.Reader())

for reader in testarray:
    print(reader.get_next())

*edit-snip-shortened*

With the following, all 15 entries appear to work fine:

...
for reader in testarray:
    reader.wait()
    print(reader.get_next())
    reader.close()

With the following, I get an error message:

...
for reader in testarray:
    reader.wait()
    print(reader.get_next())
Traceback (most recent call last):
  File "./test.py", line 9, in <module>
  File "/usr/lib/python3.6/site-packages/systemd/journal.py", line 273, in wait
OSError: [Errno 24] Too many open files

So, I'm guessing I have to close the journal readers... and, for some reason, also call wait() even when I don't want to wait for changes? Or would it be more appropriate to wrap a single journal reader and have it do all the work, even if that means I constantly have to seek back and forth (using a "cursor" somehow)?

Or am I missing something basic? (just getting started with python)

edit: Just tried it on a different archlinux machine... couldn't reproduce the problem there. Maybe something is wrong with my limits?

edit: Dammit, it really is the "ulimit -n" limit. Somehow I managed to get my 1024 almost filled up... shouldn't "reader.get_next()" have informed me about it instead of failing quietly? Still not sure whether that means something else is wrong with my machine or whether I should find a way to use journal reader(s) that needs fewer resources...
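For the record, the -n limit can be checked (and, up to the hard limit, raised) from inside the script with just the standard library:

```python
import resource

# RLIMIT_NOFILE is the per-process open-file-descriptor limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)      # e.g. 1024 and some larger hard limit by default

# With ~80 descriptors per reader, 13 readers already sit close to a
# 1024 soft limit. An unprivileged process may raise its soft limit
# as far as the hard limit:
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

This only papers over the problem on one machine, of course; the close-and-seek approach is what makes it independent of the local limits.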

Last edited by whoops (2017-01-25 23:37:57)
