
#1 2024-09-27 23:22:49

JackMacWindows
Member
Registered: 2022-01-16
Posts: 2

[linux-zen] OOM killer triggering despite plenty of free RAM available

I've been experiencing a weird memory-management issue lately: the OOM killer is sometimes triggered even though the system has plenty of free RAM, as the logs below confirm. I have earlyoom installed, but whatever is happening bypasses it entirely and hits the kernel's own OOM killer instead. It seems to happen only on the Zen kernel, at around 70% RAM usage while the CPU is under load (e.g. compiling code or running a compute task). Notably, I don't have swap enabled, which could be a contributing factor. Here are the kernel logs:

[172200.849320] kswapd0 invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
[172200.849326] CPU: 12 PID: 239 Comm: kswapd0 Tainted: P           OE      6.10.10-zen1-1-zen #1 bb2e27e975e263b999d5cf1514b44f0d982487fe
[172200.849329] Hardware name: System manufacturer System Product Name/TUF GAMING X570-PLUS (WI-FI), BIOS 4403 04/27/2022
[172200.849330] Call Trace:
[172200.849332]  <TASK>
[172200.849335]  dump_stack_lvl+0x5d/0x80
[172200.849340]  dump_header+0x44/0x18d
[172200.849342]  oom_kill_process.cold+0x8/0x83
[172200.849344]  out_of_memory+0x29f/0x5b0
[172200.849347]  kswapd+0x9a8/0xfc0
[172200.849353]  ? __pfx_kswapd+0x10/0x10
[172200.849355]  kthread+0xd2/0x100
[172200.849357]  ? __pfx_kthread+0x10/0x10
[172200.849359]  ret_from_fork+0x34/0x50
[172200.849361]  ? __pfx_kthread+0x10/0x10
[172200.849363]  ret_from_fork_asm+0x1a/0x30
[172200.849367]  </TASK>
[172200.849368] Mem-Info:
[172200.849370] active_anon:1835609 inactive_anon:2980727 isolated_anon:0
                 active_file:48735 inactive_file:52384 isolated_file:0
                 unevictable:4874 dirty:620 writeback:0
                 slab_reclaimable:133016 slab_unreclaimable:108115
                 mapped:266782 shmem:136125 pagetables:34804
                 sec_pagetables:809 bounce:0
                 kernel_misc_reclaimable:0
                 free:2779317 free_pcp:8785 free_cma:0
[172200.849373] Node 0 active_anon:7342436kB inactive_anon:11922908kB active_file:194940kB inactive_file:209536kB unevictable:19496kB isolated(anon):0kB isolated(file):0kB mapped:1067128kB dirty:2480kB writeback:0kB shmem:544500kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:6604800kB writeback_tmp:0kB kernel_stack:48528kB pagetables:139216kB sec_pagetables:3236kB all_unreclaimable? no
[172200.849377] Node 0 DMA free:5076kB boost:0kB min:28kB low:40kB high:52kB reserved_highatomic:0KB active_anon:0kB inactive_anon:6144kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:40kB local_pcp:0kB free_cma:0kB
[172200.849381] lowmem_reserve[]: 0 3161 31967 0 0
[172200.849384] Node 0 DMA32 free:1805924kB boost:0kB min:6680kB low:9916kB high:13152kB reserved_highatomic:0KB active_anon:350636kB inactive_anon:989144kB active_file:12288kB inactive_file:16kB unevictable:0kB writepending:0kB present:3312752kB managed:3246232kB mlocked:0kB bounce:0kB free_pcp:156kB local_pcp:60kB free_cma:0kB
[172200.849388] lowmem_reserve[]: 0 0 28806 0 0
[172200.849391] Node 0 Normal free:9306268kB boost:0kB min:60872kB low:90368kB high:119864kB reserved_highatomic:0KB active_anon:6991928kB inactive_anon:10927620kB active_file:181528kB inactive_file:209140kB unevictable:19496kB writepending:2480kB present:30133248kB managed:29504232kB mlocked:19492kB bounce:0kB free_pcp:34912kB local_pcp:984kB free_cma:0kB
[172200.849395] lowmem_reserve[]: 0 0 0 0 0
[172200.849398] Node 0 DMA: 3*4kB (UM) 3*8kB (UM) 3*16kB (UM) 2*32kB (UM) 3*64kB (UM) 3*128kB (UM) 3*256kB (UM) 3*512kB (UM) 2*1024kB (M) 0*2048kB 0*4096kB = 5076kB
[172200.849411] Node 0 DMA32: 21241*4kB (UME) 16780*8kB (UME) 11446*16kB (UME) 6886*32kB (UME) 3620*64kB (UME) 1724*128kB (UME) 889*256kB (UME) 487*512kB (UME) 248*1024kB (UME) 0*2048kB 0*4096kB = 1805924kB
[172200.849424] Node 0 Normal: 60204*4kB (UME) 244280*8kB (UME) 134416*16kB (UME) 47237*32kB (UME) 21739*64kB (UME) 9238*128kB (UME) 2466*256kB (UME) 404*512kB (UME) 36*1024kB (UM) 0*2048kB 0*4096kB = 9306064kB
[172200.849437] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[172200.849438] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[172200.849440] 241728 total pagecache pages
[172200.849440] 0 pages in swap cache
[172200.849441] Free swap  = 0kB
[172200.849442] Total swap = 0kB
[172200.849443] 8365499 pages RAM
[172200.849443] 0 pages HighMem/MovableOnly
[172200.849444] 174043 pages reserved
[172200.849445] 0 pages cma reserved
[172200.849445] 0 pages hwpoisoned
[172200.849446] Tasks state (memory values in pages):
<redacted>
[172200.849911] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/app-code@452e663388cf4129958c6b9b057d9b9f.service,task=multiplication.,pid=180247,uid=1000
[172200.849927] Out of memory: Killed process 180247 (multiplication.) total-vm:7941268kB, anon-rss:3693244kB, file-rss:1888kB, shmem-rss:0kB, UID:1000 pgtables:7284kB oom_score_adj:200

I've tried to read this output to see whether there's actually a lot of memory fragmentation or something similar, but I don't really know where to start, and the parts I can understand all point to plenty of RAM being available (the failing allocation is even order=0, i.e. a single page, so fragmentation alone shouldn't be able to trigger it). Could someone help me figure out what's going wrong? If this is a kernel bug, how would I go about investigating and reporting it?
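For anyone who wants to poke at the same numbers: the per-order free lists from the log (the "Node 0 DMA/DMA32/Normal:" lines) are also exposed in /proc/buddyinfo, so they can be watched live. A rough Python sketch to summarize them, nothing official, just reading the proc file:

#!/usr/bin/env python3
# Summarize /proc/buddyinfo: each column is the count of free blocks
# of order N, where an order-N block spans 4 KiB * 2^N on x86_64.
PAGE_KIB = 4

with open("/proc/buddyinfo") as f:
    for line in f:
        parts = line.split()          # "Node 0, zone Normal c0 c1 ... c10"
        node = parts[1].rstrip(",")
        zone = parts[3]
        counts = [int(c) for c in parts[4:]]
        free_kib = sum(n * PAGE_KIB * (1 << order)
                       for order, n in enumerate(counts))
        big = sum(counts[9:])         # free blocks of 2 MiB and larger
        print(f"node {node} zone {zone:>7}: {free_kib / 1024:9.1f} MiB free, "
              f"{big} blocks >= 2 MiB")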


#2 2024-09-28 13:24:04

Lone_Wolf
Administrator
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 12,789

Re: [linux-zen] OOM killer triggering despite plenty of free RAM available

Default settings on Arch allow 50% of total physical RAM to be used through tmpfs.

Using tmpfs for compiling is often advised to speed up builds, but it can take up lots of memory and cause OOM errors.
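To see how much of that allowance is actually in use, the Shmem field in /proc/meminfo covers tmpfs plus shared memory; a minimal sketch:

#!/usr/bin/env python3
# Compare Shmem (tmpfs + POSIX/SysV shared memory) against total RAM.
# The stock tmpfs size limit is 50% of RAM per mount.
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        meminfo[key] = int(rest.split()[0])   # values are reported in kB

total_kib = meminfo["MemTotal"]
shmem_kib = meminfo["Shmem"]
print(f"Shmem: {shmem_kib / 1024:.0f} MiB of {total_kib / 1024:.0f} MiB "
      f"({100 * shmem_kib / total_kib:.1f}% of RAM)")

df -h -t tmpfs gives the same picture broken down per mount.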

LTO and debug builds are known to increase the memory needed at build time substantially, especially when building with ninja.
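If the OOM kills line up with package builds, capping parallelism and turning those options off in /etc/makepkg.conf is one mitigation. Illustrative values only, so check them against your own file (and note that MAKEFLAGS only affects make-based builds; ninja normally needs its own -jN passed separately):

# /etc/makepkg.conf -- illustrative values, not the shipped defaults
MAKEFLAGS="-j8"   # cap parallel compile jobs for make-based builds
OPTIONS=(strip docs !libtool !staticlibs emptydirs zipman purge !debug !lto)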

System-monitoring programs like top and htop can help determine what is putting pressure on your memory.


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.

Clean chroot building not flexible enough?
Try clean chroot manager by graysky

