I was running a script that was generating a reasonable amount of CPU load, and the system seemed to be coping fine until Xorg died. From the logs, it looks like something happened to my GPU (the one integrated into the Ryzen 9 chip), but it's not at all clear to me what that something was. Do any of these log lines look familiar, or is there at least a good way to go about investigating this kind of failure?
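For what it's worth, this is roughly how I've been fishing these lines out of the journal after the fact — the exact filters are just my best guess (and -g assumes journalctl was built with grep support), so treat this as a sketch rather than a known-good recipe:

    # kernel-side amdgpu chatter from the current boot
    journalctl -k -b -g amdgpu

    # or just the error-and-worse entries, if that's too noisy
    journalctl -b -p err

    # crashes that systemd-coredump caught around that time, then one dump in detail
    # (1606 is the Xorg PID from the log below)
    coredumpctl list --since "14:30"
    coredumpctl info 1606

The output I get is below.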
Sep 27 14:38:11 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, signaled seq=2146915, emitted seq=2146917
Sep 27 14:38:11 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process firefox pid 42423 thread firefox:cs0 pid 42492
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: MODE2 reset
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset succeeded, trying to resume
Sep 27 14:38:11 leviathan kernel: [drm] PCIE GART of 1024M enabled (table at 0x000000F41FC00000).
Sep 27 14:38:11 leviathan kernel: [drm] VRAM is lost due to GPU reset!
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: PSP is resuming...
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: reserve 0xa00000 from 0xf41e000000 for PSP TMR
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAS: optional ras ta ucode is not available
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAP: optional rap ta ucode is not available
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resuming...
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resumed successfully!
Sep 27 14:38:11 leviathan kernel: [drm] DMUB hardware initialized: version=0x05001C00
Sep 27 14:38:11 leviathan kernel: [drm] kiq ring mec 2 pipe 1 q 0
Sep 27 14:38:11 leviathan kernel: [drm] JPEG decode initialized successfully.
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.1.0 uses VM inv eng 1 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 4 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 5 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 6 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 7 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 8 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 9 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 10 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 11 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 12 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring sdma0 uses VM inv eng 13 on hub 0
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 8
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 8
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 8
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring jpeg_dec uses VM inv eng 5 on hub 8
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow start
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow done
Sep 27 14:38:11 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset(2) succeeded!
Sep 27 14:38:11 leviathan kernel: [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
Sep 27 14:38:11 leviathan systemd-coredump[1125206]: Process 1043257 (alacritty) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Sep 27 14:38:11 leviathan systemd-coredump[1125205]: Process 1013717 (alacritty) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Sep 27 14:38:11 leviathan systemd-coredump[1125204]: Process 853050 (alacritty) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Sep 27 14:38:11 leviathan systemd[1]: Started Process Core Dump (PID 1125205/UID 0).
Sep 27 14:38:11 leviathan systemd[1]: Started Process Core Dump (PID 1125206/UID 0).
Sep 27 14:38:11 leviathan systemd[1]: Started Process Core Dump (PID 1125204/UID 0).
Sep 27 14:38:12 leviathan systemd-coredump[1125210]: Process 1606 (Xorg) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Sep 27 14:38:12 leviathan systemd[1]: Started Process Core Dump (PID 1125210/UID 0).
Sep 27 14:38:12 leviathan systemd-coredump[1125207]: Process 1013717 (alacritty) of user 1000 dumped core.
Stack trace of thread 1013723:
#0 0x00007b05afdba3f4 n/a (libc.so.6 + 0x963f4)
#1 0x00007b05afd61120 raise (libc.so.6 + 0x3d120)
#2 0x00007b05afd484c3 abort (libc.so.6 + 0x244c3)
#3 0x00007b05a61b0283 n/a (libgallium-24.2.3-arch1.1.so + 0x9b0283)
#4 0x00007b05a61b3773 n/a (libgallium-24.2.3-arch1.1.so + 0x9b3773)
#5 0x00007b05a58ab694 n/a (libgallium-24.2.3-arch1.1.so + 0xab694)
#6 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#7 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#8 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013717:
#0 0x00007b05afe3b1fd syscall (libc.so.6 + 0x1171fd)
#1 0x00007b05a58a18fb n/a (libgallium-24.2.3-arch1.1.so + 0xa18fb)
#2 0x00007b05a58ab251 n/a (libgallium-24.2.3-arch1.1.so + 0xab251)
#3 0x00007b05a6196b34 n/a (libgallium-24.2.3-arch1.1.so + 0x996b34)
#4 0x00007b05a5eccc31 n/a (libgallium-24.2.3-arch1.1.so + 0x6ccc31)
#5 0x00007b05a5940a67 n/a (libgallium-24.2.3-arch1.1.so + 0x140a67)
#6 0x00007b05a585f90b n/a (libgallium-24.2.3-arch1.1.so + 0x5f90b)
#7 0x00007b05af0a42a0 glTexStorageAttribs3DEXT (libGLX_mesa.so.0 + 0x4f2a0)
#8 0x00007b05af095456 n/a (libGLX_mesa.so.0 + 0x40456)
#9 0x00007b05af083f65 n/a (libGLX_mesa.so.0 + 0x2ef65)
#10 0x00005dbf491efb97 n/a (alacritty + 0x26cb97)
#11 0x00005dbf4906a155 n/a (alacritty + 0xe7155)
#12 0x00005dbf490db4c1 n/a (alacritty + 0x1584c1)
#13 0x00005dbf490e598b n/a (alacritty + 0x16298b)
#14 0x00005dbf490fd9cb n/a (alacritty + 0x17a9cb)
#15 0x00005dbf49033536 n/a (alacritty + 0xb0536)
#16 0x00005dbf490b2adc n/a (alacritty + 0x12fadc)
#17 0x00005dbf493f6885 n/a (alacritty + 0x473885)
#18 0x00005dbf490fede4 n/a (alacritty + 0x17bde4)
#19 0x00007b05afd49e08 n/a (libc.so.6 + 0x25e08)
#20 0x00007b05afd49ecc __libc_start_main (libc.so.6 + 0x25ecc)
#21 0x00005dbf4902fc35 n/a (alacritty + 0xacc35)
Stack trace of thread 1013718:
#0 0x00007b05afe3d8b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x00005dbf4930174f n/a (alacritty + 0x37e74f)
#2 0x00005dbf4930fafc n/a (alacritty + 0x38cafc)
#3 0x00005dbf4930c193 n/a (alacritty + 0x389193)
#4 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013719:
#0 0x00007b05afe3b1fd syscall (libc.so.6 + 0x1171fd)
#1 0x00005dbf493f6e93 n/a (alacritty + 0x473e93)
#2 0x00005dbf491e8d92 n/a (alacritty + 0x265d92)
#3 0x00005dbf491e88f7 n/a (alacritty + 0x2658f7)
#4 0x00005dbf49032a22 n/a (alacritty + 0xafa22)
#5 0x00005dbf4917bcd4 n/a (alacritty + 0x1f8cd4)
#6 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#7 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#8 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013720:
#0 0x00007b05afe3f0d4 accept4 (libc.so.6 + 0x11b0d4)
#1 0x00005dbf493fea75 n/a (alacritty + 0x47ba75)
#2 0x00005dbf490335d1 n/a (alacritty + 0xb05d1)
#3 0x00005dbf4917bef0 n/a (alacritty + 0x1f8ef0)
#4 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013722:
#0 0x00007b05afe2f63d __poll (libc.so.6 + 0x10b63d)
#1 0x00005dbf49587194 n/a (alacritty + 0x604194)
#2 0x00005dbf4958fd53 n/a (alacritty + 0x60cd53)
#3 0x00005dbf49591093 n/a (alacritty + 0x60e093)
#4 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013725:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013732:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013726:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013730:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013724:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013731:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013727:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013728:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013729:
#0 0x00007b05afdb4a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007b05afdb7479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007b05a58cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007b05a58ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007b05a58ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013734:
#0 0x00005dbf49321108 n/a (alacritty + 0x39e108)
#1 0x00005dbf49323707 n/a (alacritty + 0x3a0707)
#2 0x00005dbf49034348 n/a (alacritty + 0xb1348)
#3 0x00005dbf4917c1f7 n/a (alacritty + 0x1f91f7)
#4 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1013721:
#0 0x00007b05afe2f63d __poll (libc.so.6 + 0x10b63d)
#1 0x00005dbf49587194 n/a (alacritty + 0x604194)
#2 0x00005dbf4958fd53 n/a (alacritty + 0x60cd53)
#3 0x00005dbf49591093 n/a (alacritty + 0x60e093)
#4 0x00005dbf49407a7b n/a (alacritty + 0x484a7b)
#5 0x00007b05afdb839d n/a (libc.so.6 + 0x9439d)
#6 0x00007b05afe3d49c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:12 leviathan systemd-coredump[1125208]: Process 1043257 (alacritty) of user 1000 dumped core.
Stack trace of thread 1043263:
#0 0x00007e0c74b033f4 n/a (libc.so.6 + 0x963f4)
#1 0x00007e0c74aaa120 raise (libc.so.6 + 0x3d120)
#2 0x00007e0c74a914c3 abort (libc.so.6 + 0x244c3)
#3 0x00007e0c70db0283 n/a (libgallium-24.2.3-arch1.1.so + 0x9b0283)
#4 0x00007e0c70db3773 n/a (libgallium-24.2.3-arch1.1.so + 0x9b3773)
#5 0x00007e0c704ab694 n/a (libgallium-24.2.3-arch1.1.so + 0xab694)
#6 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#7 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#8 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043266:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043259:
#0 0x00007e0c74b841fd syscall (libc.so.6 + 0x1171fd)
#1 0x000059d826d82e93 n/a (alacritty + 0x473e93)
#2 0x000059d826b74d92 n/a (alacritty + 0x265d92)
#3 0x000059d826b748f7 n/a (alacritty + 0x2658f7)
#4 0x000059d8269bea22 n/a (alacritty + 0xafa22)
#5 0x000059d826b07cd4 n/a (alacritty + 0x1f8cd4)
#6 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#7 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#8 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043260:
#0 0x00007e0c74b880d4 accept4 (libc.so.6 + 0x11b0d4)
#1 0x000059d826d8aa75 n/a (alacritty + 0x47ba75)
#2 0x000059d8269bf5d1 n/a (alacritty + 0xb05d1)
#3 0x000059d826b07ef0 n/a (alacritty + 0x1f8ef0)
#4 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043257:
#0 0x00007e0c74b841fd syscall (libc.so.6 + 0x1171fd)
#1 0x00007e0c704a18fb n/a (libgallium-24.2.3-arch1.1.so + 0xa18fb)
#2 0x00007e0c704ab251 n/a (libgallium-24.2.3-arch1.1.so + 0xab251)
#3 0x00007e0c70d96b34 n/a (libgallium-24.2.3-arch1.1.so + 0x996b34)
#4 0x00007e0c70accc31 n/a (libgallium-24.2.3-arch1.1.so + 0x6ccc31)
#5 0x00007e0c70540a67 n/a (libgallium-24.2.3-arch1.1.so + 0x140a67)
#6 0x00007e0c7045f90b n/a (libgallium-24.2.3-arch1.1.so + 0x5f90b)
#7 0x00007e0c73be22a0 glTexStorageAttribs3DEXT (libGLX_mesa.so.0 + 0x4f2a0)
#8 0x00007e0c73bd3456 n/a (libGLX_mesa.so.0 + 0x40456)
#9 0x00007e0c73bc1f65 n/a (libGLX_mesa.so.0 + 0x2ef65)
#10 0x000059d826b7bb97 n/a (alacritty + 0x26cb97)
#11 0x000059d8269f6155 n/a (alacritty + 0xe7155)
#12 0x000059d826a674c1 n/a (alacritty + 0x1584c1)
#13 0x000059d826a7198b n/a (alacritty + 0x16298b)
#14 0x000059d826a899cb n/a (alacritty + 0x17a9cb)
#15 0x000059d8269bf536 n/a (alacritty + 0xb0536)
#16 0x000059d826a3eadc n/a (alacritty + 0x12fadc)
#17 0x000059d826d82885 n/a (alacritty + 0x473885)
#18 0x000059d826a8ade4 n/a (alacritty + 0x17bde4)
#19 0x00007e0c74a92e08 n/a (libc.so.6 + 0x25e08)
#20 0x00007e0c74a92ecc __libc_start_main (libc.so.6 + 0x25ecc)
#21 0x000059d8269bbc35 n/a (alacritty + 0xacc35)
Stack trace of thread 1043258:
#0 0x00007e0c74b868b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x000059d826c8d74f n/a (alacritty + 0x37e74f)
#2 0x000059d826c9bafc n/a (alacritty + 0x38cafc)
#3 0x000059d826c98193 n/a (alacritty + 0x389193)
#4 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043262:
#0 0x00007e0c74b7863d __poll (libc.so.6 + 0x10b63d)
#1 0x000059d826f13194 n/a (alacritty + 0x604194)
#2 0x000059d826f1bd53 n/a (alacritty + 0x60cd53)
#3 0x000059d826f1d093 n/a (alacritty + 0x60e093)
#4 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043261:
#0 0x00007e0c74b7863d __poll (libc.so.6 + 0x10b63d)
#1 0x000059d826f13194 n/a (alacritty + 0x604194)
#2 0x000059d826f1bd53 n/a (alacritty + 0x60cd53)
#3 0x000059d826f1d093 n/a (alacritty + 0x60e093)
#4 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043267:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043264:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043271:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043272:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043300:
#0 0x000059d826cad108 n/a (alacritty + 0x39e108)
#1 0x000059d826caf707 n/a (alacritty + 0x3a0707)
#2 0x000059d8269c0348 n/a (alacritty + 0xb1348)
#3 0x000059d826b081f7 n/a (alacritty + 0x1f91f7)
#4 0x000059d826d93a7b n/a (alacritty + 0x484a7b)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043268:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043269:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1043270:
#0 0x00007e0c74afda19 n/a (libc.so.6 + 0x90a19)
#1 0x00007e0c74b00479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007e0c704cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007e0c704ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007e0c704ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007e0c74b0139d n/a (libc.so.6 + 0x9439d)
#6 0x00007e0c74b8649c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@5-1125205-0.service: Deactivated successfully.
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@5-1125205-0.service: Consumed 451ms CPU time, 144.5M memory peak.
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@6-1125206-0.service: Deactivated successfully.
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@6-1125206-0.service: Consumed 447ms CPU time, 139.1M memory peak.
Sep 27 14:38:12 leviathan systemd-coredump[1125209]: Process 853050 (alacritty) of user 1000 dumped core.
Stack trace of thread 853066:
#0 0x00007d6d2e7453f4 n/a (libc.so.6 + 0x963f4)
#1 0x00007d6d2e6ec120 raise (libc.so.6 + 0x3d120)
#2 0x00007d6d2e6d34c3 abort (libc.so.6 + 0x244c3)
#3 0x00007d6d261b0283 n/a (libgallium-24.2.3-arch1.1.so + 0x9b0283)
#4 0x00007d6d261b3773 n/a (libgallium-24.2.3-arch1.1.so + 0x9b3773)
#5 0x00007d6d258ab694 n/a (libgallium-24.2.3-arch1.1.so + 0xab694)
#6 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#7 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#8 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853050:
#0 0x00007d6d2e7c61fd syscall (libc.so.6 + 0x1171fd)
#1 0x00007d6d258a18fb n/a (libgallium-24.2.3-arch1.1.so + 0xa18fb)
#2 0x00007d6d258ab251 n/a (libgallium-24.2.3-arch1.1.so + 0xab251)
#3 0x00007d6d26196b34 n/a (libgallium-24.2.3-arch1.1.so + 0x996b34)
#4 0x00007d6d25eccc31 n/a (libgallium-24.2.3-arch1.1.so + 0x6ccc31)
#5 0x00007d6d25940a67 n/a (libgallium-24.2.3-arch1.1.so + 0x140a67)
#6 0x00007d6d2585f90b n/a (libgallium-24.2.3-arch1.1.so + 0x5f90b)
#7 0x00007d6d2da6c2a0 glTexStorageAttribs3DEXT (libGLX_mesa.so.0 + 0x4f2a0)
#8 0x00007d6d2da5d456 n/a (libGLX_mesa.so.0 + 0x40456)
#9 0x00007d6d2da4bf65 n/a (libGLX_mesa.so.0 + 0x2ef65)
#10 0x0000592603ee8b97 n/a (alacritty + 0x26cb97)
#11 0x0000592603d63155 n/a (alacritty + 0xe7155)
#12 0x0000592603dd44c1 n/a (alacritty + 0x1584c1)
#13 0x0000592603dde98b n/a (alacritty + 0x16298b)
#14 0x0000592603df69cb n/a (alacritty + 0x17a9cb)
#15 0x0000592603d2c536 n/a (alacritty + 0xb0536)
#16 0x0000592603dabadc n/a (alacritty + 0x12fadc)
#17 0x00005926040ef885 n/a (alacritty + 0x473885)
#18 0x0000592603df7de4 n/a (alacritty + 0x17bde4)
#19 0x00007d6d2e6d4e08 n/a (libc.so.6 + 0x25e08)
#20 0x00007d6d2e6d4ecc __libc_start_main (libc.so.6 + 0x25ecc)
#21 0x0000592603d28c35 n/a (alacritty + 0xacc35)
Stack trace of thread 853057:
#0 0x00007d6d2e7c88b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x0000592603ffa74f n/a (alacritty + 0x37e74f)
#2 0x0000592604008afc n/a (alacritty + 0x38cafc)
#3 0x0000592604005193 n/a (alacritty + 0x389193)
#4 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853058:
#0 0x00007d6d2e7c61fd syscall (libc.so.6 + 0x1171fd)
#1 0x00005926040efe93 n/a (alacritty + 0x473e93)
#2 0x0000592603ee1d92 n/a (alacritty + 0x265d92)
#3 0x0000592603ee18f7 n/a (alacritty + 0x2658f7)
#4 0x0000592603d2ba22 n/a (alacritty + 0xafa22)
#5 0x0000592603e74cd4 n/a (alacritty + 0x1f8cd4)
#6 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#7 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#8 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853067:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853073:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853060:
#0 0x00007d6d2e7ba63d __poll (libc.so.6 + 0x10b63d)
#1 0x0000592604280f3b n/a (alacritty + 0x604f3b)
#2 0x0000592604288d53 n/a (alacritty + 0x60cd53)
#3 0x000059260428a093 n/a (alacritty + 0x60e093)
#4 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853068:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853059:
#0 0x00007d6d2e7ca0d4 accept4 (libc.so.6 + 0x11b0d4)
#1 0x00005926040f7a75 n/a (alacritty + 0x47ba75)
#2 0x0000592603d2c5d1 n/a (alacritty + 0xb05d1)
#3 0x0000592603e74ef0 n/a (alacritty + 0x1f8ef0)
#4 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853061:
#0 0x00007d6d2e7ba63d __poll (libc.so.6 + 0x10b63d)
#1 0x0000592604280f3b n/a (alacritty + 0x604f3b)
#2 0x0000592604288d53 n/a (alacritty + 0x60cd53)
#3 0x000059260428a093 n/a (alacritty + 0x60e093)
#4 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853075:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853069:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853070:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853074:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853071:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853072:
#0 0x00007d6d2e73fa19 n/a (libc.so.6 + 0x90a19)
#1 0x00007d6d2e742479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007d6d258cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007d6d258ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007d6d258ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 853077:
#0 0x000059260401a108 n/a (alacritty + 0x39e108)
#1 0x000059260401c707 n/a (alacritty + 0x3a0707)
#2 0x0000592603d2d348 n/a (alacritty + 0xb1348)
#3 0x0000592603e751f7 n/a (alacritty + 0x1f91f7)
#4 0x0000592604100a7b n/a (alacritty + 0x484a7b)
#5 0x00007d6d2e74339d n/a (libc.so.6 + 0x9439d)
#6 0x00007d6d2e7c849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@7-1125204-0.service: Deactivated successfully.
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@7-1125204-0.service: Consumed 548ms CPU time, 183.1M memory peak.
Sep 27 14:38:12 leviathan systemd-coredump[1125211]: Process 1606 (Xorg) of user 1000 dumped core.
Stack trace of thread 1611:
#0 0x00007dee7d3b73f4 n/a (libc.so.6 + 0x963f4)
#1 0x00007dee7d35e120 raise (libc.so.6 + 0x3d120)
#2 0x00007dee7d3454c3 abort (libc.so.6 + 0x244c3)
#3 0x000055aaf9e09b00 OsAbort (Xorg + 0x14ab00)
#4 0x000055aaf9e09e3b FatalError (Xorg + 0x14ae3b)
#5 0x000055aaf9e01d46 n/a (Xorg + 0x142d46)
#6 0x00007dee7d35e1d0 n/a (libc.so.6 + 0x3d1d0)
#7 0x00007dee7d908138 n/a (libpixman-1.so.0 + 0x6a138)
#8 0x00007dee7d8abf3b pixman_fill (libpixman-1.so.0 + 0xdf3b)
#9 0x000055aaf9e7bd5b fbFill (Xorg + 0x1bcd5b)
#10 0x000055aaf9e7c03e fbPolyFillRect (Xorg + 0x1bd03e)
#11 0x00007dee79f006e9 n/a (libglamoregl.so + 0x206e9)
#12 0x000055aaf9d7f14d n/a (Xorg + 0xc014d)
#13 0x00007dee7cbae378 n/a (amdgpu_drv.so + 0xd378)
#14 0x00007dee7cbb4b52 n/a (amdgpu_drv.so + 0x13b52)
#15 0x000055aaf9e38318 n/a (Xorg + 0x179318)
#16 0x00007dee7cbd56d8 n/a (libglx.so + 0xc6d8)
#17 0x000055aaf9e246e6 ddxGiveUp (Xorg + 0x1656e6)
#18 0x000055aaf9e09eec FatalError (Xorg + 0x14aeec)
#19 0x000055aaf9e01d46 n/a (Xorg + 0x142d46)
#20 0x00007dee7d35e1d0 n/a (libc.so.6 + 0x3d1d0)
#21 0x00007dee7d3b73f4 n/a (libc.so.6 + 0x963f4)
#22 0x00007dee7d35e120 raise (libc.so.6 + 0x3d120)
#23 0x00007dee7d3454c3 abort (libc.so.6 + 0x244c3)
#24 0x00007dee7a9b0283 n/a (libgallium-24.2.3-arch1.1.so + 0x9b0283)
#25 0x00007dee7a9b3773 n/a (libgallium-24.2.3-arch1.1.so + 0x9b3773)
#26 0x00007dee7a0ab694 n/a (libgallium-24.2.3-arch1.1.so + 0xab694)
#27 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#28 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#29 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1612:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1613:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1606:
#0 0x00007dee7d4381fd syscall (libc.so.6 + 0x1171fd)
#1 0x00007dee7a0a18fb n/a (libgallium-24.2.3-arch1.1.so + 0xa18fb)
#2 0x00007dee7a0ab251 n/a (libgallium-24.2.3-arch1.1.so + 0xab251)
#3 0x00007dee7a996b34 n/a (libgallium-24.2.3-arch1.1.so + 0x996b34)
#4 0x00007dee7a6ccc31 n/a (libgallium-24.2.3-arch1.1.so + 0x6ccc31)
#5 0x00007dee7a124489 n/a (libgallium-24.2.3-arch1.1.so + 0x124489)
#6 0x00007dee7cbab821 n/a (amdgpu_drv.so + 0xa821)
#7 0x000055aaf9d2cf4c _CallCallbacks (Xorg + 0x6df4c)
#8 0x000055aaf9e02a17 n/a (Xorg + 0x143a17)
#9 0x000055aaf9d3b492 WriteEventsToClient (Xorg + 0x7c492)
#10 0x000055aaf9d64d67 n/a (Xorg + 0xa5d67)
#11 0x000055aaf9d7ac4c n/a (Xorg + 0xbbc4c)
#12 0x000055aaf9d8048d n/a (Xorg + 0xc148d)
#13 0x000055aaf9d873ea n/a (Xorg + 0xc83ea)
#14 0x000055aaf9cef00e n/a (Xorg + 0x3000e)
#15 0x00007dee7d346e08 n/a (libc.so.6 + 0x25e08)
#16 0x00007dee7d346ecc __libc_start_main (libc.so.6 + 0x25ecc)
#17 0x000055aaf9cef5c5 _start (Xorg + 0x305c5)
Stack trace of thread 1614:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1616:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1617:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1618:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1619:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1624:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1731:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1630:
#0 0x00007dee7d43a8b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x000055aaf9dfc897 n/a (Xorg + 0x13d897)
#2 0x000055aaf9e006e9 n/a (Xorg + 0x1416e9)
#3 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#4 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 155849:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 42127:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1615:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 154023:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 154024:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 653072:
#0 0x00007dee7d3b1a19 n/a (libc.so.6 + 0x90a19)
#1 0x00007dee7d3b4479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007dee7a0cebae n/a (libgallium-24.2.3-arch1.1.so + 0xcebae)
#3 0x00007dee7a0ab5bc n/a (libgallium-24.2.3-arch1.1.so + 0xab5bc)
#4 0x00007dee7a0ceadd n/a (libgallium-24.2.3-arch1.1.so + 0xceadd)
#5 0x00007dee7d3b539d n/a (libc.so.6 + 0x9439d)
#6 0x00007dee7d43a49c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@8-1125210-0.service: Deactivated successfully.
Sep 27 14:38:12 leviathan systemd[1]: systemd-coredump@8-1125210-0.service: Consumed 573ms CPU time, 212.2M memory peak.
Sep 27 14:38:12 leviathan at-spi2-registryd[1717]: X connection to :0 broken (explicit kill or server shutdown).
Sep 27 14:38:12 leviathan dunst[155883]: X connection to :0 broken (explicit kill or server shutdown).
Sep 27 14:38:12 leviathan keepassxc[1657]: The X11 connection broke (error 1). Did the X11 server die?
Sep 27 14:38:12 leviathan systemd[1447]: dbus-:1.7-org.a11y.atspi.Registry@0.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 14:38:12 leviathan systemd[1447]: dbus-:1.7-org.a11y.atspi.Registry@0.service: Failed with result 'exit-code'.
Sep 27 14:38:12 leviathan systemd[1447]: dbus-:1.7-org.a11y.atspi.Registry@0.service: Consumed 1.489s CPU time, 2.4M memory peak.
Sep 27 14:38:12 leviathan systemd[1447]: dunst.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 14:38:12 leviathan systemd[1447]: dunst.service: Failed with result 'exit-code'.
Sep 27 14:38:12 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 14:38:12 leviathan systemd[1447]: blueman-applet.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 14:38:12 leviathan systemd[1447]: blueman-applet.service: Failed with result 'exit-code'.
Sep 27 14:38:12 leviathan systemd[1447]: blueman-applet.service: Consumed 34.729s CPU time, 46.4M memory peak.
Sep 27 14:38:12 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'exit-code'.
Sep 27 14:38:12 leviathan systemd[1447]: redshift-gtk.service: Consumed 5.249s CPU time, 26M memory peak.
Sep 27 14:38:12 leviathan systemd[1447]: keepassxc.service: Main process exited, code=exited, status=1/FAILURE
Sep 27 14:38:12 leviathan systemd[1447]: keepassxc.service: Failed with result 'exit-code'.
Sep 27 14:38:12 leviathan systemd[1447]: keepassxc.service: Consumed 34.919s CPU time, 171.9M memory peak.
Sep 27 14:38:12 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 1.
Sep 27 14:38:12 leviathan systemd[1447]: Started Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:12 leviathan redshift-gtk[1125373]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Sep 27 14:38:12 leviathan redshift-gtk[1125373]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:12 leviathan redshift-gtk[1125373]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:12 leviathan redshift-gtk[1125373]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:12 leviathan kernel: redshift-gtk[1125373]: segfault at 18 ip 000073de4f10f864 sp 00007ffe5baac4e0 error 4 in libgtk-3.so.0.2411.32[10f864,73de4f070000+3b5000] likely on CPU 8 (core 10, socket 0)
Sep 27 14:38:12 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Sep 27 14:38:12 leviathan systemd-coredump[1125376]: Process 1125373 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Sep 27 14:38:12 leviathan systemd[1]: Started Process Core Dump (PID 1125376/UID 0).
Sep 27 14:38:13 leviathan systemd-coredump[1125377]: Process 1125373 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 1125373:
#0 0x000073de4f10f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x000073de4f132361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000073de4f11a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x000073de516dfacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x000073de516c4805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x000073de516c5e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x000073de516c6ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x000073de4f347fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x000073de516dfacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x000073de516c4805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x000073de516c5e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000073de518bd71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x000073de52380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#13 0x000073de52389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#14 0x000073de52383a46 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x183a46)
#15 0x000073de523c0877 n/a (libpython3.12.so.1.0 + 0x1c0877)
#16 0x000073de52380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#17 0x000073de52389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#18 0x000073de5244e4b5 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24e4b5)
#19 0x000073de5247284a n/a (libpython3.12.so.1.0 + 0x27284a)
#20 0x000073de5246d72f n/a (libpython3.12.so.1.0 + 0x26d72f)
#21 0x000073de52487d14 n/a (libpython3.12.so.1.0 + 0x287d14)
#22 0x000073de524875a1 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x2875a1)
#23 0x000073de52486cff _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x286cff)
#24 0x000073de5247f4c4 Py_RunMain (libpython3.12.so.1.0 + 0x27f4c4)
#25 0x000073de52439c2c Py_BytesMain (libpython3.12.so.1.0 + 0x239c2c)
#26 0x000073de52034e08 n/a (libc.so.6 + 0x25e08)
#27 0x000073de52034ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x00005cbac2813045 _start (python3.12 + 0x1045)
Stack trace of thread 1125375:
#0 0x000073de5211abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x000073de517e5227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x000073de51781a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x000073de51781ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x000073de517b6026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x000073de520a339d n/a (libc.so.6 + 0x9439d)
#6 0x000073de5212849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:13 leviathan systemd[1]: systemd-coredump@9-1125376-0.service: Deactivated successfully.
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 2.
Sep 27 14:38:13 leviathan systemd[1447]: Started Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:13 leviathan redshift-gtk[1125387]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125387]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125387]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125387]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan kernel: redshift-gtk[1125387]: segfault at 18 ip 00007adb5950f864 sp 00007fffbb488160 error 4 in libgtk-3.so.0.2411.32[10f864,7adb59470000+3b5000] likely on CPU 11 (core 13, socket 0)
Sep 27 14:38:13 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Sep 27 14:38:13 leviathan systemd-coredump[1125390]: Process 1125387 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Sep 27 14:38:13 leviathan systemd[1]: Started Process Core Dump (PID 1125390/UID 0).
Sep 27 14:38:13 leviathan systemd-coredump[1125391]: Process 1125387 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 1125387:
#0 0x00007adb5950f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x00007adb59532361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x00007adb5951a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x00007adb5bbc2acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x00007adb5bba7805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x00007adb5bba8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x00007adb5bba9ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x00007adb59747fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x00007adb5bbc2acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x00007adb5bba7805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x00007adb5bba8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x00007adb5bd7c71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x00007adb5c980c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#13 0x00007adb5c989829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#14 0x00007adb5c983a46 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x183a46)
#15 0x00007adb5c9c0877 n/a (libpython3.12.so.1.0 + 0x1c0877)
#16 0x00007adb5c980c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#17 0x00007adb5c989829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#18 0x00007adb5ca4e4b5 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24e4b5)
#19 0x00007adb5ca7284a n/a (libpython3.12.so.1.0 + 0x27284a)
#20 0x00007adb5ca6d72f n/a (libpython3.12.so.1.0 + 0x26d72f)
#21 0x00007adb5ca87d14 n/a (libpython3.12.so.1.0 + 0x287d14)
#22 0x00007adb5ca875a1 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x2875a1)
#23 0x00007adb5ca86cff _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x286cff)
#24 0x00007adb5ca7f4c4 Py_RunMain (libpython3.12.so.1.0 + 0x27f4c4)
#25 0x00007adb5ca39c2c Py_BytesMain (libpython3.12.so.1.0 + 0x239c2c)
#26 0x00007adb5c634e08 n/a (libc.so.6 + 0x25e08)
#27 0x00007adb5c634ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x000061f8e917d045 _start (python3.12 + 0x1045)
Stack trace of thread 1125389:
#0 0x00007adb5c71abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x00007adb5bca4227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x00007adb5bc40a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x00007adb5bc40ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x00007adb5bc75026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x00007adb5c6a339d n/a (libc.so.6 + 0x9439d)
#6 0x00007adb5c72849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:13 leviathan systemd[1]: systemd-coredump@10-1125390-0.service: Deactivated successfully.
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:13 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 3.
Sep 27 14:38:13 leviathan systemd[1447]: Started Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:13 leviathan redshift-gtk[1125401]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125401]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125401]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan redshift-gtk[1125401]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:13 leviathan kernel: redshift-gtk[1125401]: segfault at 18 ip 000076f9fa10f864 sp 00007ffeda535d40 error 4 in libgtk-3.so.0.2411.32[10f864,76f9fa070000+3b5000] likely on CPU 11 (core 13, socket 0)
Sep 27 14:38:13 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Sep 27 14:38:13 leviathan systemd-coredump[1125404]: Process 1125401 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Sep 27 14:38:13 leviathan systemd[1]: Started Process Core Dump (PID 1125404/UID 0).
Sep 27 14:38:14 leviathan systemd-coredump[1125405]: Process 1125401 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 1125401:
#0 0x000076f9fa10f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x000076f9fa132361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000076f9fa11a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x000076f9fc63bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x000076f9fc620805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x000076f9fc621e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x000076f9fc622ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x000076f9fa347fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x000076f9fc63bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x000076f9fc620805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x000076f9fc621e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000076f9fc7bd71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x000076f9fd380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#13 0x000076f9fd389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#14 0x000076f9fd383a46 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x183a46)
#15 0x000076f9fd3c0877 n/a (libpython3.12.so.1.0 + 0x1c0877)
#16 0x000076f9fd380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#17 0x000076f9fd389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#18 0x000076f9fd44e4b5 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24e4b5)
#19 0x000076f9fd47284a n/a (libpython3.12.so.1.0 + 0x27284a)
#20 0x000076f9fd46d72f n/a (libpython3.12.so.1.0 + 0x26d72f)
#21 0x000076f9fd487d14 n/a (libpython3.12.so.1.0 + 0x287d14)
#22 0x000076f9fd4875a1 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x2875a1)
#23 0x000076f9fd486cff _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x286cff)
#24 0x000076f9fd47f4c4 Py_RunMain (libpython3.12.so.1.0 + 0x27f4c4)
#25 0x000076f9fd439c2c Py_BytesMain (libpython3.12.so.1.0 + 0x239c2c)
#26 0x000076f9fd034e08 n/a (libc.so.6 + 0x25e08)
#27 0x000076f9fd034ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x000056465707e045 _start (python3.12 + 0x1045)
Stack trace of thread 1125403:
#0 0x000076f9fd11abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x000076f9fc71d227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x000076f9fc6b9a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x000076f9fc6b9ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x000076f9fc6ee026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x000076f9fd0a339d n/a (libc.so.6 + 0x9439d)
#6 0x000076f9fd12849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:14 leviathan systemd[1]: systemd-coredump@11-1125404-0.service: Deactivated successfully.
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 4.
Sep 27 14:38:14 leviathan systemd[1447]: Started Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:14 leviathan redshift-gtk[1125415]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125415]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125415]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125415]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan kernel: redshift-gtk[1125415]: segfault at 18 ip 000075333ef0f864 sp 00007fff24cda660 error 4 in libgtk-3.so.0.2411.32[10f864,75333ee70000+3b5000] likely on CPU 11 (core 13, socket 0)
Sep 27 14:38:14 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Sep 27 14:38:14 leviathan systemd-coredump[1125418]: Process 1125415 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Sep 27 14:38:14 leviathan systemd[1]: Started Process Core Dump (PID 1125418/UID 0).
Sep 27 14:38:14 leviathan systemd-coredump[1125419]: Process 1125415 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 1125415:
#0 0x000075333ef0f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x000075333ef32361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000075333ef1a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x00007533415c2acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x00007533415a7805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x00007533415a8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x00007533415a9ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x000075333f147fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x00007533415c2acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x00007533415a7805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x00007533415a8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000075334177c71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x0000753342380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#13 0x0000753342389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#14 0x0000753342383a46 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x183a46)
#15 0x00007533423c0877 n/a (libpython3.12.so.1.0 + 0x1c0877)
#16 0x0000753342380c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#17 0x0000753342389829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#18 0x000075334244e4b5 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24e4b5)
#19 0x000075334247284a n/a (libpython3.12.so.1.0 + 0x27284a)
#20 0x000075334246d72f n/a (libpython3.12.so.1.0 + 0x26d72f)
#21 0x0000753342487d14 n/a (libpython3.12.so.1.0 + 0x287d14)
#22 0x00007533424875a1 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x2875a1)
#23 0x0000753342486cff _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x286cff)
#24 0x000075334247f4c4 Py_RunMain (libpython3.12.so.1.0 + 0x27f4c4)
#25 0x0000753342439c2c Py_BytesMain (libpython3.12.so.1.0 + 0x239c2c)
#26 0x0000753342034e08 n/a (libc.so.6 + 0x25e08)
#27 0x0000753342034ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x000064bba834b045 _start (python3.12 + 0x1045)
Stack trace of thread 1125417:
#0 0x000075334211abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x00007533416a4227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x0000753341640a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x0000753341640ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x0000753341675026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x00007533420a339d n/a (libc.so.6 + 0x9439d)
#6 0x000075334212849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:14 leviathan systemd[1]: systemd-coredump@12-1125418-0.service: Deactivated successfully.
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:14 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 5.
Sep 27 14:38:14 leviathan systemd[1447]: Started Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:14 leviathan redshift-gtk[1125428]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125428]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125428]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan redshift-gtk[1125428]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Sep 27 14:38:14 leviathan kernel: redshift-gtk[1125428]: segfault at 18 ip 00007c602af0f864 sp 00007ffcecc98560 error 4 in libgtk-3.so.0.2411.32[10f864,7c602ae70000+3b5000] likely on CPU 10 (core 12, socket 0)
Sep 27 14:38:14 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Sep 27 14:38:14 leviathan systemd-coredump[1125431]: Process 1125428 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Sep 27 14:38:14 leviathan systemd[1]: Started Process Core Dump (PID 1125431/UID 0).
Sep 27 14:38:15 leviathan systemd-coredump[1125432]: Process 1125428 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 1125428:
#0 0x00007c602af0f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x00007c602af32361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x00007c602af1a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x00007c602d43bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x00007c602d420805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x00007c602d421e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x00007c602d422ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x00007c602b147fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x00007c602d43bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x00007c602d420805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x00007c602d421e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x00007c602d5bd71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x00007c602e180c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#13 0x00007c602e189829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#14 0x00007c602e183a46 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x183a46)
#15 0x00007c602e1c0877 n/a (libpython3.12.so.1.0 + 0x1c0877)
#16 0x00007c602e180c28 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x180c28)
#17 0x00007c602e189829 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x189829)
#18 0x00007c602e24e4b5 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24e4b5)
#19 0x00007c602e27284a n/a (libpython3.12.so.1.0 + 0x27284a)
#20 0x00007c602e26d72f n/a (libpython3.12.so.1.0 + 0x26d72f)
#21 0x00007c602e287d14 n/a (libpython3.12.so.1.0 + 0x287d14)
#22 0x00007c602e2875a1 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x2875a1)
#23 0x00007c602e286cff _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x286cff)
#24 0x00007c602e27f4c4 Py_RunMain (libpython3.12.so.1.0 + 0x27f4c4)
#25 0x00007c602e239c2c Py_BytesMain (libpython3.12.so.1.0 + 0x239c2c)
#26 0x00007c602de34e08 n/a (libc.so.6 + 0x25e08)
#27 0x00007c602de34ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x00005eec5479f045 _start (python3.12 + 0x1045)
Stack trace of thread 1125430:
#0 0x00007c602df1abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x00007c602d51d227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x00007c602d4b9a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x00007c602d4b9ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x00007c602d4ee026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x00007c602dea339d n/a (libc.so.6 + 0x9439d)
#6 0x00007c602df2849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Sep 27 14:38:15 leviathan systemd[1]: systemd-coredump@13-1125431-0.service: Deactivated successfully.
Sep 27 14:38:15 leviathan systemd[1447]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Sep 27 14:38:15 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:15 leviathan systemd[1447]: redshift-gtk.service: Scheduled restart job, restart counter is at 6.
Sep 27 14:38:15 leviathan systemd[1447]: redshift-gtk.service: Start request repeated too quickly.
Sep 27 14:38:15 leviathan systemd[1447]: redshift-gtk.service: Failed with result 'core-dump'.
Sep 27 14:38:15 leviathan systemd[1447]: Failed to start Redshift display colour temperature adjustment (GUI).
Sep 27 14:38:15 leviathan systemd[1447]: Stopped target Current graphical user session.
Sep 27 14:38:15 leviathan systemd[1447]: Stopping Accessibility services bus...
Sep 27 14:38:15 leviathan dbus-broker[1712]: Dispatched 13887 messages @ 1(±1)μs / message.
Sep 27 14:38:15 leviathan systemd[1447]: Stopping Virtual filesystem service...
Sep 27 14:38:15 leviathan systemd[1]: run-user-1000-gvfs.mount: Deactivated successfully.
Sep 27 14:38:15 leviathan systemd[1447]: Stopped Virtual filesystem service.
Sep 27 14:38:15 leviathan systemd[1447]: Stopped Accessibility services bus.
Offline
There are a few recent topics about random crashes of both Ryzen CPUs and AMD GPUs - maybe you can find something useful in those.
Offline
I've looked through a number of those, but they all seem to be gaming-related and report slightly different errors to the one above. I've come back to this after another crash (they're quite infrequent, and I don't know what's common to them), for which the log is below, but I've had no luck searching for this specific "ring gfx... timeout" error. A sketch of the journal searches I've been running is at the end of the log.
Oct 23 16:28:02 leviathan systemd[1195]: Starting Mailbox Syncronisation Service...
Oct 23 16:28:02 leviathan systemd[1195]: Starting Nextcloud Synchronisation Service...
Oct 23 16:28:02 leviathan systemd[1195]: Starting DAV Synchronisation Service (Calendar and Contacts)...
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Inbox
Oct 23 16:28:02 leviathan nextcloud-sync[14222]: Synchronising notes -> https://org.gtf.io/Notes
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Trash
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Archives
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Spam
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Sent
Oct 23 16:28:02 leviathan email-sync[14221]: Moving 0 messages to personal/Drafts
Oct 23 16:28:02 leviathan email-sync[14221]: Synchronising with remote server
Oct 23 16:28:04 leviathan dhcpcd[1064]: wlan0: carrier lost
Oct 23 16:28:04 leviathan kernel: wlan0: Connection to AP 62:07:b6:81:e4:8e lost
Oct 23 16:28:04 leviathan mullvad-daemon[1061]: [mullvad_daemon::api][INFO] Detecting changes to offline state - Status { ipv4: false, ipv6: false }
Oct 23 16:28:04 leviathan mullvad-daemon[1061]: [mullvad_api::availability][DEBUG] Pausing API requests due to being offline
Oct 23 16:28:04 leviathan dhcpcd[1064]: wlan0: deleting address fe80::6cd9:ad2c:8ee4:8e27
Oct 23 16:28:04 leviathan iwd[1060]: Received Deauthentication event, reason: 4, from_ap: false
Oct 23 16:28:04 leviathan iwd[1060]: event: disconnect-info, reason: 4
Oct 23 16:28:04 leviathan iwd[1060]: event: state, old: connected, new: disconnected
Oct 23 16:28:04 leviathan iwd[1060]: event: state, old: disconnected, new: autoconnect_quick
Oct 23 16:28:04 leviathan dhcpcd[1064]: wlan0: deleting route to 192.168.0.0/24
Oct 23 16:28:04 leviathan dhcpcd[1064]: wlan0: deleting default route via 192.168.0.1
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=10695201, emitted seq=10695203
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process picom pid 1418 thread picom:cs0 pid 1451
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Dumping IP State
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Dumping IP State Completed
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: MODE2 reset
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset succeeded, trying to resume
Oct 23 16:28:04 leviathan kernel: [drm] PCIE GART of 1024M enabled (table at 0x000000F41FC00000).
Oct 23 16:28:04 leviathan kernel: [drm] VRAM is lost due to GPU reset!
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: PSP is resuming...
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: reserve 0xa00000 from 0xf41e000000 for PSP TMR
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAS: optional ras ta ucode is not available
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAP: optional rap ta ucode is not available
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resuming...
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resumed successfully!
Oct 23 16:28:04 leviathan kernel: [drm] DMUB hardware initialized: version=0x05001C00
Oct 23 16:28:04 leviathan kernel: [drm] kiq ring mec 2 pipe 1 q 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.1.0 uses VM inv eng 1 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 4 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 5 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 6 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 7 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 8 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 9 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 10 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 11 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 12 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring sdma0 uses VM inv eng 13 on hub 0
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 8
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 8
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 8
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring jpeg_dec uses VM inv eng 5 on hub 8
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow start
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow done
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset(2) succeeded!
Oct 23 16:28:04 leviathan systemd-coredump[14327]: Process 3942525 (alacritty) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Oct 23 16:28:04 leviathan systemd[1]: Started Process Core Dump (PID 14327/UID 0).
Oct 23 16:28:04 leviathan systemd-coredump[14328]: Process 3942525 (alacritty) of user 1000 dumped core.
Stack trace of thread 3942531:
#0 0x000078964893d3f4 n/a (libc.so.6 + 0x963f4)
#1 0x00007896488e4120 raise (libc.so.6 + 0x3d120)
#2 0x00007896488cb4c3 abort (libc.so.6 + 0x244c3)
#3 0x0000789644bafc03 n/a (libgallium-24.2.4-arch1.1.so + 0x9afc03)
#4 0x0000789644bb30f3 n/a (libgallium-24.2.4-arch1.1.so + 0x9b30f3)
#5 0x00007896442ab794 n/a (libgallium-24.2.4-arch1.1.so + 0xab794)
#6 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#7 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#8 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942525:
#0 0x00007896489be1fd syscall (libc.so.6 + 0x1171fd)
#1 0x00007896442a19fb n/a (libgallium-24.2.4-arch1.1.so + 0xa19fb)
#2 0x00007896442ab351 n/a (libgallium-24.2.4-arch1.1.so + 0xab351)
#3 0x0000789644b964b4 n/a (libgallium-24.2.4-arch1.1.so + 0x9964b4)
#4 0x00007896448ccf71 n/a (libgallium-24.2.4-arch1.1.so + 0x6ccf71)
#5 0x0000789644340ca7 n/a (libgallium-24.2.4-arch1.1.so + 0x140ca7)
#6 0x000078964425fa0b n/a (libgallium-24.2.4-arch1.1.so + 0x5fa0b)
#7 0x0000789647c572a0 glTexStorageAttribs3DEXT (libGLX_mesa.so.0 + 0x4f2a0)
#8 0x0000789647c48456 n/a (libGLX_mesa.so.0 + 0x40456)
#9 0x0000789647c36f65 n/a (libGLX_mesa.so.0 + 0x2ef65)
#10 0x000058f8a9950b97 n/a (alacritty + 0x26cb97)
#11 0x000058f8a97cb155 n/a (alacritty + 0xe7155)
#12 0x000058f8a983c4c1 n/a (alacritty + 0x1584c1)
#13 0x000058f8a984698b n/a (alacritty + 0x16298b)
#14 0x000058f8a985e9cb n/a (alacritty + 0x17a9cb)
#15 0x000058f8a9794536 n/a (alacritty + 0xb0536)
#16 0x000058f8a9813adc n/a (alacritty + 0x12fadc)
#17 0x000058f8a9b57885 n/a (alacritty + 0x473885)
#18 0x000058f8a985fde4 n/a (alacritty + 0x17bde4)
#19 0x00007896488cce08 n/a (libc.so.6 + 0x25e08)
#20 0x00007896488ccecc __libc_start_main (libc.so.6 + 0x25ecc)
#21 0x000058f8a9790c35 n/a (alacritty + 0xacc35)
Stack trace of thread 3942534:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942526:
#0 0x00007896489c08b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x000058f8a9a6274f n/a (alacritty + 0x37e74f)
#2 0x000058f8a9a70afc n/a (alacritty + 0x38cafc)
#3 0x000058f8a9a6d193 n/a (alacritty + 0x389193)
#4 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942538:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942527:
#0 0x00007896489be1fd syscall (libc.so.6 + 0x1171fd)
#1 0x000058f8a9b57e93 n/a (alacritty + 0x473e93)
#2 0x000058f8a9949d92 n/a (alacritty + 0x265d92)
#3 0x000058f8a99498f7 n/a (alacritty + 0x2658f7)
#4 0x000058f8a9793a22 n/a (alacritty + 0xafa22)
#5 0x000058f8a98dccd4 n/a (alacritty + 0x1f8cd4)
#6 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#7 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#8 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942535:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942558:
#0 0x000058f8a9a82108 n/a (alacritty + 0x39e108)
#1 0x000058f8a9a84707 n/a (alacritty + 0x3a0707)
#2 0x000058f8a9795348 n/a (alacritty + 0xb1348)
#3 0x000058f8a98dd1f7 n/a (alacritty + 0x1f91f7)
#4 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942529:
#0 0x00007896489b263d __poll (libc.so.6 + 0x10b63d)
#1 0x000058f8a9ce8f3b n/a (alacritty + 0x604f3b)
#2 0x000058f8a9cf0d53 n/a (alacritty + 0x60cd53)
#3 0x000058f8a9cf2093 n/a (alacritty + 0x60e093)
#4 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942539:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942536:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942530:
#0 0x00007896489b263d __poll (libc.so.6 + 0x10b63d)
#1 0x000058f8a9ce8f3b n/a (alacritty + 0x604f3b)
#2 0x000058f8a9cf0d53 n/a (alacritty + 0x60cd53)
#3 0x000058f8a9cf2093 n/a (alacritty + 0x60e093)
#4 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942537:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942533:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942528:
#0 0x00007896489c20d4 accept4 (libc.so.6 + 0x11b0d4)
#1 0x000058f8a9b5fa75 n/a (alacritty + 0x47ba75)
#2 0x000058f8a97945d1 n/a (alacritty + 0xb05d1)
#3 0x000058f8a98dcef0 n/a (alacritty + 0x1f8ef0)
#4 0x000058f8a9b68a7b n/a (alacritty + 0x484a7b)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942532:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 3942540:
#0 0x0000789648937a19 n/a (libc.so.6 + 0x90a19)
#1 0x000078964893a479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x00007896442cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x00007896442ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x00007896442cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000078964893b39d n/a (libc.so.6 + 0x9439d)
#6 0x00007896489c049c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:04 leviathan systemd[1]: systemd-coredump@30-14327-0.service: Deactivated successfully.
Oct 23 16:28:04 leviathan systemd[1]: systemd-coredump@30-14327-0.service: Consumed 225ms CPU time, 148.8M memory peak.
Oct 23 16:28:04 leviathan iwd[1060]: event: state, old: autoconnect_quick, new: autoconnect_full
Oct 23 16:28:04 leviathan systemd-coredump[14350]: Process 1383 (Xorg) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Oct 23 16:28:04 leviathan systemd[1]: Started Process Core Dump (PID 14350/UID 0).
Oct 23 16:28:04 leviathan rtkit-daemon[1452]: Supervising 7 threads of 1 processes of 1 users.
Oct 23 16:28:04 leviathan rtkit-daemon[1452]: Successfully made thread 14352 of process 1445 owned by '1000' RT at priority 5.
Oct 23 16:28:04 leviathan rtkit-daemon[1452]: Supervising 8 threads of 1 processes of 1 users.
Oct 23 16:28:05 leviathan systemd-coredump[14351]: Process 1383 (Xorg) of user 1000 dumped core.
Stack trace of thread 1392:
#0 0x000074caa84733f4 n/a (libc.so.6 + 0x963f4)
#1 0x000074caa841a120 raise (libc.so.6 + 0x3d120)
#2 0x000074caa84014c3 abort (libc.so.6 + 0x244c3)
#3 0x0000609c47979b00 OsAbort (Xorg + 0x14ab00)
#4 0x0000609c47979e3b FatalError (Xorg + 0x14ae3b)
#5 0x0000609c47971d46 n/a (Xorg + 0x142d46)
#6 0x000074caa841a1d0 n/a (libc.so.6 + 0x3d1d0)
#7 0x000074caa89c4138 n/a (libpixman-1.so.0 + 0x6a138)
#8 0x000074caa8967f3b pixman_fill (libpixman-1.so.0 + 0xdf3b)
#9 0x0000609c479ebd5b fbFill (Xorg + 0x1bcd5b)
#10 0x0000609c479ec03e fbPolyFillRect (Xorg + 0x1bd03e)
#11 0x000074ca9a9e86e9 n/a (libglamoregl.so + 0x206e9)
#12 0x0000609c478ef14d n/a (Xorg + 0xc014d)
#13 0x000074caa7c65378 n/a (amdgpu_drv.so + 0xd378)
#14 0x000074caa7c6bb52 n/a (amdgpu_drv.so + 0x13b52)
#15 0x0000609c479a8318 n/a (Xorg + 0x179318)
#16 0x000074caa7c8c6d8 n/a (libglx.so + 0xc6d8)
#17 0x0000609c479946e6 ddxGiveUp (Xorg + 0x1656e6)
#18 0x0000609c47979eec FatalError (Xorg + 0x14aeec)
#19 0x0000609c47971d46 n/a (Xorg + 0x142d46)
#20 0x000074caa841a1d0 n/a (libc.so.6 + 0x3d1d0)
#21 0x000074caa84733f4 n/a (libc.so.6 + 0x963f4)
#22 0x000074caa841a120 raise (libc.so.6 + 0x3d120)
#23 0x000074caa84014c3 abort (libc.so.6 + 0x244c3)
#24 0x000074caa5bafc03 n/a (libgallium-24.2.4-arch1.1.so + 0x9afc03)
#25 0x000074caa5bb30f3 n/a (libgallium-24.2.4-arch1.1.so + 0x9b30f3)
#26 0x000074caa52ab794 n/a (libgallium-24.2.4-arch1.1.so + 0xab794)
#27 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#28 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#29 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1383:
#0 0x000074caa84f41fd syscall (libc.so.6 + 0x1171fd)
#1 0x000074caa52a19fb n/a (libgallium-24.2.4-arch1.1.so + 0xa19fb)
#2 0x000074caa52ab351 n/a (libgallium-24.2.4-arch1.1.so + 0xab351)
#3 0x000074caa5b964b4 n/a (libgallium-24.2.4-arch1.1.so + 0x9964b4)
#4 0x000074caa58ccf71 n/a (libgallium-24.2.4-arch1.1.so + 0x6ccf71)
#5 0x000074caa53246c9 n/a (libgallium-24.2.4-arch1.1.so + 0x1246c9)
#6 0x000074caa7c62821 n/a (amdgpu_drv.so + 0xa821)
#7 0x0000609c4789cf4c _CallCallbacks (Xorg + 0x6df4c)
#8 0x0000609c47972a17 n/a (Xorg + 0x143a17)
#9 0x0000609c478ab492 WriteEventsToClient (Xorg + 0x7c492)
#10 0x0000609c478d4d67 n/a (Xorg + 0xa5d67)
#11 0x0000609c478eac4c n/a (Xorg + 0xbbc4c)
#12 0x0000609c478f048d n/a (Xorg + 0xc148d)
#13 0x0000609c478f73ea n/a (Xorg + 0xc83ea)
#14 0x0000609c4785f00e n/a (Xorg + 0x3000e)
#15 0x000074caa8402e08 n/a (libc.so.6 + 0x25e08)
#16 0x000074caa8402ecc __libc_start_main (libc.so.6 + 0x25ecc)
#17 0x0000609c4785f5c5 _start (Xorg + 0x305c5)
Stack trace of thread 1400:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1393:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1401:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1404:
#0 0x000074caa84f68b2 epoll_wait (libc.so.6 + 0x1198b2)
#1 0x0000609c4796c897 n/a (Xorg + 0x13d897)
#2 0x0000609c479706e9 n/a (Xorg + 0x1416e9)
#3 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#4 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1394:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1413:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1395:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1402:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1399:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1398:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 1397:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
Stack trace of thread 22019:
#0 0x000074caa846da19 n/a (libc.so.6 + 0x90a19)
#1 0x000074caa8470479 pthread_cond_wait (libc.so.6 + 0x93479)
#2 0x000074caa52cecae n/a (libgallium-24.2.4-arch1.1.so + 0xcecae)
#3 0x000074caa52ab6bc n/a (libgallium-24.2.4-arch1.1.so + 0xab6bc)
#4 0x000074caa52cebdd n/a (libgallium-24.2.4-arch1.1.so + 0xcebdd)
#5 0x000074caa847139d n/a (libc.so.6 + 0x9439d)
#6 0x000074caa84f649c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:05 leviathan systemd[1]: systemd-coredump@31-14350-0.service: Deactivated successfully.
Oct 23 16:28:05 leviathan systemd[1]: systemd-coredump@31-14350-0.service: Consumed 290ms CPU time, 202.6M memory peak.
Oct 23 16:28:05 leviathan at-spi2-registryd[1498]: X connection to :0 broken (explicit kill or server shutdown).
Oct 23 16:28:05 leviathan dunst[1619]: X connection to :0 broken (explicit kill or server shutdown).
Oct 23 16:28:05 leviathan keepassxc[1430]: The X11 connection broke (error 1). Did the X11 server die?
Oct 23 16:28:05 leviathan systemd[1195]: dbus-:1.8-org.a11y.atspi.Registry@0.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 16:28:05 leviathan systemd[1195]: dbus-:1.8-org.a11y.atspi.Registry@0.service: Failed with result 'exit-code'.
Oct 23 16:28:05 leviathan systemd[1195]: dbus-:1.8-org.a11y.atspi.Registry@0.service: Consumed 3.179s CPU time, 2.9M memory peak.
Oct 23 16:28:05 leviathan systemd[1195]: dunst.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 16:28:05 leviathan systemd[1195]: dunst.service: Failed with result 'exit-code'.
Oct 23 16:28:05 leviathan systemd[1195]: dunst.service: Consumed 2.380s CPU time, 6.1M memory peak.
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'exit-code'.
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Consumed 28.303s CPU time, 25M memory peak.
Oct 23 16:28:05 leviathan systemd[1195]: mullvad-vpn.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 16:28:05 leviathan systemd[1195]: mullvad-vpn.service: Failed with result 'exit-code'.
Oct 23 16:28:05 leviathan systemd[1195]: mullvad-vpn.service: Consumed 2min 45.335s CPU time, 411.4M memory peak.
Oct 23 16:28:05 leviathan systemd[1195]: keepassxc.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 16:28:05 leviathan systemd[1195]: keepassxc.service: Failed with result 'exit-code'.
Oct 23 16:28:05 leviathan systemd[1195]: keepassxc.service: Consumed 30.993s CPU time, 177.6M memory peak.
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 1.
Oct 23 16:28:05 leviathan systemd[1195]: Started Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:05 leviathan redshift-gtk[14530]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Oct 23 16:28:05 leviathan redshift-gtk[14530]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:05 leviathan redshift-gtk[14530]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:05 leviathan redshift-gtk[14530]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:05 leviathan systemd-coredump[14534]: Process 14530 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Oct 23 16:28:05 leviathan kernel: redshift-gtk[14530]: segfault at 18 ip 000078e97590f864 sp 00007ffc180194b0 error 4 in libgtk-3.so.0.2411.32[10f864,78e975870000+3b5000] likely on CPU 14 (core 2, socket 0)
Oct 23 16:28:05 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Oct 23 16:28:05 leviathan systemd[1]: Started Process Core Dump (PID 14534/UID 0).
Oct 23 16:28:05 leviathan systemd-coredump[14535]: Process 14530 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 14530:
#0 0x000078e97590f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x000078e975932361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000078e97591a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x000078e977f02acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x000078e977ee7805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x000078e977ee8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x000078e977ee9ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x000078e975b47fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x000078e977f02acb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x000078e977ee7805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x000078e977ee8e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000078e9780bd71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x000078e978b82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#13 0x000078e978b8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#14 0x000078e978b85706 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x185706)
#15 0x000078e978bc1c67 n/a (libpython3.12.so.1.0 + 0x1c1c67)
#16 0x000078e978b82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#17 0x000078e978b8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#18 0x000078e978c4de35 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24de35)
#19 0x000078e978c724aa n/a (libpython3.12.so.1.0 + 0x2724aa)
#20 0x000078e978c6d24f n/a (libpython3.12.so.1.0 + 0x26d24f)
#21 0x000078e978c879d4 n/a (libpython3.12.so.1.0 + 0x2879d4)
#22 0x000078e978c87261 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x287261)
#23 0x000078e978c869bf _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x2869bf)
#24 0x000078e978c7f174 Py_RunMain (libpython3.12.so.1.0 + 0x27f174)
#25 0x000078e978c3c5ec Py_BytesMain (libpython3.12.so.1.0 + 0x23c5ec)
#26 0x000078e978834e08 n/a (libc.so.6 + 0x25e08)
#27 0x000078e978834ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x00005a5973ad0045 _start (python3.12 + 0x1045)
Stack trace of thread 14533:
#0 0x000078e97891abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x000078e977fe4227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x000078e977f80a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x000078e977f80ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x000078e977fb5026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x000078e9788a339d n/a (libc.so.6 + 0x9439d)
#6 0x000078e97892849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Oct 23 16:28:05 leviathan systemd[1]: systemd-coredump@32-14534-0.service: Deactivated successfully.
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:05 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 2.
Oct 23 16:28:05 leviathan systemd[1195]: Started Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:06 leviathan redshift-gtk[14545]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14545]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14545]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14545]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan systemd-coredump[14548]: Process 14545 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Oct 23 16:28:06 leviathan kernel: redshift-gtk[14545]: segfault at 18 ip 00007a88d0b0f864 sp 00007fff96630b30 error 4 in libgtk-3.so.0.2411.32[10f864,7a88d0a70000+3b5000] likely on CPU 3 (core 3, socket 0)
Oct 23 16:28:06 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Oct 23 16:28:06 leviathan systemd[1]: Started Process Core Dump (PID 14548/UID 0).
Oct 23 16:28:06 leviathan systemd-coredump[14549]: Process 14545 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 14545:
#0 0x00007a88d0b0f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x00007a88d0b32361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x00007a88d0b1a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x00007a88d319dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x00007a88d3182805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x00007a88d3183e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x00007a88d3184ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x00007a88d0d47fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x00007a88d319dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x00007a88d3182805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x00007a88d3183e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x00007a88d337c71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x00007a88d3f82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#13 0x00007a88d3f8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#14 0x00007a88d3f85706 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x185706)
#15 0x00007a88d3fc1c67 n/a (libpython3.12.so.1.0 + 0x1c1c67)
#16 0x00007a88d3f82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#17 0x00007a88d3f8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#18 0x00007a88d404de35 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24de35)
#19 0x00007a88d40724aa n/a (libpython3.12.so.1.0 + 0x2724aa)
#20 0x00007a88d406d24f n/a (libpython3.12.so.1.0 + 0x26d24f)
#21 0x00007a88d40879d4 n/a (libpython3.12.so.1.0 + 0x2879d4)
#22 0x00007a88d4087261 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x287261)
#23 0x00007a88d40869bf _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x2869bf)
#24 0x00007a88d407f174 Py_RunMain (libpython3.12.so.1.0 + 0x27f174)
#25 0x00007a88d403c5ec Py_BytesMain (libpython3.12.so.1.0 + 0x23c5ec)
#26 0x00007a88d3c34e08 n/a (libc.so.6 + 0x25e08)
#27 0x00007a88d3c34ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x000058024fb0e045 _start (python3.12 + 0x1045)
Stack trace of thread 14547:
#0 0x00007a88d3d1abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x00007a88d32a3227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x00007a88d323fa55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x00007a88d323fab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x00007a88d3274026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x00007a88d3ca339d n/a (libc.so.6 + 0x9439d)
#6 0x00007a88d3d2849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:06 leviathan systemd[1]: systemd-coredump@33-14548-0.service: Deactivated successfully.
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 3.
Oct 23 16:28:06 leviathan systemd[1195]: Started Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:06 leviathan redshift-gtk[14558]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14558]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14558]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan redshift-gtk[14558]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:06 leviathan systemd-coredump[14561]: Process 14558 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Oct 23 16:28:06 leviathan kernel: redshift-gtk[14558]: segfault at 18 ip 000077572490f864 sp 00007ffe2d746df0 error 4 in libgtk-3.so.0.2411.32[10f864,775724870000+3b5000] likely on CPU 15 (core 3, socket 0)
Oct 23 16:28:06 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Oct 23 16:28:06 leviathan systemd[1]: Started Process Core Dump (PID 14561/UID 0).
Oct 23 16:28:06 leviathan systemd-coredump[14562]: Process 14558 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 14558:
#0 0x000077572490f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x0000775724932361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000077572491a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x0000775726e9dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x0000775726e82805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x0000775726e83e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x0000775726e84ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x0000775724b47fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x0000775726e9dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x0000775726e82805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x0000775726e83e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000077572707c71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x0000775727b82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#13 0x0000775727b8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#14 0x0000775727b85706 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x185706)
#15 0x0000775727bc1c67 n/a (libpython3.12.so.1.0 + 0x1c1c67)
#16 0x0000775727b82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#17 0x0000775727b8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#18 0x0000775727c4de35 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24de35)
#19 0x0000775727c724aa n/a (libpython3.12.so.1.0 + 0x2724aa)
#20 0x0000775727c6d24f n/a (libpython3.12.so.1.0 + 0x26d24f)
#21 0x0000775727c879d4 n/a (libpython3.12.so.1.0 + 0x2879d4)
#22 0x0000775727c87261 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x287261)
#23 0x0000775727c869bf _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x2869bf)
#24 0x0000775727c7f174 Py_RunMain (libpython3.12.so.1.0 + 0x27f174)
#25 0x0000775727c3c5ec Py_BytesMain (libpython3.12.so.1.0 + 0x23c5ec)
#26 0x0000775727834e08 n/a (libc.so.6 + 0x25e08)
#27 0x0000775727834ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x0000578a3900e045 _start (python3.12 + 0x1045)
Stack trace of thread 14560:
#0 0x000077572791abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x0000775726fa3227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x0000775726f3fa55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x0000775726f3fab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x0000775726f74026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x00007757278a339d n/a (libc.so.6 + 0x9439d)
#6 0x000077572792849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:06 leviathan systemd[1]: systemd-coredump@34-14561-0.service: Deactivated successfully.
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:06 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 4.
Oct 23 16:28:06 leviathan systemd[1195]: Started Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:07 leviathan redshift-gtk[14571]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14571]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14571]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14571]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan kernel: redshift-gtk[14571]: segfault at 18 ip 0000706a2cb0f864 sp 00007fffad980750 error 4 in libgtk-3.so.0.2411.32[10f864,706a2ca70000+3b5000] likely on CPU 15 (core 3, socket 0)
Oct 23 16:28:07 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Oct 23 16:28:07 leviathan systemd-coredump[14574]: Process 14571 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Oct 23 16:28:07 leviathan systemd[1]: Started Process Core Dump (PID 14574/UID 0).
Oct 23 16:28:07 leviathan systemd-coredump[14575]: Process 14571 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 14571:
#0 0x0000706a2cb0f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x0000706a2cb32361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x0000706a2cb1a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x0000706a2f19dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x0000706a2f182805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x0000706a2f183e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x0000706a2f184ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x0000706a2cd47fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x0000706a2f19dacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x0000706a2f182805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x0000706a2f183e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x0000706a2f37c71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x0000706a2ff82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#13 0x0000706a2ff8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#14 0x0000706a2ff85706 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x185706)
#15 0x0000706a2ffc1c67 n/a (libpython3.12.so.1.0 + 0x1c1c67)
#16 0x0000706a2ff82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#17 0x0000706a2ff8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#18 0x0000706a3004de35 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24de35)
#19 0x0000706a300724aa n/a (libpython3.12.so.1.0 + 0x2724aa)
#20 0x0000706a3006d24f n/a (libpython3.12.so.1.0 + 0x26d24f)
#21 0x0000706a300879d4 n/a (libpython3.12.so.1.0 + 0x2879d4)
#22 0x0000706a30087261 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x287261)
#23 0x0000706a300869bf _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x2869bf)
#24 0x0000706a3007f174 Py_RunMain (libpython3.12.so.1.0 + 0x27f174)
#25 0x0000706a3003c5ec Py_BytesMain (libpython3.12.so.1.0 + 0x23c5ec)
#26 0x0000706a2fc34e08 n/a (libc.so.6 + 0x25e08)
#27 0x0000706a2fc34ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x00005d718f9c0045 _start (python3.12 + 0x1045)
Stack trace of thread 14573:
#0 0x0000706a2fd1abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x0000706a2f2a3227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x0000706a2f23fa55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x0000706a2f23fab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x0000706a2f274026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x0000706a2fca339d n/a (libc.so.6 + 0x9439d)
#6 0x0000706a2fd2849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:07 leviathan systemd[1]: systemd-coredump@35-14574-0.service: Deactivated successfully.
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 5.
Oct 23 16:28:07 leviathan systemd[1195]: Started Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:07 leviathan redshift-gtk[14585]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14585]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14585]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan redshift-gtk[14585]: _gtk_style_provider_private_get_settings: assertion 'GTK_IS_STYLE_PROVIDER_PRIVATE (provider)' failed
Oct 23 16:28:07 leviathan systemd-coredump[14588]: Process 14585 (redshift-gtk) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Oct 23 16:28:07 leviathan kernel: redshift-gtk[14585]: segfault at 18 ip 000079ae8fb0f864 sp 00007ffd85cb2a60 error 4 in libgtk-3.so.0.2411.32[10f864,79ae8fa70000+3b5000] likely on CPU 3 (core 3, socket 0)
Oct 23 16:28:07 leviathan kernel: Code: 00 00 00 0f 1f 00 f3 0f 1e fa 48 8b 7f 10 48 85 ff 74 0b e9 9e 9c ff ff 66 0f 1f 44 00 00 55 48 89 d7 48 89 e5 e8 4c 2f 19 00 <48> 8b 40 18 48 8b 78 10 67 e8 ce f7 09 00 5d 48 89 c7 e9 75 9c ff
Oct 23 16:28:07 leviathan systemd[1]: Started Process Core Dump (PID 14588/UID 0).
Oct 23 16:28:07 leviathan iwd[1060]: event: connect-info, ssid: il-nidito, bss: 6a:07:b6:81:e4:8f, signal: -62, load: 49/255
Oct 23 16:28:07 leviathan iwd[1060]: event: state, old: autoconnect_full, new: connecting (auto)
Oct 23 16:28:07 leviathan systemd-coredump[14589]: Process 14585 (redshift-gtk) of user 1000 dumped core.
Stack trace of thread 14585:
#0 0x000079ae8fb0f864 n/a (libgtk-3.so.0 + 0x10f864)
#1 0x000079ae8fb32361 n/a (libgtk-3.so.0 + 0x132361)
#2 0x000079ae8fb1a915 n/a (libgtk-3.so.0 + 0x11a915)
#3 0x000079ae9203bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#4 0x000079ae92020805 n/a (libgobject-2.0.so.0 + 0x23805)
#5 0x000079ae92021e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#6 0x000079ae92022ed2 g_object_new (libgobject-2.0.so.0 + 0x25ed2)
#7 0x000079ae8fd47fea n/a (libgtk-3.so.0 + 0x347fea)
#8 0x000079ae9203bacb g_type_create_instance (libgobject-2.0.so.0 + 0x3eacb)
#9 0x000079ae92020805 n/a (libgobject-2.0.so.0 + 0x23805)
#10 0x000079ae92021e7f g_object_new_with_properties (libgobject-2.0.so.0 + 0x24e7f)
#11 0x000079ae921bd71e n/a (_gi.cpython-312-x86_64-linux-gnu.so + 0x1071e)
#12 0x000079ae92d82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#13 0x000079ae92d8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#14 0x000079ae92d85706 _PyObject_FastCallDictTstate (libpython3.12.so.1.0 + 0x185706)
#15 0x000079ae92dc1c67 n/a (libpython3.12.so.1.0 + 0x1c1c67)
#16 0x000079ae92d82818 _PyObject_MakeTpCall (libpython3.12.so.1.0 + 0x182818)
#17 0x000079ae92d8b5b7 _PyEval_EvalFrameDefault (libpython3.12.so.1.0 + 0x18b5b7)
#18 0x000079ae92e4de35 PyEval_EvalCode (libpython3.12.so.1.0 + 0x24de35)
#19 0x000079ae92e724aa n/a (libpython3.12.so.1.0 + 0x2724aa)
#20 0x000079ae92e6d24f n/a (libpython3.12.so.1.0 + 0x26d24f)
#21 0x000079ae92e879d4 n/a (libpython3.12.so.1.0 + 0x2879d4)
#22 0x000079ae92e87261 _PyRun_SimpleFileObject (libpython3.12.so.1.0 + 0x287261)
#23 0x000079ae92e869bf _PyRun_AnyFileObject (libpython3.12.so.1.0 + 0x2869bf)
#24 0x000079ae92e7f174 Py_RunMain (libpython3.12.so.1.0 + 0x27f174)
#25 0x000079ae92e3c5ec Py_BytesMain (libpython3.12.so.1.0 + 0x23c5ec)
#26 0x000079ae92a34e08 n/a (libc.so.6 + 0x25e08)
#27 0x000079ae92a34ecc __libc_start_main (libc.so.6 + 0x25ecc)
#28 0x00005b4b556f6045 _start (python3.12 + 0x1045)
Stack trace of thread 14587:
#0 0x000079ae92b1abb0 ppoll (libc.so.6 + 0x10bbb0)
#1 0x000079ae9211d227 n/a (libglib-2.0.so.0 + 0xc0227)
#2 0x000079ae920b9a55 g_main_context_iteration (libglib-2.0.so.0 + 0x5ca55)
#3 0x000079ae920b9ab2 n/a (libglib-2.0.so.0 + 0x5cab2)
#4 0x000079ae920ee026 n/a (libglib-2.0.so.0 + 0x91026)
#5 0x000079ae92aa339d n/a (libc.so.6 + 0x9439d)
#6 0x000079ae92b2849c n/a (libc.so.6 + 0x11949c)
ELF object binary architecture: AMD x86-64
Oct 23 16:28:07 leviathan systemd[1]: systemd-coredump@36-14588-0.service: Deactivated successfully.
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Main process exited, code=dumped, status=11/SEGV
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:07 leviathan kernel: wlan0: authenticate with 6a:07:b6:81:e4:8f (local address=e8:65:38:27:1d:2f)
Oct 23 16:28:07 leviathan kernel: wlan0: send auth to 6a:07:b6:81:e4:8f (try 1/3)
Oct 23 16:28:07 leviathan kernel: wlan0: send auth to 6a:07:b6:81:e4:8f (try 2/3)
Oct 23 16:28:07 leviathan kernel: wlan0: send auth to 6a:07:b6:81:e4:8f (try 3/3)
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Scheduled restart job, restart counter is at 6.
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Start request repeated too quickly.
Oct 23 16:28:07 leviathan systemd[1195]: redshift-gtk.service: Failed with result 'core-dump'.
Oct 23 16:28:07 leviathan systemd[1195]: Failed to start Redshift display colour temperature adjustment (GUI).
Oct 23 16:28:07 leviathan systemd[1195]: Stopped target Current graphical user session.
Oct 23 16:28:07 leviathan dbus-broker[1493]: Dispatched 16670 messages @ 1(±1)μs / message.
Oct 23 16:28:07 leviathan systemd[1195]: Stopping Accessibility services bus...
Oct 23 16:28:07 leviathan systemd[1195]: Stopping Virtual filesystem service...
Oct 23 16:28:07 leviathan systemd[1]: run-user-1000-gvfs.mount: Deactivated successfully.
Oct 23 16:28:07 leviathan systemd[1195]: Stopped Accessibility services bus.
Oct 23 16:28:07 leviathan systemd[1195]: Stopped Virtual filesystem service.
Oct 23 16:28:07 leviathan kernel: wlan0: authentication with 6a:07:b6:81:e4:8f timed out
Oct 23 16:28:08 leviathan iwd[1060]: event: connect-timeout, reason: 2
Oct 23 16:28:08 leviathan iwd[1060]: event: connect-failed, status: 1
Oct 23 16:28:08 leviathan iwd[1060]: event: state, old: connecting (auto), new: disconnected
Oct 23 16:28:08 leviathan iwd[1060]: event: state, old: disconnected, new: autoconnect_full
Oct 23 16:28:11 leviathan iwd[1060]: event: connect-info, ssid: il-nidito, bss: 62:07:b6:81:e4:8e, signal: -53, load: 26/255
Oct 23 16:28:11 leviathan iwd[1060]: event: state, old: autoconnect_full, new: connecting (auto)
Oct 23 16:28:11 leviathan kernel: wlan0: authenticate with 62:07:b6:81:e4:8e (local address=e8:65:38:27:1d:2f)
Oct 23 16:28:11 leviathan kernel: wlan0: send auth to 62:07:b6:81:e4:8e (try 1/3)
Oct 23 16:28:11 leviathan kernel: wlan0: authenticated
Oct 23 16:28:11 leviathan kernel: wlan0: associate with 62:07:b6:81:e4:8e (try 1/3)
Oct 23 16:28:11 leviathan kernel: wlan0: RX AssocResp from 62:07:b6:81:e4:8e (capab=0x1011 status=0 aid=2)
Oct 23 16:28:11 leviathan kernel: wlan0: associated
Oct 23 16:28:11 leviathan iwd[1060]: event: state, old: connecting (auto), new: connected
Oct 23 16:28:11 leviathan dhcpcd[1064]: wlan0: carrier acquired
Oct 23 16:28:11 leviathan dhcpcd[1064]: wlan0: IAID 38:27:1d:2f
Oct 23 16:28:11 leviathan dhcpcd[1064]: wlan0: adding address fe80::6cd9:ad2c:8ee4:8e27
Oct 23 16:28:11 leviathan kernel: wlan0: Limiting TX power to 30 (30 - 0) dBm as advertised by 62:07:b6:81:e4:8e
Oct 23 16:28:13 leviathan dhcpcd[1064]: wlan0: soliciting an IPv6 router
Oct 23 16:28:13 leviathan dhcpcd[1064]: wlan0: rebinding lease of 192.168.0.233
Oct 23 16:28:13 leviathan dhcpcd[1064]: wlan0: probing address 192.168.0.233/24
And now I've had a very different kind of crash, though it again seems graphics-card related. This time the X server didn't die; instead everything froze, except, weirdly, the mouse pointer, which would still creep slowly and jerkily across the screen when I moved the mouse. Nothing else responded: no keyboard, the power button did nothing, and the screen was otherwise totally frozen.
Oct 25 11:02:00 leviathan systemd[1189]: Starting Mailbox Syncronisation Service...
Oct 25 11:02:00 leviathan systemd[1189]: Starting DAV Synchronisation Service (Calendar and Contacts)...
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Inbox
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Trash
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Archives
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Spam
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Sent
Oct 25 11:02:01 leviathan email-sync[2429465]: Moving 0 messages to personal/Drafts
Oct 25 11:02:01 leviathan email-sync[2429465]: Synchronising with remote server
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Archives/2020/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Archives/2020/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Archives/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Archives/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Drafts/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Drafts/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Inbox/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Inbox/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Notes/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Notes/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Sent/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Sent/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Spam/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Spam/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Trash/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/Trash/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/Important/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/Important/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/Starred/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/Starred/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/.mbsyncstate
Oct 25 11:02:04 leviathan email-sync[2429582]: Note: Ignoring non-mail file: /home/gideon/mail/personal/[Gmail]/.uidvalidity
Oct 25 11:02:04 leviathan email-sync[2429582]: Processed 22 total files in almost no time.
Oct 25 11:02:04 leviathan email-sync[2429582]: No new mail.
Oct 25 11:02:04 leviathan systemd[1189]: Finished Mailbox Syncronisation Service.
Oct 25 11:02:05 leviathan systemd[1189]: Finished DAV Synchronisation Service (Calendar and Contacts).
Oct 25 11:02:54 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU: I'm not done with your previous command: SMN_C2PMSG_66:0x00000017 SMN_C2PMSG_82:0x00000000
Oct 25 11:02:54 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Failed to disable gfxoff!
Oct 25 11:03:00 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=5039016, emitted seq=5039018
Oct 25 11:03:00 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process picom pid 1393 thread picom:cs0 pid 1427
Oct 25 11:03:00 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Oct 25 11:03:05 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU: I'm not done with your previous command: SMN_C2PMSG_66:0x00000017 SMN_C2PMSG_82:0x00000000
Oct 25 11:03:05 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Failed to disable gfxoff!
Oct 25 11:03:36 leviathan bluetoothd[1054]: src/profile.c:ext_io_disconnected() Unable to get io data for Hands-Free Voice gateway: getpeername: Transport endpoint is not connected (107)
Oct 25 11:05:01 leviathan systemd[1189]: Starting Nextcloud Synchronisation Service...
Oct 25 11:05:01 leviathan nextcloud-sync[2433192]: Synchronising notes -> https://org.gtf.io/Notes
Oct 25 11:05:03 leviathan nextcloud-sync[2433192]: Synchronising Documents/Papers -> https://org.gtf.io/Papers
Oct 25 11:05:12 leviathan nextcloud-sync[2433192]: Synchronising Documents/Books -> https://org.gtf.io/Books
Oct 25 11:05:15 leviathan systemd[1189]: Finished Nextcloud Synchronisation Service.
Oct 25 11:06:34 leviathan kernel: INFO: task kworker/u98:12:2339594 blocked for more than 122 seconds.
Oct 25 11:06:34 leviathan kernel: Not tainted 6.11.3-arch1-1 #1
Oct 25 11:06:34 leviathan kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 25 11:06:34 leviathan kernel: task:kworker/u98:12 state:D stack:0 pid:2339594 tgid:2339594 ppid:2 flags:0x00004000
Oct 25 11:06:34 leviathan kernel: Workqueue: events_unbound commit_work
Oct 25 11:06:34 leviathan kernel: Call Trace:
Oct 25 11:06:34 leviathan kernel: <TASK>
Oct 25 11:06:34 leviathan kernel: __schedule+0x402/0x1440
Oct 25 11:06:34 leviathan kernel: ? dc_stream_get_scanoutpos+0x73/0xb0 [amdgpu 1400000003000000474e5500a38fdd2dd8475a01]
Oct 25 11:06:34 leviathan kernel: ? srso_alias_return_thunk+0x5/0xfbef5
Oct 25 11:06:34 leviathan kernel: ? dm_crtc_get_scanoutpos+0xab/0x120 [amdgpu 1400000003000000474e5500a38fdd2dd8475a01]
Oct 25 11:06:34 leviathan kernel: schedule+0x27/0xf0
Oct 25 11:06:34 leviathan kernel: schedule_timeout+0x12f/0x160
Oct 25 11:06:34 leviathan kernel: dma_fence_default_wait+0x1d8/0x250
Oct 25 11:06:34 leviathan kernel: ? __pfx_dma_fence_default_wait_cb+0x10/0x10
Oct 25 11:06:34 leviathan kernel: dma_fence_wait_timeout+0x108/0x140
Oct 25 11:06:34 leviathan kernel: drm_atomic_helper_wait_for_fences+0x155/0x1e0
Oct 25 11:06:34 leviathan kernel: commit_tail+0x2f/0x130
Oct 25 11:06:34 leviathan kernel: process_one_work+0x17b/0x330
Oct 25 11:06:34 leviathan kernel: worker_thread+0x2ce/0x3f0
Oct 25 11:06:34 leviathan kernel: ? __pfx_worker_thread+0x10/0x10
Oct 25 11:06:34 leviathan kernel: kthread+0xcf/0x100
Oct 25 11:06:34 leviathan kernel: ? __pfx_kthread+0x10/0x10
Oct 25 11:06:34 leviathan kernel: ret_from_fork+0x31/0x50
Oct 25 11:06:34 leviathan kernel: ? __pfx_kthread+0x10/0x10
Oct 25 11:06:34 leviathan kernel: ret_from_fork_asm+0x1a/0x30
Oct 25 11:06:34 leviathan kernel: </TASK>
Oct 25 11:06:42 leviathan systemd-logind[1059]: Power key pressed short.
Oct 25 11:06:42 leviathan systemd-logind[1059]: Powering off...
Oct 25 11:06:42 leviathan systemd-logind[1059]: System is powering down.
Oct 25 11:06:42 leviathan login[1165]: pam_unix(login:session): session closed for user gideon
Oct 25 11:06:42 leviathan login[1165]: pam_systemd(login:session): New sd-bus connection (system-bus-pam-systemd-1165) opened.
Oct 25 11:06:42 leviathan systemd[1]: Stopping Session 1 of User gideon...
Oct 25 11:06:42 leviathan systemd[1]: Removed slice Slice /system/modprobe.
Oct 25 11:06:42 leviathan systemd[1]: Removed slice Slice /system/systemd-coredump.
Oct 25 11:06:42 leviathan systemd[1]: system-systemd\x2dcoredump.slice: Consumed 10.746s CPU time, 146.5M memory peak.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Bluetooth Support.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Graphical Interface.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Multi-User System.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Login Prompts.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Host and Network Name Lookups.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Sound Card.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Timer Units.
Oct 25 11:06:42 leviathan systemd[1]: archlinux-keyring-wkd-sync.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Refresh existing PGP keys of archlinux-keyring regularly.
Oct 25 11:06:42 leviathan systemd[1]: backup-snapshots.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Local backup timer.
Oct 25 11:06:42 leviathan systemd[1]: freshclam.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Virus database update schedule.
Oct 25 11:06:42 leviathan systemd[1]: man-db.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Daily man-db regeneration.
Oct 25 11:06:42 leviathan systemd[1]: shadow.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Daily verification of password and group files.
Oct 25 11:06:42 leviathan systemd[1]: systemd-tmpfiles-clean.timer: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Oct 25 11:06:42 leviathan systemd[1]: Stopped target Trusted Platform Module.
Oct 25 11:06:42 leviathan systemd[1]: lvm2-lvmpolld.socket: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Closed LVM2 poll daemon socket.
Oct 25 11:06:42 leviathan systemd[1]: systemd-rfkill.socket: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Closed Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 25 11:06:42 leviathan bluetoothd[1054]: Terminating
Oct 25 11:06:42 leviathan systemd[1]: Stopping Bluetooth service...
Oct 25 11:06:42 leviathan systemd[1]: Stopping Getty on tty1...
Oct 25 11:06:42 leviathan systemd[1]: Starting Generate shutdown-ramfs...
Oct 25 11:06:42 leviathan mullvad-daemon[1057]: [mullvad_daemon::shutdown::platform][DEBUG] Process received signal: [Term]
Oct 25 11:06:42 leviathan systemd[1]: Stopping Mullvad VPN daemon...
Oct 25 11:06:42 leviathan systemd[1]: Stopping Nix Daemon...
Oct 25 11:06:42 leviathan systemd[1]: Stopping Authorization Manager...
Oct 25 11:06:42 leviathan systemd[1]: Stopping RealtimeKit Scheduling Policy Service...
Oct 25 11:06:42 leviathan systemd[1]: Stopping Load/Save OS Random Seed...
Oct 25 11:06:42 leviathan systemd[1]: systemd-udev-load-credentials.service: Deactivated successfully.
Oct 25 11:06:42 leviathan systemd[1]: Stopped Load udev Rules from Credentials.
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSink/sbc
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSource/sbc
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSink/sbc_xq_453
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSource/sbc_xq_453
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSink/sbc_xq_512
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSource/sbc_xq_512
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSink/sbc_xq_552
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSource/sbc_xq_552
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSink/faststream
Oct 25 11:06:42 leviathan bluetoothd[1054]: Endpoint unregistered: sender=:1.21 path=/MediaEndpoint/A2DPSource/faststream
Oct 25 11:06:42 leviathan bluetoothd[1054]: Battery Provider Manager destroyed
Oct 25 11:06:42 leviathan bluetoothd[1054]: Stopping SDP server
Oct 25 11:06:42 leviathan bluetoothd[1054]: Exit
Oct 25 11:06:43 leviathan systemd[1]: Stopping Daemon for power management...
Oct 25 11:06:43 leviathan pulseaudio[1426]: org.bluez.BatteryProviderManager1.UnregisterBatteryProvider() Failed: org.freedesktop.DBus.Error.NoReply:Remote peer disconnected
Oct 25 11:06:43 leviathan systemd[1]: run-credentials-systemd\x2dudev\x2dload\x2dcredentials.service.mount: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: bluetooth.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped Bluetooth service.
Oct 25 11:06:43 leviathan systemd[1]: nix-daemon.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped Nix Daemon.
Oct 25 11:06:43 leviathan systemd[1]: getty@tty1.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped Getty on tty1.
Oct 25 11:06:43 leviathan pulseaudio[1426]: Assertion 'pa_atomic_load(&(y)->_ref) > 0' failed at ../pulseaudio/src/modules/bluetooth/bluez5-util.c:2207, function pa_bluetooth_discovery_hook(). Aborting.
Oct 25 11:06:43 leviathan systemd-coredump[2434718]: Process 1426 (pulseaudio) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Oct 25 11:06:43 leviathan systemd[1]: run-credentials-getty\x40tty1.service.mount: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: polkit.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped Authorization Manager.
Oct 25 11:06:43 leviathan systemd[1]: polkit.service: Consumed 17.491s CPU time, 4.8M memory peak.
Oct 25 11:06:43 leviathan systemd[1]: rtkit-daemon.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped RealtimeKit Scheduling Policy Service.
Oct 25 11:06:43 leviathan systemd[1]: upower.service: Deactivated successfully.
Oct 25 11:06:43 leviathan systemd[1]: Stopped Daemon for power management.
Oct 25 11:06:43 leviathan systemd[1]: upower.service: Consumed 45.472s CPU time, 6.2M memory peak.
Oct 25 11:06:43 leviathan systemd-logind[1059]: Session 1 logged out. Waiting for processes to exit.
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: [mullvad_daemon::shutdown][ERROR] Error: Failed to determine if host is shutting down, assuming it is shutting down
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: Caused by: Failed to read SystemState property
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: Caused by: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: [mullvad_daemon::device][DEBUG] Account manager has stopped
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: [talpid_core::dns][INFO] Resetting DNS
Oct 25 11:06:43 leviathan mullvad-daemon[1057]: [talpid_core::tunnel_state_machine][DEBUG] Tunnel state machine exited
Oct 25 11:06:43 leviathan systemd[1]: Removed slice Slice /system/getty.
Oct 25 11:06:44 leviathan mullvad-daemon[1057]: [talpid_core::tunnel_state_machine][INFO] Tunnel state machine shut down
Oct 25 11:06:44 leviathan mullvad-vpn[1419]: [2024-10-25 11:06:44.015][verbose] GRPC Channel connectivity state changed to 0
Oct 25 11:06:44 leviathan mullvad-vpn[1419]: [2024-10-25 11:06:44.016][info] Disconnected from the daemon
Oct 25 11:06:44 leviathan mullvad-daemon[1057]: [mullvad_daemon][INFO] Mullvad daemon is quitting
Oct 25 11:06:44 leviathan systemd[1189]: pulseaudio.service: Main process exited, code=dumped, status=6/ABRT
Oct 25 11:06:44 leviathan systemd[1189]: pulseaudio.service: Failed with result 'core-dump'.
Oct 25 11:06:44 leviathan systemd[1189]: pulseaudio.service: Consumed 1min 5.820s CPU time, 86.4M memory peak.
Oct 25 11:06:44 leviathan systemd[1189]: pulseaudio.service: Scheduled restart job, restart counter is at 1.
Oct 25 11:06:44 leviathan systemd[1189]: Starting Sound Service...
Oct 25 11:06:44 leviathan mullvad-daemon[1057]: [mullvad_daemon][DEBUG] Process exiting with code 0
Oct 25 11:06:44 leviathan systemd[1]: mullvad-daemon.service: Deactivated successfully.
Oct 25 11:06:45 leviathan systemd[1]: Stopped Mullvad VPN daemon.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:45 leviathan pulseaudio[2434730]: Stale PID file, overwriting.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:46 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan dbus-broker-launch[1052]: Activation request for 'org.freedesktop.RealtimeKit1' failed.
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: ERROR unknown event type 37
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 11 with no TDs queued?
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:47 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:48 leviathan kernel: xhci_hcd 0000:09:00.0: ERROR unknown event type 37
Oct 25 11:06:48 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 15 with no TDs queued?
Oct 25 11:06:48 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 11 with no TDs queued?
Oct 25 11:06:48 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 9 with no TDs queued?
Oct 25 11:06:48 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 10500, index 2) beyond range (1315, 159)
Oct 25 11:06:49 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:49 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 10500, index 3) beyond range (1315, 159)
Oct 25 11:06:49 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 11828, index 4) beyond range (1481, 325)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 11828, index 5) beyond range (1481, 325)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 13156, index 6) beyond range (1647, 491)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1150 (reg 13156, index 7) beyond range (1647, 491)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 7396, index 2) beyond range (927, 1819)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 7396, index 3) beyond range (927, 1819)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 8724, index 4) beyond range (1093, 1985)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 8724, index 5) beyond range (1093, 1985)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 8724, index 6) beyond range (1093, 1985)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 762 (reg 8724, index 7) beyond range (1093, 1985)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Frame ID 1426 (reg 12708, index 7) beyond range (1591, 435)
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: Ignore frame ID field, use SIA bit instead
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:50 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: ERROR unknown event type 37
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 11 with no TDs queued?
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 15 with no TDs queued?
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 9 with no TDs queued?
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:52 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: ERROR unknown event type 37
Oct 25 11:06:53 leviathan kernel: retire_capture_urb: 160 callbacks suppressed
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 15 with no TDs queued?
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 11 with no TDs queued?
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN Event TRB for slot 1 ep 9 with no TDs queued?
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 8 on endpoint
Oct 25 11:06:53 leviathan kernel: xhci_hcd 0000:09:00.0: WARN: buffer overrun event for slot 1 ep 6 on endpoint
[the two xhci_hcd buffer overrun lines above repeat dozens of times within the same second]
Oct 25 11:06:54 leviathan mullvad-vpn[1419]: [2024-10-25 11:06:54.016][error] Failed to connect to daemon: Failed to connect before the deadline
Oct 25 11:06:54 leviathan mullvad-vpn[1419]: [2024-10-25 11:06:54.016][error] Failed to reconnect - Error: Failed to connect before the deadline
Offline
Actually it's not as different as I expected: all the crashes have had this "ring gfx timeout", but this time I got a lot more in the log than on previous occasions, and the X11 server did not die and leave me at a login console.
Specifically, "amdgpu: SMU: I'm not done with your previous command: SMN_C2PMSG_66:0x00000017 SMN_C2PMSG_82:0x00000000" is new, and then a lot of the other pieces are new, too.
One difference is that in previous crashes I've had alacritty instances running, which use the GPU to render, whereas I had no terminals open on this run -- I had just finished a Google Meet call.
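(In case it's useful for comparing the crashes across boots: assuming persistent journald logging, something like
journalctl -k -b -1 | grep -iE 'amdgpu.*(timeout|reset)'
pulls the amdgpu reset lines out of the previous boot; -k restricts output to kernel messages and -b -1 selects the boot before the current one.)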
Last edited by gtf21 (2024-10-25 10:26:57)
Offline
Sep 27 14:38:11 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process firefox pid 42423 thread firefox:cs0 pid 42492
Oct 23 16:28:04 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process picom pid 1418 thread picom:cs0 pid 1451
Oct 25 11:03:00 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process picom pid 1393 thread picom:cs0 pid 1427
The GPU resets after timeouts in FF and picom, and the reset causes havoc.
The pretext to the resets seems random; https://gitlab.freedesktop.org/drm/amd/-/issues/3006 suggests locking into a low performance mode.
"amdgpu.bapm=0 amdgpu.runpm=0 amdgpu.aspm=0 pcie_aspm=off amdgpu.ppfeaturemask=0xffff8", https://wiki.archlinux.org/title/Kernel_parameters and see https://wiki.archlinux.org/title/AMDGPU#Overclocking on how to select fixed performance modes/power profiles.
"amdgpu.ppfeaturemask=0xffff8" would seek to disable DPM, and since amdgpu.dpm=0 has a tendency to fail the boot entirely, be prepared for that - use the kernel command line editor of your bootloader.
Offline
Thanks, I'll try that. Do you have an intuition for what it _means_ for there to be these timeouts in FF/picom? I don't really have any intuition regarding what the timeout actually is, or what might be going wrong to cause it (and whether this is "normal behaviour" -- given the havoc it wreaks, it feels like it can't be that normal).
Offline
gnome on occasion kills itself over those timeouts (don't ask, the answer is stupid), so they last > 200ms, which is 12 frames at 60Hz (just to put things into perspective), and in that context this is typically linked to starting gnome or coming back from fullscreen windows (does that fit your experience?)
With the excessive delay (200ms is an eternity from a CPU perspective) and the fullscreen pattern and the linked bug, it's likely the GPU having a hard time waking up/switching gears.
Though ftr: is this a hybrid graphics or multiscreen setup?
The subsequent problems are likely just because of the VRAM loss due to the reset.
Offline
@gtf21
About your "but they're gaming related": Although the actual trigger might be caused by something different the result ends up similar. Hence my thoughrs are: If the result ends up the same maybe the issue higher up the chain is similar, too. Or to put it this: Not the gamibg or whatever you do is the issue in itself but merely the trigger to a common chain converging to the same result - an by this the reason could be some issue either with the driver, the kernel or some common hardware issue doing something wrong to what the typical code path expects and hence ending up in a crash. It's a bit like the intel FDIV bug which had a few bad entries in some lookup tables to speed up devision - and if you rely on a correct answer but don't account for bad input low level code tends to crash (see recent crowdstrike issue).
Offline
gnome on occasion kills itself
I’m not using any of GNOME I don’t think, although I guess a lot of stuff uses gtk.
in that context is typically linked to starting gnome or coming back from fullscreen windows (does that fit your experience?)
The occasions this happened to me were either:
1. While my screen was locked, I was out all day, and came home to an unlocked vtty post-crash.
2. Similar, but overnight.
3. Most recently, after ending a Google Meet call where I was using the background effects, which might fit the "game"-style graphics demand.
4. Oh and also another time but I can’t totally remember the details — I was using the machine though, but not gaming or anything, probably programming in alacritty (which uses the GPU for rendering).
So I don’t think it entirely fits, except maybe the latter two?
Though ftr: is this a hybrid graphics or multiscreen setup?
I only have the integrated graphics on my Ryzen 9 (7900) CPU, no dedicated GPU, and only one monitor.
If the result ends up the same maybe the issue higher up the chain is similar, too
Yeah agree on that, was just that the circumstances AND errors were different in a lot of those cases.
Unfortunately the issue is very sporadic so I’ll try the suggestions above and see if I can exercise the GPU a bit to get a failure condition. Might be a while before I know if anything has worked sadly.
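(One idea for forcing the issue, purely a guess at a reproducer rather than anything confirmed to trigger it: hammer the iGPU with glmark2 from the repos while watching the kernel log, e.g.
glmark2 --run-forever
in one terminal and
journalctl -kf
in another, and wait for the ring gfx timeout to show up.)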
Offline
I’m not using any of GNOME I don’t think, although I guess a lot of stuff uses gtk.
That's beside the point, I just used the gnome situation to gauge how long these timeouts take.
1 & 2 imply a fullscreen situation (the screenlocker is a fullscreen window)
3 was Google Meet fullscreen?
4. I also cannot remember the details
Unfortunately the issue is very sporadic so I’ll try
triggering it by turning your browser into fullscreen mode, wait 30s, unfullscreen it, fullscreen it wait 20s, unfullscreen it …
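A rough way to automate that, assuming xdotool is installed and F11 toggles fullscreen in the focused browser window (it does in firefox by default):
while true; do xdotool key F11; sleep 30; xdotool key F11; sleep 20; done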
Offline
triggering it by turning your browser into fullscreen mode, wait 30s, unfullscreen it, fullscreen it wait 20s, unfullscreen it …
The thing is that I use fullscreen quite a lot and this never happens when doing that (I use i3 so when I want to do something specific in a window I often just put it in fullscreen).
Offline
Ironically, just after writing that (having flicked in and out of fullscreen mode for a bit as suggested), we had another crash:
Oct 27 09:24:44 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=349676, emitted seq=349678
Oct 27 09:24:44 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process firefox pid 89676 thread firefox:cs0 pid 89746
And then, as I was typing this -- having booted with "amdgpu.bapm=0 amdgpu.runpm=0 amdgpu.aspm=0 pcie_aspm=off" but NOT the feature mask for disabling dpm -- it crashed _again_, and again firefox was the culprit.
I have now booted with the feature mask parameter as well, and will see how we fare.
As a sidenote: this is now way more frequent than it was before. So far, in the two months I've had this machine, it had happened ~2-3 times; in the last ~week, an additional 3 times.
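(For anyone following along: the parameters that actually took effect can be confirmed after boot with
cat /proc/cmdline
which prints the kernel command line the system booted with.)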
Offline
this is now way more frequent than it was before. So far, in the two months I've had this machine, it had happened ~2-3 times; in the last ~week, an additional 3 times.
You mean disabling the powersaving features made it worse?
Offline
You mean disabling the powersaving features made it worse?
Oh no, not yet at least — I’m going to wait and see. Enabling only some of them did make it “worse” (time-to-crash was very short), but there's too little data to really state that.
Offline
Ok, and it just happened again with all those kernel parameters, again while using firefox. I might try an LTS kernel to see if that helps? The last couple of times I've also started seeing this "sysrq: Failed to register input handler, error -22".
Oct 27 22:31:13 leviathan systemd[1178]: Finished Nextcloud Synchronisation Service.
Oct 27 22:33:48 leviathan kernel: sysrq: Failed to register input handler, error -22
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=75514, emitted seq=75516
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Process information: process firefox pid 42112 thread firefox:cs0 pid 42196
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Dumping IP State
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Dumping IP State Completed
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: MODE2 reset
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset succeeded, trying to resume
Oct 27 22:33:48 leviathan kernel: [drm] PCIE GART of 1024M enabled (table at 0x000000F41FC00000).
Oct 27 22:33:48 leviathan kernel: [drm] VRAM is lost due to GPU reset!
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: PSP is resuming...
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: reserve 0xa00000 from 0xf41e000000 for PSP TMR
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAS: optional ras ta ucode is not available
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAP: optional rap ta ucode is not available
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resuming...
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resumed successfully!
Oct 27 22:33:48 leviathan kernel: [drm] DMUB hardware initialized: version=0x05001C00
Oct 27 22:33:48 leviathan kernel: [drm] kiq ring mec 2 pipe 1 q 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.1.0 uses VM inv eng 1 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 4 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 5 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 6 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 7 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 8 on hub 0
Oct 27 22:33:48 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 9 on hub 0
Offline
started seeing this "sysrq: Failed to register input handler, error -22" as well.
Cause you enabled https://wiki.archlinux.org/title/Keyboa … el_(SysRq) ?
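One way to check, assuming the standard sysctl knob:
cat /proc/sys/kernel/sysrq
0 means disabled, 1 fully enabled, anything else is a bitmask of allowed SysRq functions.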
Offline
started seeing this "sysrq: Failed to register input handler, error -22" as well.
Cause you enabled https://wiki.archlinux.org/title/Keyboa … el_(SysRq) ?
That’s what’s odd about it — I haven’t (consciously) enabled sysrq (unless, _maybe_, physlock is messing with something but it’s not running when the crash happens).
Offline
Andddd it crashed again, this time on latest lts kernel. Every time it's when using firefox, and every time firefox is the culprit in the system log. The *only* other thing I've changed apart from doing a system upgrade every week is using physlock instead of xsecurelock precisely because when the X-server crashes, I don't want to be dumped into an unsecured session.
But now it's getting so frequent it's sort of absurd.
Offline
Same w/ chromium?
Firefox reading bbs.archlinux.org or playing youtube videos?
Offline
So, this was happening with two FF windows with mostly plain ol' webpages in them, about 10-15 tabs each, and in one of them I was typing in bbs.archlinux.org and then it crashed on me.
I've closed those two windows now (have other windows open, but again, just plain ol' webpages and things like slack, asana, google mail, etc.) and it has been pretty stable. The *other* thing I've done is set the min clock speed on the GPU using CoreCtrl to 1GHz (default was 400MHz) so I'll see if that helps, too.
So far, hasn't crashed since I closed the windows yesterday. Have not tried Chromium or other browsers.
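(In case CoreCtrl turns out not to persist across reboots, the underlying knob appears to be the sysfs performance level described on the AMDGPU wiki page - card0 is an assumption here, the index may differ:
cat /sys/class/drm/card0/device/power_dpm_force_performance_level
echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level    # as root
where "high" forces the highest DPM level instead of letting the clocks drop to the 400MHz floor.)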
Offline
Adding to the log, happened again, again in firefox. This time a slightly different set of logs including this interesting "Failed to initialize parser -125" after which Xorg dies. This happened after booting into the system fresh, and I hadn't yet run CoreCtrl to set the clock frequency on the GPU (I have no idea if that makes a difference or if CoreCtrl is persistent etc. -- the documentation is somewhat unclear).
Nov 04 08:46:35 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, signaled seq=58160, emitted seq=58162
Nov 04 08:46:35 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process firefox pid 21796 thread firefox:cs>
Nov 04 08:46:35 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Nov 04 08:46:36 leviathan kernel: [drm] REG_WAIT timeout 1us * 100000 tries - optc1_wait_for_state line:839
Nov 04 08:46:36 leviathan kernel: [drm] REG_WAIT timeout 1us * 100000 tries - optc1_wait_for_state line:839
Nov 04 08:46:36 leviathan kernel: [drm] REG_WAIT timeout 1us * 100000 tries - optc1_wait_for_state line:839
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: MODE2 reset
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset succeeded, trying to resume
Nov 04 08:46:36 leviathan kernel: [drm] PCIE GART of 1024M enabled (table at 0x000000F41FC00000).
Nov 04 08:46:36 leviathan kernel: [drm] PSP is resuming...
Nov 04 08:46:36 leviathan kernel: [drm] reserve 0xa00000 from 0xf41e000000 for PSP TMR
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAS: optional ras ta ucode is not available
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: RAP: optional rap ta ucode is not available
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resuming...
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU is resumed successfully!
Nov 04 08:46:36 leviathan kernel: [drm] DMUB hardware initialized: version=0x05001C00
Nov 04 08:46:36 leviathan kernel: [drm] kiq ring mec 2 pipe 1 q 0
Nov 04 08:46:36 leviathan kernel: [drm] VCN decode and encode initialized successfully(under DPG Mode).
Nov 04 08:46:36 leviathan kernel: [drm] JPEG decode initialized successfully.
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 11 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring sdma0 uses VM inv eng 12 on hub 0
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 8
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 8
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 8
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: ring jpeg_dec uses VM inv eng 5 on hub 8
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow start
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: recover vram bo from shadow done
Nov 04 08:46:36 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset(2) succeeded!
Nov 04 08:46:36 leviathan kernel: [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
Nov 04 08:46:36 leviathan systemd-coredump[53917]: Process 1404 (Xorg) of user 1000 terminated abnormally with signal 6/ABRT, processing...
Nov 04 08:46:36 leviathan systemd[1]: Started Process Core Dump (PID 53917/UID 0).
Nov 04 08:46:37 leviathan systemd-coredump[53918]: [?] Process 1404 (Xorg) of user 1000 dumped core.
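(Since systemd-coredump caught the Xorg crash, the backtrace should be retrievable with
coredumpctl list Xorg
coredumpctl info 1404
where 1404 is the Xorg PID from the log above - assuming the dump hasn't been rotated away yet.)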
Offline
Ok, even worse, it just crashed again (picom this time) but the whole system locked up, so the machine just ... died. Stuff like "watchdog: BUG: soft lockup - CPU#15 stuck for 22s" in the logs.
Now I'm really at a loss for what to do -- full log below.
Nov 04 09:34:49 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, signaled seq=481295, emitted seq=481297
Nov 04 09:34:49 leviathan kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process picom pid 54294 thread picom:cs0 pid 54327
Nov 04 09:34:49 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: GPU reset begin!
Nov 04 09:34:53 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: SMU: I'm not done with your previous command: SMN_C2PMSG_66:0x00000017 SMN_C2PMSG_82:0x00000000
Nov 04 09:34:53 leviathan kernel: amdgpu 0000:0c:00.0: amdgpu: Failed to disable gfxoff!
Nov 04 09:35:12 leviathan kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1514456744 wd_nsec: 1514448484
Nov 04 09:35:13 leviathan kernel: watchdog: BUG: soft lockup - CPU#15 stuck for 22s! [kworker/u48:1:146972]
Nov 04 09:35:13 leviathan kernel: Modules linked in: rfcomm ccm algif_aead crypto_null des_generic libdes ecb cmac algif_skcipher bnep md4 algif_hash af_alg 8021q garp mrp stp llc vfat fat intel_rapl_msr intel_rapl_common ext4 mbcache jbd2 mt7921e mt7921_common btusb snd_hda_codec_hdmi mt792x_lib btrtl edac_mce_amd asus_nb_wmi eeepc_wmi mt76_connac_lib btintel uvcvideo snd_hda_intel asus_wmi mt76 videobuf2_vmalloc btbcm ledtrig_audio snd_intel_dspcfg uvc kvm_amd snd_intel_sdw_acpi btmtk videobuf2_memops sparse_keymap snd_usb_audio platform_profile videobuf2_v4l2 mac80211 snd_usbmidi_lib kvm snd_hda_codec snd_ump videodev bluetooth snd_hda_core snd_rawmidi libarc4 videobuf2_common snd_hwdep snd_seq_device irqbypass ecdh_generic snd_pcm mc i8042 crc16 rapl snd_timer serio cfg80211 wmi_bmof sp5100_tco snd pcspkr i2c_piix4 k10temp soundcore rfkill igc mousedev gpio_amdpt joydev gpio_generic mac_hid nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_masq nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables
Nov 04 09:35:13 leviathan kernel: pkcs8_key_parser crypto_user loop fuse nfnetlink bpf_preload ip_tables x_tables dm_crypt cbc encrypted_keys trusted asn1_encoder tee dm_mod hid_logitech_hidpp hid_logitech_dj usbhid amdgpu crct10dif_pclmul crc32_pclmul polyval_clmulni i2c_algo_bit polyval_generic drm_ttm_helper gf128mul ttm ghash_clmulni_intel drm_exec sha512_ssse3 drm_suballoc_helper sha256_ssse3 amdxcp sha1_ssse3 drm_buddy aesni_intel gpu_sched crypto_simd nvme drm_display_helper cryptd ccp cec nvme_core xhci_pci video xhci_pci_renesas nvme_common wmi btrfs blake2b_generic libcrc32c crc32c_generic crc32c_intel xor raid6_pq
Nov 04 09:35:13 leviathan kernel: CPU: 15 PID: 146972 Comm: kworker/u48:1 Not tainted 6.6.59-1-lts #1 1400000003000000474e55005401f114c609ba54
Nov 04 09:35:13 leviathan kernel: Hardware name: ASUS System Product Name/ROG STRIX B650E-I GAMING WIFI, BIOS 2611 04/07/2024
Nov 04 09:35:13 leviathan kernel: Workqueue: amdgpu-reset-dev drm_sched_job_timedout [gpu_sched]
Nov 04 09:35:13 leviathan kernel: RIP: 0010:amdgpu_device_rreg.part.0+0x38/0xe0 [amdgpu]
Nov 04 09:35:13 leviathan kernel: Code: 00 55 89 f5 53 48 89 fb 4c 3b a7 f8 08 00 00 73 1b 83 e2 02 75 09 f6 87 e0 5e 04 00 10 75 79 4c 03 a3 00 09 00 00 45 8b 24 24 <eb> 12 4c 89 e6 48 8b 87 40 09 00 00 ff d0 0f 1f 00 41 89 c4 66 90
Nov 04 09:35:13 leviathan kernel: RSP: 0018:ffffc9002175fbe8 EFLAGS: 00000282
Nov 04 09:35:13 leviathan kernel: RAX: ffff88811540419c RBX: ffff888114e00000 RCX: 000000000000000f
Nov 04 09:35:13 leviathan kernel: RDX: 0000000000000000 RSI: 000000000000eca0 RDI: ffff888114e00000
Nov 04 09:35:13 leviathan kernel: RBP: 000000000000eca0 R08: 0000000000000000 R09: ffffc9002175f970
Nov 04 09:35:13 leviathan kernel: R10: ffff88901e1cffa8 R11: 0000000000000003 R12: 00000000ffffffff
Nov 04 09:35:13 leviathan kernel: R13: 0000000000000001 R14: ffff88814f7dd400 R15: 0000000000000000
Nov 04 09:35:13 leviathan kernel: FS: 0000000000000000(0000) GS:ffff88901e5c0000(0000) knlGS:0000000000000000
Nov 04 09:35:13 leviathan kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 04 09:35:13 leviathan kernel: CR2: 0000767842432f18 CR3: 00000008b8c20000 CR4: 0000000000f50ee0
Nov 04 09:35:13 leviathan kernel: PKRU: 55555554
Nov 04 09:35:13 leviathan kernel: Call Trace:
Nov 04 09:35:13 leviathan kernel: <IRQ>
Nov 04 09:35:13 leviathan kernel: ? watchdog_timer_fn+0x1b1/0x220
Nov 04 09:35:13 leviathan kernel: ? __pfx_watchdog_timer_fn+0x10/0x10
Nov 04 09:35:13 leviathan kernel: ? __hrtimer_run_queues+0x12f/0x2a0
Nov 04 09:35:13 leviathan kernel: ? hrtimer_interrupt+0xf8/0x230
Nov 04 09:35:13 leviathan kernel: ? __sysvec_apic_timer_interrupt+0x4a/0x110
Nov 04 09:35:13 leviathan kernel: ? sysvec_apic_timer_interrupt+0x6d/0x90
Nov 04 09:35:13 leviathan kernel: </IRQ>
Nov 04 09:35:13 leviathan kernel: <TASK>
Nov 04 09:35:13 leviathan kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
Nov 04 09:35:13 leviathan kernel: ? amdgpu_device_rreg.part.0+0x38/0xe0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: ? srso_alias_return_thunk+0x5/0xfbef5
Nov 04 09:35:13 leviathan kernel: gfx_v10_0_set_safe_mode+0x7e/0x1d0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: amdgpu_gfx_rlc_enter_safe_mode+0x50/0x70 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: gfx_v10_0_set_powergating_state+0x86/0x240 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: amdgpu_device_set_pg_state+0x96/0xf0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: ? __irq_put_desc_unlock+0x1c/0x50
Nov 04 09:35:13 leviathan kernel: amdgpu_device_ip_suspend_phase1+0x1a/0xe0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: ? srso_alias_return_thunk+0x5/0xfbef5
Nov 04 09:35:13 leviathan kernel: amdgpu_device_ip_suspend+0x1f/0x70 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: amdgpu_device_pre_asic_reset+0xd3/0x2a0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: amdgpu_device_gpu_recover+0x3e4/0xda0 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: ? ___drm_dbg+0x80/0xd0
Nov 04 09:35:13 leviathan kernel: amdgpu_job_timedout+0x187/0x270 [amdgpu 1400000003000000474e5500e77aad06ed6bcce3]
Nov 04 09:35:13 leviathan kernel: ? psi_task_switch+0xb7/0x200
Nov 04 09:35:13 leviathan kernel: drm_sched_job_timedout+0x82/0x120 [gpu_sched 1400000003000000474e5500e362ad263f1ecbea]
Nov 04 09:35:13 leviathan kernel: process_one_work+0x17d/0x350
Nov 04 09:35:13 leviathan kernel: worker_thread+0x315/0x450
Nov 04 09:35:13 leviathan kernel: ? __pfx_worker_thread+0x10/0x10
Nov 04 09:35:13 leviathan kernel: kthread+0xe5/0x120
Nov 04 09:35:13 leviathan kernel: ? __pfx_kthread+0x10/0x10
Nov 04 09:35:13 leviathan kernel: ret_from_fork+0x31/0x50
Nov 04 09:35:13 leviathan kernel: ? __pfx_kthread+0x10/0x10
Nov 04 09:35:13 leviathan kernel: ret_from_fork_asm+0x1b/0x30
Nov 04 09:35:13 leviathan kernel: </TASK>
Nov 04 09:35:41 leviathan kernel: watchdog: BUG: soft lockup - CPU#15 stuck for 49s! [kworker/u48:1:146972]
Nov 04 09:35:41 leviathan kernel: Modules linked in: rfcomm ccm algif_aead crypto_null des_generic libdes ecb cmac algif_skcipher bnep md4 algif_hash af_alg 8021q garp mrp stp llc vfat fat intel_rapl_msr intel_rapl_common ext4 mbcache jbd2 mt7921e mt7921_common btusb snd_hda_codec_hdmi mt792x_lib btrtl edac_mce_amd asus_nb_wmi eeepc_wmi mt76_connac_lib btintel uvcvideo snd_hda_intel asus_wmi mt76 videobuf2_vmalloc btbcm ledtrig_audio snd_intel_dspcfg uvc kvm_amd snd_intel_sdw_acpi btmtk videobuf2_memops sparse_keymap snd_usb_audio platform_profile videobuf2_v4l2 mac80211 snd_usbmidi_lib kvm snd_hda_codec snd_ump videodev bluetooth snd_hda_core snd_rawmidi libarc4 videobuf2_common snd_hwdep snd_seq_device irqbypass ecdh_generic snd_pcm mc i8042 crc16 rapl snd_timer serio cfg80211 wmi_bmof sp5100_tco snd pcspkr i2c_piix4 k10temp soundcore rfkill igc mousedev gpio_amdpt joydev gpio_generic mac_hid nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_masq nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables
Nov 04 09:35:41 leviathan kernel: pkcs8_key_parser crypto_user loop fuse nfnetlink bpf_preload ip_tables x_tables dm_crypt cbc encrypted_keys trusted asn1_encoder tee dm_mod hid_logitech_hidpp hid_logitech_dj usbhid amdgpu crct10dif_pclmul crc32_pclmul polyval_clmulni i2c_algo_bit polyval_generic drm_ttm_helper gf128mul ttm ghash_clmulni_intel drm_exec sha512_ssse3 drm_suballoc_helper sha256_ssse3 amdxcp sha1_ssse3 drm_buddy aesni_intel gpu_sched crypto_simd nvme drm_display_helper cryptd ccp cec nvme_core xhci_pci video xhci_pci_renesas nvme_common wmi btrfs blake2b_generic libcrc32c crc32c_generic crc32c_intel xor raid6_pq
Nov 04 09:35:41 leviathan kernel: CPU: 15 PID: 146972 Comm: kworker/u48:1 Tainted: G L 6.6.59-1-lts #1 1400000003000000474e55005401f114c609ba54
Offline
◉ modinfo amdgpu | grep -i reset
parm: reset_method:GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco/bamaco) (int)
"amdgpu.reset_method=1", https://wiki.archlinux.org/title/Kernel_parameters and then see whether any of them fares better…
Offline
So I have now tried all of these, but I think there must be a hardware issue, as I now can’t use the machine at all — within 10 mins of starting a graphical session it locks up. The frequency has steadily increased. This is on the LTS kernel, recently updated (~a week ago).
Offline