Hi all,
I've never understood EDID/monitor configuration (e.g. in /etc/X11/xorg.conf.d). I've been running a 4K monitor for around a year now and it works perfectly under Windows (laptop dual-boot), but under Linux it occasionally causes really annoying screen flickering and ugly artifacts, e.g. after I've been watching YouTube or other video. It only happens rarely, so I'm not really sure what triggers the flickering/artifact issue. Today I booted into Windows and installed a tool called "Monitor Asset Manager". I'll share a bit from its report:
Monitor #3 [Real-time 0x2100]
Model name............... BenQ PD2700U
Manufacturer............. BenQ
Plug and Play ID......... BNQ802E
Serial number............ ETD6L01081SL0
Manufacture date......... 2020, ISO week 25
Filter driver............ None
-------------------------
EDID revision............ 1.4
Input signal type........ Digital (DisplayPort)
Color bit depth.......... 10 bits per primary color
Color encoding formats... RGB 4:4:4, YCrCb 4:4:4, YCrCb 4:2:2
Screen size.............. 600 x 340 mm (27,2 in)
Power management......... Active off/sleep
Extension blocs.......... 1 (CEA/CTA-EXT)
-------------------------
DDC/CI................... Supported
MCCS revision............ 2.2
Display technology....... TFT
Controller............... RealTek 0x2797
Firmware revision........ 0.1
Firmware flags........... 0x0004003C
Active power on time..... 3471 hours
Power consumption........ Not supported
Current frequency........ 133,50kHz, 60,10Hz
...
...
Timing characteristics
Horizontal scan range.... 140-140kHz
Vertical scan range...... 40-60Hz
Video bandwidth.......... 600MHz
CVT standard............. Not supported
GTF standard............. Supported
Additional descriptors... None
Preferred timing......... Yes
Native/preferred timing.. 3840x2160p at 60Hz (16:9)
Modeline............... "3840x2160" 533,250 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
The beginning of the "xrandr"-command is:
Screen 0: minimum 8 x 8, current 7200 x 2560, maximum 32767 x 32767
DP-3.8 connected primary 3840x2160+0+400 (normal left inverted right x axis y axis) 597mm x 336mm
3840x2160 60.00*+ 59.94 50.00 29.97 29.97 25.00 23.98
2560x1440 59.95
1920x1080 60.00 59.94 50.00 29.97 25.00 23.98
1680x1050 59.95
1600x1200 60.00
So, the Windows tool reported the line: "Modeline............... "3840x2160" 533,250 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync", but I don't see anything like that in the xrandr output.
Furthermore, the Windows display is always crystal clear and sharp - but in Linux, this (occasional) screen flickering and the ugly artifacts are really beginning to annoy me. Should I do something like telling Linux to use an EDID file, so it drives the monitor properly and the display is always good?
More details: this screen flickering often (always?) starts when I begin watching this video (using awesome WM, if that matters): https://www.oracle.com/explore/gettings … vigation-2 - it's like an ultra-fast dark/white screen refresh, causing a really ugly kind of "stroboscopic effect" - and when I stop watching, the screen is okay again after a few minutes... It's annoying; it's as if Linux doesn't understand this 4K monitor properly...
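For reference, one way to point Linux at an EDID file is the kernel's drm.edid_firmware override. A minimal sketch only, assuming the connector shows up under /sys/class/drm and that kernel mode setting (nvidia-drm.modeset=1) is active; the connector name and the "benq.bin" file name here are made up:

```
# dump the monitor's current EDID to the firmware search path
# (card0-DP-3 is an assumption - check ls /sys/class/drm/)
mkdir -p /usr/lib/firmware/edid
cat /sys/class/drm/card0-DP-3/edid > /usr/lib/firmware/edid/benq.bin
# then boot with this on the kernel command line:
#   drm.edid_firmware=DP-3:edid/benq.bin
```

Whether the proprietary NVIDIA driver honors this override is worth verifying before relying on it.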
Last edited by newsboost (2022-11-19 20:44:59)
Offline
hmm, okay. Maybe it's not a monitor issue - maybe it's NVIDIA-related? I have a GeForce RTX 2060 Mobile. I've been looking more into this (searching/googling, reading forum posts) and suspect this might solve my issue: https://wiki.archlinux.org/title/NVIDIA … ag_on_Xorg - but I don't understand the instructions, because I'm supposed to modify:
/etc/environment
CLUTTER_DEFAULT_FPS=YOUR_MAIN_DISPLAY_REFRESHRATE
__GL_SYNC_DISPLAY_DEVICE=YOUR_MAIN_DISPLAY_OUTPUT_NAME
I suppose the refresh rate is just 59.94 (from xrandr, see above), but I don't really understand the instructions: what is "YOUR_MAIN_DISPLAY_OUTPUT_NAME"? I don't suppose it's either :0 (from the DISPLAY environment variable) or DP-3.8 (from xrandr)? And inside /etc/X11/ the only .conf file I have is ./xorg.conf.d/00-keyboard.conf... Does anyone know anything?
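Filled in with the names from the xrandr output above, the wiki's template would look something like the sketch below. The values are assumptions: DP-3.8 is the RandR output name, but whether the NVIDIA driver wants that form or its own DFP-style device name for __GL_SYNC_DISPLAY_DEVICE is something to verify:

```
# /etc/environment (sketch; names taken from the xrandr output above)
CLUTTER_DEFAULT_FPS=60
__GL_SYNC_DISPLAY_DEVICE=DP-3.8
```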
Offline
Whatever Windows reports there, it's the default reduced-blanking modeline:
% cvt12 3840 2160 60 -r
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 59.997 Hz; hsync: 133.312 kHz; pclk: 533.25 MHz
Modeline "3840x2160_60.00_rb1" 533.25 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
"xrandr --verbose" will print modelines. If you have "nvidia-drm.modeset=1" on your kernel commandline, you can query the output's EDID:
edid-decode < /sys/class/drm/card0-DP-3.8/edid
https://aur.archlinux.org/packages/edid-decode-git
__GL_SYNC_DISPLAY_DEVICE is only relevant if you've multiple outputs.
using awesome WM, if that matters
1. do you also run a compositor (picom)
2. watch that video with what? mpv? some browser?
3. does the entire output flicker or only the video window? Does it matter whether it's fullscreen?
You could also try to add and use the cvt12 r/b modeline:
% cvt12 3840 2160 60 -b
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 60.000 Hz; hsync: 133.320 kHz; pclk: 522.61 MHz
Modeline "3840x2160_60.00_rb2" 522.61 3840 3848 3880 3920 2160 2208 2216 2222 +hsync -vsync
Online
Whatever Windows reports there, it's the default reduced-blanking modeline:
% cvt12 3840 2160 60 -r
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 59.997 Hz; hsync: 133.312 kHz; pclk: 533.25 MHz
Modeline "3840x2160_60.00_rb1" 533.25 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
Ah, ok, great. I didn't know what "reduced blanking" meant, so I googled it and it seems to mean: "Reduced blanking has been created to save bandwidth on panel displays, where the sync and blanking may be reduced as there's no beam that has to be repositioned. What this means is that you can display the same resolution at the same frequency while using much less bandwidth and thus a lower dotclock". And then I found https://wiki.archlinux.org/title/xrandr#Screen_Blinking, which also sounds very relevant - just like what I need. So I tried:
# cvt12 3840 2160 60 -r
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 59.997 Hz; hsync: 133.312 kHz; pclk: 533.25 MHz
Modeline "3840x2160_60.00_rb1" 533.25 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
# xrandr --newmode "3840x2160_REDUCED" 533.25 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
# xrandr --addmode DP-3.8 3840x2160_REDUCED
# xrandr
Screen 0: minimum 8 x 8, current 7200 x 2560, maximum 32767 x 32767
DP-3.8 connected primary 3840x2160+0+400 (normal left inverted right x axis y axis) 597mm x 336mm
3840x2160 60.00*+ 59.94 50.00 29.97 29.97 25.00 23.98
3840x2160_REDUCED 60.00
2560x1440 59.95
1920x1080 60.00 59.94 50.00 29.97 25.00 23.98
1680x1050 59.95
...
...
# xrandr --output DP-3.8 --mode 3840x2160_REDUCED
(all screens went completely black for a second - and then came back up again)
# xrandr
Screen 0: minimum 8 x 8, current 7200 x 2560, maximum 32767 x 32767
DP-3.8 connected primary 3840x2160+0+400 (normal left inverted right x axis y axis) 600mm x 340mm
3840x2160 60.00 + 59.94 50.00 29.97 29.97 25.00 23.98
3840x2160_REDUCED 60.00*
2560x1440 59.95
1920x1080 60.00 59.94 50.00 29.97 25.00 23.98
1680x1050 59.95
...
...
hmm. But what now? Did this even make sense, what I did? I then tried switching to that mode - and in the final "xrandr" output the asterisk has moved to "3840x2160_REDUCED", so the mode seems to be active - but I'm not sure it helped or changed anything visually?
Unfortunately I've just upgraded everything and rebooted, and now I cannot recreate the screen-flickering situation - but I know it comes back from time to time, always at annoying times. And yes, I do use "picom"; I suspect it could also have an influence: picom usually behaves really nicely directly after a full reboot, but not after 1-2 weeks of hibernating every day.
"xrandr --verbose" will print modelines. If you have "nvidia-drm.modeset=1" on your kernel commandline, you can query the output's EDID:
edid-decode < /sys/class/drm/card0-DP-3.8/edid
https://aur.archlinux.org/packages/edid-decode-git
__GL_SYNC_DISPLAY_DEVICE is only relevant if you've multiple outputs.
1) I don't have nvidia-drm.modeset=1 on my kernel command line, because I usually try to keep things as simple as possible, and I don't currently understand how adding it would help my situation...
2) I tried "edid-decode" anyway - but it gave no output (you probably knew that). Now I'm not even sure this EDID has anything to do with my monitor/screen flickering issue...
3) About "__GL_SYNC_DISPLAY_DEVICE": I have 3 outputs: the laptop panel, an older monitor, and this relatively new 4K BenQ monitor, which is the one causing problems (probably because it requires a lot of bandwidth over the single DisplayPort cable - I'm daisy-chaining both external monitors via one DisplayPort cable - so maybe bandwidth is the culprit?)
using awesome WM, if that matters
1. do you also run a compositor (picom)
2. watch that video with what? mpv? some browser?
3. does the entire output flicker or only the video window? Does it matter whether it's fullscreen?
1) Yes, exactly, I do run picom. Next time I see the problem, maybe I should "pkill picom" and see if anything changes?
2) I watched that video just in my browser - either Chromium or Firefox (I just updated and rebooted, so unfortunately I cannot reproduce the issue right now, but I know it never stops appearing; it just comes back at "random" annoying times).
3) It's the entire screen that is affected - which again makes it sound like a bandwidth issue to me. But the issue isn't there on Windows, which annoys me... I wish I could reproduce the problem now, after the reboot. I think the flickering started during a transition effect (around 16 seconds into https://www.oracle.com/explore/gettings … vigation-2 the background fades from pure white to gray) - that's when the whole screen began flickering for a minute or maybe two. It usually stops a minute or so after I close the video that triggered it. About fullscreen or not: I believe it was the same in window mode and fullscreen - unfortunately I cannot reproduce it right now, immediately after a full system update + reboot (it'll come back at some point, I just don't know when)...
You could also try to add and use the cvt12 r/b modeline:
% cvt12 3840 2160 60 -b
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 60.000 Hz; hsync: 133.320 kHz; pclk: 522.61 MHz
Modeline "3840x2160_60.00_rb2" 522.61 3840 3848 3880 3920 2160 2208 2216 2222 +hsync -vsync
Ah, yes - this sounds like what I attempted at the beginning of this reply... But I'm not sure it changed anything at all:
$ xrandr --verbose --current
...
...
ConnectorType: DisplayPort
ConnectorNumber: 4
_ConnectorLocation: 4
non-desktop: 0
supported: 0, 1
3840x2160 (0x1bd) 533.250MHz +HSync -VSync +preferred
h: width 3840 start 3888 end 3920 total 4000 skew 0 clock 133.31KHz
v: height 2160 start 2163 end 2168 total 2222 clock 60.00Hz
3840x2160_REDUCED (0x238) 533.250MHz +HSync -VSync *current
h: width 3840 start 3888 end 3920 total 4000 skew 0 clock 133.31KHz
v: height 2160 start 2163 end 2168 total 2222 clock 60.00Hz
3840x2160 (0x1be) 593.410MHz +HSync +VSync
h: width 3840 start 4016 end 4104 total 4400 skew 0 clock 134.87KHz
v: height 2160 start 2168 end 2178 total 2250 clock 59.94Hz
3840x2160 (0x1bf) 594.000MHz +HSync +VSync
h: width 3840 start 4896 end 4984 total 5280 skew 0 clock 112.50KHz
v: height 2160 start 2168 end 2178 total 2250 clock 50.00Hz
...
...
I'm still a bit confused... And damn it, I cannot reliably reproduce the problem... Next time I'll check whether the problem persists after "pkill picom" and whether fullscreen matters... Assuming adding and switching to the reduced-blank mode works and helps avoid the flickering: if I want to make it persistent across reboots, do I put it in a startup script, or how does one usually do that? In any case, thanks a lot for the hints!
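On persistence: the usual approaches are either running the xrandr commands from a startup script, or declaring the modeline in an xorg.conf.d snippet. A sketch of the latter, assuming the driver accepts the RandR output name as the Identifier (the NVIDIA driver may want its own device naming instead - worth verifying):

```
# /etc/X11/xorg.conf.d/10-monitor.conf (sketch)
Section "Monitor"
    Identifier "DP-3.8"
    Modeline "3840x2160_REDUCED" 533.25 3840 3888 3920 4000 2160 2163 2168 2222 +hsync -vsync
    Option "PreferredMode" "3840x2160_REDUCED"
EndSection
```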
Offline
https://wiki.archlinux.org/title/Xrandr … esolutions
But as the verbose output shows, the CVT 1 reduced-blanking mode is already the default (probably from the EDID), so you'd have to try the CVT 1.2 one (522.61 MHz instead of 533.25)
That being said: certainly get the compositor out of the equation and also try the behavior w/ mpv.
Wrt reproducibility: does it maybe only happen after an S3 cycle (suspend to RAM)?
Online
Ah, thanks a lot!
But as the verbose output shows, the CVT 1 reduced-blanking mode is already the default (probably from the EDID), so you'd have to try the CVT 1.2 one (522.61 MHz instead of 533.25)
Aah, that didn't go so smoothly:
# cvt12 3840 2160 60 -b
# 3840x2160 @ 60.000 Hz Reduced Blank (CVT) field rate 60.000 Hz; hsync: 133.320 kHz; pclk: 522.61 MHz
Modeline "3840x2160_60.00_rb2" 522.61 3840 3848 3880 3920 2160 2208 2216 2222 +hsync -vsync
# xrandr --newmode "3840x2160_CVT" 522.61 3840 3848 3880 3920 2160 2208 2216 2222 +hsync -vsync
# xrandr --addmode DP-3.8 3840x2160_CVT
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 47
Current serial number in output stream: 48
hmm - but then again, I suppose this rules out that the modeline itself was wrong, right?
That being said: certainly get the compositor out of the equation and also try the behavior w/ mpv.
Wrt reproducibility: does it maybe only happen after an S3 cycle (suspend to RAM)?
* Yes, I'm worried about "picom"/compositor, I'll keep this in mind and remember to check up on that...
* About "mpv": I don't have it installed; I suppose both Chromium and Firefox have built-in media players... I'm also not sure it's always video playback that causes this - it's an annoying thing that has happened maybe once or twice a month (for a few minutes at a time). I'll install mpv and try other media players too, next time I see the issue. (I haven't done much about it until now, because the problem is difficult to pin down, but it's really annoying when it happens - hence this post, to hopefully learn how to fix it.)
* About the S3 cycle: yes, I use "systemctl suspend" almost every day - but I just tried suspending and powering on, and again I cannot reproduce it. It could definitely have something to do with S3/suspend-to-RAM, because that's what I do all the time... Maybe something in the NVIDIA driver doesn't resume properly after waking up, or maybe a single suspend cycle isn't enough... hmm, unfortunately the problem isn't there right now. I'll write an update if it reappears, which will hopefully give some additional insight... Thanks a lot for the really great ideas and feedback so far!
Last edited by newsboost (2022-11-20 16:08:06)
Offline
Aah, that didn't go so smoothly:
/etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device"
Identifier "Device0"
Driver "nvidia"
Option "ModeValidation" "AllowNonEdidModes"
EndSection
Online
/etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device"
Identifier "Device0"
Driver "nvidia"
Option "ModeValidation" "AllowNonEdidModes"
EndSection
I added this file, rebooted and tried the same - unfortunately, the same error:
# xrandr --newmode "3840x2160_CVT" 522.61 3840 3848 3880 3920 2160 2208 2216 2222 +hsync -vsync
# xrandr --addmode DP-3.8 3840x2160_CVT
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 47
Current serial number in output stream: 48
I checked /var/log/Xorg.0.log:
[ 61.277] (II) Applying OutputClass "nvidia" options to /dev/dri/card0
[ 61.277] (**) NVIDIA(0): Option "ModeValidation" "AllowNonEdidModes"
[ 61.277] (**) NVIDIA(0): Option "AllowEmptyInitialConfiguration"
[ 61.277] (**) NVIDIA(0): Enabling 2D acceleration
[ 61.277] (II) Loading sub module "glxserver_nvidia"
[ 61.277] (II) LoadModule: "glxserver_nvidia"
[ 61.277] (II) Loading /usr/lib/nvidia/xorg/libglxserver_nvidia.so
[ 61.304] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation"
[ 61.304] compiled for 1.6.99.901, module version = 1.0.0
[ 61.304] Module class: X.Org Server Extension
[ 61.304] (II) NVIDIA GLX Module 520.56.06 Thu Oct 6 21:26:26 UTC 2022
[ 61.305] (II) NVIDIA: The X server supports PRIME Render Offload.
...
[ 62.843] (II) NVIDIA(0): NVIDIA GPU NVIDIA GeForce RTX 2060 (TU106-A) at PCI:1:0:0
[ 62.843] (II) NVIDIA(0): (GPU-0)
[ 62.843] (--) NVIDIA(0): Memory: 6291456 kBytes
[ 62.843] (--) NVIDIA(0): VideoBIOS: 90.06.2f.00.ca
[ 62.843] (II) NVIDIA(0): Detected PCI Express Link width: 16X
[ 62.909] (--) NVIDIA(GPU-0): BenQ PD2700U (DFP-4.8): connected
[ 62.909] (--) NVIDIA(GPU-0): BenQ PD2700U (DFP-4.8): Internal DisplayPort
[ 62.909] (--) NVIDIA(GPU-0): BenQ PD2700U (DFP-4.8): GUID: 10DE9070-0005-EDEB-93B5-93030000009A
[ 62.909] (--) NVIDIA(GPU-0): BenQ PD2700U (DFP-4.8): 2660.0 MHz maximum pixel clock
[ 62.909] (--) NVIDIA(GPU-0):
...
[ 62.976] (**) NVIDIA(GPU-0): Mode Validation Overrides for BenQ PD2700U (DFP-4.8):
[ 62.976] (**) NVIDIA(GPU-0): AllowNonEdidModes
[ 63.083] (**) NVIDIA(GPU-0): Mode Validation Overrides for DELL U2711 (DFP-4.1):
[ 63.083] (**) NVIDIA(GPU-0): AllowNonEdidModes
[ 63.308] (**) NVIDIA(GPU-0): Mode Validation Overrides for AU Optronics Corporation
[ 63.308] (**) NVIDIA(GPU-0): (DFP-3):
[ 63.308] (**) NVIDIA(GPU-0): AllowNonEdidModes
[ 63.310] (==) NVIDIA(0):
[ 63.310] (==) NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
[ 63.310] (==) NVIDIA(0): will be used as the requested mode.
[ 63.310] (==) NVIDIA(0):
[ 63.311] (II) NVIDIA(0): Validated MetaModes:
[ 63.311] (II) NVIDIA(0):
[ 63.311] (II) NVIDIA(0): "DFP-3:nvidia-auto-select,DFP-4.1:nvidia-auto-select,DFP-4.8:nvidia-auto-select"
[ 63.311] (II) NVIDIA(0): Virtual screen size determined to be 8320 x 2160
[ 63.326] (--) NVIDIA(0): DPI set to (143, 144); computed from "UseEdidDpi" X config
[ 63.326] (--) NVIDIA(0): option
[ 63.326] (II) NVIDIA: Reserving 24576.00 MB of virtual memory for indirect memory
[ 63.326] (II) NVIDIA: access.
[ 63.337] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon
[ 63.337] (II) NVIDIA(0): may not be running or the "AcpidSocketPath" X
[ 63.337] (II) NVIDIA(0): configuration option may not be set correctly. When the
[ 63.337] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will
[ 63.337] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For
[ 63.337] (II) NVIDIA(0): details, please see the "ConnectToAcpid" and
[ 63.337] (II) NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X
[ 63.337] (II) NVIDIA(0): Config Options in the README.
[ 63.361] (II) NVIDIA(0): Setting mode "DFP-3:nvidia-auto-select,DFP-4.1:nvidia-auto-select,DFP-4.8:nvidia-auto-select"
...
So it sounds like it understood the "AllowNonEdidModes" option... I've googled a bit, tried to find similar issues, and tried:
# xrandr -q --verbose
....
....
3840x2160 (0x1bd) 533.250MHz +HSync -VSync *current +preferred
h: width 3840 start 3888 end 3920 total 4000 skew 0 clock 133.31KHz
v: height 2160 start 2163 end 2168 total 2222 clock 60.00Hz
3840x2160 (0x1be) 593.410MHz +HSync +VSync
h: width 3840 start 4016 end 4104 total 4400 skew 0 clock 134.87KHz
v: height 2160 start 2168 end 2178 total 2250 clock 59.94Hz
3840x2160 (0x1bf) 594.000MHz +HSync +VSync
h: width 3840 start 4896 end 4984 total 5280 skew 0 clock 112.50KHz
v: height 2160 start 2168 end 2178 total 2250 clock 50.00Hz
3840x2160 (0x1c0) 257.400MHz +HSync -VSync
h: width 3840 start 3848 end 3880 total 3920 skew 0 clock 65.66KHz
v: height 2160 start 2177 end 2185 total 2191 clock 29.97Hz
3840x2160 (0x1c1) 296.700MHz +HSync +VSync
h: width 3840 start 4016 end 4104 total 4400 skew 0 clock 67.43KHz
v: height 2160 start 2168 end 2178 total 2250 clock 29.97Hz
3840x2160 (0x1c2) 297.000MHz +HSync +VSync
h: width 3840 start 4896 end 4984 total 5280 skew 0 clock 56.25KHz
v: height 2160 start 2168 end 2178 total 2250 clock 25.00Hz
3840x2160 (0x1c3) 296.700MHz +HSync +VSync
h: width 3840 start 5116 end 5204 total 5500 skew 0 clock 53.95KHz
v: height 2160 start 2168 end 2178 total 2250 clock 23.98Hz
....
So these are the interesting modes, I guess... The 533.250MHz one seems to be the default that Windows used - and that I use(d). But the line with 593.410MHz sounds interesting too, doesn't it? (It has "+HSync +VSync" instead of "+HSync -VSync" - although I'm just guessing here.) So I tried:
# xrandr --newmode "3840x2160test" 593.410 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync
# xrandr --addmode DP-3.8 3840x2160test
# xrandr
Screen 0: minimum 8 x 8, current 7200 x 2560, maximum 32767 x 32767
DP-3.8 connected primary 3840x2160+0+400 (normal left inverted right x axis y axis) 597mm x 336mm
3840x2160 60.00*+ 59.94 50.00 29.97 29.97 25.00 23.98
3840x2160test 59.94
2560x1440 59.95
...
DP-4 disconnected (normal left inverted right x axis y axis)
3840x2160_CVT (0x290) 522.610MHz +HSync -VSync
h: width 3840 start 3848 end 3880 total 3920 skew 0 clock 133.32KHz
v: height 2160 start 2208 end 2216 total 2222 clock 60.00Hz
# xrandr --output DP-3.8 --mode 3840x2160test
The screen went black for 3-4 seconds - and then came back... Not sure this experiment makes sense... I've been googling, and some people suggest that changes in "nvidia-settings" could make a difference; I currently have "Sync to VBlank" + "Allow Flipping" + "Use Conformant Texture Clamping" enabled (I think they were enabled by default)... hmm, but I cannot reproduce the original problem, so I'm out of ideas... Thanks a lot so far, seth. I'll write an update if new things come up. I'm happy I've learned a bit about these modelines now - this thread gives me a few options and ideas to test the next time the screen behaves badly and starts flickering!
Offline
593.410MHz sounds interesting
It's a 50Hz mode
1) I don't have nvidia-drm.modeset=1 on my kernel command line, because I usually try to keep things as simple as possible, and I don't currently understand how adding it would help my situation...
https://wiki.archlinux.org/title/NVIDIA … de_setting
3) About "__GL_SYNC_DISPLAY_DEVICE": I have 3 outputs: the laptop panel, an older monitor, and this relatively new 4K BenQ monitor, which is the one causing problems (probably because it requires a lot of bandwidth over the single DisplayPort cable - I'm daisy-chaining both external monitors via one DisplayPort cable - so maybe bandwidth is the culprit?)
That is the most interesting question - did you try to trigger the issue on the same output, but NOT daisy-chained?
"Use Conformant Texture Clamping"
Texture seams in Quake 3 engine
Many games based on the Quake 3 engine set their textures to use the
"GL_CLAMP" clamping mode when they should be using "GL_CLAMP_TO_EDGE".
This was an oversight made by the developers because some legacy NVIDIA
GPUs treat the two modes as equivalent. The result is seams at the edges
of textures in these games. To mitigate this, older versions of the NVIDIA
display driver remap "GL_CLAMP" to "GL_CLAMP_TO_EDGE" internally to
emulate the behavior of the older GPUs, but this workaround has been
disabled by default. To re-enable it, uncheck the "Use Conformant Texture
Clamping" checkbox in nvidia-settings before starting any affected
applications.
This won't help to mitigate your situation for sure.
Online
593.410MHz sounds interesting
It's a 50Hz mode
I thought it was the "clock 59.94Hz" line. Are you sure it's 50 Hz and not 59.94 Hz? To me it looks like 3840x2160 (0x1be) 593.410MHz has horizontal: clock 134.87KHz and vertical: clock 59.94Hz, in the "xrandr -q --verbose" output?
1) I don't have nvidia-drm.modeset=1 on my kernel command line, because I usually try to keep things as simple as possible, and I don't currently understand how adding it would help my situation...
Yeah, thanks, I have been reading a bit about it... It's another variable in the equation that I probably can/should try... Maybe things are better with this option enabled, I might test it later (because I cannot reproduce it so easily, changing many variables is difficult because then I don't know what did the trick, if I find a solution)...
3) About "__GL_SYNC_DISPLAY_DEVICE": I have 3 outputs: the laptop panel, an older monitor, and this relatively new 4K BenQ monitor, which is the one causing problems (probably because it requires a lot of bandwidth over the single DisplayPort cable - I'm daisy-chaining both external monitors via one DisplayPort cable - so maybe bandwidth is the culprit?)
That is the most interesting question - did you try to trigger the issue on the same output, but NOT daisy-chained?
Do you think daisy-chaining itself should make a difference? Daisy-chaining is just the physical wiring, right? I have a single cable going to the 2 external monitors, as shown here: https://www.cablematters.com/Blog/Displ … n-monitors - I never thought that by itself should make a difference. The problem is that my laptop only has a single mini-DisplayPort output connector - I can't even remember if it has HDMI... Things might be better with a single monitor (if it's a bandwidth problem, at least). But I currently depend on 2 external monitors, so hmm... Also, because the problem is hard to reproduce, I'm reluctant to use a single monitor for a long time just waiting to see whether it eventually appears again...
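The bandwidth worry can at least be sanity-checked with rough numbers. A back-of-the-envelope sketch only - it assumes a DP 1.2 (HBR2) link, 8 bits per color, and roughly a 241.5 MHz (CVT-RB) pixel clock for the second 2560x1440 monitor; real MST payload accounting differs, so these are ballpark figures, not exact ones:

```shell
# Usable payload on a DP 1.2 HBR2 link: 4 lanes * 5.4 Gbps, minus 8b/10b coding:
awk 'BEGIN { printf "%.2f Gbps available\n", 4 * 5.4 * 8 / 10 }'
# Two daisy-chained streams at 24 bpp: 4K60 RB (533.25 MHz pixel clock)
# plus ~241.5 MHz (assumed CVT-RB clock) for the 2560x1440 monitor:
awk 'BEGIN { printf "%.2f Gbps needed\n", (533.25 + 241.5) * 24 / 1000 }'
```

Under these assumptions the two streams come out slightly over the link budget, which is at least consistent with the bandwidth suspicion (10 bits per color would make it worse).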
"Use Conformant Texture Clamping"
nvidia README wrote:
Texture seams in Quake 3 engine
Many games based on the Quake 3 engine set their textures to use the
"GL_CLAMP" clamping mode when they should be using "GL_CLAMP_TO_EDGE".
This was an oversight made by the developers because some legacy NVIDIA
GPUs treat the two modes as equivalent. The result is seams at the edges
of textures in these games. To mitigate this, older versions of the NVIDIA
display driver remap "GL_CLAMP" to "GL_CLAMP_TO_EDGE" internally to
emulate the behavior of the older GPUs, but this workaround has been
disabled by default. To re-enable it, uncheck the "Use Conformant Texture
Clamping" checkbox in nvidia-settings before starting any affected
applications.
This won't help to mitigate your situation for sure.
Ok, thanks a lot. One variable eliminated there, thanks... hmm, maybe I'll have to live with this - but I'll probably write if new interesting things come up... Thanks so far!
Offline
I thought it was the "clock 59.94Hz" line.
Yes, sorry - I looked at the 594MHz line.
xrandr --output DP-3.8 --mode 0x1be
But obviously the signal payload is even 11% higher than what you're currently running.
Do you think daisy-chaining itself should make a difference?
Yes. That is
if it's a bandwidth problem, at least
Online
xrandr --output DP-3.8 --mode 0x1be
But obviously the signal payload is even 11% higher than what you're currently running.
Yes, that looks really good. I also remember that the video fade/transition from white to grey was definitely not "smooth" at the moment the screen flickering began. So higher bandwidth could be a factor, I think.
I'm not sure I understand the bandwidth difference between the modes: for some reason the default mode "3840x2160 (0x1bd) 533.250MHz" runs at 60.0 Hz, while the number 2 mode "3840x2160 (0x1be) 593.410MHz" runs a slower refresh rate with vertical sync at 59.94Hz - but then the bandwidth increases? Is it something like: "we run vertical sync a bit slower, and slowing down a bit allows some extra bandwidth to draw what should be on the screen for every vertical sync signal"? That sounds promising, perhaps... I'll try it and update this thread if the problem recurs and this turns out to be the solution. I guess it's normal for a mode change to blank the screen for 3-4 seconds until the display is restored...
Do you think daisy-chaining itself should make a difference?
Yes. That is
if it's a bandwidth problem, at least
It's not a permanent solution - but you're right, it's a good variable to try next time it happens. Not a permanent solution, but it'll contribute to understanding the problem, thanks... Maybe another variable is electromagnetic interference: I could slide my mobile phone along the mini-DisplayPort cable to see if its radiation affects the signal. I didn't think of that earlier and I don't expect it to reveal anything (if it does, I should buy a high-quality mini-DisplayPort cable) - but it's good to have some variables to test when the problem appears again... I'll write back when there's hopefully something interesting to report (which can take a while, given the randomness of the problem)!
Offline
slower refresh rate with vertical sync at 59.94Hz - but then the bandwidth increases
Longer blank signal. "593.410MHz" is the relevant number for how much data needs to be pushed across the wire.
You can also try
xrandr --output DP-3.8 --mode 3840x2160 --rate 30
which will run the output at 30Hz and only require a 257.400MHz or 296.700MHz signal (you can select a mode via its ID to control whether you want the lower signal w/ reduced blanking)
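To make the "relevant number" concrete: the pixel clock is just horizontal total x vertical total x vertical refresh, so the clocks can be recomputed from the modelines in the "xrandr -q --verbose" output above - e.g. mode 0x1be (totals 4400x2250 at 59.94 Hz) and the 29.97 Hz reduced-blanking mode 0x1c0 (totals 3920x2191):

```shell
# pixel clock = htotal * vtotal * refresh; values from the verbose xrandr output
awk 'BEGIN { printf "%.3f MHz\n", 4400 * 2250 * 59.94 / 1e6 }'   # mode 0x1be
awk 'BEGIN { printf "%.3f MHz\n", 3920 * 2191 * 29.97 / 1e6 }'   # mode 0x1c0
```

This lands at ~593.4 MHz and ~257.4 MHz, matching the clocks xrandr reports up to rounding of the refresh values.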
Online
slower refresh rate with vertical sync at 59.94Hz - but then the bandwidth increases
Longer blank signal. "593.410MHz" is the relevant number for how much data needs to be pushed across the wire.
Ok, thanks a lot. I feel I understand this modeline phenomenon better now, so it's also easier to read up on by googling, compared to before I started this flickering thread...
You can also try
xrandr --output DP-3.8 --mode 3840x2160 --rate 30
which will run the output at 30Hz and only require a 257.400MHz or 296.700MHz signal (you can select a mode via its ID to control whether you want the lower signal w/ reduced blanking)
Yes, thanks a lot... I've been playing with it for about 2 hours now, and today, using that Oracle video, I can reproduce the issue most of the time. I've tried many things - including checking again in Windows: everything is crystal clear there when I play that Oracle cloud video (link in one of the top posts). Using "nvidia-settings" I noticed that my monitor has G-SYNC support, which Windows correctly detected. So I began reading https://wiki.archlinux.org/title/Variable_refresh_rate and tried to enable that. I modified my /etc/X11/xorg.conf.d/20-nvidia.conf file:
Section "Device"
Identifier "Device0"
Driver "nvidia"
# Option "ModeValidation" "AllowNonEdidModes"
Option "VariableRefresh" "true"
EndSection
After a reboot, "xrandr --props | grep -i vrr" should then show something - but it doesn't. I ignored that something seemed wrong and used nvidia-settings -> X Server Display Configuration -> Advanced -> "Allow G-SYNC on monitor not validated as G-SYNC Compatible" -> Apply. But I didn't see a noticeable difference in output quality, and the screen flickering still appeared around the moments where the Oracle video faded from white to grey (or grey to white)... Trying to enable G-SYNC on Linux might have been a dead end, but it would be interesting to hear if anyone has had good luck with it, and whether enabling it reduced ugly screen artifacts/flickering...
Anyway, I also have good news, because I made some progress. I found out that:
$ xrandr
Screen 0: minimum 8 x 8, current 7200 x 2560, maximum 32767 x 32767
DP-3.8 connected primary 3840x2160+0+400 (normal left inverted right x axis y axis) 600mm x 340mm
3840x2160 60.00 + 59.94 50.00* 29.97 29.97 25.00 23.98
...
....
$ xrandr --output DP-3.8 --mode 3840x2160 --rate 50
$ xrandr --output DP-3.8 --mode 3840x2160 --rate 30
$ xrandr --output DP-3.8 --mode 3840x2160 --rate 25
$ xrandr --output DP-3.8 --mode 3840x2160 --rate 24
Everything with a refresh rate of 50 or below looks much more "stable" - the flickering doesn't completely disappear, but the problem is drastically reduced. So although not a perfect workaround, I think "xrandr --output DP-3.8 --mode 3840x2160 --rate 50" is a really good compromise, based on around 1.5-2 hours of testing today... I don't think rate 24, 25 or 30 helps more than rate 50 - those modes are all roughly equally good (or bad), and 60 Hz is clearly the worst. That's great to learn and remember...
I also tried disabling the daisy-chain (although I'm not sure I did it the way I'm supposed to): I just unplugged the second external monitor. It didn't really make any difference - the screen flickering was the same... Later I read that I might need to go into the monitor menu and disable daisy-chaining, which I think is called "MST" there. But that can never be a good permanent solution, so reducing the refresh rate from 60 Hz to 50 Hz is currently the best I can do; the problem seems much reduced (at least based on today's tests)... I also tried moving my mobile phone along the connected DisplayPort cable, but the cable seems good enough - no sign of interference.
I also tried various combinations inside "nvidia-settings" (X Screen 0 -> OpenGL Settings -> Performance: Sync to VBlank, Allow Flipping, and the G-SYNC options) - but no luck: no noticeable difference... I suspect what Linux (or my Linux PC) needs is better detection/handling of G-SYNC, and maybe that's the difference between Linux and Windows. But reducing the refresh rate from 60 Hz to 50 Hz is a solution I can easily live with: the problem doesn't fully disappear, but the flickering is both reduced and stops after a few seconds - instead of after 30-90 seconds as before...
My current conclusion: to greatly reduce the screen flickering, go from 60 Hz to 50 Hz. Not an ideal solution, but currently good enough - the issue lasts only a few seconds instead of maybe 30-90 seconds as before... Thanks, seth!
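To make the 50 Hz workaround stick across reboots, one simple option is to run the xrandr command when X starts - a sketch only, assuming startup via ~/.xinitrc (with awesome, spawning it from rc.lua would work the same way):

```
# ~/.xinitrc (sketch) - force 50 Hz on the BenQ before the WM starts
xrandr --output DP-3.8 --mode 3840x2160 --rate 50
exec awesome
```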
Offline