Ok, I had time to fiddle with this again today (I've just been using the iGPU on the laptop lately). Updated fully; now on kernel 4.20 and nvidia 415.25-5. Also updated the BIOS to 1.6.0.
yaworski's work has been merged into the development branch of Bumblebee now, so I installed bumblebee-git, and unloading modules with AlwaysUnloadKernelDriver=true works. However, when putting the nvidia GPU's RuntimePM to auto, I'm getting the same "pci 0000:01:00.0: Refused to change power state, currently in D3" errors. From rereading this forum thread, it seems I'm not the only one having these issues. So something has changed again, though I don't know whether it is BIOS related, kernel related, driver related, etc.
The scripts tyrells posted above work (though unloading the modules can be done with AlwaysUnloadKernelDriver instead). But I was hoping that the addition of AlwaysUnloadKernelDriver to Bumblebee would be enough (and it would have been 4 months ago). Even michelesr is using the PCI drop-and-rescan method in nvidia-xrun-pm.
@yaworski - Are you still able to use optirun without resorting to a rescan? (Hopefully you still see notifications for this forum thread.) Is anyone else able to get it to work without PCI-dropping the card and rescanning?
Offline
Ok, another strange development here. I am also running kernel 4.20 and nvidia 415.25-5, but with the BIOS updated to 1.7.0.
Note: all scripts are the same as the ones I listed in my previous post.
If I do the following:
execute opti-start.sh
execute optirun glxgears
execute opti-stop.sh
This is the strange part:
I can then execute optirun without having to run the opti-start.sh script again. The power controls of the PCIe bus and NVIDIA card are still on auto, and it seems to just work. Not sure what's going on, or whether the BIOS update plays a part.
@IngeniousDox would you be able to test this on your system?
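For anyone retesting this sequence, here is a small sketch to see what state the bridge and the card are actually in at each step (assuming the usual XPS 9570 addresses: root port at 0000:00:01.0, dGPU at 0000:01:00.0; check with lspci if yours differ):

``` bash
#!/bin/sh
# Print the runtime-PM setting and current state of the PCIe root port
# and the NVIDIA dGPU. Addresses are the usual XPS 9570 ones.
for dev in 0000:00:01.0 0000:01:00.0; do
    base="/sys/bus/pci/devices/$dev"
    if [ -d "$base" ]; then
        printf '%s: control=%s status=%s\n' "$dev" \
            "$(cat "$base/power/control")" \
            "$(cat "$base/power/runtime_status")"
    else
        printf '%s: not present\n' "$dev"
    fi
done
```

Running this before and after opti-start.sh / opti-stop.sh should show the pair moving between on/active and auto/suspended.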
Offline
It sounds like what I have experienced myself: if I boot without putting the dGPU to "auto", then use optirun once, then put the dGPU to "auto", optirun seems to keep working fine. I'm describing this exact behaviour on the GitHub issue where some of us have been talking (instead of on this forum thread):
https://github.com/Bumblebee-Project/Bu … -453474037
It feels like the nvidia dGPU has to be used one time; if it is put to "auto" afterwards, things go fine, and the kernel remembers/has the state needed to bring the dGPU from D3 back to On by loading modules. But if the dGPU has never been turned on, it simply doesn't have that state in the first place, so trying to turn it on by just loading the modules fails. Using the opti-start.sh script (or at least powering the PCI bus "on" and rescanning) seems to make it possible again, since it turns both on. So it might be that setting "auto" on boot is the real culprit.
Yaworski says he doesn't have an issue atm, but he hardly reboots his laptop. Anyway, I'll retest with 1.7.0 when I get a chance next week (the XPS 9570 isn't my daily driver atm).
Last edited by IngeniousDox (2019-01-13 17:45:17)
Offline
Recently the modified nvidia-xrun script from my fork stopped working correctly: it can no longer unload the nvidia modules, so it fails to turn off the card. For now I manually launch a workaround script afterwards to disable the card, but I couldn't find the reason for this behavior. Can anyone replicate it? Let me know if you find an explanation.
Offline
Recently the modified nvidia-xrun script from my fork stopped working correctly: it can no longer unload the nvidia modules, so it fails to turn off the card. For now I manually launch a workaround script afterwards to disable the card, but I couldn't find the reason for this behavior. Can anyone replicate it? Let me know if you find an explanation.
I've been having trouble with nvidia-xrun not unloading the nvidia module. On the 4.20 kernel I get a black screen after exiting xrun on tty2 and can't recover the screen. I just tried the LTS kernel and it works fine (no black screen), BUT it still outputs this:
waiting for X server to shut down (II) Server terminated successfully (0). Closing log file.
Unloading nvidia_drm module
rmmod: ERROR: Module nvidia_drm is in use
Unloading nvidia_modeset module
rmmod: ERROR: Module nvidia_modeset is in use by: nvidia_drm
Unloading nvidia module
rmmod: ERROR: Module nvidia is in use by: nvidia_modeset
Turning off nvidia GPU
OFF
Current state of nvidia GPU: 0000:01:00.0 ON
and bbswitch outputs this:
bbswitch: device 0000:01:00.0 is in use by driver 'nvidia', refusing OFF
Hope that helps, or that you can supply me with a fix.
Offline
After a few days of experimenting and testing suggestions from this thread I was able to get switchable graphics to work on my XPS (1.5.0 BIOS, 4.20 kernel and nvidia 415.25). In my case, the main issue was the automatic loading of the nvidia modules on boot, immediately after decrypting my drive. Blacklisting via the 'blacklist' option in /etc/modprobe.d/blacklist.conf didn't work. Additionally, together with the nvidia modules, ipmi_msghandler and ipmi_devintf were loaded, and they prevented unloading of nvidia. I had to use "install module_name /bin/false" to successfully block loading of these three modules.
However, this method also disables loading the modules manually with "modprobe -a". Renaming/removing the conf file removes this obstacle, so in my scripts I append/remove a .disable extension on the conf file which holds the install directive for the nvidia module.
In this configuration my power consumption is ~4W on the iGPU. After enabling the dGPU and loading the nvidia drivers it jumps to 6-7W. After disabling the GPU it goes back to ~4W.
Below I attach my guide (I usually prefer to write myself some instructions, as I will not remember the details when I have to do this again). I would be grateful if somebody could reproduce my results using this configuration (as I may have already forgotten some details). It is written in markdown.
# XPS 15 9570 - Nvidia Switchable
Guide to setup on/off operation of GPU. Based on works collected in [this](https://bbs.archlinux.org/viewtopic.php?id=238389) thread.
GPU management scripts were created by [tyrells](https://bbs.archlinux.org/viewtopic.php?pid=1825298#p1825298) to which manipulation of blacklist config was added.
## Packages
- nvidia
- bumblebee (for optirun)
- tlp (optional)
- powertop (optional - for verification)
- unigine-valley (aur, optional - for verification)
This guide should be easily adaptable to *xrun*, as *bumblebee* is only used for *optirun*.
## Configuration
### /etc/default/tlp
Add GPU to TLP **RUNTIME_PM_BLACKLIST**.
```
RUNTIME_PM_BLACKLIST="01:00.0"
```
### /etc/bumblebee/bumblebee.conf
```
Driver=nvidia
```
And in nvidia section:
```
PMMethod=none
```
### /etc/tmpfiles.d/nvidia_pm.conf
Allow the GPU to power off on boot:
```
w /sys/bus/pci/devices/0000:01:00.0/power/control - - - - auto
```
### /etc/X11/xorg.conf.d/01-noautogpu.conf
```
Section "ServerFlags"
Option "AutoAddGPU" "off"
EndSection
```
### /etc/X11/xorg.conf.d/20-intel.conf
```
Section "Device"
Identifier "Intel Graphics"
Driver "modesetting"
EndSection
```
## Create blacklist files
### /etc/modprobe.d/blacklist.conf
```
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
blacklist nv
blacklist nvidia
blacklist nvidia-drm
blacklist nvidia-modeset
blacklist nvidia-uvm
blacklist ipmi_msghandler
blacklist ipmi_devintf
```
### /etc/modprobe.d/disable-ipmi.conf
These modules are loaded together with nvidia and block its unloading. I do not need [ipmi](https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface) therefore I simply disabled this functionality.
```
install ipmi_msghandler /usr/bin/false
install ipmi_devintf /usr/bin/false
```
### /etc/modprobe.d/disable-nvidia.conf
```
install nvidia /bin/false
```
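To check that these install directives are actually in effect, a quick sketch (`--dry-run` only simulates the operation, so it is safe to run at any time):

``` bash
#!/bin/sh
# Ask modprobe what it *would* do; with the install-directive blacklist
# active, the output shows the /bin/false install command instead of an
# insmod line.
if command -v modprobe >/dev/null 2>&1; then
    modprobe --dry-run --verbose nvidia 2>&1 || true
else
    echo "modprobe not available on this system"
fi
```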
## Create GPU management scripts
Create the two following management scripts. Creating aliases for them is recommended.
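For example, something like this in ~/.bashrc (a sketch; the /usr/local/bin location is an assumption, adjust to wherever you keep the scripts):

``` bash
# Hypothetical aliases -- paths assume the scripts were copied to
# /usr/local/bin and made executable.
alias gpu-on='sudo /usr/local/bin/enableGpu.sh'
alias gpu-off='sudo /usr/local/bin/disableGpu.sh'
```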
### enableGpu.sh
``` bash
#!/bin/sh
# allow to load nvidia module
mv /etc/modprobe.d/disable-nvidia.conf /etc/modprobe.d/disable-nvidia.conf.disable
# remove NVIDIA card (currently in power/control = auto)
echo -n 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
sleep 1
# change PCIe power control
echo -n on > /sys/bus/pci/devices/0000\:00\:01.0/power/control
sleep 1
# rescan for NVIDIA card (defaults to power/control = on)
echo -n 1 > /sys/bus/pci/rescan
# load nvidia module
modprobe nvidia
```
Ignore the error *modprobe: ERROR: Error running install command for ipmi_devintf*, as this module was blacklisted in the previous step.
### disableGpu.sh
``` bash
#!/bin/sh
modprobe -r nvidia_drm
modprobe -r nvidia_uvm
modprobe -r nvidia_modeset
modprobe -r nvidia
# change NVIDIA card power control
echo -n auto > /sys/bus/pci/devices/0000\:01\:00.0/power/control
sleep 1
# change PCIe power control
echo -n auto > /sys/bus/pci/devices/0000\:00\:01.0/power/control
sleep 1
# lock system from loading the nvidia module
mv /etc/modprobe.d/disable-nvidia.conf.disable /etc/modprobe.d/disable-nvidia.conf
```
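It's worth verifying afterwards that the card really went to sleep; a minimal check (same 0000:01:00.0 address assumption as the scripts):

``` bash
#!/bin/sh
# Report whether the dGPU reached runtime suspend after disableGpu.sh.
# Expected output on success: "dGPU runtime state: suspended"
status_file=/sys/bus/pci/devices/0000:01:00.0/power/runtime_status
if [ -r "$status_file" ]; then
    echo "dGPU runtime state: $(cat "$status_file")"
else
    echo "dGPU sysfs node not found"
fi
```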
## Create service which locks GPU on shutdown
A service which locks the GPU on shutdown/restart, in case it was not disabled by the *disableGpu.sh* script, is necessary. Otherwise on the next boot nvidia will be loaded together with the *ipmi* modules (even though we blacklist them with the *install* directive) and it will not be possible to unload them.
### /etc/systemd/system/disable-nvidia-on-shutdown.service
```
[Unit]
Description=Disables Nvidia GPU on OS shutdown
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c "mv /etc/modprobe.d/disable-nvidia.conf.disable /etc/modprobe.d/disable-nvidia.conf || true"
[Install]
WantedBy=multi-user.target
```
## Enabling
Reload systemd daemons and enable service:
``` bash
systemctl daemon-reload
systemctl enable disable-nvidia-on-shutdown.service
```
## Final remarks
1. Reboot and verify that nvidia is not loaded ```lsmod | grep nvidia```
2. Disconnect charger and verify on *powertop* that power consumption is ~4W on idle (Dell XPS 4k, undervolt -168mV core / -145mV cache, disabled touchscreen, powertop --auto-tune)
3. Enable GPU by using script.
4. Verify if GPU is loaded by using ```nvidia-smi```
5. Run unigine-valley ```optirun unigine-valley```
6. Close all nvidia applications and disable the GPU.
7. Check power consumption again; it should have a similar value to before.
In my case I get ~4W on idle with the GPU disabled and ~6W with it enabled.
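To read numbers like these without a full powertop run, the battery's sysfs node can be queried directly (a sketch; the BAT0 name and which attributes exist vary by model and driver):

``` bash
#!/bin/sh
# Print the current battery discharge rate in watts. Depending on the
# driver, the battery exposes power_now (microwatts) or the pair
# current_now/voltage_now (microamps/microvolts).
bat=/sys/class/power_supply/BAT0
if [ -r "$bat/power_now" ]; then
    awk '{ printf "%.1f W\n", $1 / 1000000 }' "$bat/power_now"
elif [ -r "$bat/current_now" ] && [ -r "$bat/voltage_now" ]; then
    awk -v i="$(cat "$bat/current_now")" -v v="$(cat "$bat/voltage_now")" \
        'BEGIN { printf "%.1f W\n", i * v / 1e12 }'
else
    echo "no battery power readings available"
fi
```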
Edit (as this is less visible in the guide):
Again, great thanks to tyrells for his opti-start.sh and opti-stop.sh (renamed here to enableGpu.sh / disableGpu.sh, as that better describes their function), which serve as the main switch here.
Does anybody have an idea for a better way to handle the blacklisting than the file shuffling I implemented?
Last edited by Graff (2019-04-18 21:58:19)
Offline
Recently the modified nvidia-xrun script from my fork stopped working correctly: it can no longer unload the nvidia modules, so it fails to turn off the card. For now I manually launch a workaround script afterwards to disable the card, but I couldn't find the reason for this behavior. Can anyone replicate it? Let me know if you find an explanation.
Tried again this evening after updating the system and it seems to work fine now. Not sure what caused the issue, maybe a broken driver version.
Offline
Graff wrote:
[guide post quoted in full; see #106 above]
This worked like a charm on my Dell Precision 5530 with a Quadro P2000 card! I had the issue that the nvidia modules were loaded on boot by GNOME. Your scripts solved that perfectly. Although one strange thing happens to me: after enabling the GPU I'm unable to use it until I run nvidia-smi.
Offline
Glad I could help (as others helped me in this thread).
Although one strange thing that happens to me. After enabling the GPU I'm unable to use it until i run nvidia-smi.
Unfortunately, I am unable to reproduce that. If you cannot work it out, it may be a good idea to run nvidia-smi as the last line of the enableGpu.sh script (maybe with the output redirected to /dev/null).
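For reference, the suggested addition at the end of enableGpu.sh would look something like this (a sketch):

``` bash
# Last line of enableGpu.sh: poke the driver once so the device comes
# up, discarding the output; ignore failure so the script still exits 0.
nvidia-smi > /dev/null 2>&1 || true
```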
One thing more: I noticed that when the GPU is not disabled before a shutdown/restart, all modules (nvidia + ipmi) will be loaded on reboot (as the nvidia blacklist is disabled). Therefore I would recommend adding the following shutdown service to restore the blacklist.
/etc/systemd/system/disable-nvidia-on-shutdown.service
[Unit]
Description=Disables Nvidia GPU on OS shutdown
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c "mv /etc/modprobe.d/disable-nvidia.conf.disable /etc/modprobe.d/disable-nvidia.conf || true"
[Install]
WantedBy=multi-user.target
Offline
Glad I could help (as others helped me in this thread).
mreichardt wrote: Although one strange thing happens to me: after enabling the GPU I'm unable to use it until I run nvidia-smi.
Unfortunately, I am unable to reproduce that. If you cannot work it out, it may be a good idea to run nvidia-smi as the last line of the enableGpu.sh script (maybe with the output redirected to /dev/null).
Yes this is exactly what I did. This maybe specific to a Precision 5530 with a P2000.
One thing more: I noticed that when the GPU is not disabled before a shutdown/restart, all modules (nvidia + ipmi) will be loaded on reboot (as the nvidia blacklist is disabled). Therefore I would recommend adding the following shutdown service to restore the blacklist.
/etc/systemd/system/disable-nvidia-on-shutdown.service
[Unit]
Description=Disables Nvidia GPU on OS shutdown
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c "mv /etc/modprobe.d/disable-nvidia.conf.disable /etc/modprobe.d/disable-nvidia.conf || true"
[Install]
WantedBy=multi-user.target
Great idea! Will add that to my services
Offline
@Graff, that's nice and low power usage with undervolting / disabling the touchscreen. I'll have to look into that. How did you disable the touchscreen? Did you just disable it in the UEFI BIOS settings? I read something on the 9560 archwiki page saying that putting it into autosuspend also gives you savings, but I haven't yet looked into how to do that for the 9570.
And slightly more on-topic: I was thinking about also using "nvidia-smi" on boot to load the drivers once, then unload them and put the card to auto. Seems like the way to go.
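That idea could be sketched roughly like this, as a oneshot script run at boot (untested; it assumes the module isn't blocked at boot time and uses the usual 0000:01:00.0 address):

``` bash
#!/bin/sh
# Boot-time warm-up (sketch): load the nvidia driver once so the kernel
# records a valid power state for the dGPU, then unload everything and
# let the card runtime-suspend.
modprobe nvidia 2>/dev/null || exit 0     # bail out quietly if blocked/absent
nvidia-smi > /dev/null 2>&1 || true       # touch the device once
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia 2>/dev/null || true
ctl=/sys/bus/pci/devices/0000:01:00.0/power/control
[ -w "$ctl" ] && echo auto > "$ctl"
exit 0
```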
Offline
There is a setting for that in UEFI (1.5.0). I started with autosuspend (as on the 9560 wiki), but then disabled the touchscreen to test whether this would lower power consumption further. I think that thanks to that I can now get < 5W (consumption lowered by 0.25-0.5W). However, this may be a placebo effect, as I simply disabled it and tested after a reboot without further tests. It may also be tlp, as I created a config with all the tweaks I could find (e.g. touchscreen) and only then started to look into power consumption. As I have finally sorted out the GPU, I will probably go back and test this further.
And slightly more ontopic, I was thinking about also using "nvidia-smi" on boot to just load the drivers once, then unload and put the card to auto. Seems like the way to go.
Please post results if you manage to get that working, as this would be nice.
Last edited by Graff (2019-01-15 22:22:31)
Offline
Deleted
Last edited by Sangeppato (2019-01-17 02:40:33)
Offline
Is the touchscreen even the same as on the 9560? lsusb only shows:
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 27c6:5395
Bus 001 Device 002: ID 0cf3:e300 Qualcomm Atheros Communications
Bus 001 Device 004: ID 0c45:671d Microdia
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Which is, in order (if I'm correct): 3.0 bus, the fingerprint reader, bluetooth, webcam, 2.0 bus.
Dmesg shows for me that the touchscreen is registered as:
[ 12.858480] input: WCOM488E:00 056A:488E Touchscreen as /devices/pci0000:00/0000:00:15.0/i2c_designware.0/i2c-9/i2c-WCOM488E:00/0018:056A:488E.0001/input/input19
And Powertop notes:
Good I2C Device i2c-WCOM488E:00 has no runtime power management
But I figure this is just the low-power bus that connects the touchscreen to the motherboard. If it is actually the touchscreen, it seems we don't have to do anything.
Anyway, just adding "04f3:24a1" (as noted on the 9560 page) to the TLP config makes no real difference that I can see. Turning it off in the BIOS might have saved a little bit, though I find it hard to tell, and I'm not sure it's worth it (I admit to finding it fun to use the touchscreen sometimes, just because I can). I guess I'll wait until someone who knows about this can tell me how low-power touchscreen operation might work on our 9570.
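For reference, the 9560-style approach amounts to this in /etc/default/tlp (the 04f3:24a1 ID is the 9560 touchscreen quoted above; as discussed, the 9570's touchscreen may not be a USB device at all, so this is only a sketch):

```
USB_AUTOSUSPEND=1
USB_WHITELIST="04f3:24a1"
```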
Offline
Hi everyone. I just got an XPS 15 9570 and am facing the same difficulties. I have read everything I could find on the web about the optirun configuration.
I installed my Arch system (4.20.11) with i3 (4.16.1) and followed @Graff's guide in #106.
Everything is working well except that my power consumption is about 16W at idle. The average CPU usage is < 1% and the power consumption stays above 15W the whole time.
As I said, I followed @Graff's guide, and after rebooting I get these results:
$ lsmod | grep nvidia
>
$ lspci
>NVIDIA 3D controller ...
The bumblebee service is enabled and active.
It's not clear to me whether a good configuration has really been found yet (I still have 5 days in which I can return the machine to Dell).
Does anyone have an idea why I see 16W of consumption at idle? Is it because of the discrete NVIDIA GPU? What config did I miss?
Thanks a lot for your time.
all different - all equal
Offline
Hi everybody, sorry for the partially off-topic post, but the issue may be related to the Dell XPS 9570 graphics setup.
Has anybody tried the latest kernel (5.0)? On my laptop it won't start...
During boot gnome-shell seems to keep crashing (according to the journalctl logs); the screen is black and it flashes continuously.
The LTS kernel and kernel 4.20.13 work smoothly, no issues with them.
Disabling Wayland doesn't solve the problem either...
Something seems to be wrong between gdm, kernel 5.0 and this laptop's graphics setup.
Sorry again for being partially off-topic, but since I have followed this topic for a long time I know that you have experienced all my issues; maybe one of you has found a solution...
Offline
Hi ilpanich,
Same issue for me: kernel 5.0 doesn't start. I use xorg with i3wm and there is a problem when starting X.
I just blocked the linux-5.0.arch1-1 and virtualbox-host-modules-arch-6.0.4-12 packages from pacman updates and am staying on linux-4.20.13.arch1-1 for the moment, as I currently don't have the time to dig into it.
Interested in any solution too.
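For anyone wanting to do the same, the usual way to hold packages back is IgnorePkg in /etc/pacman.conf (the unversioned package names below correspond to the versioned packages mentioned above):

```
# /etc/pacman.conf
[options]
IgnorePkg = linux virtualbox-host-modules-arch
```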
all different - all equal
Offline
Hi ilpanich,
Same issue for me: kernel 5.0 doesn't start. I use xorg with i3wm and there is a problem when starting X.
I just blocked the linux-5.0.arch1-1 and virtualbox-host-modules-arch-6.0.4-12 packages from pacman updates and am staying on linux-4.20.13.arch1-1 for the moment.
Interested in any solution too.
My advice is to use the LTS kernel in these cases.
Does the laptop boot with the nouveau drivers installed instead of nvidia? A friend of mine had similar issues with his Asus ROG and kernel 5.0 and, if I recall correctly, he solved it by switching to the nouveau drivers.
Offline
Yann wrote: Hi ilpanich,
Same issue for me, kernel 5.0 doesn't start. I use xorg with i3wm and there is a problem when starting X.
I just blocked the linux-5.0.arch1-1 and virtualbox-host-modules-arch-6.0.4-12 packages from pacman updates and am staying on linux-4.20.13.arch1-1 for the moment.
Interested in any solution too.

My advice is to use the LTS kernel in these cases.
Does the laptop boot with the nouveau drivers installed instead of nvidia? A friend of mine had similar issues with his Asus ROG and kernel 5.0 and, if I recall correctly, he solved it switching to the nouveau drivers
Yes, booting with LTS kernel works, Also I had backed up a copy of the workin 4.20.13 kernel in a separate dir in boot (so it won't get overwritten by every update).
I can't move to nouveau driver since I use the discrete card only for cuda and computation, not supported in nouveau.
Offline
Sangeppato wrote: Yann wrote: Hi ilpanich,
Same issue for me, kernel 5.0 doesn't start. I use xorg with i3wm and there is a problem when starting X.
I just blocked the linux-5.0.arch1-1 and virtualbox-host-modules-arch-6.0.4-12 packages from pacman updates and am staying on linux-4.20.13.arch1-1 for the moment.
Interested in any solution too.

My advice is to use the LTS kernel in these cases.
Does the laptop boot with the nouveau drivers installed instead of nvidia? A friend of mine had similar issues with his Asus ROG and kernel 5.0 and, if I recall correctly, he solved it by switching to the nouveau drivers.

Yes, booting with the LTS kernel works. I have also backed up a copy of the working 4.20.13 kernel in a separate dir in /boot (so it won't get overwritten by every update).
I can't move to the nouveau driver since I use the discrete card only for CUDA and computation, which nouveau doesn't support.
I understand; it was just to find out whether the laptop boots at all without the proprietary driver.
Offline
Yes, booting with LTS kernel works
I can't move to nouveau driver since I use the discrete card only for cuda and computation, not supported in nouveau.
Same for me on both counts. I think it's not too surprising that the LTS kernel works. I will keep looking into this 5.0 kernel issue.
Last edited by Yann (2019-03-11 11:46:03)
all different - all equal
Offline
Just checked with linux-5.0.1.arch1-1 and the issue is still there: the built-in screen is black and the backlight blinks all the time. I don't think the nvidia driver is the issue, as it's not loaded automatically for me. I've also noticed that the performance of the whole system goes down; even with an external monitor connected I need to wait a long time before anything shows up. Even switching to a different console is very laggy.
Offline
Have a look here: https://bbs.archlinux.org/viewtopic.php?id=244867
Offline
Hello everyone. Sorry I haven't been able to check this thread for a while; I've been very busy and my system recently got wrecked (the same 5.0 kernel flickering issue, except that I completely broke my system while trying to fix it). I did not have time to fix it all up or reinstall Arch, so I have switched to Manjaro for the time being.
Graff wrote:
[full guide quoted; see post #106 above]
Edit (as this is less visible in guide):
Again great thanks to tyrells for his opti-run.sh and opti-stop.sh (renamed here enableGpu.sh / disableGpu.sh as it better states their function) which serve as main switch here.Somebody has any idea how to better solve blacklisting than file shuffling implemented by me?
I know this isn't a Manjaro forum, but I believe some feedback about this won't hurt anyone. I tried this solution on my Manjaro system and it worked for the most part. I had to make a small change though: at the end of the enableGpu script I had to add "modprobe nvidia", since the nvidia module did not get loaded by itself.
I will update the OP accordingly. Thanks for all your work.
Edit: Forget what I said. After running unigine-valley for some time with this method, ipmi_msghandler somehow got loaded, and it prevents nvidia from unloading. This might be a Manjaro issue though; I might try this again when I find time to install Arch.
Last edited by LazyLucretia (2019-03-19 19:11:46)
Online
I had to make a small change though: at the end of the enableGpu script, I had to add "modprobe nvidia" since nvidia module did not get loaded by itself.
I will update OP accordingly. Thanks for all your work.
You shouldn't have to, as Bumblebee loads the nvidia module when you start an application via optirun or primusrun and unloads it once the application has exited (though not always successfully). Once the nvidia module is loaded, the GPU drains your battery; that's the whole problem. It doesn't matter much when you're on AC, but it affects your battery life noticeably.
Offline