I've been using the open source drivers, but I need to switch to the proprietary ones since I need CUDA support for my work. My laptop is an Asus K75VJ; the output of lspci is:
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 635M] (rev a1)
I'm using the following versions of xorg-server, lightdm, etc.:
xorg-server 1.18.3-1
lightdm 1:1.18.1-1
cuda 7.5.18-2
nvidia 364.19-2
nvidia-libgl 364.19-1
cudnn 5.0.4-1
I'm having various problems configuring my Xorg server properly.
My latest Xorg configuration: http://pastebin.com/AJmtmLxU
My latest Xorg error log: http://pastebin.com/gigAfZpA
Without a Device section I get EDID warnings/errors in my Xorg log and a black screen. I've been trying my hardest over the last few hours to make it work, but I'm not completely comfortable configuring the Xorg server, nor do I fully understand the latest Nvidia driver changes regarding KMS support (I have only a vague picture of what it means).
Any help getting my system to function is much appreciated.
EDIT:
lightdm is configured per the wiki: https://wiki.archlinux.org/index.php/NVIDIA_Optimus
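For reference, my setup follows the wiki's Optimus method: a display-setup script that routes the intel output through the nvidia provider. The provider names below are the common defaults, not necessarily what my machine reports; verify with `xrandr --listproviders`.

```shell
#!/bin/sh
# /etc/lightdm/display_setup.sh -- the wiki's Optimus display-setup script.
# "modesetting" and "NVIDIA-0" are the usual provider names; check yours
# with `xrandr --listproviders` before relying on this.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

# ...and point lightdm at it in /etc/lightdm/lightdm.conf:
#   [Seat:*]
#   display-setup-script=/etc/lightdm/display_setup.sh
```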
Last edited by nmiculinic (2016-05-10 21:38:54)
Not an Installation issue, moving to Multimedia...
You're using an Optimus laptop, but since it's an ASUS it's likely that your nvidia card is wired to the HDMI output and the VGA output is on the Intel card. This means on startup you'll only get a display on the HDMI output (which I guess you don't have wired up to anything); the VGA output will not work.
I recommend just using bumblebee; it's the least hassle. If you need the power of the nvidia card for specific applications, use primusrun. If you don't, you could even consider PRIME (nouveau only), but bumblebee will give you better battery life.
If you insist on the way you're currently trying, then get an HDMI monitor to see what you're doing.
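For the record, a minimal bumblebee setup on Arch is roughly the following. Treat it as a checklist rather than a script, since it needs root, a reboot, and the right hardware; `youruser` is a placeholder.

```shell
# Rough sketch of a bumblebee install on Arch (hardware-dependent).
pacman -S bumblebee mesa nvidia        # proprietary backend; or nouveau instead
gpasswd -a youruser bumblebee          # user must be in the bumblebee group
systemctl enable bumblebeed.service    # then reboot
optirun glxgears -info                 # should report the NVIDIA renderer
primusrun glxgears                     # primus backend, usually faster
```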
Allan-Volunteer on the (topic being discussed) mailn lists. You never get the people who matters attention on the forums.
jasonwryan-Installing Arch is a measure of your literacy. Maintaining Arch is a measure of your diligence. Contributing to Arch is a measure of your competence.
Griemak-Bleeding edge, not bleeding flat. Edge denotes falls will occur from time to time. Bring your own parachute.
Previously, when I used Mint, things worked out of the box with the proprietary drivers (I didn't try CUDA though).
I am not sure Bumblebee is the right option for me since I care about the CUDA capabilities of my graphics card. I've tried running Tensorflow (a library using CUDA) with the open source nouveau drivers, and it complained that it cannot find a CUDA-capable card.
Note: CUDA 7.5 is not compatible with GCC 6. This means that as of 2016-05-07, with the cuda and gcc packages from the official repositories, it is impossible to compile CUDA code. This will likely be fixed soon with CUDA 8. In the meantime, you can follow #Using CUDA with an older GCC.
Could this be your problem, nmiculinic?
Assuming that is solved, do you need graphics output from the nvidia card or just cuda support?
If just cuda support:
keep nvidia & nvidia-opencl but configure X to just use the intel card.
Verify if cuda works by compiling/running the examples.
See https://wiki.archlinux.org/index.php/GPGPU#CUDA for more details.
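If you go the intel-only route, a minimal sketch of the relevant xorg.conf section might look like the following. The BusID is taken from the lspci output at the top of the thread; double-check yours, and note the file name is just a convention.

```
# /etc/X11/xorg.conf.d/20-intel.conf -- minimal sketch: X on the intel GPU only.
# BusID matches the "00:02.0 VGA compatible controller: Intel ..." lspci line;
# verify it on your own machine.
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    BusID      "PCI:0:2:0"
EndSection
```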
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
(A works at time B) && (time C > time B ) ≠ (A works at time C)
I am not sure Bumblebee is the right option for me since I care about CUDA capabilities.....
I don't think using Bumblebee precludes you from using CUDA.
Matt
"It is very difficult to educate the educated."
Lone_Wolf wrote:
If just cuda support: keep nvidia & nvidia-opencl but configure X to just use the intel card. Verify if cuda works by compiling/running the examples.
I'm waiting for gcc5 to finish compiling and will test then. I was unable to compile the CUDA samples with the current config. Right now I'm back on the nouveau drivers... This is the output of xrandr:
Screen 0: minimum 8 x 8, current 1600 x 900, maximum 32767 x 32767
LVDS1 connected 1600x900+0+0 (normal left inverted right x axis y axis) 382mm x 215mm
1600x900 60.08*+
1368x768 60.00
1280x720 60.00
1024x768 60.00
1024x576 60.00
960x540 60.00
800x600 60.32 56.25
864x486 60.00
800x450 60.00
640x480 59.94
720x405 60.00
640x360 60.00
DP1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
VGA1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
Does that mean all rendering is happening on the Intel integrated graphics card?
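For reference, once gcc5 finishes I plan to sanity-check the toolchain with the deviceQuery sample, which also prints the card's compute capability. The paths below assume the Arch cuda package's default /opt/cuda prefix; adjust if yours differs.

```shell
# Build and run the deviceQuery sample from the cuda package.
# /opt/cuda is the Arch default prefix; copy the samples out first,
# since /opt is not writable as a regular user.
cp -r /opt/cuda/samples ~/cuda-samples
cd ~/cuda-samples/1_Utilities/deviceQuery
make
./deviceQuery | grep -i "capability"
```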
nmiculinic wrote:
Does that mean all rendering is happening on Intel integrated graphics card?
xrandr doesn't tell you which card is running what; run `xrandr --listproviders` for that. However, your xrandr output does look like everything is detected, probably a PRIME configuration.
I don't think you can do CUDA on nouveau though. You really should use bumblebee; it doesn't preclude CUDA at all.
OK, I've tried a few things and googled a lot more.
Since the recent 364.12 release it should be possible to have a PRIME setup with the proprietary drivers.
I'm trying bumblebee and it works for the glxgears test (that is, it's executed on the NVidia GPU), but:
* primusrun python doesn't see the GPU I need
* optirun python sees the GPU and works marvelously... except I learned I need CUDA compute capability 3.0, and my card has 2.1
Also, login time is noticeably longer under bumblebee.
I've tried using the proprietary drivers without bumblebee and managed to get tensorflow to see the GPU, but it failed due to some version clash... I think I might need to recompile tensorflow from source instead of using the prepackaged one. However, with this option I couldn't get a normal picture set up (I ran tf under tty1 to test).
I'll mark this as solved and leave this information for any future users encountering similar problems. Too bad I have an outdated GPU.