
#1 2018-03-28 06:30:04

glsarch
Member
Registered: 2018-03-27
Posts: 17

CPU/GPU - jupyter with keras and tensorflow GPU (optirun)

Hi,

I have a Dell XPS 15 with an NVIDIA 1050 Ti, and I have Bumblebee set up with the proprietary NVIDIA drivers (optirun).
I can see that this setup works by comparing "glxgears -info" with "optirun glxgears -info".

I am learning deep learning with Keras, TensorFlow and Theano, and I'd like to fully understand how to enable the CPU or the GPU. I am completely new to all of this (Jupyter, Python, Python virtual environments and deep learning on a GPU), so I might be missing something obvious.
I have a script that trains a model with Keras. I set up two Anaconda environments: "nb-ml" with GPU support and "tmp-tf" without GPU support.

When I execute the same script in these environments, I can definitely see a speed difference between the two. I can also print the number of GPUs available to Keras, or read the logs from TensorFlow. In these logs, it is obvious that one environment uses the CPU and the other one uses the GPU.
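For reference, this is the kind of check I mean (a minimal sketch, assuming the TensorFlow 1.x backend):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # GPU entries only show up in the environment that can actually use one.
    print(device_lib.list_local_devices())
    print(tf.test.is_gpu_available())  # True when a CUDA device is usable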

Now I have a question regarding Jupyter. I found online that I had to create a custom kernel in a kernel.json and add "optirun" to the list of arguments used to start the Python kernel.
I did that and it worked, but when I removed this kernel it kept working, so the GPU is used even without this kernel and without optirun.
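The custom kernel.json looked roughly like this (a sketch, not my exact file; the display name is arbitrary):

    {
      "argv": ["optirun", "python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
      "display_name": "Python 3 (optirun)",
      "language": "python"
    }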

First question -> I don't know how this can work. I was expecting to need a kernel started with optirun to be able to use the GPU with Keras in my notebook. Does anybody know?

For information, if I start jupyter notebook from the "tmp-tf" Anaconda environment (without GPU), the kernel does not use the GPU.
If I start jupyter notebook from the "nb-ml" Anaconda environment (with GPU), the kernel uses the GPU.
-> the GPU is used regardless of the kernel selected (with optirun or not)

Basically, I would expect that if I have two kernels configured for Jupyter in my Anaconda environment, one CPU-only (= no optirun command) and one with GPU support (= optirun command in kernel.json), I could switch between the two kernels in Jupyter and this would enable/disable GPU support. That doesn't seem to be the case, and I don't understand why.

What I would like to achieve is to have tensorflow-gpu installed in all my Anaconda environments and to create two Jupyter kernels, one with optirun and one without. Then, in my notebook, I would switch between the two kernels to use the GPU or not.
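One idea I've seen (untested on my side, so treat it as an assumption): instead of relying on optirun for the CPU-only kernel, hide the GPU from CUDA via the kernel's "env" block, since CUDA applications ignore devices excluded by CUDA_VISIBLE_DEVICES:

    {
      "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
      "display_name": "Python 3 (CPU only)",
      "language": "python",
      "env": {"CUDA_VISIBLE_DEVICES": "-1"}
    }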

Maybe my installation is a mess because I tried to install tensorflow, keras and so on on my real Arch system and not in a virtual environment. Maybe some configuration is taken from my real system?
Basically, I should never have to install a Python library from pacman directly, correct?
I should always install my dependencies using pip or Anaconda in a virtual environment. This way, my Arch installation only contains the libraries that are needed by applications, not the ones I need for my development.

Does anybody here see what I am doing wrong, or where my logic/expectations fail?

Thank you very much.

------

edit 1: I uninstalled keras, tensorflow and so on from my Arch system, and now I can't use the GPU with Jupyter even with the optirun kernel, so I will need to figure out why. I will probably understand a few things in the process. Does anybody know what happened?
edit 2: apparently, you need at least cuda and cudnn installed on Arch to be able to use the GPU; cuda/cudnn in a virtual environment doesn't seem to work. All the other packages can be installed in the virtual environment (keras-gpu, tensorflow-gpu, ...). Basically, I just need to understand how the normal Python kernel without "optirun" supports the GPU in Jupyter, when I would expect only the Python 3 GPU kernel with the optirun args to do so.
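To see where operations actually land, device placement logging should help (a sketch assuming TF 1.x):

    import tensorflow as tf

    # Logs on stderr which device (CPU:0 / GPU:0) each op is placed on.
    sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
    print(sess.run(tf.constant('hello')))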

Last edited by glsarch (2018-03-29 06:50:11)


#2 2018-03-30 07:38:51

glsarch
Member
Registered: 2018-03-27
Posts: 17

Re: CPU/GPU - jupyter with keras and tensorflow GPU (optirun)

Hi,

I have tried to use tf.device() in my notebook, but there doesn't seem to be any change.

https://gitlab.com/gillouche/notebooks- … work.ipynb

Does anybody know how I can compare the training speed difference between the CPU and the GPU in the same notebook?
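For context, the kind of comparison I'm attempting looks like this (a sketch assuming TF 1.x; raw matmuls rather than real training, and the matrix size is arbitrary):

    import time
    import tensorflow as tf

    def time_matmul(device_name):
        # Build a fresh graph pinned to one device and time a large matmul.
        tf.reset_default_graph()
        with tf.device(device_name):
            a = tf.random_normal([4000, 4000])
            b = tf.random_normal([4000, 4000])
            c = tf.matmul(a, b)
        with tf.Session() as sess:
            start = time.time()
            sess.run(c)
            return time.time() - start

    print('CPU:', time_matmul('/cpu:0'))
    print('GPU:', time_matmul('/gpu:0'))  # raises an error if no GPU is visible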

