How to Access an NVIDIA GPU from a Docker Container
Nowadays, most machine learning applications need to run on an NVIDIA GPU to speed up training and inference. In this tutorial, we will see how you can use Docker for your machine learning application while still accessing your GPU(s), which makes your life easier when you want to share your work and/or deploy it to other machines.
Why Utilize Docker?
You probably already know that there are a lot of prerequisites to satisfy before you can install TensorFlow or PyTorch and start building your machine learning app. And if you didn’t know before, now you know ;)
You can follow the official procedures to install the NVIDIA GPU driver, CUDA, and the TensorFlow (or PyTorch) libraries from their respective websites. However, you may learn the hard way that installing all these libraries and drivers directly on your machine can easily break your setup, especially when different projects need different versions of the same libraries. That’s why I highly recommend installing TensorFlow/PyTorch inside a Docker container instead.
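As a quick preview of where this tutorial is headed, here is a minimal sketch. It assumes the NVIDIA driver and the NVIDIA Container Toolkit are already installed on the host (we will cover that later), and the image tags shown are just illustrative examples, not the only valid choices:

# Verify that a container can see the GPU (pick a CUDA image tag matching your setup)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Start a GPU-enabled TensorFlow container and list the GPUs it can access
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the first command prints the familiar nvidia-smi table and the second lists at least one GPU device, your container has working GPU access without CUDA or TensorFlow ever being installed on the host itself.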
“Docker is essentially a self-contained OS with all the dependencies necessary for a smooth installation.”