Inside this tutorial you will learn how to configure your Ubuntu 18.04 machine for deep learning with TensorFlow and Keras.
Configuring a deep learning rig is half the battle when getting started with computer vision and deep learning. I take pride in providing high-quality tutorials that can help you get your environment prepared to get to the fun stuff.
This guide will help you set up your Ubuntu system with the deep learning tools necessary for (1) your own projects and (2) my book, Deep Learning for Computer Vision with Python.
All that is required is Ubuntu 18.04, some time/patience, and optionally an NVIDIA GPU.
If you’re an Apple user, you can follow my macOS Mojave deep learning installation instructions!
To learn how to configure Ubuntu for deep learning with TensorFlow, Keras, and mxnet, just keep reading.
Ubuntu 18.04: Install TensorFlow and Keras for Deep Learning
On January 7th, 2019, I released version 2.1 of my deep learning book to existing customers (free upgrade as always) and new customers.
Accompanying the code updates for compatibility are brand new pre-configured environments which remove the hassle of configuring your own system. In other words, I put the sweat and time into creating near-perfect, usable environments that you can fire up in less than 5 minutes.
This includes an updated (1) VirtualBox virtual machine, and (2) Amazon machine instance (AMI):
- The deep learning VM is self-contained and runs in isolation on your computer in any OS that will run VirtualBox.
- My deep learning AMI is actually freely available to everyone on the internet to use (charges apply for AWS fees of course). It is a great option if you don’t have a GPU at home/work/school and you need to use one or many GPUs for training a deep learning model. This is the same exact system I use when deep learning in the cloud with GPUs.
While some people can get by with either the VM or the AMI, you’ve landed here because you need to configure your own deep learning environment on your Ubuntu machine.
The process of configuring your own system isn’t for the faint of heart, especially for first-timers. If you follow the steps carefully and take extra care with the optional GPU setup, I’m sure you’ll be successful.
And if you get stuck, just send me a message and I’m happy to help. DL4CV customers can use the companion website portal for faster responses.
Let’s begin!
Step #1: Install Ubuntu dependencies
Before we start, fire up a terminal or SSH session. SSH users may elect to use a program called screen (if you are familiar with it) to ensure your session is not lost if your internet connection drops.
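If you'd like to try screen, here is a minimal sketch of how a session might be started and resumed (the session name dl-setup is just an example, not part of the tutorial itself):

$ sudo apt-get install screen
$ screen -S dl-setup

If your connection drops, SSH back in and resume the session with:

$ screen -r dl-setup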
When you’re ready, go ahead and update your system:
$ sudo apt-get update
$ sudo apt-get upgrade
Let’s install development tools, image and video I/O libraries, GUI packages, optimization libraries, and other packages:
$ sudo apt-get install build-essential cmake unzip pkg-config
$ sudo apt-get install libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev
$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libopenblas-dev libatlas-base-dev liblapack-dev gfortran
$ sudo apt-get install libhdf5-serial-dev
$ sudo apt-get install python3-dev python3-tk python-imaging-tk
CPU users: Skip to “Step #5”.
GPU users: CUDA 9 requires gcc v6, but Ubuntu 18.04 ships with gcc v7, so we need to install gcc and g++ v6:
$ sudo apt-get install gcc-6 g++-6
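If you want to be sure the compilers landed correctly, you can check their versions (a quick, optional sanity check):

$ gcc-6 --version
$ g++-6 --version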
Step #2: Install latest NVIDIA drivers (GPU only)
This step is for GPU users only.
Note: This section differs quite a bit from my Ubuntu 16.04 deep learning installation guide so make sure you follow it carefully.
Let’s go ahead and add the NVIDIA PPA repository to Ubuntu’s apt package manager:
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
Now we can very conveniently install our NVIDIA drivers:
$ sudo apt install nvidia-driver-396
Go ahead and reboot so that the drivers will be activated as your machine starts:
$ sudo reboot now
Once your machine is booted and you’re back at a terminal or have re-established your SSH session, you’ll want to verify that NVIDIA drivers have been successfully installed:
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.54                 Driver Version: 396.54                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   58C    P0    61W / 149W |      0MiB / 11441MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
In the first table, top row, we have the NVIDIA GPU driver version.
The next two rows display the type of GPU you have (in my case a Tesla K80) as well as how much GPU memory is being used — this idle K80 is using 0MiB of its roughly 12GB.
The nvidia-smi command will also show you running processes using the GPU(s) in the second table. If you were to issue this command while Keras or mxnet is training, you’d see that Python is using the GPU.
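If you want to keep an eye on GPU utilization while a model trains, one option is to poll nvidia-smi with the watch command (purely optional):

$ watch -n 1 nvidia-smi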
Everything looks good here, so we can go ahead and move to “Step #3”.
Step #3: Install CUDA Toolkit and cuDNN (GPU only)
This step is for GPU users.
Head to the NVIDIA developer website for CUDA 9.0 downloads. You can access the downloads via this direct link:
https://developer.nvidia.com/cuda-90-download-archive
Note: CUDA v9.0 is required for TensorFlow v1.12 (unless you want to build TensorFlow from source which I do not recommend).
Ubuntu 18.04 is not yet officially supported by NVIDIA, but Ubuntu 17.04 drivers will still work.
Make the following selections from the CUDA Toolkit download page:
- “Linux”
- “x86_64”
- “Ubuntu”
- “17.04” (will also work for 18.04)
- “runfile (local)”
…just like this:
You may just want to copy the link to your clipboard and use the wget command to download the runfile:
$ wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda_9.0.176_384.81_linux-run
Be sure to copy the full URL:
https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda_9.0.176_384.81_linux-run
From there, let’s go ahead and install the CUDA Toolkit. This requires that we first give the script executable permissions via the chmod command and then run it with superuser privileges (you may be prompted for your password):
$ chmod +x cuda_9.0.176_384.81_linux-run
$ sudo ./cuda_9.0.176_384.81_linux-run --override
Note: The --override switch is required, otherwise the CUDA installer will complain about gcc-7 still being installed.
During installation, you will have to:
- Use “space” to scroll down and accept terms and conditions
- Select y for “Install on an unsupported configuration”
- Select n for “Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 384.81?”
- Keep all other default values (some are y and some are n). For paths, just press “enter”.
Now we need to update our ~/.bashrc file to include the CUDA Toolkit:
$ nano ~/.bashrc
The nano text editor is as simple as it gets, but feel free to use your preferred editor such as vim or emacs. Scroll to the bottom and add following lines:
# NVIDIA CUDA Toolkit
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64
To save and exit with nano, simply press “ctrl + o”, then “enter”, and finally “ctrl + x”.
Once you’ve saved and closed your bash profile, go ahead and reload the file:
$ source ~/.bashrc
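If you'd like to confirm the exports took effect, you can print both variables and check that the cuda-9.0 paths appear (an optional sanity check):

$ echo $PATH
$ echo $LD_LIBRARY_PATH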
From there you can confirm that the CUDA Toolkit has been successfully installed:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
Step #4: Install cuDNN (CUDA Deep Neural Network library) (GPU only)
For this step, you will need to create an account on the NVIDIA website + download cuDNN.
Here’s the link:
https://developer.nvidia.com/cudnn
When you’re logged in and on that page, go ahead and make the following selections:
- “Download cuDNN”
- Login and check “I agree to the terms of the cuDNN Software License Agreement”
- “Archived cuDNN Releases”
- “cuDNN v7.4.1 (Nov 8, 2018) for CUDA 9.0”
- “cuDNN Library for Linux”
Your selections should make your browser page look similar to this:
Once the files reside on your personal computer, you may need to transfer them to your GPU system. You can SCP the files to your GPU machine using this command (if you’re using an EC2 keypair):
$ scp -i EC2KeyPair.pem ~/Downloads/cudnn-9.0-linux-x64-v7.4.1.5.tgz \
    username@your_ip_address:~
On the GPU system (via SSH or on the desktop), the following commands will install cuDNN in the proper locations on your Ubuntu 18.04 system:
$ cd ~
$ tar -zxf cudnn-9.0-linux-x64-v7.4.1.5.tgz
$ cd cuda
$ sudo cp -P lib64/* /usr/local/cuda/lib64/
$ sudo cp -P include/* /usr/local/cuda/include/
$ cd ~
Above, we have:
- Extracted the cuDNN 9.0 v7.4.1.5 file in our home directory.
- Navigated into the cuda/ directory.
- Copied the lib64/ directory and all of its contents to the path shown.
- Copied the include/ folder as well to the path shown.
Take care with these commands as they can be a pain point later if cuDNN isn’t where it needs to be.
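One optional way to double-check that the cuDNN files ended up in the right place is simply to list them; you should see the libcudnn libraries and the cudnn.h header:

$ ls /usr/local/cuda/lib64/ | grep cudnn
$ ls /usr/local/cuda/include/ | grep cudnn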
Step #5: Create your Python virtual environment
This section is for both CPU and GPU users.
I’m an advocate for Python virtual environments as they are a best practice in the Python development world.
Virtual environments allow for the development of different projects on your system while managing Python package dependencies.
For example, I might have an environment on my GPU DevBox system called dl4cv_21 corresponding to version 2.1 of my deep learning book.
But then when I go to release version 3.0 at a later date, I’ll be testing my code with different versions of TensorFlow, Keras, scikit-learn, etc. Thus, I just put the updated dependencies in their own environment called dl4cv_30. I think you get the idea that this makes development a lot easier.
Another example would be two independent endeavors such as (1) a blog post series — we’re working on predicting home prices right now, and (2) some other project like PyImageSearch Gurus.
I have a house_prices virtual environment for the 3-part house prices series and a gurus_cv4 virtual environment for my recent OpenCV 4 update to the entire Gurus course.
In other words, you can rack up as many virtual environments as you need without spinning up resource-hungry VMs to test code.
It’s a no-brainer for Python development.
I use and promote the following tools to get the job done:
- virtualenv
- virtualenvwrapper
Note: I’m not opposed to alternatives (Anaconda, venv, etc.), but you’ll be on your own to fix any problems with these alternatives. Additionally, it may cause some headaches if you mix environment systems, so just be aware of what you’re doing when you follow tutorials you find online.
Without further ado, let’s setup virtual environments on your system — if you’ve done this before, just pick up where we actually create the new environment.
First, let’s install pip, a Python package management tool:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
Now that pip is installed, let’s go ahead and install the two virtual environment tools that I recommend — virtualenv and virtualenvwrapper:
$ sudo pip install virtualenv virtualenvwrapper
$ sudo rm -rf ~/get-pip.py ~/.cache/pip
We’ll need to update our bash profile with some virtualenvwrapper settings to make the tools work together.
Go ahead and open your ~/.bashrc file using your preferred text editor again and add the following lines at the very bottom:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
And let’s go ahead and reload our ~/.bashrc file:
$ source ~/.bashrc
The virtualenvwrapper tool now has support for the following terminal commands:
- mkvirtualenv : Creates a virtual environment.
- rmvirtualenv : Removes a virtual environment.
- workon : Activates a specified virtual environment. If an environment isn’t specified, all environments will be listed.
- deactivate : Takes you back to your system environment. You can activate any of your virtual environments again at any time.
Creating the dl4cv environment
Using the first command from the list above, let’s go ahead and create the dl4cv virtual environment with Python 3:
$ mkvirtualenv dl4cv -p python3
When your virtual environment is active, your terminal bash prompt will be prefixed with the environment name, e.g. (dl4cv).
If your environment is not active, simply use the workon command:
$ workon dl4cv
From there your bash prompt will change accordingly.
Step #6: Install Python libraries
Now that our Python virtual environment is created and is currently active, let’s install NumPy and OpenCV using pip:
$ pip install numpy
$ pip install opencv-contrib-python
Alternatively, you can install OpenCV from source to get the full install, including the patented algorithms, but those additional algorithms are irrelevant for my deep learning books.
Let’s install libraries required for additional computer vision, image processing, and machine learning as well:
$ pip install scipy matplotlib pillow
$ pip install imutils h5py requests progressbar2
$ pip install scikit-learn scikit-image
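If you'd like a quick, optional sanity check that these packages are importable from the dl4cv environment, you can fire up a Python shell:

$ python
>>> import numpy, scipy, sklearn, skimage, cv2, imutils
>>> numpy.__version__
>>> cv2.__version__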
Install TensorFlow for Deep Learning for Computer Vision with Python
You have two options to install TensorFlow:
Option #1: Install TensorFlow with GPU support:
$ pip install tensorflow-gpu==1.12.0
Note: Feedback from our readers made us realize that the latest TensorFlow releases require a newer version of CUDA than the 9.0 toolkit we installed here. We recommend installing version 1.12.0 as shown.
Option #2: Install TensorFlow without GPU support:
$ pip install tensorflow
Arguably, a third option is to compile TensorFlow from source, but it is unnecessary for DL4CV.
Go ahead and verify that TensorFlow is installed in your dl4cv virtual environment:
$ python
>>> import tensorflow
>>>
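GPU users may also want to confirm that TensorFlow can actually see the GPU. A minimal check using the TensorFlow 1.x API (it should report True if the CUDA/cuDNN setup is working):

$ python
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()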
Install Keras for DL4CV
We’ll employ pip again to install Keras into the dl4cv environment:
$ pip install keras
You can verify that Keras is installed via starting a Python shell:
$ python
>>> import keras
Using TensorFlow backend.
>>>
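If you want to go one step beyond the import test, here is a tiny throwaway model you could build to confirm Keras and the TensorFlow backend are talking to each other (this snippet is just an illustration, not code from the book):

$ python
>>> from keras.models import Sequential
>>> from keras.layers import Dense
>>> model = Sequential([Dense(8, activation="relu", input_shape=(4,)), Dense(1)])
>>> model.compile(optimizer="adam", loss="mse")
>>> model.summary()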
Now let’s go ahead and exit the Python shell and then deactivate the environment before we move on to “Step #7”:
>>> exit()
$ deactivate
Note: An issue was raised about DL4CV ImageNet Bundle Chapter 10, “Case Study: Emotion recognition”. The solution is that the following commit-ID from the Keras master branch needs to be installed for compatibility: 9d33a024e3893ec2a4a15601261f44725c6715d1. To implement the fix, you can (1) clone the Keras repo using the commit-ID, and (2) use setup.py to install Keras (a sketch of those commands follows below). Eventually PyPI will be updated and the pip installation method described above will work. The bug/fix does not impact all chapters. Reference: Ticket #798 in the DL4CV issue tracker.
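If you need to apply that fix, the process might look something like the following (the repository URL is the standard Keras GitHub repo; adjust to your setup as needed):

$ workon dl4cv
$ pip uninstall -y keras
$ git clone https://github.com/keras-team/keras.git
$ cd keras
$ git checkout 9d33a024e3893ec2a4a15601261f44725c6715d1
$ python setup.py install
$ cd ~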
Step #7: Install mxnet (DL4CV ImageNet Bundle only)
We use mxnet in the ImageNet Bundle of Deep Learning for Computer Vision with Python due to both (1) its speed/efficiency and (2) its great ability to handle multiple GPUs.
When working with the ImageNet dataset as well as other large datasets, training with multiple GPUs is critical.
That’s not to say you can’t accomplish the same with Keras and the TensorFlow GPU backend, but mxnet does it more efficiently. The syntax is similar, but there are some aspects of mxnet that are less user-friendly than Keras. In my opinion, the tradeoff is worth it, and it is always good to be proficient with more than one deep learning framework.
Let’s get the ball rolling and install mxnet.
Installing mxnet requires OpenCV + mxnet compilation
In order to effectively use mxnet’s data augmentation functions and the im2rec utility, we need to compile mxnet from source rather than doing a simple pip install of mxnet.
Since mxnet is a compiled C++ library (with Python bindings), this means we must compile OpenCV from source as well.
Let’s go ahead and download OpenCV (we’ll be using version 3.4.4):
$ cd ~
$ wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.4.zip
$ wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.4.zip
And then unzip the archives:
$ unzip opencv.zip
$ unzip opencv_contrib.zip
I like to rename the directories so that our paths will be the same even if you are using a version of OpenCV other than 3.4.4:
$ mv opencv-3.4.4 opencv
$ mv opencv_contrib-3.4.4 opencv_contrib
And from there, let’s create a new virtual environment (assuming you followed the virtualenv and virtualenvwrapper instructions from Step #5).
The mxnet virtual environment will contain packages completely independent and sequestered from our dl4cv environment:
$ mkvirtualenv mxnet -p python3
Now that your mxnet environment has been created, notice that your bash prompt is now prefixed with the environment name.
We can go ahead and install the packages we will need for DL4CV into the environment:
$ pip install numpy scipy matplotlib pillow
$ pip install imutils h5py requests progressbar2
$ pip install scikit-learn scikit-image
Let’s configure OpenCV with cmake:
$ cd ~/opencv
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
	-D CMAKE_INSTALL_PREFIX=/usr/local \
	-D INSTALL_PYTHON_EXAMPLES=ON \
	-D INSTALL_C_EXAMPLES=OFF \
	-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
	-D PYTHON_EXECUTABLE=~/.virtualenvs/mxnet/bin/python \
	-D OPENCV_ENABLE_NONFREE=ON \
	-D BUILD_EXAMPLES=ON ..
Provided that your output matches mine, let’s go ahead and kick off the compile process:
$ make -j4
Compiling OpenCV can take quite a bit of time, but if you’re building a GPU rig, your system specs are probably capable of finishing the compile in under 30 minutes. Either way, this is the point where you’d want to go for a walk or grab a fresh cup of coffee.
When OpenCV has been 100% compiled, there are still a few remaining sub-steps to perform, beginning with our actual install commands:
$ sudo make install
$ sudo ldconfig
You can confirm that OpenCV has been successfully installed via:
$ pkg-config --modversion opencv
3.4.4
And now for the critical sub-step.
What we need to do is create a link from where OpenCV was installed into the virtual environment itself. This is known as a symbolic link.
Let’s go ahead and take care of that now:
$ cd /usr/local/python/cv2/python-3.6
$ ls
cv2.cpython-36m-x86_64-linux-gnu.so
And now let’s rename the .so file to something that makes a little bit more sense + create a sym-link to our mxnet site-packages:
$ sudo mv cv2.cpython-36m-x86_64-linux-gnu.so cv2.opencv3.4.4.so
$ cd ~/.virtualenvs/mxnet/lib/python3.6/site-packages
$ ln -s /usr/local/python/cv2/python-3.6/cv2.opencv3.4.4.so cv2.so
Note: If you have multiple OpenCV versions installed on your system, you can use this same naming convention and symbolic linking method.
To test that OpenCV is installed + symbolically linked properly, fire up a Python shell inside the mxnet environment:
$ cd ~
$ workon mxnet
$ python
>>> import cv2
>>> cv2.__version__
'3.4.4'
We’re now ready to install mxnet into the environment.
Cloning and installing mxnet
Ubuntu 18.04’s default gcc and g++ are v7; however, there is a problem: mxnet requires gcc v6 and g++ v6 to compile from source.
The solution is to remove the gcc and g++ sym-links:
$ cd /usr/bin
$ sudo rm gcc g++
And then create new ones, this time pointing to gcc-6 and g++-6:
$ sudo ln -s gcc-6 gcc
$ sudo ln -s g++-6 g++
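Before kicking off the compile, you can verify the symlinks now resolve to the v6 compilers (optional, but cheap insurance):

$ gcc --version
$ g++ --version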
Let’s download and install mxnet now that we have the correct compiler tools linked up.
Go ahead and clone the mxnet repository as well as check out version 1.3:
$ cd ~
$ git clone --recursive --no-checkout https://github.com/apache/incubator-mxnet.git mxnet
$ cd mxnet
$ git checkout v1.3.x
$ git submodule update --init
With version 1.3 of mxnet ready to go, we’re going to compile mxnet with BLAS, OpenCV, CUDA, and cuDNN support:
$ workon mxnet
$ make -j4 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
The compilation process will likely finish in less than 40 minutes.
And then we’ll create a sym-link for mxnet into the virtual environment’s site-packages:
$ cd ~/.virtualenvs/mxnet/lib/python3.6/site-packages/
$ ln -s ~/mxnet/python/mxnet mxnet
Update 2019-06-04: The sym-link is updated to support the io module of mxnet.
Let’s go ahead and test our mxnet install:
$ workon mxnet
$ cd ~
$ python
>>> import mxnet
>>>
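GPU users who want to go one step further can confirm that mxnet can reach the GPU. A minimal sketch (this will raise an error if the CUDA build did not succeed):

$ python
>>> import mxnet as mx
>>> a = mx.nd.ones((2, 3), ctx=mx.gpu(0))
>>> a.asnumpy()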
Note: Do not delete the ~/mxnet directory in your home folder. Not only do our Python bindings live there, but we also need the files in ~/mxnet/bin when creating serialized image datasets (i.e., the im2rec command).
Now that mxnet is done compiling we can reset our gcc and g++ symlinks to use v7:
$ cd /usr/bin
$ sudo rm gcc g++
$ sudo ln -s gcc-7 gcc
$ sudo ln -s g++-7 g++
We can also go ahead and delete the OpenCV source code from our home folder:
$ cd ~
$ rm -rf opencv/
$ rm -rf opencv_contrib/
From here you can deactivate this environment, workon a different one, or create another environment. In the supplementary materials page of the DL4CV companion website, I have instructions on how to set up environments for the TensorFlow Object Detection API, Mask R-CNN, and RetinaNet code.
mxnet + OpenCV 4.1 workaround
Update 2019-08-08: OpenCV 4.1 has been causing issues with mxnet for a number of readers. In this section, I’m presenting a workaround by PyImageSearch reader Gianluca. Thank you, Gianluca!
The problem is that the latest OpenCV (4.1.0) does not create the opencv4.pc file that is needed by the pkg-config command (used by the mxnet build) to identify the installed libraries and folders.
The fix is to add one more CMake flag in addition to a symlink later on.
Let’s review the CMake flag:
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
	-D CMAKE_INSTALL_PREFIX=/usr/local \
	-D INSTALL_PYTHON_EXAMPLES=ON \
	-D INSTALL_C_EXAMPLES=OFF \
	-D OPENCV_GENERATE_PKGCONFIG=YES \
	-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
	-D PYTHON_EXECUTABLE=~/.virtualenvs/mxnet/bin/python \
	-D OPENCV_ENABLE_NONFREE=ON \
	-D BUILD_EXAMPLES=ON ..
Note the additional OPENCV_GENERATE_PKGCONFIG=YES flag on the fifth line.
Follow the instructions above from that point; however, there is one additional step.
Once installed (via the make command), we need to symlink the file opencv4.pc from /usr/local/lib to the default /usr/share/pkgconfig folder:
$ cd /usr/share/pkgconfig
$ sudo ln -s /usr/local/lib/opencv4.pc opencv4.pc
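You can optionally verify that pkg-config now resolves OpenCV 4 before re-running the mxnet build:

$ pkg-config --modversion opencv4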
Be sure to provide your comments in the form below if you have troubles with OpenCV 4.1 and mxnet.
A job well-done
At this point, a “congratulations” is in order — you’ve successfully configured your Ubuntu 18.04 box for deep learning!
Great work!
Did you have any troubles configuring your deep learning system?
If you struggled along the way, I encourage you to re-read the instructions again and try to debug. If you’re really stuck, you can reach out in the DL4CV companion website issue tracker (there’s a registration link in the front of your book) or by contacting me.
I also want to take the opportunity to remind you about the pre-configured instances that come along with your book:
- The DL4CV VirtualBox VM is pre-configured and ready to go. It will help you through nearly all experiments in the Starter and Practitioner bundles. For the ImageNet bundle a GPU is a necessity and this VM does not support GPUs.
- My DL4CV Amazon Machine Image for the AWS cloud is freely open to the internet — no purchase required (other than AWS charges, of course). Getting started with a GPU in the cloud only takes about 4-6 minutes. For less than the price of a cup of coffee, you can use a GPU instance for an hour or two, which is just enough time to complete some (definitely not all) of the more advanced lessons in DL4CV. The following environments are pre-configured: dl4cv, mxnet, tfod_api, mask_rcnn, and retinanet.
Azure users should consider the Azure DSVM. You can read my review of the Microsoft Azure DSVM here. All code from one of the first releases of DL4CV in 2017 was tested using Microsoft’s DSVM. It is an option and a very good one at that, but at this time it is not ready to support the Bonus Bundle of DL4CV without additional configuration. If Azure is your preferred cloud provider, I encourage you to stay there and take advantage of what the DSVM has to offer.
Summary
Today we learned how to set up an Ubuntu 18.04 + CUDA + GPU machine (as well as a CPU-only machine) for deep learning with TensorFlow and Keras.
Keep in mind that you don’t need a GPU to learn how deep learning works! GPUs are great for deeper neural networks and training with tons of data, but if you just need to learn the fundamentals and get some practice on your laptop, your CPU is just fine.
We accomplished our goal of setting up the following tools into two separate virtual environments:
- Keras + TensorFlow
- mxnet
Each of these deep learning frameworks requires additional Python packages to be successful such as:
- scikit-learn, SciPy, matplotlib
- OpenCV, pillow, scikit-image
- imutils (my personal package of convenience functions and tools)
- …and more!
These libraries are now available in each of the virtual environments that we set up today. You’re now ready to train state-of-the-art models using TensorFlow, Keras, and mxnet. Your system is ready to hack with the code in my deep learning book as well as your own projects.
Setting up all of this software is definitely daunting, especially for novice users. If you encountered any issues along the way, I highly encourage you to check that you didn’t skip any steps. If you are still stuck, please get in touch.
I hope this tutorial helps you on your deep learning journey!
To be notified when future blog posts are published here on PyImageSearch (and grab my 17-page Deep Learning and Computer Vision Resource Guide PDF), just enter your email address in the form below!
Emmanuel
Hello! Great tutorial, Adrian.
I took a quick look as soon as I saw the email: “Deep Learning… installation… Ubuntu 18”. Then I saw there is no Caffe tutorial for this. Would you mind making one for installing Caffe (or maybe Caffe2, if you think that is a better starting point) on Ubuntu 18.04, with or without GPU?
Hope to hear from you soon!
Kind regards
Adrian Rosebrock
Caffe and Caffe2 are great deep learning tools but I prefer using Keras and TensorFlow. You have significantly more control, especially in a programmatic sense. I’ll consider doing a Caffe tutorial in the future but I cannot guarantee if or when that would be.
Emmanuel
Thanks for your answer! That would be great since I have had trouble doing it myself.
Scott R
The combination of 16.04 and earlier Nvidia drivers (I believe 375?) made accessing the Ubuntu desktop a real pain, at least for me. Do you know if this combination will produce an out-of-the-box viewable desktop?
And once again, thank you for everything you do.
Adrian Rosebrock
I didn’t try with a monitor hooked up to an Ubuntu machine. I honestly can’t remember the last time I used an Ubuntu machine with a monitor — I just SSH into my Ubuntu deep learning boxes. My understanding is that yes, that should produce an out-of-the-box viewable desktop; however, I cannot guarantee that as I haven’t tested it.
Scott R
As it turns out, the desktop is viewable out of the box, with no arcane mods needed.
As usual, GREAT instructions.
Adrian Rosebrock
Fantastic, I’m glad it worked for you Scott!
Jesper Christensen
When working with your Ubuntu box or AWS without a monitor, what do you usually do when writing your code, forwarding the graphics (e.g. windows in OpenCV) and so on, if all you are doing is SSH’ing into the box? I find it painfully inefficient working with AWS for that exact reason. I usually forward the notebooks, but when it comes to running native Python scripts with OpenCV graphics or having to transfer large files or folders, I always end up using my in-built crappy GPU since I find a desktop OS easier to work with. Any advice would be great 🙂
Adrian Rosebrock
I like to configure PyCharm with automatic SFTP upload. It only takes a couple minutes to configure the PyCharm project. I point the PyCharm SFTP settings to the AWS instance and then set the remote Python interpreter to my virtual environment on AWS. From there, anytime I hit save the file is automatically uploaded. Super easy and helps with productivity tremendously.
David Bonn
Hi Adrian,
Thanks for the very timely tutorial, as I am in the process of building a spiffy deep learning rig.
One word of warning: Late model Nvidia cards (RTX 2xxx cards) require CUDA 10. Which breaks TensorFlow 1.12. You can work around this by installing the nightly builds for TensorFlow, which as of this writing is in release candidate state for 1.13, so the nightly builds should be relatively low-risk at the moment.
Adrian Rosebrock
Thanks David!
Gabriel Oliveira-Barra
Adrian, congratulations again for another great tutorial!
I used to go through the pain of this setup from time to time… it used to take a whole day of work and a couple of painkillers, even if you are familiar with the whole process.
This whole thing becomes WAY simpler with nvidia-docker2. You just pull the virtual container with everything set up (CUDA, cuDNN, OpenCV, TensorFlow, Keras, PyTorch, Jupyter Notebook… you name it) straight from the hub. It runs natively on the system, and all you need is to have the NVIDIA drivers installed.
If you just want to get hands on, I honestly believe this is the easiest and most straightforward way to go.
Adrian Rosebrock
I do like the NVIDIA Docker instances. They make it significantly easier to work with. The main reason I don’t use NVIDIA Docker too often is because I offer my pre-configured AMI in Amazon’s cloud so if and when I break the AMI I just launch a new one or reload from a previous snapshot. I’ve started to get away from having physical hardware but yes, in that instance NVIDIA’s Docker is so nice.
Minh
Hi Adrian,
After I typed the line “sudo reboot now”, my screen resolution dropped to 800×600 and everything looks too big. How can I fix it?
Thank you and hope to hear from you soon.
jisooyu
I had the same problem once. Somehow the Nvidia driver got messed up for some unknown reason. I fixed the issue by reinstalling the Nvidia driver. Just changing the display resolution won’t fix the problem.
ben
Just a quick question, please.
Why do you use apt-get instead of pip?
Adrian Rosebrock
Which command are you referring to, Ben? Keep in mind that apt-get is Ubuntu’s package manager while pip is Python’s package/library manager. They are two different programs used for two entirely different reasons.
John M.
I was wondering, as I had already installed Anaconda, what happens if I create a conda virtual-env instead of the tools you suggest, specifically virtualenv and virtualenvwrapper?
Is it any difference when I activate a conda environment?
Thank you in advance!
Adrian Rosebrock
If you are already using Anaconda I would not recommend mixing virtualenv and virtualenvwrapper with it. Just create a new conda environment via:
$ conda create -n dl4cv
John M.
Ok, great and thanks a lot for the quick reply ;).
Jurek
Easiest way I found out is to install via Anaconda on whatever system:
conda create -n myenv
activate myenv
conda install tensorflow-gpu
pip install keras
Done…
Adrian Rosebrock
Anaconda is a great tool but I think it’s super important to understand what’s going on under the hood and configure a machine from scratch. Deep learning, at least the latest incarnation of it, is still very much in its infancy so errors can and will happen — being able to diagnose and fix those errors is a critical skill to have. When you configure a DL system from scratch you’ll learn that skill.
Secondly, whether or not you use Anaconda, virtualenv/virtualenvwrapper, venv, etc. is a personal preference. I’m not telling you which one to use — that’s your choice. But if you follow the PyImageSearch blog I’ll show you the way I do it and in turn I’ll be able to help you better.
Ryan
I have numerous errors when trying to compile, most begin with something like this.
Build output check failed:
Regex: ‘command line option .* is valid for .* but not for C\+\+’
Adrian Rosebrock
What specifically are you trying to compile?
Shannon
Thanks so much for the regular deep learning rig updates.
Unfortunately I’ve hit a snag common with Ubuntu, it seems: after running the `sudo reboot now` command, Ubuntu refuses to boot back up (yes, I’m weird in that I dual-boot my home desktop so I have physical access). I’ve tried several times, with no luck; who knows if the culprit was any of the `apt-get install`s or the nvidia driver itself (I have a 1080 Ti), but at the moment I can’t get past that stage.
Shannon
Yep, sure enough: when I drop into recovery mode and run `sudo apt-get remove --purge nvidia-*`, it boots up just fine after that.
Any ideas?
Adrian Rosebrock
When you say it won’t boot up do you mean that you get stuck at the login screen? It sounds like you can access the terminal if you’re using apt-get.
Fishwolf
for Cloning and installing mxnet without gpu is correct this command?
make -j4 USE_OPENCV=1 USE_BLAS=openblas
Thanks
Adrian Rosebrock
Yes, that is correct if you do not want GPU support.
Charan
After installing nvidia and rebooting, I got the following statement which followed the command “nvidia-smi” as follows:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Where did I go wrong? When my system rebooted, I was given some options of creating “hashes” etc. ( I don’t have any idea what to do here). So I just selected “reboot” option. After which system rebooted the usual way. What was I supposed to do?
Adrian Rosebrock
It sounds like the NVIDIA driver was not installed properly. Are you using Ubuntu 18.04? Is it in the cloud or is it a home desktop?
Pero
Hello, I got the same problem while using 18.04 :/
Matt Barker
Hi Adrian – I previously had 16.04 + CUDA 9.0 + cuDNN 7.3.1 + Tensorflow-GPU 1.9.0 + CV 3.3.0 working on an ASUS ROG Zephyrus with Intel + Nvidia 1070 and performing well.
I decided to test out 18.04 as per the instructions above and did not receive any errors. I thought I would see how the face recognition with OpenCV code would perform under this configuration and received these results:
1. Using OpenCV pip install – no GPU utilisation reported in nvidia-smi (as alluded to in your comments)
2. Compiled OpenCV 3.3.0 – GPU utilisation reported however low memory use and very slow performance compared to my 16.04 configuration.
I am guessing that this is likely an issue with the hybrid GPU configuration of my laptop and current available drivers however I thought I would check in to see if you had any thoughts or whether any other followers had managed to get it to work on a similar hardware configuration?
Adrian Rosebrock
Just to clarify, what code were you trying to run on your GPU? Could you link to a specific blog post?
Matt Barker
The code on this fantastic tutorial by a guy called Adrian https://pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/
Matt Barker
Hi Adrian – progress update – I think my problem is specific to a dlib compile error in that I am receiving the message ‘Disabling CUDA support for dlib. DLIB WILL NOT USE CUDA’. This appears to be due to ‘gcc versions later than 6 are not supported!’
I now need to figure out how to force cmake to use gcc-6 to compile dlib with GPU support…
Matt Barker
Hi Adrian, Solved – I did a fresh install and followed the instructions above and then followed the install for ‘Face recognition with OpenCV, Python, and deep learning’ with the following addition prior to installing dlib with GPU support:
$ sudo apt update
$ sudo apt install gcc-6 g++-6
$ sudo update-alternatives --install "/usr/bin/gcc" "gcc" "/usr/bin/gcc-6" 60 --slave "/usr/bin/g++" "g++" "/usr/bin/g++-6"
$ sudo update-alternatives --config gcc
And then checked that gcc-6 and g++-6 were being utilised before running cmake using:
$ gcc --version
$ g++ --version
So now everything is working well using GPU on my hybrid GPU laptop with Ubuntu 18.04.
Thanks for your continued tutorials which provide the inspiration to keep learning.
Adrian Rosebrock
Awesome, I’m so happy that worked for you! Congrats on resolving the issue.
Tony Holdroyd
Hello Adrian,
Terrific instructions as always.
I have a problem though, nvcc -V reports CUDA installed, but TensorFlow 1.12 doesn’t recognise GPU, just reports CPU execution
Any ideas please?
THanks
Tony
Adrian Rosebrock
Two most likely causes that you can verify via “pip freeze”:
1. Is “tensorflow” or “tensorflow-gpu” shown in the output? You may have installed the CPU version.
2. Are both the CPU and GPU version installed? If so then the CPU version will be used by default.
Tony Holdroyd
Hello Adrian, thank you for the ideas; however, I only have ‘tensorflow-gpu’ installed.
Best, Tony
Adrian Rosebrock
At least we’re making progress! What is your output of “nvidia-smi”? Can you confirm that the NVIDIA GPU drivers are properly installed?
Tony Holdroyd
Hello Adrian, well, I didn’t change anything and TF suddenly started reporting the GPU as available!
All’s well that ends well 🙂
Best
Tony
Adrian Rosebrock
I’m glad it started working for you, Tony!
Robert Jones
I had the same issue – doing a reboot solved the problem for me – although I *think* I had done a reboot right after the install.
Theerawat Ramchuen
At this step
$ workon mxnet
$ make -j4 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
I found the error reported below:
cc1plus: out of memory allocating 65536 bytes after a total of 2752987136 bytes
Anyway, I tried to import mxnet at the Python prompt and there is no error or warning message.
Adrian Rosebrock
Your machine ran out of memory trying to compile mxnet. Try compiling with just a single core (change “-j4” to “-j1”).
Mattia
Hello Adrian,
Do you know which version of the Nvidia drivers should be installed for the Geforce RTX 20 series cards (which require CUDA 10 and tensorflow 1.13rc)? Is the version in the NVIDIA PPA repository fine (396.54) or do you need to download the latest drivers from the Nvidia website (version 410.xx)?
Thanks
Alan
Hi Adrian. It appears during some training and classification runs with Keras (A simple neural network with Python and Keras; Keras Tutorial: How to get started with Keras, Deep Learning, and Python):
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
I am using I5 4440 with GTX 750 Ti 2 GB.
Adrian Rosebrock
That’s not an error or a warning, it’s just TensorFlow letting you know that you can potentially increase performance if you compiled TensorFlow with those optimizations enabled. Ignore it and proceed with the code.
Dave
To ensure that TensorFlow 1.12 is installed use:
pip install tensorflow-gpu==1.12
-or-
pip install tensorflow==1.12
Without the version designation you will get 1.13, which seems to need CUDA 10.
Adrian Rosebrock
Thanks Dave!
Roberto Sapiain
Hi Adrian, do you have any advice for GPU out-of-memory errors?
I seem to run into those for a 2GB GPU.
And seems frequent from what I was researching, even for people with an 11GB card.
Currently can’t get another one, or use amazon vm.
Kind Regards.
Adrian Rosebrock
What specifically are you trying to do with a 2GB GPU? That’s admittedly a very small GPU. I typically recommend a minimum of a 6GB GPU.
Roberto Sapiain
Thanks Adrian.
Well… when I’m able to, I will either get a new GPU or budget for AWS on a p2.xlarge.
Currently my test-rig is a 4th gen Core-i5 PC that I was able to buy for around USD$600, with a GTX 1050 (2 GB); upgrading the GPU will get me more ease, but I don’t know when I’ll be able to do it. If I find a used one, hopefully never overclocked, maybe next month.
(seems like the better gpu-mem/price relationship is the p2.xlarge: the p3 is triple the price, but seems it has many more cuda-cores).
Here in Chile regretfully, prices for things above USD$1k get doubled 🙁 (taxes, customs, commerce processing and other factors)
Roberto Sapiain
Well, after some testing:
– For “low mem gpu”, batch_size=5 seems to work decently enough on performance aspects.
flowers17, animals datasets on PB-ch05
Will test later with the caltech-101
Roberto Sapiain
Well… answered myself: lowering batch_size to 2 had an impact on performance, but it still runs way faster than CPU-only.
Will try to find a better value for my case.
It happened while running the DL4CV Practitioner Bundle Chapter 05 exercise with flowers17.
Josh Myers
Once I install tensorflow-gpu and test the installation using import tensorflow, I get the error “ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory”
Adrian Rosebrock
Can you run a “pip freeze” and verify which version of TensorFlow was installed?
yasaman
hi,
I had installed CUDA 10. Then, following your guideline, I erased it and replaced it with CUDA 9. Now I get the error: “ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory”.
I checked that my tensorflow version is tensorflow-gpu==1.12.0
what should I do now?
Adrian Rosebrock
It sounds like CUDA 9 isn’t actually installed. Can you verify that CUDA 10 has been uninstalled and CUDA 9 is actually installed?
Jibin John
Hi Adrian,
I don’t have a system with an NVIDIA GPU. Some documentation says that we can use mCUDA and an AMD graphics card. Can you clarify this? I want to build my own custom model. Most people seem to use an NVIDIA GPU.
Adrian Rosebrock
To be honest, I haven’t trained a deep learning model without an NVIDIA GPU in a good long while, so I can’t really comment there.
Yanny
Hey Adrian!
I am doing this step (make -j4) and get an error (fatal error: can’t write PCH file: No space left on device). Not sure what went wrong because my computer says I have 330GB free of 460GB.
Any advice? thanks!
Adrian Rosebrock
It definitely sounds like your system is out of space. Perhaps you’re compiling OpenCV on a separate volume? Do a “df -h” and verify that you indeed have enough space on the volume.
Usman
I had trouble installing Keras with Anaconda and PyCharm, but it was really easy to install everything with Visual Studio Code. Just wanted to help fellow developers who want to stick with the Windows platform!
Shah
How do we upgrade the TensorFlow version? Through this tutorial I have installed TF version 1.13.1 on Ubuntu, but I need a higher version for my project.
Adrian Rosebrock
You can install/upgrade TensorFlow:
$ pip install --upgrade tensorflow
Masroor
Excellent step by step guide! Just to let you all know that it works great on Ubuntu 19.04 Disco Dingo with minor tweaks. By default 19.04 comes with python 3.7, so that has to be adjusted for. Additionally the default compiler is gcc-8, so that needs to be taken into account. Otherwise, smooth sailing.
Adrian Rosebrock
Awesome, thanks so much for sharing Masroor!
YSK
Before Step #5, you need to add one last step to Step #4:
echo "/usr/local/cuda-9.2/lib64" >> /etc/ld.so.conf
ldconfig
YSK
sudo sh -c "echo '/usr/local/cuda-9.0/lib64' >> /etc/ld.so.conf"
sudo ldconfig
…instead of the above.
kiremitci
Thanks a lot. There were a couple of errors while I was going through the procedure, but I finally got everything installed. Nice job 🙂
sarat
If you are facing issues with the virtualenv and virtualenvwrapper installation, try installing them separately. virtualenv installs fine, but virtualenvwrapper failed for me because of an SSL error. So I tried this:
pip install pbr
After that, virtualenvwrapper installed smoothly.
Adrian Rosebrock
Thanks for sharing, Sarat!
Danish
Hi Adrian. Can you tell me what’s the difference between a GPU and CPU user? I have dual booted both Windows and Ubuntu on my laptop so does that make me a GPU or CPU user? Thanks.
Adrian Rosebrock
You are most likely a CPU user. The vast majority of laptop GPUs are not suited for deep learning.
Shashank Yadav
Hi Adrian,
the course is super amazing and thank you for providing this.
I have an AMD Radeon Graphics Card and wish to know if I can run the above code on it in my Ubuntu 18.
Is it better to run it on Google Colab or should I run it here?
Also, how would one go about running on Colab as init.py files aren’t required there?
Thank You so much again!!
Shashank Yadav
Adrian Rosebrock
Sorry, I haven’t configured an Ubuntu machine to use a non-NVIDIA GPU for deep learning.