Last August Nvidia brought desktop-class graphics to laptops with the GeForce GTX 1060, 1070 and 1080. Laptops with such GPUs seem to be primarily targeted at gaming, but they can also be used for Deep Learning, e.g. with TensorFlow, PyTorch or Keras. A laptop for Deep Learning can be a convenient supplement to using GPUs in the cloud (Nvidia K80 or P100) or buying a desktop or server machine with perhaps even more powerful GPUs than in a laptop (e.g. the Pascal Titan X or the new 1080 Ti).
1. Choice of GPU
I decided on the GTX 1070 GPU because it offers:
- The same amount of GPU RAM as the GTX 1080 (8 GB), enough to develop or test a large range of CNN and GAN models
- A lower price and lower energy use than the GTX 1080
- High performance
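To get a feel for what 8 GB of GPU RAM buys you, here is a back-of-the-envelope sketch (the helper function is mine, not from any library) of how much memory the weights of a large CNN occupy in fp32; VGG-16's commonly cited parameter count of roughly 138 million is used as the example:

```python
def model_memory_gb(num_params, bytes_per_param=4):
    """Rough GPU memory (GB) needed just to hold model weights in fp32.

    This ignores activations, gradients and optimizer state, which in
    practice multiply the footprint several times during training.
    """
    return num_params * bytes_per_param / 1024 ** 3

# VGG-16 has roughly 138 million parameters.
weights_gb = model_memory_gb(138_000_000)
print("VGG-16 weights alone: %.2f GB" % weights_gb)
```

Even a model of that size uses only about half a gigabyte for its weights, leaving most of the 8 GB for activations, gradients and batch size during training.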
2. Choice of Laptop and Configuration
I chose the Acer Predator G9-593; it had a nice spec and was upgradable to several disks and up to 64 GB of RAM.
There are several YouTube videos of people unboxing the G9-593 and looking into how to upgrade hardware (e.g. RAM and disks).
I first installed Ubuntu 14.04 with CUDA 8.0, cuDNN, Nvidia drivers and nvidia-docker, and later upgraded to Ubuntu 16.04 – check out the blog post (by Donald Kinghorn) Install Ubuntu 16.04 or 14.04 and CUDA 8 and 7.5 for NVIDIA Pascal GPU. Then I installed TensorFlow and PyTorch. I got some issues with GPU support for PyTorch, but I assume it is just finger trouble on my side; TensorFlow worked nicely on the GPU, as you can see in the section below.
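A quick sanity check that TensorFlow actually sees the GPU can be done from Python. This is a minimal sketch using TensorFlow's `device_lib` module; the helper function name is my own, and the import is guarded so the snippet degrades gracefully on a machine without TensorFlow:

```python
def visible_gpus():
    """Return names of GPU devices TensorFlow can see.

    Returns [] if no GPU is visible or TensorFlow is not installed.
    """
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return []
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == 'GPU']

print(visible_gpus())  # e.g. a single entry when the GTX 1070 is picked up
```

If the list comes back empty on a machine with TensorFlow installed, it usually means the CPU-only package is installed or the CUDA/cuDNN libraries are not on the loader path.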
3. Example training the Pix2Pix Conditional Adversarial Network in TensorFlow on the Laptop
To test Deep Learning on the laptop I chose the pix2pix-tensorflow project; see the examples below, followed by a GIF of actual training on the laptop.
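For readers who want to reproduce this, the commands below sketch how a pix2pix training run is typically launched, assuming the affinelayer/pix2pix-tensorflow repository and its facades example dataset; the exact flags may differ between versions of the project, so check its README:

```shell
# Fetch the project and a small example dataset (facades).
git clone https://github.com/affinelayer/pix2pix-tensorflow.git
cd pix2pix-tensorflow
python tools/download-dataset.py facades

# Train; this should fit comfortably within the GTX 1070's 8 GB of GPU RAM.
python pix2pix.py --mode train \
    --output_dir facades_train \
    --max_epochs 200 \
    --input_dir facades/train \
    --which_direction BtoA
```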
Amund Tveit (@atveit)
Appendix – Deep Learning benchmark of Nvidia GTX 1070
The benchmark in the table below – from github.com/tobigithub/tensorflow-deep-learning/wiki/tf-benchmarks – was very favorable to the GTX 1070 (note that it compares the 1080 and 1070 to older-generation GPUs).