New Publications in Deep Learning Publication Navigator

Long overdue update of new publications in Deep Learning Publication Navigator – for now the easiest way to discover new publications is probably to compare the number of papers per category in the before and after update screenshots below.

Examples of keywords (from publication titles) with several new Deep Learning publications are:

  1. 3D
  2. Acoustic
  3. Active learning
  4. Adaptive
  5. Adversarial (123 new papers since last update, due to significant activity in GAN research)
  6. Alzheimer’s (22 new papers related to a disease that costs more than a quarter of a trillion US dollars annually to treat in the USA)
  7. Anomaly detection
  8. Autoencoders
  9. Bayesian
  10. Biomedical
  11. Chinese
  12. Clinical
  13. Collaborative filtering (e.g. for recommender systems)
  14. Dataset
  15. EEG (electrical brain signals)
  16. Ensemble
  17. +++++ (many more!)

If you have feature ideas or other requests for Deep Learning Publication Navigator, feel free to reach out.

Best regards,

Amund Tveit

After update (with new papers):

Before update (without new papers):


Continue Reading

Creative AI on the iPhone (with GAN) and Dynamic Loading of CoreML models

Zedge summer interns developed a very cool app using ARKit and CoreML (on iOS 11). As part of their journey they published two blog posts on the Zedge corporate website related to:

  1. How to develop and run Generative Adversarial Networks (GAN) for Creative AI on the iPhone using Apple’s CoreML tools – check out their blog post about it.
  2. Deep Learning models (e.g. for GANs) can take up a lot of space on a mobile device (tens of megabytes, perhaps even gigabytes), so to keep the initial app download size relatively low it can be useful to dynamically load only the models you need. Check out their blog post about various approaches for hot-swapping CoreML models.
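As a back-of-the-envelope illustration of why model size motivates dynamic loading, the on-disk size of a model is roughly its parameter count times the bytes per weight. The sketch below uses illustrative parameter counts (assumptions, not measurements from the interns' app):

```python
def model_size_mb(num_params, bytes_per_param=4):
    """Approximate on-disk size of a model storing float32 weights (4 bytes each)."""
    return num_params * bytes_per_param / (1024 ** 2)

# Illustrative parameter counts (assumptions, not the actual Zedge models):
small_gan_generator = 5_000_000    # a modest GAN generator, ~5M parameters
large_cnn = 138_000_000            # roughly VGG-16 scale

print(f"{model_size_mb(small_gan_generator):.0f} MB")  # ≈ 19 MB
print(f"{model_size_mb(large_cnn):.0f} MB")            # ≈ 526 MB
```

Even a modest generator adds tens of megabytes to an app bundle, which is why fetching and compiling models on demand can keep the initial download small.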

Best regards,

Amund Tveit 

Continue Reading

Early Experiences with Deep Learning on a Laptop with Nvidia GTX 1070 GPU – part 1

Last August Nvidia brought desktop-class graphics to laptops with the GeForce 1060, 1070 and 1080. Laptops with such GPUs seem to be primarily targeted at gaming, but they can also be used for Deep Learning, e.g. with TensorFlow, PyTorch or Keras. A laptop for Deep Learning can be a convenient supplement to using GPUs in the cloud (Nvidia K80 or P100) or to buying a desktop or server machine with perhaps even more powerful GPUs than a laptop can hold (e.g. the Pascal Titan X or the new 1080 Ti).

1. Choice of GPU
I decided on the GTX 1070 GPU since it:

  1. Has the same amount of GPU RAM as the GTX 1080 – 8 GB – enough to develop or test a wide range of CNN and GAN models
  2. Is cheaper and uses less energy than the GTX 1080
  3. Still offers high performance
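To give a rough feel for why 8 GB is enough for many models, the weights, gradients and Adam optimizer state for training take on the order of four float32 copies of the parameters (activations, which often dominate for CNNs, come on top). A minimal sketch under those stated assumptions:

```python
def training_memory_gb(num_params, bytes_per_param=4, copies=4):
    """Rough GPU memory for weights + gradients + Adam moment estimates.

    copies=4 assumes: weights, gradients, and two Adam moment buffers.
    Excludes activation memory, which often dominates for CNNs.
    """
    return num_params * bytes_per_param * copies / (1024 ** 3)

# A hypothetical 100M-parameter model:
print(f"{training_memory_gb(100_000_000):.2f} GB")  # ≈ 1.49 GB
```

By this estimate even a fairly large model's parameter-related state fits comfortably within 8 GB, leaving room for activations and reasonable batch sizes.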

2. Choice of Laptop and Configuration
I chose the Acer Predator G9-593; it had a nice spec and was upgradable to several disks and up to 64 GB of RAM.

There are several YouTube videos of people unboxing the G9-593 and showing how to upgrade the hardware (e.g. RAM and disks).

I first installed Ubuntu 14.04 with CUDA 8.0, cuDNN, Nvidia drivers and nvidia-docker, and later upgraded to Ubuntu 16.04 – check out Donald Kinghorn’s blog post Install Ubuntu 16.04 or 14.04 and CUDA 8 and 7.5 for NVIDIA Pascal GPU. Then I installed TensorFlow and PyTorch. I ran into some issues with GPU support for PyTorch, but I assume that is just finger trouble on my side; TensorFlow worked nicely on the GPU, as you can see in the section below.

3. Example training the Pix2Pix Conditional Adversarial Network in TensorFlow on the Laptop

To test Deep Learning on the laptop I chose the pix2pix-tensorflow project; see the examples below, followed by a GIF of actual training on the laptop.
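For reference, the pix2pix generator objective combines a conditional-GAN term with a weighted L1 reconstruction term (λ = 100 in the original paper). The numpy sketch below illustrates that objective; it is not the pix2pix-tensorflow project's actual code, and the function name is my own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pix2pix_generator_loss(disc_fake_logits, gen_output, target, l1_weight=100.0):
    """Pix2pix-style generator objective: cGAN loss + weighted L1 loss.

    The adversarial term rewards the generator when the discriminator
    assigns high probability (logits -> 1) to generated images; the L1
    term keeps the output close to the paired target image.
    """
    gan_loss = np.mean(-np.log(sigmoid(disc_fake_logits) + 1e-12))
    l1_loss = np.mean(np.abs(target - gen_output))
    return gan_loss + l1_weight * l1_loss
```

With a perfect reconstruction (output equals target) only the adversarial term remains, so λ effectively controls how strongly training favors faithful reconstructions over merely plausible ones.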

Best regards,
Amund Tveit (@atveit)


Appendix – Deep Learning benchmark of Nvidia GTX 1070

The benchmark in the table below – from – was very favorable toward the 1070 (note that it compares the 1080 and 1070 to older-generation GPUs).


Continue Reading