Deep Learning with Generative and Generative Adversarial Networks – ICLR 2017 Discoveries

The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26, 2017).

This blog post gives an overview of papers on Deep Learning with Generative and Generative Adversarial Networks submitted to ICLR 2017; see below for the list of papers. Want to learn about these topics? See OpenAI’s article about Generative Models and Ian Goodfellow’s paper about Generative Adversarial Networks.
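For readers new to the topic, the core of a GAN is a minimax game between a discriminator D and a generator G. A minimal sketch of the two loss terms from Goodfellow et al.'s objective is shown below (the non-saturating generator loss is the variant commonly used in practice; the function name and shapes are my own illustration, not from any specific paper in the list):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Compute GAN losses from discriminator outputs.

    d_real: D(x) for real samples, values in (0, 1)
    d_fake: D(G(z)) for generated samples, values in (0, 1)
    """
    # Discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. minimizes the negative of that expectation.
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss: maximize log D(G(z))
    # instead of minimizing log(1 - D(G(z))), for stronger gradients early on.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

At the theoretical equilibrium the discriminator outputs 0.5 everywhere, which gives a discriminator loss of 2·log 2 ≈ 1.386 — a commonly used sanity check when training GANs.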

Best regards,

Amund Tveit

ICLR 2017 – Generative and Generative Adversarial Papers

  1. Unsupervised Learning Using Generative Adversarial Training And Clustering – Authors: Vittal Premachandran, Alan L. Yuille
  2. Improving Generative Adversarial Networks with Denoising Feature Matching – Authors: David Warde-Farley, Yoshua Bengio
  3. Generative Adversarial Parallelization – Authors: Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor
  4. b-GAN: Unified Framework of Generative Adversarial Networks – Authors: Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  5. Generative Adversarial Networks as Variational Training of Energy Based Models – Authors: Shuangfei Zhai, Yu Cheng, Rogerio Feris, Zhongfei Zhang
  6. Boosted Generative Models – Authors: Aditya Grover, Stefano Ermon
  7. Adversarial examples for generative models – Authors: Jernej Kos, Dawn Song
  8. Mode Regularized Generative Adversarial Networks – Authors: Tong Che, Yanran Li, Athul Jacob, Yoshua Bengio, Wenjie Li
  9. Variational Recurrent Adversarial Deep Domain Adaptation – Authors: Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu
  10. Structured Interpretation of Deep Generative Models – Authors: N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H.S. Torr
  11. Inference and Introspection in Deep Generative Models of Sparse Data – Authors: Rahul G. Krishnan, Matthew Hoffman
  12. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy – Authors: Dougal J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton
  13. Unsupervised sentence representation learning with adversarial auto-encoder – Authors: Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang
  14. Unsupervised Program Induction with Hierarchical Generative Convolutional Neural Networks – Authors: Qucheng Gong, Yuandong Tian, C. Lawrence Zitnick
  15. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Noise – Authors: Beilun Wang, Ji Gao, Yanjun Qi
  16. On the Quantitative Analysis of Decoder-Based Generative Models – Authors: Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger Grosse
  17. Evaluation of Defensive Methods for DNNs against Multiple Adversarial Evasion Models – Authors: Xinyun Chen, Bo Li, Yevgeniy Vorobeychik
  18. Calibrating Energy-based Generative Adversarial Networks – Authors: Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville
  19. Inverse Problems in Computer Vision using Adversarial Imagination Priors – Authors: Hsiao-Yu Fish Tung, Katerina Fragkiadaki
  20. Towards Principled Methods for Training Generative Adversarial Networks – Authors: Martin Arjovsky, Leon Bottou
  21. Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning – Authors: Dilin Wang, Qiang Liu
  22. Multi-view Generative Adversarial Networks – Authors: Mickaël Chen, Ludovic Denoyer
  23. LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation – Authors: Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh
  24. Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks – Authors: Emily Denton, Sam Gross, Rob Fergus
  25. Generative Adversarial Networks for Image Steganography – Authors: Denis Volkhonskiy, Boris Borisenko, Evgeny Burnaev
  26. Unrolled Generative Adversarial Networks – Authors: Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
  27. Generative Multi-Adversarial Networks – Authors: Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
  28. Joint Multimodal Learning with Deep Generative Models – Authors: Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  29. Fast Adaptation in Generative Models with Generative Matching Networks – Authors: Sergey Bartunov, Dmitry P. Vetrov
  30. Adversarially Learned Inference – Authors: Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville
  31. Perception Updating Networks: On architectural constraints for interpretable video generative models – Authors: Eder Santana, Jose C Principe
  32. Energy-based Generative Adversarial Networks – Authors: Junbo Zhao, Michael Mathieu, Yann LeCun
  33. Simple Black-Box Adversarial Perturbations for Deep Networks – Authors: Nina Narodytska, Shiva Kasiviswanathan
  34. Learning in Implicit Generative Models – Authors: Shakir Mohamed, Balaji Lakshminarayanan
  35. On Detecting Adversarial Perturbations – Authors: Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
  36. Delving into Transferable Adversarial Examples and Black-box Attacks – Authors: Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
  37. Adversarial Feature Learning – Authors: Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
  38. Generative Paragraph Vector – Authors: Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, Xueqi Cheng
  39. Adversarial Machine Learning at Scale – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  40. Adversarial Training Methods for Semi-Supervised Text Classification – Authors: Takeru Miyato, Andrew M. Dai, Ian Goodfellow
  41. Sampling Generative Networks: Notes on a Few Effective Techniques – Authors: Tom White
  42. Adversarial examples in the physical world – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  43. Improving Sampling from Generative Autoencoders with Markov Chains – Authors: Kai Arulkumaran, Antonia Creswell, Anil Anthony Bharath
  44. Neural Photo Editing with Introspective Adversarial Networks – Authors: Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
  45. Learning to Protect Communications with Adversarial Neural Cryptography – Authors: Martín Abadi, David G. Andersen

Sign up for Deep Learning newsletter!


Deep Learning for Natural Language Processing – ICLR 2017 Discoveries

Update: 2017-Feb-03 – launched a new service – (navigation and search in papers). Try out, e.g., its Natural Language Processing page.

The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26, 2017), and a large number of Deep Learning papers have been submitted to the conference – it looks like it will be a great event (see the word cloud below for the most frequent words used in submitted paper titles).


This blog post gives an overview of Natural Language Processing related papers submitted to ICLR 2017; see below for the list of papers. If you want to learn about Deep Learning for NLP, check out Stanford’s CS224d: Deep Learning for Natural Language Processing.

Best regards,

Amund Tveit


Character/Word/Sentence Representation

  1. Character-aware Attention Residual Network for Sentence Representation – Authors: Xin Zheng, Zhenzhou Wu
  2. Program Synthesis for Character Level Language Modeling – Authors: Pavol Bielik, Veselin Raychev, Martin Vechev
  3. Words or Characters? Fine-grained Gating for Reading Comprehension – Authors: Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov
  4. Deep Character-Level Neural Machine Translation By Learning Morphology – Authors: Shenjian Zhao, Zhihua Zhang
  5. Opening the vocabulary of neural language models with character-level word representations – Authors: Matthieu Labeau, Alexandre Allauzen
  6. Unsupervised sentence representation learning with adversarial auto-encoder – Authors: Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang
  7. Offline Bilingual Word Vectors Without a Dictionary – Authors: Samuel L. Smith, David H. P. Turban, Nils Y. Hammerla, Steven Hamblin
  8. Learning Word-Like Units from Joint Audio-Visual Analysis – Authors: David Harwath, James R. Glass
  9. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling – Authors: Hakan Inan, Khashayar Khosravi, Richard Socher
  10. Sentence Ordering using Recurrent Neural Networks – Authors: Lajanugen Logeswaran, Honglak Lee, Dragomir Radev

Search/Question-Answer/Recommender Systems

  1. Learning to Query, Reason, and Answer Questions On Ambiguous Texts – Authors: Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerry Tesauro, Satinder Singh
  2. Group Sparse CNNs for Question Sentence Classification with Answer Sets – Authors: Mingbo Ma, Liang Huang, Bing Xiang, Bowen Zhou
  3. CONTENT2VEC: Specializing Joint Representations of Product Images and Text for the task of Product Recommendation – Authors: Thomas Nedelec, Elena Smirnova, Flavian Vasile
  4. Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce – Authors: Tom Zahavy, Alessandro Magnani, Abhinandan Krishnan, Shie Mannor

Word/Sentence Embedding

  1. A Simple but Tough-to-Beat Baseline for Sentence Embeddings – Authors: Sanjeev Arora, Yingyu Liang, Tengyu Ma
  2. Investigating Different Context Types and Representations for Learning Word Embeddings – Authors: Bofang Li, Tao Liu, Zhe Zhao, Xiaoyong Du
  3. Multi-view Recurrent Neural Acoustic Word Embeddings – Authors: Wanjia He, Weiran Wang, Karen Livescu
  4. A Self-Attentive Sentence Embedding – Authors: Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio
  5. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks – Authors: Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg

Machine Translation/Multilingual/++


  1. Neural Machine Translation with Latent Semantic of Image and Text – Authors: Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  2. Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context – Authors: Shyam Upadhyay, Kai-Wei Chang, James Zhou, Matt Taddy, Adam Kalai
  3. Learning to Understand: Incorporating Local Contexts with Global Attention for Sentiment Classification – Authors: Zhigang Yuan, Yuting Hu, Yongfeng Huang
  4. Adaptive Feature Abstraction for Translating Video to Language – Authors: Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin
  5. A Convolutional Encoder Model for Neural Machine Translation – Authors: Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin
  6. Fuzzy paraphrases in learning word representations with a corpus and a lexicon – Authors: Yuanzhi Ke, Masafumi Hagiwara
  7. Iterative Refinement for Machine Translation – Authors: Roman Novak, Michael Auli, David Grangier
  8. Vocabulary Selection Strategies for Neural Machine Translation – Authors: Gurvan L’Hostis, David Grangier, Michael Auli

Language Models/Text Comprehension/Matching/Compression/Classification/++

  1. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks – Authors: Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, Richard Socher
  2. Gated-Attention Readers for Text Comprehension – Authors: Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov
  3. A Compare-Aggregate Model for Matching Text Sequences – Authors: Shuohang Wang, Jing Jiang
  4. A Context-aware Attention Network for Interactive Question Answering – Authors: Huayu Li, Martin Renqiang Min, Yong Ge, Asim Kadav
  5. Compressing text classification models – Authors: Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herve Jegou, Tomas Mikolov
  6. Multi-Agent Cooperation and the Emergence of (Natural) Language – Authors: Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni
  7. Learning a Natural Language Interface with Neural Programmer – Authors: Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, Dario Amodei
  8. Learning similarity preserving representations with neural similarity and context encoders – Authors: Franziska Horn, Klaus-Robert Müller
  9. Adversarial Training Methods for Semi-Supervised Text Classification – Authors: Takeru Miyato, Andrew M. Dai, Ian Goodfellow
  10. Multi-Label Learning using Tensor Decomposition for Large Text Corpora – Authors: Sayantan Dasgupta



Deep Learning for Mobile Personal Expression at Zedge

I wrote a blog post about Deep Learning for Mobile Personal Expression; the entire blog post is available at: — and the start of the blog post is shown below.

Our main product is an app — Zedge Ringtones & Wallpapers — that provides wallpapers, ringtones, app icons, game recommendations and notification sounds customized for your mobile device. Zedge apps have been downloaded more than 200 million times on iOS and Android and are used by millions of people worldwide each month.

People use our apps for self-expression. Setting a wallpaper, ringtone or app icons on your mobile device is in many ways similar to selecting clothes, hairstyle or other fashion statements. In fact, people try a wallpaper or ringtone much as they would try clothes in a dressing room before making a purchase decision: they try different wallpapers or ringtones before deciding on one they want to keep for a while.

The decision to select a wallpaper is not taken lightly, since people interact with and view their mobile device (and background wallpaper) a lot:

… The entire blog post is available at:

Best regards,

Amund Tveit


Why Deep Learning matters


Deep Learning, or more specifically a subfield of Deep Learning called (Deep) Convolutional Neural Networks, has seen impressive improvements since Alex Krizhevsky’s 2012 publication about (what is now called) AlexNet. AlexNet won the ImageNet Image Recognition competition with the (then close to jaw-dropping) top-5 error rate of only 17.0% (top-5 error is the fraction of images for which the correct label is not among the classifier’s five highest-scoring predictions).
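The top-5 metric can be computed directly from a model’s per-class scores. Here is a small sketch of how that works (the function name is my own, not from the ILSVRC tooling):

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of samples whose true label is NOT among the k highest scores.

    scores: array of shape (n_samples, n_classes)
    labels: array of shape (n_samples,) with the true class index per sample
    """
    # Indices of the k highest-scoring classes for each sample.
    topk = np.argsort(scores, axis=1)[:, -k:]
    # A sample is a "hit" if its true label appears among those k indices.
    hits = np.any(topk == labels[:, None], axis=1)
    return 1.0 - hits.mean()
```

With k=1 this reduces to the ordinary (top-1) classification error, which is why top-5 numbers are always lower than top-1 numbers for the same model.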

But Image Recognition accuracy has improved many times over since then: the top-5 error rate dropped from 17% in 2012 to 3.08% in 2016 (see the publications in the table below for more about what the error rates mean and how they can be compared).

To put this into context: human beings perform at a 5.10% error rate on this Image Recognition task, see Andrej Karpathy’s publication below (to be precise: at least one smart, trained and highly educated human being performed at that error rate on the ImageNet task).

So what I am saying is that computers with Deep Learning can actually see and understand what is in a picture better than humans! (at least on this benchmark, and probably in most cases)

| Year | Top-5 Error % | Reference | Author(s) | Organization |
|------|---------------|-----------|-----------|--------------|
| 2012 | 17.00 | ImageNet Classification with Deep Convolutional Neural Networks (AlexNet) | Alex Krizhevsky et al. | University of Toronto |
| 2014 | 6.66 | Going Deeper with Convolutions | Christian Szegedy et al. | Google |
| 2014 (Sep) | 5.10 | What I learned from competing against a ConvNet on ImageNet | Andrej Karpathy | Stanford University |
| 2015 (Feb) | 4.94 | Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification | Kaiming He et al. | Microsoft Research |
| 2015 (Dec) | 3.57 | Deep Residual Learning for Image Recognition | Kaiming He et al. | Microsoft Research |
| 2016 (Feb) | 3.08 | Inception v4, Inception-ResNet and the Impact of Residual Connections on Learning | Christian Szegedy et al. | Google |

Implications of better-than-human-level image recognition with Deep Learning?

The Deep Convolutional Neural Network research field has moved so fast that applications still lag behind in using these results. Most robots/drones and software in servers, laptops, mobiles, wearables and medical equipment do not take advantage of these research results yet, but there is a huge untapped potential (I will get back to that potential in later postings).
But there are already some highly important applications, e.g. the Samsung Medison RS80A Ultrasound Machine (see the image at the start of this posting), which uses convolutional neural networks for Breast Cancer Diagnosis.


The next blog post will probably be about some (simple) analogies to explain the mechanics of Convolutional Neural Networks. Stay tuned and sign up for the DeepLearning.Education mailing list below.

Best regards,
Amund Tveit

