For the last couple of months I’ve been creating bibliographies of recent academic publications in various subfields of Deep Learning on this blog. This posting gives an overview of the last 25 bibliographies posted.

Best regards,

Amund Tveit (WeChat: AmundTveit – Twitter: @atveit)

### 1. Deep Learning with Residual Networks

This posting has recent papers related to residual networks (i.e. very deep networks). Check out Microsoft Research’s paper Deep Residual Learning for Image Recognition and Kaiming He’s ICML 2016 tutorial Deep Residual Learning: Deep Learning Gets Way Deeper.
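The core idea, a shortcut connection that adds a block’s input to its output, can be sketched in a few lines of NumPy. This is a hypothetical toy illustration (random weights, fully-connected layers), not code from any of the listed papers:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, W1, W2):
    """A minimal fully-connected residual block: output = x + F(x).
    The identity shortcut lets gradients flow past F unchanged, which
    is what makes very deep networks trainable."""
    return x + W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W1 = rng.standard_normal((d, d)) * 0.1  # small random toy weights
W2 = rng.standard_normal((d, d)) * 0.1

y = residual_block(x, W1, W2)
print(np.linalg.norm(y - x))  # the block only adds a small correction to x
```

With zero weights the block reduces exactly to the identity, which is why stacking many such blocks does not degrade the signal.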

### 2. Deep Learning for Traffic Sign Detection and Recognition

Traffic Sign Detection and Recognition is key functionality for self-driving cars. This posting has recent papers in this area. Also check out the related posting: Deep Learning for Vehicle Detection and Classification.

### 3. Deep Learning for Vehicle Detection and Classification

This posting has recent papers about vehicle (e.g. car) detection and classification, e.g. for self-driving/autonomous cars. Related: also check out Nvidia’s End-to-End Deep Learning for Self-Driving Cars and Udacity’s Self-Driving Car Engineer (Nanodegree).

### 4. Deep Learning with Long Short-Term Memory (LSTM)

This blog post has some recent papers about Deep Learning with Long Short-Term Memory (LSTM). To get started I recommend checking out Christopher Olah’s Understanding LSTM Networks and Andrej Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks. This blog post is complemented by Deep Learning with Recurrent/Recursive Neural Networks (RNN) — ICLR 2017 Discoveries.
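For readers who want to see the mechanics rather than only the papers, here is a sketch of a single LSTM step in NumPy, using the standard gate formulation. All weights below are random toy values, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: forget, input, and output gates plus a
    candidate cell update. W maps the input, U the previous hidden
    state; all four gates are computed in one stacked multiply."""
    hidden = h_prev.size
    z = W @ x + U @ h_prev + b          # shape (4*hidden,)
    f = sigmoid(z[0*hidden:1*hidden])   # forget gate
    i = sigmoid(z[1*hidden:2*hidden])   # input gate
    o = sigmoid(z[2*hidden:3*hidden])   # output gate
    g = np.tanh(z[3*hidden:4*hidden])   # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 3, 5
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(4):                      # run a short input sequence
    x_t = rng.standard_normal(n_in)
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape, c.shape)
```

The multiplicative forget gate `f` is what lets the cell state carry information across many time steps without vanishing.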

### 5. Deep Learning in Finance

This posting has recent publications about Deep Learning in Finance (e.g. stock market prediction).

### 6. Deep Learning for Information Retrieval and Learning to Rank

This posting is about Deep Learning for Information Retrieval and Learning to Rank (i.e. of interest if you are developing search engines). The posting is complemented by the posting Deep Learning for Question Answering. To get started I recommend checking out Jianfeng Gao’s (Deep Learning Technology Center at Microsoft Research) presentation Deep Learning for Web Search and Natural Language Processing.

Of partial relevance are the posting Deep Learning for Sentiment Analysis, the posting about Embedding for NLP with Deep Learning, the posting about Deep Learning for Natural Language Processing (ICLR 2017 discoveries), and the posting about Deep Learning for Recommender Systems.

### 7. Deep Learning for Question Answering

This posting presents recent publications related to Deep Learning for Question Answering. Question Answering is described as “a computer science discipline within the fields of information retrieval and natural language processing (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language”. I’ll also publish postings about Deep Learning for Information Retrieval and Learning to Rank today.

### 8. Ensemble Deep Learning

Ensemble-based Machine Learning has been used with success in several Kaggle competitions, and this year the ImageNet competition was also dominated by ensembles in Deep Learning; e.g. the Trimps-Soushen team from the 3rd Research Institute of the Ministry of Public Security (China) used a combination of Inception, Inception-ResNet, ResNet and Wide Residual Network models to win the object classification/localization challenge. This blog post has recent papers related to ensembles in Deep Learning.
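The simplest form of such an ensemble, averaging the class probabilities of several trained models, can be sketched as follows. The logits below are toy values, not outputs of the actual ImageNet models:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_per_model):
    """Average the class probabilities of several models and take the
    argmax -- the simplest form of model ensembling, similar in spirit
    to how ImageNet-winning teams combine multiple trained networks."""
    probs = [softmax(l) for l in logits_per_model]
    avg = np.mean(probs, axis=0)
    return np.argmax(avg, axis=-1), avg

# Toy example: three "models" scoring 2 samples over 3 classes.
m1 = np.array([[2.0, 1.0, 0.1], [0.2, 0.1, 3.0]])
m2 = np.array([[1.5, 1.2, 0.3], [0.0, 0.5, 2.5]])
m3 = np.array([[1.8, 0.9, 0.2], [0.1, 0.3, 2.8]])

labels, avg_probs = ensemble_predict([m1, m2, m3])
print(labels)  # class with the highest averaged probability per sample
```

Averaging probabilities reduces the variance of individual models; in practice, teams often also weight models by validation accuracy.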

### 9. Deep Learning for Sentiment Analysis

Recently I published Embedding for NLP with Deep Learning (e.g. word2vec and follow-ups) and Deep Learning for Natural Language Processing — ICLR 2017 Discoveries. This posting is also mostly NLP-related, since it provides recent papers related to Deep Learning for Sentiment Analysis, but it also has examples of other types of sentiment (e.g. image sentiment).

### 10. Deep Learning with Gaussian Process

A Gaussian Process is a statistical model where observations occur in a continuous domain; to learn more, check out the tutorial on Gaussian Processes by Univ. of Cambridge’s Zoubin G. A Gaussian Process is an infinite-dimensional generalization of the multivariate normal distribution.

Researchers from the University of Sheffield — Andreas C. Damianou and Neil D. Lawrence — started using Gaussian Processes with Deep Belief Networks (in 2013). This blog post contains recent papers related to combining Deep Learning with Gaussian Processes.
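To make the definition above concrete, here is a small NumPy sketch that draws sample functions from a Gaussian Process prior with an RBF (squared-exponential) kernel. It is illustrative only, not code from any of the cited papers:

```python
import numpy as np

def rbf_kernel(xa, xb, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two sets of points."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

# A Gaussian Process says: any finite set of function values has a joint
# multivariate normal distribution, with covariance given by a kernel.
x = np.linspace(0, 5, 50)
K = rbf_kernel(x, x)

rng = np.random.default_rng(2)
# Draw 3 sample functions from the GP prior (jitter added for stability).
samples = rng.multivariate_normal(
    np.zeros(len(x)), K + 1e-8 * np.eye(len(x)), size=3
)
print(samples.shape)  # (3, 50): three functions evaluated at 50 points
```

Nearby inputs get highly correlated outputs under the RBF kernel, which is what makes the sampled functions smooth.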

### 11. Deep Learning for Clustering

### 12. Deep Learning in combination with EEG electrical signals from the brain

EEG (Electroencephalography) is the measurement of electrical signals in the brain. It has long been used for medical purposes (e.g. diagnosis of epilepsy), and has in more recent years also been used in Brain Computer Interfaces (BCI) — *note: if BCI is new to you, don’t get overly excited about it, since these interfaces are, in my opinion, still quite immature. But they are definitely interesting in a longer-term perspective.*

This blog post gives an overview of recent research on Deep Learning in combination with EEG, e.g. for classification, feature representation, diagnosis, safety (cognitive state of drivers) and hybrid methods (Computer Vision or Speech Recognition together with EEG and Deep Learning).

### 13. Embedding for NLP with Deep Learning

Word Embedding was introduced by Bengio in the early 2000s, and interest in it really accelerated when Google presented word2vec in 2013.

This blog post has recent papers related to embedding for Natural Language Processing with Deep Learning. Application areas covered in the papers include finance (stock market prediction), biomedical text analysis, part-of-speech tagging, sentiment analysis, and pharmacology (drug adverse effects).

I recommend starting with the paper In Defense of Word Embedding for Generic Text Representation.
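The property that makes embeddings useful, related words getting nearby vectors, can be illustrated with cosine similarity on a tiny hand-made embedding table. The vectors below are entirely hypothetical; real word2vec-style embeddings are learned from co-occurrence statistics:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1 = same direction."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy hand-made embeddings (hypothetical values, illustration only):
# semantically related words are placed close together.
emb = {
    "stock":  np.array([0.9, 0.8, 0.1]),
    "market": np.array([0.85, 0.75, 0.2]),
    "drug":   np.array([0.1, 0.2, 0.95]),
}

print(cosine_similarity(emb["stock"], emb["market"]))  # high (near 1)
print(cosine_similarity(emb["stock"], emb["drug"]))    # noticeably lower
```

Downstream tasks such as sentiment analysis or part-of-speech tagging use these dense vectors as input features instead of sparse one-hot words.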

### 14. Zero-Shot (Deep) Learning

Zero-Shot Learning is making predictions for classes without having seen any training examples of them, relying instead on auxiliary information such as attribute descriptions (as opposed to other types of learning that typically require large amounts of training examples per class). I recommend having a look at An embarrassingly simple approach to zero-shot learning first.
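In the spirit of that paper, here is a minimal sketch of attribute-based zero-shot classification. All data, class names and attribute values below are synthetic and hypothetical; the point is only the mechanism: a class never seen in training is recognized purely from its attribute description:

```python
import numpy as np

rng = np.random.default_rng(3)

# Attribute descriptions per class (e.g. striped, four-legged, hoofed).
attrs = {
    "horse": np.array([0.0, 1.0, 1.0]),   # seen during training
    "tiger": np.array([1.0, 1.0, 0.0]),   # seen during training
    "zebra": np.array([1.0, 1.0, 1.0]),   # UNSEEN: described only by attributes
}

# Training features for the seen classes: feature = attributes + noise.
X, A = [], []
for name in ("horse", "tiger"):
    for _ in range(20):
        X.append(attrs[name] + 0.05 * rng.standard_normal(3))
        A.append(attrs[name])
X, A = np.array(X), np.array(A)

# Learn a linear map from feature space to attribute space (least squares).
M, *_ = np.linalg.lstsq(X, A, rcond=None)

# A test example of the unseen class "zebra":
x_test = attrs["zebra"] + 0.05 * rng.standard_normal(3)
pred_attrs = x_test @ M

# Classify by nearest class-attribute vector -- including the unseen class.
names = list(attrs)
dists = [np.linalg.norm(pred_attrs - attrs[n]) for n in names]
predicted = names[int(np.argmin(dists))]
print(predicted)
```

The attribute space acts as a bridge: the mapping is trained only on horses and tigers, yet it can still place a zebra correctly because zebras are described by the same attributes.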

### 15. Deep Learning for Alzheimer Diagnostics and Decision Support

Alzheimer’s Disease is the cause of 60–70% of cases of dementia; the costs associated with diagnosis, treatment and care of patients with it are estimated to be in the range of a hundred billion dollars in the USA. This blog post has some recent papers related to using Deep Learning for diagnostics and decision support related to Alzheimer’s Disease.

### 16. Recommender Systems with Deep Learning

This blog post presents recent research in Recommender Systems (collaborative filtering) with Deep Learning. To get started I recommend having a look at A Survey and Critique of Deep Learning in Recommender Systems.

### 17. Deep Learning for Ultrasound Analysis

Ultrasound (also called sonography) uses sound waves with a higher frequency than humans can hear. It is frequently used in medical settings, e.g. for checking that a pregnancy is going well with fetal ultrasound. For more about ultrasound data formats, check out the Ultrasound Research Interface. This blog post has recent publications about applying Deep Learning to analyzing ultrasound data.

### 18. Deep Learning for Music

Deep Learning (creative AI) can potentially be used for music analysis and music creation. DeepMind’s WaveNet is a step in that direction. This blog post presents recent papers in Deep Learning for Music.

### 19. Regularized Deep Networks — ICLR 2017 Discoveries

This blog post gives an overview of papers related to using Regularization in **Deep Learning** submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Regularization in Deep Learning check out: www.deeplearningbook.org/contents/regularization.html

### 20. Unsupervised Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to **Unsupervised Deep Learning** submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Unsupervised Deep Learning check out: Ruslan Salakhutdinov’s video Foundations of Unsupervised Deep Learning.

### 21. Autoencoders in Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to **autoencoders** submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about autoencoders check out the Stanford (UFLDL) tutorial about Autoencoders, Carl Doersch’s Tutorial on Variational Autoencoders, DeepLearning.TV’s video tutorial on Autoencoders, or Goodfellow, Bengio and Courville’s Deep Learning book’s chapter on Autoencoders.
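As a complement to the tutorials above, here is a minimal linear autoencoder trained with plain gradient descent in NumPy: compress 4-dimensional inputs to a 2-dimensional code and reconstruct them. The data is synthetic and the whole thing is a toy sketch, not code from any of the submissions:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 4))
# Make the data approximately 2-dimensional so a 2-d code can capture it.
X[:, 2] = X[:, 0] * 0.9
X[:, 3] = X[:, 1] * 0.9

W_enc = rng.standard_normal((4, 2)) * 0.1   # encoder weights
W_dec = rng.standard_normal((2, 4)) * 0.1   # decoder weights

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error of encode-then-decode."""
    R = X @ W_enc @ W_dec
    return np.mean((R - X) ** 2)

lr = 0.1
initial = loss(X, W_enc, W_dec)
for _ in range(2000):
    code = X @ W_enc              # encode: (100, 2)
    R = code @ W_dec              # decode: (100, 4)
    G = 2 * (R - X) / X.size      # gradient of the MSE wrt R
    g_dec = code.T @ G            # gradient wrt decoder weights
    g_enc = X.T @ (G @ W_dec.T)   # gradient wrt encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
final = loss(X, W_enc, W_dec)
print(initial, "->", final)       # reconstruction error goes down
```

Real autoencoders add nonlinearities between the layers; the linear case shown here essentially recovers a PCA-like subspace, which keeps the example short and easy to verify.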

### 22. Stochastic/Policy Gradients in Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to stochastic/policy gradient submitted to ICLR 2017, see underneath for the list of papers.
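For readers new to the topic, the core of the REINFORCE policy-gradient method can be sketched on a toy two-armed bandit: a softmax policy over two actions, nudged toward the action with the higher observed reward. The reward values are hypothetical, and this is not code from any of the submitted papers:

```python
import numpy as np

rng = np.random.default_rng(5)
true_rewards = np.array([0.2, 0.8])     # arm 1 pays more on average
theta = np.zeros(2)                     # policy parameters (logits)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
baseline = 0.0                          # running reward baseline
for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)          # sample an action from the policy
    r = true_rewards[a] + 0.1 * rng.standard_normal()   # noisy reward
    baseline = 0.99 * baseline + 0.01 * r
    grad_logp = -probs
    grad_logp[a] += 1.0                 # d log pi(a) / d theta for softmax
    theta += lr * (r - baseline) * grad_logp   # REINFORCE update

final_probs = softmax(theta)
print(final_probs)                      # probability mass shifts to arm 1
```

The baseline subtraction does not change the expected gradient but reduces its variance, which is the standard trick that makes policy gradients usable in practice.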

### 23. Deep Learning with Recurrent/Recursive Neural Networks (RNN) — ICLR 2017 Discoveries

This blog post gives an overview of Deep Learning with Recurrent/Recursive Neural Networks (RNN) related papers submitted to ICLR 2017, see underneath for the list of papers. If you want to learn more about RNNs check out Andrej Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks and Pascanu, Gulcehre, Cho and Bengio’s How to Construct Deep Recurrent Neural Networks.

### 24. Deep Learning with Generative and Generative Adversarial Networks — ICLR 2017 Discoveries

This blog post gives an overview of **Deep Learning with Generative and Adversarial Networks** related papers submitted to ICLR 2017, see underneath for the list of papers. Want to learn about these topics? See OpenAI’s article about Generative Models and Ian Goodfellow et al.’s paper about Generative Adversarial Networks.

### 25. Deep Learning for Natural Language Processing — ICLR 2017 Discoveries

This blog post gives an overview of **Natural Language Processing** related papers submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Deep Learning with NLP check out Stanford’s CS224d: Deep Learning for Natural Language Processing.