Deep Learning in Energy Production


This blog post has recent publications about the use of Deep Learning in the context of Energy Production (wind, gas and oil), e.g. wind power prediction, turbine risk assessment, reservoir discovery and price forecasting.

Best regards,

Amund Tveit


Year  Title Author
2017 Short-term Wind Energy Prediction Algorithm Based on SAGA-DBNs  W Fei, WU Zhong
2017 Wind Power Prediction using Deep Neural Network based Meta Regression and Transfer Learning  AS Qureshi, A Khan, A Zameer, A Usman
2017 Wind Turbine Failure Risk Assessment Model Based on DBN  C Fei, F Zhongguang
2017 The optimization of wind power interval forecast  X Yu, H Zang
2016 Deep Learning for Wind Speed Forecasting in Northeastern Region of Brazil  AT Sergio, TB Ludermir
2016 A very short term wind power prediction approach based on Multilayer Restricted Boltzmann Machine  X Peng, L Xiong, J Wen, Y Xu, W Fan, S Feng, B Wang
2016 Short-term prediction of wind power based on deep Long Short-Term Memory  Q Xiaoyun, K Xiaoning, Z Chao, J Shuai, M Xiuda
2016 Deep belief network based deterministic and probabilistic wind speed forecasting approach  HZ Wang, GB Wang, GQ Li, JC Peng, YT Liu
2016 A hybrid wind power prediction method  Y Tao, H Chen
2016 Deep learning based ensemble approach for probabilistic wind power forecasting  H Wang, G Li, G Wang, J Peng, H Jiang, Y Liu
2016 A hybrid wind power forecasting model based on data mining and wavelets analysis  R Azimi, M Ghofrani, M Ghayekhloo
2016 ELM Based Representational Learning for Fault Diagnosis of Wind Turbine Equipment  Z Yang, X Wang, PK Wong, J Zhong
2015 Deep Neural Networks for Wind Energy Prediction  D Díaz, A Torres, JR Dorronsoro
2015 Predictive Deep Boltzmann Machine for Multiperiod Wind Speed Forecasting  CY Zhang, CLP Chen, M Gan, L Chen
2015 Resilient Propagation for Multivariate Wind Power Prediction  J Stubbemann, NA Treiber, O Kramer
2015 Transfer learning for short-term wind speed prediction with deep neural networks  Q Hu, R Zhang, Y Zhou
2014 Wind Power Prediction and Pattern Feature Based on Deep Learning Method  Y Tao, H Chen, C Qiu


Year  Title Author
2017   Inversion Of The Permeability Of A Tight Gas Reservoir With The Combination Of A Deep Boltzmann Kernel …  L Zhu, C Zhang, Y Wei, X Zhou, Y Huang, C Zhang
2017   Deep Learning: Chance and Challenge for Deep Gas Reservoir Identification  C Junxing, W Shikai
2016   Finite-sensor fault-diagnosis simulation study of gas turbine engine using information entropy and deep belief networks  D Feng, M Xiao, Y Liu, H Song, Z Yang, Z Hu
2015   On Accurate and Reliable Anomaly Detection for Gas Turbine Combustors: A Deep Learning Approach  W Yan, L Yu
2015   A Review of Datasets and Load Forecasting Techniques for Smart Natural Gas and Water Grids: Analysis and Experiments.  M Fagiani, S Squartini, L Gabrielli, S Spinsante
2015   Short-term load forecasting for smart water and gas grids: A comparative evaluation  M Fagiani, S Squartini, R Bonfigli, F Piazza
2015   The early-warning model of equipment chain in gas pipeline based on DNN-HMM  J Qiu, W Liang, X Yu, M Zhang, L Zhang


Year  Title Author
2017   Development of a New Correlation for Bubble Point Pressure in Oil Reservoirs Using Artificial Intelligent Technique  S Elkatatny, M Mahmoud
2017   A deep learning ensemble approach for crude oil price forecasting  Y Zhao, J Li, L Yu
2016   Automatic Detection and Classification of Oil Tanks in Optical Satellite Images Based on Convolutional Neural Network  Q Wang, J Zhang, X Hu, Y Wang
2015   A Hierarchical Oil Tank Detector With Deep Surrounding Features for High-Resolution Optical Satellite Imagery  L Zhang, Z Shi, J Wu

Traffic Sign Detection with Convolutional Neural Networks


Making self-driving cars work requires several technologies and methods pulling in the same direction (e.g. Radar/Lidar, cameras, control theory and Deep Learning). The online Self-Driving Car Nanodegree from Udacity (divided into 3 terms) is probably the best way to learn more about the topic (see [Term 1], [Term 2] and [Term 3] for more details about each term). The coolest part is that you can actually run your code on a real self-driving car towards the end of term 3 (I am currently in the middle of term 1 – a highly recommended course!).

Note: before taking this course I recommend taking Udacity’s Deep Learning Nanodegree Foundations, since most term 1 projects require some hands-on experience with Deep Learning.

Traffic Sign Detection with Convolutional Neural Networks

This blog post is a writeup of my (non-perfect) approach to German traffic sign detection (a project in the course) with Convolutional Neural Networks (in TensorFlow) – a variant of LeNet with Dropout and (the new) SELU – Self-Normalizing Neural Networks. The main effect of SELU was that the network gained classification accuracy quickly (even in the first epoch), but it didn’t lead to higher final accuracy than using batch normalization + RELU. Data augmentation in particular, and perhaps a deeper network, could have improved the performance, I believe.

For other approaches (e.g. R-CNN and cascaded deep networks) see the blog post: Deep Learning for Vehicle Detection and Recognition.

UPDATE – 2017-July-15:

If you thought Traffic Sign Detection from modern cars was an entirely solved problem, think again:



Best regards,

Amund Tveit

1. Basic summary of the German Traffic Sign Data set.

I used numpy shape to calculate summary statistics of the traffic signs data set:

  • The size of the training set is 34799
  • The size of the validation set is 4410
  • The size of the test set is 12630
  • The shape of a traffic sign image is 32x32x3 (3 color channels, RGB)
  • The number of unique classes/labels in the data set is 43
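A minimal sketch of how these numbers can be computed with numpy shape, assuming the data has already been loaded into arrays (the arrays below are synthetic stand-ins with the same shapes as the real traffic-sign data):

```python
import numpy as np

# Synthetic stand-ins matching the shapes of the real pickled data.
X_train = np.zeros((34799, 32, 32, 3), dtype=np.uint8)
X_valid = np.zeros((4410, 32, 32, 3), dtype=np.uint8)
X_test = np.zeros((12630, 32, 32, 3), dtype=np.uint8)
y_train = np.arange(34799) % 43  # labels in 0..42

# Summary statistics via numpy shape.
n_train = X_train.shape[0]
n_valid = X_valid.shape[0]
n_test = X_test.shape[0]
image_shape = X_train.shape[1:]      # (32, 32, 3)
n_classes = len(np.unique(y_train))  # 43
```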

2. Visualization of the train, validation and test dataset.

Here is an exploratory visualization of the data set: a bar chart showing the normalized distribution of data across the 43 traffic sign classes. The key takeaway is that the relative number of data points varies quite a bit between classes, e.g. from around 6.5% (e.g. class 1) down to 0.05% (e.g. class 37), i.e. a factor of roughly 130 (6.5% / 0.05%), which can potentially impact classification performance.

[Figure: bar chart of the normalized distribution of the 43 traffic sign classes]

3 Design of Architecture

3.1 Preprocessing of images

I did no grayscale conversion or other conversion of the train/test/validation images (they came preprocessed). The images from the Internet were read using PIL, converted to RGB (from RGBA), resized to 32×32 and converted to numpy arrays before normalization.

All images were normalized so that pixels in each color channel (RGB – 3 channels with values between 0 and 255) fall between -0.5 and 0.5, by computing (value - 128)/255. I did no data augmentation.
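A sketch of that normalization, assuming the images are uint8 numpy arrays:

```python
import numpy as np

# Map uint8 pixel values in [0, 255] to roughly [-0.5, 0.5] per channel.
def normalize(images):
    return (images.astype(np.float32) - 128.0) / 255.0

pixels = np.array([[0, 128, 255]], dtype=np.uint8)
normalized = normalize(pixels)  # roughly [-0.5, 0.0, 0.5]
```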

Here are sample images from the training set:

[Figure: sample images from the training set]

3.2 Model Architecture

Given the relatively low resolution of the images I started with the LeNet example provided in the lectures, but to improve training I added Dropout (in the early layers) together with RELU rectifier functions. I had recently read about the self-normalizing rectifier function – SELU – so I decided to try that instead of RELU. It gave no better end result after many epochs, but trained much faster (reaching > 90% accuracy in one epoch), so I kept SELU in the final model. For more information about SELU check out the paper Self-Normalizing Neural Networks from Johannes Kepler University in Linz, Austria.
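For reference, the SELU activation itself is simple; here is a minimal numpy sketch (the two constants are the fixed values derived in the Self-Normalizing Neural Networks paper):

```python
import numpy as np

# SELU constants from the Self-Normalizing Neural Networks paper.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # Scaled identity for positive inputs, scaled exponential for negative.
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

out = selu(np.array([-1.0, 0.0, 1.0]))
```

In TensorFlow this corresponds to using a SELU activation in place of RELU after each layer.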

My final model consisted of the following layers:

Layer | Description
Input | 32x32x3 RGB image
Convolution 5×5 | 1×1 stride, valid padding, outputs 28x28x6
Dropout | keep_prob = 0.9
Max Pooling | 2×2 stride, outputs 14x14x6
Convolution 5×5 | 1×1 stride, valid padding, outputs 10x10x16
Dropout | keep_prob = 0.9
Max Pooling | 2×2 stride, outputs 5x5x16
Flatten | output dimension 400
Fully connected | output dimension 120
Fully connected | output dimension 84
Fully connected | output dimension 84
Fully connected | output dimension 43
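The output dimensions in the table can be checked with a quick sketch of the shape arithmetic (a 'valid'-padding convolution shrinks each spatial dimension by kernel − 1, and each 2×2 max pooling halves it):

```python
# Shape arithmetic for the layer table above.
def conv_valid(size, kernel):
    return size - kernel + 1

size = conv_valid(32, 5)    # 28 -> 28x28x6 after the first convolution
size = size // 2            # 14 -> 14x14x6 after max pooling
size = conv_valid(size, 5)  # 10 -> 10x10x16
size = size // 2            # 5  -> 5x5x16
flat = size * size * 16     # 400, the flattened dimension
```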

3.3 Training of Model

To train the model, I used the Adam optimizer with a learning rate of 0.002, 20 epochs (it converged fast with SELU) and a batch size of 256 (running on a GTX 1070 with 8GB of GPU RAM).

3.4 Approach to find solution and getting accuracy > 0.93

Adding dropout to LeNet improved test accuracy, and SELU improved training speed. The originally partitioned data sets were quite unbalanced (when plotted), so reading all the data, shuffling, and re-creating the training/validation/test sets also helped. I considered using Keras and fine-tuning a pretrained model (e.g. Inception v3), but a big model on such small images could lead to overfitting (not entirely sure about that, though), and reducing the input size might lead to long training times (it looks like fine-tuning works best when you keep the same input size and only change the output classes).

My final model results were:

  • validation set accuracy of 0.976 (ranging between 0.975 and 0.982)
  • test set accuracy of 0.975

If an iterative approach was chosen:

  • What was the first architecture that was tried and why was it chosen?

I started with LeNet and incrementally added dropout and then several SELU layers. I also added one more fully connected layer.

  • What were some problems with the initial architecture?

None fundamental, but the results were not great before adding dropout (to avoid overfitting).

  • Which parameters were tuned? How were they adjusted and why?

I tried several combinations of learning rates. After adding SELU I could reduce the number of epochs. I used the same dropout keep rate throughout.

Since the difference between validation accuracy and test accuracy is very low, the model seems to be working well. The loss is also quite low (0.02), so there is most likely little left to gain, at least without changing the model a lot.

4 Test a Model on New Images

4.1. Choose five German traffic signs found on the web

Here are five German traffic signs that I found on the web:

[Figure: five German traffic sign images found on the web]

In my first pick of images I didn’t check that the signs actually were among the 43 classes the model was built for, and some were not, making them impossible to classify correctly. But I got interesting results (regarding finding similar signs) for the wrongly classified ones, so I replaced only 2 of them with sign images that actually were covered by the model, i.e. leaving 3 of them still impossible to classify.

Here are the results of the prediction:

Image | Prediction
Priority road | Priority road
Side road | Speed limit (50km/h)
Adult and child on road | Turn left ahead
Two way traffic ahead | Beware of ice/snow
Speed limit (60km/h) | Speed limit (60km/h)

The model was able to correctly guess 2 of the 5 traffic signs, which gives an accuracy of 40%. The other ones it can’t classify correctly, but the 2nd prediction for sign 3 – “adult and child on road” – is interesting, since it suggests “Go straight or right”, which is quite visually similar (if you blur the innermost part of each sign you get almost the same image).


Deep Learning for Magnetic Resonance Imaging (MRI)


Magnetic Resonance Imaging (MRI) can be used in many types of diagnosis, e.g. cancer, Alzheimer’s, cardiac and muscle/skeleton issues. This blog post has recent publications of Deep Learning applied to MRI (health-related) data, e.g. for segmentation, detection, denoising and classification.

MRI is described in Wikipedia as:

    Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body in both health and disease. MRI scanners use strong magnetic fields, radio waves, and field gradients to generate images of the organs in the body.

Best regards,
Amund Tveit

Year  Title Author
2017   Residual and Plain Convolutional Neural Networks for 3D Brain MRI Classification  S Korolev, A Safiullin, M Belyaev, Y Dodonova
2017   Automatic segmentation of the right ventricle from cardiac MRI using a learning‐based approach  MR Avendi, A Kheradvar, H Jafarkhani
2017   Learning a Variational Network for Reconstruction of Accelerated MRI Data  K Hammernik, T Klatzer, E Kobler, MP Recht
2017   A 2D/3D Convolutional Neural Network for Brain White Matter Lesion Detection in Multimodal MRI  L Roa
2017   On hierarchical brain tumor segmentation in MRI using fully convolutional neural networks: A preliminary study  S Pereira, A Oliveira, V Alves, CA Silva
2017   Classification of breast MRI lesions using small-size training sets: comparison of deep learning approaches  G Amit, R Ben
2017   A deep learning network for right ventricle segmentation in short-axis MRI  GN Luo, R An, KQ Wang, SY Dong, HG Zhang
2017   A novel left ventricular volumes prediction method based on deep learning network in cardiac MRI  GN Luo, GX Sun, KQ Wang, SY Dong, HG Zhang
2017   Classification of MRI data using Deep Learning and Gaussian Process-based Model Selection  H Bertrand, M Perrot, R Ardon, I Bloch
2017   Using Deep Learning to Segment Breast and Fibroglandular Tissue in MRI Volumes  MU Dalmş, G Litjens, K Holland, A Setio, R Mann
2017   Automatic Liver and Tumor Segmentation of CT and MRI Volumes using Cascaded Fully Convolutional Neural Networks  PF Christ, F Ettlinger, F Grün, MEA Elshaera, J Lipkova
2017   Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning  P Korfiatis, TL Kline, BJ Erickson
2017   Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning  T Zhou, I Icke, B Dogdas, S Parimal, S Sampath
2017   Deep artifact learning for compressed sensing and parallel MRI  D Lee, J Yoo, JC Ye
2017   Deep Generative Adversarial Networks for Compressed Sensing Automates MRI  M Mardani, E Gong, JY Cheng, S Vasanawala
2017   3D Motion Modeling and Reconstruction of Left Ventricle Wall in Cardiac MRI  D Yang, P Wu, C Tan, KM Pohl, L Axel, D Metaxas
2017   Estimation of the volume of the left ventricle from MRI images using deep neural networks  F Liao, X Chen, X Hu, S Song
2017   A fully automatic deep learning method for atrial scarring segmentation from late gadolinium-enhanced MRI images  G Yang, X Zhuang, H Khan, S Haldar, E Nyktari, X Ye
2017   Age estimation from brain MRI images using deep learning  TW Huang, HT Chen, R Fujimoto, K Ito, K Wu, K Sato
2017   Segmenting Atrial Fibrosis from Late Gadolinium-Enhanced Cardiac MRI by Deep-Learned Features with Stacked Sparse Auto-Encoders  S Haldar, E Nyktari, X Ye, G Slabaugh, T Wong
2017   Deep Residual Learning For Compressed Sensing Mri  D Lee, J Yoo, JC Ye
2017   Prostate cancer diagnosis using deep learning with 3D multiparametric MRI  S Liu, H Zheng, Y Feng, W Li
2017   Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions  Z Akkus, A Galimzianova, A Hoogi, DL Rubin
2016   Classification of Alzheimer’s Disease Structural MRI Data by Deep Learning Convolutional Neural Networks  S Sarraf, G Tofighi
2016   De-noising of Contrast-Enhanced MRI Sequences by an Ensemble of Expert Deep Neural Networks  A Benou, R Veksler, A Friedman, TR Raviv
2016   A Combined Deep-Learning and Deformable-Model Approach to Fully Automatic Segmentation of the Left Ventricle in Cardiac MRI  MR Avendi, A Kheradvar, H Jafarkhani
2016   Applying machine learning to automated segmentation of head and neck tumour volumes and organs at risk on radiotherapy planning CT and MRI scans  C Chu, J De Fauw, N Tomasev, BR Paredes, C Hughes
2016   A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI  PV Tran
2016   An Overview of Techniques for Cardiac Left Ventricle Segmentation on Short-Axis MRI  A Krasnobaev, A Sozykin
2016   Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: a clinical study  J Dolz, N Betrouni, M Quidet, D Kharroubi, HA Leroy
2016   Hough-CNN: Deep Learning for Segmentation of Deep Brain Regions in MRI and Ultrasound  F Milletari, SA Ahmadi, C Kroll, A Plate, V Rozanski
2016   Mental Disease Feature Extraction with MRI by 3D Convolutional Neural Network with Multi-channel Input  L Cao, Z Liu, X He, Y Cao, K Li
2016   Deep learning trends for focal brain pathology segmentation in MRI  M Havaei, N Guizard, H Larochelle, PM Jodoin
2016   Identification of Water and Fat Images in Dixon MRI Using Aggregated Patch-Based Convolutional Neural Networks  L Zhao, Y Zhan, D Nickel, M Fenchel, B Kiefer, XS Zhou
2016   Deep MRI brain extraction: A 3D convolutional neural network for skull stripping  J Kleesiek, G Urban, A Hubert, D Schwarz
2016   Active appearance model and deep learning for more accurate prostate segmentation on MRI  R Cheng, HR Roth, L Lu, S Wang, B Turkbey
2016   Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation  RPK Poudel, P Lamata, G Montana
2016   Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis  HK van der Burgh, R Schmidt, HJ Westeneng
2016   Semantic-Based Brain MRI Image Segmentation Using Convolutional Neural Network  Y Chou, DJ Lee, D Zhang
2016   Abstract WP41: Predicting Acute Ischemic Stroke Tissue Fate Using Deep Learning on Source Perfusion MRI  KC Ho, S El
2016   A new ASM framework for left ventricle segmentation exploring slice variability in cardiac MRI volumes  C Santiago, JC Nascimento, JS Marques
2015   Crohn’s disease segmentation from mri using learned image priors  D Mahapatra, P Schüffler, F Vos, JM Buhmann
2015   Discovery Radiomics for Multi-Parametric MRI Prostate Cancer Detection  AG Chung, MJ Shafiee, D Kumar, F Khalvati
2015   Real-time Dynamic MRI Reconstruction using Stacked Denoising Autoencoder  A Majumdar
2015   q-Space Deep Learning for Twelve-Fold Shorter and Model-Free Diffusion MRI Scans  V Golkov, A Dosovitskiy, P Sämann, JI Sperl

Deep Learning for Image Super-Resolution (Scale Up)


Scaling down images is a craft, scaling up images is an art

When scaling down to a lower resolution you typically need to remove pixels, but when scaling up you need to invent new pixels. Some Deep Learning models with Convolutional Neural Networks (and frequently Deconvolutional layers) have shown success at scaling up images; this is called Image Super-Resolution. These models are typically trained by taking high resolution images, reducing them to a lower resolution, and then training in the opposite direction. Partially related: I also recommend checking out Odena et al.’s publication Deconvolution and Checkerboard Artifacts, which goes into more detail about one of the core operators used in Image Super-Resolution.
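A minimal sketch of how such training pairs are typically generated, using simple block-averaging as a crude stand-in for the bicubic downscaling usually used (array names are illustrative):

```python
import numpy as np

# Build a (low-res, high-res) training pair: start from a high-resolution
# patch and derive its low-resolution counterpart by block-averaging.
# A super-resolution network then learns the mapping lr_patch -> hr_patch.
def downscale(hr, factor=2):
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

hr_patch = np.arange(16, dtype=np.float32).reshape(4, 4)
lr_patch = downscale(hr_patch)  # shape (2, 2)
```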

Blog post Illustration Source: Eric Esteve’s 2013 article: Super Resolution bring high end camera image quality to smartphone.

Best regards,

Amund Tveit

Year  Title Author
2017   GUN: Gradual Upsampling Network for single image super-resolution  Y Zhao, R Wang, W Dong, W Jia, J Yang, X Liu, W Gao
2017   Dual Recovery Network with Online Compensation for Image Super-Resolution  S Xia, W Yang, T Zhao, J Liu
2017   A New Single Image Super-resolution Method Based on the Infinite Mixture Model  P Cheng, Y Qiu, X Wang, K Zhao
2017   Underwater Image Super-resolution by Descattering and Fusion  H Lu, Y Li, S Nakashima, H Kim, S Serikawa
2017   Single Image Super-Resolution with a Parameter Economic Residual-Like Convolutional Neural Network  Z Yang, K Zhang, Y Liang, J Wang
2017   Single Image Super-Resolution via Adaptive Transform-Based Nonlocal Self-Similarity Modeling and Learning-Based Gradient Regularization  H Chen, X He, L Qing, Q Teng
2017   Ensemble Based Deep Networks for Image Super-Resolution  Z Huang, L Wang, Y Gong, C Pan
2017   Single Image Super-Resolution Using Multi-Scale Convolutional Neural Network  X Jia, X Xu, B Cai, K Guo
2017   Hyperspectral image super-resolution using deep convolutional neural network  Y Li, J Hu, X Zhao, W Xie, JJ Li
2016   Research on the Natural Image Super-Resolution Reconstruction Algorithm based on Compressive Perception Theory and Deep Learning Model  G Duan, W Hu, J Wang
2016   Image super-resolution with multi-channel convolutional neural networks  Y Kato, S Ohtani, N Kuroki, T Hirose, M Numa
2016   Image super-resolution reconstruction via RBM-based joint dictionary learning and sparse representation  Z Zhang, A Liu, Q Lei
2016   End-to-End Image Super-Resolution via Deep and Shallow Convolutional Networks  Y Wang, L Wang, H Wang, P Li
2016   Single image super-resolution using regularization of non-local steering kernel regression  K Zhang, X Gao, J Li, H Xia
2016   Single image super-resolution via blind blurring estimation and anchored space mapping  X Zhao, Y Wu, J Tian, H Zhang
2016   A Versatile Sparse Representation Based Post-Processing Method for Improving Image Super-Resolution  J Yang, J Guo, H Chao
2016   Robust Single Image Super-Resolution via Deep Networks with Sparse Prior.  D Liu, Z Wang, B Wen, J Yang, W Han, T Huang
2016   EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis  MSM Sajjadi, B Schölkopf, M Hirsch
2016   Is Image Super-resolution Helpful for Other Vision Tasks?  D Dai, Y Wang, Y Chen, L Van Gool
2016   Cluster-Based Image Super-resolution via Jointly Low-rank and Sparse Representation  N Han, Z Song, Y Li
2016   Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network  C Ledig, L Theis, F Huszar, J Caballero, A Aitken
2016   Image super-resolution using non-local Gaussian process regression  H Wang, X Gao, K Zhang, J Li
2016   A hybrid wavelet convolution network with sparse-coding for image super-resolution  X Gao, H Xiong
2016   Amortised MAP Inference for Image Super-resolution  CK Sønderby, J Caballero, L Theis, W Shi, F Huszár
2016   X-Ray fluorescence image super-resolution using dictionary learning  Q Dai, E Pouyet, O Cossairt, M Walton, F Casadio
2016   Image super-resolution based on convolution neural networks using multi-channel input  GY Youm, SH Bae, M Kim
2016   Deep Edge Guided Recurrent Residual Learning for Image Super-Resolution  W Yang, J Feng, J Yang, F Zhao, J Liu, Z Guo, S Yan
2016   Image Super-Resolution by PSOSEN of Local Receptive Fields Based Extreme Learning Machine  Y Song, B He, Y Shen, R Nian, T Yan
2016   Incorporating Image Priors with Deep Convolutional Neural Networks for Image Super-Resolution  Y Liang, J Wang, S Zhou, Y Gong, N Zheng
2015   Single Image Super-Resolution Via Bm3D Sparse Coding  K Egiazarian, V Katkovnik
2015   Learning a Deep Convolutional Network for Light-Field Image Super-Resolution  Y Yoon, HG Jeon, D Yoo, JY Lee, I Kweon
2015   Single Image Super-Resolution via Image Smoothing  Z Liu, Q Huang, J Li, Q Wang
2015   Deeply Improved Sparse Coding for Image Super-Resolution  Z Wang, D Liu, J Yang, W Han, T Huang
2015   Conditioned Regression Models for Non-Blind Single Image Super-Resolution  G Riegler, S Schulter, M Rüther, H Bischof
2015   How Useful Is Image Super-resolution to Other Vision Tasks?  D Dai, Y Wang, Y Chen, L Van Gool
2015   Learning Hierarchical Decision Trees for Single Image Super-Resolution  JJ Huang, WC Siu
2015   Single image super-resolution by approximated Heaviside functions  LJ Deng, W Guo, TZ Huang
2015   Jointly Optimized Regressors for Image Super-resolution  D Dai, R Timofte, L Van Gool
2015   Single Image Super-Resolution via Internal Gradient Similarity  Y Xian, Y Tian
2015   Image Super-Resolution Using Deep Convolutional Networks  C Dong, CC Loy, K He, X Tang
2015   Coupled Deep Autoencoder for Single Image Super-Resolution  K Zeng, J Yu, R Wang, C Li, D Tao
2015   Single Image Super-Resolution Using Maximizing Self-Similarity Prior  J Li, Y Wu, X Luo
2015   Accurate Image Super-Resolution Using Very Deep Convolutional Networks  J Kim, JK Lee, KM Lee
2015   Deeply-Recursive Convolutional Network for Image Super-Resolution  J Kim, JK Lee, KM Lee
2015   Single Face Image Super-Resolution via Solo Dictionary Learning  F Juefei
2014   Single image super-resolution via L0 image smoothing  Z Liu, Q Huang, J Li, Q Wang

Deep Learning for Acoustic Modelling


This blog post has an overview of papers related to acoustic modelling, primarily for speech recognition but also for speech generation (synthesis). See also the broader set of recent Deep Learning papers (73 at the time of writing) related to acoustics, covering speech recognition and other applications of acoustics.

Acoustic Modelling is described in Wikipedia as: “An acoustic model is used in Automatic Speech Recognition to represent the relationship between an audio signal and the phonemes or other linguistic units that make up speech. The model is learned from a set of audio recordings and their corresponding transcripts”. 

Blog Post Illustration Photo Source: Professor Mark Gales‘ (University of Cambridge) 2009 presentation Acoustic Modelling for Speech Recognition: Hidden Markov Models and Beyond?

Best regards,

Amund Tveit

Year  Title Author
2017   Investigation on acoustic modeling with different phoneme set for continuous Lhasa Tibetan recognition based on DNN method  H Wang, K Khyuru, J Li, G Li, J Dang, L Huang
2017   Personalized Acoustic Modeling By Weakly Supervised Multi-Task Deep Learning Using Acoustic Tokens  CK Wei, CT Chung, HY Lee, LS Lee
2017   I-vector estimation as auxiliary task for multi-task learning based acoustic modeling for automatic speech recognition  G Pironkov, S Dupont, T Dutoit
2016   Graph-based Semi-Supervised Learning in Acoustic Modeling for Automatic Speech Recognition  Y Liu
2016   A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition  A Zeyer, P Doetsch, P Voigtlaender, R Schlüter, H Ney
2016   Improvements in IITG Assamese Spoken Query System: Background Noise Suppression and Alternate Acoustic Modeling  S Shahnawazuddin, D Thotappa, A Dey, S Imani
2016   DNN-Based Acoustic Modeling for Russian Speech Recognition Using Kaldi  I Kipyatkova, A Karpov
2015   Doubly Hierarchical Dirichlet Process Hmm For Acoustic Modeling  AHHN Torbati, J Picone
2015   Deep Learning for Acoustic Modeling in Parametric Speech Generation: A systematic review of existing techniques and future trends  ZH Ling, SY Kang, H Zen, A Senior, M Schuster
2015   Acoustic Modeling In Statistical Parametric Speech Synthesis–From Hmm To Lstm-Rnn  H Zen
2015   Acoustic Modeling of Bangla Words using Deep Belief Network  M Ahmed, PC Shill, K Islam, MAH Akhand
2015   Unified Acoustic Modeling using Deep Conditional Random Fields  Y Hifny
2015   Exploiting Low-Dimensional Structures To Enhance Dnn Based Acoustic Modeling In Speech Recognition  P Dighe, G Luyet, A Asaei, H Bourlard
2015   Ensemble Acoustic Modeling for CD-DNN-HMM Using Random Forests of Phonetic Decision Trees  T Zhao, Y Zhao, X Chen
2015   Deep Neural Networks for Acoustic Modeling  G Hinton, L Deng, D Yu, G Dahl
2015   Integrating Articulatory Data in Deep Neural Network-based Acoustic Modeling  L Badino, C Canevari, L Fadiga, G Metta
2015   Deep learning in acoustic modeling for Automatic Speech Recognition and Understanding-an overview  I Gavat, D Militaru

Overview of recent Deep Learning Bibliographies

For the last couple of months I’ve been creating bibliographies of recent academic publications in various subfields of Deep Learning on this blog. This posting gives an overview of the last 25 bibliographies posted.

Best regards,

Amund Tveit (WeChat: AmundTveit – Twitter: @atveit)

1. Deep Learning with Residual Networks

This posting is recent papers related to residual networks (i.e. very deep networks). Check out Microsoft Research’s paper Deep Residual Learning for Image Recognition and Kaiming He’s ICML 2016 Tutorial Deep Residual Learning, Deep Learning Gets Way Deeper

2. Deep Learning for Traffic Sign Detection and Recognition

Traffic Sign Detection and Recognition is key functionality for self-driving cars. This posting has recent papers in this area. Check also out related posting: Deep Learning for Vehicle Detection and Classification

3. Deep Learning for Vehicle Detection and Classification

This posting has recent papers about vehicle (e.g. car) detection and classification, e.g. for self-driving/autonomous cars. Related: also check out Nvidia’s End-to-End Deep Learning for Self-driving Cars and Udacity’s Self-Driving Car Engineer (Nanodegree).

4. Deep Learning with Long Short-Term Memory (LSTM)

This blog post has some recent papers about Deep Learning with Long Short-Term Memory (LSTM). To get started I recommend checking out Christopher Olah’s Understanding LSTM Networks and Andrej Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks. This blog post is complemented by Deep Learning with Recurrent/Recursive Neural Networks (RNN) — ICLR 2017 Discoveries.

5. Deep Learning in Finance

This posting has recent publications about Deep Learning in Finance (e.g. stock market prediction)

6. Deep Learning for Information Retrieval and Learning to Rank

This posting is about Deep Learning for Information Retrieval and Learning to Rank (i.e. of interest if developing search engines). The posting is complemented by the posting Deep Learning for Question Answering. To get started I recommend checking out Jianfeng Gao‘s (Deep Learning Technology Center at Microsoft Research) presentation Deep Learning for Web Search and Natural Language Processing.

Of partial relevance is the posting Deep Learning for Sentiment Analysis, the posting about Embedding for NLP with Deep Learning, the posting about Deep Learning for Natural Language Processing (ICLR 2017 discoveries), and the posting about Deep Learning for Recommender Systems

7. Deep Learning for Question Answering

This posting presents recent publications related to Deep Learning for Question Answering. Question Answering is described as “a computer science discipline within the fields of information retrieval and natural language processing (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language”. I’ll also publish postings about Deep Learning for Information Retrieval and Learning to Rank today.

8. Ensemble Deep Learning

Ensemble-based Machine Learning has been used with success in several Kaggle competitions, and this year the ImageNet competition was also dominated by Deep Learning ensembles, e.g. the Trimps-Soushen team from the 3rd Research Institute of the Ministry of Public Security (China) used a combination of Inception, Inception-ResNet, ResNet and Wide Residual Networks to win the Object Classification/Localization challenge. This blog post has recent papers related to Ensembles in Deep Learning.

9. Deep Learning for Sentiment Analysis

Recently I published Embedding for NLP with Deep Learning (e.g. word2vec and follow-ups) and Deep Learning for Natural Language Processing — ICLR 2017 Discoveries — this posting is also mostly NLP-related since it provides recent papers related to Deep Learning for Sentiment Analysis, but also has examples of other types of sentiment (e.g. image sentiment).

10. Deep Learning with Gaussian Process

Gaussian Process is a statistical model where observations are in a continuous domain; to learn more, check out a tutorial on Gaussian Processes (by the University of Cambridge’s Zoubin G.). A Gaussian Process is an infinite-dimensional generalization of the multivariate normal distribution.
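As a small illustration, a function drawn from a Gaussian Process prior with an RBF (squared-exponential) kernel is just a sample from a multivariate normal whose covariance matrix is the kernel evaluated over the input points (a minimal numpy sketch):

```python
import numpy as np

# Squared-exponential (RBF) kernel over a 1-D set of input points.
def rbf_kernel(x, lengthscale=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0.0, 5.0, 50)
K = rbf_kernel(x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability
rng = np.random.default_rng(0)
# One random function sampled from the GP prior.
sample = rng.multivariate_normal(np.zeros(len(x)), K)
```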

Researchers from the University of Sheffield — Andreas C. Damianou and Neil D. Lawrence — started using Gaussian Processes with Deep Belief Networks (in 2013). This blog post contains recent papers related to combining Deep Learning with Gaussian Processes.

11. Deep Learning for Clustering

12. Deep Learning in combination with EEG electrical signals from the brain

EEG (Electroencephalography) is the measurement of electrical signals in the brain. It has long been used for medical purposes (e.g. diagnosis of epilepsy), and in more recent years it has also been used in Brain Computer Interfaces (BCI) — note: if BCI is new to you, don’t get overly excited about it, since these interfaces are, in my opinion, still quite immature. But they are definitely interesting in a longer-term perspective.

This blog post gives an overview of recent research on Deep Learning in combination with EEG, e.g. for classification, feature representation, diagnosis, safety (cognitive state of drivers) and hybrid methods (Computer Vision or Speech Recognition together with EEG and Deep Learning).

13. Embedding for NLP with Deep Learning

Word Embedding was introduced by Bengio in the early 2000s, and interest in it really accelerated when Google presented Word2Vec in 2013.

This blog post has recent papers related to embedding for Natural Language Processing with Deep Learning. Example application areas covered in the papers include finance (stock market prediction), biomedical text analysis, part-of-speech tagging, sentiment analysis and pharmacology (drug adverse effects).

I recommend starting with the paper In Defense of Word Embedding for Generic Text Representation.
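Part of the appeal of word embeddings is that semantic relations become vector arithmetic. A toy sketch with hand-made 4-dimensional vectors (the values are invented for illustration, not taken from a trained word2vec model):

```python
import numpy as np

# Toy embeddings (illustrative values only).
emb = {
    "king":  np.array([0.80, 0.65, 0.10, 0.20]),
    "queen": np.array([0.75, 0.70, 0.15, 0.80]),
    "man":   np.array([0.60, 0.20, 0.05, 0.15]),
    "woman": np.array([0.55, 0.25, 0.10, 0.75]),
    "apple": np.array([0.10, 0.90, 0.80, 0.10]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The classic analogy: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in ("king", "man", "woman")),
           key=lambda w: cosine(emb[w], target))
# 'queen' is the nearest remaining word by construction of these toy vectors.
```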

14. Zero-Shot (Deep) Learning

Zero-Shot Learning is making decisions about classes for which no training examples have been seen (the closely related one-shot and few-shot learning use only one or a few examples, as opposed to other types of learning that typically require large amounts of training examples per class). I recommend having a look at An embarrassingly simple approach to zero-shot learning first.

15. Deep Learning for Alzheimer Diagnostics and Decision Support

Alzheimer’s Disease is the cause of 60–70% of cases of dementia; the costs associated with diagnosis, treatment and care of patients with it are estimated to be in the range of a hundred billion dollars in the USA. This blog post has some recent papers related to using Deep Learning for diagnostics and decision support related to Alzheimer’s disease.

16. Recommender Systems with Deep Learning

This blog post presents recent research in Recommender Systems (/collaborative filtering) with Deep Learning. To get started I recommend having a look at A Survey and Critique of Deep Learning in Recommender Systems.
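A classic collaborative-filtering baseline that many of the deep approaches build on is matrix factorization trained by SGD. A minimal sketch on a toy rating matrix (the data and hyperparameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy user-item rating matrix; 0 marks a missing rating (illustrative data).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
k = 2                                             # latent factor dimension
P = 0.1 * rng.standard_normal((R.shape[0], k))    # user factors
Q = 0.1 * rng.standard_normal((R.shape[1], k))    # item factors

lr, reg = 0.01, 0.02
for _ in range(2000):
    # SGD over each observed (user, item) rating with L2 regularization.
    for u, i in zip(*np.nonzero(mask)):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

pred = P @ Q.T   # dense predictions, including the previously missing cells
```

The filled-in cells of `pred` are the model's rating estimates, i.e. the recommendations; deep variants replace the inner products with learned nonlinear interactions.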

17. Deep Learning for Ultrasound Analysis

Ultrasound (also called Sonography) uses sound waves with a higher frequency than humans can hear; they are frequently used in medical settings, e.g. for checking that a pregnancy is going well with fetal ultrasound. For more about Ultrasound data formats, check out the Ultrasound Research Interface. This blog post has recent publications about applying Deep Learning to the analysis of Ultrasound data.

18. Deep Learning for Music

Deep Learning (creative AI) can potentially be used for music analysis and music creation; DeepMind’s WaveNet is a step in that direction. This blog post presents recent papers on Deep Learning for Music.

19. Regularized Deep Networks — ICLR 2017 Discoveries

This blog post gives an overview of papers related to using Regularization in Deep Learning submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Regularization in Deep Learning, a good starting point is the regularization chapter of Goodfellow, Bengio and Courville’s Deep Learning book.

20. Unsupervised Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to Unsupervised Deep Learning submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Unsupervised Deep Learning check out: Ruslan Salkhutdinov’s video Foundations of Unsupervised Deep Learning.

21. Autoencoders in Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to autoencoders submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about autoencoders check out the Stanford (UFLDL) tutorial about Autoencoders, Carl Doersch’s Tutorial on Variational Autoencoders, DeepLearning.TV’s video tutorial on Autoencoders, or Goodfellow, Bengio and Courville’s Deep Learning book’s chapter on Autoencoders.
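The core idea is a network trained to reproduce its input through a narrow hidden layer. A minimal numpy sketch of a one-hidden-layer autoencoder trained with plain gradient descent (data, sizes and learning rate are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))        # toy data (illustrative only)

# Encoder W1 maps 8 -> 3 (the bottleneck); decoder W2 maps 3 -> 8.
W1 = 0.1 * rng.standard_normal((8, 3))
W2 = 0.1 * rng.standard_normal((3, 8))

def loss(W1, W2):
    H = np.tanh(X @ W1)                  # encode
    Xhat = H @ W2                        # decode
    return float(np.mean((Xhat - X) ** 2))

lr = 0.05
initial = loss(W1, W2)
for _ in range(500):
    # Full-batch gradient descent on the mean squared reconstruction error.
    H = np.tanh(X @ W1)
    Xhat = H @ W2
    G = 2 * (Xhat - X) / X.size          # dLoss/dXhat
    W2 -= lr * H.T @ G
    W1 -= lr * X.T @ ((G @ W2.T) * (1 - H ** 2))   # backprop through tanh
final = loss(W1, W2)
```

Because the bottleneck has fewer units than the input, the network is forced to learn a compressed representation; the reconstruction error drops as training proceeds.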

22. Stochastic/Policy Gradients in Deep Learning — ICLR 2017 Discoveries

This blog post gives an overview of papers related to stochastic/policy gradient submitted to ICLR 2017, see underneath for the list of papers.

23. Deep Learning with Recurrent/Recursive Neural Networks (RNN) — ICLR 2017 Discoveries

This blog post gives an overview of Deep Learning with Recurrent/Recursive Neural Networks (RNN) related papers submitted to ICLR 2017, see underneath for the list of papers. If you want to learn more about RNN check out Andrej Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks and Pascanu, Gulcehre, Cho and Bengio’s How to Construct Deep Recurrent Neural Networks.
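For intuition, a vanilla RNN simply reuses the same weights at every timestep of a sequence. A minimal forward-pass sketch (all shapes and values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 4, 8                        # sequence length, input dim, hidden dim
x = rng.standard_normal((T, d_in))            # toy input sequence
Wxh = 0.1 * rng.standard_normal((d_in, d_h))  # input-to-hidden weights
Whh = 0.1 * rng.standard_normal((d_h, d_h))   # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)

h = np.zeros(d_h)
states = []
for t in range(T):
    # The same weights are applied at every step: h_t = tanh(x_t Wxh + h_{t-1} Whh + b)
    h = np.tanh(x[t] @ Wxh + h @ Whh + b)
    states.append(h)
states = np.array(states)
```

The repeated multiplication by `Whh` is exactly what makes long-range credit assignment hard (vanishing/exploding gradients), which motivates the LSTM and GRU variants covered by several of the papers below.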

24. Deep Learning with Generative and Generative Adversarial Networks — ICLR 2017 Discoveries

This blog post gives an overview of Deep Learning with Generative and Adversarial Networks related papers submitted to ICLR 2017, see underneath for the list of papers. Want to learn about these topics? See OpenAI’s article about Generative Models and Ian Goodfellow’s paper about Generative Adversarial Networks.

25. Deep Learning for Natural Language Processing — ICLR 2017 Discoveries

This blog post gives an overview of Natural Language Processing related papers submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Deep Learning with NLP, check out Stanford’s CS224d: Deep Learning for Natural Language Processing.


Deep Learning for Traffic Sign Detection and Recognition

Traffic Sign Detection and Recognition is key functionality for self-driving cars. This posting has recent papers in this area. Also check out the related posting: Deep Learning for Vehicle Detection and Classification

Best regards,
Amund Tveit

Year  Title Author
2016   Road surface traffic sign detection with hybrid region proposal and fast R-CNN  R Qian, Q Liu, Y Yue, F Coenen, B Zhang
2016   Traffic sign classification with deep convolutional neural networks  J CREDI
2016   Real-time Traffic Sign Recognition system with deep convolutional neural network  S Jung, U Lee, J Jung, DH Shim
2016   Traffic Sign Detection and Recognition using Fully Convolutional Network Guided Proposals  Y Zhu, C Zhang, D Zhou, X Wang, X Bai, W Liu
2016   A traffic sign recognition method based on deep visual feature  F Lin, Y Lai, L Lin, Y Yuan
2016   The research on traffic sign recognition based on deep learning  C Li, C Yang
2015   Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature  S Yin, P Ouyang, L Liu, Y Guo, S Wei
2015   Malaysia traffic sign recognition with convolutional neural network  MM Lau, KH Lim, AA Gopalai
2015   Negative-Supervised Cascaded Deep Learning for Traffic Sign Classification  K Xie, S Ge, R Yang, X Lu, L Sun

Deep Learning for Vehicle Detection and Classification

Update: 2017-Feb-03 – launched a new service – (navigation and search in papers). Try out e.g. its Vehicle, Car and Driving pages.

This posting has recent papers about vehicle (e.g. car) detection and classification, e.g. for self-driving/autonomous cars. Related: also check out Nvidia‘s End-to-End Deep Learning for Self-driving Cars and Udacity‘s Self-Driving Car Engineer (Nanodegree).

Best regards,

Amund Tveit (@atveit)

Year  Title Author
2016   Vehicle Classification using Transferable Deep Neural Network Features  Y Zhou, NM Cheung
2016   A Hybrid Fuzzy Morphology And Connected Components Labeling Methods For Vehicle Detection And Counting System  C Fatichah, JL Buliali, A Saikhu, S Tena
2016   Evaluation of vehicle interior sound quality using a continuous restricted Boltzmann machine-based DBN  HB Huang, RX Li, ML Yang, TC Lim, WP Ding
2016   An Automated Traffic Surveillance System with Aerial Camera Arrays: Data Collection with Vehicle Tracking  X Zhao, D Dawson, WA Sarasua, ST Birchfield
2016   Vehicle type classification via adaptive feature clustering for traffic surveillance video  S Wang, F Liu, Z Gan, Z Cui
2016   Vehicle Detection in Satellite Images by Incorporating Objectness and Convolutional Neural Network  S Qu, Y Wang, G Meng, C Pan
2016   DAVE: A Unified Framework for Fast Vehicle Detection and Annotation  Y Zhou, L Liu, L Shao, M Mellor
2016   3D Fully Convolutional Network for Vehicle Detection in Point Cloud  B Li
2016   A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance  X Liu, W Liu, T Mei, H Ma
2016   TraCount: a deep convolutional neural network for highly overlapping vehicle counting  S Surya, RV Babu
2016   Pedestrian, bike, motorcycle, and vehicle classification via deep learning: Deep belief network and small training set  YY Wu, CM Tsai
2016   Fast Vehicle Detection in Satellite Images Using Fully Convolutional Network  J Hu, T Xu, J Zhang, Y Yang
2016   Local Tiled Deep Networks for Recognition of Vehicle Make and Model  Y Gao, HJ Lee
2016   Vehicle detection based on visual saliency and deep sparse convolution hierarchical model  Y Cai, H Wang, X Chen, L Gao, L Chen
2016   Sound quality prediction of vehicle interior noise using deep belief networks  HB Huang, XR Huang, RX Li, TC Lim, WP Ding
2016   Accurate On-Road Vehicle Detection with Deep Fully Convolutional Networks  Z Jie, WF Lu, EHF Tay
2016   Fault Detection and Identification of Vehicle Starters and Alternators Using Machine Learning Techniques  E Seddik
2016   Fault diagnosis network design for vehicle on-board equipments of high-speed railway: A deep learning approach  J Yin, W Zhao
2016   Real-time state-of-health estimation for electric vehicle batteries: A data-driven approach  G You, S Park, D Oh
2016   The Precise Vehicle Retrieval in Traffic Surveillance with Deep Convolutional Neural Networks  B Su, J Shao, J Zhou, X Zhang, L Mei, C Hu
2016   Online vehicle detection using deep neural networks and lidar based preselected image patches  S Lange, F Ulbrich, D Goehring
2016   A closer look at Faster R-CNN for vehicle detection  Q Fan, L Brown, J Smith
2016   Appearance-based Brake-Lights recognition using deep learning and vehicle detection  JG Wang, L Zhou, Y Pan, S Lee, Z Song, BS Han
2016   Night time vehicle detection algorithm based on visual saliency and deep learning  Y Cai, HW Xiaoqiang Sun, LCH Jiang
2016   Vehicle classification in WAMI imagery using deep network  M Yi, F Yang, E Blasch, C Sheaff, K Liu, G Chen, H Ling
2015   VeTrack: Real Time Vehicle Tracking in Uninstrumented Indoor Environments  M Zhao, T Ye, R Gao, F Ye, Y Wang, G Luo
2015   Vehicle Color Recognition in The Surveillance with Deep Convolutional Neural Networks  B Su, J Shao, J Zhou, X Zhang, L Mei
2015   Vehicle Speed Prediction using Deep Learning  J Lemieux, Y Ma
2015   Monza: Image Classification of Vehicle Make and Model Using Convolutional Neural Networks and Transfer Learning  D Liu, Y Wang
2015   Night Time Vehicle Sensing in Far Infrared Image with Deep Learning  H Wang, Y Cai, X Chen, L Chen
2015   A Vehicle Type Recognition Method based on Sparse Auto Encoder  HL Rong, YX Xia
2015   Occluded vehicle detection with local connected deep model  H Wang, Y Cai, X Chen, L Chen
2015   Performance Evaluation of the Neural Network based Vehicle Detection Models  K Goyal, D Kaur
2015   A Smartphone-based Connected Vehicle Solution for Winter Road Surface Condition Monitoring  MA Linton
2015   Vehicle Logo Recognition System Based on Convolutional Neural Networks With a Pretraining Strategy  Y Huang, R Wu, Y Sun, W Wang, X Ding
2015   SiftKeyPre: A Vehicle Recognition Method Based on SIFT Key-Points Preference in Car-Face Image  CY Zhang, XY Wang, J Feng, Y Cheng
2015   Vehicle Detection in Aerial Imagery: A small target detection benchmark  S Razakarivony, F Jurie
2015   Vehicle license plate recognition using visual attention model and deep learning  D Zang, Z Chai, J Zhang, D Zhang, J Cheng
2015   Domain adaption of vehicle detector based on convolutional neural networks  X Li, M Ye, M Fu, P Xu, T Li
2015   Trainable Convolutional Network Apparatus And Methods For Operating A Robotic Vehicle  P O’connor, E Izhikevich
2015   Vehicle detection and classification based on convolutional neural network  D He, C Lang, S Feng, X Du, C Zhang
2015   The AdaBoost algorithm for vehicle detection based on CNN features  X Song, T Rui, Z Zha, X Wang, H Fang
2015   Deep neural networks-based vehicle detection in satellite images  Q Jiang, L Cao, M Cheng, C Wang, J Li
2015   Vehicle License Plate Recognition Based on Extremal Regions and Restricted Boltzmann Machines  C Gou, K Wang, Y Yao, Z Li
2014   Multi-modal Sensor Registration for Vehicle Perception via Deep Neural Networks  M Giering, K Reddy, V Venugopalan
2014   Mooting within the curriculum as a vehicle for learning: student perceptions  L Jones, S Field
2014   Vehicle Type Classification Using Semi-Supervised Convolutional Neural Network  Z Dong, Y Wu, M Pei, Y Jia
2014   Vehicle License Plate Recognition With Random Convolutional Networks  D Menotti, G Chiachia, AX Falcao, VJO Neto
2014   Vehicle Type Classification Using Unsupervised Convolutional Neural Network  Z Dong, M Pei, Y He, T Liu, Y Dong, Y Jia

Zero-Shot (Deep) Learning

Zero-Shot Learning is making decisions about classes for which no training examples have been seen (the closely related one-shot and few-shot learning use only one or a few examples, as opposed to other types of learning that typically require large amounts of training examples per class). I recommend having a look at An embarrassingly simple approach to zero-shot learning first.
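A simple way to make zero-shot classification concrete is attribute-based recognition: a model trained only on seen classes predicts semantic attributes, and an unseen class is chosen by matching against its attribute description. A toy sketch (the attribute table and predicted values are invented for illustration):

```python
import numpy as np

# Hypothetical class-attribute table describing two *unseen* classes by
# binary attributes (has_stripes, has_hooves, is_black_and_white).
class_attributes = {
    "zebra": np.array([1.0, 1.0, 1.0]),
    "tiger": np.array([1.0, 0.0, 0.0]),
}

# Pretend an attribute predictor (trained only on seen classes) has
# estimated these attribute probabilities for a new image.
predicted_attrs = np.array([0.9, 0.8, 0.95])

# Zero-shot decision: pick the unseen class whose attribute vector is closest.
label = min(class_attributes,
            key=lambda c: float(np.linalg.norm(class_attributes[c] - predicted_attrs)))
```

No image of either class was needed at training time; the attribute descriptions carry the knowledge across, which is the essence of approaches like the "embarrassingly simple" paper above.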

Best regards,

Amund Tveit

  1. Less is more: zero-shot learning from online textual documents with noise suppression
    – Authors: R Qiao, L Liu, C Shen, A Hengel (2016)
  2. Synthesized Classifiers for Zero-Shot Learning
    – Authors: S Changpinyo, Wl Chao, B Gong, F Sha (2016)
  3. Tinkering Under The Hood: Interactive Zero-Shot Learning with Pictorial Classifiers
    – Authors: V Krishnan (2016)
  4. Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks
    – Authors: E Gavves, T Mensink, T Tommasi, Cgm Snoek… (2015)
  5. Transductive Multi-view Zero-Shot Learning
    – Authors: Y Fu, Tm Hospedales, T Xiang, S Gong (2015)
  6. Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions
    – Authors: J Ba, K Swersky, S Fidler, R Salakhutdinov (2015)
  7. Zero-Shot Learning with Structured Embeddings
    – Authors: Z Akata, H Lee, B Schiele (2014)

Deep Learning with Generative and Generative Adversarial Networks – ICLR 2017 Discoveries

The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26 2017).

This blog post gives an overview of Deep Learning with Generative and Adversarial Networks related papers submitted to ICLR 2017, see underneath for the list of papers. Want to learn about these topics? See OpenAI’s article about Generative Models and Ian Goodfellow’s paper about Generative Adversarial Networks.

Best regards,

Amund Tveit

ICLR 2017 – Generative and Generative Adversarial Papers

  1. Unsupervised Learning Using Generative Adversarial Training And Clustering – Authors: Vittal Premachandran, Alan L. Yuille
  2. Improving Generative Adversarial Networks with Denoising Feature Matching – Authors: David Warde-Farley, Yoshua Bengio
  3. Generative Adversarial Parallelization – Authors: Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor
  4. b-GAN: Unified Framework of Generative Adversarial Networks – Authors: Masatosi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  5. Generative Adversarial Networks as Variational Training of Energy Based Models – Authors: Shuangfei Zhai, Yu Cheng, Rogerio Feris, Zhongfei Zhang
  6. Boosted Generative Models – Authors: Aditya Grover, Stefano Ermon
  7. Adversarial examples for generative models – Authors: Jernej Kos, Dawn Song
  8. Mode Regularized Generative Adversarial Networks – Authors: Tong Che, Yanran Li, Athul Jacob, Yoshua Bengio, Wenjie Li
  9. Variational Recurrent Adversarial Deep Domain Adaptation – Authors: Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu
  10. Structured Interpretation of Deep Generative Models – Authors: N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H.S. Torr
  11. Inference and Introspection in Deep Generative Models of Sparse Data – Authors: Rahul G. Krishnan, Matthew Hoffman
  12. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy – Authors: Dougal J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton
  13. Unsupervised sentence representation learning with adversarial auto-encoder – Authors: Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang
  14. Unsupervised Program Induction with Hierarchical Generative Convolutional Neural Networks – Authors: Qucheng Gong, Yuandong Tian, C. Lawrence Zitnick
  15. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Noise – Authors: Beilun Wang, Ji Gao, Yanjun Qi
  16. On the Quantitative Analysis of Decoder-Based Generative Models – Authors: Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger Grosse
  17. Evaluation of Defensive Methods for DNNs against Multiple Adversarial Evasion Models – Authors: Xinyun Chen, Bo Li, Yevgeniy Vorobeychik
  18. Calibrating Energy-based Generative Adversarial Networks – Authors: Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville
  19. Inverse Problems in Computer Vision using Adversarial Imagination Priors – Authors: Hsiao-Yu Fish Tung, Katerina Fragkiadaki
  20. Towards Principled Methods for Training Generative Adversarial Networks – Authors: Martin Arjovsky, Leon Bottou
  21. Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning – Authors: Dilin Wang, Qiang Liu
  22. Multi-view Generative Adversarial Networks – Authors: Mickaël Chen, Ludovic Denoyer
  23. LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation – Authors: Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh
  24. Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks – Authors: Emily Denton, Sam Gross, Rob Fergus
  25. Generative Adversarial Networks for Image Steganography – Authors: Denis Volkhonskiy, Boris Borisenko, Evgeny Burnaev
  26. Unrolled Generative Adversarial Networks – Authors: Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
  27. Generative Multi-Adversarial Networks – Authors: Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
  28. Joint Multimodal Learning with Deep Generative Models – Authors: Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  29. Fast Adaptation in Generative Models with Generative Matching Networks – Authors: Sergey Bartunov, Dmitry P. Vetrov
  30. Adversarially Learned Inference – Authors: Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville
  31. Perception Updating Networks: On architectural constraints for interpretable video generative models – Authors: Eder Santana, Jose C Principe
  32. Energy-based Generative Adversarial Networks – Authors: Junbo Zhao, Michael Mathieu, Yann LeCun
  33. Simple Black-Box Adversarial Perturbations for Deep Networks – Authors: Nina Narodytska, Shiva Kasiviswanathan
  34. Learning in Implicit Generative Models – Authors: Shakir Mohamed, Balaji Lakshminarayanan
  35. On Detecting Adversarial Perturbations – Authors: Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
  36. Delving into Transferable Adversarial Examples and Black-box Attacks – Authors: Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
  37. Adversarial Feature Learning – Authors: Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
  38. Generative Paragraph Vector – Authors: Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, Xueqi Cheng
  39. Adversarial Machine Learning at Scale – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  40. Adversarial Training Methods for Semi-Supervised Text Classification – Authors: Takeru Miyato, Andrew M. Dai, Ian Goodfellow
  41. Sampling Generative Networks: Notes on a Few Effective Techniques – Authors: Tom White
  42. Adversarial examples in the physical world – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  43. Improving Sampling from Generative Autoencoders with Markov Chains – Authors: Kai Arulkumaran, Antonia Creswell, Anil Anthony Bharath
  44. Neural Photo Editing with Introspective Adversarial Networks – Authors: Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
  45. Learning to Protect Communications with Adversarial Neural Cryptography – Authors: Martín Abadi, David G. Andersen

