For easy portability I chose to run the Keras part inside Docker (e.g. with nvidia-docker one could train a larger model that needs a GPU, whether in the cloud, on a desktop, or on a powerful laptop). The Keras backend used here was TensorFlow, but I believe it should also work with other backends (e.g. CNTK, Theano or MXNet). The code for this blog post is available at github.com/atveit/keras2ios
This blog post has an overview of papers related to acoustic modelling, primarily for speech recognition but also for speech generation (synthesis). See also ai.amundtveit.com/keyword/acoustic for a broader set of (at the time of writing 73) recent Deep Learning papers related to acoustics for speech recognition and other applications of acoustics.
Acoustic Modelling is described in Wikipedia as: “An acoustic model is used in Automatic Speech Recognition to represent the relationship between an audio signal and the phonemes or other linguistic units that make up speech. The model is learned from a set of audio recordings and their corresponding transcripts”.
Last August Nvidia brought desktop-class graphics to laptops with the GeForce GTX 1060, 1070 and 1080. Laptops with such GPUs seem to be targeted primarily at gaming, but they can also be used for Deep Learning, e.g. with TensorFlow, PyTorch or Keras. A laptop for Deep Learning can be a convenient supplement to using GPUs in the cloud (Nvidia K80 or P100) or buying a desktop or server machine with perhaps even more powerful GPUs than in a laptop (e.g. the Pascal Titan X or the new 1080 Ti).
1. Choice of GPU
I decided on the GTX 1070 GPU since it had:
The same amount of GPU RAM as the GTX 1080 – 8 GB – enough to develop and test a large range of CNN and GAN models
A lower price and lower energy consumption than the GTX 1080
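To get a feel for what 8 GB of GPU RAM buys, a back-of-the-envelope calculation helps. The sketch below is illustrative only: the 50-million-parameter figure is a made-up example (roughly VGG-scale), not a specific published model, and real memory use also includes activations, which depend heavily on batch size.

```python
# Rough sketch: estimate whether a model's parameters fit in 8 GB of GPU RAM.
# The parameter count below is an illustrative assumption, not a real model.

def param_memory_gb(num_params, bytes_per_param=4):
    """Memory for the parameters alone (float32 = 4 bytes each)."""
    return num_params * bytes_per_param / 1024**3

# A hypothetical CNN with ~50 million parameters:
num_params = 50_000_000

# Training with SGD + momentum keeps roughly three tensors per parameter
# (weights, gradients, momentum), so multiply the parameter memory by 3:
training_estimate = 3 * param_memory_gb(num_params)
print(round(training_estimate, 2), "GB for parameters, excluding activations")
```

Even tripled for training state this is well under 8 GB, which is why the headroom in practice goes mostly to activations and larger batch sizes.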
2. Choice of Laptop and Configuration
I chose the Acer Predator G9-593; it had a good specification and was upgradable to several disks and up to 64 GB of RAM.
This blog post has recent papers about Deep Learning for authentication, e.g. based on iris (eye) and fingerprint patterns, as well as behavioral patterns of the user such as writing style (stylometry). Partially related is the Quora question and answer: How can Deep Learning be used for Computer Security?
Tweets (i.e. microblogging with very short documents) are a frequent data source in machine learning, e.g. for sentiment analysis and financial (stock) predictions. Here are some recent papers related to analyzing Twitter data with Deep Learning. (Note: Twitter itself also does Deep Learning on Twitter data with its Cortex team.) Many of these papers could probably also be applied to similar data sources, e.g. Weibo or Facebook.
For the last couple of months I’ve been creating bibliographies of recent academic publications in various subfields of Deep Learning on this blog. This posting gives an overview of the last 25 bibliographies posted.
This posting presents recent publications related to Deep Learning for Question Answering. Question Answering is described as “a computer science discipline within the fields of information retrieval and natural language processing (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language”. I’ll also publish postings about Deep Learning for Information Retrieval and Learning to Rank today.
Ensemble-based Machine Learning has been used with success in several Kaggle competitions, and this year the ImageNet competition was also dominated by ensembles in Deep Learning: e.g. the Trimps-Soushen team from the 3rd Research Institute of the Ministry of Public Security (China) used a combination of Inception, Inception-ResNet, ResNet and Wide Residual Networks to win the object classification/localization challenge. This blog post has recent papers related to ensembles in Deep Learning.
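A common way to combine several trained classifiers into an ensemble is to simply average their predicted class probabilities and take the argmax. A minimal NumPy sketch (the toy probability arrays below stand in for the softmax outputs of real models):

```python
import numpy as np

# Toy softmax outputs from three hypothetical models over 4 classes
# (rows = examples, columns = class probabilities).
model_a = np.array([[0.7, 0.1, 0.1, 0.1],
                    [0.2, 0.5, 0.2, 0.1]])
model_b = np.array([[0.6, 0.2, 0.1, 0.1],
                    [0.1, 0.6, 0.2, 0.1]])
model_c = np.array([[0.5, 0.3, 0.1, 0.1],
                    [0.3, 0.4, 0.2, 0.1]])

# Unweighted ensemble: average the probabilities, then pick the most
# probable class per example.
ensemble = (model_a + model_b + model_c) / 3
predictions = ensemble.argmax(axis=1)
print(predictions)  # one predicted class index per example
```

Weighted averaging (giving stronger models a larger coefficient) and majority voting over the individual argmaxes are common variations on the same idea.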
EEG (Electroencephalography) is the measurement of electrical activity in the brain. It has long been used for medical purposes (e.g. diagnosis of epilepsy), and has in more recent years also been used in Brain Computer Interfaces (BCI). Note: if BCI is new to you, don't get overly excited about it, since these interfaces are in my opinion still quite immature. But they are definitely interesting in a longer-term perspective.
This blog post gives an overview of recent research on Deep Learning in combination with EEG, e.g. for classification, feature representation, diagnosis, safety (cognitive state of drivers) and hybrid methods (Computer Vision or Speech Recognition together with EEG and Deep Learning).
This blog post has recent papers related to embeddings for Natural Language Processing with Deep Learning. Example application areas where embeddings are used in these papers include finance (stock market prediction), biomedical text analysis, part-of-speech tagging, sentiment analysis, and pharmacology (drug adverse effects).
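The core idea behind word embeddings is to map each token to a dense vector so that variable-length text can be turned into fixed-size numeric input. A minimal sketch (the tiny vocabulary and random vectors are illustrative assumptions; in practice the vectors are learned, e.g. with word2vec, GloVe, or an embedding layer trained end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a randomly initialized embedding matrix:
# one row (vector) per word in the vocabulary.
vocab = {"the": 0, "stock": 1, "market": 2, "rose": 3}
embedding_dim = 5
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def sentence_vector(tokens):
    """Represent a sentence as the average of its word vectors."""
    ids = [vocab[t] for t in tokens]
    return embeddings[ids].mean(axis=0)

vec = sentence_vector(["the", "stock", "market", "rose"])
print(vec.shape)  # a fixed-size vector per sentence: (5,)
```

Averaging word vectors is the simplest way to get a sentence representation; the papers listed below typically feed the per-token vectors into CNNs or RNNs instead, which preserves word order.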
Alzheimer’s Disease is the cause of 60–70% of dementia cases, and the costs associated with diagnosis, treatment and care of patients are estimated to be in the range of a hundred billion dollars in the USA. This blog post has some recent papers related to using Deep Learning for diagnostics and decision support for Alzheimer’s disease.
Ultrasound (also called sonography) uses sound waves with frequencies higher than humans can hear; it is frequently used in medical settings, e.g. fetal ultrasound for checking that a pregnancy is going well. For more about ultrasound data formats, check out the Ultrasound Research Interface. This blog post has recent publications about applying Deep Learning for analyzing ultrasound data.
Deep Learning (creative AI) can potentially be used for music analysis and music creation; DeepMind’s WaveNet is a step in that direction. This blog post presents recent papers on Deep Learning for Music.
This blog post gives an overview of papers related to using Regularization in Deep Learning submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Regularization in Deep Learning check out: www.deeplearningbook.org/contents/regularization.html
This blog post gives an overview of papers related to Unsupervised Deep Learning submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Unsupervised Deep Learning check out: Ruslan Salakhutdinov’s video Foundations of Unsupervised Deep Learning.
This blog post gives an overview of Natural Language Processing related papers submitted to ICLR 2017, see underneath for the list of papers. If you want to learn about Deep Learning with NLP check out Stanford’s CS224d: Deep Learning for Natural Language Processing.