Keras Deep Learning with Apple’s CoreMLTools on iOS 11 – Part 1


This is a basic example of training and using a simple Keras neural network model (XOR) on an iPhone, using Apple's coremltools on iOS 11. The main point is to show the integration, starting from a Keras model and ending with it running in an iOS app, rather than the particular choice of model; in principle a similar approach could be used for any kind of Deep Learning model, e.g. the generator part of a Generative Adversarial Network, a Recurrent Neural Network (or LSTM) or a Convolutional Neural Network.

For easy portability I chose to run the Keras part inside Docker (e.g. one could use nvidia-docker for a larger model that needs a GPU to train, whether in the cloud, on a desktop or on a powerful laptop). The Keras backend used here is TensorFlow, but I believe the approach should also work with other backends (e.g. CNTK, Theano or MXNet). The code for this blog post is available at github.com/atveit/keras2ios

Best regards,

Amund Tveit

1. Building and training a Keras model for the XOR problem – PYTHON

1.1 Training data for XOR
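
The training set for XOR is simply the four possible input pairs and their labels. A minimal sketch of this step, assuming NumPy arrays (variable names and dtype are my own, not necessarily those used in the repository):

# The four XOR input pairs and their labels as NumPy arrays
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)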

1.2 Keras XOR Neural Network Model
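
A small fully connected network is enough to learn XOR. The sketch below assumes one hidden Dense layer and a sigmoid output; the actual layer sizes and activations in the repository may differ:

# Sketch of a small Keras model for XOR (layer sizes and activations are assumptions)
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_dim=2, activation='tanh'))  # hidden layer
model.add(Dense(1, activation='sigmoid'))            # single output in [0, 1]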

1.3 Train the Keras model with Stochastic Gradient Descent (SGD)
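
Training uses plain stochastic gradient descent on the four examples; the loss, learning rate and number of epochs below are illustrative assumptions:

# Compile with SGD and fit on the XOR data (hyperparameters are assumptions)
from keras.optimizers import SGD

model.compile(loss='binary_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
model.fit(X, y, batch_size=4, epochs=1000, verbose=0)
print(model.predict(X))  # should be close to [[0], [1], [1], [0]]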

1.4 Use Apple's coremltools to convert the Keras model to a Core ML model
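
coremltools ships with a Keras converter that takes the trained model directly. A sketch of the conversion and save step (when no feature names are given, the converter typically generates default names such as input1 and output1):

# Convert the trained Keras model to Core ML and save it as keras_model.mlmodel
import coremltools

coreml_model = coremltools.converters.keras.convert(model)
coreml_model.save('keras_model.mlmodel')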

2. Using the converted Keras model on iPhone – SWIFT

2.1 Create new Xcode Swift project and add keras_model.mlmodel

[Screenshot: keras_model.mlmodel added to the Xcode project]

2.2 Inspect keras_model.mlmodel by clicking on it in Xcode

[Screenshot: keras_model.mlmodel inspected in Xcode]

2.3 Update ViewController.swift with prediction function
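
Xcode generates a Swift class from keras_model.mlmodel. A sketch of a prediction helper in ViewController.swift, assuming the generated class is named keras_model and that it takes a 2-element MLMultiArray named input1 and returns an output named output1 (the actual generated names depend on the converted model, so check the generated code in Xcode):

import UIKit
import CoreML

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Print predictions for all four XOR input pairs to the debug console.
        for (a, b) in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)] {
            if let result = xorPrediction(a, b) {
                print("\(a) xor \(b) = \(result)")
            }
        }
    }

    // Runs the converted Keras model on one XOR input pair.
    // Class/feature names (keras_model, input1, output1) are assumptions;
    // the generated model class in Xcode shows the real ones.
    func xorPrediction(_ a: Double, _ b: Double) -> Double? {
        guard let input = try? MLMultiArray(shape: [2], dataType: .double) else {
            return nil
        }
        input[0] = NSNumber(value: a)
        input[1] = NSNumber(value: b)
        let model = keras_model()
        guard let output = try? model.prediction(input1: input) else {
            return nil
        }
        return output.output1[0].doubleValue
    }
}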

2.4 Run app with Keras model on iPhone and look at debug output

[Screenshot: Xcode debug output from running the app]

The predicted values round down to 0 for 0 xor 0 and 1 xor 1, and round up to 1 for 1 xor 0 and 0 xor 1, i.e. the converted model reproduces XOR on the device.

LGTM!


Regularized Deep Networks – ICLR 2017 Discoveries

This blog post gives an overview of papers related to using Regularization in Deep Learning submitted to ICLR 2017; see the list of papers below. If you want to learn about Regularization in Deep Learning, check out: www.deeplearningbook.org/contents/regularization.html

  1. Mode Regularized Generative Adversarial Networks – Authors: Tong Che, Yanran Li, Athul Jacob, Yoshua Bengio, Wenjie Li
  2. Representation Stability as a Regularizer for Neural Network Transfer Learning – Authors: Matthew Riemer, Elham Khabiri, Richard Goodwin
  3. Neural Causal Regularization under the Independence of Mechanisms Assumption – Authors: Mohammad Taha Bahadori, Krzysztof Chalupka, Edward Choi, Walter F. Stewart, Jimeng Sun
  4. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations – Authors: David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Christopher Pal
  5. Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units – Authors: Dan Hendrycks, Kevin Gimpel
  6. Regularizing CNNs with Locally Constrained Decorrelations – Authors: Pau Rodríguez, Jordi Gonzàlez, Guillem Cucurull, Josep M. Gonfaus, Xavier Roca
  7. Regularizing Neural Networks by Penalizing Confident Output Distributions – Authors: Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton
  8. Multitask Regularization for Semantic Vector Representation of Phrases – Authors: Xia Song, Saurabh Tiwary & Rangan Majumdar
  9. (F)SPCD: Fast Regularization of PCD by Optimizing Stochastic ML Approximation under Gaussian Noise – Authors: Prima Sanjaya, Dae-Ki Kang
  10. Crossmap Dropout: A Generalization of Dropout Regularization in Convolution Level – Authors: Alvin Poernomo, Dae-Ki Kang
  11. Non-linear Dimensionality Regularizer for Solving Inverse Problems – Authors: Ravi Garg, Anders Eriksson, Ian Reid
  12. Support Regularized Sparse Coding and Its Fast Encoder – Authors: Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang
  13. An Analysis of Feature Regularization for Low-shot Learning – Authors: Zhuoyuan Chen, Han Zhao, Xiao Liu, Wei Xu
  14. Dropout with Expectation-linear Regularization – Authors: Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, Eduard Hovy
  15. SoftTarget Regularization: An Effective Technique to Reduce Over-Fitting in Neural Networks – Authors: Armen Aghajanyan

Deep Learning with Recurrent/Recursive Neural Networks (RNN) – ICLR 2017 Discoveries

The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26 2017).

This blog post gives an overview of Deep Learning with Recurrent/Recursive Neural Networks (RNN) related papers submitted to ICLR 2017; see the list of papers below. If you want to learn more about RNNs, check out Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks and Pascanu, Gulcehre, Cho and Bengio's How to Construct Deep Recurrent Neural Networks.

Best regards,
Amund Tveit

  1. Making Neural Programming Architectures Generalize via Recursion – Authors: Jonathon Cai, Richard Shin, Dawn Song
  2. Multi-label learning with the RNNs for Fashion Search – Authors: Taewan Kim
  3. Recursive Regression with Neural Networks: Approximating the HJI PDE Solution – Authors: Vicenç Rubies Royo
  4. SampleRNN: An Unconditional End-to-End Neural Audio Generation Model – Authors: Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Manuel Rodriguez Sotelo, Aaron Courville, Yoshua Bengio
  5. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations – Authors: David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Christopher Pal
  6. LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation – Authors: Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh
  7. TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency – Authors: Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley


Deep Learning with Reinforcement Learning – ICLR 2017 Discoveries

The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26 2017).

This blog post gives an overview of Deep Learning with Reinforcement Learning related papers submitted to ICLR 2017; see the list of papers below. If you want to learn more about combining Deep Learning with Reinforcement Learning, check out Nervana's Demystifying Deep Reinforcement Learning, Andrej Karpathy's Deep Reinforcement Learning: Pong From Pixels, DeepMind's Deep Reinforcement Learning, and UC Berkeley's CS 294: Deep Reinforcement Learning (starting in Spring 2017).

Best regards,

Amund Tveit

ICLR 2017 – Reinforcement Learning Related Papers

  1. Stochastic Neural Networks for Hierarchical Reinforcement Learning – Authors: Carlos Florensa, Yan Duan, Pieter Abbeel
  2. #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning – Authors: Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan, John Schulman, Filip De Turck, Pieter Abbeel
  3. Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning – Authors: Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, Sergey Levine
  4. Deep Reinforcement Learning for Accelerating the Convergence Rate – Authors: Jie Fu, Zichuan Lin, Danlu Chen, Ritchie Ng, Miao Liu, Nicholas Leonard, Jiashi Feng, Tat-Seng Chua
  5. Generalizing Skills with Semi-Supervised Reinforcement Learning – Authors: Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
  6. Learning to Perform Physics Experiments via Deep Reinforcement Learning – Authors: Misha Denil, Pulkit Agrawal, Tejas D Kulkarni, Tom Erez, Peter Battaglia, Nando de Freitas
  7. Designing Neural Network Architectures using Reinforcement Learning – Authors: Bowen Baker, Otkrist Gupta, Nikhil Naik, Ramesh Raskar
  8. Reinforcement Learning with Unsupervised Auxiliary Tasks – Authors: Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
  9. Options Discovery with Budgeted Reinforcement Learning – Authors: Aurélia Léon, Ludovic Denoyer
  10. Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU – Authors: Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, Jan Kautz
  11. Multi-task learning with deep model based reinforcement learning – Authors: Asier Mujika
  12. Neural Architecture Search with Reinforcement Learning – Authors: Barret Zoph, Quoc Le
  13. Tuning Recurrent Neural Networks with Reinforcement Learning – Authors: Natasha Jaques, Shixiang Gu, Richard E. Turner, Douglas Eck
  14. RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning – Authors: Yan Duan, John Schulman, Xi Chen, Peter Bartlett, Ilya Sutskever, Pieter Abbeel
  15. Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning – Authors: Sahil Sharma, Aravind S. Lakshminarayanan, Balaraman Ravindran
  16. Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening – Authors: Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng
  17. Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning – Authors: Joshua Achiam, Shankar Sastry
  18. Learning to Compose Words into Sentences with Reinforcement Learning – Authors: Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling
  19. Spatio-Temporal Abstractions in Reinforcement Learning Through Neural Encoding – Authors: Nir Baram, Tom Zahavy, Shie Mannor
  20. Modular Multitask Reinforcement Learning with Policy Sketches – Authors: Jacob Andreas, Dan Klein, Sergey Levine
  21. Combating Deep Reinforcement Learning’s Sisyphean Curse with Intrinsic Fear – Authors: Zachary C. Lipton, Jianfeng Gao, Lihong Li, Jianshu Chen, Li Deng


Deep Learning with Generative and Generative Adversarial Networks – ICLR 2017 Discoveries

The 5th International Conference on Learning Representation (ICLR 2017) is coming to Toulon, France (April 24-26 2017).

This blog post gives an overview of Deep Learning with Generative and Adversarial Networks related papers submitted to ICLR 2017; see the list of papers below. Want to learn about these topics? See OpenAI's article about Generative Models and Ian Goodfellow et al.'s paper about Generative Adversarial Networks.

Best regards,

Amund Tveit

ICLR 2017 – Generative and Generative Adversarial Papers

  1. Unsupervised Learning Using Generative Adversarial Training And Clustering – Authors: Vittal Premachandran, Alan L. Yuille
  2. Improving Generative Adversarial Networks with Denoising Feature Matching – Authors: David Warde-Farley, Yoshua Bengio
  3. Generative Adversarial Parallelization – Authors: Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor
  4. b-GAN: Unified Framework of Generative Adversarial Networks – Authors: Masatosi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  5. Generative Adversarial Networks as Variational Training of Energy Based Models – Authors: Shuangfei Zhai, Yu Cheng, Rogerio Feris, Zhongfei Zhang
  6. Boosted Generative Models – Authors: Aditya Grover, Stefano Ermon
  7. Adversarial examples for generative models – Authors: Jernej Kos, Dawn Song
  8. Mode Regularized Generative Adversarial Networks – Authors: Tong Che, Yanran Li, Athul Jacob, Yoshua Bengio, Wenjie Li
  9. Variational Recurrent Adversarial Deep Domain Adaptation – Authors: Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu
  10. Structured Interpretation of Deep Generative Models – Authors: N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H.S. Torr
  11. Inference and Introspection in Deep Generative Models of Sparse Data – Authors: Rahul G. Krishnan, Matthew Hoffman
  12. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy – Authors: Dougal J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton
  13. Unsupervised sentence representation learning with adversarial auto-encoder – Authors: Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang
  14. Unsupervised Program Induction with Hierarchical Generative Convolutional Neural Networks – Authors: Qucheng Gong, Yuandong Tian, C. Lawrence Zitnick
  15. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Noise – Authors: Beilun Wang, Ji Gao, Yanjun Qi
  16. On the Quantitative Analysis of Decoder-Based Generative Models – Authors: Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger Grosse
  17. Evaluation of Defensive Methods for DNNs against Multiple Adversarial Evasion Models – Authors: Xinyun Chen, Bo Li, Yevgeniy Vorobeychik
  18. Calibrating Energy-based Generative Adversarial Networks – Authors: Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville
  19. Inverse Problems in Computer Vision using Adversarial Imagination Priors – Authors: Hsiao-Yu Fish Tung, Katerina Fragkiadaki
  20. Towards Principled Methods for Training Generative Adversarial Networks – Authors: Martin Arjovsky, Leon Bottou
  21. Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning – Authors: Dilin Wang, Qiang Liu
  22. Multi-view Generative Adversarial Networks – Authors: Mickaël Chen, Ludovic Denoyer
  23. LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation – Authors: Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh
  24. Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks – Authors: Emily Denton, Sam Gross, Rob Fergus
  25. Generative Adversarial Networks for Image Steganography – Authors: Denis Volkhonskiy, Boris Borisenko, Evgeny Burnaev
  26. Unrolled Generative Adversarial Networks – Authors: Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
  27. Generative Multi-Adversarial Networks – Authors: Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
  28. Joint Multimodal Learning with Deep Generative Models – Authors: Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  29. Fast Adaptation in Generative Models with Generative Matching Networks – Authors: Sergey Bartunov, Dmitry P. Vetrov
  30. Adversarially Learned Inference – Authors: Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville
  31. Perception Updating Networks: On architectural constraints for interpretable video generative models – Authors: Eder Santana, Jose C Principe
  32. Energy-based Generative Adversarial Networks – Authors: Junbo Zhao, Michael Mathieu, Yann LeCun
  33. Simple Black-Box Adversarial Perturbations for Deep Networks – Authors: Nina Narodytska, Shiva Kasiviswanathan
  34. Learning in Implicit Generative Models – Authors: Shakir Mohamed, Balaji Lakshminarayanan
  35. On Detecting Adversarial Perturbations – Authors: Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
  36. Delving into Transferable Adversarial Examples and Black-box Attacks – Authors: Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
  37. Adversarial Feature Learning – Authors: Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
  38. Generative Paragraph Vector – Authors: Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, Xueqi Cheng
  39. Adversarial Machine Learning at Scale – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  40. Adversarial Training Methods for Semi-Supervised Text Classification – Authors: Takeru Miyato, Andrew M. Dai, Ian Goodfellow
  41. Sampling Generative Networks: Notes on a Few Effective Techniques – Authors: Tom White
  42. Adversarial examples in the physical world – Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio
  43. Improving Sampling from Generative Autoencoders with Markov Chains – Authors: Kai Arulkumaran, Antonia Creswell, Anil Anthony Bharath
  44. Neural Photo Editing with Introspective Adversarial Networks – Authors: Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
  45. Learning to Protect Communications with Adversarial Neural Cryptography – Authors: Martín Abadi, David G. Andersen


Deep Learning for Natural Language Processing – ICLR 2017 Discoveries

Update: 2017-Feb-03 – launched a new service – ai.amundtveit.com (navigation and search in papers). Try out e.g. its Natural Language Processing page.


The 5th International Conference on Learning Representations (ICLR 2017) is coming to Toulon, France (April 24-26 2017), and there is a large number of Deep Learning papers submitted to the conference; it looks like it will be a great event (see the word cloud below for the most frequent words used in submitted paper titles).

[Word cloud of the most frequent words in submitted ICLR 2017 paper titles]

This blog post gives an overview of Natural Language Processing related papers submitted to ICLR 2017; see the list of papers below. If you want to learn about Deep Learning for NLP, check out Stanford's CS224d: Deep Learning for Natural Language Processing.

Best regards,

Amund Tveit

ICLR 2017 – NLP Papers

Character/Word/Sentence Representation

  1. Character-aware Attention Residual Network for Sentence Representation – Authors: Xin Zheng, Zhenzhou Wu
  2. Program Synthesis for Character Level Language Modeling – Authors: Pavol Bielik, Veselin Raychev, Martin Vechev
  3. Words or Characters? Fine-grained Gating for Reading Comprehension – Authors: Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov
  4. Deep Character-Level Neural Machine Translation By Learning Morphology – Authors: Shenjian Zhao, Zhihua Zhang
  5. Opening the vocabulary of neural language models with character-level word representations – Authors: Matthieu Labeau, Alexandre Allauzen
  6. Unsupervised sentence representation learning with adversarial auto-encoder – Authors: Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang
  7. Offline Bilingual Word Vectors Without a Dictionary – Authors: Samuel L. Smith, David H. P. Turban, Nils Y. Hammerla, Steven Hamblin
  8. Learning Word-Like Units from Joint Audio-Visual Analysis – Authors: David Harwath, James R. Glass
  9. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling – Authors: Hakan Inan, Khashayar Khosravi, Richard Socher
  10. Sentence Ordering using Recurrent Neural Networks – Authors: Lajanugen Logeswaran, Honglak Lee, Dragomir Radev

Search/Question-Answer/Recommender Systems

  1. Learning to Query, Reason, and Answer Questions On Ambiguous Texts – Authors: Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerry Tesauro, Satinder Singh
  2. Group Sparse CNNs for Question Sentence Classification with Answer Sets – Authors: Mingbo Ma, Liang Huang, Bing Xiang, Bowen Zhou
  3. CONTENT2VEC: Specializing Joint Representations of Product Images and Text for the task of Product Recommendation – Authors: Thomas Nedelec, Elena Smirnova, Flavian Vasile
  4. Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce – Authors: Tom Zahavy, Alessandro Magnani, Abhinandan Krishnan, Shie Mannor

Word/Sentence Embedding

  1. A Simple but Tough-to-Beat Baseline for Sentence Embeddings – Authors: Sanjeev Arora, Yingyu Liang, Tengyu Ma
  2. Investigating Different Context Types and Representations for Learning Word Embeddings – Authors: Bofang Li, Tao Liu, Zhe Zhao, Xiaoyong Du
  3. Multi-view Recurrent Neural Acoustic Word Embeddings – Authors: Wanjia He, Weiran Wang, Karen Livescu
  4. A Self-Attentive Sentence Embedding – Authors: Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio
  5. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks – Authors: Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg

Multilingual/Translation/Sentiment

  1. Neural Machine Translation with Latent Semantic of Image and Text – Authors: Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
  2. Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context – Authors: Shyam Upadhyay, Kai-Wei Chang, James Zhou, Matt Taddy, Adam Kalai
  3. Learning to Understand: Incorporating Local Contexts with Global Attention for Sentiment Classification – Authors: Zhigang Yuan, Yuting Hu, Yongfeng Huang
  4. Adaptive Feature Abstraction for Translating Video to Language – Authors: Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin
  5. A Convolutional Encoder Model for Neural Machine Translation – Authors: Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin
  6. Fuzzy paraphrases in learning word representations with a corpus and a lexicon – Authors: Yuanzhi Ke, Masafumi Hagiwara
  7. Iterative Refinement for Machine Translation – Authors: Roman Novak, Michael Auli, David Grangier
  8. Vocabulary Selection Strategies for Neural Machine Translation – Authors: Gurvan L’Hostis, David Grangier, Michael Auli

Language Models/Text Comprehension/Matching/Compression/Classification/++

  1. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks – Authors: Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, Richard Socher
  2. Gated-Attention Readers for Text Comprehension – Authors: Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov
  3. A Compare-Aggregate Model for Matching Text Sequences – Authors: Shuohang Wang, Jing Jiang
  4. A Context-aware Attention Network for Interactive Question Answering – Authors: Huayu Li, Martin Renqiang Min, Yong Ge, Asim Kadav
  5. FastText.zip: Compressing text classification models – Authors: Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov
  6. Multi-Agent Cooperation and the Emergence of (Natural) Language – Authors: Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni
  7. Learning a Natural Language Interface with Neural Programmer – Authors: Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, Dario Amodei
  8. Learning similarity preserving representations with neural similarity and context encoders – Authors: Franziska Horn, Klaus-Robert Müller
  9. Adversarial Training Methods for Semi-Supervised Text Classification – Authors: Takeru Miyato, Andrew M. Dai, Ian Goodfellow
  10. Multi-Label Learning using Tensor Decomposition for Large Text Corpora – Authors: Sayantan Dasgupta

 
