Deep Learning in Energy Production

[Image: wind]

This blog post has recent publications about the use of Deep Learning in an Energy Production context (wind, gas and oil), e.g. wind power prediction, turbine risk assessment, reservoir discovery and price forecasting.

Best regards,

Amund Tveit

Wind

Year  Title Author
2017 Short-term Wind Energy Prediction Algorithm Based on SAGA-DBNs  W Fei, WU Zhong
2017 Wind Power Prediction using Deep Neural Network based Meta Regression and Transfer Learning  AS Qureshi, A Khan, A Zameer, A Usman
2017 Wind Turbine Failure Risk Assessment Model Based on DBN  C Fei, F Zhongguang
2017 The optimization of wind power interval forecast  X Yu, H Zang
2016 Deep Learning for Wind Speed Forecasting in Northeastern Region of Brazil  AT Sergio, TB Ludermir
2016 A very short term wind power prediction approach based on Multilayer Restricted Boltzmann Machine  X Peng, L Xiong, J Wen, Y Xu, W Fan, S Feng, B Wang
2016 Short-term prediction of wind power based on deep Long Short-Term Memory  Q Xiaoyun, K Xiaoning, Z Chao, J Shuai, M Xiuda
2016 Deep belief network based deterministic and probabilistic wind speed forecasting approach  HZ Wang, GB Wang, GQ Li, JC Peng, YT Liu
2016 A hybrid wind power prediction method  Y Tao, H Chen
2016 Deep learning based ensemble approach for probabilistic wind power forecasting  H Wang, G Li, G Wang, J Peng, H Jiang, Y Liu
2016 A hybrid wind power forecasting model based on data mining and wavelets analysis  R Azimi, M Ghofrani, M Ghayekhloo
2016 ELM Based Representational Learning for Fault Diagnosis of Wind Turbine Equipment  Z Yang, X Wang, PK Wong, J Zhong
2015 Deep Neural Networks for Wind Energy Prediction  D Díaz, A Torres, JR Dorronsoro
2015 Predictive Deep Boltzmann Machine for Multiperiod Wind Speed Forecasting  CY Zhang, CLP Chen, M Gan, L Chen
2015 Resilient Propagation for Multivariate Wind Power Prediction  J Stubbemann, NA Treiber, O Kramer
2015 Transfer learning for short-term wind speed prediction with deep neural networks  Q Hu, R Zhang, Y Zhou
2014 Wind Power Prediction and Pattern Feature Based on Deep Learning Method  Y Tao, H Chen, C Qiu

Gas

Year  Title Author
2017   Sample Document–Inversion Of The Permeability Of A Tight Gas Reservoir With The Combination Of A Deep Boltzmann Kernel …  L Zhu, C Zhang, Y Wei, X Zhou, Y Huang, C Zhang
2017   Deep Learning: Chance and Challenge for Deep Gas Reservoir Identification  C Junxing, W Shikai
2016   Finite-sensor fault-diagnosis simulation study of gas turbine engine using information entropy and deep belief networks  D Feng, M Xiao, Y Liu, H Song, Z Yang, Z Hu
2015   On Accurate and Reliable Anomaly Detection for Gas Turbine Combustors: A Deep Learning Approach  W Yan, L Yu
2015   A Review of Datasets and Load Forecasting Techniques for Smart Natural Gas and Water Grids: Analysis and Experiments.  M Fagiani, S Squartini, L Gabrielli, S Spinsante
2015   Short-term load forecasting for smart water and gas grids: A comparative evaluation  M Fagiani, S Squartini, R Bonfigli, F Piazza
2015   The early-warning model of equipment chain in gas pipeline based on DNN-HMM  J Qiu, W Liang, X Yu, M Zhang, L Zhang

Oil

Year  Title Author
2017   Development of a New Correlation for Bubble Point Pressure in Oil Reservoirs Using Artificial Intelligent Technique  S Elkatatny, M Mahmoud
2017   A deep learning ensemble approach for crude oil price forecasting  Y Zhao, J Li, L Yu
2016   Automatic Detection and Classification of Oil Tanks in Optical Satellite Images Based on Convolutional Neural Network  Q Wang, J Zhang, X Hu, Y Wang
2015   A Hierarchical Oil Tank Detector With Deep Surrounding Features for High-Resolution Optical Satellite Imagery  L Zhang, Z Shi, J Wu
Continue Reading

Lane Finding (on Roads) for Self Driving Cars with OpenCV

[Image: lanefinding]

This blog post describes a (basic) approach to using OpenCV for Lane Finding for self-driving cars (i.e. finding the yellow and white stripes along the road) – I did this as one of the projects in term 1 of Udacity’s self-driving car nanodegree (highly recommended online education!).

Disclaimer: the approach presented in this blog post is way too simple to use for an actual self-driving car, but it was a good way (for me) to learn more about (non-deep-learning based) computer vision and the lane finding problem.

See github.com/atveit/LaneFindingForSelfDrivingCars for more details about the approach (python code)

Best regards,

Amund Tveit

Lane Finding (On Roads) for Self Driving Cars with OpenCV

1. First I selected the region of interest (with hand-made vertices)

2. Converted the image to grayscale

3. Extracted likely white lane information from the grayscale image.

Used 220 as limit (255 is 100% white, but 220 is close enough)

4. Extracted likely yellow lane information from the (colorized) region of interest image.

RGB for Yellow is [255,255,0] but found [220,220,30] to be close enough

5. Converted the yellow lane information image to grayscale

6. Combined the likely yellow and white lane grayscale images into a new grayscale image (using max value)
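A minimal sketch of steps 3–6, assuming roi is the (RGB) region-of-interest image as a numpy array; the thresholds follow the values mentioned above, but variable names are illustrative – see the GitHub repo for the actual code:

```python
import cv2
import numpy as np

# roi: region-of-interest image as an RGB numpy array (illustrative name)
gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)

# Step 3: likely white lane pixels - grayscale values of 220 and above
white_mask = np.where(gray >= 220, 255, 0).astype(np.uint8)

# Step 4: likely yellow lane pixels - RGB roughly around [220, 220, 30]
# (high red and green, low blue)
yellow_mask = cv2.inRange(roi,
                          np.array([220, 220, 0], dtype=np.uint8),
                          np.array([255, 255, 30], dtype=np.uint8))

# Steps 5-6: both masks are single-channel grayscale; combine them by max value
combined = np.maximum(white_mask, yellow_mask)
```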

7. Did a Gaussian blur (with kernel size 3) followed by Canny edge detection

Gaussian blur smooths out the image using convolution; this reduces false signals passed to the (Canny) edge detector
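Step 7 could look roughly like this (combined is the merged mask from the sketch above; the Canny thresholds are illustrative values, not necessarily the ones used in the project):

```python
# Step 7: Gaussian blur with kernel size 3, then Canny edge detection
blurred = cv2.GaussianBlur(combined, (3, 3), 0)
edges = cv2.Canny(blurred, 50, 150)
```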

8. Did a hough (transform) line image creation. I also modified the draw_lines function (see GitHub link above) by calculating the average slope (derivative) and b value, i.e. fitting y = ax + b to each of the hough lines to find a and b, and then averaging over them.

For more information about Hough Transform, check out this hough transformation tutorial.

(side note: I believe it perhaps could have been smarter to use the hough line center points instead of the hough lines themselves, since their directions sometimes seem a bit unstable, and then use the average of the derivatives between center points instead)

9. Used weighted image blending to overlay the hough image (with the detected lane lines) on top of the original image
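A hedged sketch of steps 8–9, assuming edges is the Canny output from step 7 and image is the original frame; the Hough parameters and the left/right split are illustrative, and the actual draw_lines modification is in the GitHub repo:

```python
# Step 8: probabilistic Hough transform on the edge image (parameters illustrative)
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=20,
                        minLineLength=20, maxLineGap=100)

# Fit y = a*x + b for each Hough segment and average slope/intercept,
# splitting into left (negative slope) and right (positive slope) lanes
left, right = [], []
for line in lines:
    for x1, y1, x2, y2 in line:
        if x2 == x1:
            continue  # skip vertical segments
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        (left if a < 0 else right).append((a, b))

def average_line(params, y_bottom, y_top):
    """Average (a, b) pairs and return endpoints of y = a*x + b between two y values."""
    a = np.mean([p[0] for p in params])
    b = np.mean([p[1] for p in params])
    return (int((y_bottom - b) / a), y_bottom), (int((y_top - b) / a), y_top)

# Step 9: draw the averaged lines on a blank image and blend with the original
line_img = np.zeros_like(image)
h = image.shape[0]
for params in (left, right):
    if params:
        p1, p2 = average_line(params, h, int(h * 0.6))
        cv2.line(line_img, p1, p2, (255, 0, 0), 10)
overlay = cv2.addWeighted(image, 0.8, line_img, 1.0, 0.0)
```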

Continue Reading

Traffic Sign Detection with Convolutional Neural Networks

[Image: selfdrivingcar]

Making self-driving cars work requires several technologies and methods to pull in the same direction (e.g. Radar/Lidar, Camera, Control Theory and Deep Learning). The online Self-Driving Car Nanodegree from Udacity (divided into 3 terms) is probably the best way to learn more about the topic (see [Term 1], [Term 2] and [Term 3] for more details about each term). The coolest part is that you can run your code on an actual self-driving car towards the end of term 3 (I am currently in the middle of term 1 – highly recommended course!).

Note: before taking this course I recommend taking Udacity’s Deep Learning Nanodegree Foundations, since most (term 1) projects require some hands-on experience with Deep Learning.

Traffic Sign Detection with Convolutional Neural Networks

This blog post is a writeup of my (non-perfect) approach to German traffic sign detection (a project in the course) with Convolutional Neural Networks (in TensorFlow) – a variant of LeNet with Dropout and (the new) SELU – Self-Normalizing Neural Networks. The main effect of SELU was that the model gained classification accuracy quickly (even in the first epoch), but it didn’t lead to higher accuracy than batch normalisation + RELU in the end. (Details at: github.com/atveit/TrafficSignClassification). I believe data augmentation in particular, and perhaps a deeper network, could have improved the performance.

For other approaches (e.g. R-CNN and cascaded deep networks) see the blog post: Deep Learning for Vehicle Detection and Recognition.

UPDATE – 2017-July-15:

If you thought Traffic Sign Detection in modern cars was an entirely solved problem, think again:

[Image: TeslaTrafficSign]

Best regards,

Amund Tveit

1. Basic summary of the German Traffic Sign Data set.

I used numpy shape to calculate summary statistics of the traffic signs data set:

  • The size of the training set is 34799
  • The size of the validation set is 4410
  • The size of the test set is 12630
  • The shape of a traffic sign image is 32x32x3 (3 color channels, RGB)
  • The number of unique classes/labels in the data set is 43
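A minimal sketch of how these numbers can be computed with numpy shape, assuming the pickled data has already been loaded into X_train, X_valid, X_test and y_train (illustrative variable names, not necessarily the ones used in the project):

```python
import numpy as np

# X_train, X_valid, X_test are image arrays; y_train holds the integer labels
n_train = X_train.shape[0]            # 34799
n_validation = X_valid.shape[0]       # 4410
n_test = X_test.shape[0]              # 12630
image_shape = X_train.shape[1:]       # (32, 32, 3)
n_classes = len(np.unique(y_train))   # 43
```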

2. Visualization of the train, validation and test dataset.

Here is an exploratory visualization of the data set. It is a bar chart showing the normalized distribution of data across the 43 traffic sign classes. The key takeaway is that the relative number of data points varies quite a bit between classes, e.g. from around 6.5% (e.g. class 1) to 0.5% (e.g. class 37), i.e. a factor of at least 12 difference (6.5% / 0.5%), which can potentially impact classification performance.

[Image: normalized class distribution bar chart]
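A small sketch of how such a normalized class distribution plot can be produced (matplotlib is assumed; y_train is the illustrative label array from above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Normalized distribution of training examples over the 43 classes
counts = np.bincount(y_train, minlength=43)
plt.bar(np.arange(43), counts / counts.sum())
plt.xlabel("traffic sign class id")
plt.ylabel("fraction of training examples")
plt.show()
```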

3 Design of Architecture

3.1 Preprocessing of images

Did no grayscale conversion or other conversion of the train/test/validation images (they were already preprocessed). The images from the Internet were read using PIL, converted to RGB (from RGBA), resized to 32×32 and converted to numpy arrays before normalization.

All images were normalized per color channel (RGB – 3 channels with values between 0 and 255) to the range -0.5 to 0.5 by computing (value - 128)/255 for each pixel. Did no data augmentation.
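A hedged sketch of the preprocessing described above (function and file names are illustrative, not the project’s actual code):

```python
import numpy as np
from PIL import Image

def normalize(images):
    """Scale uint8 RGB pixel values (0-255) to roughly -0.5..0.5 per channel."""
    return (images.astype(np.float32) - 128.0) / 255.0

# For an image downloaded from the web (file name is illustrative):
web_img = Image.open("german_sign.png").convert("RGB").resize((32, 32))
web_img = normalize(np.asarray(web_img))
```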

Here are sample images from the training set

[Image: sample images from the training set]

3.2 Model Architecture

Given the relatively low resolution of the images I started with the LeNet example provided in the lectures, but to improve training I added Dropout (in the early layers) together with RELU rectifier functions. I had recently read about the self-normalizing rectifier function – SELU – so I decided to try that instead of RELU. It gave no better end result after many epochs, but trained much faster (got > 90% in one epoch), so I kept SELU in the final model. For more information about SELU check out the paper Self-Normalizing Neural Networks from Johannes Kepler University in Linz, Austria.

My final model consisted of the following layers:

Layer Description
Input 32x32x3 RGB image
Convolution 5×5 1×1 stride, valid padding, outputs 28x28x6
Dropout keep_prob = 0.9
SELU
Max Pooling 2×2 stride, outputs 14x14x6
Convolution 5×5 1×1 stride, valid padding, outputs 10x10x16
SELU
Dropout keep_prob = 0.9
Max Pooling 2×2 stride, outputs 5x5x16
Flatten output dimension 400
Fully connected output dimension 120
SELU
Fully connected output dimension 84
SELU
Fully connected output dimension 84
SELU
Fully connected output dimension 43
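The project itself was implemented directly in TensorFlow (see the GitHub link above). Purely as an illustration, here is a rough Keras-style sketch of the layer table, where a dropout keep_prob of 0.9 corresponds to a dropout rate of 0.1:

```python
from tensorflow.keras import layers, models

# Rough Keras-style sketch of the layer table above (not the author's actual code)
model = models.Sequential([
    layers.Conv2D(6, (5, 5), strides=1, padding="valid",
                  input_shape=(32, 32, 3)),      # 28x28x6
    layers.Dropout(0.1),
    layers.Activation("selu"),
    layers.MaxPooling2D(pool_size=2, strides=2),  # 14x14x6
    layers.Conv2D(16, (5, 5), strides=1, padding="valid"),  # 10x10x16
    layers.Activation("selu"),
    layers.Dropout(0.1),
    layers.MaxPooling2D(pool_size=2, strides=2),  # 5x5x16
    layers.Flatten(),                             # 400
    layers.Dense(120, activation="selu"),
    layers.Dense(84, activation="selu"),
    layers.Dense(84, activation="selu"),
    layers.Dense(43, activation="softmax"),
])
```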

3.3 Training of Model

To train the model, I used an Adam optimizer with a learning rate of 0.002, 20 epochs (converged fast with SELU) and a batch size of 256 (ran on a GTX 1070 with 8GB GPU RAM).
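Continuing the hedged Keras-style sketch above (the actual project used a plain TensorFlow training loop), the training setup would look roughly like this, assuming integer class labels:

```python
from tensorflow.keras.optimizers import Adam

# Adam with learning rate 0.002, 20 epochs, batch size 256 as described above
model.compile(optimizer=Adam(learning_rate=0.002),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(normalize(X_train), y_train,
          validation_data=(normalize(X_valid), y_valid),
          epochs=20, batch_size=256)
```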

3.4 Approach to finding a solution and getting accuracy > 0.93

Adding dropout to LeNet improved test accuracy, and SELU improved training speed. The originally partitioned data sets were quite unbalanced (when plotted), so reading all data, shuffling and creating new training/validation/test sets also helped. I thought about using Keras and fine-tuning a pretrained model (e.g. Inception v3), but a big model on such small images could lead to overfitting (not entirely sure about that though), and reducing the input size might lead to long training time (fine-tuning seems to work best when you keep the same input size and only change the output classes).

My final model results were:

  • validation set accuracy of 0.976 (between 0.975-0.982)
  • test set accuracy of 0.975

If an iterative approach was chosen:

  • What was the first architecture that was tried and why was it chosen?

Started with LeNet and incrementally added dropout and then several SELU layers. Also added one more fully connected layer.

  • What were some problems with the initial architecture?

None major, but results were not great before adding dropout (to reduce overfitting).

  • Which parameters were tuned? How were they adjusted and why?

Tried several learning rates. Could reduce the number of epochs after adding SELU. Used the same dropout keep rate throughout.

Since the difference between validation accuracy and test accuracy is very low, the model seems to be working well. The loss is also quite low (0.02), so there is most likely little to gain – at least without changing the model a lot.

4 Test a Model on New Images

4.1. Choose five German traffic signs found on the web

Here are five German traffic signs that I found on the web:

[Image: five German traffic signs found on the web]

In the first pick of images I didn’t check that the signs actually were among the 43 classes the model was built for, and that was actually not the case, making them impossible to classify correctly. But I got interesting results (regarding finding similar signs) for the wrongly classified ones, so I replaced only 2 of them with sign images that actually were covered by the model, i.e. still making it impossible to classify 3 of them.

Here are the results of the prediction:

Image Prediction
Priority road Priority road
Side road Speed limit (50km/h)
Adult and child on road Turn left ahead
Two way traffic ahead Beware of ice/snow
Speed limit (60km/h) Speed limit (60km/h)

The model was able to correctly guess 2 of the 5 traffic signs, which gives an accuracy of 40%. The other ones it can’t classify correctly, but the 2nd prediction for sign 3 – “adult and child on road” – is interesting since it suggests “Go straight or right”, which is quite visually similar (if you blur the innermost part of each sign you get almost the same image).

Continue Reading

Deep Learning for Emotion Recognition and Simulation

[Image: robotfeelings]

This blog post has recent publications about applying Deep Learning methods for emotion recognition (e.g. from voice, music, visual or EEG input) and simulation (e.g. for robots).

The quote “Emotion is what makes us human” can from a Human Computer Interaction (HCI) perspective be interpreted as: “For computers to properly communicate with humans they need to recognize human emotion, and simulate the appropriate emotion when communicating with humans”. Wikipedia describes emotion as:

    Emotion is any conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure. Scientific discourse has drifted to other meanings and there is no consensus on a definition. Emotion is often intertwined with mood, temperament, personality, disposition, and motivation. In some theories, cognition is an important aspect of emotion. Those acting primarily on the emotions they are feeling may seem as if they are not thinking, but mental processes are still essential, particularly in the interpretation of events. For example, the realization of our believing that we are in a dangerous situation and the subsequent arousal of our body’s nervous system (rapid heartbeat and breathing, sweating, muscle tension) is integral to the experience of our feeling afraid. Other theories, however, claim that emotion is separate from and can precede cognition.

Best regards,
Amund Tveit

Year  Title Author
2017   End-to-End Multimodal Emotion Recognition using Deep Neural Networks  P Tzirakis, G Trigeorgis, MA Nicolaou, B Schuller
2017   EEG-based emotion recognition using hierarchical network with subnetwork nodes  Y Yang, QMJ Wu, WL Zheng, BL Lu
2017   Multimodal architecture for emotion in robots using deep learning  M Ghayoumi, AK Bansal
2017   Attentive Convolutional Neural Network based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech  M Neumann, NT Vu
2017   Learning Affective Features with a Hybrid Deep Model for Audio-Visual Emotion Recognition  S Zhang, S Zhang, T Huang, W Gao, Q Tian
2017   A Literature Review on Emotion Recognition Using Various Methods  R Khan, O Sharif
2017   Evaluating deep learning architectures for Speech Emotion Recognition  HM Fayek, M Lech, L Cavedon
2017   Emotion Recognition from Scrambled Facial Images via Many Graph Embedding  R Jiang, ATS Ho, I Cheheb, N Al
2017   Prediction-based learning for continuous emotion recognition in speech  J Han, Z Zhang, F Ringeval, B Schuller
2017   Deep learning and SVM‐based emotion recognition from Chinese speech for smart affective services  W Zhang, D Zhao, Z Chai, LT Yang, X Liu, F Gong
2017   Deep spatio-temporal features for multimodal emotion recognition  D Nguyen, K Nguyen, S Sridharan, A Ghasemi, D Dean
2017   Imitation of human expressions based on emotion estimation by mental simulation  T Horii, Y Nagai, M Asada
2017   On Line Emotion Detection Using Retrainable Deep Neural Networks  D Kollias, A Tagaris, A Stafylopatis
2017   Quantum-inspired associative memories for incorporating emotion in a humanoid/Naoki Masuyama  M Naoki
2017   Respiration-based emotion recognition with deep learning  Q Zhang, X Chen, Q Zhan, T Yang, S Xia
2017   Wearable Biosensor Network Enabled Multimodal Daily-life Emotion Recognition Employing Reputation-driven Imbalanced Fuzzy Classification  Y Dai, X Wang, P Zhang, W Zhang
2016   Towards real-time Speech Emotion Recognition using deep neural networks  HM Fayek, M Lech, L Cavedon
2016   A Multi-task Learning Framework for Emotion Recognition Using 2D Continuous Space  R Xia, Y Liu
2016   Collaborative expression representation using peak expression and intra class variation face images for practical subject-independent emotion recognition in videos  SH Lee, WJ Baddar, YM Ro
2016   TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth  PU Diehl, BU Pedroni, A Cassidy, P Merolla, E Neftci
2016   Discriminatively Trained Recurrent Neural Networks for Continuous Dimensional Emotion Recognition from Audio  F Weninger, F Ringeval, E Marchi, B Schuller
2016   Feature Transfer Learning for Speech Emotion Recognition  J Deng
2016   Emotion Recognition in Speech with Deep Learning Architectures  M Erdal, M Kächele, F Schwenker
2016   Error-correcting output codes for multi-label emotion classification  C Li, Z Feng, C Xu
2016   Software Effort Estimation Framework To Improve Organization Productivity Using Emotion Recognition Of Software Engineers In …  BP Rao, PS Ramaiah
2016   How Deep Neural Networks Can Improve Emotion Recognition on Video Data  P Khorrami, TL Paine, K Brady, C Dagli, TS Huang
2016   Automatic emotion recognition in the wild using an ensemble of static and dynamic representations  MM Ghazi, HK Ekenel
2016   HoloNet: towards robust emotion recognition in the wild  A Yao, D Cai, P Hu, S Wang, L Sha, Y Chen
2016   Deep learning driven hypergraph representation for image-based emotion recognition  Y Huang, H Lu
2016   A Review on Deep Learning Algorithms for Speech and Facial Emotion Recognition  CP Latha, M Priya
2016   Novel Affective Features For Multiscale Prediction Of Emotion In Music  N Kumar, T Guha, CW Huang, C Vaz, SS Narayanan
2016   Facial emotion detection using deep learning  DL Spiers
2016   Speech Emotion Recognition Based on Deep Belief Networks and Wavelet Packet Cepstral Coefficients.  Y Huang, A Wu, G Zhang, Y Li
2016   Audio-Video Based Multimodal Emotion Recognition Using SVMs and Deep Learning  B Sun, Q Xu, J He, L Yu, L Li, Q Wei
2016   Transfer Learning of Deep Neural Network for Speech Emotion Recognition  Y Huang, M Hu, X Yu, T Wang, C Yang
2016   Feature Learning via Deep Belief Network for Chinese Speech Emotion Recognition  S Zhang, X Zhao, Y Chuang, W Guo, Y Chen
2016   Multiagent Social Influence Detection Based on Facial Emotion Recognition  P Mishra, R Hadfi, T Ito
2016   Emotion Recognition from Speech Signals Using Deep Learning Methods  S Pathak, MV Kolhe
2016   Emotion Recognition Using Facial Expression Images for a Robotic Companion  V Palade
2016   Multimodal Emotion Recognition Using Multimodal Deep Learning  W Liu, WL Zheng, BL Lu
2016   Self-Configuring Ensemble of Neural Network Classifiers for Emotion Recognition in the Intelligent Human-Machine Interaction  E Sopov, I Ivanov
2016   The Role of Emotion and Context in Musical Preference  Y Song
2016   Facing Realism in Spontaneous Emotion Recognition from Speech: Feature Enhancement by Autoencoder with LSTM Neural Networks  Z Zhang, F Ringeval, J Han, J Deng, E Marchi
2016   The University of Passau Open Emotion Recognition System for the Multimodal Emotion Challenge  J Deng, N Cummins, J Han, X Xu, Z Ren, V Pandit
2016   Building a large scale dataset for image emotion recognition: The fine print and the benchmark  Q You, J Luo, H Jin, J Yang
2016   Emotion Prediction from User-Generated Videos by Emotion Wheel Guided Deep Learning  CT Ho, YH Lin, JL Wu
2016   Emotion Recognition Using Multimodal Deep Learning  W Liu, WL Zheng, BL Lu
2016   FDBN: Design and development of Fractional Deep Belief Networks for speaker emotion recognition  K Mannepalli, PN Sastry, M Suman
2016   A novel Adaptive Fractional Deep Belief Networks for speaker emotion recognition  K Mannepalli, PN Sastry, M Suman
2016   Affect and Legal Education: Emotion in Learning and Teaching the Law  C Maughan
2016   Unsupervised domain adaptation for speech emotion recognition using PCANet  Z Huang, W Xue, Q Mao, Y Zhan
2016   Learning Auditory Neural Representations for Emotion Recognition  P Barros, C Weber, S Wermter
2016   Towards an” In-the-Wild” Emotion Dataset Using a Game-based Framework  W Li, F Abtahi, C Tsangouri, Z Zhu
2016   Deep Learning for Emotion Recognition in Faces  A Ruiz
2016   Emotion Classification on face images  M Jorda, N Miolane, A Ng
2016   Paralinguistic Speech Recognition: Classifying Emotion in Speech with Deep Learning Neural Networks  ER Segal
2016   Architecture of Emotion in Robots Using Convolutional Neural Networks  M Ghayoumi, AK Bansal
2016   Emotion recognition from face dataset using deep neural nets  D Das, A Chakrabarty
2016   Recognize the facial emotion in video sequences using eye and mouth temporal Gabor features  PI Rani, K Muneeswaran
2016   Deep Learning Based Emotion Recognition from Chinese Speech  W Zhang, D Zhao, X Chen, Y Zhang
2016   Bi-Modal Music Emotion Recognition: Novel Lyrical Features and Dataset  R Malheiro, R Panda, P Gomes, R Paiva
2016   Speech Emotion Recognition Using Voiced Segment Selection Algorithm  Y Gu, E Postma, HX Lin, J van den Herik
2015   Multi-modal Dimensional Emotion Recognition using Recurrent Neural Networks  S Chen, Q Jin
2015   Quantification of Cinematography Semiotics for Video-based Facial Emotion Recognition in the EmotiW 2015 Grand Challenge  AC Cruz
2015   EEG Based Emotion Identification Using Unsupervised Deep Feature Learning  X Li, P Zhang, D Song, G Yu, Y Hou, B Hu
2015   Pattern-Based Emotion Classification on Social Media  E Tromp, M Pechenizkiy
2015   Investigating Critical Frequency Bands and Channels for EEG-based Emotion Recognition with Deep Neural Networks  WL Zheng, BL Lu
2015   Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network  WL Zheng, HT Guo, BL Lu
2015   Analysis of Physiological for Emotion Recognition with IRS Model  C Li, C Xu, Z Feng
2015   Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns  G Levi, T Hassner
2015   Negative Emotion Recognition in Spoken Dialogs  X Zhang, H Wang, L Li, M Zhao, Q Li
2015   Combining Multimodal Features within a Fusion Network for Emotion Recognition in the Wild  B Sun, L Li, G Zhou, X Wu, J He, L Yu, D Li, Q Wei
2015   Recurrent Neural Networks for Emotion Recognition in Video  S Ebrahimi Kahou, V Michalski, K Konda, R Memisevic
2015   A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition  W Li, F Abtahi, Z Zhu
2015   Learning Speech Emotion Features by Joint Disentangling-Discrimination  W Xue, Z Huang, X Luo, Q Mao
2015   Data selection for acoustic emotion recognition: Analyzing and comparing utterance and sub-utterance selection strategies  D Le, EM Provost
2015   Leveraging Inter-rater Agreement for Audio-Visual Emotion Recognition  Y Kim, EM Provost
2015   The Research on Cross-Language Emotion Recognition Algorithm for Hearing Aid  X Shulan, W Jilin
2015   Optimized multi-channel deep neural network with 2D graphical representation of acoustic speech features for emotion recognition  MN Stolar, M Lech, IS Burnett
2015   EmoNets: Multimodal deep learning approaches for emotion recognition in video  SE Kahou, X Bouthillier, P Lamblin, C Gulcehre
2015   Deep learninig of EEG signals for emotion recognition  Y Gao, HJ Lee, RM Mehmood
2015   Emotion Recognition & Classification using Neural Networks  K Koupidis, A Ioannis
2015   Emotion recognition from embedded bodily expressions and speech during dyadic interactions  PM Müller, S Amin, P Verma, M Andriluka, A Bulling
2015   Speech emotion recognition with unsupervised feature learning  Z HUANG, W XUE, Q MAO
2015   Emotion identification by facial landmarks dynamics analysis  A Bandrabur, L Florea, C Florea, M Mancas
2014   Speech Emotion Recognition Using CNN  Z Huang, M Dong, Q Mao, Y Zhan
2014   Multi-scale Temporal Modeling for Dimensional Emotion Recognition in Video  L Chao, J Tao, M Yang, Y Li, Z Wen
2014   Improving generation performance of speech emotion recognition by denoising autoencoders  L Chao, J Tao, M Yang, Y Li
2014   Acoustic emotion recognition using deep neural network  J Niu, Y Qian, K Yu
2014   Prosodic, spectral and voice quality feature selection using a long-term stopping criterion for audio-based emotion recognition  M Kächele, D Zharkov, S Meudt, F Schwenker
2014   Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning  JK Chen, Z Chen, Z Chi, H Fu
2014   Emotion Modeling and Machine Learning in Affective Computing  K Kim
2014   A Study of Deep Belief Network Based Chinese Speech Emotion Recognition  B Chen, Q Yin, P Guo
Continue Reading

Deep Learning for Protein(omics)

[Image: dnarotate]

This blog post has recent publications related to Deep Learning for proteomics (the study of proteins). Proteins are a set of molecules in the human (and animal) body, probably best known for their role related to muscle mass and in DNA replication.

Wikipedia describes proteins as:

    Proteins (/ˈproʊˌtiːnz/ or /ˈproʊti.ᵻnz/) are large biomolecules, or macromolecules, consisting of one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific three-dimensional structure that determines its activity.

Best regards,
Amund Tveit (WeChat ID: AmundTveit)

Year  Title Author
2017   Deep Recurrent Neural Network for Protein Function Prediction from Sequence  XL Liu
2017   Sequence-based prediction of protein protein interaction using a deep-learning algorithm  T Sun, B Zhou, L Lai, J Pei
2017   Protein Model Quality Assessment: A Machine Learning Approach  K Uziela
2017   Deep convolutional neural networks for detecting secondary structures in protein density maps from cryo-electron microscopy  R Li, D Si, T Zeng, S Ji, J He
2017   Towards recognition of protein function based on its structure using deep convolutional networks  A Tavanaei, AS Maida, A Kaniymattam
2017   Improved protein model quality prediction by changing the target function  K Uziela, D Menendez Hurtado, N Shu, B Wallner
2017   A Novel Model Based On Fcm-Lm Algorithm For Prediction Of Protein Folding Rate  L Liu, M Ma, J Cui
2017   EPSILON-CP: using deep learning to combine information from multiple sources for protein contact prediction  K Stahl, M Schneider, O Brock
2017   Prediction of protein function using a deep convolutional neural network ensemble  EI Zacharaki
2017   Protein Function Prediction using Deep Restricted Boltzmann Machines  X Zou, G Wang, G Yu
2017   Next-Step Conditioned Deep Convolutional Neural Networks Improve Protein Secondary Structure Prediction  A Busia, N Jaitly
2017   Predicting membrane protein contacts from non-membrane proteins by deep transfer learning  Z Li, S Wang, Y Yu, J Xu
2017   A Template-Based Protein Structure Reconstruction Method Using Deep Autoencoder Learning  H Li, Q Lyu, J Cheng
2017   DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations  X Zhou, J Cheng
2017   Computational Methods for the Prediction of Drug-Target Interactions from Drug Fingerprints and Protein Sequences by Stacked Auto-Encoder Deep Neural Network  L Wang, ZH You, X Chen, SX Xia, F Liu, X Yan, Y Zhou
2017   Multi-task Deep Neural Networks in Automated Protein Function Prediction  AS Rifaioglu, T Doğan, MJ Martin, R Cetin
2016   AUC-Maximized Deep Convolutional Neural Fields for Protein Sequence Labeling  S Wang, S Sun, J Xu
2016   Evaluation of Protein Structural Models Using Random Forests  R Cao, T Jo, J Cheng
2016   A Protein Domain and Family Based Approach to Rare Variant Association Analysis  TG Richardson, HA Shihab, MA Rivas, MI McCarthy
2016   Protein Sequencing And Neural Network Classification Methods  V Indarni, SK Terala, PV Bhushan, MR Ireddy
2016   Accurate prediction of docked protein structure similarity using neural networks and restricted Boltzmann machines  R Farhoodi, B Akbal
2016   Identification of thermostabilizing mutations for a membrane protein whose three‐dimensional structure is unknown  Y Kajiwara, S Yasuda, Y Takamuku, T Murata
2016   Identification of Genetic Sequences Recognized by Human SC35 Protein Using Artificial Neural Networks: A Deep Learning Approach  AJ Luke, S Fergione
2016   MUST-CNN: A MUltilayer Shift-and-sTitch Deep Convolutional Architecture for Sequence-based Protein Structure Prediction  Z Lin, Y Qi
2016   Protein Secondary Structure Prediction Using Deep Multi-scale Convolutional Neural Networks and Next-Step Conditioning  A Busia, J Collins, N Jaitly
2016   A computational framework for disease grading using protein signatures  E Zerhouni, B Prisacari, Q Zhong, P Wild, M Gabrani
2016   ProtPOS: a python package for the prediction of protein preferred orientation on a surface  JCF Ngai, PI Mak, SWI Siu
2016   DeepQA: Improving the estimation of single protein model quality with deep belief networks  R Cao, D Bhattacharya, J Hou, J Cheng
2016   Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images  V Golkov, MJ Skwark, A Golkov, A Dosovitskiy, T Brox
2016   Protein Secondary Structure Prediction by using Deep Learning Method  Y Wang, H Mao, Z Yi
2016   On the importance of composite protein multiple ligand interactions in protein pockets  S Tonddast‐Navaei, B Srinivasan, J Skolnick
2016   Protein function in precision medicine: deep understanding with machine learning  B Rost, P Radivojac, Y Bromberg
2016   Protein Residue-Residue Contact Prediction Using Stacked Denoising Autoencoders  IV Luttrell, J Bailey
2016   Protein Residue Contacts and Prediction Methods  B Adhikari, J Cheng
2016   RaptorX-Property: a web server for protein structure property prediction.  S Wang, W Li, S Liu, J Xu
2016   AUCpreD: proteome-level protein disorder prediction by AUC-maximized deep convolutional neural fields  S Wang, J Ma, J Xu
2016   Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11  T Liu, Y Wang, J Eickholt, Z Wang
2015   Theory, Methods, and Applications of Coevolution in Protein Contact Prediction  J Ma, S Wang
2015   A topological approach for protein classification  Z Cang, L Mu, K Wu, K Opron, K Xia, GW Wei
2015   Application of Learning to Rank to protein remote homology detection  B Liu, J Chen, X Wang
2015   Improving Protein Fold Recognition by Deep Learning Networks  T Jo, J Hou, J Eickholt, J Cheng
2015   Proteins, physics and probability kinematics: a Bayesian formulation of the protein folding problem  T Hamelryck, W Boomsma, J Ferkinghoff
2015   DeepCNF-D: Predicting Protein Order/Disorder Regions by Weighted Deep Convolutional Neural Fields  S Wang, S Weng, J Ma, Q Tang
2015   A deep learning framework for modeling structural features of RNA-binding protein targets  S Zhang, J Zhou, H Hu, H Gong, L Chen, C Cheng
2015   A serum protein test for improved prognostic stratification of patients with myelodysplastic syndrome (MDS)  J Roder, J Löffler
2015   An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions  X Deng, J Gumm, S Karki, J Eickholt, J Cheng
2015   A panel of mass spectrometry based serum protein tests for predicting graft-versus-host disease (GvHD) and its severity  H Roder, AC Hoffmann, J Roder, M Koldehoff
2015   Learning Deep Architectures for Protein Structure Prediction  K Baek
2015   Protein secondary structure prediction using deep convolutional neural fields  S Wang, J Peng, J Ma, J Xu
2015   Protein sequence labelling by AUC-maximized Deep Convolutional Neural Fields  S Wang, J Ma, S Sun, J Xu
2015   Fast loop modeling for protein structures  J Zhang, S Nguyen, Y Shang, D Xu, I Kosztin
2015   Introducing Students to Protein Analysis Techniques: Separation and Comparative Analysis of Gluten Proteins in Various Wheat Strains  AL Pirinelli, JC Trinidad, NLB Pohl
2014   Predicting backbone Cα angles and dihedrals from protein sequences by stacked sparse auto‐encoder deep neural network  J Lyons, A Dehzangi, R Heffernan, A Sharma
2014   Improved contact predictions using the recognition of protein like contact patterns.  MJ Skwark, D Raimondi, M Michel, A Elofsson
Continue Reading

Deep Learning for Embedded Systems

[Image: bioniceyeargust2]

This blog post has recent publications related to Deep Learning for Embedded Systems (e.g. computer systems in toys, biometrics, cars, kitchen equipment, medical equipment such as bionic eyes, etc).

Wikipedia defines Embedded systems as:

    An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints.[1][2] It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today.[3] Ninety-eight percent of all microprocessors are manufactured as components of embedded systems

Best regards,
Amund Tveit (WeChat ID: AmundTveit – Twitter: atveit)

Year  Title Author
2017   Six Degree-of-Freedom Localization of Endoscopic Capsule Robots using Recurrent Neural Networks embedded into a Convolutional Neural Network  M Turan, A Abdullah, R Jamiruddin, H Araujo
2017   Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices  W Meng, Z Gu, M Zhang, Z Wu
2017   14.1 A 2.9 TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems  G Desoli, N Chawla, T Boesch, S Singh, E Guidetti
2017   Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning  G Bologna, Y Hayashi
2017   Moving Object Detection in Heterogeneous Conditions in Embedded Systems  A Garbo, S Quer
2016   Re-architecting the on-chip memory sub-system of machine-learning accelerator for embedded devices  Y Wang, H Li, X Li
2016   DELAROSE: A Case Example of the Value of Embedded Course Content and Assessment in the Workplace  JSG Wells, M Bergin, C Ryan
2016   Neurosurgery Conference Experience Embedded within PCOM’s Clinical and Basic Neuroscience Curriculum: An Active Learning Model  J Okun, S Yocom, M McGuiness, M Bell, D Appelt
2016   Scene Parsing using Inference Embedded Deep Networks  S Bu, P Han, Z Liu, J Han
2016   Improving Deep Learning Accuracy with Noisy Autoencoders Embedded Perturbative Layers  L Xia, X Zhang, B Li
2016   Noise Robust Keyword Spotting Using Deep Neural Networks For Embedded Platforms  R Abdelmoula
2016   14.1 A 126.1 mW real-time natural UI/UX processor with embedded deep-learning core for low-power smart glasses  S Park, S Choi, J Lee, M Kim, J Park, HJ Yoo
2016   A wearable mobility aid for the visually impaired based on embedded 3D vision and deep learning  M Poggi, S Mattoccia
2016   Optimizing convolutional neural networks on embedded platforms with OpenCL  A Lokhmotov, G Fursin
2016   Demonstration Abstract: Accelerating Embedded Deep Learning Using DeepX  ND Lane, S Bhattacharya, P Georgiev, C Forlivesi
2016   Feedback recurrent neural network-based embedded vector and its application in topic model  L Li, S Gan, X Yin
2016   Human Pose Estimation from Depth Images via Inference Embedded Multi-task Learning  K Wang, S Zhai, H Cheng, X Liang, L Lin
2015   Memory Heat Map: Anomaly Detection in Real-Time Embedded Systems Using Memory Behavior  MK Yoon, S Mohan, J Choi, L Sha
2015   Accelerating real-time embedded scene labeling with convolutional networks  L Cavigelli, M Magno, L Benini
2015   Business meeting training on its head: inverted and embedded learning  E Van Praet
2015   CNN optimizations for embedded systems and FFT  A Vasilyev
2015   Learning Socially Embedded Visual Representation from Scratch  S Liu, P Cui, W Zhu, S Yang
2015   Inter-Tile Reuse Optimization Applied to Bandwidth Constrained Embedded Accelerators  M Peemen, B Mesman, H Corporaal
2015   Emotion recognition from embedded bodily expressions and speech during dyadic interactions  PM Müller, S Amin, P Verma, M Andriluka, A Bulling
2015   Incremental extreme learning machine based on deep feature embedded  J Zhang, S Ding, N Zhang, Z Shi
2015   Utilizing deep neural nets for an embedded ECG-based biometric authentication system  A Page, A Kulkarni, T Mohsenin
2015   A scalable and adaptable probabilistic model embedded in an electronic nose for intelligent sensor fusion  CT Tang, CM Huang, KT Tang, H Chen
Continue Reading

Deep Learning for Magnetic Resonance Imaging (MRI)

[Image: mri]

Magnetic Resonance Imaging (MRI) can be used in many types of diagnosis, e.g. cancer, Alzheimer’s, cardiac and muscle/skeleton issues. This blog post has recent publications on Deep Learning applied to MRI (health-related) data, e.g. for segmentation, detection, denoising and classification.

MRI is described in Wikipedia as:

    Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body in both health and disease. MRI scanners use strong magnetic fields, radio waves, and field gradients to generate images of the organs in the body.

Best regards,
Amund Tveit

Year  Title Author
2017   Residual and Plain Convolutional Neural Networks for 3D Brain MRI Classification  S Korolev, A Safiullin, M Belyaev, Y Dodonova
2017   Automatic segmentation of the right ventricle from cardiac MRI using a learning‐based approach  MR Avendi, A Kheradvar, H Jafarkhani
2017   Learning a Variational Network for Reconstruction of Accelerated MRI Data  K Hammernik, T Klatzer, E Kobler, MP Recht
2017   A 2D/3D Convolutional Neural Network for Brain White Matter Lesion Detection in Multimodal MRI  L Roa
2017   On hierarchical brain tumor segmentation in MRI using fully convolutional neural networks: A preliminary study  S Pereira, A Oliveira, V Alves, CA Silva
2017   Classification of breast MRI lesions using small-size training sets: comparison of deep learning approaches  G Amit, R Ben
2017   A deep learning network for right ventricle segmentation in short-axis MRI  GN Luo, R An, KQ Wang, SY Dong, HG Zhang
2017   A novel left ventricular volumes prediction method based on deep learning network in cardiac MRI  GN Luo, GX Sun, KQ Wang, SY Dong, HG Zhang
2017   Classification of MRI data using Deep Learning and Gaussian Process-based Model Selection  H Bertrand, M Perrot, R Ardon, I Bloch
2017   Using Deep Learning to Segment Breast and Fibroglandular Tissue in MRI Volumes  MU Dalmış, G Litjens, K Holland, A Setio, R Mann
2017   Automatic Liver and Tumor Segmentation of CT and MRI Volumes using Cascaded Fully Convolutional Neural Networks  PF Christ, F Ettlinger, F Grün, MEA Elshaera, J Lipkova
2017   Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning  P Korfiatis, TL Kline, BJ Erickson
2017   Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning  T Zhou, I Icke, B Dogdas, S Parimal, S Sampath
2017   Deep artifact learning for compressed sensing and parallel MRI  D Lee, J Yoo, JC Ye
2017   Deep Generative Adversarial Networks for Compressed Sensing Automates MRI  M Mardani, E Gong, JY Cheng, S Vasanawala
2017   3D Motion Modeling and Reconstruction of Left Ventricle Wall in Cardiac MRI  D Yang, P Wu, C Tan, KM Pohl, L Axel, D Metaxas
2017   Estimation of the volume of the left ventricle from MRI images using deep neural networks  F Liao, X Chen, X Hu, S Song
2017   A fully automatic deep learning method for atrial scarring segmentation from late gadolinium-enhanced MRI images  G Yang, X Zhuang, H Khan, S Haldar, E Nyktari, X Ye
2017   Age estimation from brain MRI images using deep learning  TW Huang, HT Chen, R Fujimoto, K Ito, K Wu, K Sato
2017   Segmenting Atrial Fibrosis from Late Gadolinium-Enhanced Cardiac MRI by Deep-Learned Features with Stacked Sparse Auto-Encoders  S Haldar, E Nyktari, X Ye, G Slabaugh, T Wong
2017   Deep Residual Learning For Compressed Sensing Mri  D Lee, J Yoo, JC Ye
2017   Prostate cancer diagnosis using deep learning with 3D multiparametric MRI  S Liu, H Zheng, Y Feng, W Li
2017   Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions  Z Akkus, A Galimzianova, A Hoogi, DL Rubin
2016   Classification of Alzheimer’s Disease Structural MRI Data by Deep Learning Convolutional Neural Networks  S Sarraf, G Tofighi
2016   De-noising of Contrast-Enhanced MRI Sequences by an Ensemble of Expert Deep Neural Networks  A Benou, R Veksler, A Friedman, TR Raviv
2016   A Combined Deep-Learning and Deformable-Model Approach to Fully Automatic Segmentation of the Left Ventricle in Cardiac MRI  MR Avendi, A Kheradvar, H Jafarkhani
2016   Applying machine learning to automated segmentation of head and neck tumour volumes and organs at risk on radiotherapy planning CT and MRI scans  C Chu, J De Fauw, N Tomasev, BR Paredes, C Hughes
2016   A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI  PV Tran
2016   An Overview of Techniques for Cardiac Left Ventricle Segmentation on Short-Axis MRI  A Krasnobaev, A Sozykin
2016   Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: a clinical study  J Dolz, N Betrouni, M Quidet, D Kharroubi, HA Leroy
2016   Hough-CNN: Deep Learning for Segmentation of Deep Brain Regions in MRI and Ultrasound  F Milletari, SA Ahmadi, C Kroll, A Plate, V Rozanski
2016   Mental Disease Feature Extraction with MRI by 3D Convolutional Neural Network with Multi-channel Input  L Cao, Z Liu, X He, Y Cao, K Li
2016   Deep learning trends for focal brain pathology segmentation in MRI  M Havaei, N Guizard, H Larochelle, PM Jodoin
2016   Identification of Water and Fat Images in Dixon MRI Using Aggregated Patch-Based Convolutional Neural Networks  L Zhao, Y Zhan, D Nickel, M Fenchel, B Kiefer, XS Zhou
2016   Deep MRI brain extraction: A 3D convolutional neural network for skull stripping  J Kleesiek, G Urban, A Hubert, D Schwarz
2016   Active appearance model and deep learning for more accurate prostate segmentation on MRI  R Cheng, HR Roth, L Lu, S Wang, B Turkbey
2016   Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation  RPK Poudel, P Lamata, G Montana
2016   Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis  HK van der Burgh, R Schmidt, HJ Westeneng
2016   Semantic-Based Brain MRI Image Segmentation Using Convolutional Neural Network  Y Chou, DJ Lee, D Zhang
2016   Abstract WP41: Predicting Acute Ischemic Stroke Tissue Fate Using Deep Learning on Source Perfusion MRI  KC Ho, S El
2016   A new ASM framework for left ventricle segmentation exploring slice variability in cardiac MRI volumes  C Santiago, JC Nascimento, JS Marques
2015   Crohn’s disease segmentation from mri using learned image priors  D Mahapatra, P Schüffler, F Vos, JM Buhmann
2015   Discovery Radiomics for Multi-Parametric MRI Prostate Cancer Detection  AG Chung, MJ Shafiee, D Kumar, F Khalvati
2015   Real-time Dynamic MRI Reconstruction using Stacked Denoising Autoencoder  A Majumdar
2015   q-Space Deep Learning for Twelve-Fold Shorter and Model-Free Diffusion MRI Scans  V Golkov, A Dosovitskiy, P Sämann, JI Sperl
Continue Reading

A closer look at Startup Equity Crowd Funding

Crowd Funding is a way for new projects to get funding from (typically many) people (per funding campaign) that are not professional investors (hence the word “crowd”), e.g. at Indiegogo and Kickstarter (or services such as GoFundMe, CrowdRise, RocketHub and many others). The diversity of crowd funding projects is very high, e.g. charity funding of people and organizations as well as funding of startups (typically for product development) in an early phase (by buying the product before it is ready). Probably the most well-known startup that got crowd funding was Virtual Reality startup Oculus VR, which raised 2.5 million USD from Kickstarter in 2012 and was acquired by Facebook for 2 billion USD in 2014.

1. Equity Crowd Funding

However, from a financial perspective the people or companies that help fund crowd funding campaigns get very little in return (note: not to discount the feeling of helping and making projects happen). With Equity Crowd Funding this is different: as with regular Crowd Funding, people invest moderate amounts (at least compared to what an angel investor, venture capitalist or private equity firm would do), but the funders also get equity (stocks, options or equity-guaranteed convertible loans) in the startup. Startups are an incredibly risky investment since most never succeed and hence provide zero returns (just loss, not to forget the opportunity cost). In a quickly moving world (due to accelerating technology change, e.g. in areas such as AI/Deep Learning and Robotics) with very low interest rates, getting any kind of return on investment is very hard (without taking risk).

Let me give you examples of how hard it is to get high return on investment (ROI) with low risk:

a) In the 1980s the Norwegian postal bank had a “Gullbok” (Gold Book) savings account that provided around 11-13% interest rates – which seems almost unbelievable today – but it probably carried relatively high risk at the time: Norway did significant devaluations of the Krone currency relative to other currencies both in May 1986 and in 1993 (the latter when the Norwegian bank sector almost collapsed)

b) Recently I saw an ad for a regional bank’s savings account where you had to lock in more than 50 thousand USD for more than a year to get less than 2% interest rate (Norway’s central bank targets an inflation rate of 2.5%, which roughly means you get -0.5% annual ROI in purchasing-power terms instead of 2%). (This ROI estimate is probably less risky than the one from the 1980s.)

2. Crowd Equity Funding is Very Risky

For those willing to take a much higher risk of losing all their invested money, Crowd Equity Funding can be an approach, but please keep in mind that Crowd Equity Funding should be considered in a similar way to buying lottery tickets, doing any other kind of gambling, giving money away, or regular crowd funding, i.e. only use surplus money that you can afford to lose entirely and never get any ROI from. The U.S. Securities and Exchange Commission proposed crowd-equity-related regulations to protect people from gambling away their money; for most people the upper bound would be a maximum of either $2000 or 5% of annual income or net worth.

3. Examples of ROI of Startup Investments

US early stage investors Angel List (angel.co) and 500 Startups have reported ROI for their funds (note that neither of these currently supports Crowd Equity Funding; both require you to be an accredited investor to be allowed to invest). They both report Internal Rate of Return (IRR):

  1. Angel List’s 2013 syndicate had a 46% unrealized return (IRR) by the end of 2015 (source: Angel List – angel.co/returns), and
  2. 500 Startups’ 2010 fund had 18.5% IRR, the 2012 fund had 23.1% IRR and the 2014 fund had 20.3% IRR (source: Wall Street Journal – www.wsj.com/articles/500-startups-seeks-broader-acceptance-reveals-return-data-1469014201).

4. Examples of Equity Crowd Funding Platforms

As opposed to regular startup funding done by angel investors and venture capitalists – where Silicon Valley is absolutely leading – my impression is that crowd equity funding is so far most common in Europe, and in particular in the Nordics and the UK (probably due to the novelty of the previously mentioned SEC regulations for crowd equity funding, see SEC’s update from May 2017). Examples of Equity Crowd Funding platforms are:

  1. Seedrs (United Kingdom)
  2. Invesdor (Finland)
  3. FundedByMe (Sweden)
  4. MyShare (Norway, focus on live crowdfunding for conferences/events)
  5. OurCrowd (Israel)
  6. MyMicroInvest (Belgium)
  7. Shadow Foundr (United Kingdom)
  8. WeFunder (USA)
  9. Fundable (USA)
  10. CrowdFunder (USA)

Invesdor – based in Finland – claims to have Europe’s first (equity) crowd funding exit via Initial Public Offering (IPO) at the Nasdaq First North Helsinki stock market (source: home.invesdor.com/en/blog/2016/11/10/the-first-crowdfunding-backed-public-company-starts-trading-today).

Seedrs – based in UK – also reports an IPO at the London Stock Exchange (source: www.crowdfundinsider.com/2016/11/92618-seedrs-funded-company-freeagent-trades-london-stock-exchanges-aim and www.bloomberg.com/news/articles/2016-10-31/u-k-to-see-another-tech-ipo-as-freeagent-aims-to-list-in-london for more about the IPO itself).

In addition to Startup-oriented Crowd Funding there are increasing amounts of Crowd Funding for Real Estate – source: A Review of Spanish Real Estate Crowdfunding Platforms

What the Equity Crowd Funding platforms have in common is that they want to provide easy-to-use and transparent platforms for investing, with relatively high security for both the crowd equity investors and the startups, i.e. there are quite stringent requirements for registration (for investors) and documentation about the investment round (for startups). However, there is still significant risk involved in investing.

5. Realize Returns of Startup Investments

A challenge when investing in startups is how to realize returns (even when the startup has grown), since you typically cannot sell shares directly as you could with publicly listed companies on a reasonably liquid stock exchange (note that Angel List reported unrealized returns for their 2013 Syndicate, see above).

A few years back there were massive amounts of startup acquisitions – some at a very early stage – performed primarily by public tech companies (e.g. Alphabet (Google), Facebook, Apple, Microsoft and others) or large late-stage startups (e.g. Uber, Airbnb and other unicorn startups); try a web search for: list of startup acquisitions by PutCompanyNameHere to get an overview. This meant that for a lucky startup investor there was a chance of a quick realized return. However, in most cases – even for successful startups (some big unicorn startups have strict regulations on share sales/purchases) – it is very hard to realize returns unless the startup does an IPO or gets acquired by a bigger company (in some countries startup shares can be traded on smaller listed exchanges – Over The Counter (OTC) – which have less regulation than the large public stock exchanges and are typically considered much riskier than the larger exchanges wrt liquidity and pricing)

The Crowd Equity Funding platform Seedrs (see previous section) aims to increase liquidity of startup investments to allow for easier realisation of returns by introduction of a secondary market (source: techcrunch.com/2017/05/07/equity-crowdfunding-platform-seedrs-to-launch-secondary-market/) and they claim that the first sales on the secondary market have been successful (source: www.forbes.com/sites/davidprosser/2017/06/16/seedrs-claims-success-for-first-secondary-market-sales/).

Secondary markets (e.g. SecondMarket – which was later acquired by Nasdaq) got a lot of attention prior to the Facebook IPO. These secondary markets might be part of the reason why later unicorn startups have had strong regulations of share sales and purchases. (According to Fortune, SecondMarket pivoted its model – source: fortune.com/2014/07/25/secondmarket-pivoted-after-facebooks-ipo-now-volume-is-higher-than-ever/). Examples of secondary markets for startup shares (or entire startups, as for ExitRound) of various types are:

  1. SharesPost
  2. ExitRound
  3. Equidate
  4. Nasdaq Private Market

Conclusion

Startups want and need funding, and despite equity crowd funding being a very high risk investment, it aims to make it easier for startups to get funds and for investors to invest (and perhaps realize returns in secondary markets), and it is a very interesting area to follow. But please take into consideration the immense risk if you want to take the step into becoming a crowd equity investor; being involved in the startup world can become addictive, but remember that you are playing with real money. If you want to learn more about the topic I recommend the book Equity Crowdfunding: The Complete Guide For Startups And Growing Companies (by Nathan Rose)

Best regards,

Amund Tveit

Continue Reading

Keras Deep Learning with Apple’s CoreMLTools on iOS 11 – Part 1

[Image: kerasxcode]

This is a basic example of training and using a simple Keras neural network model (XOR) on an iPhone with Apple’s coremltools on iOS 11. Note that the main point is showing the integration, from a Keras model all the way to having it running in the iOS app, and not the particular choice of model; in principle a similar approach could be used for any kind of Deep Learning model, e.g. the generator part of a Generative Adversarial Network, a Recurrent Neural Network (or LSTM) or a Convolutional Neural Network.

For easy portability I chose to run the Keras part inside Docker (i.e. one could e.g. use nvidia-docker for a larger model that would need a GPU to train, e.g. in the cloud, on a desktop or on a powerful laptop). The current choice of Keras backend was TensorFlow, but I believe it should also work for other backends (e.g. CNTK, Theano or MXNet). The code for this blog post is available at github.com/atveit/keras2ios

Best regards,

Amund Tveit

1. Building and training Keras Model for XOR problem – PYTHON

1.1 Training data for XOR
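A minimal sketch of the XOR training data (the actual code is in the GitHub repo linked above):

```python
import numpy as np

# The four XOR input/output pairs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)
```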

1.2 Keras XOR Neural Network Model
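A plausible minimal Keras model for XOR (layer sizes and activations are illustrative, not necessarily those used in the repo):

```python
from keras.models import Sequential
from keras.layers import Dense

# Small fully connected network: 2 inputs -> hidden layer -> 1 output
model = Sequential([
    Dense(8, input_dim=2, activation="tanh"),
    Dense(1, activation="sigmoid"),
])
```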

1.3 Train the Keras model with Stochastic Gradient Descent (SGD)
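Training with SGD could look roughly like this (hyperparameters are illustrative):

```python
from keras.optimizers import SGD

# Plain stochastic gradient descent on the four XOR examples
model.compile(optimizer=SGD(lr=0.1), loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2000, batch_size=4, verbose=0)
print(model.predict(X))  # values near 0, 1, 1, 0
```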

1.4 Use Apple’s coreml tool to convert the Keras model to coreml model
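The conversion step with Apple’s coremltools would look roughly like this (input/output names are illustrative); the resulting keras_model.mlmodel file is what gets added to the Xcode project below:

```python
import coremltools

# Convert the trained Keras model to Core ML and save it for Xcode
coreml_model = coremltools.converters.keras.convert(
    model, input_names=["input"], output_names=["output"])
coreml_model.save("keras_model.mlmodel")
```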

2. Using the converted Keras model on iPhone – SWIFT

2.1 Create new Xcode Swift project and add keras_model.mlmodel

[Image: kerasxcode]

2.2 Inspect keras_model.mlmodel by clicking on it in xcode

[Image: mlmodelinspect]

2.3 Update ViewController.swift with prediction function

2.4 Run app with Keras model on iPhone and look at debug output

[Image: run output]

0 xor 0 = 1 xor 1 = 0 (if rounding down), and 1 xor 0 = 0 xor 1 = 1 (if rounding up)

LGTM!



Continue Reading