Deep Learning for Ultrasound Analysis

Ultrasound (also called sonography) uses sound waves with frequencies higher than humans can hear. It is frequently used in medical settings, e.g. to check that a pregnancy is going well with fetal ultrasound. For more about ultrasound data formats, check out the Ultrasound Research Interface. This blog post collects recent publications about applying Deep Learning to analyze ultrasound data.

Best regards,
Amund Tveit

Year  Title  Authors
2016   Early-stage atherosclerosis detection using deep learning over carotid ultrasound images  RM Menchón
2016   Automatic Detection of Standard Sagittal Plane in the First Trimester of Pregnancy Using 3-D Ultrasound Data  S Nie, J Yu, P Chen, Y Wang, JQ Zhang
2016   Detection of prostate cancer using temporal sequences of ultrasound data: a large clinical feasibility study  S Azizi, F Imani, S Ghavidel, A Tahmasebi, JT Kwak
2016   Hough-CNN: Deep Learning for Segmentation of Deep Brain Regions in MRI and Ultrasound  F Milletari, SA Ahmadi, C Kroll, A Plate, V Rozanski
2016   Hybrid approach for automatic segmentation of fetal abdomen from ultrasound images using deep learning  H Ravishankar, SM Prabhu, V Vaidya, N Singhal
2016   Iterative Multi-domain Regularized Deep Learning for Anatomical Structure Detection and Segmentation from Ultrasound Images  H Chen, Y Zheng, JH Park, PA Heng, SK Zhou
2016   4D Cardiac Ultrasound Standard Plane Location by Spatial-Temporal Correlation  Y Gu, GZ Yang, J Yang, K Sun
2016   Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods  J Shan, SK Alam, B Garra, Y Zhang, T Ahmed
2016   Stacked Deep Polynomial Network Based Representation Learning for Tumor Classification with Small Ultrasound Image Dataset  J Shi, S Zhou, X Liu, Q Zhang, M Lu, T Wang
2016   Coupling Convolutional Neural Networks and Hough Voting for Robust Segmentation of Ultrasound Volumes  C Kroll, F Milletari, N Navab, SA Ahmadi
2016   Classifying Cancer Grades Using Temporal Ultrasound for Transrectal Prostate Biopsy  S Azizi, F Imani, JT Kwak, A Tahmasebi, S Xu, P Yan
2015   Tumor Classification by Deep Polynomial Network and Multiple Kernel Learning on Small Ultrasound Image Dataset  X Liu, J Shi, Q Zhang
2015   Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector  B Lei, EL Tan, S Chen, L Zhuo, S Li, D Ni, T Wang
2015   Estimation of the Arterial Diameter in Ultrasound Images of the Common Carotid Artery  RM Menchón
2015   Cell recognition based on topological sparse coding for microscopy imaging of focused ultrasound treatment  Z Wang, J Zhu, Y Xue, C Song, N Bi
2014   Mapping between ultrasound and vowel speech using DNN framework  X Zheng, J Wei, W Lu, Q Fang, J Dang
2014   High-definition 3D Image Processing Technology for Ultrasound Diagnostic Scanners  M Ogino, T Shibahara, Y Noguchi, T Tsujita
2014   Fully automatic segmentation of ultrasound common carotid artery images based on machine learning  RM Menchón

Why Deep Learning matters


Deep Learning, or more specifically a subgroup of Deep Learning called (deep) convolutional neural networks, has seen impressive improvements since Alex Krizhevsky’s 2012 publication about (what is now called) AlexNet. AlexNet won the ImageNet image recognition competition with the (then close to jaw-dropping) top-5 error rate of only 17.0% (top-5 error means that the classifier presents 5 answers, and at least one of them must be the correct one).
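For concreteness, top-5 error can be computed directly from a matrix of class scores. A minimal sketch in NumPy (the toy scores and labels are made up for illustration, not taken from ImageNet):

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of samples whose true label is NOT among the
    five highest-scoring classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]      # 5 best class indices per row
    hits = np.any(top5 == labels[:, None], axis=1)  # true label among the 5 guesses?
    return 1.0 - hits.mean()

# Toy check: 2 samples over 10 classes, scores rising with class index,
# so classes 5..9 are always the top-5 guesses.
scores = np.tile(np.arange(10.0), (2, 1))
labels = np.array([9, 0])          # first label is in the top 5, second is not
print(top5_error(scores, labels))  # → 0.5
```

With real models the scores would be the softmax outputs over the 1000 ImageNet classes, but the bookkeeping is the same.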

But image recognition accuracy has improved many times over since then, i.e. the top-5 error rate dropped from 17.0% in 2012 to 3.08% in 2016 (see the publications in the table below for more about what the error rates mean and how they can be compared).

To put this into context: human beings perform at a 5.10% error rate on this image recognition task, see Andrej Karpathy’s publication below (to be precise: at least one smart, trained and highly educated human being performed at that error rate on the ImageNet task).

So what I am saying is that computers with Deep Learning can actually see and understand what is in a picture better than humans, at least in some (and probably most) cases!

Year Error% Reference Author(s) Organization
2012 17.00 ImageNet Classification with Deep Convolutional Neural Networks (AlexNet) Alex Krizhevsky et al. University of Toronto
2014 6.66 Going Deeper with Convolutions Christian Szegedy et al. Google
2014 (Sep) 5.10 What I learned from competing against a ConvNet on ImageNet Andrej Karpathy Stanford University
2015 (Feb) 4.94 Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification Kaiming He et al. Microsoft Research
2015 (Dec) 3.57 Deep Residual Learning for Image Recognition Kaiming He et al. Microsoft Research
2016 (Feb) 3.08 Inception v4, Inception-ResNet and the Impact of Residual Connections on Learning Christian Szegedy et al. Google

Implications of better-than-human-level image recognition with Deep Learning?

The deep convolutional neural network research field has moved so fast that applications still lag behind in using it. Most robots/drones and software in servers, laptops, mobiles, wearables and medical equipment do not take advantage of these research results yet, but there is a huge untapped potential (I will get back to the potential in later postings).
But there are some highly important applications already, e.g. the Samsung Medison RS80A ultrasound machine (see image at the start of the posting), which uses convolutional neural networks for breast cancer diagnosis.
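The building block behind all of these systems is the convolution + nonlinearity step. As a rough sketch of the mechanics (the toy "ultrasound patch" and the Sobel kernel below are illustrative assumptions, not anything a commercial product actually uses; real networks learn their kernels from data), here is one convolutional layer in plain NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with the image window at (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 8x8 "ultrasound patch": dark on the left, bright on the right
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0

# A hand-crafted vertical-edge kernel (Sobel); a CNN would learn such kernels
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Convolution followed by a ReLU nonlinearity
response = np.maximum(conv2d(patch, sobel_x), 0.0)
print(response.shape)  # → (6, 6)
```

The response map is nonzero exactly where the kernel slides over the dark-to-bright edge; stacking many such layers (with learned kernels) is what lets a CNN build up from edges to textures to whole anatomical structures.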


The next blog post is probably going to be about some (simple) analogies to explain the mechanics of convolutional neural networks. Stay tuned and sign up for the DeepLearning.Education mailing list below.

Best regards,
Amund Tveit

