Lane Finding (on Roads) for Self Driving Cars with OpenCV

This blog post presents a (basic) approach to using OpenCV for lane finding for self-driving cars (i.e. detecting the yellow and white stripes along the road) – I did this as one of the projects in term 1 of Udacity’s self-driving car nanodegree (highly recommended online education!).

Disclaimer: the approach presented in this blog post is way too simple to use for an actual self-driving car, but it was a good way (for me) to learn more about (non-deep-learning-based) computer vision and the lane finding problem.

See github.com/atveit/LaneFindingForSelfDrivingCars for more details about the approach (Python code).

Best regards,

Amund Tveit

1. First I selected the region of interest (with hand-made vertices)
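A minimal sketch of what such a region-of-interest mask could look like in OpenCV (the vertices below are hypothetical; the actual hand-made vertices are in the GitHub repo):

```python
import cv2
import numpy as np

def region_of_interest(img, vertices):
    # Black mask, then fill the polygon given by the vertices with white
    mask = np.zeros_like(img)
    fill = (255,) * img.shape[2] if len(img.shape) > 2 else 255
    cv2.fillPoly(mask, vertices, fill)
    # Keep only the pixels inside the polygon
    return cv2.bitwise_and(img, mask)

# Hypothetical hand-made vertices for a 960x540 road image
vertices = np.array([[(100, 540), (450, 320), (510, 320), (900, 540)]], dtype=np.int32)
roi_image = region_of_interest(image, vertices)  # 'image' is the input RGB frame
```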

2. Converted the image to grayscale

3. Extracted likely white lane information from the grayscale image.

Used 220 as limit (255 is 100% white, but 220 is close enough)
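Steps 2 and 3 could look roughly like this, continuing from the sketch above (`white_gray` is my name, not necessarily the repo’s):

```python
import cv2

# Step 2: convert the region-of-interest image to grayscale
gray = cv2.cvtColor(roi_image, cv2.COLOR_RGB2GRAY)

# Step 3: keep only near-white pixels (value >= 220 out of 255)
white_gray = cv2.inRange(gray, 220, 255)
```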

4. Extracted likely yellow lane information from the (colorized) region of interest image.

RGB for yellow is [255,255,0], but I found [220,220,30] to be close enough

5. Converted the yellow lane information image to grayscale
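A sketch of steps 4 and 5 using cv2.inRange on the color image, with [220,220,30] treated as the tolerance around pure yellow (the exact bounds are an assumption on my part):

```python
import cv2
import numpy as np

# Accept pixels from [220, 220, 0] up to [255, 255, 30] as "likely yellow"
lower_yellow = np.array([220, 220, 0], dtype=np.uint8)
upper_yellow = np.array([255, 255, 30], dtype=np.uint8)
yellow_mask = cv2.inRange(roi_image, lower_yellow, upper_yellow)

# Step 5: keep the yellow pixels and convert the result to grayscale
yellow_only = cv2.bitwise_and(roi_image, roi_image, mask=yellow_mask)
yellow_gray = cv2.cvtColor(yellow_only, cv2.COLOR_RGB2GRAY)
```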

6. Combined the likely yellow and white lane grayscale images into a new grayscale image (using max value)
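With NumPy the max-combination is a one-liner (variable names carried over from the sketches above):

```python
import numpy as np

# Pixel-wise maximum keeps the strongest response from either image
combined_gray = np.maximum(white_gray, yellow_gray)
```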

7. Did a Gaussian blur (with kernel size 3) followed by Canny edge detection

Gaussian blur smooths the image using convolution; this reduces false signals to the (Canny) edge detector.
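In OpenCV this could look as follows (the Canny thresholds of 50/150 are hypothetical; the actual values are in the repo):

```python
import cv2

# Gaussian blur with a 3x3 kernel to suppress noise before edge detection
blurred = cv2.GaussianBlur(combined_gray, (3, 3), 0)

# Canny edge detection; 50/150 are assumed low/high thresholds
edges = cv2.Canny(blurred, 50, 150)
```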

8. Did a Hough (transform) line image creation. I also modified the draw_lines function (see the GitHub link above) by calculating the average slope and intercept, i.e. fitting y = ax + b for each of the Hough lines to find a and b, and then averaging over them.

For more information about the Hough transform, check out this Hough transform tutorial.

(side note: I believe it could perhaps have been smarter to use Hough line center points instead of the Hough lines themselves, since the directions of the lines sometimes seem a bit unstable, and then average the slopes between center points instead)
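A simplified sketch of the Hough step and the slope/intercept averaging (the Hough parameters are hypothetical, and the real draw_lines in the repo differs in details):

```python
import cv2
import numpy as np

# Hypothetical Hough parameters; the actual values are in the repo
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                        minLineLength=20, maxLineGap=100)

slopes, intercepts = [], []
for line in lines:
    for x1, y1, x2, y2 in line:
        if x1 == x2:
            continue  # skip vertical segments to avoid division by zero
        a = (y2 - y1) / float(x2 - x1)  # slope of y = a*x + b
        slopes.append(a)
        intercepts.append(y1 - a * x1)  # intercept b

# Average slope and intercept over all Hough line segments
avg_a, avg_b = np.mean(slopes), np.mean(intercepts)
```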

9. Used a weighted image to overlay the Hough image with the detected lanes on top of the original image
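The overlay itself is a weighted sum of the two images, e.g. via cv2.addWeighted (the weights here are typical choices, not necessarily the repo’s):

```python
import cv2
import numpy as np

# 'hough_image' is a black image with the averaged lane lines drawn on it
hough_image = np.zeros_like(image)
# e.g. cv2.line(hough_image, (x1, y1), (x2, y2), (255, 0, 0), 10) per lane

# Blend: original frame at weight 0.8, lane lines at weight 1.0
result = cv2.addWeighted(image, 0.8, hough_image, 1.0, 0.0)
```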

Keras Deep Learning with Apple’s CoreMLTools on iOS 11 – Part 1

This is a basic example of training and using a simple Keras neural network model (XOR) on an iPhone, using Apple’s coremltools on iOS 11. Note that the main point is showing the integration, from a Keras model to having it running in an iOS app, and not the particular choice of model; in principle a similar approach could be used for any kind of deep learning model, e.g. the generator part of a Generative Adversarial Network, a Recurrent Neural Network (or LSTM), or a Convolutional Neural Network.

For easy portability I chose to run the Keras part inside Docker (i.e. one could use nvidia-docker for a larger model that needs a GPU to train, e.g. in the cloud, on a desktop, or on a powerful laptop). The Keras backend used here was TensorFlow, but I believe it should also work with other backends (e.g. CNTK, Theano or MXNet). The code for this blog post is available at github.com/atveit/keras2ios

Best regards,

Amund Tveit

1. Building and training a Keras model for the XOR problem – PYTHON

1.1 Training data for XOR
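The XOR training data is just the four input combinations and their labels; a sketch:

```python
import numpy as np

# All four XOR input combinations with their expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)
```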

1.2 Keras XOR Neural Network Model
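A minimal Keras model that can learn XOR could look like this (layer size and activations are my assumptions, not necessarily those in the repo):

```python
from keras.models import Sequential
from keras.layers import Dense

# Tiny fully connected net: 2 inputs -> 8 hidden units (tanh) -> 1 sigmoid output
model = Sequential()
model.add(Dense(8, input_dim=2, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
```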

1.3 Train the Keras model with Stochastic Gradient Descent (SGD)
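Compiling with an SGD optimizer and fitting on the four examples (learning rate and epoch count are illustrative):

```python
from keras.optimizers import SGD

model.compile(loss='binary_crossentropy', optimizer=SGD(lr=0.1))
model.fit(X, y, epochs=5000, batch_size=4, verbose=0)

print(model.predict(X))  # values should be close to [0, 1, 1, 0]
```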

1.4 Use Apple’s coremltools to convert the Keras model to a Core ML model
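The conversion itself is only a few lines with coremltools (the input/output names here are my choice):

```python
import coremltools

# Convert the trained Keras model and save it for use in Xcode
coreml_model = coremltools.converters.keras.convert(
    model, input_names=['input'], output_names=['output'])
coreml_model.save('keras_model.mlmodel')
```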

2. Using the converted Keras model on iPhone – SWIFT

2.1 Create new Xcode Swift project and add keras_model.mlmodel

[Screenshot: Xcode project with keras_model.mlmodel added]

2.2 Inspect keras_model.mlmodel by clicking on it in Xcode

[Screenshot: inspecting keras_model.mlmodel in Xcode]

2.3 Update ViewController.swift with prediction function

2.4 Run app with Keras model on iPhone and look at debug output

[Screenshot: debug output from running the app]

The predictions give 0 xor 0 = 1 xor 1 = 0 (when rounding down), and 1 xor 0 = 0 xor 1 = 1 (when rounding up).

LGTM!
