Written by Div Gill, former Westie

Lane detection is one of the techniques currently being used to offer driver-assist features to motorists and, maybe in the future, fully autonomous vehicles. Today, we will demonstrate what is required to detect lanes on a highway: creating a binary image in which everything we think is a lane is marked with one value, using techniques with low overhead that can be implemented in real time on a mobile system.

 

In Figure 1 we have an image of a highway on a nice sunny day. It’s in colour, but most of what is required for lane detection can be done without colour: we only need to work with grayscale values rather than separate RGB values, which makes the process simpler.

 

Figure 1. Image source: https://cdn-images-1.medium.com/max/2000/1*qtuIbycQUWjP0hUtY9Zj_g.jpeg

 

To get the result shown in Figure 2, we convert the image to grayscale using a simple equation that combines the RGB values of each pixel into a single grayscale value.
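As a rough sketch of this step (the file name and the use of OpenCV/NumPy here are my own assumptions, not something fixed by the article), the standard luminance weighting looks like this:

```python
import cv2
import numpy as np

# Load the example highway image (the file name here is a placeholder).
img = cv2.imread("highway.jpg")              # OpenCV loads colour images as BGR
b, g, r = cv2.split(img.astype(np.float32))

# Combine the three colour channels into one grayscale value per pixel
# using the common luminance weights 0.299*R + 0.587*G + 0.114*B.
gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

# cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) applies the same weighting in one call.
```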

 

 

Figure 2

 

We will be using image filters, which let us globally remove certain kinds of information from an image, to help us extract the lanes from the highway. The first of these filters is a Gaussian blur filter. This filter blurs the image, and the degree of blurring is controlled by a single parameter. Another way of thinking about a blurring filter is that it doesn’t allow “sharp” features in the image to pass through; it filters them out. The higher the degree of blurring, the less the filter allows to pass.
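As a sketch of how such a filter might be applied (assuming OpenCV, with the blur parameter passed as the Gaussian’s sigma):

```python
import cv2

gray = cv2.cvtColor(cv2.imread("highway.jpg"), cv2.COLOR_BGR2GRAY)

# 'sigma' is the single parameter that controls how strongly the image is
# blurred: the larger it is, the fewer "sharp" features survive the filter.
def blur(image, sigma):
    # A kernel size of (0, 0) tells OpenCV to size the kernel from sigma.
    return cv2.GaussianBlur(image, (0, 0), sigma)
```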

 

Figure 3 is our grayscale image blurred by a factor of 2. You can see it’s a bit duller than Figure 2.

 

Figure 3

 

In Figure 4 we have the same grayscale image, this time blurred by a factor of 4, and the change is now significant. Compare Figures 3 and 4 to see which features in these images were affected by the blurring and which were not; it may be hard to see, but the lanes are more prominent in Figure 4.

 

Figure 4

 

What would happen if you subtracted a more blurred image from a less blurred image? The less blurred image allows slightly more ‘sharp’ features to pass through than the blurrier image does. Subtracting the two therefore leaves only the ‘sharp’ features that the less blurred image let through but the blurrier image blocked.

 

Before we do this subtraction, however, we start from the lightly blurred image in Figure 3 rather than the original, because noise shows up as the very sharpest features in an image, and noise is not what we are after. Blurring the image slightly first smooths that noise away.

 

Figure 5 is the result of subtracting the image in Figure 4 from the image in Figure 3.
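A minimal sketch of this subtraction, assuming the blur factors 2 and 4 from Figures 3 and 4 are the Gaussian sigmas:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("highway.jpg"), cv2.COLOR_BGR2GRAY)

# The two blurred versions used in Figures 3 and 4.
lightly_blurred = cv2.GaussianBlur(gray, (0, 0), 2)
heavily_blurred = cv2.GaussianBlur(gray, (0, 0), 4)

# Subtract the blurrier image from the less blurred one. Only features sharp
# enough to survive the light blur, but not the heavy blur, remain;
# cv2.subtract clips any negative results to 0.
difference = cv2.subtract(lightly_blurred, heavily_blurred)
```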

 

Figure 5

 

Now, let’s threshold: all values above 80 are set to 255 (the maximum value allowed) and all other values are set to 0. The result is a binary image. Notice in Figure 6 how the lane markings on the road survive but nothing else on the road does. This is because the transitions from the white lane markings to the dark road are very sharp, so they produce strong responses that are preserved. Other, less sharp features produce responses too, but their intensities are not as high, and our threshold doesn’t let them through.
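As a hedged sketch of the thresholding step (again with OpenCV, continuing from the subtraction above):

```python
import cv2

gray = cv2.cvtColor(cv2.imread("highway.jpg"), cv2.COLOR_BGR2GRAY)
difference = cv2.subtract(cv2.GaussianBlur(gray, (0, 0), 2),
                          cv2.GaussianBlur(gray, (0, 0), 4))

# Every pixel above 80 becomes 255 (white) and everything else becomes 0,
# producing the binary image. cv2.threshold returns the threshold value used
# and the thresholded image; we only need the image.
_, binary = cv2.threshold(difference, 80, 255, cv2.THRESH_BINARY)
```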

 

Figure 6

 

We could finish there, but there is one additional step we could take: if we apply a line-detection kernel, one for vertical lines and one for horizontal lines, we can extract the boundaries of the surviving lane markings.
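The exact kernels are not specified here, so treat the following as one possible sketch: a simple hand-made pair of 3x3 line-detection kernels applied with cv2.filter2D (the kernel values are my own illustration, not the author’s).

```python
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("highway.jpg"), cv2.COLOR_BGR2GRAY)
difference = cv2.subtract(cv2.GaussianBlur(gray, (0, 0), 2),
                          cv2.GaussianBlur(gray, (0, 0), 4))
_, binary = cv2.threshold(difference, 80, 255, cv2.THRESH_BINARY)

# A simple pair of 3x3 line-detection kernels: each responds strongly where a
# bright line (vertical or horizontal) sits on a darker background.
vertical_kernel = np.array([[-1, 2, -1],
                            [-1, 2, -1],
                            [-1, 2, -1]], dtype=np.float32)
horizontal_kernel = vertical_kernel.T

# Convolve the binary image with each kernel and combine the responses.
v_response = cv2.filter2D(binary, cv2.CV_32F, vertical_kernel)
h_response = cv2.filter2D(binary, cv2.CV_32F, horizontal_kernel)
boundaries = np.uint8(np.clip(np.abs(v_response) + np.abs(h_response), 0, 255))
```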

 

Figure 7

 

When you compare the final image with the initial image we started with (Figures 8a and 8b), you can see that we did quite a lot of work and have successfully extracted the lanes. A few more steps would be needed to make this output truly useful, but you can already see what simple image filters can do.

 

Figure 8a

Figure 8b

 

Readers might point out that the threshold value we chose was tuned to our specific example image and would not work as well on other images taken in different lighting. This is true; methods like histogram normalization can normalize the images coming into the lane detection algorithm and thus solve this problem (to a degree).
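For instance, histogram equalization, one form of the normalization mentioned above (shown here as a sketch with OpenCV), could be applied to the grayscale image before the rest of the pipeline runs:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("highway.jpg"), cv2.COLOR_BGR2GRAY)

# Histogram equalization spreads the grayscale values over the full 0-255
# range, so a fixed threshold (like the 80 used above) behaves more
# consistently across images taken in different lighting conditions.
normalized = cv2.equalizeHist(gray)
```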

My final note is that filters alone are not actually the best way to solve this problem. State-of-the-art lane detection algorithms use much more sophisticated techniques that are invariant to brightness changes as well as outliers, and they have proven far more robust than approaches based on simple filters like ours. Nonetheless, even these algorithms use image filters in their processing steps, so understanding how these filters work can be useful for understanding the lane detection process as a whole.