The work presented in this paper addresses the problem of autonomous driving, especially along ill-defined roads, by using convolutional neural networks to predict the position and width of roads from camera input images. The networks are trained with supervised learning (via back-propagation) on a dataset of annotated road images. We train two different network architectures on images encoded in six colour models and test them “off-line” on a road detection task using image sequences not seen during training. To benchmark our approach, we compare the performance of our networks with that of a different image-processing method that relies on differences in colour distribution between road and non-road areas of the camera input. Finally, we use a trained convolutional network to successfully navigate a Pioneer 3-AT robot along 5 distinct test paths. The results show that the network can safely guide the robot in this navigation task and that it is robust enough to cope with circumstances very different from those encountered during training.
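To illustrate the kind of model the abstract describes, the following is a minimal sketch, assuming a tiny convolutional network that maps a single-channel camera image to two scalars interpreted as the road's position and width. This is not the paper's actual architecture: the layer sizes, kernel count, and random weights are all illustrative assumptions, and a real system would learn the parameters by back-propagation on annotated road images.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    k, kh, kw = kernels.shape
    h, w = image.shape
    out = np.empty((k, h - kh + 1, w - kw + 1))
    for i in range(k):
        for r in range(h - kh + 1):
            for c in range(w - kw + 1):
                out[i, r, c] = np.sum(image[r:r + kh, c:c + kw] * kernels[i])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(feature_maps, size=2):
    """Non-overlapping max-pooling over each feature map."""
    k, h, w = feature_maps.shape
    h2, w2 = h // size, w // size
    cropped = feature_maps[:, :h2 * size, :w2 * size]
    return cropped.reshape(k, h2, size, w2, size).max(axis=(2, 4))

def predict_road(image, kernels, weights, bias):
    """Forward pass: conv -> ReLU -> max-pool -> linear read-out."""
    features = max_pool(relu(conv2d(image, kernels)))
    return weights @ features.ravel() + bias  # [position, width]

# Toy 16x16 grayscale input and randomly initialised (untrained) parameters.
image = rng.random((16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
feat_dim = 4 * 7 * 7  # 4 maps, each (16-3+1)//2 = 7 per side after pooling
weights = rng.standard_normal((2, feat_dim)) * 0.01
bias = np.zeros(2)

position, width = predict_road(image, kernels, weights, bias)
print(position, width)
```

In a trained system of this shape, the two outputs would be regressed against the annotated road position and width, and the same forward pass could be repeated per colour channel when images are encoded in different colour models.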
