Vision Processing
Techniques for
Lane Detection
Path Planning & Navigation


Contact Information

Roger Lopez
Manager
Autonomous Systems & Controls
(210) 522-3832
rlopez@swri.org

The Mobile Autonomous Robotics Technology Initiative (MARTI®), formerly the Southwest Safe Transport Initiative (SSTI), at Southwest Research Institute (SwRI) investigates, develops, and commercializes vehicle autonomy techniques to improve safety and facilitate traffic flow. To enhance driver safety and ultimately enable vehicle autonomy on existing roadways, SwRI is developing sensing techniques to distinguish the designated vehicle path from the roadway, including various vision-based path detection and lane detection algorithms. Because each technique has strengths and weaknesses, SwRI has fused multiple techniques to compensate for individual weaknesses and create a more robust system.

Color-Based Road Line Detection

Lines on the road are the most useful and accurate indicators of the intended vehicle path on marked roadways. First, they indicate the boundaries of the path in which the vehicle is to travel according to the laws governing the roadways. Second, the color designates the relative direction of travel in the lanes separated by the line: a yellow line separates opposing traffic flows, while a white line separates lanes traveling in the same direction. Lastly, the type of line also provides information about the lanes. A dashed white line indicates that the vehicle may move from its current lane to an adjacent lane traveling in the same direction. A solid white line marks the outside edge of the designated pathways on the road and should not be crossed except in emergencies. A solid yellow line marks the boundary between pathways with opposing flow directions and should not be crossed. A dashed yellow line marks a boundary between opposing flows of traffic that may be crossed briefly when there is no oncoming traffic in the opposing lane.
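The color and pattern rules above amount to a small lookup table. A minimal, hypothetical sketch (the names and strings are illustrative, not part of the MARTI system):

```python
# Hypothetical mapping from a detected line's (color, pattern) to the
# traffic rule it implies, per the description above.
LINE_MEANING = {
    ("white", "dashed"): "lane change permitted; same direction of travel",
    ("white", "solid"): "outer road edge; do not cross except in emergencies",
    ("yellow", "solid"): "boundary with opposing traffic; do not cross",
    ("yellow", "dashed"): "opposing traffic; brief crossing allowed when clear",
}

def classify_line(color, pattern):
    """Return the rule implied by a detected road line, or None if unknown."""
    return LINE_MEANING.get((color, pattern))
```

A downstream planner could use such a classification to decide which detected boundaries are hard constraints and which may be crossed.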

While so much valuable information can be found in the location, color, and type of road lines, many difficulties arise in recognizing these lines based solely on color. The major factor affecting this technique is lighting. In extremely dim conditions, boundaries and contrasts between the various colors and features in the image tend to decrease and become harder to discern. Similarly, extremely bright conditions tend to saturate the pixel values, again reducing their contrast and discernible color. Different lighting conditions within the same image, most notably in the form of shadows, also create errors in color-based recognition; shadows can make a single feature of uniform color appear to be two different colors. Additionally, dirty or faded road lines can appear darker than clean or newly painted lines.

Image: Different lighting conditions within the same image, most notably in the form of shadows, create errors in color-based recognition.


The image can be filtered for either white or yellow lines. Once the image is filtered for the proper color, a binary threshold can be applied to the image to create the maximum contrast between the desired color and the rest of the image. Once this is done, the neighboring pixels of the same color are grouped together. A least-squares curve fit can be applied to each of these groups of pixels to fit a function to them to create a line to represent those pixels. Filtering the curve fit to use only lines with small fit errors helps to eliminate other, non-line entities from the image. Once this line equation is derived, the values it represents can be converted from pixel space to real world relative coordinates using a matrix transformation based on values obtained during the camera calibration procedure. In this particular case, the MARTI vehicle is using monocular vision.
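The fit-and-filter step can be sketched as follows. This is a minimal NumPy illustration, not MARTI's actual code: it assumes the color filter has already produced a group of candidate line pixels, fits x = m·y + b by least squares, and rejects groups whose fit error is large.

```python
import numpy as np

def fit_line_to_pixels(ys, xs, max_residual=2.0):
    """Least-squares fit x = m*y + b to a group of candidate line pixels.

    Returns (m, b) if the mean squared fit error is below max_residual,
    else None (the group is probably not a painted road line).
    The threshold value is illustrative.
    """
    coeffs, residuals, *_ = np.polyfit(ys, xs, deg=1, full=True)
    mse = residuals[0] / len(xs) if len(residuals) else 0.0
    return tuple(coeffs) if mse < max_residual else None

# Pixels along a nearly straight lane line in image coordinates,
# with a little measurement noise.
ys = np.arange(100, 200)
xs = 0.5 * ys + 320 + np.random.default_rng(0).normal(0, 0.5, ys.size)
line = fit_line_to_pixels(ys, xs)
```

A scattered blob of pixels (e.g. a patch of glare) produces a large residual and is rejected, which is the noise-filtering effect described above.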

To obtain 3D values from the image, the ground plane is assumed to be flat, extending from the vehicle's wheelbase (an assumption commonly used to simplify vision processing calculations). This creates what is sometimes referred to as 2.5D vision, which can introduce errors when the vehicle approaches a surface with a gradient that differs from the one it is currently on. The fact that lines on roads are mostly parallel can be used to detect and correct this issue.
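Under the flat-ground assumption, the pixel-to-world conversion reduces to similar triangles. The following sketch uses illustrative calibration values (focal length and camera height are assumptions, not MARTI's actual calibration), with the camera level and pixel offsets measured from the principal point:

```python
def pixel_to_ground(u, v, f_px=800.0, cam_height_m=1.2):
    """Project an image pixel onto the assumed flat ground plane.

    (u, v) are pixel offsets from the principal point (u right, v down,
    so v > 0 means below the horizon). f_px and cam_height_m are
    illustrative calibration values for a level monocular camera.
    Returns (forward_m, lateral_m) relative to the camera.
    """
    if v <= 0:
        raise ValueError("pixel at or above the horizon never meets the ground")
    forward = f_px * cam_height_m / v   # similar triangles: Z = f * h / v
    lateral = u * forward / f_px        # X = u * Z / f
    return forward, lateral
```

The 2.5D failure mode is visible in the formula: if the road ahead actually slopes upward, the true ground point is closer than Z = f·h / v predicts.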

To further increase the confidence of the measurements and reduce the effects of noise, the line measurements can be accumulated. These lines are averaged over time, and the middle of the road can be found from them and the known width of the lane. The SwRI MARTI vehicle has used this technique to drive at speeds up to 30 mph based solely on vision.
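One simple way to accumulate detections over time is an exponentially weighted running average of the fitted line parameters; this is a hedged sketch of the idea, not necessarily the filter MARTI uses:

```python
class LineTracker:
    """Exponentially weighted running average of (slope, intercept)
    line measurements, smoothing out per-frame detection noise."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # weight given to each new measurement
        self.state = None       # (slope, intercept) or None before first update

    def update(self, m, b):
        if self.state is None:
            self.state = (m, b)
        else:
            pm, pb = self.state
            a = self.alpha
            self.state = (a * m + (1 - a) * pm, a * b + (1 - a) * pb)
        return self.state
```

Given smoothed left and right boundary lines, the lane center can be taken as their midpoint, cross-checked against the known lane width.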

Hue Histogram Back Projection for Road Detection

In addition to road line location information, it is also important to know what part of the visible area is drivable path. This information is valuable in circumstances where there are no lines to dictate road edges, but rather just a transition from asphalt to dirt and grass. It can also be useful to determine possible alternative paths if one is needed to avoid an obstacle. One way to achieve this type of detection is to take a hue sample of the area of the image directly in front of the vehicle. If the vehicle can be assumed to be on a drivable surface at the start, this will provide a sample of the hue value distribution for anything else that might be drivable surface. By creating a histogram of the hue values for this sample area and back projecting those values onto the entire image, objects matching the color scheme of the sample area are brightened and objects that do not match disappear.
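The back-projection step can be illustrated with plain NumPy (a production system would more likely use OpenCV's calcHist/calcBackProject). The sample box, bin count, and toy hue values below are assumptions for demonstration:

```python
import numpy as np

def hue_backproject(hue_img, sample_box, bins=32):
    """Back-project the hue histogram of a sample region onto the image.

    hue_img: 2-D array of hue values in [0, 180) (OpenCV's hue convention).
    sample_box: (row0, row1, col0, col1) region assumed to be drivable road,
    e.g. the area directly in front of the vehicle.
    Returns a float image where high values mean 'hue similar to the sample'.
    """
    r0, r1, c0, c1 = sample_box
    hist, _ = np.histogram(hue_img[r0:r1, c0:c1], bins=bins, range=(0, 180))
    hist = hist / hist.max()                          # normalize to [0, 1]
    idx = np.clip((hue_img / (180 / bins)).astype(int), 0, bins - 1)
    return hist[idx]                                  # per-pixel match score

# Toy frame: road-like hue (~100) on the left, grass-like hue (~40) on the right.
img = np.full((40, 40), 100, dtype=np.uint8)
img[:, 20:] = 40
score = hue_backproject(img, (30, 40, 0, 10))  # sample directly "in front"
```

Pixels whose hue falls in well-populated bins of the sample histogram are brightened toward 1.0; hues absent from the sample drop to 0, making non-road areas disappear.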

This method can struggle when, for example, dead grass is similar enough in color to certain types of concrete that it is not completely filtered out of the image. To filter out such erroneous data, the image can additionally be thresholded on the saturation channel of HSV space, since asphalt is generally more reflective than grass and dirt. The appropriate saturation threshold varies with lighting conditions, however, which makes it difficult to set a filter value that works regardless of the environmental state.

This technique has also been used to drive the vehicle on a single lane road (the SwRI test track) based solely on vision at speeds up to 30 mph.

Image: In addition to road line location information, it is also important to know what part of the visible area is drivable path.


This technique is less susceptible to shadows than many of the other techniques because it primarily filters on hue values, which are largely independent of lighting. Unfortunately, in cases where additional saturation thresholding is needed because there is not enough contrast in the hues between the asphalt and objects like dead grass (more prevalent in the winter), the solution becomes lighting-condition dependent and less immune to shadows.

 

Image: Sometimes additional saturation thresholding is needed because there is not enough contrast in the hues between the asphalt and objects like dead grass.


In addition, the image can first be filtered for a large contour representing drivable path similar to the surface the vehicle is currently traversing; that region can then be scanned for inhomogeneous entities within the drivable path. Used this way for yellow line detection, the method proves more robust to shadows and to dirty or faded lines than the color-based solution.
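A simplified, NumPy-only stand-in for this two-stage idea is sketched below (production code might extract the contour with cv2.findContours instead). It flood-fills the drivable region from a seed pixel assumed to be on the road, then flags non-matching pixels inside that region's extent as candidate entities such as painted lines:

```python
import numpy as np

def drivable_region_then_anomalies(match, seed, region_thresh=0.5):
    """Flood-fill the drivable region from a seed pixel in a back-projection
    score image, then flag non-matching pixels inside its bounding box.
    Threshold values are illustrative; seed is assumed to lie on the road."""
    region = np.zeros(match.shape, dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < match.shape[0] and 0 <= c < match.shape[1]):
            continue
        if region[r, c] or match[r, c] < region_thresh:
            continue
        region[r, c] = True
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    rows, cols = np.nonzero(region)
    box = (slice(rows.min(), rows.max() + 1), slice(cols.min(), cols.max() + 1))
    # Pixels inside the drivable contour's extent that did not match the
    # road hue are candidate inhomogeneous entities, e.g. a painted line.
    anomalies = ~region[box]
    return region, anomalies
```

Because the line is found as a "hole" in the road contour rather than by its own color, shadows and fading affect it less than the direct color filter.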

Image: By first filtering the image for a large contour that represents the drivable path similar to that which the vehicle is currently traversing, the region can then further be scanned for inhomogeneous entities in the drivable path.


Edge Detection to Determine Lanes

Edge detection algorithms extract edges between objects of contrasting colors in an image. This method can pull out the edges of road lines, transitions between asphalt and grass, and various other edges that consequently point in the direction of the path designated by the road. Drawbacks include confusion from miscellaneous spots and lines on the road, such as tar lines used to fill cracks. This method also cannot differentiate between line colors, and it is particularly susceptible to false readings created by shadows. That being the case, edge detection is commonly accompanied by model-based filtering.
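As a concrete illustration, here is a minimal Sobel gradient-magnitude edge detector in plain NumPy (a real pipeline would more likely use an optimized operator such as cv2.Canny; the threshold and toy image are assumptions):

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Binary edge map from Sobel gradient magnitude.

    A minimal sketch of the edge-detection step: correlate the image with
    the 3x3 Sobel kernels and threshold the gradient magnitude.
    """
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):                     # accumulate the 3x3 correlation
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy) > thresh

# Toy frame: dark asphalt (50) with a bright painted line (200) down column 20.
frame = np.full((20, 40), 50, dtype=np.uint8)
frame[:, 20] = 200
edges = sobel_edges(frame)
```

Note that the detector fires on both sides of the bright stripe but not on uniform regions; it reports where contrast changes, not what the feature is, which is why the color of a line cannot be recovered from edges alone.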

Image: Edge detection algorithms pull out edges between objects of contrasting colors in an image.


Technique Fusion for Increased Robustness

The various methods discussed above have inherent advantages and disadvantages that make them good for certain situations but unreliable across all ambient visual circumstances. A combination of some or all of these methods can be used to exploit the strengths of each method and compensate for the others' weaknesses. For example, the drivable path detection method can be used as a pre-filter for the color-based line detection or edge detection methods to help reduce noise in the frame.

One implementation of methodological fusion is the MARTI implementation of artificial potential fields representing various sensor inputs to create the desired vehicle path. The potential field algorithm takes inputs from various sensors and uses the information to create a map where brighter colors represent desirable paths. The inputs can be represented as either lines or points of varying widths and can be a solid color or change intensity across their diameters. For instance, the drivable path detection input creates a uniform intensity over its whole area, whereas a detected road line is represented as darkest at the line location, because the vehicle should not drive centered on the line, increasing to maximum intensity at a distance of half the lane width. Information from the light detection and ranging (LIDAR) sensor is represented as solid black points with a radius of half the vehicle width to prevent the path from bringing the vehicle too close to a potential obstacle.
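The field construction described above can be sketched on a small grid. This is a toy illustration of the idea: the grid resolution, parameter values, and per-point ramp are assumptions, not MARTI's actual scaling.

```python
import numpy as np

def build_potential_field(shape, drivable_mask, line_pts, lidar_pts,
                          lane_half_width=10, obstacle_radius=5):
    """Toy artificial potential field: brighter cells = more desirable path."""
    field = np.zeros(shape)
    field[drivable_mask] = 0.5              # uniform value over drivable area
    rr, cc = np.indices(shape)
    for r, c in line_pts:                   # darkest at the line itself,
        d = np.hypot(rr - r, cc - c)        # ramping up to full intensity
        field *= np.clip(d / lane_half_width, 0.0, 1.0)  # at half a lane width
    for r, c in lidar_pts:                  # obstacles: solid black disks of
        d = np.hypot(rr - r, cc - c)        # half the vehicle width
        field[d < obstacle_radius] = 0.0
    return field
```

A path could then be extracted by, for example, selecting the brightest cell in each row of the grid ahead of the vehicle.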

One issue this fusion can compensate for is an offset global positioning system (GPS) signal. In testing, an artificial offset was added to a correct GPS signal, and a path was first computed for the vehicle based on the GPS input alone. Then, combining color-based detection of white lane lines, drivable path detection, GPS waypoints, and LIDAR obstacle data, a second path was calculated. This fusion of the other sensor data with the erroneous GPS data produced a corrected path for the system despite a GPS offset of a few meters.

Image: Fusion of the other sensor data in addition to the erroneous GPS data is able to create a corrected path for the system despite the GPS offset of a few meters.


Related Terminology

lane detection  •  vision processing  •  hue histogram  •  road detection  •  edge detection  •  model-based filtering  •  sobel edge detection  •  canny edge detection  •  unmanned ground vehicle  •  autonomous vehicle

Benefiting government, industry and the public through innovative science and technology
Southwest Research Institute® (SwRI®), headquartered in San Antonio, Texas, is a multidisciplinary, independent, nonprofit, applied engineering and physical sciences research and development organization with 10 technical divisions.

04/15/14