Advanced science.  Applied technology.


Machine Learning Assisted Computer Vision for Visibility Sensing, 10-R8843

Principal Investigators
Richard Downs
Kyle Widmann
Inclusive Dates 
04/01/18 to 04/01/19


Impaired visibility, frequently caused by adverse weather conditions, is a common cause of and severity multiplier for roadway collisions and is a major concern to Departments of Transportation (DOTs). Dedicated sensors for detecting roadway visibility, typically based on infrared technology, are costly to deploy and maintain. Many state DOTs have limited budgets, restricting their field hardware deployments to digital highway signs and cameras. If these commodity roadway cameras, which are already deployed, can serve dual purposes as both a means for manual roadway observation by operators and a cost-effective, widespread visibility sensing network, they would provide significant additional value to these DOTs and strengthen SwRI’s value proposition to prospective and existing customers.


Our research evaluated applying a Convolutional Neural Network (CNN) to process traffic video feeds, detect when visibility is inhibited, and identify the likely inhibiting cause, including fog, rain, and snow. SwRI manages and develops Advanced Traffic Management Systems (ATMS) for over 20% of all states and territories across the United States, and these relationships were leveraged to collect a broad range of sample video feeds across climates where natural visibility-inhibiting conditions are likely to occur. Florida, New Mexico, Texas, and Vermont contributed traffic camera and weather sensor data to this effort.
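The report does not describe the inference pipeline itself, but a natural deployment pattern is to classify several sampled frames per camera and aggregate the noisy per-frame predictions into a single condition report. The sketch below is purely illustrative; the function name, the condition labels, and the 0.6 agreement threshold are assumptions, not details from the study.

```python
from collections import Counter

def classify_feed(frame_labels, min_agreement=0.6):
    """Aggregate per-frame CNN predictions from one camera into one condition.

    frame_labels: sequence of condition strings (e.g. "clear", "fog", "rain",
    "snow"), one per sampled frame. Returns the majority condition, or
    "uncertain" when no condition reaches the agreement threshold.
    """
    if not frame_labels:
        return "uncertain"
    label, count = Counter(frame_labels).most_common(1)[0]
    return label if count / len(frame_labels) >= min_agreement else "uncertain"

# Example: 8 of 10 sampled frames were classified as fog.
print(classify_feed(["fog"] * 8 + ["clear"] * 2))  # fog
```

Aggregating over frames makes the report robust to single-frame misclassifications (e.g., headlight glare in one frame) at the cost of slower response to changing conditions.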


With our best CNN models, we achieved a maximum classification accuracy of greater than 90% in each category on validation data. This level of accuracy is comparable to state-of-the-art computer vision algorithms. Applying transfer learning allowed most predefined neural networks to utilize the visual features developed from a larger data set and to outperform networks with randomly initialized weights. Pretrained networks consistently learned more quickly and achieved higher accuracy across multiple runs.
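A minimal sketch of the transfer-learning mechanics described above: a pretrained backbone is kept frozen and only a new classification head is trained on the target task. Everything here is a stand-in, not the study's implementation: the "pretrained" backbone is a fixed random projection (in the actual work it would be convolutional layers trained on a larger data set), and the binary toy task stands in for clear-versus-poor visibility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: frozen weights, never updated below.
W_backbone = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor (ReLU of a fixed linear projection)."""
    return np.maximum(x @ W_backbone, 0.0)

# Toy binary task standing in for clear vs. poor visibility.
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
y = (features(X) @ w_true > 0).astype(float)

# Transfer learning step: train only a new linear head on the frozen features.
F = features(X)
w_head = np.zeros(16)
for _ in range(500):
    logits = np.clip(F @ w_head, -60, 60)
    p = 1.0 / (1.0 + np.exp(-logits))         # sigmoid
    w_head -= 0.1 * F.T @ (p - y) / len(y)    # logistic-regression gradient step

acc = np.mean((F @ w_head > 0) == (y == 1))
print(f"head-only training accuracy: {acc:.2f}")
```

Because the backbone is frozen, only 16 head weights are fit here; this is why pretrained networks can learn quickly from comparatively small labeled sets like the one curated in this effort.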

Below are the relevant metrics achieved over the course of this research program:

  • Data Curation and Labeling:
    • 5,285 images in the trainval/test dataset
    • 2,000 images per visibility outcome category
    • 1,000 images per causal condition (e.g., clear, fog, dust, rain)
      • Dust storms were too rare, given camera density in New Mexico, to obtain a sizeable training set.
    • 400 or more view angles
    • Representation of varied geography, climate, and scene complexity
    • 5% or lower labeling error on training data
  • CNN Model Accuracy and Variance (mean +/- standard deviation across runs):
    • Visibility detection (clear/poor): 90.92% +/- 1.38%
    • Weather condition detection (clear/rain/fog/snow): 85.13% +/- 3.89%
    • Snow detection (snow on the road/roads plowed): 89.60% +/- 1.07%
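The accuracy figures above follow the standard mean +/- standard deviation summary over repeated training runs. The snippet below shows how such a summary is computed; the five per-run accuracies are made-up placeholders, not the study's actual run results.

```python
from statistics import mean, stdev

# Hypothetical per-run validation accuracies (%) for one model across
# five repeated training runs; illustrative values only.
runs = [89.4, 91.8, 90.1, 92.3, 90.9]

print(f"{mean(runs):.2f}% +/- {stdev(runs):.2f}%")
```

Reporting the spread alongside the mean matters here because training is stochastic (random initialization, data shuffling), so a single run's accuracy can over- or understate a model's typical performance.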