Active-Vision

Cost-Effective Mobility Solutions

Each year, tens of thousands of people die in traffic accidents in the United States. In 2022 alone, an estimated 42,975 people died in motor vehicle crashes.1 While vehicle safety, roadside infrastructure and traveler information continue to improve, the stark reality remains that as long as there are vehicles, there will be accidents. While research tends to focus on mitigating or eliminating the causes of accidents, more research is needed to improve accident recognition and response.

Department of Transportation (DOT) operators around the nation agree that accelerating response times will help save lives2 and reduce congestion. One study found a nine-minute median response time across more than 2,000 counties3. Making it easier for people to report accidents or interfacing directly with emergency service computer-aided dispatch (CAD) systems could help, but using the ever-growing network of traffic cameras has produced the most promising results when successfully deployed.

Imagine you had thousands of employees, each monitoring a camera feed, counting every vehicle, recording speeds and reporting any incidents or abnormalities. This would certainly help reduce incident response time but would come at an exorbitant cost that no state DOT can afford. However, computer-vision-based machine-learning (ML) techniques can achieve the same level of perception at a fraction of the cost. Using these systems, organizations with traffic cameras could constantly monitor their roadways, recognizing accidents, wrong-way drivers and more, while simultaneously notifying first responders with virtually zero delay.

With these goals in mind, Southwest Research Institute’s intelligent transportation specialists developed the initial idea for the Active-Vision™ system. Through Internal Research and Development (IR&D) funding, engineers addressed these problems, developing an application to detect traffic anomalies such as accidents or wrong-way drivers. This application drew on years of machine vision and learning research, providing consistent, reliable anomaly detection at a cost that DOTs, cities and municipalities can afford.

DETAIL

A geographic information system, or GIS, is a spatial system that creates, manages, analyzes and maps data.

ACTIVE-VISION LAUNCH

The project first needed to develop algorithms that could identify a series of pixels across a two-dimensional screen as a vehicle. The first question was, how do people look at an image and know they are looking at a vehicle? For instance, at some point in my early childhood, my mom or dad showed me a toy car and said “car.” Then, when I pointed at the dog and said “car,” they gently corrected me by pointing out another car driving past. It turns out that teaching an ML algorithm is not terribly different. Without getting into the fuzzy logic details of the ML algorithms, you simply provide tons of data, and you label each proper area “car.” When the algorithm incorrectly interprets something as a vehicle, you correct it. Over time, just like us, it gets better and better at determining what is, or is not, a car.
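In code, that teach-and-correct loop looks roughly like the sketch below, which fine-tunes an off-the-shelf object detector on human-labeled boxes marked “car.” The detector choice (torchvision’s Faster R-CNN), the stand-in frame and the single “car” class are illustrative assumptions, not details of the Active-Vision implementation.

```python
# Minimal sketch of supervised detector training, assuming a torchvision
# Faster R-CNN and a single "car" class. Illustrative only.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on general imagery, then re-teach its
# final layer to recognize just two classes: background and "car".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One hand-labeled example: an image plus the boxes a human marked as "car".
# Real training would iterate over thousands of such labeled frames.
image = torch.rand(3, 480, 640)  # stand-in camera frame, values in [0, 1]
target = {
    "boxes": torch.tensor([[100.0, 200.0, 180.0, 260.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),  # class 1 = "car"
}

# Each pass, the model guesses, the loss measures how wrong it was, and the
# optimizer nudges the weights: the "gentle correction" described above.
loss_dict = model([image], [target])
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```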

Ryan McBee is developing software to process feeds from traffic cameras deployed across highway systems to automatically detect traffic anomalies such as accidents or wrong-way drivers.

Active-Vision deployment enables video feed processing using cloud-based, on-premises or edge deployments, with a hybrid of the three offering maximum efficiency. Cloud-based models are low maintenance but require high-bandwidth connectivity. On-premises models deployed at traffic management centers require moderate maintenance. Edge or roadside deployments (field kits) process video feeds onsite, eliminating the need to stream video feeds over the field network, but are higher maintenance. Detections identified onsite can then be uploaded to the central system.
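To illustrate that hybrid tradeoff, here is a minimal sketch of how a planner might route each camera feed, assuming hypothetical bandwidth figures and field-kit availability. None of these names or thresholds come from Active-Vision.

```python
# Hypothetical routing rule for a hybrid deployment: stream raw video only
# where bandwidth allows, otherwise process at the edge and upload
# lightweight detections. Values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CameraFeed:
    camera_id: str
    uplink_mbps: float   # available field-network bandwidth
    has_edge_kit: bool   # roadside compute available?

def choose_deployment(feed: CameraFeed, stream_cost_mbps: float = 4.0) -> str:
    """Pick where this feed's video should be processed."""
    if feed.uplink_mbps >= stream_cost_mbps:
        # Enough bandwidth to ship raw video to cloud servers,
        # the lowest-maintenance option.
        return "cloud"
    if feed.has_edge_kit:
        # Process onsite; only small detection records leave the field.
        return "edge"
    # Fall back to on-premises servers at the traffic management center.
    return "on-premises"

print(choose_deployment(CameraFeed("CAM-042", uplink_mbps=1.5, has_edge_kit=True)))
```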

Central to Active-Vision vehicle detection is the process of building a homography or “mapping” between what the camera sees and a satellite perspective.

Once you can determine what a vehicle is, how do you know what that vehicle is doing? Did it stop? Is it going backwards? A video is just a series of images, so processing each image individually tells you what is and is not a vehicle in that frame. The next step is understanding that a vehicle found in the previous image is the same vehicle in the current image, and in subsequent images. From this, we can derive whether, and how far, a vehicle moved, as well as its direction.
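A bare-bones sketch of that frame-to-frame association follows, pairing each detection with the closest one in the previous frame and deriving displacement and heading. Production multi-object trackers are considerably more robust; this is purely illustrative.

```python
# Illustrative nearest-neighbor association between two frames of
# detections, deriving per-vehicle movement and direction.
import math

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def associate(prev_boxes, curr_boxes, max_dist=50.0):
    """Greedily pair current detections with the closest previous one."""
    tracks = []
    for cb in curr_boxes:
        cx, cy = centroid(cb)
        best, best_d = None, max_dist
        for pb in prev_boxes:
            px, py = centroid(pb)
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = pb, d
        if best is not None:
            px, py = centroid(best)
            dx, dy = cx - px, cy - py
            # Displacement in pixels per frame, heading in degrees
            # (image coordinates, so y increases downward).
            tracks.append({"moved": math.hypot(dx, dy),
                           "heading": math.degrees(math.atan2(dy, dx))})
    return tracks

prev = [(100, 200, 180, 260)]
curr = [(110, 198, 190, 258)]   # same vehicle, shifted mostly rightward
print(associate(prev, curr))    # ~10 px moved, heading about -11 degrees
```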

Determining the direction a vehicle is moving relative to the camera does not indicate where the vehicle actually is. It can be problematic, and potentially dangerous, if we think a vehicle is on the wrong side of the road or perhaps driving through a lake. We must determine the vehicle’s position relative to the roadway it is traversing.

To do this, Active-Vision does what computer vision software does well, using projective geometry to recover the camera’s visual perspective. In a process known as auto-homography, different images of the same planar surface, in this case a road, are stitched together, allowing 3D models of objects, such as vehicles, to be inserted into an image or video at the correct perspective. This process translates the camera’s view into a top-down placement of each vehicle on a GIS map, integrating location with condition descriptions, so the system can report precise latitude and longitude positions for the vehicles it detects. Combining this technology with free OpenStreetMap (OSM) data, the system can accurately determine where a vehicle is relative to the roadways it’s traveling.

But we still have a problem. As good as the auto-homography process is, it is not infallible and can, at times, suggest an incorrect configuration. To combat this, the system cleverly applies logic and simply watches where vehicles are actually traveling. Any time the camera view is moved or repositioned, before assuming that the homography is correct, the system “watches” where the vehicles are and determines whether that matches the underlying roadway. If it matches, which happens most of the time, all is well. If not, it rebuilds the homography until it gets it right, just like you or I would do.
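In standard computer vision terms, the camera-to-map step is a planar homography, which can be computed from matched point pairs, as in this OpenCV sketch. The pixel and longitude/latitude correspondences are made-up values for illustration, not Active-Vision data.

```python
# Sketch of the homography step: given four or more point pairs between the
# camera view and a top-down map, compute the mapping and project a detected
# vehicle's pixel position onto the map. Coordinates are illustrative.
import numpy as np
import cv2

# Pixel coordinates in the camera image...
image_pts = np.array([[320, 400], [900, 420], [700, 180], [380, 170]],
                     dtype=np.float32)
# ...and the matching ground positions (lon, lat) on the satellite/GIS map.
map_pts = np.array([[-98.4931, 29.4241], [-98.4925, 29.4240],
                    [-98.4926, 29.4249], [-98.4930, 29.4250]],
                   dtype=np.float32)

# findHomography solves for the 3x3 perspective transform between planes.
H, _ = cv2.findHomography(image_pts, map_pts, method=cv2.RANSAC)

# Project a detected vehicle (pixel centroid) into map coordinates.
vehicle_px = np.array([[[600.0, 300.0]]], dtype=np.float32)
lon, lat = cv2.perspectiveTransform(vehicle_px, H)[0, 0]
print(f"vehicle at lon={lon:.5f}, lat={lat:.5f}")
```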

DETAIL

OpenStreetMap is a free, open geographic database updated and maintained by a community of volunteers.

Once the system accurately and reliably detects traffic conditions and anomalies, the final piece of the puzzle is reporting those detections the moment they happen. Toward this goal, Active-Vision shares data through an application programming interface (API), allowing external systems to collect data. The Active-Vision API is specifically written to interface with ActiveITS™, the SwRI-built traffic management software that notifies advanced traffic management system (ATMS) operators when abnormal conditions occur.
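Because the Active-Vision API itself is not documented in this article, the following is a purely hypothetical sketch of what a consuming system could look like; the endpoint URL and payload fields are illustrative assumptions.

```python
# Hypothetical consumer of an anomaly-detection API. The endpoint and
# record shape are assumptions for illustration, not the real interface.
import json
import urllib.request

ENDPOINT = "https://example.invalid/active-vision/api/v1/anomalies"  # placeholder

def notify_operators(event: dict) -> None:
    # Stand-in for pushing the alert into a CAD/ATMS operator console.
    print(f"ALERT: {event['type']} at ({event['lat']}, {event['lon']})")

def poll_anomalies(url: str = ENDPOINT) -> None:
    """Fetch current anomaly detections and hand each to a dispatcher."""
    with urllib.request.urlopen(url) as resp:
        for event in json.load(resp):
            # e.g. {"type": "wrong_way", "lat": 29.4241, "lon": -98.4931,
            #       "camera_id": "CAM-042", "detected_at": "..."}
            notify_operators(event)
```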

A new “auto-homography” tool automatically assigns a vehicle’s GPS location based on its proximity to a traffic camera, instead of requiring a human to manually match points between a traffic camera and satellite image.

ACTIVE-VISION TODAY

Today, Active-Vision can reliably count vehicles, measure speeds, and detect wrong-way drivers and stalled or disabled vehicles, reporting that information to first responders to reduce response times. However, SwRI is continuing scale-up activities. The system can currently process over 50 camera feeds on a single server but will eventually need to be distributed across multiple servers to handle thousands of feeds.
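One simple way such a scale-out could work, offered strictly as an illustrative assumption rather than SwRI’s design, is deterministic assignment of camera feeds to a pool of processing servers.

```python
# Illustrative feed sharding: hash each camera ID so the same feed always
# lands on the same server. Server names are hypothetical.
import hashlib

SERVERS = ["vision-01", "vision-02", "vision-03"]  # hypothetical pool

def assign_server(camera_id: str) -> str:
    """Deterministically map a camera feed to a processing server."""
    digest = hashlib.sha256(camera_id.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(assign_server("CAM-042"))
```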

DETAIL

An application programming interface, or API, is a set of rules that allows different applications to communicate, acting as an intermediary layer that processes data transfers between systems.

The City of San Antonio ran the first Active-Vision pilot program, and the Central Florida Expressway Authority is currently conducting a second deployment and evaluation. Additional DOTs across the nation have expressed interest in Active-Vision based on these demonstrations and its reasonable cost.

Additional internally funded research continues to hone the accuracy of the algorithms, aiming to deliver sub-meter-level accuracy to provide lane-level reporting and simulated connected vehicle information. This information mimics the data that current connected vehicles send, enabling fine-grained analysis to characterize how vehicles move and react to different conditions. For instance, perhaps a curve in the roadway is a little too sharp, causing hard braking events, or an exit ramp is not long enough, causing backups onto the main roadway. This research, and the research that follows, will enable future systems to detect more than ever.
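As a small illustration of that kind of fine-grained analysis, hard-braking events can be flagged from a per-vehicle speed trace derived from tracked positions. The 3.4 m/s² threshold below is a common rule of thumb in traffic safety research, not a value taken from Active-Vision.

```python
# Flag hard-braking events in a speed trace sampled at fixed intervals.
# Threshold and trace values are illustrative assumptions.
def hard_braking_events(speeds_mps, dt=1.0, threshold=3.4):
    """Return indices where deceleration exceeds the threshold (m/s^2)."""
    events = []
    for i in range(1, len(speeds_mps)):
        decel = (speeds_mps[i - 1] - speeds_mps[i]) / dt
        if decel > threshold:
            events.append(i)
    return events

# A vehicle cruising, then braking hard approaching a sharp curve.
trace = [30.0, 29.8, 29.5, 25.5, 21.0, 17.0, 16.5]  # speeds at 1-second steps
print(hard_braking_events(trace))  # [3, 4, 5]: the abrupt drops
```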

ACTIVE-VISION TOMORROW

As a stable, deliverable platform that can accurately detect and place vehicles, Active-Vision can provide meaningful results, mitigate congestion and, above all, save lives. However, we are not done yet. The software shows promise for additional capabilities to improve safety and mobility, such as detecting weather, including rain, snow or dust. Additional capabilities could identify pedestrians, animals or debris on roadways or shoulders. The system could be trained to find ramp backups, lane departures or the precise location of congestion. Active-Vision could classify vehicles — truck, car, motorcycle, bicycle, etc. — or the severity of accidents. The system could be trained to identify and provide alerts for distracted or drunk drivers. SwRI is considering developing these capabilities, all realistic candidates based on the current architecture of the system.

Detection Possibilities

IR&D

SwRI’s Internal Research and Development program has helped to create this and many more successful projects that advance science and technology while providing real-world benefits. This work builds on previous research in intelligent transportation and automated vehicles, using the skills and expertise gained in computer vision and machine learning and applying them to a different, but related, problem set. While the primary benefits of internal research projects are specific technology advancements, these projects also enhance staff skillsets, creating experts in cutting-edge technologies.

As we move to expand Active-Vision capabilities, the technology and skills gained will undoubtedly continue to help fulfill SwRI’s mission statement of benefiting government, industry and the public through innovative science and technology.

Clay Weston with Active-Vision on screen behind him

ABOUT THE AUTHOR

Clay Weston is assistant director of the Intelligent Transportation Systems (ITS) Department, responsible for leading a team of ITS specialists developing innovations in the mobility domain. The team develops software to support existing client systems and identify tomorrow’s mobility challenges.

Questions about this story or Active-Vision Anomaly Detection? Contact Clay Weston at +1 210 522-2954.