Unmanned and Downrange

SwRI engineers successfully demonstrated military applications for autonomous unmanned ground vehicles during 2012

Ryan D. Lamm

Ryan D. Lamm is a manager in the Intelligent Systems Department of SwRI’s Automation & Data Systems Division. He has more than 15 years of experience in intelligent vehicle system research and development, foreign and domestic. He is a senior member of IEEE and a U.S. expert for ISO TC204 in Vehicle/Roadway Warning and Control Systems, and has published more than 20 technical articles.

Southwest Research Institute demonstrated its MARTI unmanned ground vehicle (green vehicle) in both leading and following positions in a convoy operation at Fort Hood, Texas.

The SUMET EV-1 vehicle navigates autonomously over austere terrain using only electro-optical sensing.

These images illustrate the SUMET system’s real-time visualization applications

Although combatants experimented with remote-controlled, explosives-laden vehicles for land, sea and air as early as World War I, it was not until World War II that Germany successfully deployed an unmanned ground vehicle (UGV). “Goliath,” a small, tracked vehicle fielded in 1944, was controlled via a 400-meter cable and was intended solely to deliver an explosive charge at a stand-off distance.

By the late 1960s, one of the first autonomous vehicles, a mobile robot nicknamed “Shakey,” was developed in the United States. Its practicality was limited, however, in that it took almost an hour to decide where and how to move about in the laboratory.

Over the next 40 years, scientists and engineers, by then referred to as roboticists, strove to develop electro-mechanical vehicle systems capable of real-time perception and navigation in unstructured environments to perform various dull, dirty and dangerous operations. The technology has made great strides in the past two decades, led largely by the Defense Advanced Research Projects Agency’s (DARPA) two Grand Challenges in 2004 and 2005 and its Urban Challenge in 2007. These challenges produced impressive demonstrations of autonomous navigation by full-size passenger vehicles, but at an extremely high, deployment-prohibitive cost in sensors and computing.

The military deployed thousands of small tele-operated robotic systems for purposed, deterministic tasks such as route reconnaissance and countering improvised explosive devices in support of Operation Iraqi Freedom, Operation Enduring Freedom and the International Security Assistance Force. While not autonomous, these robotic systems proved effective and saved the lives of hundreds of warfighters. Several recently developed mid-size robotic systems with semi-autonomous functionality are undergoing testing in war zones. To get to the next level, however, fully autonomous tactical unmanned ground vehicles need reliable and safe performance in military-relevant scenarios at a much lower cost.

Since the DARPA Challenges, the science and technology community has continued to advance the performance and reliability of autonomous navigation. The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) sponsored a Robotics Rodeo in September 2009 at Fort Hood, Texas, and at Fort Benning, Georgia, in October 2010 and again in June 2012. These events brought together industry representatives, warfighters and those who set requirements for robotic vehicles to observe the latest technology advancements and to facilitate a dialog aimed at accelerating technology deployment. A team of engineers from Southwest Research Institute (SwRI) participated in all three events, showcasing applied technology that addressed barriers to deployment such as cost and reliability. This applied technology included new, sophisticated algorithms based on those originally developed under SwRI’s internal research and development program known as the Mobile Autonomous Robotics Technology Initiative (MARTI®), as well as technology developed on two externally funded projects.

SwRI created the MARTI software to develop UGV enabling technology for the autonomous control of tactical and combat military ground vehicles, passenger cars, commercial trucks, agriculture/construction tractors and mobile robots. One of the fundamental aspects of the program was rapid portability to multiple platforms. The SwRI team emphasized unique custom perception and control algorithms using commercial-off-the-shelf hardware. The result was an autonomous vehicle benchmarking platform uniquely suited to rapidly assess sensor and algorithm performance over a wide array of environments, missions and behaviors. The multidisciplinary team included engineers with backgrounds in active and passive sensor processing, machine vision, sensor fusion, robotics, control systems, wireless communications, safety and reliability systems, modeling and simulation, multi-agent cooperative systems, engineering dynamics, independent testing and evaluation, software architectures and electrical and mechanical system design. In November 2008, SwRI publicly demonstrated MARTI’s autonomous capabilities on a Ford Explorer on the streets of New York City, where it successfully negotiated intersections, interacted with other manned and unmanned vehicles, avoided dynamic obstacles such as vehicles and pedestrians, and coordinated maneuvers with other vehicles and roadside infrastructure devices such as traffic signals. The MARTI internal research and development program concluded in 2011.

Robotics Rodeo

At the first Robotics Rodeo at Fort Hood in 2009, SwRI demonstrated how a UGV can reliably support military multi-vehicle convoy operations. The modularity of the SwRI-developed autonomous UGV technology allowed MARTI’s autonomous behaviors to be rapidly adapted to directly satisfy a U.S. Army Operational Needs Statement — Convoy Logistics/Operations. The technology SwRI demonstrated at Fort Hood allows a convoy to employ a UGV in numerous ways. For example, a convoy can instruct a UGV to “lead upon command” and “follow where appropriate” in various formations; the UGV can navigate an urban environment as the lead of a convoy and then fall back into formation upon command; and it can rapidly switch between human operation and fully autonomous modes. The Cooperative Convoy System (CCS) technology also enables a UGV to convoy using either GPS waypoints and a defined map or active sensors to track a leading vehicle.
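The GPS-waypoint mode of convoy following can be sketched as a breadcrumb follower: the unmanned vehicle records the leader’s position fixes and steers toward the oldest one it has not yet reached. This is a minimal illustration of the concept, not the CCS implementation; the class name and the 1-meter arrival tolerance are assumptions.

```python
from collections import deque
import math

class BreadcrumbFollower:
    """Waypoint-mode convoy following sketch: record the leader's GPS
    breadcrumbs and steer toward the oldest crumb not yet reached.
    The 'reached' radius is illustrative, not from the fielded system."""

    REACHED = 1.0  # meters; assumed arrival tolerance

    def __init__(self):
        self.crumbs = deque()

    def record_leader(self, x, y):
        # Called each time a new leader position fix arrives
        self.crumbs.append((x, y))

    def next_target(self, own_x, own_y):
        # Discard crumbs the follower has already passed through,
        # then hand the steering controller the oldest remaining one
        while len(self.crumbs) > 1 and math.hypot(
                self.crumbs[0][0] - own_x,
                self.crumbs[0][1] - own_y) < self.REACHED:
            self.crumbs.popleft()
        return self.crumbs[0] if self.crumbs else None
```

Because the follower replays the leader’s actual path rather than driving straight at the leader, it can round the same corners the leader did — which is what lets a convoy snake through an urban environment.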

At the second Robotics Rodeo one year later, SwRI demonstrated MARTI’s ability to autonomously follow a dismounted warfighter at low speed, using a combination of electro-optical (camera) and light detection and ranging (LIDAR) sensing, without the need for active RF beacons or tags carried by the soldier. The operator selects the desired dismount from a video image displayed on a touch-screen control unit, and the UGV then identifies and tracks the selected pedestrian. Additionally, SwRI demonstrated a tele-operation capability allowing an operator to remotely control the unit within line of sight, or tele-operate the unit beyond line of sight. The seamless switching between different autonomy modes was highlighted.

The recent 2012 Robotics Rodeo at Fort Benning included one of the largest demonstration operations ever conducted by SwRI. In all, 15 technical and support staff members from the Institute were on-site at various times, using five vehicles in two independent demonstrations, one of which involved two other companies. The demonstrations were successful despite 100-degree temperatures, blowing sand and Georgia clay, high humidity and very long days in the field.

Small Unit Mobility Enhancement Technology (SUMET) Program

The first demonstration highlighted low-cost electro-optical perception on a UGV. Performance results from the Small Unit Mobility Enhancement Technology (SUMET) program, funded by the Office of Naval Research (ONR), were demonstrated in real time to more than 20 government subject-matter experts. The SUMET program aims to increase the platform capability and affordability of unmanned ground vehicle-enabling technologies, including low-cost, video-based perception systems, advanced video processing techniques, cognitive reasoning architectures and novel algorithm coding methodologies. A primary objective is to achieve reliable autonomous vehicle operation in austere, harsh, off-road environments without depending on GPS. SUMET achieves this by using electro-optical perception and advanced path-planning algorithms.

For the SUMET program, SwRI developed a low-cost perception system that uses data from eight forward-looking cameras (six of which are spectral cameras), two cameras on each side of the vehicle and two cameras in the rear. This pure electro-optical system provides some unique advantages over more commonly used active sensing, such as radar and LIDAR. Additionally, SwRI has been able to achieve full processing at 12 Hz, fast enough for off-road navigation by a tactical vehicle.

Technical approach

The local ground segmentation process uses the disparity image to distinguish between the ground plane and vertical obstacles. Its processing nodes compute the v-disparity image and apply Hough line detection to identify which disparities correspond to the ground plane. Disparities within a predefined threshold of the resulting ground line are segmented as the ground plane; all remaining disparities are segmented as obstacles. Each obstacle’s height above the ground plane is also calculated and published along with the ground and obstacle segmentations.
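The v-disparity idea can be illustrated in a few lines: a ground plane appears in stereo disparity as values that grow roughly linearly with image row, so fitting that line and thresholding against it separates ground from vertical obstacles. The sketch below substitutes a least-squares fit on per-row median disparity for the Hough-based line extraction the article describes; the threshold value is an assumption.

```python
import numpy as np

def segment_ground(disparity, thresh=2.0):
    """Segment ground plane vs. vertical obstacles via a v-disparity fit.

    A ground plane traces a slanted line in v-disparity space: disparity
    grows roughly linearly with image row v. Here we estimate that line
    from the per-row median disparity (a stand-in for the Hough-line
    node described in the article), then label pixels whose disparity
    lies within `thresh` of the line as ground."""
    rows, cols = disparity.shape
    row_disp = np.median(disparity, axis=1)   # dominant disparity per row
    v = np.arange(rows)
    # Fit the ground line d = a*v + b over the lower half of the image,
    # where ground typically dominates the scene
    half = rows // 2
    a, b = np.polyfit(v[half:], row_disp[half:], 1)
    ground_line = a * v + b                   # expected ground disparity per row
    ground = np.abs(disparity - ground_line[:, None]) <= thresh
    return ground, ~ground                    # (ground mask, obstacle mask)
```

A vertical obstacle holds nearly constant disparity across many rows, so it departs sharply from the fitted line and falls into the obstacle mask.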

Material classification uses the imagery from all eight forward-looking cameras to classify individual pixels in the scene. The system produces a stream of images labeled to indicate their material classification. Classification features include depth, derived from the stereo images; whether a pixel belongs to the ground plane or to an object protruding from it; the spectral values from the six spectral cameras; the color values from the color camera; and a myriad of statistical and textural measurements computed from one of the left stereo images.
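The per-pixel labeling step can be sketched as follows, assuming the features above have already been stacked into one vector per pixel. A nearest-centroid rule stands in for the actual classifier, which the article does not specify; the function and parameter names are illustrative.

```python
import numpy as np

def classify_materials(feature_stack, centroids):
    """Per-pixel material labeling by nearest class centroid.

    feature_stack: (H, W, F) array, one feature vector per pixel (e.g.
    stereo depth, a ground-plane flag, spectral bands, color values,
    texture statistics). centroids: (C, F) array of per-class mean
    feature vectors learned offline. Returns an (H, W) label image,
    analogous to the labeled image stream described in the article."""
    H, W, F = feature_stack.shape
    flat = feature_stack.reshape(-1, F)                 # (H*W, F)
    # Squared distance from every pixel to every class centroid
    d2 = ((flat[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(H, W)
```

Computing the distance matrix in one vectorized step is what makes dense per-pixel classification tractable at frame rate; a production system would additionally normalize feature scales before measuring distance.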

The object-tracking process detects and tracks nearby pedestrians and vehicles. The system contains separate detection nodes for pedestrians and vehicles, each trained offline. As objects are detected, their positions are continuously provided to the tracking node, which updates a “world” model of the vehicle’s environment with each object’s latest position and trajectory estimate for continuous tracking.
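The detect-then-track flow can be sketched as a store of tracks that associates each new detection with its nearest existing track and refreshes that track’s position and velocity estimate. This is a minimal illustration; the real system’s association and filtering details are not published, and the gate distance is an assumption.

```python
import math

class TrackStore:
    """Minimal tracking sketch: nearest-neighbor association of new
    detections to existing tracks, keeping a finite-difference velocity
    estimate per track (a stand-in for the tracking node's updates to
    the world model described in the article)."""

    def __init__(self, gate=5.0):
        self.tracks = {}       # id -> (x, y, vx, vy)
        self.gate = gate       # max association distance (m), assumed
        self._next_id = 0

    def update(self, detections, dt=0.1):
        for (x, y) in detections:
            best, best_d = None, self.gate
            for tid, (tx, ty, _, _) in self.tracks.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:
                # No track close enough: a new object entered the scene
                self.tracks[self._next_id] = (x, y, 0.0, 0.0)
                self._next_id += 1
            else:
                # Refresh position and estimate velocity from the motion
                tx, ty, _, _ = self.tracks[best]
                self.tracks[best] = (x, y, (x - tx) / dt, (y - ty) / dt)
```

The velocity estimates are what let downstream planning treat pedestrians and vehicles as moving obstacles rather than static ones.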

The world-model software maintains persistent data from the other subsystems in a common frame at a central location. It accumulates range-sensor and material-classification data over time; stores and answers queries for information about tracked objects such as pedestrians and vehicles, vehicle and system state, elevation data, aerial imagery, situational awareness and mission configuration; and generates maps for navigation based on the different data models.
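The map-generation side of the world model can be sketched as rasterization: classified points already transformed into the common frame are binned into grid cells that downstream planning reads as a costmap. A simple max-cost-per-cell rule is assumed here; the article does not state how the actual system aggregates repeated observations.

```python
import numpy as np

def rasterize_to_grid(points_world, costs, origin, res, shape):
    """Accumulate classified points (already in the common world frame)
    into a gridded costmap, as the world model does when generating maps
    for navigation. Each cell keeps the highest cost observed in it
    (an assumed aggregation rule).

    points_world: (N, 2) x/y positions; costs: N traversal costs;
    origin: grid corner in world coordinates; res: cell size in meters;
    shape: (rows, cols) of the output grid."""
    grid = np.zeros(shape)
    cells = ((points_world - origin) / res).astype(int)
    for (i, j), c in zip(cells, costs):
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            grid[i, j] = max(grid[i, j], c)   # pessimistic: worst cost wins
    return grid
```

Keeping the worst observed cost per cell is a conservative choice for safety; a fielded system would also age out stale observations so the map tracks a changing environment.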

The localization system is connected directly to several traditional, low-cost localization sensors, such as an inertial measurement unit and a fiber-optic gyro. It is also connected to the controller-area network bus and to the vehicle network over Ethernet, which allows it to receive sensor data from the low-level controller, the actuators themselves and GPS. The localization system fuses these inputs, along with visual odometry, to estimate the vehicle’s positional change from one sample to the next. This allows the vehicle to operate without depending on GPS for extended periods of time.
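The GPS-denied case reduces to dead reckoning: integrating speed (from the vehicle bus) and yaw rate (from the gyro) to propagate the pose between samples. The sketch below shows only that integration step, not the full multi-sensor fusion the article describes.

```python
import math

def propagate_pose(pose, speed, yaw_rate, dt):
    """Dead-reckon the vehicle pose from wheel-speed and gyro inputs:
    the per-sample positional-change estimate the localization system
    produces when GPS is unavailable. pose = (x, y, heading_rad).
    Integrates position along the average heading over the step."""
    x, y, th = pose
    th_mid = th + 0.5 * yaw_rate * dt      # midpoint heading for the step
    x += speed * math.cos(th_mid) * dt
    y += speed * math.sin(th_mid) * dt
    return (x, y, th + yaw_rate * dt)
```

Pure dead reckoning drifts without bound, which is why the real system blends in visual odometry: the camera-derived motion estimate bounds the accumulated error between GPS fixes.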

Near-field planning occurs within the platform’s perception horizon to generate low-level paths that facilitate obstacle avoidance. Costmaps are generated using perception data stored in the world model. These are fused and converted into gridded, or rasterized, representations of ground and voxel (volumetric picture element, analogous to a 3-D pixel) data that assign costs to specific parts of the near-field environment. Higher costs represent less traversable parts of the environment, while lower costs represent more traversable, and desirable, parts of the environment to drive through.

The processor runs a search algorithm to find the best path for the vehicle to follow to a goal waypoint, which is generated along the far-field route at a prescribed distance in front of the vehicle. The goal “state” is extended from this waypoint, perpendicularly to the far-field route in either direction, allowing near-field paths to be generated that do not explicitly reach the near-field goal waypoint. This is useful if the far-field route is partially blocked by obstacles or difficult terrain, allowing the vehicle to “wander,” or feel its way, through difficult environments that the far-field route traverses.

The low-level controller runs a control loop that attempts to minimize cross-track and heading error between the current platform position and orientation and the path-segment representation of the near-field solution path. Actuator commands for steering, throttle and brake are calculated according to the control scheme and passed down to the drive-by-wire system for actuation.
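The costmap search step can be illustrated with a uniform-cost (Dijkstra) search over the grid; the article says only that "a search algorithm" is used, so this particular algorithm and the blocked-cell threshold are assumptions.

```python
import heapq

def plan_path(costmap, start, goal):
    """Cheapest near-field path through a gridded costmap (Dijkstra
    sketch; the production search algorithm is not specified in the
    article). costmap is a 2-D list of traversal costs; cells with
    cost >= 1e6 are treated as untraversable. Returns the list of
    (row, col) cells from start to goal."""
    rows, cols = len(costmap), len(costmap[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                      # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] < 1e6:
                nd = d + costmap[nr][nc]  # entering a cell costs its value
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to reconstruct the path
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

Because costs are graded rather than binary, the search naturally trades path length against terrain difficulty — the same mechanism that lets the extended goal state pull the vehicle sideways around a partial blockage.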

Robotics Rodeo demonstration of SUMET

At Fort Benning, the SwRI team and the Naval Surface Warfare Center Dahlgren Division jointly demonstrated a high-mobility multipurpose wheeled vehicle (HMMWV) equipped with the SUMET system; it successfully navigated a range typically used for testing vehicle mobility. During the demonstration, several visual elements of the real-time system were transmitted to an observation point, where they were displayed and described in real time to subject-matter experts attending the demonstration. These included a live image from a forward-looking RGB camera, a processed image highlighting the material classification from the electro-optical algorithms, and an aerial image with a costmap overlay, along with both the far- and near-field planned paths.

The demonstration represented the first time the SUMET system had been tested off SwRI grounds in a tactical environment. Following the demonstration, SwRI engineers were able to capture a significant amount of data relative to different types of terrain around Fort Benning, with help from the Maneuver Battle Lab, also located at Fort Benning, for use in a future program experiment.

Gesture recognition

The second Robotics Rodeo demonstration of 2012 was a joint demonstration among SwRI, AM General LLC and Synexxus Inc.

The team demonstrated the SwRI-developed AM General Gesture Recognition System for UGVs, an image-processing algorithm capable of identifying and distinguishing different arm gestures of a dismounted warfighter. The system allows more natural interaction between the warfighter and the autonomous vehicle and enables the UGV to function as a member of the squad. Commands such as follow-me, stop, and offset right and left were demonstrated through integration of gesture recognition with SwRI’s existing dismount-following capability. The framework allows other commands to be added to meet specific squad tactics.
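To illustrate the mapping from an observed arm pose to a squad command, the toy classifier below bins the arm angle (shoulder to wrist, in image coordinates with y increasing downward) into three commands. This rule-based sketch and its angle thresholds are purely illustrative; the demonstrated system is an image-processing algorithm whose internals are not described in the article.

```python
import math

def classify_gesture(shoulder, wrist):
    """Toy arm-gesture classifier: maps the shoulder-to-wrist angle to
    a squad command, in the spirit of the demonstrated follow-me /
    stop / offset gestures. Thresholds are assumptions.

    Points are (x, y) in image coordinates, y increasing downward."""
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    angle = math.degrees(math.atan2(-dy, dx))  # 0 deg = arm straight out
    if angle > 60:
        return "stop"        # arm raised overhead
    if -30 <= angle <= 30:
        return "offset"      # arm held out roughly horizontally
    return "follow-me"       # arm lowered
```

A real pipeline would first extract the body keypoints from video and smooth the classification over several frames, so a momentary pose does not trigger a command.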

In the second part of the demonstration, a Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) system from Synexxus was installed in an SwRI-owned, fully autonomous HMMWV 1165. The HMMWV was directed to a forward position to provide overwatch or surveillance using the C4ISR system. While not integrated directly into the autonomous vehicle’s navigation system, the currently fielded C4ISR system was used onboard with the UGV acting as a host mobile platform, giving the warfighters monitoring the C4ISR additional standoff distance from the area of operation.

The third element of this demonstration highlighted the ability of the hardware running the autonomy software, and of the manned vehicle kit — the hardware used in manned vehicles in a convoy — to be removed in less than 20 seconds and switched between different host tactical platforms. This capability was a direct result of a size, weight and power reduction effort conducted over the past 12 months by Automation and Data Systems Division engineers following the MARTI IR&D program. That effort not only produced packaging for autonomous capability that is no longer deployment-prohibitive, but also reduced the overall system cost.

The MARTI algorithms utilized in the SUMET system have been provided to the government as government-purpose rights, along with background intellectual property rights from subcontractors on the SUMET program. The SwRI team’s UGV demonstrations sparked significant discussion within the military community. The focus on technology maturation, platform portability, high functionality and low-cost sensing and packaging, along with tactically relevant and scalable autonomous behaviors, has positioned SwRI as a leader in the unmanned systems industry.


Questions about this article? Contact Lamm at (210) 522-5350 or ryan.lamm@swri.org.


The author acknowledges the technical contributions of SwRI staff members to this project. Additionally, the author acknowledges AM General LLC, Synexxus Inc., Naval Surface Warfare Center Dahlgren Division, and the Office of Naval Research.

Benefiting government, industry and the public through innovative science and technology
Southwest Research Institute® (SwRI®), headquartered in San Antonio, Texas, is a multidisciplinary, independent, nonprofit, applied engineering and physical sciences research and development organization with nine technical divisions.