Natural Language Dialogue for Supervisory Control of Autonomous Ground Vehicles (UGVs)

Overview

Southwest Research Institute (SwRI) is developing advanced stereo-vision and control algorithms in support of SoarTech’s Smart Interaction Device (SID), which facilitates natural interaction between a human operator and autonomous ground vehicles (UGVs). SwRI has developed algorithms that provide stereo-vision range data, is developing a mobility interface that implements the platform control associated with interpreted gestures and speech, and is helping integrate SoarTech’s SID into SwRI’s platform.

Image: a pedestrian being detected by the autonomous, driverless vehicle.

Approach

SwRI has developed advanced stereo-vision algorithms that provide range information in the form of disparity maps and point clouds. A focused region of high-resolution stereo data provides enough information to detect pedestrians and capture hand gestures at safe stand-off distances.

The mobility interface works with a mid-level controller to generate the steering, throttle, and brake commands that move the autonomous, driverless vehicle according to the externally interpreted speech and gestures. Because the mobility interface is modular, it currently responds to teleop-style commands, such as "move forward/backward," "turn left/right," and "stop," and could be extended to respond to higher-level commands, such as "follow route X," "go to waypoint Y," and "follow me." These higher-level behaviors could take advantage of existing path planning and obstacle avoidance algorithms.
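As an illustration of how a mobility interface of this kind might map interpreted teleop-style commands onto setpoints handed to a mid-level controller, the following Python sketch shows a minimal command dispatcher. The names (ActuationSetpoint, TELEOP_COMMANDS, dispatch, send_to_controller) and the numeric setpoints are hypothetical and do not represent SwRI's or SoarTech's actual interfaces.

from dataclasses import dataclass

@dataclass
class ActuationSetpoint:
    """Normalized setpoints handed to a mid-level controller (hypothetical)."""
    steering: float  # -1.0 (full left) .. +1.0 (full right)
    throttle: float  #  0.0 .. 1.0
    brake: float     #  0.0 .. 1.0

# Teleop-style commands mapped to illustrative setpoints; values are placeholders.
TELEOP_COMMANDS = {
    "move forward":  ActuationSetpoint(steering=0.0,  throttle=0.3, brake=0.0),
    "move backward": ActuationSetpoint(steering=0.0,  throttle=0.2, brake=0.0),
    "turn left":     ActuationSetpoint(steering=-0.5, throttle=0.2, brake=0.0),
    "turn right":    ActuationSetpoint(steering=0.5,  throttle=0.2, brake=0.0),
    "stop":          ActuationSetpoint(steering=0.0,  throttle=0.0, brake=1.0),
}

def dispatch(command: str, send_to_controller) -> None:
    """Translate an interpreted speech/gesture command into actuation setpoints.

    Unknown commands fall through to "stop" as a conservative default; higher-level
    commands ("follow route X", "go to waypoint Y") would instead hand off to
    path-planning and obstacle-avoidance behaviors.
    """
    setpoint = TELEOP_COMMANDS.get(command.lower(), TELEOP_COMMANDS["stop"])
    send_to_controller(setpoint)

# Example: print the setpoints instead of commanding a real vehicle.
dispatch("turn left", send_to_controller=print)

One design choice shown in the sketch is that unrecognized commands default to a stop, so a misheard utterance or an ambiguous gesture leaves the vehicle in a safe state.

The disparity-map and point-cloud output described above can likewise be illustrated with an off-the-shelf stereo matcher. The sketch below uses OpenCV's semi-global block matching purely as a stand-in for SwRI's stereo-vision algorithms; the image paths, matcher parameters, and reprojection matrix Q are placeholders, not values from the actual system.

import cv2
import numpy as np

# Placeholder paths to rectified left/right frames (assumed already rectified).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative, not tuned values.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# compute() returns fixed-point disparities scaled by 16; convert to pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 reprojection matrix produced by stereo rectification
# (e.g., cv2.stereoRectify); an identity placeholder keeps the snippet runnable.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z)
valid = disparity > 0                             # drop pixels with no match
point_cloud = points_3d[valid]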

Image: the target ground vehicle platform.

Results

The target ground vehicle platform is an autonomous HMMWV that SwRI developed for the Small Unit Mobility Enhancement Technology (SUMET) program; however, the developed algorithms are designed to be compatible with any drive-by-wire vehicle equipped with stereo cameras.


Christopher Mentzer, Manager, (210) 522-4240, cmentzer@swri.org

07/13/16