Signal Identification Using Gesture Notifications for Advanced Launching (SIGNAL)—Funded by AM General LLC

image of a driverless HMMWV

Overview

The SIGNAL software module is a gesture-based solution for initiating vehicle commands for dismount following. SIGNAL utilizes low-cost electro-optical (EO) stereo perception for detecting, tracking, and classifying static arm gestures from a self-driving vehicle platform in dynamic environments.


image of a driverless HMMWV following a man using hand signals

Approach

The SIGNAL module components include pedestrian tracking, stereo processing, and gesture recognition. Pedestrian tracking leverages SwRI's continuous LIDAR-based pedestrian tracking to localize the leading dismount; stereo processing generates a disparity map of that dismount, and gesture recognition uses this disparity information to track and classify arm gestures against a trained gesture database.
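The per-frame flow described above can be sketched roughly as follows. This is an illustrative outline only, not SIGNAL's actual implementation: the region-of-interest crop, the column-mean feature, and the nearest-template matching are all placeholder assumptions standing in for the undisclosed tracking and classification details.

```python
def crop_roi(disparity_map, x, y, w, h):
    """Crop the dismount's image region (from the LIDAR track) out of the
    stereo disparity map. Coordinates are a hypothetical track format."""
    return [row[x:x + w] for row in disparity_map[y:y + h]]

def classify_gesture(disparity_roi, gesture_db):
    """Match the cropped disparity against a trained gesture database.
    Placeholder feature: mean disparity per column; placeholder matcher:
    nearest template by squared distance."""
    feature = [sum(col) / len(col) for col in zip(*disparity_roi)]
    best_name, best_dist = None, float("inf")
    for name, template in gesture_db.items():
        dist = sum((a - b) ** 2 for a, b in zip(feature, template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

In a real system the feature and matcher would be replaced by whatever representation the trained gesture database actually uses.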

SIGNAL was successfully demonstrated in a joint SwRI and AM General presentation at the 2012 TARDEC Robotics Rodeo. SIGNAL detects the following static arm gestures to execute vehicle commands:

  • Follow me
  • Offset-left
  • Offset-right
  • Halt (stop)
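The gesture-to-command mapping above could be represented as a simple lookup table. The command names below are invented for illustration; the summary does not describe SIGNAL's actual command interface.

```python
# Hypothetical mapping from recognized static arm gestures to vehicle
# commands; names are placeholders, not SIGNAL's real command set.
GESTURE_COMMANDS = {
    "follow_me": "FOLLOW",
    "offset_left": "OFFSET_LEFT",
    "offset_right": "OFFSET_RIGHT",
    "halt": "STOP",
}

def command_for(gesture):
    """Return the vehicle command for a gesture, or None if unrecognized."""
    return GESTURE_COMMANDS.get(gesture)
```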

Results

The SIGNAL architecture is designed to require no special identifiers, remain independent of the particular dismount, allow additional gestures to be added easily, and support hosting on a distributed computer system. SIGNAL detects gestures on a frame-by-frame basis at a real-time rate of 10 fps and robustly confirms each gesture before commanding the driverless vehicle.
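One common way to "robustly confirm" a gesture from per-frame detections is to debounce them over time, issuing a command only after the same gesture is seen in several consecutive frames. The sketch below assumes this approach and an illustrative threshold (5 frames, about 0.5 s at 10 fps); the source does not specify SIGNAL's actual confirmation logic.

```python
class GestureConfirmer:
    """Debounce frame-by-frame gesture classifications: confirm a gesture
    only after it appears in `required` consecutive frames. The threshold
    is illustrative, not taken from the SIGNAL description."""

    def __init__(self, required=5):
        self.required = required
        self.current = None   # gesture seen in the most recent frame
        self.count = 0        # consecutive frames it has been seen

    def update(self, gesture):
        """Feed one frame's classification; return the gesture once it is
        confirmed, otherwise None."""
        if gesture == self.current:
            self.count += 1
        else:
            self.current, self.count = gesture, 1
        return gesture if self.count >= self.required else None
```

A filter like this trades a fraction of a second of latency for resistance to single-frame misclassifications before any command reaches the vehicle.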


Christopher Mentzer, Manager, (210) 522-4240, cmentzer@swri.org

04/15/14