Perception Technologies for Dynamic Environments

We develop technology that enables automated systems to understand their environments and perceive the movements of dynamic objects such as vehicles, cyclists, and pedestrians. From machine learning algorithms to the integration of passive sensors such as cameras, SwRI’s perception solutions advance the capabilities of driverless vehicles and unmanned aerial systems for clients worldwide.

Overall Perception Technology Features

  • Monocular or stereo camera-based detection
  • Applicable to various sensor types, such as thermal cameras, infrared cameras, and lidar
  • Modular and configurable system, trainable for desired object detection applications (see the interface sketch after this list)
  • Verified state-of-the-art performance
  • Very low false positive rate
  • Real-time operation on NVIDIA® TX1
  • ITAR Exempt with commercialization rights; customer-focused intellectual property (IP) policy
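
As a rough illustration of what “modular and configurable” can mean in practice, the sketch below separates the sensor interface from the detection backend so that either can be swapped independently. All names here (SensorSource, Detector, Detection, run_once) are hypothetical and are not SwRI’s API.

    from dataclasses import dataclass
    from typing import List, Protocol, Tuple

    import numpy as np

    @dataclass
    class Detection:
        label: str                      # e.g., "bicycle", "pedestrian"
        confidence: float               # classifier score in [0, 1]
        box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels

    class SensorSource(Protocol):
        """Any imaging sensor (visible, thermal, infrared) that yields frames."""
        def read(self) -> np.ndarray: ...

    class Detector(Protocol):
        """Any trained detection backend, swappable without touching callers."""
        def detect(self, frame: np.ndarray) -> List[Detection]: ...

    def run_once(source: SensorSource, detector: Detector) -> List[Detection]:
        # The same call path serves a visible-light camera, a thermal camera,
        # or a lidar intensity image, because each is just a frame source.
        return detector.detect(source.read())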

Object Detection Technology

SwRI’s current object detection approach uses state-of-the-art deep learning algorithms, employing neural network designs such as convolutional neural networks (CNNs), ResNet-50, and the Single Shot Detector (SSD). Key technology features include the following (a minimal open-source sketch follows the list):

  • Intelligent SSD-based camera algorithms that accurately detect objects with high pose variation (e.g., bicycles, people, and animals) in cluttered environments
  • >99.95% accuracy
  • Detection, classification, and tracking algorithms built on a custom convolutional neural network that increases performance in cluttered environments
  • Semantic segmentation of desired objects
  • SwRI maintains a significant training data set relevant to on- and off-road objects
  • Operation speed greater than 50 fps (VGA images, 768-core GPU)
  • Objects currently detected, even when partially occluded, include but are not limited to:
    • Bicycles, Humans, Vehicles, Road Signs, and Work Zone Objects (Cones, Barrels, etc.)
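
The pipeline above is proprietary, but the mechanics of single-shot detection can be shown with off-the-shelf components. The sketch below, a stand-in rather than SwRI’s implementation, runs torchvision’s COCO-pretrained SSD300 over one VGA frame and keeps only high-confidence detections; the model choice and the 0.5 score threshold are illustrative assumptions.

    import torch
    from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

    # Off-the-shelf SSD detector pretrained on COCO (a stand-in model).
    weights = SSD300_VGG16_Weights.DEFAULT
    model = ssd300_vgg16(weights=weights).eval()

    # One VGA-sized RGB frame with values in [0, 1]; in practice this would
    # come from the camera feed rather than random data.
    frame = torch.rand(3, 480, 640)

    with torch.no_grad():
        pred = model([frame])[0]  # dict with "boxes", "labels", "scores"

    # Keep only confident detections, analogous to tuning for a low
    # false-positive rate.
    keep = pred["scores"] > 0.5
    names = [weights.meta["categories"][int(i)] for i in pred["labels"][keep]]
    print(list(zip(names, pred["scores"][keep].tolist())))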

Environment Understanding Technology

SwRI intelligently applies deep learning algorithms to produce full-scene segmentation, classifying every material in a scene. This enables autonomous systems to understand the entirety of a scene and make more informed decisions. Key features of this application include the following (a minimal sketch follows the list):

  • Multi-spectral classification, backed by a multi-year, multi-million-dollar foundational effort, that enables low-cost camera-based autonomous navigation in structured and unstructured environments
  • Low-cost camera-based recognition of transitions between structured and unstructured environments, enabling seamless navigation between them
  • Operation speed greater than 10 fps for full scene segmentation (1MP images, 768-core GPU)
  • Autonomous path recognition
  • Currently classified materials include, but are not limited to:
    • Pavement, Lane Lines and Road Markings, Dirt, Sidewalks, Sky, Road Railings, Foliage (e.g., Trees), Rocks, Grass, and Concrete Barriers
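
SwRI’s multi-spectral model is likewise proprietary, but full-scene segmentation itself can be sketched with public components. The example below uses torchvision’s DeepLabV3/ResNet-50, pretrained on a generic label set rather than the road materials listed above, to assign a class to every pixel; the model and frame size are assumptions for illustration only.

    import torch
    from torchvision.models.segmentation import (
        DeepLabV3_ResNet50_Weights,
        deeplabv3_resnet50,
    )

    # Off-the-shelf segmentation network (a stand-in model).
    weights = DeepLabV3_ResNet50_Weights.DEFAULT
    model = deeplabv3_resnet50(weights=weights).eval()

    # One ~1 MP RGB frame; the weights' preset transform resizes and
    # normalizes it as the pretrained network expects.
    frame = torch.rand(3, 720, 1280)
    batch = weights.transforms()(frame).unsqueeze(0)

    with torch.no_grad():
        logits = model(batch)["out"]  # shape [1, num_classes, H, W]

    # Per-pixel class map: every pixel receives a label, which is what lets
    # a planner separate drivable surface from obstacles.
    class_map = logits.argmax(dim=1)[0]
    print(class_map.shape, class_map.unique())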