Unmanned Aerial Vehicle Vision


Contact Information

Roger Lopez
Manager
Autonomous Systems & Controls
(210) 522-3832
rlopez@swri.org

Image: Unmanned aerial vehicle in flight over city. Buster® UAV Courtesy of Mission Technologies


 

Image: Current state-of-the-art UAVs use “traditional” air vehicle sensors for flight control.


Southwest Research Institute (SwRI) conducts research in vision for unmanned aerial vehicles (UAVs). This research supports improvements in flight control and in the usability of the UAV for its primary tasks, such as surveillance, while maintaining the low weight and power requirements that allow the UAV to be small and inexpensive with relatively long flight times.

Use of Visual Sensor Technology & Requirements for Flight Control

Current state-of-the-art unmanned aerial vehicles use traditional air vehicle sensors for flight control. Some examples are:

  • Global positioning system (GPS) sensors
  • Inertial sensors
  • Magnetometers
  • Pressure sensors
  • Air flow sensors

Next-generation UAVs have the potential to provide advanced capabilities, such as:

  • Flying in urban canyons
  • Avoiding obstacles
  • Detecting and recognizing potential targets
  • Flying in a leader/follower formation
  • Conducting automated visual searches
  • Achieving localization using geo-recognition
  • Flying inside buildings or other structures

These next-generation capabilities will require additional types of data that could be provided by adding sensors such as:

  • Laser range finders
  • Ultrasonic sensors
  • Acoustic sensors
  • Radar systems

These sensors would increase the size and cost of the overall system and are not feasible on a small UAV. However, the data needed for these advanced capabilities, as well as for flight control, could be provided by visual sensors.

A simulation environment was used to implement image processing algorithms to detect the UAV pitch, roll, and ground speed using simulated images. The simulation environment included:

  • A Simulink® model of a generic UAV
  • Flight dynamics
  • Flight controller
  • Sensors

The implementation of each algorithm depended on the state being estimated. For example, the UAV roll angle was determined by detecting the angle of the horizon in front of the UAV, while the pitch angle was determined by detecting the angle of the horizon to the left or right of the UAV. Ground speed was determined by measuring the rate at which features on the ground travelled past the UAV and required the UAV's altitude to be known.
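
As an illustration of the horizon-based approach (a minimal sketch, not the actual SwRI implementation), the Python/OpenCV fragment below treats the longest straight edge in a forward-looking frame as the horizon and reads the roll angle from its slope; the function name, thresholds, and use of OpenCV are assumptions made for this example.

    import cv2
    import numpy as np

    def estimate_roll_from_horizon(frame_bgr):
        """Estimate roll angle (degrees) from a forward-looking camera frame.

        Assumes the horizon is the dominant straight edge in the image, which
        holds for simple simulated scenes but not in general.
        """
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)

        # Fit straight lines to the edge map; take the longest as the horizon.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                                minLineLength=gray.shape[1] // 2, maxLineGap=20)
        if lines is None:
            return None  # horizon not found; fall back to other sensors

        x1, y1, x2, y2 = max(lines[:, 0, :],
                             key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
        # Roll is the horizon angle relative to the image horizontal
        # (sign convention depends on the image coordinate system).
        return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))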

The pitch and roll algorithms were limited by the flatness of the horizon, while the ground speed algorithm was limited by the level of detail in the ground images. Within these limitations, the algorithms yielded results close to the actual states of the aircraft. An estimation algorithm, such as a Kalman filter, would also be necessary to use these vision-derived estimates for flight control.
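
A minimal illustration of such an estimator is the scalar Kalman filter sketched below, which smooths a noisy per-frame roll measurement; the noise variances are placeholder values, and a real flight controller would fuse the camera-derived estimates with the inertial and other sensors listed above.

    import numpy as np

    class ScalarKalman:
        """Minimal scalar Kalman filter for smoothing a vision-derived state
        (e.g., roll angle). Noise values are placeholders for illustration."""

        def __init__(self, q=0.01, r=4.0):
            self.q, self.r = q, r      # process / measurement noise variances
            self.x, self.p = 0.0, 1.0  # state estimate and its variance

        def update(self, z):
            # Predict with a constant-state model, then correct with measurement z.
            self.p += self.q
            k = self.p / (self.p + self.r)   # Kalman gain
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
            return self.x

    # Usage: feed each per-frame roll measurement through the filter.
    kf = ScalarKalman()
    smoothed = [kf.update(z) for z in (2.1, 1.7, 2.4, 15.0, 2.0)]  # 15.0 is an outlier frame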

A Digital Camera Array for Small UAVs

UAVs are commonly used in surveillance operations. Larger UAVs currently utilize gimbaled pan/tilt/zoom cameras for reconnaissance and monitoring. These cameras allow large areas to be viewed by panning, tilting, or zooming a camera to examine a region of interest. Current technologies allow for small, light UAVs to be deployed in the field; however, because of weight and power constraints, the pan/tilt/zoom cameras used on larger UAVs are not practical for smaller models.

The team designed and simulated a digital camera array with multiple imaging modes and built a demonstration prototype with digital pan/tilt/zoom capabilities. This solution used a fixed array of four cameras, each one oriented to view a different part of the terrain below. A larger image was formed by projecting the processed images from these four cameras to a common ground plane from which specific regions of interest could be selected by the user. The necessary transformations were performed by an on-board processor, and a 640 x 480 pixel image was sent to the ground station via a video link.
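
As a rough illustration of digital pan/tilt/zoom (a hypothetical sketch, not the prototype's code), the fragment below selects a window from a stitched ground-plane mosaic and rescales it to the fixed 640 x 480 downlink resolution:

    import cv2
    import numpy as np

    def digital_ptz(mosaic, center, zoom, out_size=(640, 480)):
        """Emulate pan/tilt/zoom by windowing the stitched ground-plane mosaic.

        mosaic   : full stitched image built from the four cameras
        center   : (x, y) pan/tilt position in mosaic pixels
        zoom     : 1.0 shows the subsampled overall view, larger values zoom in
        out_size : fixed downlink resolution (640 x 480 in the prototype)
        """
        out_w, out_h = out_size
        win_w = int(mosaic.shape[1] / zoom)
        win_h = int(mosaic.shape[0] / zoom)
        x0 = int(np.clip(center[0] - win_w // 2, 0, mosaic.shape[1] - win_w))
        y0 = int(np.clip(center[1] - win_h // 2, 0, mosaic.shape[0] - win_h))
        window = mosaic[y0:y0 + win_h, x0:x0 + win_w]
        # Resize the selected window to the fixed resolution for the video link.
        return cv2.resize(window, (out_w, out_h), interpolation=cv2.INTER_AREA)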

There were two primary components:

  • A software simulation that could be used to quickly develop algorithms
  • A hardware component that could demonstrate the functionality of the design

First, the team created a software simulation to test and evaluate algorithms and camera arrangements while the hardware was being developed. The on-board processor was simulated in MATLAB® with USB connections to four digital cameras. To test the image correction algorithms, the team wrote MATLAB scripts to efficiently correct camera images through a look-up table.

Before the images from the cameras could be stitched together, the effects of lens distortion and perspective needed to be corrected. Wide-angle lenses cause a distinctive distortion known as barrel distortion, which is characterized by a bulge in the center of the image. The Camera Calibration Toolbox for MATLAB, developed at Caltech, was used to correct for lens distortion by analyzing its effects on calibration images of a checkerboard and applying this analysis to correct the camera images.
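
The same calibrate-then-undistort workflow can be sketched with OpenCV standing in for the Caltech toolbox; the checkerboard size and file names below are illustrative assumptions, not details from the project.

    import glob
    import cv2
    import numpy as np

    # Calibrate from checkerboard images, then undistort. Board size (9x6 inner
    # corners) and file pattern are illustrative values.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib/*.png"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

    # Undo the barrel distortion introduced by the wide-angle lens.
    undistorted = cv2.undistort(cv2.imread("frame.png"), K, dist)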

Because the cameras are oriented differently, the individual images needed to be projected to a common reference plane before stitching could occur. This was done by mapping a quadrilateral region of known size in the calibration image to a rectangle in the final image. The camera views were then stitched together to form the overall view, which could be subsampled for a wide-area view or windowed to zoom in on a region of interest.
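
The quadrilateral-to-rectangle mapping is a planar homography, which can be sketched as follows (function and argument names are hypothetical):

    import cv2
    import numpy as np

    def project_to_ground_plane(img, quad_px, rect_size):
        """Map a quadrilateral of known ground size, as seen in one camera image,
        onto an axis-aligned rectangle in the common reference plane.

        quad_px   : four (x, y) pixel corners of the calibration quadrilateral
        rect_size : (width, height) of the output rectangle in pixels
        """
        w, h = rect_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(np.float32(quad_px), dst)
        return cv2.warpPerspective(img, H, (w, h))

    # Each of the four cameras gets its own homography; the warped outputs are
    # then placed side by side (or blended where they overlap) to form the
    # stitched view.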

These image processing steps are very computationally intensive. Because the cameras are fixed with respect to each other, the corrections should be identical frame to frame, so one look-up table can be generated to map pixels to their correct positions after the transformations.
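
Conceptually, that look-up table is computed once from the calibration data and then applied to every frame with a single interpolation pass; the sketch below assumes the maps have already been filled.

    import cv2
    import numpy as np

    # Because the cameras never move relative to one another, the combined
    # undistortion and projection can be baked into one pixel look-up table.
    # map_x/map_y give, for each output pixel, the source pixel to sample from.
    h_out, w_out = 480, 640                       # output size sent to the ground station
    map_x = np.zeros((h_out, w_out), np.float32)  # filled once from the calibration data
    map_y = np.zeros((h_out, w_out), np.float32)

    def apply_lut(frame):
        # One cheap interpolation pass per frame instead of full reprojection.
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)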

 

                      Constraint         Extended Goal     Achieved
Power Consumption     1 Watt             0.75 Watts        1 Watt
Weight                1 Pound            8 Ounces          < 8 Ounces
Field of View         75 Degrees         120 Degrees       120 Degrees
Frame Rate            15 Frames/Sec      30 Frames/Sec
Resolution            12 Megapixels      20 Megapixels
Bandwidth             NTSC Video Link

 

Based on the constraints for the hardware design, appropriate cameras, lenses, and a processing unit were selected. For on-board processing, an Analog Devices fixed-point digital signal processor (DSP) was selected for its low power consumption and adaptability.

The team designed a system for processing image data from multiple cameras that allows pan, tilt, and zoom capabilities on a platform that could be smaller, lighter, and more power efficient than current solutions. The software simulation allows streaming of processed, stitched images, and the DSP is set up to perform basic image processing. Though additional work must be done to obtain a fully working prototype, this effort gives a strong indication of the system's capabilities and its potential for future development, including additional imaging modes.

Image: Left: Optical Field of View (FOV); Right: User's View of Selected FOV


Related Terminology

unmanned aerial vehicle vision  •  visual sensor technology  •  digital camera array  •  optical tracking  •  auto-land  •  flight control  •  next-generation UAV

Benefiting government, industry and the public through innovative science and technology
Southwest Research Institute® (SwRI®), headquartered in San Antonio, Texas, is a multidisciplinary, independent, nonprofit, applied engineering and physical sciences research and development organization with 10 technical divisions.
04/15/14