Quick Look

Real-Time Graphics Processing Unit-Based Video Blending
for Mixed Reality Training Systems, 07-9502

Principal Investigators
J. Brian Fisher
Warren C. Couvillion
Eric C. Peterson
Ryan C. Logan

Inclusive Dates:  08/11/04 – 12/11/04

Background - Under a previous internal research effort, the Training, Simulation and Performance Improvement Division developed a novel approach for integrating real objects into virtual environments using a video see-through head-mounted display (VSTHMD) and chromakeying to combine real and computer-generated images into a single resultant image. Because of hardware limitations at the time, the approach suffered from poor reliability and usability: the initial concept relied on separate hardware components to convert, mix, and scale imagery, which increased system complexity and reduced portability. Additionally, the miniature camera used in the VSTHMD had inherent distortion, particularly near the edges of the field of view, resulting in poor registration between the real and virtual worlds.

Approach - The objective of this project was to use the SwRI-owned Graphics Interface Library (GraIL) together with commercial off-the-shelf (COTS) miniature cameras and graphics cards to eliminate the separate conversion, mixing, and scaling hardware and perform the blending entirely on a personal computer.

Accomplishments - We successfully developed and demonstrated methods that blended real and computer-generated images using real-time chromakeying and shader programs that run on the Graphics Processing Unit (GPU) of COTS graphics cards. A simple interface was developed allowing a user to select a chromakey color in an image captured by a USB camera and to individually modify tolerances for color hue, saturation, and brightness to obtain the desired video blending. In addition, a shader application was written to measure the distortion of a camera and then interactively correct for the distortion to improve the registration of real and virtual objects.
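As an illustration only, the sketch below reproduces the per-pixel keying logic on the CPU with Python and NumPy; in the system described above, the equivalent logic ran as shader programs on the GPU. All function and parameter names (chromakey_blend, tol_h, tol_s, tol_v, and so on) are hypothetical and are not taken from GraIL or the project software.

    import numpy as np

    def rgb_to_hsv(rgb):
        """Convert an HxWx3 float RGB image (values in 0..1) to HSV,
        with hue in degrees (0..360)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        maxc = rgb.max(axis=-1)
        minc = rgb.min(axis=-1)
        delta = maxc - minc
        hue = np.zeros_like(maxc)
        mask = delta > 0
        # Hue depends on which channel holds the maximum value.
        rmax = mask & (maxc == r)
        gmax = mask & (maxc == g) & ~rmax
        bmax = mask & ~rmax & ~gmax
        hue[rmax] = (60.0 * ((g - b)[rmax] / delta[rmax])) % 360.0
        hue[gmax] = 60.0 * ((b - r)[gmax] / delta[gmax]) + 120.0
        hue[bmax] = 60.0 * ((r - g)[bmax] / delta[bmax]) + 240.0
        sat = np.where(maxc > 0, delta / np.maximum(maxc, 1e-6), 0.0)
        val = maxc
        return hue, sat, val

    def chromakey_blend(camera, virtual, key_rgb, tol_h=20.0, tol_s=0.3, tol_v=0.3):
        """Replace camera pixels near the key color with the virtual image.

        camera, virtual: HxWx3 float images in [0, 1].
        key_rgb: the user-selected chromakey color (3 floats in [0, 1]).
        tol_h, tol_s, tol_v: independent tolerances for hue (degrees),
        saturation, and brightness, mirroring the per-channel controls
        described above (defaults are arbitrary examples).
        """
        h, s, v = rgb_to_hsv(camera)
        kh, ks, kv = rgb_to_hsv(np.asarray(key_rgb, dtype=float).reshape(1, 1, 3))
        # Hue differences wrap around 360 degrees.
        dh = np.abs(h - kh)
        dh = np.minimum(dh, 360.0 - dh)
        keyed = (dh <= tol_h) & (np.abs(s - ks) <= tol_s) & (np.abs(v - kv) <= tol_v)
        # Where the camera pixel matches the key color, show the virtual
        # scene; elsewhere keep the real (camera) pixel.
        return np.where(keyed[..., None], virtual, camera)

The distortion correction can be sketched the same way: each output pixel is mapped back to the camera-image coordinate it should sample using a simple radial model. The coefficients k1 and k2 below are assumed placeholders, not measured values from this effort.

    def distorted_source_coord(u, v, cx, cy, k1, k2):
        """Map an undistorted output pixel (u, v) back to the distorted
        camera-image coordinate to sample, using a radial model centered
        at (cx, cy). k1 and k2 are example coefficients only."""
        x, y = u - cx, v - cy
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return cx + x * scale, cy + y * scale

In a GPU implementation, both the keying test and the radial lookup would be evaluated per fragment against the camera and virtual-scene textures, which is what keeps the entire blending path on the graphics card.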

Shown here are an image of real objects in front of a chromakey backdrop, captured by a miniature camera (left), and the resulting blended image with virtual objects added (right), generated using the techniques developed during this effort.
