Real-Time Graphics Processing Unit-Based Video Blending
Inclusive Dates: 08/11/04 to 12/11/04
Background - Under a previous internal research effort, the Training, Simulation and Performance Improvement Division developed a novel approach for integrating real objects into virtual environments using a video see-through head-mounted display (VSTHMD) and chromakeying to combine real and computer-generated images into a single resultant image. Because of hardware limitations at the time, the approach suffered from poor reliability and usability. In the initial concept, separate hardware components were used to convert, mix, and scale imagery, which increased system complexity and reduced portability and usability. Additionally, the miniature camera used in the VSTHMD had inherent distortion, particularly near the edges of the field of view, resulting in poor registration between the real and virtual worlds.
Approach - The objective of this project was to use the SwRI-owned Graphics Interface Library (GraIL) together with commercial off-the-shelf (COTS) miniature cameras and graphics cards to eliminate the separate conversion, mixing, and scaling hardware and to perform the blending entirely on a personal computer.
Accomplishments - We successfully developed and demonstrated methods that blend real and computer-generated images in real time using chromakeying and shader programs that run on the Graphics Processing Unit (GPU) of COTS graphics cards. A simple interface was developed that allows a user to select a chromakey color in an image captured by a USB camera and to individually adjust tolerances for color hue, saturation, and brightness to obtain the desired video blending. In addition, a shader application was written to measure the distortion of a camera and then interactively correct for that distortion, improving the registration of real and virtual objects.
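The per-pixel chromakey decision described above can be sketched as follows. This is a minimal CPU illustration of the idea, not the actual GPU shader; the function and parameter names (`chroma_mask`, `hue_tol`, `sat_tol`, `val_tol`) and the specific tolerance values are assumptions for illustration.

```python
import colorsys

def chroma_mask(pixel_rgb, key_rgb, hue_tol, sat_tol, val_tol):
    """Return True if a pixel matches the key color within separate
    HSV tolerances, mirroring the per-channel tolerances the user
    can adjust in the interface (parameter names are hypothetical)."""
    h, s, v = colorsys.rgb_to_hsv(*pixel_rgb)
    kh, ks, kv = colorsys.rgb_to_hsv(*key_rgb)
    # Hue is circular, so compare along the shorter arc of the hue wheel.
    dh = min(abs(h - kh), 1.0 - abs(h - kh))
    return dh <= hue_tol and abs(s - ks) <= sat_tol and abs(v - kv) <= val_tol

def blend(real_px, virtual_px, key_rgb, tol=(0.05, 0.3, 0.3)):
    """Where the camera pixel matches the key color, show the
    computer-generated pixel; otherwise keep the real-world pixel."""
    return virtual_px if chroma_mask(real_px, key_rgb, *tol) else real_px

# Example with a pure green key: a green camera pixel is replaced
# by the virtual pixel, while a red pixel is kept from the camera.
green = (0.0, 1.0, 0.0)
keyed = blend(green, (0.2, 0.2, 0.8), key_rgb=green)
kept = blend((0.9, 0.1, 0.1), (0.2, 0.2, 0.8), key_rgb=green)
```

On the GPU, the same test runs in a fragment shader over every pixel of the camera texture, which is what makes the blending real-time.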
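The distortion-correction step can likewise be sketched in a simplified form. The report does not specify the camera model used, so this assumes a common two-coefficient radial model (x_d = x_u·(1 + k1·r² + k2·r⁴)) and inverts it by fixed-point iteration; the coefficients and function name are illustrative only.

```python
def undistort_point(xd, yd, k1, k2, cx=0.0, cy=0.0):
    """Map a distorted normalized image point back toward its ideal
    position under an assumed radial model. The inverse has no closed
    form, so we iterate x_u = x_d / (1 + k1*r^2 + k2*r^4), where r is
    the current estimate of the undistorted radius from center (cx, cy)."""
    x, y = xd - cx, yd - cy
    xu, yu = x, y
    for _ in range(10):  # a few iterations converge for mild distortion
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / factor, y / factor
    return xu + cx, yu + cy
```

In the shader version, the same mapping is applied per pixel when sampling the camera texture, so points near the edge of the field of view, where the miniature camera's distortion is worst, land where the virtual scene expects them.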