Seeing in Black and White

New light-based technology allows surface measurement without contact

By Ernest Franke, PhD


Dr. Ernest Franke is an Institute engineer in the Manufacturing Systems Department within the Automation and Data Systems Division. He has been working in the area of machine vision and the development of specialized sensors and algorithms for nearly 20 years. In recent years his research has been in multispectral sensor data fusion and 3-D sensor investigations.


Accurate measurement of surface topography is an important element in the automated inspection of parts or in the reverse-engineering of parts no longer in production.

A new system developed by engineers at Southwest Research Institute can accomplish this--and also avoid the risk of damaging the object being measured--by mapping a series of points on the object's surface without touching it. The new, patent-pending method is based on computer analysis of carefully structured, moving light patterns projected onto the object from an optic system.


Alternating bars of light and shadow aid the computer-based surface measurement system developed at SwRI.


Existing techniques for this type of measurement include coordinate measuring machines and laser displacement gauges. However, these and similar methods can only measure the surface one point at a time. Measuring the entire surface would therefore require the time-consuming process of moving either the part or the instrument for each successive measurement.

There remains a need for a system that can map an array of surface points simultaneously. The SwRI team developed a novel application of the principle of structured light projection to create what amounts to a digital, three-dimensional (3-D) measuring process which, by simply varying the projector and camera lenses, may accurately map the surfaces of objects as small as a microscopic structure or as large as an airplane flap.

Structured light projection, in which a line, grid or other regular pattern is optically projected onto the surface of interest, has been used with machine vision techniques in the past. However, the resulting patterns were difficult to interpret and surface discontinuities or irregularities could result in ambiguous measurements.

Such techniques, which include Moiré interferometry, Fourier transform profilometry and other grid projection methods, require that a complete image (or many complete images) of the surface be analyzed to measure the 3-D position of even a single point on the surface. The time required for data acquisition and analysis with these methods is lengthy. Moreover, if some flaw or irregularity results in a discontinuous step in the surface, a corresponding jump will occur in the grid pattern, possibly making it impossible to identify gridlines uniquely across the discontinuity and thus preventing accurate measurement.

The dynamic structured light method developed at the Institute extends and improves the structured light approach and combines it with a new computational method to overcome these difficulties. A rotating, striped pattern of light and shadow is moved over the surface in a predetermined way by a light projector shining through a rotating grid plate. Changes in the resulting patterns of light and shadow as they pass over the object are recorded at the desired measurement points. The surface elevation at any location can then be determined by measuring the angular changes in these light-and-dark stripes as they pass over that location, without considering other points on the surface.

This development replaces complex, two-dimensional pattern analysis with a large number of independent, one-dimensional analysis problems. It also removes the ambiguities associated with surface discontinuities. Because it analyzes changing light patterns, the technique is known as the dynamic structured light, or DSL, 3-D imaging method.
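The exact per-pixel computation used in the DSL system is not spelled out in this article. The following sketch, in Python, substitutes a generic least-squares sinusoid fit for the stripe analysis and is intended only to illustrate how the measurement decomposes into independent one-dimensional problems, one per pixel; the array shapes and the assumed sinusoidal modulation are illustrative assumptions, not the Institute's algorithm.

    import numpy as np

    def per_pixel_phase(frames, grid_angles, cycles_per_rev=1.0):
        # frames:         (N, H, W) stack of grayscale images recorded while
        #                 the grid rotates through the N angles in grid_angles
        #                 (radians).
        # cycles_per_rev: assumed number of light/dark cycles a pixel sees per
        #                 revolution -- a placeholder value for illustration.
        # Returns an (H, W) array of phase estimates, one per pixel, each
        # computed without reference to any other pixel (a stand-in for the
        # 1-D analysis described above, not the published DSL computation).
        frames = np.asarray(frames, dtype=float)
        w = cycles_per_rev * np.asarray(grid_angles, dtype=float)
        # Ordinary least-squares projection of each pixel's intensity history
        # onto sine and cosine at the assumed modulation frequency.
        s = np.tensordot(np.sin(w), frames, axes=(0, 0))
        c = np.tensordot(np.cos(w), frames, axes=(0, 0))
        return np.arctan2(s, c)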

Theory of Operation

The measurement system comprises a video camera and a projection system. Illumination from a light source passes through an optical grating and a lens and is then focused onto the surface. The grating that produces the optical pattern rotates at a constant rate, and the resulting black-and-white images are recorded by a camera installed at an offset location.

The projected grid pattern rotates around a central axis. Consider an imaginary line that passes through the axis of rotation and crosses each line of the grating at a right angle; the grating lines can be numbered 1, 2, 3 ... n according to their distance from the center of rotation along this line. As the grating rotates, projected rays passing through the intersections of this normal line and the grating lines trace out a set of quadric surfaces in space. These quadric surfaces can likewise be designated 1, 2, 3 ... n, corresponding to the grating lines numbered from the center of rotation.
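Under the simplifying assumption of an ideal pinhole projector whose center lies on the rotation axis, each of these swept surfaces is a circular cone, one familiar kind of quadric. The short Python function below writes out that implicit equation; it is an idealized illustration of why rotating grating lines generate quadric surfaces, not the general form used in the calibrated system.

    import numpy as np

    def swept_cone_value(point, apex, axis, half_angle):
        # Implicit equation of the surface swept by one grating line under an
        # idealized pinhole projector whose center ("apex") lies on the grid's
        # rotation axis -- an assumption made here for illustration.  A grating
        # line at radius r from the axis, at distance d from the apex, sweeps a
        # circular cone of half-angle arctan(r / d) as the grid rotates.
        # Returns ((X - C) . a)^2 - cos^2(theta) * |X - C|^2, which is zero for
        # points X on the cone and is quadratic in X (hence a quadric surface).
        v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
        a = np.asarray(axis, dtype=float)
        a = a / np.linalg.norm(a)
        return float(np.dot(v, a) ** 2 - np.cos(half_angle) ** 2 * np.dot(v, v))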

Originally, engineers proposed that precision measurement of three-dimensional objects be based on the premise that both the grid projection and the camera imaging systems used parallel rays (collimated beams) rather than more conventional projection optics. Such a configuration would result in simpler geometric constructions and computations, but it was undesirable for two reasons. First, it is difficult to achieve parallel projection for the grid lines and the camera system without either sacrificing depth of focus or severely constraining the measurement space for target objects. Second, the parallel-projection formulation relies on precise knowledge of the spatial relationships, such as orientation and translation, between system components; measuring these relationships accurately makes system setup potentially complex and time-consuming.


This schematic shows how a ray of light shines through a rotating grid and is focused onto an object, while an obliquely mounted camera records the changing patterns of light and shadow as they pass over the object. The point where the surface of the object, the ray from the camera, and the generated quadric surface all intersect uniquely defines a three-dimensional location on the surface of the object.


Calibration Method

For these reasons, SwRI engineers adopted a more general method for calibrating the system based on the intersection of quadric surfaces with camera rays. This method does not rely on assumptions inherent in or implied by parallel projection, does not require an accurate measurement of the orientations and locations of system components at calibration time, and is generally applicable to any projection system in which projected points on the rotating grid generate quadric surfaces.

The calibration method consists of recording data at three or more reference planes and then calculating the equations for the family of 3-D quadric surfaces that are generated by the position of grid lines sweeping around the axis of rotation of the grid. This method has the additional advantage of compensating for first- and second-order distortion in the optical projection system. After calibration, the height of a given location on the surface is measured by recording a sequence of images as the grid rotates and calculating the position at which the camera ray intersects the quadric surface projected onto that location on the target. Interpolating between the generated quadric surfaces increases the accuracy of the measurement.
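The calibration data format used in the SwRI system is not described in detail here. Assuming, for illustration, that each calibrated quadric is stored as a symmetric 4-by-4 matrix Q in homogeneous coordinates, the final intersection step reduces to solving a quadratic equation along the camera ray, as in the Python sketch below; the fitting of Q from the reference-plane data is not shown.

    import numpy as np

    def intersect_ray_quadric(origin, direction, Q):
        # Intersect a camera ray X(t) = origin + t * direction with a quadric
        # written in homogeneous form X^T Q X = 0, where Q is a symmetric
        # 4x4 matrix (assumed here to come from the calibration fit).
        # Substituting the ray into the quadric gives a quadratic in t;
        # the real roots are the ray parameters of the intersection points.
        o = np.append(np.asarray(origin, dtype=float), 1.0)     # homogeneous point
        d = np.append(np.asarray(direction, dtype=float), 0.0)  # direction, w = 0
        a = d @ Q @ d
        b = 2.0 * (o @ Q @ d)
        c = o @ Q @ o
        if abs(a) < 1e-12:                # degenerate case: at most one root
            return [] if abs(b) < 1e-12 else [-c / b]
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return []                     # the ray misses this quadric
        root = disc ** 0.5
        return sorted([(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)])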

The lenses used for the projection system and the imaging camera determine the working volume (the 3-D zone in which effective measurement is possible) as well as the measurement resolution. Design calculations are incorporated into an Excel® spreadsheet so that systems can be customized for different applications. Input to the spreadsheet consists of parameters specifying the characteristics and geometry of the rotating grid, the projection system, and the camera. The spreadsheet calculates the theoretical vertical resolution of a system constructed with the given parameters. The theoretical resolution is not constant at all points in the field of view but varies from pixel to pixel, increasing with distance from the grating rotational axis.
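The contents of the design spreadsheet are not reproduced in this article. As a stand-in, the small Python function below shows the kind of calculation involved, using a generic triangulation relationship in which a height change produces a lateral stripe shift proportional to the tangent of the angle between the projection and viewing directions; the parameter names and example values are illustrative.

    import math

    def vertical_resolution(pixel_size_mm, object_mm_per_image_mm,
                            triangulation_deg, subpixel_fraction=1.0):
        # Rough depth resolution for a triangulation-style structured-light
        # setup: the smallest lateral stripe shift detectable at the object,
        # divided by tan(angle between projection and viewing directions).
        # This is a generic stand-in for one line of the design spreadsheet
        # described above, not its actual contents.
        lateral_step = pixel_size_mm * object_mm_per_image_mm * subpixel_fraction
        return lateral_step / math.tan(math.radians(triangulation_deg))

    # Example (illustrative numbers only): a 7.4-micron pixel, 10:1 object
    # scale, a 30-degree triangulation angle, and 1/10-pixel edge location
    # give roughly 0.013 mm of theoretical depth resolution.
    # vertical_resolution(0.0074, 10.0, 30.0, 0.1)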

Engineers constructed a prototype system to verify the concept and test the validity of the spreadsheet design technique. They then performed a series of static tests to characterize the projection system and verify that the grid pattern was projected in focus across the measurement space. After the static tests were completed, engineers used the projection lens setup to perform dynamic tests in which they analyzed the effects of projecting the rotating grid lines onto a U-shaped object.

Measurement of Small Parts

An early test of the system involved measuring the surface profile of a coin. The camera and grid projection geometry were arranged to provide a working measurement volume of about 1.2 inches by 1.2 inches by 1 inch, and the system was calibrated.

SwRI engineers placed a quarter on the measurement stage, then recorded and processed the images. The resulting rendered surface showed surface profile features down to 0.001 inch. Some points at the edge of the coin and near its shadow could not be calculated, either because they fell in a shadowed area or because of light reflected from the coin; these were displayed as spikes extending above and below the image. In practice, objects to be measured would be oriented to eliminate or minimize reflection, or multiple views would be used to ensure that good data are generated at all points.
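The article does not describe how such spikes are handled in software. One generic possibility, sketched below in Python, is to flag height samples that differ sharply from a local median, with the window size and threshold chosen for the application; this is an illustrative post-processing step, not part of the published DSL method.

    import numpy as np
    from scipy.ndimage import median_filter

    def mask_spikes(z_map, window=5, threshold_in=0.005):
        # Flag height samples that deviate sharply from their neighborhood,
        # such as the spikes caused by shadowed or strongly reflective points.
        # The window size and threshold (in inches) are illustrative values.
        # Returns a boolean mask that is True where the data look valid.
        z = np.asarray(z_map, dtype=float)
        smoothed = median_filter(z, size=window)
        return np.abs(z - smoothed) <= threshold_in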

Measurements also were taken successfully of a more geometrically complex object: a machined part approximately 1 inch long by 1 inch wide, with a vertical plate, a horizontal tab, and a counterbored hole drilled in the horizontal tab. The part was rendered in full 3-D, with the counterbore visible in the drilled hole and the sloping surfaces of the vertical plate readily apparent.

Three-dimensional measurement of a surface can be used for quality inspection of manufactured parts or to verify that assembly is correct. Using an optical, non-contact method allows an automated inspection process to verify values at hundreds or thousands of points. The DSL 3-D measurement system also can be used in reverse-engineering mechanical parts. Three-dimensional images can be measured from several angles and the surfaces merged to form a complete object. Objects described in this way can be imported into computer-assisted design programs or into rapid prototyping machines that generate parts from the mathematical description.


The profile of George Washington emerges on the rendered surface of a quarter that has been scanned using the DSL 3-D method. Spikes above and below the image are anomalies caused by shadows and light reflections.


Large Area Measurements

For large-area testing, the DSL 3-D measurement system was reconfigured with shorter focal-length lenses to provide a larger projected grid area and a larger field of view for the camera. Institute engineers used a progressive scan digital camera so that images could be acquired rapidly. The viewing geometry was calibrated for a 4-foot by 8-foot field of view. Even larger objects could be measured by using a more powerful lamp for the projector system.

As a test object, engineers used an aircraft elevator that had been damaged by hail. The elevator surface was approximately 2.5 feet by 3 feet and was flat except for numerous shallow depressions caused by impacting hailstones. Capturing the images took about one minute. Calculating the surface data took approximately 40 milliseconds per point. A wire-frame representation of the hail-dented elevator surface was derived with a depth resolution of 0.005 inch. In a rendered 3-D view of the surface, the depressions could be seen clearly and their circular shape was apparent.

A similarly produced 3-D image was made of an aircraft flap with a single dent in its surface. The dent had previously been measured as 0.120 inch deep using a contact-method dial indicator moved across the surface. With the DSL 3-D method, engineers calculated the depth of the dent from the difference between two parallel height profiles, one passing through the deepest part of the dented area and the other measured a few inches away from the dent; the result, 0.118 inch, was in excellent agreement with the contact measurement. The DSL 3-D method also could be used to determine the length of the dent mathematically or to show those areas where the surface is depressed by more than a specified amount. Such measurements could be used to assess damage to an aircraft surface, plan repair operations, or verify that a replacement part has been fabricated correctly.
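Written out as a simple calculation, the depth estimate amounts to differencing the two profiles. The Python sketch below shows the idea, with array names chosen for illustration rather than taken from the Institute's software.

    import numpy as np

    def dent_depth(profile_through_dent, reference_profile):
        # Depth of a dent estimated from two parallel height profiles: one
        # passing through the deepest part of the dented area and one taken
        # on undamaged surface a short distance away, as described above.
        # The undamaged profile sets the reference level; the minimum of the
        # dented profile gives the deepest point.
        ref_level = np.median(np.asarray(reference_profile, dtype=float))
        return float(ref_level - np.min(np.asarray(profile_through_dent, dtype=float)))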


Dr. Michael P. Rigney holds an aircraft flap that contains a dent. The accompanying 3-D image illustrates the dent (exaggerated here by magnifying the image's vertical dimension). A dial indicator directly measured the dent depth at 0.12 inch, while the DSL 3-D process calculated a depth of 0.118 inch.


Future Applications for Microscopic Object Measurement

As with photography or other optics-based processes, the size range of objects that can be measured is a function of lenses and focal lengths. The limits for practical applications are bounded by the optical power that can be projected (for large objects) and by microscope objective lens design and optical diffraction (for very small objects). A new internal research program at SwRI is under way to explore the effectiveness of DSL 3-D imaging as a means of mapping the surface features of microscopic objects, such as micro-electromechanical systems (MEMS). By projecting and imaging the optical pattern through microscope objectives, engineers expect to achieve measurement accuracy better than 100 nanometers.

One of the original system's greatest drawbacks was the lengthy processing time imposed by the computer's limited capability. For the new internal research program, the Institute is developing high-speed image processing algorithms to run on a Pentium™-based PC, so that the computation time for 3-D mapping can be reduced from as long as one hour to less than one minute for the same number of measurement points. This will allow high-performance 3-D imaging systems to be assembled from relatively inexpensive, off-the-shelf components such as a PC, a video camera, an optical grating, and projection lenses.

Acknowledgments

The author would like to acknowledge SwRI staff members Dr. Michael P. Rigney, Joseph N. Mitchell, and Dr. Michael J. Magee, who have made substantial contributions to the internal research project described in this article.

Published in the Fall 2002 issue of Technology Today®, a publication of Southwest Research Institute. For more information, contact Joe Fohn.
