Visual Servoing for Flexible Robotic Assembly Tasks, 10-9410

Principal Investigators
Brent Nowak
Michael P. Rigney
Jeremy Zoss

Inclusive Dates:  07/01/03 - 04/01/05

Background - Processor computational power and speed are reaching a threshold that should enable real-time, closed-loop, vision-based control, in which both the control loop and the image processing must execute in real time. This work addressed the extraction of information from a visual image and the use of that information to guide a robotic manipulator in the performance of a task. This integration of machine vision technology and robot control theory is termed "visual servoing." The resulting robotic system should achieve positional accuracies greater than those of an open-loop robot controller and should function in loosely constrained environments. Integrating image-derived data with a robot controller is a nontrivial problem: previous research programs have explored solutions to the undesirable robot motions and excessive joint velocities that can occur when motions are commanded from the translation of two-dimensional image features into the three-dimensional Cartesian robot workspace.

Approach - This project investigated two families of visual servoing approaches, Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS). IBVS, PBVS, and their hybrids have been examined in academic research for the past decade, but largely under controlled laboratory conditions. The objective of this research was to baseline the capabilities of an integrated, off-the-shelf industrial robot, controller, and vision system in real-world manufacturing assembly applications.
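
For illustration only, the following Python sketch shows the classical point-feature IBVS control law, v = -gain * pinv(L) * (s - s*), that underlies the image-based approach. It assumes normalized image coordinates and known (or estimated) point depths; the function names and sample coordinates are invented for this example and are not the project's implementation.

import numpy as np

def interaction_matrix(pts, depths):
    # Stack the standard 2x6 point-feature interaction (image Jacobian) matrices.
    # pts: (N, 2) normalized image coordinates (x, y); depths: (N,) point depths Z.
    rows = []
    for (x, y), z in zip(pts, depths):
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current_pts, desired_pts, depths, gain=0.5):
    # Classical IBVS law: camera twist v = -gain * pinv(L) * (s - s*).
    error = (current_pts - desired_pts).ravel()
    L = interaction_matrix(current_pts, depths)
    return -gain * np.linalg.pinv(L) @ error  # 6-vector (vx, vy, vz, wx, wy, wz)

# Example with the minimum of three feature points, 0.5 m in front of the camera.
current = np.array([[0.10, 0.05], [-0.08, 0.04], [0.02, -0.07]])
desired = np.array([[0.00, 0.00], [-0.15, 0.00], [0.00, -0.15]])
print(ibvs_velocity(current, desired, depths=np.full(3, 0.5)))

In practice, the commanded twist would be transformed into the robot controller's coordinate frame and issued at the servo rate until the image-feature error converges.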

Accomplishments - A visual servoing robotic test-bed was integrated, tested, and baselined. IBVS, PBVS-Homography, PBVS-Epipolar Geometry, and PBVS-POSIT code was designed, developed, and tested. Certain algorithms performed better than others: IBVS handled both planar and non-planar point configurations and required at least three feature points, whereas the PBVS-Homography algorithm required at least four points and was more susceptible to image noise. In general, the resulting robotic system achieved positional accuracies greater than those of an open-loop robot controller, which confirmed the hypothesis. However, the success of an integrated visual servoing system, including the choice of algorithm, is application specific and requires a nontrivial design and test effort.
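
As a complementary illustration of the four-point requirement noted above, the sketch below estimates the homography between the current and desired views of a planar feature set using a direct linear transform. It is a generic, self-contained Python example, not the project's PBVS-Homography code; the pixel coordinates are invented, and a production implementation would add coordinate normalization and outlier rejection to reduce the noise sensitivity noted above.

import numpy as np

def estimate_homography(src, dst):
    # Direct linear transform (DLT) from N >= 4 point correspondences.
    # src, dst: (N, 2) pixel coordinates of the same planar points in two views.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)  # right singular vector of the smallest singular value
    return H / H[2, 2]        # fix the arbitrary scale

# Four corners of a planar feature as seen in the current and desired views.
src = np.array([[100.0, 120.0], [300.0, 118.0], [305.0, 320.0], [98.0, 322.0]])
dst = np.array([[110.0, 130.0], [310.0, 125.0], [318.0, 330.0], [108.0, 335.0]])
print(estimate_homography(src, dst))

In a PBVS-Homography scheme, the estimated matrix is then decomposed into a rotation and a scaled translation between the two camera poses, and that pose error drives the robot motion; this estimation and decomposition step is where image noise most directly degrades accuracy.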
