Perception-Based Region Selection for Human to Robot Collaboration, 10-R6062

Principal Investigators
Jorge Nicho
Inclusive Dates 
04/28/20 to 10/30/20

Background

Robotic systems that can collaborate with humans on the factory floor are in high demand in the manufacturing community, but collaborative robotic solutions are still lacking in many respects. One such problem appears in the quality control of subtractive manufacturing applications, such as sanding, grinding, and deburring, where material is removed from a part with an abrasive tool until a desired surface condition is obtained. In these scenarios, the quality of the finish can be assessed by an expert human operator; it would therefore be advantageous to leverage this expertise to guide semi-automated robotic systems toward the regions that require further work until the desired quality is achieved.

Given this challenge, this research focused on enhancing human-robot collaboration by producing a capability that allows a human operator to guide the process by physically drawing a closed selection region on the part itself. This region is then sensed by a vision system coupled with an algorithmic solution that crops out the sections of the nominal process toolpaths falling outside the confines of the region.
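
As a rough illustration of the cropping concept, once the region boundary and the toolpath waypoints are expressed in a common plane, the test reduces to a 2D point-in-polygon check. The sketch below uses OpenCV's cv::pointPolygonTest; the helper name cropToRegion and the planar-projection assumption are illustrative, not taken from the library.

    // Hypothetical helper illustrating the cropping concept; not part of the library.
    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<cv::Point2f> cropToRegion(const std::vector<cv::Point2f>& toolpath,
                                          const std::vector<cv::Point2f>& boundary)
    {
      std::vector<cv::Point2f> kept;
      for (const auto& pt : toolpath)
      {
        // pointPolygonTest: > 0 inside, 0 on the boundary, < 0 outside
        if (cv::pointPolygonTest(boundary, pt, /*measureDist=*/false) >= 0)
          kept.push_back(pt);  // keep only waypoints inside the drawn region
      }
      return kept;
    }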

Approach

A small dataset of hand-drawn closed-region images was first produced to aid the early development of the 2D contour detection method and its projection into 3D. These images were made with a dark marker on white paper lying on a flat surface and captured with a FRAMOS D435 camera. The 2D contour method developed from this dataset was implemented with the OpenCV open-source library and comprised the following steps: grayscale conversion, thresholding, dilation, Canny edge detection, and contour finding. The output of this operation was the 2D pixel coordinates of the detected contours (Figures 1a and 1b).

Figure 1a: Amoeba 2D detection.

Figure 1b: Box 2D detection.
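
For reference, the detection chain described above can be sketched in a few lines of OpenCV. The threshold, dilation, and Canny parameter values below are illustrative placeholders; the library's actual tuning may differ.

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Sketch of the 2D detection chain; parameter values are assumptions.
    std::vector<std::vector<cv::Point>> detectContours2D(const cv::Mat& bgr)
    {
      cv::Mat gray, binary, dilated, edges;
      cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);                  // grayscale conversion
      cv::threshold(gray, binary, 100, 255, cv::THRESH_BINARY_INV); // dark marker -> foreground
      cv::dilate(binary, dilated, cv::Mat(), cv::Point(-1, -1), 2); // thicken strokes
      cv::Canny(dilated, edges, 50, 150);                           // edge detection

      std::vector<std::vector<cv::Point>> contours;
      cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
      return contours;  // 2D pixel coordinates of each detected contour
    }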

The following stage used the 2D pixel coordinates to locate the corresponding 3D points in the point cloud associated with the image; this direct lookup was possible because the 2D image and the organized point cloud had the same dimensions. Additional filters were then applied, and adjacent lines were merged to form larger segments. In the final steps, the segments were classified as open or closed contours, and normal vectors were estimated. Results are shown in Figures 2a and 2b.

Figure 2a: Triangle region detected.

Figure 2b: Amoeba region detected.
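
A minimal sketch of the 2D-to-3D lookup is shown below, assuming an organized point cloud whose width and height match the image so that each pixel directly indexes its 3D point. The helper name and the PCL point type are assumptions for illustration.

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // Maps 2D contour pixels to 3D points via an organized point cloud whose
    // width and height match the image; contourTo3D is an illustrative helper.
    std::vector<pcl::PointXYZ> contourTo3D(const std::vector<cv::Point>& contour_px,
                                           const pcl::PointCloud<pcl::PointXYZ>& cloud)
    {
      std::vector<pcl::PointXYZ> points3d;
      for (const auto& px : contour_px)
      {
        const pcl::PointXYZ& p = cloud.at(px.x, px.y);  // (column, row) lookup
        if (std::isfinite(p.z))                         // skip invalid depth returns
          points3d.push_back(p);
      }
      return points3d;
    }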

Additional datasets were collected under varying conditions, such as thicker and thinner lines, curved surfaces, and multiple images containing parts of the same closed contour. These datasets allowed the method to be refined and addressed corner cases that emerged under more challenging conditions, such as regions spanning multiple images (Figures 3a, 3b, and 3c).

Figure 3a: Box multi-image 2D contour.

Figure 3b: Box multi-image 2D contour.

Figure 3c: Multi-image region detected.
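
One plausible way to merge contour segments that span multiple images is to greedily join segments whose facing endpoints fall within a small distance tolerance, as sketched below. This is an illustrative strategy under that assumption, not necessarily the library's actual merge logic; the tolerance value is also an assumption.

    #include <pcl/point_types.h>
    #include <vector>

    using Segment = std::vector<pcl::PointXYZ>;

    // Squared Euclidean distance between two points
    static float sqDist(const pcl::PointXYZ& a, const pcl::PointXYZ& b)
    {
      const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      return dx * dx + dy * dy + dz * dz;
    }

    // Appends 'next' to 'base' when their facing endpoints are within 'tol' meters.
    bool tryMergeSegments(Segment& base, const Segment& next, float tol = 0.005f)
    {
      if (base.empty() || next.empty())
        return false;
      if (sqDist(base.back(), next.front()) > tol * tol)
        return false;
      base.insert(base.end(), next.begin(), next.end());
      return true;
    }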

Accomplishments

This research led to the creation of an open-source C++ library that can be used to detect hand-drawn regions in similar human-robot collaboration applications. The repository is available at https://github.com/swri-robotics/Region-Detection.