Robots at Work
SwRI-developed technologies are guiding the future of automation in industry
Clay Flannigan is manager of the Robotics and Automation Engineering Section within the Automation and Data Systems Division. His areas of expertise include machine design, robotics, control software and sensing systems. His section specializes in robotics, controls, computer perception and general automation hardware for a variety of industries.
SwRI has developed methods that permit robots to pick objects from cluttered piles or bins (above), recognize those objects based on their shape, and sort them (below).
For more than 20 years, SwRI has been developing large robotic systems for aerospace coatings applications such as the robotic depaint system designed to maintain the U.S. Air Force’s fleet of F-15 fighter jets. Current research has demonstrated that for some applications, a mobile solution can be less costly and more flexible.
SwRI engineers developed a system to detect and track humans in manufacturing environments, even in the presence of occlusions or variations in the human pose. The system uses color and 3-D images like those shown below and learns the “signature,” or visible characteristics, of individuals in real time so that they can be uniquely tracked through the field of view.
To recognize human actions, the SwRI-developed system tracks gross motions using skeleton models. A machine learning system, which has prior knowledge about different types of actions, can then classify the motion by type.
SwRI started the ROS-Industrial open source project to build an international community around the use of the advanced, open-source Robot Operating System (ROS) for industrial applications.
One of the earliest robotic manipulators was developed in the late 1950s and deployed on a General Motors automotive assembly line in 1961. The robot, called Unimate, handled hot, die-cast parts that were potentially dangerous to workers. Although robots like Unimate did not fulfill the human-like depictions of mid-20th-century science fiction robots, they were steadily adopted by manufacturers for jobs such as spot welding or painting automobile bodies.
Industrial robot capabilities have continued to advance in areas such as payload, accuracy and speed. Today's robotic arms can pick up complete truck bodies or place minute electronic components, and they can package goods much faster than a human. Despite all these advancements, however, robots have barely ventured beyond the repetitive tasks of the factory floor.
Unlike the futuristic expectations of the 1950s, people still have limited exposure to robots in their daily lives, and even the robots in manufacturing environments typically are relegated to simple, repetitive and highly structured tasks. Why is this? Shouldn’t there be a market for a robot that is able to fold our laundry or perform our mundane daily work tasks?
Engineers at Southwest Research Institute (SwRI) are developing technologies to overcome some of the historical limitations in the use of automation for complex industrial tasks. Through internal research and client-funded projects, SwRI teams are giving robots greater intelligence, more flexibility and greater ability to work collaboratively with humans.
Perception and planning
Traditionally, industrial robots have been deployed in jobs that require little decision-making. They typically perform the same task repetitively and have little ability to adapt to new situations. Providing robots with more human-like flexibility to adapt to dynamic or uncertain environments is a classic problem for robotics researchers. Many cognitive models exist to describe this problem, but they all share common elements of perceiving the environment and using this data, combined with prior knowledge, to plan an action.
Recently, there has been a dramatic shift in the use of 3-D sensing techniques to provide better context for robotic decision-making. Computing power has progressed to make real-time stereo imaging practical, and the console gaming industry has provided a revolutionary 3-D sensing capability with the Microsoft Kinect® sensor. These sensing solutions combine high-resolution, color and 3-D views of the robot’s workspace, permitting the development of new algorithms to locate and identify objects within that space.
Using novel 3-D data analysis algorithms, the SwRI team recently developed techniques for object recognition in cluttered scenes. This enables robots to perform material handling tasks without the need for dedicated tooling or fixtures. Such techniques enable robots to pick randomly oriented parts from bins or boxes and then insert them into a subassembly. Sorting highly varied parts is also a common need in applications such as mail handling and waste recycling facilities.
The SwRI-developed techniques combine digital models, built using prior knowledge of the parts, and various matching algorithms to identify the parts in the robot’s field of view. In some cases, machine-learning algorithms are employed to “teach” the robot what a particular object looks like. Once a hypothesis for an object is generated from the sensor data, a pose estimate is created. This pose information is then provided to the planning algorithms to create robot arm trajectories and grasp strategies.
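The pose-estimation step described above can be illustrated with a minimal sketch. This is not SwRI's implementation; it is a standard SVD-based (Kabsch) rigid alignment, assuming the matching algorithm has already established correspondences between model points and observed sensor points:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate the rotation R and translation t that map model points
    onto observed sensor points (Kabsch algorithm). Assumes point
    correspondences were already established by a matching step."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant of -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Hypothetical part: four model points, observed rotated 90 degrees
# about z and shifted by (0.5, 0.2, 0.0)
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
observed = model @ Rz.T + np.array([0.5, 0.2, 0.0])

R, t = estimate_pose(model, observed)
print(np.allclose(R, Rz), np.allclose(t, [0.5, 0.2, 0.0]))  # True True
```

The recovered (R, t) pair is exactly the "pose estimate" handed to the planners; in a real pipeline it would typically be refined further, for example with iterative closest point, before grasp planning.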
Giving robots mobility
Most industrial robot installations are permanently bolted in place with cages surrounding them, excluding human interaction with the robot. In such a paradigm, the parts must be brought to the tool, rather than the tool to the parts. For many industries, such as those that use assembly lines, this is the preferred approach. However, there are situations where it is preferable to bring the tool to the workpiece.
In aerospace manufacturing, for example, it is often easier to move the manufacturing process rather than the part due to the size of most commercial aircraft. SwRI has a long history of developing large robots for use in aerospace coating removal processes, but to date, the robots have been limited to relatively small aircraft such as fighter jets. For larger aircraft, such as commercial airliners, mobile robotic systems may be more cost-effective and flexible than the traditional fixed or tracked systems.
SwRI engineers recently demonstrated the ability to integrate a commercially available off-the-shelf (COTS) robotic manipulator onto a COTS mobile base to increase the effective workspace of the robot by a factor of 10 or more. This system, called MR ROAM (Metrology Referenced Roving Accurate Manipulator), uses a high-accuracy metrology system to locate the mobile system to sub-millimeter accuracy in work volumes of more than 500 square meters. The SwRI team developed specialized control strategies to permit coordinated motion of the mobile base with the manipulator, thereby providing the capabilities of a much larger robot. In addition to larger scales, MR ROAM technologies can be more flexible because the mobile base does not require significant facility modifications for tracks or dedicated work cells.
Human factors in robot interaction
Robot mobility and the manipulation of objects in unstructured environments are two capabilities that set the stage for robotic systems to operate openly in the “human” environments found in most factories. However, such a future vision is only possible if it can be done safely. There is significant activity in the robotics community and at SwRI to address these issues. Recently, the Robotics Industries Association (RIA), which is responsible for robotics safety standards in the U.S., ratified an updated ANSI/RIA R15.06-2012 standard. For the first time, this standard outlines situations where people may work collaboratively with industrial robots.
SwRI engineers also have been performing enabling research in the area of human tracking and behavior monitoring. Effective collaboration between machines and people requires that the machines be able to detect human presence and actions. For the former, SwRI collaborated with the National Institute of Standards and Technology (NIST) to develop a 3-D sensor-based capability to detect humans and track them in typical manufacturing environments. NIST is using this system to develop measurement methods and standards for incorporating human tracking systems onto machines like automated guided vehicles (AGV), forklifts and mobile manipulators.
In addition to knowing the location and velocity of a person in a robotic workspace, often one would like to recognize specific actions so the machines can respond appropriately. For example, if a person holds up a tool in a certain posture, the robot might respond by grasping the tool and taking it from the person. SwRI engineers are working on machine learning methods that enable robots to visually detect such classes of actions. These methods extract a kinematic “skeleton” model of the person from a 3-D image. By tracking this skeleton over time, SwRI’s methods are able to classify certain repeated motion sequences as specific actions to which the robot can then react in a more meaningful, or safer, manner.
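A toy version of this classification idea can be sketched as follows. The real system learns from full skeleton models; this hypothetical sketch reduces a single tracked joint (wrist height over time) to a small feature vector and classifies it by nearest neighbor against labeled templates. All names and values are illustrative assumptions:

```python
import numpy as np

def trajectory_feature(wrist_heights):
    """Summarize a wrist-height trajectory (meters, one sample per frame)
    as a feature vector: mean height, net rise, and peak height."""
    h = np.asarray(wrist_heights, dtype=float)
    return np.array([h.mean(), h[-1] - h[0], h.max()])

def classify(sequence, templates):
    """Nearest-neighbor classification against labeled template features."""
    f = trajectory_feature(sequence)
    return min(templates, key=lambda label: np.linalg.norm(f - templates[label]))

# Hypothetical templates built from recorded demonstration sequences
templates = {
    "raise_tool": trajectory_feature([0.9, 1.1, 1.3, 1.5, 1.6]),
    "idle":       trajectory_feature([0.9, 0.9, 0.9, 0.9, 0.9]),
}

print(classify([0.9, 1.0, 1.4, 1.6, 1.6], templates))  # raise_tool
print(classify([0.9, 0.9, 1.0, 0.9, 0.9], templates))  # idle
```

A production system would use richer features over many joints and a trained classifier rather than hand-built templates, but the structure is the same: track the skeleton, summarize the motion, and map it to a known action class the robot can react to.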
An open software framework
In 2010, version 1.0 of the Robot Operating System (ROS) was made publicly available. ROS is an open-source software framework for developing robotic systems. Since then, it has become the predominant platform for robotics research used by many academic research labs, especially for mobile and service robotics. Stewardship of ROS was initially provided by Willow Garage, a private technology startup, but has recently transitioned to the Open Source Robotics Foundation (OSRF).
ROS provides a flexible architecture with advanced capabilities not found in most industrial robot controller solutions. In addition, it has a large community of developers who use it for a huge range of applications. Because of the potential value of integrating the capabilities of ROS more closely with industrial robots, SwRI invested internal research funding for a visiting researcher position at Willow Garage. Over the next four months, SwRI created the foundation of ROS-Industrial, an open-source extension of ROS that focuses on the needs of manufacturers and industrial robot users. It includes software packages such as low-level drivers for various robots and their ancillary equipment, as well as high-level functionality, such as path planning, tailored to industrial problems.
In its first year, the ROS-Industrial project has attracted dozens of developers worldwide and gained support from several major robot vendors. End users are beginning to develop production systems using the software, and a ROS-Industrial consortium has formed to provide a roadmap for the project’s continued growth. ROS-Industrial provides an important link between the robotics research community and end users, and SwRI is contributing many of the technologies it has developed back to the project. In doing so, it creates a clear path to commercial adoption for these advanced capabilities.
The combination of technologies for advanced perception, planning, mobility and human interaction within an open software framework is poised to accelerate the adoption of robotics in new manufacturing areas. Industries that traditionally have been difficult to automate are seeing rapid advances, and the ability for workers to interact with machines could improve productivity dramatically. Just as the early robotic systems were rapidly adopted for repetitive tasks in automated manufacturing, the next decade will witness a similar revolution in robots used for repetitive tasks where more flexibility and better decision-making are required.
Questions about this article? Contact Flannigan at (210) 522-6805 or firstname.lastname@example.org.