Adversarial Learning for Camera Sensors, 10-R8844

Principal Investigator
David Chambers
Inclusive Dates 
04/01/18 - Current

BACKGROUND

Over the last few years, deep learning neural networks have become immensely popular across a wide variety of applications and industries. Deep learning has proven effective at many tasks, such as image classification and object detection, and the algorithms are increasingly finding their way into safety-critical applications as well. However, because adoption happened so quickly, many developers did not fully consider the security implications of this new class of algorithms. In response, security researchers have established a field called adversarial learning, which seeks to find and document vulnerabilities in machine learning algorithms. Attacks on these algorithms, typically referred to as adversarial examples, have been shown to work against some machine learning models, but often only in limited settings (e.g., in simulation). To understand and mitigate the true risk, researchers are now investigating the feasibility of making these adversarial examples physically realizable.
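As a brief illustration of the kind of attack referred to here, the fast gradient sign method (FGSM) perturbs an input image in the direction that increases a model's loss, producing an image that looks unchanged to a human but is misclassified. This is a generic sketch of the technique, not tied to this project; `model`, `image`, and `label` are assumed placeholders.

```python
# Generic FGSM sketch of an adversarial example (illustrative only).
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` bounded by epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clip to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```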

APPROACH

With this research project, SwRI aims to design a robust, reliable testing framework for creating physically realizable adversarial examples (real-world deep learning attacks) that can easily adapt to a wide variety of deep learning algorithms. The first step in creating this testing framework is to incorporate current adversarial learning research on known deep learning vulnerabilities, such as misclassification.

We also aim to improve on the state of the art by developing and testing a novel approach to spoofing object detection neural networks. While typical object detection spoofing has manipulated the classification task, SwRI will demonstrate that it is also possible to spoof object localization. With this new class of vulnerabilities, many safety-critical applications (such as automated vehicle vision systems) could be rendered ineffective at best, or extremely dangerous at worst.

SwRI also aims to improve the state of the art in security testing by creating better physically realizable adversarial examples. Current adversarial example training methods use affine transformations to make the adversarial example scale and rotation invariant. SwRI aims to improve on this by creating what it calls “perception invariant” adversarial examples, which are trained using full homography transformations of the adversarial example. The advantage of this approach during physical-world testing is that the adversarial example does not need to be perfectly parallel to the camera system; it can be rotated to a certain degree in all directions without compromising its effectiveness. This allows SwRI to accurately test how a malicious party could exploit an image processing system in the real world.
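The homography-based training idea can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not SwRI's implementation: it assumes PyTorch and the kornia library, and `detector`, `detection_score`, and `scene_loader` are hypothetical placeholders. It shows how a candidate adversarial patch might be warped with randomly sampled full homographies (rather than affine transforms only) at each training step, so the optimized patch remains effective when viewed off-axis.

```python
# Sketch of "perception invariant" adversarial patch training (illustrative only).
# Assumptions: PyTorch + kornia installed; `detector`, `detection_score`, and
# `scene_loader` are placeholders for a differentiable object detector, its
# confidence on the object of interest, and a loader of (B, 3, H, W) images.
import torch
import kornia.geometry as KG

def random_homography(batch, patch_size, canvas_size, max_tilt=0.15):
    """Sample a full perspective warp per patch by jittering its four corners
    (simulating off-axis viewing) and translating it into the scene."""
    ph, pw = patch_size
    ch, cw = canvas_size
    corners = torch.tensor([[0., 0.], [pw, 0.], [pw, ph], [0., ph]])
    src = corners.unsqueeze(0).repeat(batch, 1, 1)
    # Corner jitter gives perspective tilt beyond the scale/rotation of affine augmentation.
    jitter = (torch.rand(batch, 4, 2) - 0.5) * 2 * max_tilt * torch.tensor([pw, ph])
    # Random placement of the warped patch inside the scene.
    shift = torch.rand(batch, 1, 2) * torch.tensor([cw - pw, ch - ph])
    dst = src + jitter + shift
    return KG.get_perspective_transform(src, dst)        # (batch, 3, 3)

patch = torch.rand(1, 3, 200, 200, requires_grad=True)   # candidate adversarial patch
mask = torch.ones(1, 1, 200, 200)                         # patch footprint
optimizer = torch.optim.Adam([patch], lr=0.01)

for scenes in scene_loader:
    b, _, H, W = scenes.shape
    M = random_homography(b, (200, 200), (H, W))
    warped_patch = KG.warp_perspective(patch.expand(b, -1, -1, -1), M, dsize=(H, W))
    warped_mask = KG.warp_perspective(mask.expand(b, -1, -1, -1), M, dsize=(H, W))
    adv_scenes = scenes * (1 - warped_mask) + warped_patch * warped_mask
    # Minimizing the detector's confidence on the target object drives it to miss the object.
    loss = detection_score(detector(adv_scenes)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)                               # keep pixel values printable
```

Because each training step samples a fresh homography, the optimized patch is pushed to remain effective over a range of viewing angles and placements, which is the property the physical-world tests rely on.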

ACCOMPLISHMENTS

While the research is still ongoing, the team has had success in creating an adversarial learning framework that can quickly adapt to new deep learning algorithms and test their vulnerability to both misclassification and mis-localization attacks. Physically realizable examples have been printed and demonstrated on SwRI vehicles without requiring the example to cover the entire surface of the original object, something that had not previously been shown in the research literature. The researchers have also had some success demonstrating a mis-localization attack in simulation. Finally, they have developed an improved method for creating physically realizable adversarial examples that also reduces the footprint (size) of the adversarial example compared to the current state of the art.
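To make the notion of a mis-localization attack concrete, the sketch below shows one plausible objective (an assumed form for illustration, not the project's actual loss): rather than attacking the class label, it penalizes agreement between a detector's predicted boxes and the object's true location, so a patch optimized against it pushes the predicted localization away from the target.

```python
# Illustrative mis-localization objective (assumed form, not SwRI's implementation).
import torch

def iou(boxes_a, boxes_b):
    """Intersection-over-union for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    x1 = torch.max(boxes_a[:, 0], boxes_b[:, 0])
    y1 = torch.max(boxes_a[:, 1], boxes_b[:, 1])
    x2 = torch.min(boxes_a[:, 2], boxes_b[:, 2])
    y2 = torch.min(boxes_a[:, 3], boxes_b[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a + area_b - inter + 1e-6)

def mislocalization_loss(pred_boxes, pred_scores, true_box):
    """Penalize confident detections that still overlap the true object location."""
    overlap = iou(pred_boxes, true_box.expand_as(pred_boxes))
    # Minimizing this either lowers confidence near the true box or drives
    # the predicted boxes away from it, i.e., spoofs localization rather than class.
    return (pred_scores * overlap).sum()
```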