SwRI has extensive experience integrating high-level autonomy and perception algorithms onto ground vehicles, but is less proficient in operating and modifying commercial autopilots for small unmanned aerial systems (UAS). The performance of, and requirements for adapting, our existing algorithms in this different domain are currently unknown.
This research project had two main components. First, we acquired commercially available UAS autopilots and integrated SwRI's existing Robot Operating System (ROS) tools into them. Second, we collected data to evaluate our deep learning approaches to object detection and pixel-wise semantic segmentation on aerial data.
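Pixel-wise semantic segmentation results are commonly evaluated with per-class intersection-over-union (IoU) between predicted and ground-truth label masks. The source does not specify the metric used, so the sketch below is a minimal, assumed IoU evaluation; the function name and toy masks are illustrative only.

```python
import numpy as np

def per_class_iou(pred, truth, num_classes):
    """Intersection-over-union per class for two per-pixel label masks.

    pred, truth: integer arrays of class IDs, same shape.
    Returns one IoU per class (NaN when a class is absent from both masks).
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            ious.append(float("nan"))  # class never appears; undefined IoU
        else:
            ious.append(float(np.logical_and(p, t).sum() / union))
    return ious

# Toy 2x2 masks: class 0 agrees on 2 pixels with a union of 3.
pred = np.array([[0, 0], [0, 1]])
truth = np.array([[0, 0], [1, 1]])
print(per_class_iou(pred, truth, num_classes=2))  # [0.666..., 0.5]
```

Averaging the per-class IoUs (ignoring NaNs) gives the usual mean-IoU summary score.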
We integrated a Snapdragon Flight Pro autopilot and supporting electronics onto a custom-designed and fabricated carbon fiber frame. The resulting small UAS is suitable for operating in confined spaces for tasks such as disaster response and cave or underground exploration: it is extremely maneuverable while still carrying both stereo and monocular cameras. All flight software runs onboard the platform, enabling fully autonomous operation.
We also integrated a larger camera and an NVIDIA Jetson TX2 computer onto a larger DJI S1000 UAS. This system can process camera imagery through a large deep neural network onboard the aircraft. This computational capability enables accurate obstacle detection and avoidance, as well as complex visual scene analysis.
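Object detectors used for onboard obstacle detection typically emit many overlapping candidate boxes per frame, which are pruned with non-maximum suppression (NMS) before use. The source does not describe this post-processing step, so the following is a minimal, assumed greedy-NMS sketch; the box format and threshold are illustrative conventions, not details from the project.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of kept boxes, highest score first.
    """
    order = np.argsort(scores)[::-1]  # candidates sorted by confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the top-scoring box with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Discard boxes that overlap the kept box too strongly.
        order = rest[iou <= iou_thresh]
    return keep

# Two heavily overlapping boxes plus one distant box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the lower-scoring overlap is suppressed
```

Running NMS per frame keeps the onboard obstacle list small enough for real-time avoidance logic.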
Finally, we evaluated our existing deep learning algorithms on the DJI S1000. This revealed significant challenges in operating existing neural networks from aerial platforms: shadowing, varying lighting conditions, and objects that present only small cross-sections when viewed from the air all degrade performance. Achieving performance comparable to our ground-based approaches will require acquiring aerial-specific datasets and retraining the networks.