Proceedings Article | 6 June 2022
KEYWORDS: Safety, Artificial intelligence, Machine learning, Evolutionary algorithms, Neural networks, Systems modeling, Defense and security, Neurons, Data modeling, Reliability
Autonomous systems, including self-driving cars and unmanned underwater/surface/ground/air/space vehicles, have caught the imagination of the press, the public, and personnel of the U.S. Department of Defense (DoD). Increasingly, these systems harness the power of Artificial Intelligence and Machine Learning (AI/ML), and more specifically Deep Learning (DL), for their operation. The stunning performance of deep learning algorithms compared to extant methods, including pattern matching, computational linguistics, statistical inference, and legacy machine learning, has taken the world by storm. However, the adoption of such systems in safety-critical applications has been the subject of intense debate and scrutiny because of the propensity of these algorithms for erroneous operation, such as being misled by seemingly innocuous alterations to signage, and because of their vulnerability to data-poisoning and data-sparsity attacks, which lead to incorrect operation that can be both embarrassing and damaging. This has naturally led the DoD community to ask, “How do we harness this technology being unleashed upon the world, and yet keep our nation and warfighters safe?” Before answering this question, however, it is important to note that trust is integral to DoD systems, including autonomous systems, and that ensuring reliable system operation is paramount. We therefore need strategies that equip the DoD to design, build, deploy, and sustain autonomous systems that are trustworthy, secure, reliable, and dependable. In this paper, we investigate the issues leading to poor performance and lack of robustness in machine-learning-based autonomous systems, and discuss the state of the art for their mitigation.