ORCID
- Kimberly Tam: 0000-0003-2840-5715
Abstract
Artificial intelligence (AI) is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities which can be maliciously exploited with adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, the real-world effects of adversarial AI remain poorly understood and AI security examinations are inadequate; as a result, the growing threat landscape is unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. The framework takes the form of a multi-part checklist, which can be tailored to different systems and requirements. We demonstrate that the framework enables a red team to uncover numerous vulnerabilities within a real-world MAS AI, ranging from poisoning to adversarial patch attacks. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world with increasing uptake of, and reliance on, mission-critical AI.
DOI
10.1080/08839514.2024.2395750
Publication Date
2024-09-04
Publication Title
Applied Artificial Intelligence
Volume
38
Issue
1
ISSN
0883-9514
Recommended Citation
Walter, M., Barrett, A. and Tam, K. (2024) 'A Red Teaming Framework for Securing AI in Maritime Autonomous Systems', Applied Artificial Intelligence, 38(1). Available at: https://doi.org/10.1080/08839514.2024.2395750