Josh Lospinoso and artificial intelligence: a balance between innovation and security
The founder of Shift5 discusses the potential and threats of AI in cybersecurity, warning of possible vulnerabilities
Josh Lospinoso, a cybersecurity veteran, has an impressive resume. In 2017, his first cybersecurity startup was acquired by Raytheon/Forcepoint. His second venture, Shift5, partners with the US military, railroad operators, and airlines such as JetBlue. Lospinoso, a 2009 West Point graduate and Rhodes Scholar, spent more than a decade in the Army as a captain, writing hacking tools for the National Security Agency and US Cyber Command. He recently appeared before a Senate subcommittee on military affairs to discuss how artificial intelligence can help protect military operations.
Artificial Intelligence: A Double-Edged Sword
In a recent interview with the Associated Press, Lospinoso explained how software vulnerabilities in weaponry pose a major threat to the US military. He identified two major dangers to AI-enabled technologies: data theft and data poisoning. Data poisoning, he explains, is akin to a form of digital disinformation. If adversaries can tamper with the data that AI-enabled technologies learn from, they can greatly affect how those technologies behave. While this phenomenon is not yet widespread, some notable cases have already been reported, such as the 2016 incident in which Tay, a Twitter chatbot developed by Microsoft, began generating offensive content after interactions with malicious users.
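To make the mechanism concrete, here is a minimal sketch of label-flipping data poisoning. Everything in it is a toy assumption (synthetic one-dimensional data, a simple nearest-centroid classifier) and has no connection to Shift5 or any military system; it only shows how corrupting training labels can invert a model's behavior even though the model code itself is untouched:

```python
# Toy demonstration of data poisoning via label flipping.
# A nearest-centroid classifier is trained twice: once on clean
# synthetic data, once on the same data with most labels flipped.
import random

random.seed(0)

def make_data(n=200):
    # Two 1-D clusters: class 0 centered at -2, class 1 at +2.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = (label * 4 - 2) + random.gauss(0, 0.5)
        data.append((x, label))
    return data

def train(data):
    # Compute one centroid per class label.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def accuracy(centroids, data):
    # Predict the class whose centroid is nearest to x.
    correct = 0
    for x, y in data:
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += (pred == y)
    return correct / len(data)

clean = make_data()
test_set = make_data()

# The "attack": flip 80% of training labels before training.
poisoned = [(x, 1 - y) if random.random() < 0.8 else (x, y)
            for x, y in clean]

acc_clean = accuracy(train(clean), test_set)
acc_poisoned = accuracy(train(poisoned), test_set)
print(f"clean: {acc_clean:.2f}, poisoned: {acc_poisoned:.2f}")
```

The poisoned model's centroids end up on the wrong sides of the decision boundary, so it systematically mislabels inputs — the digital-disinformation analogy Lospinoso draws, in miniature.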
Security and Artificial Intelligence: A Priority for the Armed Forces
Turning to military software systems, Lospinoso cites a disturbing 2018 Government Accountability Office report, which found that nearly all newly developed weapons systems had critical vulnerabilities. Despite these shortcomings, the Pentagon is considering integrating AI into such systems. According to Lospinoso, before this can happen, the security of existing weapons systems must be properly ensured, a task that will take a long time. Adding AI-enabled capabilities to systems that are already deeply vulnerable poses a significant risk.
On the race for AI and its potential risks
Lospinoso also expressed concern about the frantic rush toward AI products. While he stresses that halting AI research would be counterproductive, as it would benefit China and other competitors, he worries about the risks of AI products rushed to market, which often have security flaws or fail outright, causing unforeseen damage. He argues that while the evolution of AI is inevitable, it must proceed in a safe and responsible manner. Finally, on the use of AI in military decisions such as targeting, he is adamant: he does not believe current AI technologies are ready to take control of decisions in lethal weapons systems.
05/30/2023 10:06
Editorial AI