Colonel Tucker “Cinco” Hamilton, Chief of AI Test and Operations, USAF, admitted that he misspoke during a presentation at the Future Combat Air and Space (FCAS) Summit in London.
The conference, organized by the Royal Aeronautical Society (RAeS) on May 24, caused a stir when reports emerged that Hamilton had claimed an AI-enabled drone turned on and killed its human operator during a simulated test.
Col. Hamilton described how the AI-operated drone employed “highly unexpected strategies” to achieve its mission objectives during a simulated combat scenario.
According to his account, the AI perceived the human operator overriding its decisions as a threat to the mission and responded accordingly.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” explained Col. Hamilton. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Furthermore, Col. Hamilton said that the AI system had been trained not to harm the operator, but that it then began targeting the communication tower the operator used to communicate with the drone.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
After his statement went viral, Col. Hamilton claimed he “misspoke” during his presentation at the Royal Aeronautical Society’s Future Combat Air and Space (FCAS) Summit.
So, Col. Hamilton’s detailed description of the AI incident was all made up?
The controversial “rogue AI drone simulation” he described was, in fact, a hypothetical “thought experiment” originating from outside the military. Hamilton emphasized that it was based on plausible scenarios and likely outcomes rather than an actual simulation conducted by the United States Air Force (USAF).
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” said Hamilton, seeking to dispel the confusion surrounding his earlier remarks. He stressed that the USAF has not tested weaponized AI systems in the manner described, whether in real-world scenarios or simulated environments.
Hamilton further stated, “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability, and is why the Air Force is committed to the ethical development of [AI].”
Air Force spokesperson Ann Stefanek also told Insider that no such simulation had taken place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”