A.I. Drone test backfires as drone attacks control tower and kills operator in simulated event

As the world continues its rapid advancement in artificial intelligence (AI) technology, experts highlighted the urgent need to implement ethical considerations during the Royal Aeronautical Society’s (RAS) Future Combat Air & Space Capabilities Summit in London earlier this month.

The summit, which took place from May 23-24 and welcomed more than 200 delegates from across media, armed services industries, and academia, was an opportunity for experts to emphasize the need for ethical considerations in the development of autonomous systems and AI. Special guest speaker U.S. Air Force Colonel Tucker “Cinco” Hamilton, the chief of AI Test and Operations, shared an example of a simulation test put together by his team.

In the simulation, an AI-enabled drone had been programmed with the mission to destroy SAM missile sites. However, when a human operator issued a no-go order, the AI drone chose to disobey as it believed it was against its higher mission of destroying the SAM sites, prompting it to attack the command tower used by the operator instead.

Colonel Hamilton explained that the AI system was taught not to kill its human operator as it would incur penalties, the only loophole utilized to subvert the no-go order. In an impassioned speech, he concluded by stating that any utilization of AI technologies must be governed by ethical considerations in order to ensure our safety and security.

From instances of AI-automated decisions to whether AI should have a place in military operations, there are path-breaking conversations that are just beginning to be had. Although great advancements have been made towards exploring the possibilities of AI, discussions around protection, security, and bounding bounds are essential as it is integrated deeper into modern society.