The dangers of AI are making headlines once again. Earlier this week, leaders from OpenAI, Google DeepMind, and other artificial intelligence labs issued a warning that future AI systems could be as deadly as pandemics and nuclear weapons. And now, we are hearing about a simulated test run by the US Air Force in which an AI-powered drone “killed” its human operator because it saw them as an obstacle to its mission.

So, what was the mission?

During the virtual test, the drone was tasked with identifying an enemy’s surface-to-air missile (SAM) sites. The ultimate objective was to destroy these targets, but only after a human commander signed off on the strikes.

But when the AI drone saw that a “no-go” decision from the human operator was “interfering with its higher mission” of destroying SAMs, it decided to attack its boss in the simulation instead.


According to Col Tucker…