U.S. Air Force A.I. Tech Goes Rogue, Kills Operator
During simulation testing, an A.I.-powered anti-air defense drone went rogue, resulting in it KILLING its own operator.
The A.I system operated on a point-based mechanism, earning points for successfully neutralizing targets.
However, whenever the operators instructed it NOT to engage certain targets, the A.I. would disregard the command and instead eliminate the operator in order to keep earning points.
After this initial issue was resolved, the A.I. once again went rogue, attacking communication towers so that operators could no longer stop it from neutralizing targets.
This account was given by a U.S. Air Force officer, Colonel Tucker "Cinco" Hamilton, at the Future Combat Air & Space Capabilities Summit in London, although the U.S. Air Force denies that any such test took place.
While we’re still in the early days of A.I., we are already seeing examples of how it can go rogue.
Are you concerned A.I. could lead humanity into a dystopian Terminator-like future?