Could killer robots kill us?

The Ukrainians have also experimented with autonomous machine guns and drones that use AI to identify and attack enemy targets. Meanwhile, in the Middle East, and echoing Raven, it was recently revealed that Israel used an AI program called Lavender to designate nearly 37,000 Palestinians as Hamas targets.

But as the catastrophic civilian casualties inflicted by systems like Lavender suggest, battlefield efficiency and wartime morality are two very different things. Geoffrey Hinton, the so-called “Godfather of AI” and Nobel Prize winner in physics, has warned of the “potentially dire consequences” of the technology – with particular reference to the fact that killer robots may one day escape our control.

Of course, it hardly helps that some of the most passionate AI enthusiasts are arguably far from perfect. Although Scully unsurprisingly emphasizes the ethics of Palantir’s platforms, her company has faced scrutiny over how it collects and uses data, including apparently inviting small children to an AI war conference. That’s before you consider its murky relationship with the US government, with Palantir also courting controversy over its work for the NHS here in the UK.

But these challenges aside, Moiseev is ultimately confident that few people want to see society torn apart in a future ruled by killer robots. “Rather,” he suggests, “we should develop AI to prevent and resolve disagreements.” More broadly, predictive AI can now be used not only to thwart attacks, but also to respond to conflicts and help civilians. Whatever the question marks surrounding Palantir, the company is currently using AI to help demine over 150,000 square kilometers of Ukrainian fields.

“Palantir apparently invited small children to an AI war conference.”

And what about the future that Jalalabad foreshadows? Could AI predict future conflict? Moiseev thinks so. As he notes, while the invasion of Ukraine came as a shock to most, a team of scientists and engineers based in Silicon Valley predicted Russia’s move almost to the day – months before the war actually started. “There are often a lot of signs that a conflict is on the horizon,” Moiseev adds, “whether it’s unusual movements at missile sites or a sudden stockpiling of critical materials. The problem is that humans aren’t particularly good at detecting subtle clues. But for AI, this is one of its greatest strengths.”

No wonder U.S. policymakers hope to use AI to analyze data and detect future Chinese actions around Taiwan. Admiral Samuel Paparo, commander of the U.S. Pacific Fleet, certainly hinted at this. As he said at a recent defense innovation conference, the Pentagon is looking for ways to find “those clues” about an impending attack by the People’s Liberation Army. Given that any outbreak of hostilities in the Pacific could come with little warning, experts argue that AI could equally improve the overall readiness of U.S. forces year-round.

The question then becomes whether the computers could be outsmarted by an enemy intent on maintaining a minimum level of surprise. Significantly, that threat could potentially be met by even more intelligent machines: quantum computers, millions of times more powerful than conventional supercomputers, could analyze enemy movements and break their encryption in seconds.

However, it would be reckless to give computers complete control. As Spahr says, war is ultimately fought by men and women, which means we must never allow “automation bias” to cloud our strategic judgment. Considering how his country’s adventure in Kabul ultimately ended, that’s certainly good advice.