Military AI caught in tension between speed and control

Military planners and industry figures say artificial intelligence (AI) can unlock back-office efficiency for the UK’s armed forces and help commanders make faster, better-informed decisions, but “intractable problems” baked into the technology could further reduce military accountability.  

Speaking on a panel about the ethics of using autonomous technologies in warfare at the Alan Turing Institute-hosted AI UK event in mid-March, industry figures and a retired senior British Army officer claimed there is an ethical imperative to deploy AI in the military.

They argued that proliferating AI throughout UK defence will deter future conflict, free up resources, improve various decision-making processes – including military planning and target selection – and stop the country from irreversibly falling behind its adversaries.

While these speakers did highlight the importance of ensuring meaningful human oversight of military AI, and the need for global regulation to limit the proliferation of “uncontrollable” AI systems in this context, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death Machines: The ethics of violent technologies, argued there is a clear tension between autonomy and control that is baked into the technology.

She added this “intractable problem” with AI means there is a real risk that humans are taken further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.

The military potential of AI

Major General Rupert Jones, a retired British Army officer, argued that greater use of AI can help UK defence better navigate the “muddy context” of modern warfare, which is characterised by less well-defined enemies and proxy conflicts.

“Warfare’s got more complicated. Victory and success are harder to define,” he said, adding the highest potential use of AI is in how it can help commanders make the best possible decisions in the least time.

“To those who are not familiar with defence, it really is a race – you’re racing your adversary to make better, quicker decisions than they can. If they make faster decisions than you, even if they’re not perfect decisions, they will probably be able to gain the momentum over you.”

With decision-making, you would need to have enormously robust, reliable and always up-to-date data to replace the capabilities and cognitive capacities of a human decision-maker
Elke Schwarz, Queen Mary University of London

On top of the technology’s potential to enhance decision-making, Jones said the “hugely expensive” nature of running defence organisations means AI can also be used to boost back-office efficiency, which in turn would unlock more funds for use on front-line capabilities.

“AI gives you huge efficiency, takes humans out of the loop, frees up money – and one thing we need in UK defence right now is to free up some money so we can modernise the front end,” he said.

However, he noted that the potential of the technology to enhance decision-making and unlock back-office efficiencies would rest on the ability of UK defence to improve its underlying data practices so that the vast amounts of information it holds can be effectively exploited by AI.

Jones added that UK defence organisations should begin deploying in the back office first to build up their confidence in using the technology, before moving on to more complex use cases like autonomous weapons and other AI-powered front-line systems: “Build an AI baseline you can grow from.”

While Schwarz agreed that AI will be most useful to the military for back-office tasks, she took the view this is because the technology is simply not good enough for lethal use cases, and that the use of AI in decision-making will muddy the waters further.

“With decision-making, for example, you would need to have enormously robust, reliable and always up-to-date data to replace the capabilities and cognitive capacities of a human decision-maker,” she said, adding the dynamics inherent in the technology create a clear tension between speed and control.

“On one hand, we say, ‘Well, we need to have meaningful human control at all points of using these systems’, but ultimately the raison d’être for these systems is to take the human further out of the loop, so there’s always tension,” said Schwarz.

“The reason the human is taken further out of the loop is because the logic of the system doesn’t cohere that well with the cognitive logic of how we, as humans, process data.”

Schwarz added that on top of the obvious tension between cognition speed and meaningful human control, there is also the problem of automation bias, whereby humans are more likely to trust computer outputs because of a misplaced sense that the results are inherently objective.

“We are more likely to trust the machine decision that we have less time to overrule, where we cannot create a full mental picture in time to make a human decision – as we are further embedded into digital systems, those are the kinds of tensions that I don’t see going away anytime soon. They’re intractable problems,” she said.

“That takes us to ethics and the question of, what do we do with ethical decisions when the human is taken out?”

While Schwarz urged extreme caution, Henry Gates, associate director at AI defence startup Helsing, said there is a pressing need to “move fast” with the development of military AI so that the UK does not fall behind “other nefarious actors” and is able to have a greater say over how autonomous military systems are regulated.

“If we are just a country that doesn’t have any of these weapons … people aren’t really going to listen to us,” he said, adding that moving at pace with military AI can also help build an alternative deterrence.

“In the same way we have nuclear weapons as a deterrence to nuclear war, AI potentially provides a route towards conventional deterrence that reduces armed conflict.”

Schwarz, however, warned against “putting all our eggs in the AI basket to deter war”, arguing there needs to be greater investment in human capabilities for dialogue, trust and diplomacy.

She also warned that instead of acting as a deterrent, AI’s socio-technical nature – whereby the technical components of a given system are informed by social processes and vice versa – means it can negatively shape humans’ perspectives of one another, leading to dehumanisation.

“Ultimately, it has always been the case [with] technologies that the more we come to rely on them, the more they shape our perspectives about us, and about others as well,” she said, adding this is certainly the case with AI because, unlike other tools of war such as tanks or guns, which serve as physical prosthetics, the technology acts as a cognitive prosthetic.

“What is the logic of all of that? Well, an AI system sees other humans as objects, necessarily – edges and traces – so implicit then is an objectification, which is problematic if we want to establish relationships.”

Beyond human cognition

On the issue of meaningful human control, Gates added there are three things to consider: the extent to which decision-making is delegated to AI, performance monitoring to ensure models do not “drift” from their purpose, and keeping humans in full control of how AI systems are being developed.

In the same way we have nuclear weapons as a deterrence to nuclear war, AI potentially provides a route towards conventional deterrence that reduces armed conflict
Henry Gates, Helsing

However, Keith Dear, managing director of Fujitsu’s Centre for Cognitive and Advanced Technologies, argued that the capabilities of AI have come so far in such a short space of time that it will soon be able to outperform humans in applying the laws of war to its decisions.

“For a target to be justified under the law of armed conflict, it has to be positively identified, has to be necessary … has to be proportionate, it has to be humane, so no uncontrolled effects, and it has to be lawful. All of those things are tests that you could apply to an AI in the same way that we apply them to a soldier, sailor or an airman serving on the front line,” he said.

“When you delegate authority, it has to outperform us on those things, and if it does outperform us in those roles where you can baseline and benchmark that, it becomes unethical not to delegate authority to the machine, which has a lower false negative in making those decisions than us.”

Highlighting how the speed of modern stock trading means it is largely left to computers, Dear added AI will create a similar situation in warfare in that, because it will have eclipsed the speed of human cognition, decision-making can and should be left to these autonomous systems.

“It’s an AI watching the AI. You may have humans before the loop, but the idea that, as warfare speeds up and we get to AGI [artificial general intelligence], there’ll be someone in the loop is perverse – I think it’s a choice to lose,” he said.

Commenting on the idea that AI will reduce human suffering in conflict and create a future where wars are fought between armies of drones, Gates added it was unlikely, noting that while it may change the character of war, it does not change the underlying logic, which is how one group can “impose its will” on another.

Jones agreed, noting that whether or not an AI sits in the middle, the idea is to “hurt” the people on the other side. “You are still trying to influence populations, political decision-makers, militaries,” he said.

For Dear, there will be no role for humans on the battlefield. “When your machines finish fighting and one side has won, it’ll be no different to having a human army that won on the battlefield – the point then is that [either way] you have no choice but to surrender or face a war of extermination,” he said.

Schwarz, however, highlighted the reality that many of today’s AI systems are simply not very good yet, and warned against making “wildly optimistic” claims about the revolutionary impacts of the technology in every aspect of life, including warfare. “It is not a panacea for absolutely everything,” she said.