Frontline Tech: Artificial Intelligence - Friend Or Foe?

Phalanx on board HMS Southampton

The Royal Navy's Phalanx - an automated defence system (Picture: Crown Copyright).

By David Hambling, technology expert

Artificial intelligence (AI) software which can think and learn like a human is becoming increasingly capable. It powers things like voice-to-text dictation and automatic translation on your smartphone.

But its use by the military has become increasingly controversial.

Google recently faced a revolt by employees over its work on Project Maven, a Pentagon program to help drones spot targets, and decided not to continue with the work.

Conversations about military AI quickly turn to robots, Terminators and the threat posed by automated killing machines. But is military AI all bad?

AI is useful because it can take over tasks that would otherwise have to be done by humans.

This is the idea behind SAPIENT (Sensing for Asset Protection with Integrated Electronic Networked Technology), a sensor system developed by the UK’s Defence Science and Technology Laboratory and field-tested at an experiment in Montreal last month.

SAPIENT comprises several different types of sensor – CCTV cameras, thermal imagers, radar and laser sensors – which can be added as needed to build a network covering a given area.

Soldiers trialling SAPIENT technology
The British-made technology being tested during an experiment in Canada (Picture: MOD).

The clever part is the software, which can combine the data from the different sensors, identify people and vehicles, and task the sensors appropriately – for example, steering a camera to follow a group of people and zooming in on them.

It is also capable of behaviour classification, which means it can distinguish between someone walking beside a fence and posing no security risk and someone trying to climb over it.
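
To make the idea concrete, here is a minimal sketch (my own illustration, not the real SAPIENT software) of how fused sensor detections might be classified and turned into tasking commands. Every name, field and threshold below is hypothetical.

```python
# Illustrative sketch only: a toy fusion-and-tasking loop in the spirit of
# SAPIENT. All names and rules here are hypothetical, not the real system.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # e.g. "cctv", "thermal", "radar"
    kind: str         # "person" or "vehicle"
    position: tuple   # (x, y) in site coordinates
    near_fence: bool
    climbing: bool    # e.g. inferred from vertical motion on the fence line

def classify_behaviour(d: Detection) -> str:
    """Toy behaviour classification: walking past a fence is routine,
    trying to climb over it is a security alert."""
    if d.near_fence and d.climbing:
        return "alert"
    return "routine"

def task_sensors(detections: list) -> list:
    """Turn fused detections into sensor-tasking commands, steering a
    camera onto anything classified as an alert."""
    commands = []
    for d in detections:
        if classify_behaviour(d) == "alert":
            commands.append(f"steer camera to {d.position} and zoom in")
    return commands

# Two people near the fence: one walking past, one climbing.
feed = [
    Detection("cctv", "person", (12, 40), near_fence=True, climbing=False),
    Detection("thermal", "person", (14, 41), near_fence=True, climbing=True),
]
print(task_sensors(feed))  # only the climber triggers a tasking command
```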

Anonymous British troops
SAPIENT technology uses sensor data to identify people in a landscape.

The technology aims to reduce the problem of operator overload.

Security centres may be wallpapered with video screens showing the entire area in great detail, but the challenge is spotting one intruder against the background of routine human traffic and other moving objects from pigeons to plastic bags.

SAPIENT helps deal with this by highlighting anything which may be important, so the operator does not need to keep scanning the other 99 screens.
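
As a toy illustration of that triage (hypothetical data, not the real interface), the software only needs to surface the screens where something on a watch-list is happening:

```python
# Illustrative only: reducing operator overload by surfacing just the
# screens where something noteworthy is happening. Data shapes invented.
screens = {
    1: ["pigeon", "plastic bag"],
    2: [],
    42: ["person climbing fence"],   # the one event that matters
}

IMPORTANT = {"person climbing fence", "unidentified vehicle"}

highlighted = [n for n, events in screens.items()
               if any(e in IMPORTANT for e in events)]
print(highlighted)  # -> [42], so the operator can ignore the rest
```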

AI that is able to identify objects used to require banks of servers to operate, but it can now run efficiently on a processor small enough to fit into a quadcopter drone.
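
For a sense of how small that footprint can be, here is a sketch of running a compact detector with the tflite_runtime package, which is designed for exactly this class of embedded processor. The model file and tensor shapes are placeholders, and the zero-filled frame stands in for real camera input.

```python
# Sketch of running a compact object detector on an embedded board such as
# a drone's companion computer. The model file is hypothetical; a real
# detector defines its own input/output shapes.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight, no full TensorFlow

interpreter = Interpreter(model_path="small_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One camera frame, resized to whatever the model expects (e.g. 300x300 RGB).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a real image

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(out["index"])
print(detections.shape)  # e.g. bounding boxes for people and vehicles
```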

US Marine looks over training area landscape
The US military is looking at using drones powered by artificial intelligence (Picture: US Department of Defense).

Australian scientists have developed a system called Sharkspotter, which is currently protecting beaches.

A drone called 'Little Ripper' patrols the beach area, and the software automatically scans the water.

It can tell the difference between sharks, swimmers, surfboards, dolphins and other objects in the water far more reliably than a human operator, and raises the alarm so a shark sighting can be confirmed.
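
The decision logic is simple to sketch (this is my illustration, not the actual Sharkspotter code): classify what the model sees in each frame and raise the alarm only for a confident shark detection, leaving the final call to a human.

```python
# Illustrative only - not the real Sharkspotter code. Shows the shape of
# the decision: pick the most likely class for a frame, and alarm only on
# a confident shark detection. The 0.8 threshold is invented.
CLASSES = ("shark", "swimmer", "surfboard", "dolphin", "other")

def scan_frame(scores):
    """scores maps each class to the model's confidence for this frame."""
    label = max(scores, key=scores.get)
    if label == "shark" and scores[label] > 0.8:
        return "ALARM: possible shark - request human confirmation"
    return None

print(scan_frame({"shark": 0.92, "swimmer": 0.03, "surfboard": 0.02,
                  "dolphin": 0.02, "other": 0.01}))
```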

The same technology can be used to spot military threats instead of sharks. The US Army wants AI for drones to automatically find and track vehicles and people.

The Protector remotely piloted aircraft, which is expected to enter UK service in the early 2020s, will have enhanced armed surveillance abilities.

Current military drones send raw video back to teams of analysts who pick out and identify targets, but the new project will "detect, recognise, classify, identify and target" all by itself.

The operator only needs to check the potential target to confirm the AI's judgement and pull the trigger.
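
Schematically (a sketch of the workflow as described, with every function name invented for illustration), the software proposes and the human disposes:

```python
# Illustrative human-in-the-loop targeting flow, not any real system.
# All names are hypothetical.

def machine_pipeline(frame):
    """Stand-in for the 'detect, recognise, classify, identify' chain."""
    # A real system would run detection and classification models here.
    return {"label": "vehicle", "confidence": 0.91, "position": (512, 204)}

def operator_confirms(candidate):
    """The human check: nothing is engaged without explicit confirmation."""
    answer = input(f"Engage {candidate['label']} at {candidate['position']} "
                   f"(confidence {candidate['confidence']:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

candidate = machine_pipeline(frame=None)
if operator_confirms(candidate):
    print("target confirmed by operator")
else:
    print("no action taken")
```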

Analysing video from drones requires skilled judgement. Is that a group of insurgents with rifles, or just farmers with tools? Is he carrying an RPG, a mortar tube, or just a length of water pipe?

This type of judgement call has to be made rapidly, and it is literally a matter of life and death.

One of the problems with human analysts is an effect known as 'scenario fulfilment': if you start by assuming someone is an insurgent, you interpret everything they do as suspicious behaviour, which reinforces your assumption.
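
A toy calculation shows how quickly this compounds. Assume (my numbers, purely illustrative) an analyst who reads each genuinely ambiguous cue as slightly favouring the 'insurgent' hypothesis:

```python
# Toy illustration (invented numbers, not from the article) of 'scenario
# fulfilment': once you lean towards "insurgent", ambiguous cues get read
# as confirmation, and the belief ratchets upward.
prior = 0.5  # start undecided

# Each cue is genuinely ambiguous (true likelihood ratio = 1.0), but the
# biased analyst reads every one as 1.5x more likely under "insurgent".
biased_likelihood_ratio = 1.5

for cue in range(5):
    odds = prior / (1 - prior)
    odds *= biased_likelihood_ratio        # Bayes' rule in odds form
    prior = odds / (1 + odds)
    print(f"after cue {cue + 1}: P(insurgent) = {prior:.2f}")
# Five neutral observations and the analyst is ~88% 'sure' - evidence of
# nothing except the starting assumption.
```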

AI can help counteract human biases and provide an unbiased assessment of what is really happening.

However, while AI is continuing to improve, it is far from perfect and can fail to spot things or misidentify objects which would be obvious to a human. This is why the combination of humans plus AI is currently the most effective.

British soldier
The combination of humans and artificial intelligence is still viewed as the most effective formula.

Once the technology exists, it might be turned into an automated system with no human involvement.

South Korea already has static 'sentry robots', capable of detecting intruders and opening fire, guarding its border with the North. Similarly, an unmanned drone with AI could, in theory, identify a target and fire a missile without operator input.

Things could get extremely dangerous if lethal machines were let off the leash and allowed to choose their own targets.

However, just because the technology exists does not mean that it will be legally or morally acceptable.

The UK government’s policy has always been that “the operation of weapon systems will always remain under human control,” a view shared by the US and most others.

USS Normandy firing its Phalanx weapons system
The US military also uses the Phalanx weapons system (Picture: US Navy).

Keeping humans in charge will ensure that the laws of war are adhered to and that humanitarian considerations are not ignored.

While some weapons may be automated – such as the Royal Navy’s Phalanx anti-missile cannon, which has to work faster than a human can react – there will always be a human overseeing them.

Artificial intelligence is a tool, like other pieces of technology from radios to engines.

It has the potential to reduce the workload on humans so they can concentrate on their other tasks.

However, while the UK will not develop autonomous weapons, other nations may take another approach and decide to field artificially intelligent unmanned tanks and aircraft.

Then we really may see contests between humans and machines on the battlefield.
