AI expert Noel Sharkey on becoming an 'accidental activist' against killer robots

  1. Fully autonomous weapons could be a reality within 'a couple of years'
  2. Soon autonomous weapons may be able to kill without human approval
  3. 'Most advanced AI that I have seen is DeepMind in London'

Micah Xavier Johnson died after a police siege in Dallas, Texas, in 2016. The young man had shot 14 police officers, killing five and wounding nine. But Johnson is also remembered for the manner of his death: he barricaded himself in a college building, and police killed him using a remote-controlled bomb-disposal robot armed with plastic explosives. It was the first time a robot had been used to kill a suspect in the U.S. (The Guardian).

Soon the police or military could use autonomous robots to kill without any human intervention. This prospect deeply disturbs Noel Sharkey, professor of artificial intelligence at the University of Sheffield in northern England.

Sharkey, a distinctive white-bearded figure with an approachable manner, is widely known for his role as senior judge on the UK TV programme Robot Wars. But he is also a prominent national authority on robots, especially the “killer robots” he vehemently opposes.

He was born in Belfast, Northern Ireland, but left in 1969 just as the violent period known as The Troubles was starting. Obsessed by toy soldiers from childhood (TEDx), he briefly joined the Territorial Army (military reserve). After a PhD spent developing and psychologically testing AI’s ability to read, he took his first job at Yale University’s AI labs (The Guardian). Sharkey is married with five daughters.

‘The richness of the human mind was so great compared to these dumb machines’

Late in life, he told WikiTribune, he became an “accidental activist” against “killer robots” after a journalist asked him about them. This was in 2007, and although he knew plenty about AI, Sharkey was not an expert on its military uses. He was horrified by what he discovered when he researched lethal autonomous weapons (LAWs), and from then on his views on the nature of war were transformed.

“I am here to help resist the application of AI and robotics on strictly economic grounds without consideration of human responsibility and societal impact,” he told WikiTribune.

His fear is that LAWs may soon kill without human oversight, which is why he is seeking a global ban. At the moment such weapons can operate independently, but, critically, a human gives the final command for them to attack and is therefore accountable. Sharkey is not alone in his concerns: in 2015, more than 1,000 AI experts signed a letter calling for a ban on autonomous weapons that operate without human intervention.

Signatories included the late physicist Stephen Hawking, SpaceX founder Elon Musk, Apple co-founder Steve Wozniak, academic Noam Chomsky, and Google DeepMind co-founder Demis Hassabis (Wall Street Journal). More recently, AI experts boycotted South Korean university KAIST over its project with weapons manufacturer Hanwha Systems, saying: “This Pandora’s box will be hard to close if it is opened.”

Sharkey recently returned from the fifth “killer robots” UN convention in Geneva, where 123 United Nations member states discussed autonomous weapons. He is also chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots, both of which call for a preemptive ban on LAWs. The campaign is now backed by 26 states, after China announced its support for a ban on the last day of the conference (Janes).

Sharkey hopes more nations will join, and that by the end of 2019 the Convention on Certain Conventional Weapons will have begun drawing up a new protocol banning LAWs.

The questions and answers below have been lightly edited for clarity.

WikiTribune: For many people the idea of autonomous weapons is science fiction. What is the reality of the threat they present?

Sharkey: Whenever this is reported in the media … it’s rare to find an article where there isn’t a picture of the Terminator on it … That makes it look like science fiction. It makes it look like we have people talking about the rise of the robots taking over the planet … Russia and the United States, Israel, China and the UK and possibly South Korea have been developing these [autonomous] weapons. We know that Turkey is beginning as well. They’re not science fiction weapons. They look exactly like conventional weapons … The United States has a submarine hunting submarines, fully autonomous … [Russia] has also been developing a self-learning gun, a massive gun that will learn its targets for itself. Turkey announced in February that they were going to develop unmanned tanks, autonomous tanks. The Koreans have a border guard [robot], but it can be switched to autonomous mode … The British [are] … also developing, working on something called the Iron Clad (Telegraph), which is like a little tank, not very big, but it’s an autonomous tank … Not science fiction at all.

You’ve said that lethal autonomous weapons can’t make the kinds of decisions that need to be taken by a person. Can you explain?

Sharkey: For instance, Osama bin Laden was a major target. He was a very high-value target, meaning that they could cause collateral damage to kill him. This decision is not quantifiable. You can’t say that Osama bin Laden was worth 25 old ladies, 10 people in wheelchairs and 47 children … It requires a human commander to think about this …

One of the cornerstones of the laws of war is the principle of distinction … That means that any weapon has to be tested to ensure that it can be used in a way that distinguishes between military targets and civilian targets … These things are not capable of doing that. They’re not capable of that kind of fine-grained discrimination. You’d be lucky if it could distinguish between a tank and a truck.

How advanced is current artificial intelligence (AI)?

Sharkey: It’s a field that’s always been full of hype, meaning its ambitions are always much greater than its achievements. People have always been talking about developing human-like intelligence, reasoning, vision systems. I really believed that … Then I went off to the United States. I got my first job at Yale University’s AI labs. I had an opportunity to go around and try some of the best AI programmes in the world. I was really quite shocked at the limitations of these programmes compared with what I had read. It was a very different thing. You put a comma in the wrong place, and the language machine would crash … I realized the richness of the human mind was so great compared to these dumb machines I was working with. Then I stopped thinking that they were really going to be able to think and work like humans, but I still loved the field.

I think probably the best, in terms of quality of engineering, the most advanced AI that I have seen is DeepMind in London … Last year they had a device, a computational device that beat the world’s best player at the game of Go. That’s the most advanced, complex game in the world … There are more possible moves than there are atoms in the planet. It [the machine] didn’t know it was playing a game. No idea whatsoever. It didn’t care if you switched it off halfway through the game. It didn’t know what the board looked like. It didn’t know what the pieces looked like. It didn’t care if it won or lost. It wouldn’t give you a high five when it finished and say, ‘Yay, I won the game.’ It couldn’t make you a cup of tea afterwards, and it can’t do anything else. If I had a friend like that (well, I don’t think I could have a friend like that) I wouldn’t call them intelligent.

What’s the future of AI?

Sharkey: I really don’t believe it when people talk about the Singularity and those things happening within the next 20 years; I don’t believe that at all. I think if anything like this does happen, we’re probably talking about 100 years or more.

‘Uber and Tesla both have been very careless’

Do you think the self-driving car crash that killed a woman in Arizona in March will set back AI progress?

Sharkey: I don’t think it will set back any development in artificial intelligence, but it should ring a bell of caution about autonomous cars. I think there’s a possibility of autonomous cars doing great things for us, but my worry has always been not to rush into it because the public will reject it. We really need to be very, very cautious there. I think that Uber and Tesla both have been very careless. Google have been exemplary.

Why are you concerned about the use of autonomous machines for policing?

Sharkey: North Dakota, for instance, passed a bill two years ago saying that their drones could carry less-than-lethal weapons, so they could fire rubber bullets or taser people … In South Africa now there’s a company called Desert Wolf that has been making these octacopters, propellered machines with pepper spray. They can fire plastic balls and spray protesters with paint … the problem there was they developed them for striking miners. We have a right to peaceful protest … Once you start using them, you begin to slip into a way [where] policing could become much more authoritarian, and that’s the big worry for me …

Were you interested in robotics or the military in your childhood?

Sharkey: My father and my uncles — all of them — served in the military. We watched war movies. I was very interested in the military and had thought of joining myself. I was actually in the Territorial Army for a while, which was the military reserves in the UK. Only for a short while … Science fiction has been something that I’ve been interested in all my life as well. It’s not surprising I ended up in artificial intelligence and robotics.

What got you interested in autonomous weapons?

Sharkey: A journalist asked me about military weapons, robots. I didn’t know anything about them. I went and looked them up in 2007 and was completely shocked to find out what the U.S. was doing. Shocked to the point where I spent six months researching it and then became an activist. I’m a completely accidental activist. I just thought: we can’t allow this.

See WikiTribune’s Explainer: The automation of war for more.

 
