AI Stories · The Tech We Won't Build

We Can Say ‘No’ to Killer Robots

Laura Nolan is a software engineer in Dublin, Ireland, and a volunteer with the Stop Killer Robots coalition. She worked at Google for nine years as a staff site reliability engineer. In 2018, she walked out over Google’s involvement in Project Maven, a US military project to use AI to analyze drone surveillance footage.

This is an extended cut of the interview from The Tech We Won’t Build that has been edited for ease of reading.

How did you first become aware of Project Maven?

I was a software engineer at Google. It was late 2017 and I was a tech lead for my team, which was responsible for part of Google’s cloud infrastructure. Whenever I was in San Francisco, there was one particular technical lead that I always tried to meet up with. We had a one-to-one meeting one afternoon that was a pivotal half hour in my life. He told me, “There’s this very big project coming, and your team is going to have to be involved.”

That project turned out to be building these air-gapped data centers, which would let the company process classified information. I said, “Why? This is going to be a very complex and expensive project.” And the answer I got was, “Well, it’s this thing, Project Maven….”

I couldn’t really talk about it to anyone in my life who was outside of Google. Other Googlers I would speak to would just say, “That’s weird, Laura, that can’t be happening. It can’t be that Google is analyzing drone videos so that the US can kill people more efficiently. We wouldn’t do that, surely.” But that was exactly what Google was doing.

What did you do about it?

I talked about Project Maven, and the problems I thought it had, to pretty much every senior person I could. I wasn’t the only one; one of our researchers, Meredith Whittaker, did a bunch of organizing around this and wrote a very good open letter that was published in The Guardian and The New York Times.

Some employees supported it. Certainly a lot of the senior execs supported it. The narrative that was mostly coming from Google’s leadership was that it was sort of supporting the US troops. It has to be said that executives are not neutral in this: they are well compensated in stock and probably make the vast majority of their money that way.

It wasn’t only Maven, which was a relatively small contract at the time. Google was also bidding on this absolutely enormous contract called JEDI, the Joint Enterprise Defense Initiative, which I believe was a $10 billion contract. I think that was really the prize that the executives were looking at.

What kind of surveillance are we talking about?

Google was helping the US Department of Defense to automate widespread and pervasive surveillance. So we’re talking about every individual in a particular geographical area being tracked in close to real time with this system. It knows which buildings you go to, it knows who comes to your house, it knows who you’re connected to. They were talking about building a user interface where you’d be able to click on a house and it would show you a timeline of all the interactions that the inhabitants are having. That’s hugely intrusive, especially when we’re talking about areas where there is not an active war.

There’s a great report from 2012 [Living Under Drones] on a surveilled area in Pakistan where people were afraid to send their children to school. When the drones were out, people were afraid to go to funerals or other community gatherings. There are real human rights aspects of this.

What did Google’s leadership say at the time?

Executives were under a lot of pressure from employees to justify why the company thought it was okay to do this work without telling them, so they came up with these AI principles that said, ‘We won’t develop AI weapons.’ But the weapon is the tip of the spear. Behind each use of weapons is a whole lot of surveillance and analysis, and that’s the part Google’s interested in.

A section of the principles said, ‘We will not do surveillance projects that violate internationally accepted norms.’ But international norms about surveillance are whatever the Five Eyes countries want to do. There are no treaties or commonly accepted international rules. It’s a free-for-all where the powerful do what they want.

It was the last straw for me. The fact that they included such a broad range of surveillance projects underlined that Google was very much willing to do this sort of work, and I thought that I could probably speak about it more freely if I was outside of Google.

After leaving Google, you joined the Stop Killer Robots coalition. What is that?

After I spoke to The New York Times for a feature about Project Maven, the campaign Stop Killer Robots contacted me. And, when somebody says, “Do you want to come to the United Nations and talk about why AI weapons are bad?” you say, “Yes.”

Stop Killer Robots advocates for international legal instruments or regulation about so-called killer robots — weapons that are able to select targets and to use force without a specific human command. For instance, Turkey has a small drone that has facial recognition built in and is reputed to have autonomous features. So it’s entirely technically possible now to build hunter-killer drones that can track people down based on facial recognition, which is really not accurate enough for that sort of thing.

Via the UN Convention on Certain Conventional Weapons (CCW), the campaign advocates for a preemptive ban on autonomous weapons, and for regulation to make sure that other weapons that have some form of autonomy are built and operated safely. People think that when you have one of these treaties, you have this static law and everyone obeys the rule. But international arms control and disarmament treaties are more complicated than that.

I think it’s important to think about moral norms. Not everything has to be written in stone to be effective. Look at nuclear weapons: there is a very strong moral norm against their use even though a number of nation states have them. I hope that alongside the treaty, we can think about the moral drawbacks of AI weapons that target people.

We’re seeing the increased use of automated decision making systems in important and high consequence contexts. If we can’t agree that it’s a bad idea to automate the most high-risk decisions, then what hope do we have for AI systems for things like job applications and social welfare benefits? To me, autonomous weapons are a very important ethical problem in their own right and the canary in the coal mine for all these other things.

What advice do you have for developers and regulators based on your experience?

As a developer, it’s entirely possible to work on systems without having a great idea of the big picture; you’re working on your own part of the puzzle and don’t necessarily know how it’s put together. This is why I advocate for more alarms, explicit rules, and laws. We need to start regulating autonomous weapons, surveillance, and privacy.

These ethical conversations are not well advanced. Particularly in the case of surveillance, they’re colored by this perception that these technologies add to security, due to a lack of discussion around the drawbacks.

It’s difficult to do, but I encourage developers to question what companies are doing and the potential ramifications of that work. Walking away from something that you don’t think is right is also a perfectly reasonable thing to do.

No one individual has the power to stop a large corporation from doing something that it wants to do. I certainly don’t have that power. But I have the power to walk away and not contribute to it. I have the power to go outside and speak about it. It’s important that we all do what we can.

Portrait photo of Laura Nolan is by Hannah Yoon (CC-BY) 2022

Mozilla has taken reasonable steps to ensure the accuracy of the statements made during the interview, but the words and opinions presented here are ascribed entirely to the interviewee.