
Transcript: The Tech We Won’t Build

This is the transcript of Mozilla’s IRL podcast episode, The Tech We Won’t Build from July 18, 2022 (IRL Season 06, Episode 01).

Bridget Todd:
Most of us would like to believe the work that we do has a positive impact. The tech industry is no exception. But what if you discovered that your work is contributing to something harmful? Would you say, “No, I won’t help build that”? Enormous power over people can be amplified with AI. Companies and governments have their objectives with technology, but are they always right?

Yves Moreau:
We need to accept or refuse to do something because tech people have a quite amazing superpower. They can actually stand up from their chair and go through the door and go and look for another job.

Bridget Todd:
That’s Yves Moreau. He’s a professor of engineering at KU Leuven in Belgium. He specializes in genetics and artificial intelligence. We’ll hear more from Yves in a bit. Before we dig in, a heads up: we’ll be taking on some heavy topics, including surveillance, oppression and war, but the stories are about people who are standing up and making a difference. So here’s the big question. Where should we draw the line between what we can build and what we should build with AI? I’m Bridget Todd and this is IRL, an original podcast from the non-profit Mozilla. This season, five episodes on the perils and promise of artificial intelligence. Thanks to the internet, AI is everywhere now. We’ll be talking about healthcare, gig work, social media and even killer robots. We’ll meet AI builders and policy folks who are making AI more trustworthy in real life.

Bridget Todd:
This season also doubles as Mozilla’s Internet Health Report. In this episode, when do we blow the whistle, call out the boss and tell the world, “This work is not good work and I’m going to demand that we do better”? The largest tech companies in the world have worked their way into our everyday lives. But Big Tech also enables the data collection, cloud storage and machine learning, not just for the consumer internet, but for governments, police, and military. It’s not something that gets talked about all that much. And for employees of tech companies, it can be surprising or even shocking to discover that they may be expected to help build surveillance and weapon systems as part of their jobs.

Laura Nolan:
I think there were some very interesting cultural changes at Google during the time I was there.

Bridget Todd:
In 2017, Laura Nolan was tech lead on a team working on data infrastructure at Google’s Dublin office in Ireland. Often, her job had her flying to San Francisco to meet up with folks at Google headquarters.

Laura Nolan:
And so there was one particular technical leader that I always tried to meet up with whenever I was in the Bay Area. And he and I had a one to one and it was the afternoon. And we were in Google’s San Francisco office and this is a pivotal sort of half an hour, hour in my life. So we were chatting away and he said, “Well, there’s this very big project coming down the pike. And your team is going to have to be involved.” And the project turned out to be building these air gap data centers.

Bridget Todd:
Real quick. An air gap is a security measure where you isolate machines from outside networks.

Laura Nolan:
And I said, “Why? This is going to be a very complex and expensive project.” And the answer I got was, “Well, it’s this thing, Project Maven.” And I said, “What is Project Maven?”

Bridget Todd:
You may have heard of Project Maven in the media. It was a Pentagon project worth millions that would use AI by Google to recognize people and objects in aerial surveillance footage, including from US military drones in conflict zones around the world. Back in 2017, even most folks inside Google weren’t aware of it.

Laura Nolan:
I feel very lucky that this person was straight with me, that he actually told me about this. And didn’t just say, “Oh, well, we’re doing it for reliability reasons or whatever.” I actually got a clear answer. So I was immediately concerned.

Bridget Todd:
This was not what Laura had signed up for when she first accepted a job at Google.

Laura Nolan:
Organizing the world’s information is pretty uncontroversially good, whereas running massive surveillance systems that are geared towards finding people to kill is ethically murky at the very least and in my view, flat out wrong for quite a lot of reasons.

Bridget Todd:
The goal of Project Maven was to use AI to speed up the military’s identification of people and objects.

Laura Nolan:
People were saying, “Well, Google’s values are not relevant. What we should be trying to achieve here is to have a sort of a value neutral cloud computing platform and let people more or less do with it as they will.”

Bridget Todd:
Laura was shocked at first. And then she turned to her colleagues to encourage resistance.

Laura Nolan:
I couldn’t really talk about it to anyone in my life who was outside of Google. And other Googlers that I would speak to would just say, “Well, that’s weird, Laura, that can’t be happening. It can’t be that Google is analyzing drone videos so that the US can kill people more efficiently. We wouldn’t do that surely.” But it turns out that was exactly what Google was doing.

Bridget Todd:
So Laura expressed her concerns internally and many others did too.

Laura Nolan:
The narrative that was mostly coming out from Google’s leadership at the time was that it was sort of supporting the US troops, trying to protect US troops, supporting the sort of the US war effort really.

Bridget Todd:
For Laura, the work she was asked to do felt morally impossible.

Laura Nolan:
So we’re talking about every individual in a particular geographical area being tracked in real time or close to real time, potentially with this system. Every time you step outside of your front door and go somewhere else, that system is detecting that, it’s logging that. It knows which buildings you go to. It knows which houses you go to. It knows who comes to your house. It knows who you’re connected to. They were talking about building a user interface that you’d be able to click on a house and it would show you a timeline of all the interactions that the inhabitants of that house are having.

Bridget Todd:
Eventually, the objections multiplied and 4,000 staff members famously signed a petition for Google to pledge never to build warfare technology. Laura quit her job.

Laura Nolan:
No matter what my executives are telling me, I’ve got to sort of go by my own moral compass here, which is that I could not be in any way involved in supporting this work. Google was very much willing to do this sort of work and they just wanted to do it quietly. And they just wanted everyone to be quiet about it.

Bridget Todd:
Under public pressure, Google did back out of Project Maven, and 2018 became a watershed moment for protests by tech employees at a number of companies in the US. There was also Cambridge Analytica, scandals over sexual misconduct and protests over labor conditions. But let’s face it, Project Maven wasn’t the biggest, nor the first, nor the last collaboration between the Department of Defense and internet companies we interact with every day. When people say technology is neutral, is it a way to avoid taking responsibility for the harms that could be done? Laura now uses her personal story to warn policy makers against the dangers of combining AI with weapons.

Laura Nolan:
I had spoken to the New York Times about Project Maven. They did a big feature on it and the Campaign to Stop Killer Robots contacted me and said, “Hey, do you want to come to the UN?” And when somebody says, do you want to come to the UN and talk about why AI weapons are bad, you say yes. So I did.

Bridget Todd:
The Campaign to Stop Killer Robots is a global coalition of more than 180 member organizations. They want to see a ban on weapons that can automatically open fire without a human pulling the trigger. For tech workers like Laura and others who come forward publicly, raising the alarm about the ethical implications of technology can come at a high cost, too high, professionally and personally. Making sure that there are legal protections for people who blow the whistle is crucial for all of us.

Laura Nolan:
If we can’t agree that it’s a bad idea to automate the most high risk decisions, then what hope do we have when we think about how we should be using AI systems for things like job applications and social welfare benefits? We’re seeing increased use of automated decision making systems in so many of these important and high consequence contexts. So to me, autonomous weapons, as well as being a very important ethical problem in their own right, are also this sort of canary in the coalmine for all these other things.

Yves Moreau:
I’ve been quite amazed in the past five years in interactions I’ve had, when you actually calmly, politely, in a well argued fashion, explain that there is a problem. Then it’s actually quite frequent that a few people will say, “Yeah, actually I think we have a problem.” And then the dynamics of decision making can change very quickly.

Bridget Todd:
That’s Yves Moreau, again. He’s a professor of engineering in Belgium who researches genetics and how to better diagnose diseases with artificial intelligence. Yves works at a university, not a tech company. But for years, he has been concerned with how often machine learning research about the DNA of people of certain ethnicities appears in academic journals, because it’s actually used to target people in real life.

Yves Moreau:
This is not just, “Yeah, I’m developing a new technology because I’m interested in knowing how faces can be different across different ethnic groups.”

Bridget Todd:
The thing about AI research in journals is that it directly contributes to how AI technologies are deployed in the world. It’s how proofs of concept are shared between researchers and practitioners. So when Yves sees papers that trace the genetic markers of persecuted groups in China, for instance, he objects. He reaches out to the publisher.

Yves Moreau:
Because of the ongoing persecutions in Xinjiang, well, you have to take that context into account. Millions of people have been sent to camps in Xinjiang. There is facial recognition everywhere. There is tracking of individuals, of their behavior. If you get detected as being deviant, you can be sent to a camp. There is forced labor. There is forced birth control. I mean, this is research that enables the potential control of certain ethnic groups. There are actually products on the market from several Chinese suppliers that have functionalities embedded to actually tag people in video feeds based on their ethnicity, in particular Uighurs. I think mass surveillance is one of these key issues; it is going to be really a battlefield for the shape of societies in the 21st century.

Bridget Todd:
Yves has been taking the battle to academic journals and publishers, demanding that they retract papers that he considers to have questionable ethics. It doesn’t always work. In the past few years, Yves has requested ethical investigations for around 60 articles in a dozen different journals, including several with large multinational publishers. One paper, for example, used AI powered analysis of ethnic facial features of Uighurs and Tibetans. Yves questions whether this research is ethical.

Yves Moreau:
The ethical review is not just checking that, well, did people consent and is somebody going to get harmed here right now during the experiment? Saying, well, the people whose data was collected, they were not harmed. That’s not enough. I mean, if you’re developing technology where it’s quite clear that this is leading to serious concerns, then you have a problem.

Bridget Todd:
Yves says that so far, his campaign has resulted in eight retractions for papers on DNA profiling and ethnic facial recognition. He says the articles he has identified are only the tip of the iceberg. His call to action isn’t just for publishers. It’s also a call for researchers and research institutions to hold themselves and each other accountable for the huge risks posed by mass surveillance technology worldwide.

Yves Moreau:
We actually have a huge moral authority on what happens with the research we do. We invented it, we created it, we brought it to the world. We discovered it. And so we have a special right to talk either very specifically on the things we did ourselves or as representative of the community that did it. We actually have a lot of moral authority, which we almost never use.

Bridget Todd:
We’ve talked about resistance in companies and in research communities. But resistance to algorithmic oppression happens outside of establishment structures, too. And so does innovation around data for equity.

Yeshimabeit Milner:
Just because technology is new, does not mean that it’s innovative and it definitely doesn’t mean that it’s beneficial. One of the things that data weapons do is exacerbate and magnify the already harmful and racist and punitive carceral system that we’re living under.

Bridget Todd:
Yeshi Milner is a technologist and data scientist and the founder and executive director of the nonprofit Data for Black Lives in the US. She coined the term data weapons. What are data weapons? According to Yeshi, they’re any technological tools, often AI tools, that are used to surveil, police and criminalize Black and brown communities.

Yeshimabeit Milner:
We’re thinking about technological tools that range from something as simple as camera technology, all the way to things more complicated, like facial recognition, hardware like x-ray vans that expose anything in a given range, including vehicles and homes, to cell site simulators, electronic monitoring. And then the development as well of these larger networks and algorithms such as domain awareness systems, as well as real time crime centers and fusion centers, which are oftentimes actual brick and mortar places, but are really the convergence of all of these technologies and their uses.

Bridget Todd:
Data for Black Lives is a movement of activists and scientists dedicated to using data to create measurable change in the lives of Black people. They say mainstream big data systems are designed to do the opposite, particularly when it comes to policing. Time and time again, costly surveillance and tracking systems are prioritized ahead of investments in communities. Across the US, automatic license plate readers are mounted on road signs, bridges, and on police cars. They are constantly generating detailed records of people’s movements. Many are run by private companies, companies that sell data to police with little oversight. Yeshi’s team has met with public attorneys to understand how data weapons impact their clients.

Yeshimabeit Milner:
Their client got dinged on an automated license plate reader system, and now is in the NYPD’s domain awareness system, which is a very sophisticated kind of algorithm. And now is incarcerated and potentially going to be deported.

Bridget Todd:
Yeshi explains how these tools for monitoring and collecting data about supposedly suspicious people are, in fact, an extension of historic patterns of controlling Black people, patterns that have existed since before slavery was abolished until today.

Yeshimabeit Milner:
Some of the biggest data collection was stop and frisk. People don’t realize that stop and frisk policies weren’t just about intimidating Black folks and scaring them. That’s a big part of it, but it was also to get people into the system, into the NYPD database.

Bridget Todd:
Yeshi believes, as a matter of urgency, that the technology used by law enforcement today needs to be fully accounted for and made transparent. Because part of what makes these technologies so powerful is limiting who gets to make decisions about them.

Yeshimabeit Milner:
Our priority is to focus on uplifting and naming these data weapons just because of their nature. They are obfuscated. They are hidden. A lot of them are proprietary algorithms that we can’t even do a FOIA request to better understand.

Bridget Todd:
By the way, FOIA is the Freedom of Information Act in the US. Data for Black Lives has brought thousands of people together on initiatives to create data tools that help expand opportunities for housing and education and healthcare, often in partnership with other organizations. By building differently, they’re highlighting the injustice of mainstream conventions around the use of big data and algorithms. And they’re showing that there’s hope in redirecting powerful technologies to improve lives.

Yeshimabeit Milner:
I think the kind of information and the kind of data that we need is one that’s going to actually achieve those things versus increased policing and increased surveillance. We have an opportunity to do that with big data and with machine learning.

Bridget Todd:
From unmanned drones to social media systems, what does it feel like to be on the receiving end of technologies designed by foreign superpowers?

Shmyla Khan:
The relationship that many of us have with technology is one sided, especially in the global south. Where a lot of this tech, the apps that we use, the devices that we use have been built in other places, in other contexts, by people who have not really sort of imagined us as the end users. And that is a really important issue because if that tech is not built for you or with you in mind or your needs in mind, that is a sign that you’re excluded from those conversations.

Bridget Todd:
That’s Shmyla Khan. She’s the research and policy director for the Digital Rights Foundation in Pakistan. Its goal is to stop the weaponization of social media against women, minorities, and dissidents. This is a problem that’s made worse by algorithms and lack of transparency. The organization has a cyber harassment helpline and advocates for policy and platform reform. Shmyla says that too often civil society is trying in vain to get their voices heard by social media companies who develop platforms in one part of the world and are causing harm in hers. And Shmyla says that even when they listen, there’s no accountability.

Shmyla Khan:
I think that’s sort of the main frustration that over the years, sort of me and other sort of colleagues also in the region have had, that we don’t just want to sit in a room and use all our labor and sort of give you all our expertise and for these companies and these tech people who are building this tech to then not be accountable for that. So I think that’s also a really extractive relationship that we’ve seen even when voices are included.

Bridget Todd:
When Shmyla was in college, she remembers hearing reports of unmanned drone attacks in nearby regions every day. It instilled an awareness in her about the particular dangers of technology developed in one part of the world with life or death effects in another.

Shmyla Khan:
There were people sitting in parts of the US, and they felt like they were playing a video game, where there would be these barren sort of areas, sparsely populated a lot of times, or with terrain that they don’t really understand. And they would just be executing these kill lists. And it was so sort of jarring. I think the first time I sort of read about it, I was just completely taken aback that it’s such a clinical way of taking people’s lives, sitting halfway across the world, and the kind of sort of distance that creates between what you’re doing and what’s happening on the ground.

Bridget Todd:
This distance Shmyla is talking about is a feature of automation. AI helps create an illusion of something happening without human input. In some cases, it acts as a high tech smoke screen to blur harms against people. Shmyla gives an example of a mobile app for women backed by her own government. It’s designed as a panic button that women can press to get help if they’re in danger. But given the limitations on women’s rights in Pakistan, Shmyla says it’s a dangerous tracking technology.

Shmyla Khan:
The essential message to women is that if you want security, then you’ll have to give up privacy, which is extremely problematic, but also sort of indicative of how the state conceptualizes security itself. It is surrendering yourself to complete and total surveillance as a way of being granted that privilege of security.

Bridget Todd:
Shmyla worries when any technology is presented as a silver bullet that can fix larger societal problems, like an app to protect women from violence, when few women in Pakistan even have phones. With AI, overconfidence in technology is amped up.

Shmyla Khan:
I say it a lot to people, that I’m usually that person in a room, or my colleagues and I are those people in a room, who are always questioning why something needs to be built in the first place.

Bridget Todd:
Can we build this? Yes. Should we build this? Depends on who you ask. Most people don’t get any say on what tech should be built or how it should be deployed in the world. But maybe you do. I wish we lived in a world with no killer robots. So how about for every conversation we have about the tech we won’t build, let’s have three more about the tech that we do want to build, with the people we’d like to build it with. AI is here to stay, but let’s use it for something better. This is IRL, an original podcast from Mozilla, the nonprofit behind Firefox. Follow us and come back in two weeks. This season of IRL doubles as Mozilla’s annual Internet Health Report. To learn more about the people, research and data behind the stories, come find us at internethealthreport.org. I’m Bridget Todd. Thanks for listening.