Transcript: The AI Medicine Cabinet

This is the transcript of Mozilla’s IRL podcast episode, The AI Medicine Cabinet, from September 12, 2022 (IRL Season 06, Episode 05).

Bridget Todd:

You’ve got to love the original Star Trek TV show from the 60s with its utopian, futuristic spaceship. Remember how Dr. McCoy had that device with the flashing lights that could diagnose any medical condition?

Bridget Todd:

I wish we had this in real life. I’m a huge fan of sci-fi, but I wonder if sometimes we get too swept away by the promise of some technologies to help doctors and patients.

Bridget Todd:

AI is unlocking opportunities in the medical field for diagnostics, discoveries, and even treatments. But in a world where access to healthcare is anything but equal, how can AI developers build healthier systems and data sets for healthier people?

Bridget Todd:

[foreign language 00:00:48]. This is a synthetic voice of a new AI chatbot in Rwanda, created by a local team of speech tech developers. It gives millions of people access to vital information about COVID-19 over the phone. We’ll hear more about this in a bit.

Bridget Todd:

I’m Bridget Todd, and this is IRL, an original podcast from Mozilla, the nonprofit behind Firefox. This season, it’s all about AI. Today, we’re talking to AI innovators about life, death, and data, specifically personal data that should be private and donated data that could save lives. Before we begin, a trigger warning. We’re starting with the story of someone who lost a loved one to cancer.

Avery Smith:

I met Latoya. We married in 2008, and we were married for a little over a year or so.

Bridget Todd:

That’s Avery Smith. He’s a software engineer in Takoma Park, Maryland. His wife, Latoya, was a doctor, a podiatrist. One day after they’d been married for about a year, she found a strange raised mole on her scalp. It turned out it was melanoma, a serious form of skin cancer.

Avery Smith:

At the time, the general understanding was that, oh, Black people really don’t get melanoma, so that was an untruth. And then going through that process was very challenging, because a lot of the statistics and protocols and services available, and the doctors that you interact with, and everything, made it seem like it was a very novel case, which it was. After the diagnosis, my wife passed away 18 months later.

Bridget Todd:

Here’s the thing: for generations, medical textbooks mostly did not include images of a variety of skin tones, and they still don’t, for the most part. If you’re searching for skin conditions on Google, you’ll notice this too. White skin is the typical reference. I have a darker skin tone, so I have definitely experienced this firsthand. In fact, I almost spent a ton of money on an expensive bed bug treatment after unsuccessfully searching things like “dark skin, skin rash” on Google. And after all that, it turned out to just be poison ivy.

Bridget Todd:

There’s a lot of racial disparity in dermatology when it comes to quality of care. And it shows in skin cancer survival rates too.

Avery Smith:

It can seem like it’s a hopeless situation because people are just unfamiliar with what it is that you’re going through.

Bridget Todd:

In the years following Latoya’s death in 2011, an idea began to take shape for Avery. He started imagining an app where Black people could photograph a skin problem and have it automatically diagnosed. He came across a research paper in 2017 by someone who was working on an image recognition idea similar to his. Well, to an extent.

Avery Smith:

The difference was, when that paper came out, they made no mention of the variety of different skin types, and particularly Black skin. It made no mention of that, so I called them. I got the person on the phone and then I asked him the question. I said, “Hey, I think what you’re doing is great. Is there a consideration for Black skin?” His response was, “Well, yeah. We like to do this for all skin types,” which was okay, but what that basically told me was that the answer was, no, it doesn’t do it for Black skin. And I was like, “Okay, well, why doesn’t their research paper point that out?” It doesn’t, it just kind of ignores it. So, I thought it was a bit misleading.

Bridget Todd:

Through their references, Avery discovered a project to create a very large repository of photos of skin problems as a resource for machine learning.

Avery Smith:

The resource, which was referred to as this chief resource, did not contain much of any Black skin inside of it. It was virtually none at all. And we’re talking tens of thousands of images. And they told me straight up, they said, “Hey, we know we’re lacking in this area. If you would like to help us, we’d more than appreciate your contributions.” And so, that let me know. I said, “Okay, hey, I think I’m onto something.” So, that’s how I got into the concept of image recognition, machine learning, and dermatology for Black skin.

Bridget Todd:

Avery co-wrote a report on his research, which has been frequently cited. And he also started his own company, Melalogic. Because there are incredibly few Black dermatologists in the U.S., and around the world for that matter, Avery is using AI to create a shortcut for people to get answers.

Bridget Todd:

Melalogic users can upload photos of their skin to receive feedback from real doctors. In exchange, Avery asks them for a data donation. Their photos are anonymized, labeled, and added to a repository that will be used as training data for machine learning that works specifically for Black people. Avery knows that there are giant companies working on similar diagnostic tools, but he believes he will earn the trust and participation of his community.
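
As a rough illustration only, here is a minimal sketch of what a photo-donation step like the one described above might look like in code. Every function and field name here is hypothetical and is not taken from Melalogic itself.

```python
import hashlib
import uuid

def stage_donation(image_bytes, doctor_label, consent_given):
    """Hypothetical sketch: anonymize a donated skin photo, attach the
    label a reviewing doctor provided, and stage it for a training
    repository. None of these names come from Melalogic itself."""
    if not consent_given:
        raise ValueError("A donation requires explicit consent.")
    return {
        "record_id": str(uuid.uuid4()),                           # random id, no user identity attached
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),  # detect duplicates without storing who sent it
        "label": doctor_label,                                    # e.g. "melanoma", "eczema"
    }
```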

Avery Smith:

They got more money than me. Google, Johnson & Johnson, all these other companies, they got more money than I do, but they don’t have my story. That’s one thing. They did not lose their wife to skin cancer. In addition to it, those companies are not necessarily run by people who are affected by this problem.

Bridget Todd:

Solving racial disparity in healthcare is core to Avery’s mission. And it impacts how he builds technology and how he will measure success. Eventually, he aims to offer automatic diagnosis of skin conditions through a mobile phone, not just for cancer, but for thousands of other skin conditions too.

Avery Smith:

It’s not just a color thing, it’s not just a DNA thing, but do I understand the specific hurdles and problems that we deal with on a regular basis? And then, how can those things be overcome through technology? And I happen to exist at a junction between being a caregiver, being a software engineer, and also having Black skin. And so, being at the nucleus of all of that helps me put together solutions that somebody like Google or Johnson & Johnson just couldn’t, because they don’t have that perspective. And they’re trying to appease so many different groups of people.

Bridget Todd:

It’s still early days for Melalogic. Avery recently joined a local startup accelerator program for purpose-driven companies called Conscious Venture Labs. Meanwhile, AI tools for diagnosis are already in use around the world for dermatology and other medical branches, with not much attention paid to the bias that exists in data sets. Even when human life is at stake, there is no guarantee that an AI bias will be acknowledged or clearly communicated.

Avery Smith:

Somebody can say, “Well, why not all skin types?” Well, this goes back to the bias that I was talking about: when you focus on everything, you’re really not that good at anything.

Bridget Todd:

Let’s beam ourselves back a couple of years. In the early days of the pandemic, software developers around the world were joining forces to think about what they could do to help save lives, including in Rwanda.

Remy Muhire:

Local developers actually understand the real problems in their communities.

Bridget Todd:

Remy Muhire belongs to a community of open source developers in Rwanda. They work on software for underrepresented African languages. The group is called Digital Umuganda. Their idea at the start of the pandemic was to use AI to solve a critical problem: access to public health information. Remy describes how the country’s central health agency, the Rwanda Biomedical Center, or RBC for short, was overrun with phone calls.

Remy Muhire:

The call center was actually hit with over 2,000 calls per day, which was actually a very big issue because there were quite a lot of restrictions and lockdowns. People couldn’t actually move or get away from their homes. And people were always curious, so they were calling to ask when the re-openings would be, and all of that.

Bridget Todd:

Internet connectivity is low in Rwanda, and the RBC had no capacity to deal with that volume of calls. Together, they developed an AI chatbot that works in Kinyarwanda, English, and French. It’s called Mbaza, which means ‘ask me’ in Kinyarwanda.

Remy Muhire:

Basically, you can make a phone call to the Rwanda Biomedical Center hotline, 1-1-4, and have the bot interact with you. You can also send a message over WhatsApp, SMS two-way interactions, or Telegram. That’s where the conversational AI messaging part comes in. And you can also use USSD, which are short codes that people dial.

Remy Muhire:

One of the lovely parts of Mbaza is that you have people in rural areas who can just make a phone call, because they’re not tech savvy. They’re not digitally trained as well. And then, just with a phone call, they can get assistance.

Bridget Todd:

Since Mbaza launched nearly two years ago, it’s been used by more than 2 million people. You could ask it questions about COVID infection rates and more.

Remy Muhire:

People in rural areas use the service to check on their vaccination status and also to check on their COVID results. There are actually restrictions on people getting into public places in Rwanda if they’re not fully vaccinated. If there’s a mass gathering like a wedding or a funeral, you actually need a COVID test to attend. Yeah. So, those are some of the use cases for people in remote areas, yeah, to access Mbaza.

Bridget Todd:

Remy offers a basic explanation of how it works.

Remy Muhire:

The bot first has to transcribe the voice into text. [foreign language 00:11:14]. And then it actually picks out the keywords and sends them off to retrieve information regarding the actual number of cases on the current date. And then that information will come back as text as well. And then the bot actually has to read that text back to the listener.
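
To make that pipeline concrete, here is a minimal sketch of a single bot turn following the steps Remy describes: speech to text, keyword spotting, lookup, then text back to speech. The function names and data source are assumptions for illustration, not Mbaza’s actual code.

```python
def answer_voice_query(audio, stats_by_keyword, transcribe, synthesize):
    """Hypothetical sketch of one voice-bot turn. `transcribe` and
    `synthesize` stand in for Kinyarwanda speech-to-text and
    text-to-speech models, and `stats_by_keyword` stands in for the
    health agency's data source."""
    text = transcribe(audio)                                    # voice -> text
    words = set(text.lower().split())
    matches = [kw for kw in stats_by_keyword if kw in words]    # keyword spotting
    if matches:
        reply = "; ".join(f"{kw}: {stats_by_keyword[kw]}" for kw in matches)
    else:
        reply = "Sorry, I did not catch that. Please try again."
    return synthesize(reply)                                    # text -> voice

# Example call: answer_voice_query(call_audio, {"cases": 312}, asr_model, tts_model)
```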

Bridget Todd:

Mbaza came together quickly, but years of work had gone into creating a voice data set for Kinyarwanda. That’s a language spoken by 12 million people in Rwanda. For two years, Remy was a community leader on a project to crowdsource voices through Mozilla’s open source platform, Common Voice. Data sets in more than a hundred different languages can be used by anyone to train machine learning models for voice applications. This opens up new opportunities, especially for languages that aren’t served by big tech, many of which are spoken by millions of people all over the world.
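
For developers curious what using such a data set looks like in practice, here is one plausible way to load the Kinyarwanda portion of Common Voice with the Hugging Face datasets library. The exact dataset id and version below are assumptions, and access to some releases requires accepting the license terms and authenticating first.

```python
from datasets import load_dataset

# Dataset id and version are illustrative; check the hub for the current release.
cv_rw = load_dataset("mozilla-foundation/common_voice_11_0", "rw", split="train")

# Each example pairs an audio clip with its transcript, the raw material
# for training a Kinyarwanda speech-to-text model.
print(cv_rw[0]["sentence"])
```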

Bridget Todd:

Remy calls open source software and data like this digital infrastructure. It empowers local developers to create solutions to local problems, like access to public health information for people who can’t read or don’t have access to the internet. With more infrastructure, there can be more innovation.

Remy Muhire:

So how do you build services that everyone can actually have access to? We have Mbaza as an example, and then from Mbaza, and especially the work we did at Mozilla on the language infrastructure, like the Kinyarwanda language model, which is open source, I think the impact is actually yet to come. And we have more years to come, and this will be phenomenal.

Radhika Radhakrishnan:

Healthcare is not a technological problem, right? Healthcare is a social problem. It’s a development problem. So when you’re trying to fit a technology solution there, you really need to find what the problem is.

Bridget Todd:

Radhika Radhakrishnan is a scholar from Bangalore, India. She started in computer science engineering and then pivoted to feminist technoscience studies. In her research, Radhika has focused on AI and data governance.

Radhika Radhakrishnan:

I really began critically interrogating the technologies that I had been working on and building for many years before then. And it opened up this whole new world for me, because I realized that we, as engineers, were not really understanding the social implications of the tools we were building in the labs, not understanding how people were impacted by them.

Bridget Todd:

Radhika describes the AI in healthcare space in India as vast. She says that it often involves partnerships between Indian hospitals and multinational corporations like Google, IBM, and Microsoft. One product of these partnerships is diagnostic tools for people living in remote regions.

Radhika Radhakrishnan:

I have been living in Bangalore for a long time, which is the IT capital of India. And a lot of startups around me were working on these AI healthcare technologies. So I thought, “I think that’s what I should focus on next.” I did my master’s thesis on AI in healthcare in India. And the field work for that took me to places all around the country. And the stuff I saw there got me really worried because people weren’t talking about that side of AI. We were only talking about how it could be used for amazing things, how it was the next best thing to happen to humanity, AI for social good. But what I found was quite problematic, and that’s where this journey began for me in this space.

Bridget Todd:

Radhika says diagnostic systems are widely used to describe medical problems, to predict medical issues in patients, and even to prescribe treatments or medicines. But she identified flaws in how they are devised and deployed.

Radhika Radhakrishnan:

The problem that’s been identified by the tech companies is, we’ve got rural areas that don’t have access to healthcare. And the problem is that you don’t have enough doctors in the country to go and sit in the healthcare centers in these remote areas. And the ideal solution should be, “Hey, let’s get more doctors to these places that don’t have enough doctors.” Right? But what the tech companies are doing is sort of completely sidestepping that altogether and saying, “Let’s introduce some kind of technology that’s powered by AI, that will therefore allow us to make these diagnoses from a distance without so much reliance on a doctor being physically present.”

Bridget Todd:

Small clinics in the regions surrounding the hospital scanned the eyes and bodies of patients with different diagnostic tools. They sent thousands of images back and forth between cities and villages. Radhika was disturbed by how often doctors would refer to patients as data points rather than as people. Algorithmic systems were tested directly on patients. She saw a conflict of interest between the commercial interests of tech companies and the human interests of patients. People in villages were forced to sign consent forms they sometimes couldn’t even read. It was either that or receive no treatment.

Radhika Radhakrishnan:

A lot of the doctors told me, “No, see, everything’s going perfectly fine because nobody’s complaining. The patients are not resisting it. They are not opposing it.” And so what’s really happening is that this lack of a choice is translating into a lack of any kind of opposition to the data collection. And that is quite conveniently construed as consent to the use of these technologies. Whereas at the end of the day, the patients themselves don’t have much of an idea about what’s happening with their data, or why their data is being collected, because traditionally you wouldn’t need to scan that particular body part for the diagnosis they’re going in for; it’s only being scanned because you’re using an AI-enabled image scanning algorithm. They are just trusting the system. And that’s sad, because that brings you to the next issue that is happening on the ground with patients, which is a lack of accountability.

Bridget Todd:

Radhika says designers of the systems always insist they have good intentions. Her concern is that they misrepresent business opportunities as AI for social good. She saw evidence of systems being deployed without testing, public documentation, or safety nets.

Radhika Radhakrishnan:

None of the medical practitioners I spoke to had any kind of consensus on questions about what will really happen if there is an error in the diagnosis from the system.

Bridget Todd:

When Radhika interviewed tech workers about ethics and consent, they passed the responsibility onto hospitals.

Radhika Radhakrishnan:

It reflects a massive evasion of ethical responsibility towards patients. And that is putting people at risk from untested technologies, and not really understanding what their experiences are, not really even asking them whether they are okay with it. One of the medical practitioners actually told me that they don’t even bother to explain to the rural patients what data is being collected, why it’s collected, what they’re doing with it, because, they blatantly said, these patients are not going to understand. This massive infantilization of an entire population, which as a result hinders their own ability to understand their diagnosis, is widespread. It’s horrendous. And that’s the tool we are getting at the end of the day. That’s our AI for social good on the ground.

Bridget Todd:

Radhika says regulation could help establish who would be held responsible if there was an error in the system, for example, an incorrect diagnosis. It could also help establish a process for people to seek recourse. Policies could also guide procedures for getting consent from patients, and for providing and explaining meaningful healthcare alternatives if they say no. Radhika says many regulations focus only on the commercial deployment of tech products, but the risk of harm is just as serious during data collection and testing.

Radhika Radhakrishnan:

That’s a gray area all of these companies are functioning in right now. The tools are not technically deployed yet. They’re still being tested. And there’s not a lot of regulation that’s looking at how to make sure that testing itself is being done ethically.

Bridget Todd:

One of Radhika’s key recommendations is that hospital treatment facilities should be kept separate from algorithmic testing facilities. But that doesn’t mean she’s completely against AI in healthcare.

Radhika Radhakrishnan:

It often comes across as though anyone who’s been critical of some of these applications is critical of the technology itself, which is not true. There are definitely ways of doing this better. And I think the whole goal of even doing research of this kind is to show that there’s a way to do this better. And if we do it better, everyone benefits.

Bridget Todd:

So much of our technology, especially when it comes to health, purports to help us be our best selves. For instance, have you ever felt depressed and wondered, “Maybe there’s an app for that”? A lot of people have. Mental health apps have boomed in recent years. Many use AI to chat with you, to keep you engaged, and to analyze your words and behaviors.

Jen Caltrider:

It doesn’t play a part in all of the apps. It only plays a part in some of them, from what we can tell.

Bridget Todd:

Jen Caltrider is the lead investigator from Mozilla’s Privacy Not Included Shoppers Guide. This year, the team reviewed 32 popular mental health and prayer apps. They wanted to see how the apps stand up to scrutiny on privacy, security, and AI transparency.

Jen Caltrider:

One of the things that we found is, it’s really, really hard to tell what’s actually going on with the AI in these consumer products, because the AIs they use are often proprietary. And so, a number of companies don’t share what’s going on because that’s their business.

Bridget Todd:

Jen says there are AI chatbots and other natural language processing techniques that are used to listen and respond to users in order to gauge their moods and keep them engaged for longer. She says mental health apps have some of the worst privacy and security practices she has ever seen. It doesn’t matter if it’s a free app or an app you’ve paid for. Neither is guaranteed to keep your secrets.

Jen Caltrider:

What we learned was, they’re collecting a lot of data, creepy data that maybe you don’t want collected. They’re treating it like a business asset. Having a company know that and share that information with a marketing agency, where it could then be used by some really bad people with questionable ideologies to target you, to move you towards a hateful mentality or something like that, because they know you’re vulnerable, it gets really scary.

Bridget Todd:

When it comes to mental health, everyone is vulnerable.

Jen Caltrider:

Maybe you don’t want the world to know that you’re gay and you’re depressed. And that’s something that could live on forever, and somebody could use that to target you with conversion therapy. Because a lot of these privacy policies will say, “We’ll never share or sell your data without your consent.” And you’re like, “Great, I’m protected.” But what does consent look like?

Jen Caltrider:

Sometimes in a privacy policy, you’ll see that it says, “Once you’ve downloaded and registered with our app, you’ve given us consent to use your personal information in the ways we articulate in this privacy policy.” So, people might not understand, when they’re really struggling, that by downloading and installing an app and using it, they’ve suddenly given consent for their information to be shared for interest-based advertising, or sold to data brokers, or used in marketing by this company to try and get more people to use the app.

Bridget Todd:

There was one app that impressed the Privacy Not Included team. It’s called Wysa.

Jen Caltrider:

It’s an AI chatbot founded by a woman who, I believe, is from India, and who, from what I read, was depressed and discovered that she preferred talking to a bot more than to a human. And so, she built this company. And just reading their privacy policy, it was just different. They don’t require that you share personal information to use the app, and you can just use a nickname.

Jen Caltrider:

And then they also talk about how they aren’t going to request your personal data. If you accidentally submit it, they have ways to process it and, they say, irreversibly redact any personal information within 24 hours.

Bridget Todd:

As consumers, it’s hard to know who to trust. With AI in particular, we’re promised great services and don’t necessarily notice our data changing hands.

Jen Caltrider:

I believe that there should be standards for apps that use AI to protect against bias and to provide transparency. Absolutely. Yes. Is that going to happen anytime soon? I don’t know. It doesn’t seem that way. Way too often, we just can’t tell. We can’t tell if it’s trustworthy. We can’t tell exactly how it’s being used. We can’t tell if there’s bias in it. We can’t tell if a user has much control over it. And that’s unfortunate, because that’s what we want to see in the world.

Bridget Todd:

When is the AI in the medicine cabinet trustworthy and when is it snake oil? In the rush to automate more and more, there are huge risks of bias and surveillance. There is rarely enough transparency and accountability. Who has the power over AI? Who is shifting that power?

Bridget Todd:

We’ve heard from champions all season who insist there’s a better way to build, deploy, and understand these technologies. Artificial intelligence has captured the world’s imagination, but let’s not look to algorithms for all the answers. This season showed that the intelligence in AI will come from all of us. We decide how to use AI and we can demand that it be trustworthy.

Bridget Todd:

I am Bridget Todd and that wraps up season six of IRL, an original podcast from Mozilla, the nonprofit behind Firefox. Before I go, check out internethealthreport.org for extended interviews and even photos of everyone you heard from this season. We’d love to hear your thoughts. Did we change your mind about anything? Write us a note and stay tuned.