As with the internet before them, machine learning and automation are opening up entirely new ways to make sense of the world and its data. And just as happened with the internet, we see powerful entities tightening their grip on AI, dominating who profits from it, how it is built, and what it is used for.
Amid the global rush to automate, we see grave dangers of discrimination and surveillance. We see an absence of transparency and accountability, and an overreliance on automation for decisions of huge consequence. But we also find champions insisting there is a better way to build, deploy, and comprehend AI’s potential.
Who has power over AI? Who is shifting that power? These are the central questions of this report. We set the scene with a compilation of research and data visuals about the current state of AI worldwide. And we explore concrete answers via the firsthand stories of innovators who exemplify how to build AI systems in more equitable ways (as well as when not to build).
Our target audience is AI builders and people who develop or influence AI policies. We seek to build bridges of understanding between tech and policy sectors to inspire more collaboration and action. Our recommendations are based on the input of more than 150 people over many months. They build on a larger body of work by the staff, fellows, and grantees of the Mozilla Foundation to pursue more trustworthy AI in tech products, policies, and code.
In five biweekly episodes, we travel the world and dive into an array of topics, including surveillance, labor, healthcare, geospatial data, and disinformation in social media. Altogether in this season, we speak to 19 people in a dozen different countries. As the season progresses, you will also have access to longer text versions of each interview.
Our framework of inquiry focuses on identifying the root causes of problems, based on research and evidence. We speak to changemakers in different fields to arrive at recommendations for what can be done. We care about privacy, security, openness, decentralization, and more.
We consider the internet a global ecosystem that can be healthy and unhealthy in different ways and that adapts to human activity over time. We are guided by Mozilla’s manifesto, our longtime policy and advocacy work, and a recently updated vision for the web.
Previous editions of the Internet Health Report have touched on AI bias and more, but this deep dive reflects evolving ideas about AI in our broader movement, as well as Mozilla’s own theory of change for how to make AI more trustworthy.
We’d love to hear what ideas this year’s report sparked for you. Do you build or research AI? Do you work on AI policy? Which podcast episodes inspired you or challenged you? If you send us a comment in this form, we guarantee it will be read by a human.
So many researchers, Mozilla fellows, staff, and allies generously contributed data and ideas.
This report is produced by Mozilla Foundation’s Insights team.
J. Bob Alotta is the VP of Global Programs and the report’s executive producer.
Solana Larsen is the report’s editor.
Eeva Moore is the engagement manager.
Neha Ravella is the project manager.
Stefan Baack is the data and research analyst.
Kasia Odrozek is the director of Insights.
Bridget Todd is the host of IRL.
Pacific Content supported research and writing of the podcast and managed all production and sound design. Our website and visual design are by the digital agency Rainbow Unicorn, and data visuals were designed and coded by the information design studio Figures. Portrait photography was done remotely by Hannah Yoon, and Agency of None designed the PDF and ePub.
If your name is missing from this list, please email us.