As I wrote a few weeks back, Mozilla is increasingly coming to the conclusion that making sure AI serves humanity rather than harms it is a key internet health issue. Our internet health movement building efforts will be focused in this area in the coming years.
In 2019, this means focusing a big part of our investment in fellowships, awards, campaigns and the Internet Health Report on AI topics. It also means taking the time to get crisper on what we mean by ‘better’ and defining a specific set of things we’d like to see happen around the politics, technology and use of AI. Thinking this through now will tee up work in the years to come.
We started this thinking in an ‘issue brief’ that looks at AI issues through the lens of internet health and the Mozilla Manifesto. It builds from the idea that intelligent systems are ultimately designed and shaped by humans — and then looks at areas where we need to collectively figure out how we want these systems used, who should control them and how we should mitigate their risks. The purpose of this brief is to spark discussions in and around Mozilla that will help us come up with more specific goals and plans of action.
As we dig into this thinking, one thing is starting to become clear: the most likely way for Mozilla to have an impact is by focusing on AI as it shows up in consumer tech.
Beyond this constraint, the universe of possible goals for this work is quite broad. Some of the options that we are batting around include:
- Should we focus on user empowerment, ensuring people can shape, question or opt out of automated decisions?
- Should Mozilla serve as a watchdog, ensuring companies are held accountable for the decisions made by the systems they create?
- Could Mozilla play a role in democratizing AI by encouraging researchers and industry to make their software and training data open source?
- Is there a particular region we could focus on, like Europe, where AI that respects privacy and rights has a greater chance of taking hold than elsewhere?
- Or, should Mozilla focus more broadly on ensuring that automated systems respect rights like privacy, freedom of expression and protection from discrimination?
These are the sorts of questions we’re starting to debate — and will invite you to debate with us — over the coming months.
It’s worth noting that all of these possible goals are focused on outcomes for users and society, and not on core AI technology. Mozilla is doing important and interesting technology work with things like Deep Speech and Common Voice, showing that collaborative, inclusive, open source AI approaches are possible. However, Mozilla’s work on AI technology is modest at this point. This is one of the reasons that we decided to make ‘better machine decision making’ a focus of our movement building work right now. AI represents the next wave of computing and will shape what the internet looks like — how things work out with AI will have a huge impact on whether we live in a healthy digital environment, or not. It is critical that Mozilla weigh in on this early and strongly, and that includes going beyond what we’re able to do directly through writing code. The internet health movement building work we’ve been doing over the last few years gives us a way to do this, working with allies around the world who are also trying to nudge the future of AI in a good direction.
If you have thoughts on where this work is going — or should go — I’d love to hear them. You can comment on this blog, tweet or send me an email. There is also a wiki where you can track this work. And there will be more specific opportunities for feedback on potential goals for our work coming over the next couple of months.
PS. I will write more about the topic of consumer tech and why we should focus on this area in an upcoming post.