Last year the Mozilla team asked itself: what concrete improvements to the health of the internet do we want to tackle over the next 3–5 years?
We looked at a number of different areas we could focus on. Making the ad economy more ethical. Combating online harassment. Countering the rush to biometric everything. All worthy topics.
As my colleague Ashley noted in her November blog post, we settled in the end on the topic of ‘better machine decision making’. This means we will focus a big part of our internet health movement building work on pushing the world of AI to be more human — and more humane.
Earlier this year, we looked in earnest at how to get started. We have now mapped out a list of first steps we will take across our main program areas — and we’re digging in. Here are some of the highlights of the tasks we’ve set for ourselves this year:
Shape the agenda
- Bring the ‘better machine decision making’ concept to life by leaning into a focus on AI in the Internet Health Report, MozFest and press pitches about our fellows.
- Shake up the public narrative about AI by promoting — and funding — artists working on topics like automated censorship, behavioural manipulation and discriminatory hiring.
- Define a specific (policy) agenda by bringing in senior fellows to ask questions like ‘how do we use GDPR to push on AI issues?’ or ‘could we turn platforms into info fiduciaries?’
Connect leaders
- Highlight the role of AI in areas like privacy and discrimination by widely promoting the work of fellows, host orgs and MozFest alumni working on these issues.
- Promote ethics in computer science education through a $3.5M award fund for professors, knowing we need to get engineers thinking about ethics issues to create better AI.
- Find allies working on AI + consumer tech issues by heavily focusing our ‘hosted fellowships’ in this area — and then building a loose coalition amongst host orgs.
Rally citizens
- Show consumers how pervasive machine decision making is by growing the number of AI-powered products covered in the Privacy Not Included buyer’s guide.
- Shine a light on AI, misinformation and tech platforms through a high profile EU election campaign, starting with a public letter to Facebook and political ad transparency.
- Lend a hand to developers who care about ethics and AI by exploring ideas like the Union of Concerned Technologists and an ‘ethics Q+A’ campaign at campus recruiting fairs.
We’re also actively refining our definition of ‘better machine decision making’ — and developing a more detailed theory of how we make it happen. A first step in this process was to update the better machine decision making issue brief that we first developed back in November. This process has proven helpful and gives us something crisper to work from. However, we still have a ways to go in setting out a clear impact goal for this work.
As a next step, I’m going to post a series of reflections that came to me in writing this document. I’m going to invite other people to do the same. I’m also going to work with my colleague Sam to look closely at Mozilla’s internet health theory of change through an AI lens — poking at the question of how we might change industry norms, government policy and consumer demand to drive better machine decision making.
The approach we are taking is: 1. dive in and take action; 2. reflect and refine our thinking as we go; 3. engage our community and allies as we do these things; and 4. rinse and repeat. Figuring out where we go — and where we can make concrete change on how AI gets made and used — has to be an iterative process. That’s why we’ll keep cycling through these steps as we go.
With that in mind, others from the Mozilla team and I will start providing updates and reflections on our blogs. We’ll also be posting invitations to get involved as we go. And, we will track it all on the nascent Mozilla AI wiki. You can use it to follow along — and get involved.
Thank you for your blog. I have been following this healthy internet discourse, and it has really come at the right time, when nations are so absorbed in global politics and hateful commenting is just a mouse click away. Following the Africa Summer School on Machine Learning for Data Mining and Search that I attended this past January in Cape Town, South Africa, we realised the importance of applying AI to improving the health of the internet. I would love to be part of this movement, as I realised that Africa, among other continents, still lags far behind in managing, and hence automatically deciding (using AI algorithms), what should and should not be posted on social platforms.
The other issue, which seemed to be the greatest challenge especially for Africa, emanated from the language factor. While social media platform algorithms could detect hate speech and block it from being published, they could not detect hate speech composed in African native languages. In that regard, we are hoping to recruit a pool of programmers with knowledge of native languages, forming a team to deal with that issue in particular. It would be nice if this endeavour were incorporated into your efforts to tackle this internet health problem.
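To make the gap I mean a little more concrete, here is a purely illustrative sketch — nothing like any platform’s real moderation system. The `LanguageModerator` class, the toy term list and the language codes are hypothetical placeholders; the point is simply that coverage only exists where per-language resources have been built, and posts in an uncovered language pass through unmoderated. A real system would rely on classifiers trained on labelled examples curated by native speakers, not a word list.

```python
# Toy illustration only -- not a real moderation system. It shows that
# moderation coverage exists only where per-language resources have been
# contributed: posts in an uncovered language pass through unmoderated.
from dataclasses import dataclass, field


@dataclass
class LanguageModerator:
    """Hypothetical per-language resources contributed by native speakers."""
    language: str
    # Placeholder term list; a real system would use a classifier trained on
    # labelled examples curated by speakers of the language.
    flagged_terms: set = field(default_factory=set)

    def flag(self, post: str) -> bool:
        words = {w.strip(".,!?").lower() for w in post.split()}
        return bool(words & self.flagged_terms)


# Hypothetical registry: English has resources here, Shona ("sn") does not yet.
moderators = {
    "en": LanguageModerator("en", flagged_terms={"exampleslur"}),
}


def review(post: str, language: str) -> str:
    mod = moderators.get(language)
    if mod is None:
        return "unmoderated"  # the gap: no resources exist for this language
    return "flagged" if mod.flag(post) else "ok"


print(review("a post containing exampleslur", "en"))                 # flagged
print(review("the same kind of post, written in an uncovered language", "sn"))  # unmoderated
```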