Amidst the current public health crisis, we’re being reminded of just how central the internet has become to our lives. It’s keeping us connected right now: to our friends, our families, our colleagues, our communities. It’s also a critical source of health information for a worried public and a powerful collaboration platform for scientists tackling the spread of COVID-19. In so many ways, we’re seeing the internet that we want and hope for: one that connects humanity in deep and beneficial ways.
However, in the background, we are also seeing a growing tension between collecting data about people who may have COVID-19 and protecting privacy and civil liberties online over the long term. As a New York Times article from earlier this week outlined:
In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains.
In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said.
In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.
On the one hand, all of this makes sense. We all want to rein in the current pandemic, and it’s clear that the right data about its spread can help. On the other hand, we know from 9/11, and from the Snowden revelations that followed a decade later, that changes to surveillance norms made during a crisis can have a lasting impact on civil liberties.
While our attention should clearly be focused on the current health crisis, we should also keep an eye on longer-term questions. How do we leverage data to solve important human problems while protecting privacy and personal agency? What checks, balances and guardrails do we put in place to hold those who abuse our data accountable? Over the past year, Mozilla and our community have talked a great deal about questions like these, looking at how we set new norms for trustworthy data sharing and for the AI that runs on that data.
In many regards, last year’s conversations on these topics feel a million miles away; those were different times. Yet questions of trustworthy AI and data sharing feel more urgent than ever. We have a chance to evolve data sharing and AI norms that are responsible and respectful. Without a sustained focus on why this matters and how it’s possible, though, we will likely head in the opposite direction.
On the positive side, our 2019 explorations in this area led us to a handful of people experimenting with data trusts, data commons and other new approaches to data governance. Approaches like these focus on shifting the power dynamic around data, giving individual internet users more control over how their data is used and opening the way for AI designed for the common good. If we had these approaches in place today, we’d be better prepared for the kinds of questions around sharing health and location data that we’re struggling with as part of the current health crisis.
On the flip side, our collaborations with researchers showed that AI systems like social media content recommendation engines and targeted online ads often end up amplifying misinformation. This led to campaigns pressuring platforms like YouTube to make sure their content recommendations don’t promote misinformation, and calling on Facebook and others to open up APIs that make political advertising more transparent. Misinformation is a real concern in the current crisis. We need platforms to step up to ensure it isn’t amplified, and to do so transparently, so that governments and researchers can assess whether these efforts are effective.
As we face the pandemic, Mozilla is looking for ways to help tackle the current crisis. But we’re also keeping an eye on the long game. This includes: looking back at our draft trustworthy AI theory of change to see which issues are most important to tackle now; monitoring emerging topics like ‘contact tracing’ and the data privacy issues they raise; and continuing to push platforms for transparency around misinformation. This is very much a work in progress and something we’re looking to collaborate on. If you have ideas on how to work together, reach out. Or, watch this blog to track our thinking as it evolves.
P.S. For info on our trustworthy AI theory of change and our 2020 objectives related to AI, check out the Mozilla AI wiki. Obviously, these are plans we’re adapting to the current situation; we’ll post updates on the wiki as we have them.