Values: Considering Human-centered AI
Mozilla Foundation is the non-profit, movement-building and philanthropy arm of Mozilla, a technology company that champions privacy, trustworthy AI, and an open internet. We believe that the internet and technology are among the most powerful tools we have, and that we need to do more to see our aspirations for the human experience come to life online. As a result, everything that we do goes back to the Mozilla Manifesto and its 10 principles.
We lead with our key values:
The internet is a global public resource that must remain open and accessible.
Security and privacy are fundamental and must not be treated as optional.
Human agency must be championed, giving people the choice to shape the internet, the technology they use, and their experiences online.
People must come before profits, which entails balancing commercial interests with public benefit.
Transparent, community-based processes promote participation, accountability and trust.
A lot of the time, people encounter a new technology, get excited about its potential, and forget about all of the existing processes, evaluation structures and safety considerations in place. (We call this shiny object syndrome!) But at Mozilla, our values and the way we evaluate technology are no different when it comes to the development and adoption of AI. In fact, they are critical as AI development, testing, adoption and use continue to accelerate.
We’ve been advocating for trustworthy AI since 2019, with concerted efforts to support researchers, technologists, artists and social justice activists across the globe who are championing a more trustworthy and responsible AI ecosystem. We have also spent significant time evaluating AI technology for internal use to enhance productivity and to amplify and accelerate the work we are doing. When we evaluate and work with new AI technologies internally, these are the considerations we take into account to ensure that our principles are upheld.
Privacy & Security:
What does the AI do with the data it collects? Do you have the ability to limit or restrict that in order to keep your organization’s data secure? Both the organization and the user must have the choice to secure their data while using AI, along with easy options that ensure their data is not shared, sold, or used to train the model.
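For teams that want to make these checks repeatable, the questions above can be encoded as a simple vendor-evaluation record. The sketch below is purely illustrative and not part of Mozilla’s tooling; VendorPrivacyProfile and all of its fields are hypothetical names that simply mirror the criteria listed here.

```python
from dataclasses import dataclass

@dataclass
class VendorPrivacyProfile:
    """Hypothetical record of one vendor's answers to the questions above."""
    collects_org_data: bool          # does the tool collect organizational or user data?
    collection_can_be_limited: bool  # can we limit or restrict what is collected?
    trains_on_customer_data: bool    # is our data used to train the vendor's model?
    easy_opt_out: bool               # is there a simple way for users to opt out?
    shares_or_sells_data: bool       # is data shared with or sold to third parties?

def privacy_concerns(profile: VendorPrivacyProfile) -> list[str]:
    """Return the concerns that would block adoption under the criteria above."""
    concerns = []
    if profile.collects_org_data and not profile.collection_can_be_limited:
        concerns.append("data collection cannot be limited or restricted")
    if profile.trains_on_customer_data and not profile.easy_opt_out:
        concerns.append("no easy opt-out from model training")
    if profile.shares_or_sells_data:
        concerns.append("data is shared with or sold to third parties")
    return concerns

# A vendor that trains on customer data without an easy opt-out fails the bar.
candidate = VendorPrivacyProfile(
    collects_org_data=True,
    collection_can_be_limited=True,
    trains_on_customer_data=True,
    easy_opt_out=False,
    shares_or_sells_data=False,
)
print(privacy_concerns(candidate))  # ['no easy opt-out from model training']
```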
Companies should hold privacy and security as important values in the development and utilization of AI technologies, whether it’s with their own data or with client data. We advocate for legislation when appropriate, but also understand that legislation is not enough.
Transparency:
It is essential to understand how the AI works and what biases may be built into it, and then to know how those biases can be mitigated.
Source attribution: What is being used to train the AI? It is important to know where the information is coming from (and, just as importantly, where it is not coming from).
Accessibility:
AI should be accessible to all, not just those with money and resources. Openness leads to accessibility, which leads to trust and understanding.
Accessibility also means adaptability to different regions and contexts.
Ultimately, AI is a tool for humans to use. But before you set out to understand how it plugs into your organization and how you can use it most effectively, you must first understand how it has been built and trained, ideally with human values and ethics in mind. Consider who your audiences are, who does (and does not) benefit from the tool, what their use cases are, and whether the AI can be tailored to their different experiences and needs.
Once you have selected a system, it is important to negotiate a contract that aligns with your organization’s values and protects your organization’s data. It is easy to get distracted by the shiny new technology, or to think that because the contract is small, there is no room for negotiation. People sometimes forget that they’re the client. At Mozilla Foundation, even though we are a fairly small non-profit, we have been able to negotiate contracts that ensure, for example, that our organization’s and our users’ data is not used to train vendor AI, and that we are clear about what data is being collected and how it is processed, so we can make choices about how it is used.
Our approach to these conversations is to say, “We know you’re hitting the minimum for privacy compliance. At Mozilla, this is what we’ve determined to be best practice, and this is what we would need in order for you to be a partner with us.” We worked with one vendor whose Data Processing Agreement (DPA) originally covered only the jurisdictions set out by the General Data Protection Regulation and the California Consumer Privacy Act. We negotiated with them, saying that we couldn’t work together unless they signed a separate DPA making the terms globally applicable for us. Ultimately, they took an additional step and made the DPA globally applicable for all of their customers. This was a big win for all of their clients, but if they hadn’t agreed to our minimum terms for data protection, we would have looked elsewhere.
When leading with values in the decision-making process, it’s important to remember that, as the client, you have a choice. With so many companies today building AI into their existing services, you can and should look for partners without compromising on your organization’s values. More importantly, as more organizations set ethical requirements for their AI partners, technology companies will have to design with the impact on the human user in mind. Let’s work to keep AI open and accessible to all.