Procedures: Developing a Risk-Focused Framework
By Ijeoma Mbamalu, Chief Technology Officer, American Civil Liberties Union
Last week’s launch of the Navigating AI series led to feedback and conversations that have me more than technically optimistic. Thanks to Mozilla’s Shing Suiter for her thoughts on embracing AI through a firm commitment to organizational values. Please keep the comments coming!
This week, Ijeoma Mbamalu, the CTO of the American Civil Liberties Union, shares what it looks like to explore a crawl-walk-run approach to adopting generative AI, balancing innovation with the need to uphold foundational commitments to such values as privacy and fairness. I hope that the approach that the ACLU has developed will inform — and inspire — your journey.
At the American Civil Liberties Union (ACLU), we have been careful to implement a crawl-walk-run approach to adopting generative AI. Our focus has been to balance the need to innovate with the need to maintain our foundational values of privacy, fairness, transparency, and accountability. Any use of this emerging technology – whether customized with proprietary data and systems developed by the ACLU, or embedded in existing commercial off-the-shelf products that we use in our daily work – must ultimately be in service of our mission (protecting civil rights and civil liberties), values (including promoting equity, privacy, and security), and obligations (to our staff, clients, supporters, and others). Determining how to achieve this requires partnership with a range of experts within the ACLU.
In collaboration with cross-functional experts representing many business functions within the organization, we developed a risk-based framework for holistic discovery of when adopting generative AI tools may or may not be acceptable, based on alignment with ACLU values. As part of the framework, we also created a rubric of risks and risk controls that the ACLU must consider when it transitions out of its “crawl” discovery phase into a “walk” exploratory (or pilot) phase.
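As a simplified illustration of what such a rubric can look like in practice, the sketch below encodes risks and controls and gates the crawl-to-walk transition on every high-severity risk having a documented control. The categories, severity scale, and gating rule here are illustrative assumptions on our part, not our actual rubric.

```python
from dataclasses import dataclass, field

@dataclass
class RubricEntry:
    category: str                                      # e.g. "privacy", "bias", "security"
    risk: str                                          # description of the identified risk
    severity: str                                      # "low", "medium", or "high" (assumed scale)
    controls: list[str] = field(default_factory=list)  # documented mitigations, if any

def ready_for_walk_phase(rubric: list[RubricEntry]) -> bool:
    """Illustrative gate: a tool may leave the "crawl" phase only if every
    high-severity risk has at least one documented control."""
    return all(entry.controls for entry in rubric if entry.severity == "high")

# Hypothetical example: one mitigated high-severity risk, one open medium risk.
rubric = [
    RubricEntry("privacy", "prompts may expose client data", "high",
                controls=["prohibit client data in prompts",
                          "vendor data-processing agreement"]),
    RubricEntry("bias", "outputs may encode covert bias", "medium"),
]
print(ready_for_walk_phase(rubric))  # True: the high-severity risk is controlled
```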
To begin, we created a Generative AI Working Group, made up of members of the ACLU’s Technology team as well as individuals from the following business functions: Privacy and Data Governance, Fundraising/Development, Human Resources, and General Counsel. The cross-functional group’s two-pronged scope of work is to outline principles and guidelines for the ACLU’s approach to the use of generative AI tools, and to apply those principles both when procuring generative AI tools and when generative AI features and functionality are incorporated into existing software used by the ACLU – what our Working Group has termed “Trojan horse” generative AI.
Moving forward, this Working Group will also be a thought partner in identifying which business use cases would benefit most from values-aligned and scalable generative AI solutions for the organization. Members also serve as advisors to staff seeking to understand how they could eventually bring generative AI capabilities to bear on specific use cases related to their work. We urge individuals to reach out to the Working Group to discuss all use cases and to avoid using generative AI tools that are not aligned with the ACLU’s mission, values, or obligations.
Establishing Principles and Guidelines
The first document that the Working Group produced is “The ACLU’s Approach to Internal Uses of Generative AI: Principles and Guidelines.” The 20-page document outlines several principles to guide an assessment of the potential benefits and risks of these technologies, as well as general, easy-to-follow guidelines for business problems that staff may want to solve using generative AI capabilities.
The document begins by outlining the principles that must be centered when considering generative AI solutions, with respect to the ACLU’s mission, values, and obligations, including to its staff, clients, supporters, and those we aim to uplift with our work. While generative AI can offer many benefits, it is important to weigh those benefits against the potential risks the technology poses to each of these groups. Planning for mitigation and remediation should not be left as an afterthought.
It then explores and outlines the following:
• What to do in the case of “Trojan horse” generative AI.
• Questions for staff to ask themselves before considering generative AI tools (a sketch encoding these as a simple checklist follows this list), e.g.:
  • How will the generative AI tool, or generative AI components of the tool, support the work of the ACLU?
  • How well does the tool work?
  • Has the tool been audited or assessed with a focus on issues of equity and discrimination?
  • Does this AI tool pose privacy, security, or other legal risks to the ACLU, our constituents, or society?
  • If this tool proves dangerous in the future, what recourse does the ACLU have to change services or providers?
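As referenced above, these questions could be tracked as a lightweight intake checklist. The guidelines document itself is prose; this encoding, and the follow-up helper, are purely our own illustrative sketch.

```python
# The question text mirrors the list above; the checklist structure is a sketch.
STAFF_QUESTIONS = [
    "How will the generative AI tool, or generative AI components of the tool, "
    "support the work of the ACLU?",
    "How well does the tool work?",
    "Has the tool been audited or assessed with a focus on issues of equity "
    "and discrimination?",
    "Does this AI tool pose privacy, security, or other legal risks to the "
    "ACLU, our constituents, or society?",
    "If this tool proves dangerous in the future, what recourse does the ACLU "
    "have to change services or providers?",
]

def unanswered(responses: dict[str, str]) -> list[str]:
    """Return the questions a requester has not yet answered, so reviewers
    can follow up before any pilot is considered."""
    return [q for q in STAFF_QUESTIONS if not responses.get(q, "").strip()]
```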
Finally, it offers guidelines, approved instances, and examples of business use cases showing how different team members at the ACLU may approach and ultimately use generative AI tools, along with examples of how they should reach out to the Working Group or their colleagues to learn more. In terms of change management, we socialized the principles and guidelines with all staff via presentations at department meetings, at all-staff meetings, and during office hours for any staff who had more questions.
Seeking Values-Aligned Resources
What happens when an employee sees a tool with generative AI capabilities that they think might be helpful? We recognized early on that it would be hard to operationalize the principles and guidelines document in a way that gives each staff member a definitive yes-or-no answer to this question. As a result, we created a second product centered on the procurement of commercial off-the-shelf solutions. The resulting seven-page vendor questionnaire works through all categories of risk related to generative AI use, including security, privacy, bias, copyright, and accuracy.
The questionnaire asks vendors, for example: (1) what datasets were used to train the model, including whether a model card or data documentation is available and whether copyrighted works or materials are in use; (2) whether evaluations for bias and fairness, as well as other social and ethical risks, have been conducted; and (3) whether any efforts to quantify data leakage in model outputs have been made. Across each category, we seek to put the responsibility on the software owner or provider to explain what they're doing, and we try to hold them accountable to their answers. We believe this questionnaire will help inform our exit criteria from the crawl phase into what we hope will become the walk phase: guided, intentional, cautiously optimistic experimentation behind our firewall.
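To illustrate how such a questionnaire can feed concrete exit criteria, the sketch below groups questions by risk category and flags blank or non-committal vendor answers. The exact categories, wording, and flagging rule in our seven-page questionnaire differ, so treat this structure as an assumption.

```python
# Illustrative vendor-questionnaire structure; the categories follow the risks
# named above, but the specific questions and flagging rule are assumptions.
QUESTIONNAIRE = {
    "copyright": [
        "What datasets were used to train the model?",
        "Is a model card or data documentation available?",
        "Are copyrighted works or materials in use?",
    ],
    "bias": [
        "Have evaluations for bias, fairness, and other social and ethical "
        "risks been conducted?",
    ],
    "privacy": [
        "Have you attempted to quantify data leakage in model outputs?",
    ],
}

def red_flags(answers: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    """Flag any question a vendor left blank or answered "unknown", so open
    risk areas can block a tool's exit from the crawl phase."""
    flags = []
    for category, questions in QUESTIONNAIRE.items():
        for question in questions:
            answer = answers.get(category, {}).get(question, "").strip().lower()
            if answer in ("", "unknown"):
                flags.append((category, question))
    return flags
```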
Ongoing Research
Peer-reviewed research is key to how we frame our approach, both in deepening our understanding of these models and in shoring up our understanding of the privacy risks, potential biases, and security issues inherent in them. Our team does a lot of original research on the civil rights implications of automated systems, including AI systems whose risks are in some ways similar to those of generative AI tools. We also partner with external researchers who are examining foundational questions around the evaluation and development of generative AI systems, which helps us stay up to date with what’s happening.
Additionally, we have a reading group open to anyone at the ACLU – including staff at our 54 affiliates around the country – that shares and discusses emerging research on the development of large language models and generative AI systems more broadly, whether that’s the mathematical underpinnings of language models, studies exploring the ways covert biases can creep into model outputs, or studies on whether it’s possible to design generative AI systems in a more inclusive and participatory way. The reading group has been a great space to wrestle with these questions together.
When we finally reach the point where we can begin to incorporate this technology into our work so that it creates value, rather than risking or diminishing it, we want to ensure that we’re creating a values-aligned policy that checks all the right boxes. We can only do that by working closely together, across teams and disciplines, within a clearly defined framework.