This week, I’m going to pull back the curtain a bit on what the engineering team here at Emerson Collective does. (Short answer: A lot.) One of the ways in which we assist our nonprofit partners is by developing programs that can help them strengthen and upskill their organizations, bridging gaps and building resilience. Recently, we developed an AI assistant focused on supporting the people who help migrants apply for legal asylum.
What are we doing developing chatbots? Well, last year, during a convening of partners working in the immigration space, we kept hearing that organizations were understaffed and overwhelmed by the demand for help navigating the fast-changing policies surrounding the immigration process. Legal aid societies can rarely afford a team of lawyers, so they often rely on paralegals to support their work. What if we used AI as a knowledge-access accelerator?
There are a few design principles that we are using when building something like this:
Make sure that it’s the right technology for the task: The engineering team explored several options before deciding that AI was the right technology for the opportunity. “This is a very complex language, and there are word flows that need to be understood before you recommend something,” Jorge Escobar, our senior director of engineering, explained. “So having a large language model understand that very complex structure and be able to get answers from it easily is great. There’s definitely care being given around that area so we can be confident they’ll be providing good information.”
Focus on a narrow space: We make sure that we are not trying to solve every problem at once, because doing so drives up the risk of hallucinations. As I’ve written before, the data set that you train a large language model on is key to what it generates. (What’s referred to as “garbage in, garbage out.”) And in this case, any hallucinations could have drastic consequences. Led by Carman Nareau, our product portfolio manager, the engineering team decided to focus the assistant on a very narrow scope: a travel packet outlining the steps toward starting the process to request political asylum.
Limit the rollout: We are only letting authorized users in. We believe that trust is paramount, and we are building it through genuine partnerships with people who will iterate on the technology with us.
Bring humans into the technology: We want to build tools that make it easy for those providing help to do their jobs, and to validate everything we’re doing so that we can make real-time improvements. As Jorge explained, we built a feedback loop, “so if a question is super wrong, you can vote it down.” And with any AI, it’s essential that a human verify the response. In this case, the paralegal is responsible for confirming that what the chatbot recommends makes sense, and for flagging it when it doesn’t. (A rough sketch of this whole pattern follows this list.)
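The pieces above (a narrow corpus, refusal outside it, and a human vote on every answer) fit a pattern worth making concrete. What follows is a minimal, hypothetical Python sketch of that shape, not Emerson Collective’s actual system; the packet entries, the function names, and the stubbed model call are all assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: every name here (TRAVEL_PACKET, answer_question,
# call_llm, record_vote) is invented for illustration.

# The narrow knowledge base: excerpts from the one travel packet the
# assistant is scoped to, rather than "all of immigration law."
TRAVEL_PACKET = {
    "one-year deadline": "Applicants generally must file Form I-589 within one year of arriving in the United States.",
    "where to file": "The packet explains where to send Form I-589, depending on the applicant's situation.",
}

@dataclass
class Answer:
    question: str
    sources: list[str]  # which packet excerpts the answer was grounded in
    text: str
    votes: list[int] = field(default_factory=list)  # +1 confirmed, -1 flagged wrong

def call_llm(prompt: str) -> str:
    """Stand-in for the real large-language-model call."""
    return "[model response grounded only in the supplied excerpts]"

def _keywords(text: str) -> set[str]:
    # Ignore short stop-ish words so "the" alone can't match everything.
    return {w.strip("?.,").lower() for w in text.split() if len(w.strip("?.,")) > 3}

def answer_question(question: str) -> Answer:
    # Naive keyword retrieval against the packet; the real system's retrieval
    # isn't described, so this just illustrates the narrow-scope idea.
    hits = [k for k, text in TRAVEL_PACKET.items() if _keywords(question) & _keywords(text)]
    if not hits:
        # Out of scope: refuse instead of risking a hallucinated answer.
        return Answer(question, [], "That's outside the travel packet; please escalate to an attorney.")
    excerpts = "\n".join(TRAVEL_PACKET[k] for k in hits)
    prompt = f"Answer ONLY from these excerpts:\n{excerpts}\n\nQuestion: {question}"
    return Answer(question, hits, call_llm(prompt))

def record_vote(ans: Answer, confirmed: bool) -> None:
    # Every response gets a human verdict; down-votes feed the improvement loop.
    ans.votes.append(1 if confirmed else -1)

if __name__ == "__main__":
    ans = answer_question("When must an applicant file Form I-589?")
    print(ans.sources, ans.text)
    record_vote(ans, confirmed=True)  # the paralegal signs off (or doesn't)
```

The refusal branch is the important design choice: a question the packet can’t answer gets escalated to a human rather than improvised.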
We believe that it’s important to understand technology’s limitations, not just its capabilities. In this case, the advice these paralegals dispense has very real consequences, so it’s mandatory that it be clear and accurate. After many conversations with our partners, we created a system that constrains large language models to a bounded set of legal questions, one that paralegals can use to answer questions more efficiently and get through more cases in a day.
Trust is essential at these organizations, which assist people who are often being sought by the government. Not only do the clients need to trust the organization, the organization needs to trust that the information it’s providing is sound, and that client data is not being leaked or shared without their knowledge. Data privacy is essential. (And if you’ve been listening to Season Two of the podcast, you know it’s all about data privacy.)
Why am I sharing this with you? Because we believe that our design principles are a step in the right direction. Please steal them! (And please tell me what you think — honest feedback only! Start the conversation at us@technicallyoptimistic.com.) My hope is that more human-centered programs will be developed going forward, no matter what the goal.
It makes me think of an interview that I did for Episode 5 of the podcast, where education scholar Tiera Tanksley asked, “How do we design technologies in ways from the start, so that they're actually beneficial to a wide range of people?” Those are words to program by.
Worth the Read
Want to know how to prepare your phone for a protest? The Markup has you covered, including this chilling advice: “Use a passcode, not a fingerprint: Fingerprint and face locks may be convenient ways to secure your phone, but they don’t always work in your favor if your phone is seized by law enforcement.”
OpenAI introduced new tools to help researchers verify content authenticity. Fingers crossed!
Travis Kalanick has been quietly lobbying to force food delivery apps (like, um, Uber Eats) to turn over their data to restaurants — a move that would benefit his own food delivery-related companies. (Full disclosure: I reported directly to Travis when I was building self-driving cars at Uber.)
Season Two podcast guest Ethan Zuckerman, who directs the UMass Initiative for Digital Public Infrastructure, wrote a fantastic op-ed on why he’s suing Meta.
Taylor Lorenz of the Power User podcast shares the first GPT homework machine, which is pretty brilliant.