An ethical question that comes up a lot is when to deploy AI systems. In a lot of cases, I advocate for keeping a human in the loop, as in this issue of the newsletter, where I wrote that it's essential for a human to validate any AI response. That's the approach we're taking at Emerson Collective with the chatbot we developed to help legal aid societies working with immigrants, as well as with other projects we're working on. Because the stakes are just too high if the AI assistant is wrong. The word "assistant" implies that the program was created to help a human, not run the show.
There’s still a lot of nuance here, because the sheer act of democratizing access — making it so everybody has data skills, or graphic design skills, etc. — will have ripple effects throughout multiple industries. But let's ask the bigger ethical question: When is it okay to deploy AI? Some people say it’s only okay to deploy when it’s perfect, but that seems like an unusually high bar. To me, the answer is probably that it’s okay to deploy when it’s better than humans. But the question of “better than” (not to mention “humans”!) brings up questions of bias and the like.
During the interviews I did for Season Two of the podcast, I often used the analogy of self-driving cars with guests. Like, when will it be okay to start using them? In 2023, crashes involving human-driven cars killed about 41,000 people in the U.S. That's somewhere around 1.4 deaths per 100 million miles traveled. It's hard to compare this to driverless cars for a variety of reasons (far fewer vehicles on the road, different reporting standards, full autonomy vs. driver assistance), but human-operated cars set a bar.
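For readers who want to see where a per-mile rate like that comes from, here's a rough back-of-the-envelope sketch using only the two figures above; the roughly 2.9 trillion miles it implies is derived from those numbers, not an official statistic.

```python
# Back-of-the-envelope sketch: how a "deaths per 100 million miles" rate
# relates to an annual death toll. Figures are the approximate ones cited
# above; the implied total mileage is derived from them, not an official stat.

annual_deaths = 41_000       # approximate 2023 U.S. traffic deaths
rate_per_100m_miles = 1.4    # approximate deaths per 100 million miles

# Total miles implied by those two figures.
implied_miles = annual_deaths / rate_per_100m_miles * 100_000_000
print(f"Implied annual miles traveled: {implied_miles:,.0f}")  # ~2.9 trillion

# Recovering the rate from deaths and miles (the same arithmetic in reverse).
rate = annual_deaths / (implied_miles / 100_000_000)
print(f"Deaths per 100 million miles: {rate:.2f}")             # ~1.40
```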
Whether you’re looking at the lives saved or lost, what so often gets obscured is that each number is a person: a friend or relative, co-worker or acquaintance. And if you told me we got down to 40,000 traffic deaths a year, 1,000 fewer than today, driverless or not, but the people who died were all women or Black or elderly, we would all revolt. Once we start looking beyond the numbers, a.k.a. the data, a technology’s impact looks very different. And that is probably the best way to look at all this.
It's really easy for people who are deploying AI technology to look at numbers as a cold, hard reality, which they are, but we also need to remember that every data point is a person. My hope is that we can start training developers and engineers to consider the wide range of humans who will use the program so that it is beneficial to all. Do we humans really need AI, or do those who are deploying it need us to want it? Yes, it can solve incredible problems, streamline businesses and democratize education, among many other things. But if we simply use AI for the sake of playing with the newest toy and don’t insist on keeping a human in the loop now, we could lose sight of what’s most important: Our humanity.
Worth the Read
We’ve talked about Sora, OpenAI’s new text-to-video model. They haven’t released it yet, but someone else has. This is just a testament to how quickly this is all moving.
AI naturally dominated Apple’s recent developer conference. Some of the reveals? “Apple Intelligence” is coming to the iPhone, complete with a deal with OpenAI.
On the ballot for next month’s general election in the UK? Meet AI Steve, the first AI candidate. (Well, he’s attached to a human candidate, who says that AI can humanize politics.)
The anti-AI movement is here. The Atlantic reports, under the excellent headline “Excuse Me, Is There AI in That?”