We’ve been spending a lot of time on data and privacy recently — you can’t talk about AI without talking about data — but let’s zoom back out to AI and technology in general. When we talk about AI, the conversation turns to its role as a general-purpose tool and how we will use it in the future. Those future-facing conversations inevitably seem to turn to some doomsday scenario involving AGI, or artificial general intelligence, in which computers essentially become sentient and do away with us. We see it in the movies, and we see it in our discourse on the topic.
MIT professor Daron Acemoglu (and a probable future Nobel laureate), whom I was fortunate to interview about AI education, wrote an interesting essay that we featured in the links section of the last newsletter but which I want to look at more closely. The core argument is that we’re focusing our attention on the wrong thing: Instead of focusing on whether AI itself is safe, he writes, “what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to.” Rather than anthropomorphize AI and assign it a personality that might not align with our ethics and objectives, the more immediate danger is the humans who can misuse the not-yet-superintelligent AI.
For me, it’s a “yes, and.”
First off, there is a lot of power and good that these systems can do for us and society. So, keeping that in mind, I do personally think we need to figure out how to put sensible guardrails in place, and have a clear definition of what safety means, so that these systems are not running roughshod over our society and democracy by helping spread disinformation, etc. We need these guardrails both at the foundational level (look at California’s SB-1047 bill, which would require that new models, especially large ones with over $100M invested in their training, be safety tested) and probably also in how they are used and deployed in the world (look at the recent incidents of these tools being used — with bad effects! — in Spain to predict whether husbands and boyfriends would be violent towards their partners). The devil is, of course, in the details.
Safety in foundational models is not a bad thing (although SB-1047 has the tech world in an uproar; I’ll write about the highly nuanced issues around it in the future), but it alone probably won’t fix things. What about models that sit just below that threshold? You know, the $99.9M ones? They will still be incredibly powerful and — when in the hands of corporations, governments and people whose objectives, as Daron posits, might not be aligned with democracy — potentially destructive.
On the foundational level, these types of laws don’t really address all the ways that technologies can be used…or misused. (The age-old question: Do guns kill people, or do people kill people?) Daron brings up the “misuse” of a car to kill people at a white supremacist rally in Charlottesville, Virginia, in 2017. Cars are among the most highly regulated technologies, but no amount of safety research or legislation could have prevented that tragedy. People use things in unexpected and malicious ways all the time. It all depends on whose hands the technology is in, and how they intend to use it. In terms of regulation, do we blame the company that developed the technology or the person who misused it?
Some US Congresspeople have chimed in on just this topic in a letter to California Gov. Gavin Newsom about the safety bill making its way to the floor in the coming weeks, writing: “Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights.”
I’ve spoken before about the three-legged stool: Tech companies, government and civil society all need to balance one another. We need to do more work to strengthen those conversations and keep them genuinely multilateral, rather than let one side hijack the discussion. The stool is useless with only one or two legs. It needs all three.
It reminds me of what Daron wrote: “We need much stronger institutions for reining in the tech companies, and much stronger forms of democratic and civic action to keep governments that control AI accountable. This challenge is quite separate and distinct from addressing biases in AI models or their alignment with human objectives.”
Thank you, Daron, for putting it so brilliantly. While I have been pushing for some form of sensible regulation, what we really need is for people to think about the socio-technical system in which all these issues exist. We need to introduce solutions, values and sensible thinking at all levels — and not just one.
Worth the Read
Spearheaded by mothers of teen girls, a San Francisco lawsuit seeks to shutter “nudification” apps that create deepfake porn images of women and girls.
As for that AI safety bill in California I just mentioned, members of California’s congressional delegation have asked Gov. Newsom to veto the bill, should it pass in a few weeks.
A thoughtful look at just why the tech companies are so furious about the California bill.
That alleged breach in which hackers claimed to acquire pretty much every American’s Social Security number and then released it in an online marketplace for stolen data? Here’s how to protect yourself.
Does genAI help or hinder students? Axios takes a look.
The FTC passed a rule this week banning fake reviews, testimonials and celebrity endorsements.