Hi, I’m Raffi and welcome to my newsletter. Each Friday, I break down the ever-changing conversation about technology and AI. If you’re not a subscriber, here’s what you missed this month:
A potential (very technical) workaround that might be part of the solution to shifting some of the power from big tech.
A look at Common Crawl, the nonprofit dataset that has media giants like The New York Times extremely cranky.
Subscribe to get access to these and future posts, as well as previews of Season Two of the Technically Optimistic podcast, launching in a few weeks.
At Davos last week, AI and its potential impact on democracy were among the hottest topics. This year will see elections covering half of humanity, and AI has the potential to upset outcomes in the US, EU, India, Russia and more. Worse, there are virtually no guardrails to prevent meddling. It’s like a bad dream.
OpenAI was quick to respond to the chatter that culminated in Davos, issuing a post about how it will approach worldwide elections. In short, they claim to “continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” and announced a “cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering and policy teams to quickly investigate and address potential abuse.”
OpenAI joins the other platforms, Meta and Google, which have announced their own restrictions: Meta said it will prohibit political advertisers from using its new AI tools to help generate marketing content, and any deepfakes posted on Instagram or Facebook will need to be disclosed. OpenAI, for its part, says it won’t allow people to build applications for campaigning and lobbying, nor will it allow builders to develop chatbots that pretend to be candidates. Apps that discourage or confuse voters? Also not allowed. Users can report potential violations directly to OpenAI.
AI-generated images are also coming under scrutiny: OpenAI says it is working on transparency measures, implementing the Coalition for Content Provenance and Authenticity’s digital credentials — essentially encoding images created by DALL·E with information about their provenance. (The Department of Commerce has been tasked with making this happen, as I explained in this newsletter on the White House’s sprawling Executive Order on AI a few months ago. OpenAI will surely beat them to it.)
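For the technically inclined, here is roughly what a provenance credential amounts to: a signed claim about an image’s origin travels with the image, and anyone can later check both that the claim hasn’t been tampered with and that the image still matches it. Below is a toy sketch in Python, not the actual C2PA format (the real standard embeds a signed manifest inside the media file and relies on certificate chains rather than a shared secret); the key, field names and the “DALL-E” label are purely illustrative.

```python
# Toy illustration of the provenance idea behind C2PA-style content credentials.
# NOT the real C2PA spec: the actual standard embeds a signed manifest in the
# image file and uses X.509 certificate chains, not a shared HMAC secret.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-only-secret"  # stand-in for the generator's signing key


def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Bundle a claim about the image's origin with a signature over that claim."""
    claim = {
        "claim_generator": generator,  # e.g. "DALL-E" (illustrative)
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "assertions": ["created by a generative AI model"],
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the claim is intact and that the image still matches it."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    matches_image = (
        manifest["claim"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )
    return untampered and matches_image


if __name__ == "__main__":
    image = b"fake image bytes for the demo"
    manifest = make_manifest(image, generator="DALL-E")
    print(verify_manifest(image, manifest))              # True
    print(verify_manifest(image + b"edited", manifest))  # False: provenance broken
```

The obvious catch is that credentials only help if platforms actually check for them, and metadata like this can be stripped from an image entirely; provenance is a useful signal, not a silver bullet.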
There will be a move toward transparency in news, too, with users getting access to real-time global news, complete with attribution and links. As OpenAI says, “Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust.” It’s hard not to read this as a shot across the bow at The New York Times, which is currently suing OpenAI, as I write about here.
These are all good principles and an encouraging start. It seems like this isn’t just a policy statement: OpenAI is actually trying to enforce it in code. Within days, it had acted on a violation, blocking Dean.bot, a voice bot built by a super PAC to impersonate Democratic presidential candidate Dean Phillips.
I asked ChatGPT to create text messages to deter people from voting, and I even used some of the jailbreaks. And it seems that in all the cases I tried (I won’t pretend that I’m being comprehensive), ChatGPT said it can’t do it. That’s great.
Table stakes, but is it too late? On Sunday, a deepfake Biden robocalled Democratic primary voters in New Hampshire, telling them to skip the primary and save their votes for November. Just to be clear, there is no evidence that OpenAI technology was behind it, but it will still be interesting to see how the Biden team responds: A few months ago, they assembled a legal task force to prep for the inevitable.
The deployment of AI, and the lack of controls around it, also allow candidates to brush off damning ads as AI-generated fakes, as Trump has recently done. Professor Hany Farid of the University of California, Berkeley, told The Washington Post that the age of AI creates a liar’s dividend: “When you actually do catch a police officer or politician saying something awful, they have plausible deniability.”
If you’re wondering why the companies enabling the creation and dissemination of this misinformation are the ones governing the use of their own products, it’s because the government has been too slow — and too partisan — to respond effectively. (A panel on AI and elections that I sat on in November highlighted the general sense of inefficacy in the face of AI. I was also saddened to read this article, which shows that Americans, too, are becoming more partisan in their views of AI.)
While election law prohibits campaigns from “fraudulently misrepresenting other candidates or political parties,” Republicans on the Federal Election Commission blocked an effort to extend that rule to AI-generated content in June. In September, Sen. Amy Klobuchar introduced the Protect Elections from Deceptive AI Act, but it has yet to progress.
So the coder in me has to ask: What can open source do here? If we want to build responsible AI, we need to do it in the open. I believe open source is a way out of this mess: it allows more transparency and visibility into what is going on, taking power away from the big players and putting it into people’s hands. But my argument is a long-term one, and it assumes that we, as a society, have our arms wrapped around this. We need to make strides toward establishing norms for developing responsible AI. And we need regulatory help, too. The threat that AI poses to our society ahead of global elections is too big for us to tackle on our own or to leave to tech companies to solve: We need the government to step in.
As for the people, I’m hoping that the daily coverage of deepfakes — be they Biden, Imran Khan or Taylor Swift — serves as its own education. In the months (and years) ahead, we need to question and verify what we see, hear and believe, and to demand transparency from big tech and the media. It’s time to get real.
We’re going to be coming back to elections and AI throughout this year. It’s only just beginning. Please let me know what you think in the comments, or write to me at us@technicallyoptimistic.com.
Worth the Read
Beleaguered self-driving car company Waymo is almost green-lit in California, despite many red lights.
Also terrifying: Acoustic eavesdropping can recover data from sound alone — everything from figuring out what you’re typing based on the sounds your keyboard makes to duplicating a key based on the sound it makes when sliding into a lock.
In the UK, the National Cyber Security Centre says that AI will make phishing emails almost impossible to detect. This has major implications for elections, of course… (See also: the increased risk of cyberattacks.)
The World Health Organization has weighed in on large language models, recommending that governments encourage open models by requiring that foundation models built with government funding be widely accessible to the public.
The Doomsday Clock has been set to 90 seconds to midnight — the closest it’s been to armageddon since 1947.
In Georgia, which passed a law restricting the use of AI in optometric care and coverage decisions in 2023, Rep. Mandisha Thomas put forth a bill banning the use of AI in healthcare decisions.
Everything you need to know about North Korea’s AI development.