Last week, I was lucky to take part in Aspen’s AI Elections Advisory Council meeting, a gathering of people from TikTok, Facebook, OpenAI, the Center for Election Innovation & Research, the Stanford Cyber Policy Center and more, brought together to discuss this super-timely subject.
As I’ve written, technology has played a significant role in the spread of mis- and disinformation in the run-ups to elections around the world. A surprising takeaway from our meeting was that our worst fears about election deepfakes here in the US haven’t come true.
There are big open questions as to why. The rest of the world certainly saw its share this year. In Indonesia, this year’s election was a Disneyland of deepfakes. (They literally had deepfakes of dead people talking about the elections on social media.) In India, there were deepfake celebrity endorsements. In Ireland, there was synthetic sexual imagery targeting candidates. I am genuinely baffled as to why none of this happened here at scale (though we’ll get to the Taylor Swift bit in a minute…). We had a deepfake of President Biden in the New Hampshire primary, but nothing else significant. Maybe I’m splitting hairs that don’t need to be split: we’re seeing lots of disinformation, but not a ton of deepfakes. Please email me with your theories or post them below.
That said, it’s still disturbing: US adversaries like Russia, China, and Iran have been actively sowing information chaos with microtargeted posts on social platforms large and small, and those platforms have not fulfilled their promises to appropriately label and remove AI-manipulated content or temporarily suspend users who proliferate it. (Do a reverse image search of a widely debunked deepfake that should have been removed, and you’ll find countless copies across platforms.) Have you heard the (fake!!!) story that VP Harris shot a rhino on a trip to Zambia? If so, you’ve seen some of the misinformation that is floating out there.
Part of the reason why Taylor Swift endorsed Kamala Harris was in response to deepfakes of her and her fans supporting Trump — images that had over 17 million views. Swift highlighted her “fears around AI, and the dangers of spreading misinformation.”
After Election Day, misinformation can trigger violence, especially when it comes to vote-counting. If you catch up with people like Renée DiResta, the author of Invisible Rulers: The People Who Turn Lies into Reality (and a podcast guest!), she’ll tell you that there is already a ton of mis- and disinformation being laid down to discredit the election. The trickle-down effects are terrifying: Look at the ballot boxes that were lit on fire in Oregon and Washington State.
How do you, as a voter, know what (and which sources) to believe? As Michigan Secretary of State Jocelyn Benson has said multiple times: It’s about trusted messengers. It’s important that our elected officials — especially governors and mayors — communicate with clarity and authority about counting (and recounting) methods. And as citizens, it’s important that we look to these trusted sources and ask questions. Look at information posted by your secretary of state’s office. Reach out to local election officials.
Next Tuesday, please go support the people who are volunteering to work the polls, especially in this environment, when they are potentially taking a risk. Thank them for defending our democracy. I never thought I’d say this, but be safe on Tuesday. I’ll see you next week!
Worth the Read
This piece on Delta suing software provider CrowdStrike for travel disruption makes me think a lot about insurance…
Where does China stand in terms of AI safety? This report from the Beijing-based social enterprise Concordia AI has some interesting insights.
This White House memorandum on using AI to help with national security is a fascinating deep dive — especially when they get into ways of making AI trustworthy and secure.