It can be really hard to believe, but our entire election system is built on trust: Trust that officials will do the right thing. Trust that the messengers — candidates especially — are telling you the truth (including which day to vote on). Trust that the system is fair and functioning. Trust is as easy to lose as it is hard to earn, so any weak link in the chain can be catastrophic.
We do know, however, that the Internet can power new ways to mobilize and persuade voters. Look: The Obama 2008 campaign is renowned for its groundbreaking use of technology and data, marking a significant shift in how political campaigns leverage digital strategies. The team's innovative use of social media, data analytics, and digital platforms played a crucial role in connecting with younger voters and in micro-targeting voters with tailored messages.
But as the last two elections have shown, technology can also be used to confuse voters just as powerfully. And these new generative AI tools not only provide a way to quickly and easily spread disinformation with zero training required; frankly, they can cause people to question all information. The deepfakes we have seen so far range from amusing (the pope in a puffer jacket) to concerning (a fake dental ad featuring Tom Hanks), but they should be considered a quaint high-school hack compared to what could be coming.
“We are witnessing a pivotal moment where the adversaries of democracy possess the capability to unleash a technological nuclear explosion,” Oren Etzioni, the former CEO of and current advisor to AI2, the nonprofit US-based AI research institute founded by the late Paul Allen, told Popular Science.
And it’s not hypothetical. While we’ve all been warned about deepfake videos and pictures, what has actually been happening is deepfake audio. Faked “recordings” of politicians surfaced in the UK; something similar happened in Slovakia just days before its election. These did exactly what they were intended to do: They manipulated voters into acting on deceitful information. The natural result? We’ve entered a disorienting new reality. Now that we know anything could be fake, how are we supposed to know what is true?
On Wednesday, I sat on a panel about AI and elections at the Aspen Cyber Summit with Michigan Secretary of State Jocelyn Benson and former CISA director Chris Krebs. As we agreed, these are socio-technical systems. There is no silver bullet for reestablishing trust in the information ecosystem. Rather, we need to start putting fixes in lots of places, pursuing an array of solutions that will work on different time scales. Trust in institutions is in serious decline, so we need to invest in finding and amplifying more trusted messengers, whether it’s the PTA president at your child’s school, a pastor, or a local community organizer. That’s the socio end. But we also need to work on the technical end.
Does watermarking miss the mark?
The bare minimum protection is to watermark generated content. This means any AI-generated content would carry an embedded identifier stating that it was programmatically generated. Last week, Meta issued a policy stating that political ads posted on Facebook and Instagram would have to carry labels disclosing any use of AI (visible only once people have clicked on them) and that fakes would be removed. (Google announced a similar policy for political ads in September.)
This is a very low bar, and we should absolutely clear it. It's great that the technology companies are attempting to address the problem, but we need to do more.
While useful as a tool to indicate "ownership," watermarking falls short of providing the complete history of digital content. It's akin to putting a special mark on a photo or video that breaks if someone alters the content, signaling that a change has occurred. But that mark doesn't tell us the whole story of the content: its origin, its journey, and all the modifications it has undergone.
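To make that concrete, here is a minimal sketch of the idea in Python, with a keyed hash standing in for a real watermarking scheme (actual systems embed the mark in the pixels or audio itself; SECRET_KEY and the function names here are purely illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # illustrative only; real schemes use robust signing

def add_mark(content: bytes) -> dict:
    """Attach a fragile 'mark': a keyed hash of the exact content bytes."""
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"content": content, "mark": tag}

def mark_intact(package: dict) -> bool:
    """The mark 'breaks' if even one byte of the content has changed."""
    expected = hmac.new(SECRET_KEY, package["content"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["mark"])

package = add_mark(b"pixels of an AI-generated campaign image")
print(mark_intact(package))          # True: untouched
package["content"] += b" (edited)"   # any alteration at all
print(mark_intact(package))          # False: the mark has broken
```

Notice what this does and doesn't give you: a verifier can tell that something changed, but not what changed, who changed it, or when. That gap is exactly what provenance aims to fill.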
Human rights activist Sam Gregory of WITNESS, who appeared on an episode about regulating AI during Season 1 of the Technically Optimistic podcast, highlights that watermarking, while useful for showing digital content ownership, doesn't fully track its history or changes. This is critical in human rights work, where the authenticity of evidence is key. WITNESS advocates for more comprehensive solutions like the Coalition for Content Provenance and Authenticity (C2PA), whose standard records media provenance (its source and history, controlled solely by the content’s author) from creation to consumption, which is crucial for verifiable evidence in sensitive contexts.
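The chaining idea behind provenance can also be sketched in a few lines of Python. This is not the actual C2PA format (which embeds cryptographically signed manifests in the media file itself); it only illustrates how each step of a file's history can be linked to the one before it, so the full chain can be replayed at consumption time:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """One step in a media file's history: created, edited, published..."""
    action: str
    actor: str
    content_hash: str  # hash of the media bytes after this step
    prev_record: str   # hash of the previous record, chaining the history

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_step(history: list, action: str, actor: str, content: bytes) -> None:
    """Append a record that commits to both the content and the prior record."""
    prev = digest(repr(history[-1]).encode()) if history else "genesis"
    history.append(ProvenanceRecord(action, actor, digest(content), prev))

# Replay the chain at consumption time: who touched this, and in what order?
history = []
add_step(history, "created", "campaign-studio", b"original ad video")
add_step(history, "edited", "ad-agency", b"color-corrected ad video")
add_step(history, "published", "platform", b"color-corrected ad video")
for record in history:
    print(record.action, record.actor, record.content_hash[:12], "<-", record.prev_record[:12])
```

In a real system each record would also be cryptographically signed, so a viewer could verify not just the order of events but the identity of each actor. That is what makes provenance usable as evidence.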
Provenance is also particularly vital in elections, where accurate information is essential. It ensures the authenticity of campaign material, counters misinformation, and maintains electoral integrity. This helps everyone from candidates to voters make informed decisions and safeguards democratic processes.
It's great that tech companies are instituting labeling mandates as they try to get ahead of criticism that they're doing too little (see: 2020). However, it's just the start of what's needed ahead of next November. We need to set a legal floor, a bare-minimum requirement. The REAL Political Ads Act, introduced in May by Senator Amy Klobuchar and her colleagues, is "commonsense legislation" that addresses the regulation of AI-generated content in political ads. It requires a disclaimer on political ads that use images or videos generated by artificial intelligence, and it is designed to enhance transparency and accountability in political advertising, ensuring that voters are aware of the use of AI technology in campaign materials. It has yet to pass.
Look, if we can get a watermarking bill over the line without blocking something more comprehensive later, it's a good start. I'm hopeful, but the most likely scenario is that states and Europe will do more to drive regulation. My fellow Aspen panelist, Michigan's Secretary Benson, highlighted that her state has already acted: Michigan's new legislation requires political ads using artificial intelligence, especially deepfakes, to include disclaimers. This rule applies to AI-altered ads created, published, or distributed within 90 days of an election, ensuring transparency about their manipulated content across TV, radio, print, and social media.
What you can do
In the end, the most important thing is heightened public awareness. How do we get there? Step one is to acknowledge that there is a problem, and it seems like we have cleared that hurdle. As disorienting as it is, we should all be questioning the reality of what we see and read online in the coming year. Step two is to start having conversations about it. Finally, we need to use those conversations to advocate to our lawmakers and to put pressure on the social media platforms. We need movement from more than just the commercial sector here: We also need the government to step up. One of the best ways to ensure that is for all of us to demand it: Call your representatives. Start a social media protest. The scary fact is, we need the very platforms through which this disinformation spreads to help us figure out what is true.
Tell me how you sift through the political messages you’re receiving to get to the truth. Please email me at us@technicallyoptimistic.com.
Worth the Read
We aired an entire podcast episode unpacking the White House’s Executive Order, and one thing we noted is that there are a bunch of milestones coming up as this order rolls out. Alexander Macgillivray, former Principal Deputy CTO of the United States, wrote to point me to his calendar of these myriad dates, which can help us track how the rollout unfolds.
As though AI’s power to influence upcoming elections in the US, Taiwan, Mexico, India and more isn’t scary enough. This Wired piece on how gen AI tools are being used by terrorists and extremist groups — and how they’re poised to undo Big Tech’s guardrails against their content — is chilling.
DeepMind published a paper proposing different “levels” of “Artificial General Intelligence”: the point at which machines exhibit intelligence comparable to a human’s, encompassing the ability to understand, learn, and apply knowledge across a wide range of tasks and problem-solving scenarios. The levels run from 0 (no AI) to 5 (“Superhuman - outperforms 100% of humans”). We’re at Level 1: “Emerging - equal to or somewhat better than an unskilled human.”
The EU’s proposed signature AI legislation, the AI Act, is hitting some bumps in its negotiations across countries. A key sticking point is regulation of “foundation models”: the large underlying models, like OpenAI’s GPT-4, on top of which other products are built. Very few companies have the resources and ability to create these models, and it is easy for the EU to apply regulatory pressure when the companies that create them are American. However, France has come to the defense of one of its homegrown companies, Mistral, which builds these models. There are other sticking points, too, but the real test will be on December 6th, which seems to be one of the last days to finalize the act ahead of next year's parliamentary elections and changes in the European Commission.
I’ve come to accept that job churn is inevitable with generative AI in the world, and now there is actual evidence that shows it to be true. Freelancers on digital platforms are facing tangible job and earnings losses, regardless of their skill level.
Season Two of the Technically Optimistic podcast is going to tackle the data economy, which we all participate in but which remains largely invisible to us. It raises questions of data, privacy, identity, power, and humanity throughout. Alice Marwick’s piece summarizes a part of it in a spot-on way: “You Are Not Responsible for Your Own Online Privacy.”