AI is good at spreading disinformation. But one of the more bizarre twists is that it also makes it easier to deny the truth.
We’ve talked about this before in the newsletter. In December, for instance, we saw Trump claim that a Lincoln Project ad, which was an authentic blooper reel of his presidency, was an AI-generated fake – this from the man who has shared a deepfake image of Biden kneeling to pray. “@Projectlincoln just showed you reality, and you can’t take it,” tweeted the group’s cofounder.
Another side effect of AI’s growing salience and rampant misinformation? We have become more aware of, and concerned about, everything we see. It’s like every image and video is now superimposed with a big question mark.
So, yes: We’re going to talk about Kate Middleton.
It is very sad that she has been diagnosed with cancer, and we send her our best. Today, we are going to look at this as a clear example of our no longer being able to trust what we see.
In this case, our institutions did save us: The news agencies issued a “kill notice” once enough people scrutinized the now-famous manipulated photo of Princess Kate with her children. All of the professional photographers for the agencies interviewed on the subject said that “moving the pixels” is verboten. Small amounts of tweaking, like cropping or adjusting brightness and color levels, are fine. But, as seems to have happened with this picture, pixels cannot be moved around to create a false image. (At least not one that will be published by a reputable news agency.)
What is more important to talk about, however, is what happened next. After weeks of speculation, Kate put out a video explaining that she has cancer. And immediately, many people refused to believe it was real. There has been wild speculation that it is all AI-generated. (People were commenting that the plants in the background were not moving, etc. Even Stephen Colbert had to apologize for engaging in speculation.)
This is the world we live in. One in which we no longer believe what we see. That was fast, wasn’t it?
In the newsletter, we've spoken about things like the Coalition for Content Provenance and Authenticity (C2PA), which could offer a solution to situations like this by developing technical standards for certifying the provenance of media content. You could imagine a world where the camera embedded a signature into each photo, and every tool in the processing chain recorded what was done. In that world, news agencies could immediately validate photos and know just what tweaks had been made. That video that we’re talking about? We could have known if it was real, sparing us not just news cycles (and countless social media posts) of speculation, but also restoring some of our faith in politicians, public figures, the media — and reality.
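To make the idea concrete, here is a minimal sketch of capture-time signing and edit logging. This is not the actual C2PA format — the real standard uses certificate-backed asymmetric signatures and CBOR manifests — so the key, function names, and manifest fields below are illustrative assumptions only:

```python
import hashlib
import hmac

# Hypothetical camera signing key. A real C2PA workflow would use an
# asymmetric key pair with a certificate chain, not a shared secret.
CAMERA_KEY = b"example-camera-secret"

def sign_capture(image_bytes: bytes) -> dict:
    """Create a minimal provenance manifest at capture time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {
        "content_hash": digest,
        "edits": [],  # each editing tool appends a record of its change
        "signature": hmac.new(CAMERA_KEY, digest.encode(), hashlib.sha256).hexdigest(),
    }

def record_edit(manifest: dict, edited_bytes: bytes, description: str) -> dict:
    """An editing tool logs what it did and hashes the resulting content."""
    manifest["edits"].append({
        "action": description,
        "result_hash": hashlib.sha256(edited_bytes).hexdigest(),
    })
    return manifest

def verify_original(image_bytes: bytes, manifest: dict) -> bool:
    """A news agency checks whether these bytes match the signed capture."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(CAMERA_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["content_hash"] and hmac.compare_digest(expected, manifest["signature"])

photo = b"raw sensor data"
manifest = sign_capture(photo)
cropped = photo[:8]
manifest = record_edit(manifest, cropped, "crop")
print(verify_original(photo, manifest))    # True: untouched capture verifies
print(verify_original(cropped, manifest))  # False: edited bytes no longer match the capture hash
```

The point of the design is that verification fails the moment the pixels change, while the edit log still discloses what was done — exactly the distinction the agencies drew between acceptable tweaks and "moving the pixels."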
The week that controversy first swirled around the photo, the EU passed its AI Act, which I looked at here. The act states that it “ensures that Europeans can trust what AI has to offer.” Deepfake audio and video content is included under the transparency obligations.
Had the guardrails proposed by this legislation been enacted, would Princess Kate’s sloppy Photoshop job have warranted a slap on the digital wrist? And can the US government catch up in time? On Monday, Rep. Anna Eshoo introduced the Protecting Americans from Deceptive AI Act, a bill that calls for “the development of standards for identifying and labeling AI-generated content” and would require generative AI developers and online content platforms to provide disclosures on AI-generated content.
As always, I remain Technically Optimistic.
Worth the Read
Human rights groups are petitioning the Federal Trade Commission to let Americans opt out of data brokers selling their private data — a kind of digital “do not call” registry. Sign me up!
U.S. District Judge Charles Breyer threw out Elon Musk’s lawsuit against a research group tracking the increase in hate speech on X, saying that Musk's suit "is so unabashedly and vociferously about one thing that there can be no mistaking that purpose.”
The Israelis developed facial recognition software to search for hostages in Gaza. The surveillance quickly zoomed out to include anyone with ties to Hamas. (https://www.nytimes.com/2024/03/27/technology/israel-facial-recognition-gaza.html)
Yesterday, the White House followed through on its Executive Order on AI, issuing its first policy to mitigate the risks of AI. The goal is to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.”