“We are overwhelmed, we are underfunded and we are drowning in the tidal wave of tragedy,” a law enforcement investigator told Congress regarding a spike in reports of online child abuse. Sounds like a line from Wednesday’s headline-making Child Safety Hearing, at which Congress grilled tech leaders over their failure to protect children from online exploitation? Nope. That was in 2007.
At the time, tech companies filed 100,000 reports of illegal material. Last year, that number was 36.2 million. And, as the recent Taylor Swift deepfake debacle on X proved, AI is amplifying a harmful issue that has gone unaddressed for too long. It might be what finally gets protections passed in Congress.
In an episode of the Technically Optimistic podcast focused on AI and accountability, author and political scientist Ian Bremmer essentially said that it might just take a real disaster for people to do something about AI. Maybe that disaster is what’s happening to kids. (Or maybe it’s Taylor Swift…)
Wednesday’s four-hour hearing was contentious and emotional, with lawmakers pushing harder and asking more informed/less cringe-y questions of tech leaders, a déjà-vu lineup that included Meta’s Mark Zuckerberg, TikTok’s Shou Chew, Snap’s Evan Spiegel, Discord’s Jason Citron and Linda Yaccarino, who theoretically runs X. (To get an idea of the tone, look no further than this Politico headline: “Senator to Big Tech: ‘Collectively, Your Platforms Really Suck at Policing Themselves.’”)
It was breathtaking to watch Zuckerberg get pressured into apologizing to the parents whose children’s lives have been lost — though he didn’t go so far as to acknowledge Meta’s role in their suicides — and vaguely hopeful to hear that the topic has bipartisan support. Senator Dick Durbin closed the session by saying, “Every single senator voted unanimously in favor of the five pieces of legislation we discussed today,” adding that it should send Americans a “stark message.”
In the end, both Snap and X agreed to support KOSA (the Kids Online Safety Act), which I wrote about here and got a deeper education on thanks to a great discussion here. Snapchat was the first social media platform to back the bill, which aims to strengthen online protections for children, and at the hearing, Spiegel urged more tech companies to do the same.
It’s worth noting that X even said it would support Sen. Durbin’s STOP CSAM Act (Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment Act), which would peel back a bit of the Section 230 protections that limit tech companies’ liability. Although maybe it’s a play to win advertisers back? (Either way, I’m super interested to hear what Elon says about that!) Discord’s Citron initially said the company was not prepared to support either that act or the EARN IT Act, which would allow individuals to sue tech companies for hosting child sexual abuse material, but Discord PR reached out to Politico minutes after his statement to say that it would support only “elements” of the former.
Given recent revelations about lobbying efforts by the companies that testified (great video here, in which Sen. Marsha Blackburn lobs names at Zuckerberg at 1:09), it’s clear that these acts, as well as the others on the table, face an uphill battle. While the five bills cited Wednesday have passed out of committee, none have become law.
The state of self-policing
Let’s pull back a little.
It’s clear that these platforms are really bad at policing themselves. It’s a hard problem, but these companies have made a lot of money from users — many of them teens — and can certainly afford to ramp up oversight of explicit material involving minors. X gutted 30% of its trust and safety department after Musk’s takeover, but the Taylor Swift situation got the company to reverse course and announce that it will hire — gasp! — 100 employees for a trust and safety center tackling deepfake content.
There’s little to no financial incentive for these platforms to protect users from explicit or false content, other than staying out of the news (or avoiding the occasional Senate subpoena). That’s why Congress needs to step up and actually do something. None of these bills have made it to the floor, and this has become a hot issue in an election year. Will lawmakers actually get it together this time?
In addition to tough legislation that makes tech companies legally accountable, we also need more transparency and access for researchers so they can see what’s happening on these platforms. Reporters and advocates need to be able to search and understand what’s happening online so they can take that real data, amplify it and share it. KOSA seems to have some provisions for researcher access. And of course, Twitter once had public API access, literally sending a stream of tweets into the Library of Congress. You can imagine what happened to that… But we can still hope.
Wednesday’s hearing did a lot to show people what’s been happening to children online — as well as how little is actually being done to protect them. Hopefully we can learn these lessons and deal with them, especially because, while I’m optimistic about AI in general, the threat it poses to kids in these online spaces seems exponential. And we need to act.
Children’s safety is a theme that I’ll be exploring in an episode of Season Two of the Technically Optimistic podcast, launching soon. You’ll hear attorney Manmeet Dhindsa from the Federal Trade Commission talk about the new rules her agency is formulating to update current laws and enforcement strategies, Jim Steyer of Common Sense Media on why his organization is needed to help screen material, the author of the book that Mean Girls is based on and more. Follow me so you don’t miss an episode! Tell your friends!
Until then, call your senator. Post your thoughts. As Lindsey Graham told Mark Zuckerberg, “You have blood on your hands.” If we don’t act soon, we all will.
Worth the Read
In other child-related lawmaking news, California is considering a bill that would let parents decide whether their kids see a less-addictive chronological feed on their social channels rather than an algorithmically curated one. Meanwhile, in New York, Mayor Eric Adams compares social media to smoking: a harmfully addictive “environmental toxin.”
No more ChatGPT-y surprises! Wired reports that the White House is leveraging the Defense Production Act to require all tech giants to inform the Commerce Department when they start training large language models.
Could the Taylor Swift porn deepfakes that swamped X last week be what finally gets the White House to join the countless calls for Congress to pass legislation around fake sexual images? Press Secretary Karine Jean-Pierre is on it!
It didn’t take long after ElevenLabs launched before people began using its tools to generate celebrity audio deepfakes. The real news is that ElevenLabs banned the user who made Biden’s New Hampshire robocall after a security company traced the recording to them. But, are we really just going to play whack-a-mole with users?
The ACLU filed an amicus brief in support of a New Jersey man it says was wrongfully jailed after facial recognition technology misidentified him. The ACLU’s release states: “Nearly every known case of a wrongful arrest due to police reliance on incorrect face recognition results has involved [the] arrest of a Black person.”