We all know that power abhors a vacuum — especially when it comes to politics. The lack of government regulation of data privacy is having an interesting (if super messy) result: Individual states are taking it upon themselves to regulate tech companies to protect users’ privacy.
I’ve been deep in data these past months as I put together Season Two of the Technically Optimistic podcast, launching next month. Season One was about AI, and you can’t talk about AI without talking about data, so I’m diving in. I’ll look at how data impacts everything from surveillance to health care, from kids to the economy, culture and more.
So I was interested to see recent stories about how cities and states have been stepping in. For example, last week, the LA city council banned digital discrimination: the practice of providers selling slower Internet service to lower-income or racially marginalized areas at the same price at which higher-income communities receive faster, more reliable broadband.
California is, as usual, ahead of the curve in terms of state regulation: In 2018, it passed the California Consumer Privacy Act. In fact, it’s the only one of the 14 states with consumer privacy laws to receive a B grade from the Electronic Privacy Information Center and the U.S. PIRG Education Fund in this truly fascinating report. (Spoiler alert: Nearly half of the states received failing grades.)
Maine is debating a bill that could be even more progressive. Rep. Maggie O’Neil’s proposal limits the use of sensitive data, prevents companies from collecting unnecessary data and restricts ads targeted at kids. It’s not that dissimilar from a proposal that stalled in Congress in 2022. The big news is that it allows citizens to sue companies that breach their data privacy rights.
O’Neil’s plan, which is working toward a floor vote, is up against an industry-backed measure from a fellow lawmaker that would actually roll back several protections established by Maine’s strict 2019 law. Yes, industry-backed. As the report mentioned above puts it: “Of the 14 laws states have passed so far, all but California’s closely follow a model that was initially drafted by industry giants such as Amazon. In an analysis of lobbying records in the 31 states that heard privacy bills in 2021 and 2022, the Markup identified 445 active lobbyists and firms representing Amazon, Meta, Microsoft, Google, Apple, and industry front groups. This number is likely an undercount.” Surprised…?
Massachusetts, Maryland and Illinois are also shaping legislation that would sharply curtail the online surveillance and discrimination that our data allows for. And on Tuesday, New York advanced its New York Privacy Law to the state’s Senate Committee for review.
Creating the Right Kind of Oversight
But state-by-state movement on data privacy just isn’t going to cut it. California can get away with it because it’s so big: look at how it got car companies to change their mileage standards. Do we really have to rely on lawmakers in 36 more states agreeing on how to get this under control, and doing it quickly enough to protect their citizens’ privacy from the tidal wave of AI?
The good news is that states are focusing attention on the fact that we need to do something as a country. Now. As New York Senator Kirsten Gillibrand wrote in The Hill, “The U.S. is one of the only democracies, and virtually the only member of the Organization for Economic Cooperation and Development, without a federal data protection agency. Instead, authorities have to rely on a patchwork of protections that make jurisdictional oversight very nebulous.”
This patchwork may force the government’s hand: Either Congress will choose to do something, or a lawsuit by a tech company arguing that it’s impossible to operate in this patchwork world would cause the judiciary to move.
Or maybe there’s another way. In Episode 3 of the podcast, which explored ideas for regulating AI, I spoke with Colorado Senator Michael Bennet. For years, he has been proposing the Digital Platform Commission, a federal oversight organization that would create compliance standards and oversee them, along the lines of the Food and Drug Administration or the Federal Communications Commission.
Bennet is taking it out of the House, so to speak, because he doesn’t think that Congress is up to the job. (“I mean, can you imagine? Members of the Senate?” he said, sounding amused. “I can tell you we’re not going to have any idea, any clue, what to do with that.”) Instead, he sees the ideal commission as composed of “experts with a background in areas such as computer science, software development and technology policy.”
The fact is, the longer we take to get these protections in place, the harder it’s going to be to have any at all. We missed the boat on social media, as I wrote about last week following the Child Safety Hearing. And we shouldn’t have to wait for a disaster to happen before we act. (See: financial regulation.)
States might have to be the stopgap until the federal government can a) step up and b) get it together to create an oversight body that isn’t somehow controlled by Big Tech.
Or perhaps you see it as something in between? I’d love to know what you think. Please comment or write to me at us@technicallyoptimistic.com.
Worth the Read
We’re planning an entire episode in Season Two of the podcast on health care and data privacy. Until then, you can get a primer with a podcast from Medical Economics.
The New Yorker takes a deep dive into how AI is changing the music industry via a profile of a 45-year record-label veteran. “I’ve always been able to smell intuitively what the next scene is,” he said. “Whether it’s punk or New Romantics, I’ve always enjoyed it, picked up on it, and this is my view of technology.” To him, generative AI smells like the next big scene.
I’m loving Spotify’s AI-generated “daylist” titles, such as “Tailspin Self-Sabotaging Monday Afternoon,” which surfaced in this piece. And people are still surprised that their personalized playlists are algorithm-based? (Sure, Apple Music still touts some “human curation” in its playlists….)
A Consumer Reports study found that each Facebook user had their data sent to Facebook by an average of 2,230 companies, and by more than 7,000 for some users.
You’ll be fooled by an AI deepfake this year. Yup. Even with a legislative push for watermarking, it’s probably going to happen. Maybe the world we’re heading toward is one in which we treat deepfakes like we do spam: There’s a lot of it, it’s kinda annoying, and, ideally, our computers try to filter it out…