A few weeks ago, after I wrote about kids and AI, I received an email from a researcher whose work I admire. While the newsletter focused on how we need to get kids involved in the discussion and design rather than just legislating on their behalf, she pointed out that I needed to take a closer look at the legislation that’s being proposed, especially the 2022 Kids Online Safety Act, or KOSA.
Rather than get defensive (I try to keep the newsletter under 1,000 words!), I wanted to educate myself. So I reached out to Alice E. Marwick, co-director of the Center for Information, Technology, and Public Life at the University of North Carolina at Chapel Hill and co-author of the report Media Manipulation and Disinformation Online, among many other things.
Please keep your feedback coming: It’s all about the conversation! Write to me at us@technicallyoptimistic.com.
Raffi Krikorian: What is KOSA, and what’s it trying to solve?
Alice Marwick: The Kids Online Safety Act is, I think, the biggest piece of legislation in this whole constellation of bills that are basically framed as trying to solve online harms to children and teens from social media. They're predicated on this idea that there's a mental health crisis caused by social media.
KOSA and bills like it basically claim they're going to solve three things: The first is they're going to protect kids and teens from online harms. The second is that they're going to give parents more control over young people's online experiences. And the third is that they're going to hold big tech accountable, or in some cases that they're going to rein in big tech's most exploitative data practices.
Now, they do this in a couple different ways. The most commonly discussed is the age-verification stuff — what's called age-gating — where platforms would be responsible for determining whether or not they have minors on the platform. KOSA and other bills, like the one passed in Utah, would also give parents access to their kids' social media content and activities.
There are a few [bills] that do directly target privacy and limit the collection of data from children, which I think are perfectly fine, like the New York Child Data Privacy Protection Act. But finally, there's this provision in KOSA that would hold platforms to a duty of care, which basically says that platforms are responsible for harms to minors like anxiety, depression, eating disorders, suicidal ideation, substance abuse, et cetera.
That obviously gets us away from Section 230 — away from platforms not being liable — and really puts this large burden on tech platforms to somehow avoid harms that may be caused by a wide variety of other things.
Raffi: Why is that bad?
Alice: I think everyone is concerned about the teen mental health crisis. The problem is the way we define its cause determines the kinds of solutions that we imagine. I don't think there's any certainty that social media is the cause of this. When you actually dig into the research, it's not at all certain that there's any kind of causal link.
There are plenty of other countries besides the United States that have very high rates of teen social media use that don't have these kinds of mental health impacts. I think we could think of many other reasons why young people may be suffering from depression and anxiety, like the rise of gun violence, the fallout from the COVID-19 pandemic, economic insecurity, the effects of climate change, and helicopter parenting. But all those things are really hard to solve.
What there is political will for is tech regulation. The problem is that bills like KOSA have a lot of unintended consequences. When you talk about age verification, you basically set up an internet that is age-gated, where rather than protecting people's privacy, we're now going to have to show driver's licenses, or in some cases biometric data or facial recognition, to use certain sites. And not only are there immense privacy risks, we also know that facial recognition is deeply racially biased and has significant inequities in it.
I was on a panel at NYU with a woman from an immigrant rights organization. She was basically like, “None of the people in my community are going to use the internet if they start asking for official documentation, even if they are documented.” Because it sets up this very frightening barrier to access.
The second unintended consequence is that there are lots of types of content that certain communities believe are harmful and other communities believe are not. A big, big one that comes up over and over again is gender-affirming care.
Raffi: Is the agenda here purely one of “We must do something and we're gonna stick it to the tech companies”? Because in some ways, you've just described a censorship regime.
Alice: So obviously Congress wants to show some kind of result when it comes to tech and privacy, right? It's much easier to pass a law that targets children than to pass any kind of comprehensive privacy legislation.
There are a couple of reasons why I think this idea is so sticky. The first is that young people's use of tech and media generally makes adults uncomfortable, especially if the kids are quicker to adopt things than their parents, or if they're better at using them than their parents. This drives moral panics. This kind of regulation has been attempted over and over and over again, even pre-internet, and it usually doesn't work.
I wanted to come back to this stuff about content that parents don't approve of, because it's part of this larger cultural narrative that I think we need to recognize. KOSA has bipartisan support, but it has this very strong set of supporters who believe in restricting kids’ access to ideas that the parents don't approve of. And this is under this rubric of parental rights, which has been weaponized and is used all over the place now — the Florida Don't Say Gay Bill is actually called the Parental Rights in Education Act — because the idea is young people should not be exposed to the idea that it's okay to be LGBTQ, or that gender-affirming care exists, or that there are more than two genders. And if the parents don't want them to access that, it's violating the parents’ rights.
This is part of authoritarian parenting. Parenting style is actually one of the biggest partisan differences between liberal Americans and conservative Americans. I think there's a possibility that politicians are like, oh, this is a constituency that might be sympathetic to this particular narrative.
And finally, I think the age-verification vendors are going to make an enormous amount of money. It's like getting in on the ground floor of PayPal or something like that. I don't think we should ever underestimate the power of lobbying companies.
Raffi: There is no silver bullet to solve anything, but are you saying that nobody should do anything in this space?
Alice: Well, I think it's, what is the problem we're trying to solve, right? If the problem is we want to improve teens' mental health, then I think the obvious thing would be to increase access to mental health services. We know those are really hard to access. A lot of therapists are overloaded post-pandemic. They're really expensive. There are a lot of kids whose parents don't have insurance, or have bare-bones insurance.
We know that kids who are at risk online are usually kids who are at risk offline. It's really hard to separate the online and the offline. And I read one study that said that young people who are undergoing a difficult time are more likely to turn to social media in the same way that you might binge eat or watch a lot of TV. So there's correlation there, but there's not necessarily causation.
In September, the Census Bureau released figures showing that the child poverty rate in the U.S. more than doubled between 2021 and 2022. That also seems like a contributing factor that could be addressed. So it's about, where can we do the greatest amount of good for the greatest number of people?
And given the deleterious effects of these bills, this is not it. The tradeoff doesn't seem right. Now, we do need comprehensive data privacy. There do need to be clear limits on what data platforms and other tech companies can collect and what they can use it for. But this is true for everybody. Not just people under 18.
And again, there's all this low-hanging fruit, like data brokers. There are different ways to approach this that don't include pushing through legislation that's based on faulty assumptions, when we honestly don't know what the impact is going to be.
Raffi: What would you do? Is making kids part of the solution part of the answer?
Alice: It's hard to tell, right? Kids very rarely get a voice in these conversations, and when they do, the kids who manage to speak are often ones from fairly privileged backgrounds. They are the ones who, you know, maybe their parents are activists or went to college, or there's something about them that gives them the idea that they can be politically effective, an idea that kids from minoritized communities often don't share.
I think when we're proposing any changes to any technology, we need to center the most vulnerable communities that might be affected and look at how it will affect them: LGBTQ and trans youth, undocumented youth, poor kids. Now, how you do that is another question, whether that's participatory design, working with social workers, or working with particular school groups in underserved areas.
But you can't look at a bunch of well-educated white kids and assume that their experiences on the internet are going to be the same as everyone else's.
I think that if I had a magic wand and an unlimited budget, I would say increase access to mental health care for kids. I would fund qualitative research studies that try to examine different causes for the teen mental health epidemic and see if we can actually drill down into what some of these things might be. I think programs that increase housing security, food security, things like that for young people are always going to be generally good for the country as a whole and for the future of the country.
In terms of tech regulation, I do think we need comprehensive data privacy solutions. If that means we have to carve out something really small, then so be it. I just don't think we have the political will for a comprehensive data privacy bill. And that depresses me, because what I don't want to see is more weird piecemeal solutions, like HIPAA and FERPA, where your video store records end up more protected than your email.
Worth the Read
The New York Times has a great roundup of all the global frameworks for regulating AI that are popping up (and attempting to keep up).
A few weeks ago in the links section, we highlighted the 23andMe hack. It’s worse than we thought: 6.9 million profiles were accessed. What data was accessed is both complicated and unclear, but it doesn’t seem to be actual genetic data. (One flag: The company changed its terms of service last week to make it harder for its customers to file a class action lawsuit.) Either way, this is a great reminder that biometric data is one of those things that you can’t change! Please be very wary about who gets to store your biometric data. Apple’s Secure Enclave is pretty good, for example. (Your biometric data never leaves your phone, and apps can’t read it.)
Speaking of Apple, they released the ad “Personal Voice on iPhone.” Apple is indisputably great at pulling at my heartstrings. I won’t spoil it, but they are demonstrating a technology released in iOS 17 that allows you to “create a synthesized voice that sounds like your own to communicate with family and friends.” A person’s voice is such an integral part of their identity, and it is unimaginable to lose it. Apple has gone to great lengths to protect the security of the model that can create your voice. And I trust Apple. I just really hope they got it right.
If you’ve ever wanted something like DALL-E or Midjourney (the image generators powered by generative AI) but from Meta, then your wishes have been answered. Their product is Imagine by Meta, and it's powered by their model, named…Emu.
Not to be outdone, Google has finally started releasing Gemini, the company’s response to OpenAI’s GPT-4 and the generative AI technology that it will begin integrating across its products. Gemini will start showing up in multiple places, such as on Pixel phones, where the Recorder app will now be able to summarize recorded conversations.
Finally, Reuters is reporting that Senator Ron Wyden sent a letter to the Department of Justice urging the DOJ to allow Apple and Google to inform their users about how government agencies are demanding push notification records. Backing up: Every time you get a push notification, it traverses Apple’s or Google’s servers before it hits your iPhone, iPad, or Android phone, which means those companies have records of what is going to your device. It seems that governments are demanding that information from the companies while simultaneously preventing those companies from talking about it. Because of Wyden’s letter, Apple said they now have the opening they needed to be transparent with the public about how governments are monitoring these notifications. Google affirmed they will keep their users updated as well. Go transparency.
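For the technically curious, here’s what that data flow looks like from an app developer’s side. Below is a minimal sketch, assuming a Python backend using Google’s firebase-admin SDK to send one notification through Firebase Cloud Messaging; the service-account file and device token are hypothetical placeholders, and Apple’s APNs has the same basic shape.

```python
# Minimal sketch: sending one push notification via Firebase Cloud Messaging.
# Assumes the firebase-admin package is installed and that you have a real
# service-account key and device registration token (placeholders below).
import firebase_admin
from firebase_admin import credentials, messaging

# Authenticate as your app's backend (hypothetical key file).
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="New message",
        body="Someone sent you a message.",
    ),
    # FCM-issued token identifying one specific device (placeholder).
    token="DEVICE_REGISTRATION_TOKEN",
)

# This call POSTs the payload to Google's FCM servers; Google then delivers
# it to the device over its own persistent connection. The sender never
# connects to the phone directly.
message_id = messaging.send(message)
print("Accepted by Google as:", message_id)
```

That middle hop is the whole point: whatever is being pushed to your device passes through, and can be logged by, Apple or Google, which is why governments can demand those records in the first place.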