Kids and AI
They’re the ones who will benefit from — and be harmed by — it most. So what are we doing to protect them?
There is another Facebook whistleblower. And it’s specifically about kids. According to the WSJ article about the revelations of former Facebook engineering director Arturo Bejar: “One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.” The anecdotes in the story about the harassment of children are jaw-dropping.
As are the stories of how Meta handled it internally. The problems span from how the statistics are reported (“The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed,” meaning the denominator is what Meta chose to take down, not all the harmful content actually on the platform; it reminds me of the stories of data science and survivorship bias from World War II), to Meta’s near-total reliance on automated methods to identify and remove hate speech (“Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision”), to rules written so narrowly that they ban only unambiguously bad material. (Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s post with inappropriate materials.) And that last point, like an ouroboros, wraps back to the first: Meta itself defines what counts as harmful content, so it can also define how successful it is at removing it.
This story broke on November 2nd. By the 7th, Bejar was testifying in front of the Senate Judiciary Committee. Right now, interest in this whistleblower seems far lower than what Frances Haugen garnered, but we’ll see. I really hope that people haven’t become desensitized to, or otherwise distracted from, children’s issues.
Above all, it makes me think about how rarely kids are given a voice when it comes to the development and regulation of technology — especially with AI, which stands to impact them the most over the coming decades.
Mitigating risk…
Children are one-third of the world’s population. And they are using devices at an increasingly young age. There is even a study that found that 1-year-olds who spend more than four hours per day with screens show developmental delays in communication and problem-solving. One-year-olds?! Their intellectual development is just as worthy of protection as their privacy and data.
Yes, AI affords incredible potential when it comes to personalized tutoring, among other educational gains. (Listen to Khan Academy founder Sal Khan discuss AI and education on Episode 4, Season 1 of the Technically Optimistic podcast here.) But there are many safety gaps when it comes to, say, video games and virtual assistants, which are designed and trained by adults.
We need to be looking at both sides of the equation when it comes to children. Most people, especially given the popular news, approach this purely from a risk-based perspective. We need to protect our children from the potential horrors of being online, as well as the considerable risks to their mental health. Legislation such as the Kids Online Safety Act has been proposed by Senators Blumenthal and Blackburn. And people like Stanford’s Renee DiResta are working hard to protect them from child sexual abuse material, or CSAM. (You can hear Renee talking about AI and accountability on Episode 6, Season 1 of the podcast.)
I think we can all absolutely agree that we need to protect our kids from risks. However, we also need to approach this from a rights-based perspective. It's almost clichéd to say, but the Internet is the future, opening up tons of educational and work opportunities, and creating a global community. If we enter a world in which parents are so scared about their kids’ online safety that we revert to not letting them be online at all, they will lose out.
…while protecting kids’ rights
The UN Convention on the Rights of the Child, from 1989, seeks to protect children from economic exploitation, among other harms. The UN’s Human Rights Office of the High Commissioner issued General comment No. 25 in 2021 to try to update it for the digital environment, and UNICEF, also in 2021, released its guidance on policy for child-centered AI. But 2021 is still pretty far behind this wave of generative AI (which, if you think about it, many kids are training).
The UN’s comment breaks down children's rights into four principles:
Non-discrimination: Children must be treated fairly, whoever they are.
Best interests of the child: Governments and businesses must, when making any decision, do what is best for children rather than what is best for themselves.
Survival and development: Children must be supported to grow into whatever and whoever they want without any harmful interference.
Respect for children’s views: The opinions of children matter, and they must be taken into account in all the things that they care about.
The UK’s Digital Futures Commission explains this very well.
We need safety by design, not after-the-fact legislation — especially in the US, the birthplace of much of this tech. We need to codify it better. Organizations like Common Sense Media have been working to put together guidelines on how parents and educators can help kids navigate the space, including a ratings and review system for AI products.
UNICEF also offers parents conversational guidelines for exploring AI together. But, of course, it can’t all be left to the parents. We need to educate kids so they can be part of the conversation, updating the digital citizenship classes that are taught at some public schools around the country, such as in New York.
Finding a way forward
Luckily, there are youth activists like Sneha Revanur, the college sophomore who founded the Encode Justice coalition when she was 16 to help build a global movement around human-centered AI. Politico called her the Greta Thunberg of AI. Hopefully as the number of Encode Justice chapters grows internationally, so will the discussion. (You can hear Sneha on Episode 4, Season 1 of the podcast, or read the transcript here.)
We need practical examples of how to work with kids to help develop the tools that they use, and to be sure that they’re included in training these programs. (The term “children’s invisibility” is increasingly used to describe how children’s data is missing from these systems, making it hard to ensure that those systems can properly serve kids.) There are already some good examples out there, thanks to the likes of Niobe Way at NYU (who, I was thrilled to see, wrote in to respond to the Oct. 27 newsletter on AI and education) and Deb Roy at the MIT Center for Constructive Communication, who are trying to address this through new products specifically designed for listening, dialogue, deliberation, and mediation with kids in mind.
In the UK, organizations like the Alan Turing Institute have been involving primary school children across Scotland in their work on artificial intelligence and children’s rights. From their perspective, there are four topics that have emerged in Phase 1 of their work: AI and education, fairness and bias, safety and security, and the future of AI. And the Digital Futures Commission recently released the Child Rights by Design toolkit, to be used by those designing digital products and services for kids. (The 11 principles put forth for designers include Best Interests, Consultation, and Age-Appropriate.)
Designing for safety. Teaching digital guidelines at home and in school. Including kids, in all their diversity, in the conversations around how digital spaces are designed and regulated. These are just some of the steps we need to be taking.
It’s right there in the name of the Digital Futures Commission: Children are the future. As they put it so well, kids deserve a world in which companies and policymakers see and consider them as they seek to find that crucial balance between risks, opportunities, and rights.
Worth the Read
AI Cameras Took Over One Small Town - Now They’re Everywhere - In my congressional testimony, I highlighted an issue with self-driving cars: they are constantly recording video and basically blanketing us in surveillance. And now this is happening in small-town America, too – not with self-driving cars, but with the array of private surveillance cameras that are already installed – including smart doorbells like Ring. Look, there is a notion that you shouldn’t expect privacy in a public space. However, that may be outdated in this world of rampant digital surveillance – especially one that allows for aggregation and collation. It’s terrifying. The good news? Some people are pushing back against their towns’ adoption of the software.
The Final Beatles Song Was Made with a Little Help from AI - There’s a new Beatles song for you to stream, “Now and Then.” The track was made from John Lennon’s 1978 demo, including a 1995 guitar riff from George Harrison and (actual) bass and drums from the remaining living members. It uses the same machine-learning software that director Peter Jackson used for the Beatles documentary “Get Back,” which was able to separate Lennon’s vocals from the piano (not to mention some noise from his apartment) without losing any audio quality. But can AI generate a video for the song…?
High School Students Use AI to Make Nude Deepfake Photos of Their Classmates - On the darker end of deepfakes: on October 18th, sophomore girls at a New Jersey high school thought their male classmates were acting “weird.” Turns out the boys were circulating deepfake nudes of the girls. Because of the murky legality around AI-generated images, the school, parents, and police don’t have a blueprint for how to proceed. “I am terrified by how this is going to surface and when,” one mother told a reporter.
How States Are Guiding Schools to Think About AI - States are split (surprise!) on issuing guidance for platforms like ChatGPT. To give you an idea: only CA and OR have issued policy guidance, 11 states are developing policy, 21 said they had no current plans…and 17 didn’t respond.
Striking Actors and Hollywood Studios Agree to a Deal - SAG-AFTRA reached a tentative agreement with the entertainment companies this week, potentially clearing the way for the industry to resume business. At its core were issues around streaming revenue and the use of AI to capture actors’ likenesses so they can be reused in other ways. (This article, featuring Succession’s Brian Cox — and a great headline — is a good primer.) Season Two of the podcast will have an episode with Clark Gregg, with whom I testified in front of the Committee on Energy and Commerce in October, to explore those underlying concerns. Details of the deal will unfold in the next few days.