I get pulled aside for additional security checks at airports a lot. (Like, often enough that I have a spreadsheet tracking it, but we don’t need to dwell on that…) Maybe it’s because I’m of Middle Eastern descent and have a beard? Or simply because I travel a lot? I’ll never know.
This summer, traveling both for work and with my family, I noticed a big change: TSA now uses facial recognition when they scan your ID to make sure you’re really you. And Delta and United recently signed on to the TSA’s PreCheck Touchless ID program, promising quicker bag drops if you let them scan your face.
The TSA claims it’s doing this to streamline the security process, which, as we all know, is one of the least-fun parts of travel. Or maybe they’ve been watching increasing numbers of travelers zip through the Clear line: people who willingly pay $189 a year and consent to having their fingers or eyes scanned. Who needs privacy when you can spend more time in the lounge? (I’m guilty of this: I use Clear’s fingerprint scanning, though they also have scans of my eyes, which were required to enroll.)
Convenient? 100%. But look: We should all have a few problems with this. When asked by the Privacy Compliance Hub what she thought about biometric scans in airports, fellow Media Lab graduate and founder of the Algorithmic Justice League Joy Buolamwini put it plainly: “The big one for me is normalizing surveillance.” We are getting people more and more accustomed to having their images recorded and their biometrics captured, scanned, and used. Used how? Who knows.
Normalizing surveillance would basically mean that anybody could create a version of Clearview AI, the controversial facial-recognition software that was built by scraping millions of people’s pictures from the Internet without their permission and selling access to law enforcement. Needless to say, it has led to some terrible misidentifications. (We explored it on the podcast.) Right now, we are rightly incensed when somebody builds something like Clearview, but through programs like these airport scans, we could be handing over enough data for anybody to build the next one.
Philosophical stance aside, what should you be worried about?
Eventually, your photo is going to be matched against a large database of people being sought, and we’ve seen and heard horror stories of how this can go wrong. (Imagine being misidentified as someone on the no-fly list.) It has gotten to the point that some cities have banned facial recognition matches as the sole basis for criminal convictions, because too many innocent people were being misidentified. Do you really want to be in the database?
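To see why misidentifications at this scale are almost a mathematical certainty, here’s a back-of-the-envelope sketch. The numbers are illustrative assumptions on my part, not TSA figures:

```python
# Back-of-the-envelope: wrongful flags from face matching at airport scale.
# Both numbers below are illustrative assumptions, not official statistics.

travelers_per_day = 2_000_000   # assumed daily screened travelers, nationwide
false_match_rate = 1 / 100_000  # assumed chance a search wrongly flags someone

false_flags_per_day = travelers_per_day * false_match_rate
print(f"Expected wrongful flags per day: {false_flags_per_day:.0f}")  # -> 20
```

Even a system that’s wrong only once in every 100,000 searches would, under these assumptions, flag roughly 20 innocent travelers every single day. That’s the base-rate problem: rare errors stop being rare once you scan everyone.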
But also, is the TSA deleting your image? According to U.S. Customs and Border Protection, no. “Facial images for in-scope [noncitizen] travelers are also transmitted to the Department’s Automated Biometric Identification System (IDENT) and Homeland Advanced Recognition Technology System (HART). All biometrics of in-scope travelers are transmitted to IDENT/HART as encounters and are retained for 75 years in support of immigration, border management, and law enforcement activities.” Seventy-five years is a lifetime.
The TSA’s Biometric Roadmap makes clear that it fully intends to roll out facial recognition at airports across the country. That’s a problem because the TSA’s current assurances about privacy protections and the voluntary nature of the program ring hollow: there are no meaningful restrictions on how the agency implements the technology. As we’ve discussed repeatedly on the podcast this season, that’s because the United States lacks an overarching law regulating facial recognition and ensuring the transparency, accountability, and oversight needed to protect our privacy, civil liberties, and civil rights.
Another risk set in motion by the TSA’s use of face verification is the very real possibility that our faces become our default ID, a de facto national ID controlled by the government. The concerns here are many, from targeting and discrimination against certain groups to potential abuses of government power, not to mention the lack of checks and balances.
You can say no
My wife declines facial recognition for herself and our children every time we go through security. She doesn’t want the government to have any pictures of her beyond those she explicitly consents to, mostly out of a baseline mistrust of what it would do with them. She has a point: As I’ve said before, there is no federal data privacy legislation. Who knows what they may do! And not just now, but in the coming years. It makes me think of the podcast episode we did on women’s reproductive data, and how information from digital medical records was used against women after their home states banned abortion.
For the most part, our family’s refusal hasn’t caused any problems. We still wear masks at the airport, so we just keep them on and ask the TSA employee if we can do a regular ID check instead. Sure, we may slow down the line a bit, but so do lots of things.
(That said, Oregon Senator Jeff Merkley has reported having problems saying no on his trips home from Washington. He’s now part of a bipartisan group pushing to halt facial recognition at airports until 2027, citing privacy concerns.)
It might not be too late to stop it, but it’s breathtaking how quickly we’ve gotten used to it. Joy’s point about normalization is already playing out: If you’ve boarded an international flight recently, you’ve likely gone through facial recognition not only at the TSA checkpoint but perhaps again at the gate before boarding. (At least that’s been true for me when flying United.)
And if we get to that world, we’re not far from anybody being able to track our movements. There are so many cameras and sensors out there that someone could piece it all together. And then your privacy, poof, is gone.
Worth the Read
Here’s an actually helpful cheat sheet for AI terms, from generative AI to frontier models.
It was, unfortunately, only a matter of time: a deeply disturbing study has found that images of real victims of child sexual abuse have been used to generate deepfake videos.
Speaking of data and travel, here’s a charming TSA loophole found by a 100-year-old flier.
This kid was fed up with the web filters on his school computer. So he raised $1.8 million and hired programmers to build his own version of a kid-safe filter.
A recent a16z poll found that the majority of generative AI apps are used for companionship. At the bottom of the list? Content generation.