A couple of weeks ago, after I wrote about the proliferation of biometric scanning at US airports, I got lots of emails asking: Wait, what about Apple Face ID?
Like millions of people, I use Apple's Face ID many times a day: to unlock my phone, use banking apps, buy things with Apple Pay, download apps, and so on. So how is it different from the facial recognition systems we were talking about in that newsletter, and should we be thinking more deeply about this, too?
There are big differences! The biggest is that all of your biometric information is stored on your phone (or iPad Pro) and lives there, and only there. In fact, Apple built a dedicated security subsystem called the Secure Enclave to hold it, whether that's your fingerprint or the infrared image and depth map of your face, which Face ID builds by projecting more than 30,000 infrared dots to capture its contours. (Thieves, or sneaky kids eager to buy stuff on your card, can't just show it your photo or point it at your face while you're asleep. To prevent the latter, you can even set up an extra security step, "Require Attention for Face ID," ensuring that your phone will only unlock when it sees you with your eyes open and looking at the screen.) Because of this enclave, no software on your phone, not even Apple's own, can access your raw biometric data.
All a piece of software on your phone can effectively do is ask, "Is this Raffi?" and the system will answer yes or no. There's no uploading your face scan to a server. Nobody is creating another Clearview AI from your face information. (Learn about the perils on this episode of the podcast.) Remember when we wrote about Age Verify? It's kinda the same idea: the actual data never has to move; only a pass/fail verification check does.
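For the technically curious, here's roughly what that exchange looks like from an app's side: a minimal sketch using Apple's LocalAuthentication framework (the function name and the "Unlock your notes" reason string are just illustrative):

```swift
import LocalAuthentication

// A minimal sketch: how an app asks "Is this the owner?" and gets only yes/no back.
func unlockWithFaceID() {
    let context = LAContext()
    var error: NSError?

    // First, check whether biometric authentication (Face ID or Touch ID) is available.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        print("Biometrics unavailable: \(error?.localizedDescription ?? "unknown error")")
        return
    }

    // The whole conversation: the app states why it's asking, the Secure Enclave
    // does the matching, and the app receives a plain yes or no. The raw face
    // data never leaves the device, and the app never sees it.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your notes") { success, authError in
        if success {
            print("Yes, it's you.") // proceed to whatever was locked
        } else {
            print("No match: \(authError?.localizedDescription ?? "authentication failed")")
        }
    }
}
```

Notice what's missing: the app never receives an image, a depth map, or any template it could upload. All it ever gets back is that yes-or-no answer.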
In fact, because the data is stored only on your phone and can't move, you've probably noticed that when you upgrade your phone, all your apps and settings come along, but your face data doesn't! You literally have to teach your new phone to recognize your face all over again.
Compare that to TSA facial recognition, where all your facial information is uploaded to a server managed by the TSA/US government. From there it can be moved between systems, analyzed, and used however they want. (And, of course, potentially breached.) As we've said before, they can do what they want with this data not only because there is no federal regulation on how it must be managed, but also because there doesn't seem to be any architectural safeguard, nothing like the Secure Enclave, built into these airport facial recognition systems to protect you.
So, thanks to those of you who reached out to ask! I'm always here for your questions. And the answer to this one: huge fan of Face ID. Airport facial recognition? Not so much.
Worth the Read
“Everyone will go hungry,” a Wuhan taxi driver says as driverless taxi fleets roll out (literally) across China.
Meanwhile, in the US, a bill banning Chinese-developed software capable of Level 3 self-driving and above is close to passing. This would also halt testing of self-driving cars made in China.
This video shows you how to figure out whether your car is collecting data on you. Of course, the irony is that Instagram is collecting all this data on you as you watch the video.
California’s new AI safety bill would mandate that companies spending more than $100m on training a frontier model, such as GPT-5, perform safety checks or be held liable should their system lead to a mass-casualty event. The likes of Fei-Fei Li say it could be harmful to the industry (though it should be noted that she is not only an incredibly respected academic but has also raised $100m for her latest startup).
MIT’s Daron Acemoglu, whom I was fortunate to interview for the newsletter on AI education, has a thoughtful essay about what we’re getting wrong when it comes to our fears about AI safety.
The man who invented pay-per-click advertising says the generative AI industry is stealing data. So he created a company that gets publishers and individuals paid when their content is used to build models.