The other day, I was watching a YouTube review of Tesla’s latest self-driving features. The reviewers praised how “natural” and “confident” the car felt. They casually mentioned that it “keeps up with traffic.” That’s code for speeding.
Let that sink in for a moment: They are commending an AI system for breaking the law.
I deeply understand this tradeoff: I used to run autonomous vehicles at Uber. When we were testing them in Pittsburgh, I had to weigh a similar bargain: Do we follow the speed limit and potentially create a dangerous situation, or break the law to keep everybody safe? Like most drivers out there, we chose to speed. But here is the difference: We were a research team operating with trained safety drivers behind the wheel. We never released this to end consumers, people who may have to take responsibility for the car breaking the law.
Look, this is a hard call. I’m a Tesla driver. I sit there quietly when my car chooses to speed. There is a setting that puts the car’s speed on “auto”: Tesla uses its data to decide how fast the car should travel down the road, irrespective of posted speed limits, letting it keep up with traffic even when that means exceeding them. I just let the car do its thing. But I’m very conscious that I’m taking responsibility for this. Does every owner of a self-driving car — this includes newer models from Audi, BMW, Volkswagen, Nissan, Kia, Ford, Volvo and Range Rover — actively understand that? I wager no. If your AI co-pilot decides to speed, you’re the one paying the ticket. And, down the road, your insurance company could deny claims based on your “willful violation” of traffic laws.
Are you comfortable with that? I’m willing to bet that you speed while driving your own car. Shouldn’t these systems share that same inclination to keep up with traffic? But, counterpoint, are we normalizing AI systems that break laws while disclaiming responsibility, and does that set a concerning precedent? What other laws will we be okay with computer code breaking? Is this meant to be a signal that laws need to change? Is it just a 21st-century extension of the radar detector? At this point, the behavior is so subtle that we don’t really think about it. And that’s what keeps me up at night: We’re not making explicit choices; we’re making implicit decisions.
As an engineer, I have to wonder what it means when we build systems that are “better” (read: more efficient) because they’re willing to break the law. The tradeoff between safety and legality is real, but we should be more explicit about it. Who should be allowed to make these calls? I’m not seeing any pushback on the consumer or federal level. And, knowing how these things go, it may take many lives, years of consumer lawsuits, and campaigns against formidable lobbies to get anything changed.
That is, it will if we don’t start speaking up now. Do you want to drive, or be driven?
Worth the Read
Speaking of Tesla, Gizmodo found that Tesla has the highest number of reported fatal crashes, but the federal reporting rule that made this information public is likely about to be wiped off the books.
A recent Pew survey found that 67% of Americans believe government oversight of AI won’t come soon enough, and might not go far enough. So it was heartening to read the open letter posted by 32 state lawmakers who pledge to release draft models of legislation by this month.
As if teens didn’t have enough to worry about in school: New “nudify” apps and sites can create nudes from clothed photos. 60 Minutes’ Anderson Cooper posted this unsettling interview with an affected teen. (And if women in politics didn’t have enough to worry about, 1 in 6 congresswomen report dealing with sexually explicit deepfakes.)
Airlines can’t keep track of bags, but they want me to spend money — and hand over surveillance to them?
The Washington Post is now using AI to enhance the conversations between subscribers and its journalists — perfect for increasing public discourse around AI! (I’m also proud to say that Gitesh Gohel, Head of Product and Design for The Post, is a Speakeasy.ai alum. I’m glad to see some of the thoughts from Speakeasy making their way into such incredible publications!)
Those encrypted messages that the developers of RCS (rich communication services) have been promising in the wake of the Salt Typhoon hack are still months away.