Who needs to remember things when we can just hit rewind? In my case, that means using Rewind AI. Rewind continuously records everything that shows up on my screen. It is my digital historian. With one keyboard shortcut, a window pops up and I can ask, “When did I say I would submit the first draft of Technically Optimistic?” For me, this augmented memory is a game-changer: It's like glasses and hearing aids, but for memory itself.
It's not the first piece of software to attempt this, but Rewind is probably the slickest, and it has become undeniably invaluable to me. I don’t even need to remember which application or website I was using in order to search for something. With a simple keyboard shortcut, I can search through everything I’ve seen on my laptop to find whatever I need to rewind to.
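To give a sense of what’s happening under the hood, here’s a toy sketch of that kind of local “record everything, search later” loop: timestamped snippets of on-screen text stored and queried with SQLite full-text search. This is my own illustration of the concept, not how Rewind is actually built.

```python
# Toy "augmented memory": store timestamped on-screen text locally,
# then full-text search it later. Illustrative only -- not Rewind's code.
import sqlite3
import time

db = sqlite3.connect("memory.db")  # everything stays on the local machine
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS screen USING fts5(seen_at, app, text)")

def remember(app: str, text: str) -> None:
    """Record a snippet of text that appeared on screen."""
    db.execute("INSERT INTO screen VALUES (?, ?, ?)",
               (time.strftime("%Y-%m-%d %H:%M"), app, text))
    db.commit()

def rewind(query: str) -> list:
    """Search everything ever recorded, best matches first."""
    return db.execute(
        "SELECT seen_at, app, text FROM screen WHERE screen MATCH ? ORDER BY rank",
        (query,)).fetchall()

remember("Slack", "I will submit the first draft of Technically Optimistic on Friday")
print(rewind("first draft"))  # -> the Slack snippet, with its timestamp
```

The punchline is how little code the core idea takes: the hard parts (screen capture, OCR, audio transcription) are increasingly commodity, which is exactly why the questions below feel so urgent.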
This is an AI success story, sure, but this week I want to focus on its tradeoffs — especially privacy.
What are we willing to trade off — or how much of ourselves, our data, and our privacy are we willing to give up — for augmentation?
Right now, there is a tradeoff between personalization and privacy. For example, we willingly (whether explicitly or implicitly) hand over all the photos we’ve taken on our iPhones so Apple can create customized “Memories” for us. I log into Google so that it can give me more customized search results, allowing Google to use my data for advertising purposes.
Rewind goes to great lengths to protect your privacy. The company stresses that screen and audio recordings never leave your Mac, and that the data is encrypted. But whose privacy is it protecting? Didn’t take notes during your Zoom call? Just ask Rewind what everyone said.
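For the curious, that “never leaves your Mac” architecture reduces, conceptually, to something like the sketch below: encrypt each recording with a locally held key before it ever touches disk. Again, this is my own illustration (using the Python cryptography library), not Rewind’s actual implementation.

```python
# Minimal sketch of local-only, encrypted-at-rest storage.
# Illustrative only -- not Rewind's actual code or key management.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice this would live in the OS keychain
cipher = Fernet(key)

recording = b"raw screen/audio capture bytes"
with open("recording.enc", "wb") as f:
    f.write(cipher.encrypt(recording))   # only ciphertext ever touches disk

with open("recording.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == recording  # recoverable locally, with the key
```

Note what this does and doesn’t protect: it keeps the data away from the company’s servers and from anyone who steals the laptop, but it does nothing for the other people in your meeting.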
Social norms hold that we don’t secretly record conversations that are expected to be private; in fact, doing so can be illegal under wiretap laws. Federal law in the United States mandates one-party consent, meaning that at least one participant in the conversation must agree to the recording. Twelve states are two- or all-party consent states, meaning that every person on the call must consent to being recorded. This legislation may not have caught up with the existence of Zoom, which automatically tells you when the call is being recorded, but not when the participants themselves are recording it via something like Rewind.
Today, the camera points both ways
In Episode Two of the podcast, Sam Gregory spoke to me about WITNESS, the nonprofit group that uses video to create human rights change. The idea is that if I’m doing evil in the world, I should fear that the “majority of the world’s population now has a camera in their pocket.” Those cameras point both ways. When I wander London, I’m only slightly cognizant of the fact that the UK capital is one of the most heavily monitored cities in the world, with over 630,000 CCTV cameras staring down at me.
This brings me to the big question that a simple newsletter can’t answer, but one that I hope you’ll start thinking and talking about: What should our privacy expectations be, especially as technologies like cameras and software like Rewind become both more advanced and more available? Clearview AI is an example of this potential privacy trainwreck. In 2020, the New York Times published “The Secretive Company That Might End Privacy as We Know It.” Before its use was limited to police agencies, the software was used in some pretty icky ways.
We’ve normalized nanny cameras and introduced surveillance into our homes (it’s way easier to find a list of “best nanny cameras” than it is to get an answer about the ethics or legality of them), and devices like Google Glass and Meta’s Smart Glasses are pushing it to the forefront. (There was, naturally, an entire Black Mirror episode about it.) To up the ante, last week Rewind announced its Pendant, a wearable that captures what you say and hear in the real world.
This is disturbing enough for someone who identifies as male. These issues only become more sensitive and more amplified for women and other groups, who may already feel tracked and scrutinized in menacing ways.
Enter sousveillance
I’m only scratching the surface of surveillance questions. And now, on top of it all, sousveillance is creeping in.
This trend of surveillance by the public, rather than by those in charge, is moving into commercial and wearable technologies, and it needs to enter the public discussion. I doubt that everybody who shows up on a video call expects that it’s being recorded, transcribed, and indexed for future searching, and certainly not their every physical move. Rewind founder Dan Siroker has some interesting ideas on how to mitigate all this, but are we just resigning ourselves to hoping that each technologist, out of the goodness of his or her heart, decides to do the right thing?
The Fourth Amendment talks about protections against “unreasonable searches and seizures,” but that focuses on what the government can know about its citizens. It doesn’t seem to take into account the emerging sousveillance regime. The EU, unsurprisingly, has taken this further with its “right to be forgotten,” ensuring that individuals can ask software companies to delete the personal data they have collected. But again, what about the data we’ve all collected on our own devices? And, to zoom out for a moment, just like everything else, these privacy tradeoffs walk right into national security and global economic questions.
We need to have a conversation about what constitutes our inherent right to privacy as we enter this fast-moving new era, and about who is defining that right. What are we supposed to do to protect ourselves? Should we wear jamming equipment, possibly triggering a surveillance arms race? Maybe we’re supposed to be super awkward and start every conversation with, “Are you recording me?” Do we ditch the tech and hide out in the woods? Or perhaps we accept that trading privacy for connectivity and efficiency is worth the price. But first, we all need to understand just what that price is. What are you willing to give away every time you click “Accept All”? Write and tell me at us@technicallyoptimistic.com.
Quote of the Week
“My hope is that our representatives are critical consumers of information about this technology and not falling for the narrative that this is moving too fast for regulation to keep up. Your job is to protect rights, and those aren’t changing so fast.” — Emily M. Bender at a virtual roundtable hosted by Congressman Bobby Scott called “AI in the Workplace.”
Worth the Read
“Ultra-fast Deep Learned CNS Tumor Classification During Surgery” - That title is a mouthful, but the New York Times has a more accessible version of the story. The gist is that neurosurgeons, upon encountering a brain tumor, face a hard choice: Do you cut away some potentially healthy brain tissue to ensure you got all the cancer, or do you cut conservatively to preserve as much brain tissue as possible, at the risk of leaving some disease behind? Dutch researchers have built a new AI tool that can help brain surgeons guide their scalpels and decide how to proceed.
“2,851 Miles // Bill Gurley (Transcript + Slides)” - The legendary investor behind Uber, Zillow, GrubHub, OpenTable, and so many others argues that America’s regulatory regime is set up to benefit the incumbent interests of large commercial actors over the interests of the public, not to mention how money and politics play into all this. His most memorable slide? Silicon Valley has been so successful because it’s so far from Washington, DC. You can argue whether he has selectively chosen his examples, but the general theme is hard to ignore. Don’t read this as “regulation is useless,” but, instead, with hope that we can get it right this time.
“Decomposing Language Models Into Understandable Components” - Anthropic, the foundation model company that “puts safety at the frontier,” published a blog post outlining its work to better understand the neural networks behind its models. Explainability is actually a really big deal. Imagine saying, “We don’t understand how this technology works, but you should use it.” That’s where we are now! The deep neural networks that underpin large language models are mysterious and confusing, and people are actively trying to figure out how they work. Understandably, a lot of companies are nervous about using or depending on them. So the first company to crack explainability will have a massive trust advantage and stands to make a lot of money from it.
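The core technique in the post is a flavor of dictionary learning: train a sparse autoencoder to re-express a model’s dense internal activations as sparse combinations of features that are easier to interpret. Here’s a toy sketch of that idea; the dimensions, the random stand-in data, and the sparsity penalty are my own illustrative assumptions, not Anthropic’s actual setup.

```python
# Toy sparse autoencoder in the spirit of "decomposing" activations
# into features. Illustrative sketch, not Anthropic's method or code.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activations -> feature strengths
        self.decoder = nn.Linear(n_features, d_model)  # features -> reconstructed activations

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # non-negative feature activations
        return features, self.decoder(features)

d_model, n_features = 512, 4096           # overcomplete: more features than dimensions
sae = SparseAutoencoder(d_model, n_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)

activations = torch.randn(1024, d_model)  # stand-in for real network activations
for step in range(100):
    features, reconstruction = sae(activations)
    # Reconstruct faithfully while keeping only a few features active (L1 penalty).
    loss = ((reconstruction - activations) ** 2).mean() + 1e-3 * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

If the individual features line up with human-recognizable concepts, you can start to say *why* the model did what it did, and that is the whole game for trust.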
“Governor Murphy Establishes State Artificial Intelligence Task Force” - As a New Yorker, I like to poke at New Jersey, but in this case New Jersey leads, joining states like Maryland in creating state-based offices to tackle the ethics of AI. You shouldn’t think of this as a substitute for federal work and legislation. Although the federal government seems a bit distracted right now, we need to be engaging with AI at all levels of government and society.
“Freedom on the Net 2023: The Repressive Power of Artificial Intelligence” - Freedom House, a DC-based 501c3 organization whose mission is to expand and defend freedom globally, published its latest report, which outlines the way repressive regimes are using generative AI to supercharge online disinformation campaigns, and to enhance and refine their online censorship. Time, Forbes, and Gizmodo all have good summaries and writeups of the report.
“A Graphic Hamas Video Donald Trump Jr. Shared on X Is Actually Real, Research Confirms” - After Donald Trump Jr. shared a video regarding the horrific Hamas attacks in Israel, Community Notes immediately started popping up suggesting that Don Jr. was spreading misinformation. Maybe not surprising given the poster, but, it turns out, the video does seem to be real. Given that we live in a world supercharged with disinformation campaigns, as noted above, the question today seems to be, “How are we supposed to know what’s true?”
“Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa — With a Lot of Water” - Data centers are very resource-hungry (we talk to Keolu Fox about this in his bonus episode of the podcast), and water is needed to cool these very hot machines. It’s alarming when OpenAI’s training jobs require sucking water from the watershed of the Raccoon and Des Moines rivers in the middle of a drought. Microsoft also disclosed that its water consumption jumped 34% from 2021 to 2022, again probably due mostly to the needs of AI. Futurism also breaks this down.