This week, I’d like to start by responding to a great reader email from Ryan Brounley, who works with the nonprofit Wikimedia Foundation (which hosts Wikipedia, Wikinews and other free educational sites). He commented that so far he has been unimpressed with AI detection models’ ability to suss out generated material in his own work, and he therefore questions whether they can get up to speed in time to detect election meddling. He wanted to know whether I think establishing source provenance is essential at the campaign level — not to mention at the media and social media platform level.
First off, keep the questions and comments coming! To answer Ryan’s question: while this clearly isn’t going to magically come together for the 2024 elections, that doesn’t mean we should stop. (I even started a company that built tools to help with these moderation and facilitation problems.) All of the solutions are hard — there are no silver bullets — and each will require different amounts of time and effort, meaning they will land at different times. We have to walk, talk and chew gum at the same time. Above all, we need to:
Build up trusted messenger programs – that is going to be crucial going forward.
Staff up human moderators and fact-checkers and keep experimenting with Community Notes-like features across all platforms.
Push for policy changes across all platforms to disclose whether generative AI was used as part of a post – similar to what Meta and Google do.
Support legislation like Amy Klobuchar and co.’s bill to regulate AI-generated content in political ads….
…but also push to go beyond watermarking to content provenance (remembering that this solves trust, not truth; see the sketch after this list)
Continue to work on AI detection models, because this is an arms race: as generative models keep developing, the detection side has to keep pace.
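Since content provenance is the item on that list most often conflated with watermarking, here is a minimal sketch of the underlying mechanism: a publisher cryptographically signs a manifest that travels with a piece of content, and anyone holding the publisher’s public key can verify it. This is written in Python with the third-party cryptography package; the manifest fields and “Example Newsroom” are hypothetical illustrations, and real systems like C2PA are far more involved.

```python
# A minimal sketch of content provenance: a publisher signs a "manifest"
# describing a piece of content, and anyone can verify who it came from.
# The manifest schema and publisher name below are hypothetical.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"(bytes of the image, video, or article being published)"

# The publisher signs a manifest that discloses how the content was made.
publisher_key = Ed25519PrivateKey.generate()
manifest = json.dumps(
    {
        "content_hash": "sha256:" + hashlib.sha256(content).hexdigest(),
        "creator": "Example Newsroom",
        "generative_ai_used": True,  # the disclosure we want platforms to surface
    },
    sort_keys=True,
).encode()
signature = publisher_key.sign(manifest)

# A platform (or reader) holding the publisher's public key can confirm the
# manifest is intact and really came from that publisher.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("Provenance verified: signed by this publisher, AI use disclosed.")
except InvalidSignature:
    print("Provenance check failed: manifest altered or signed by someone else.")
```

Note what verification does and doesn’t buy you: a valid signature tells you who published the content and what they disclosed about how it was made, but nothing about whether it is accurate. That is the trust-versus-truth distinction above.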
I’d love to read your thoughts on this and other AI-related topics. Please email me at us@technicallyoptimistic.com.
Ode to the power of one
Season One of the Technically Optimistic podcast was an exploration of tech, power, and humanity. The ability to influence large numbers of people, and our society, has shifted through the years: It has moved from kings to those with money — and now to people who deploy technology.
Lots of people have asked me to comment on the OpenAI board situation. Like you, I don’t know any more than what has been written in the hundreds of articles, nor do I wish to spend this newsletter weighing in on who’s right. (See also: effective altruism vs effective accelerationism. Spoiler alert: probably neither.) What this situation made very clear is how such transformational technology, both in itself and what it can and will do to our society, is controlled by an incredibly small number of people. It takes me back to the message of my first newsletter, which is that a single person can change the course of an enormously powerful tech company. (In this case, a man in Malta is why the new iPhone uses a USB-C plug.)
Look, we’re all trying to figure out how AI and our society will collide. ChatGPT reached 100 million users in just two months. TikTok got there in nine months. Instagram took two and a half years. This technology is rolling out really quickly, and the power to dictate whether some of the most impactful technologies go full steam ahead or into full “AI safety” mode is concentrated among so few. Unlike the leaders we are used to vetting – they campaign, they are scrutinized, and you vote for them – these few people go through none of that process. This is not the first time we have seen this concentration of power, but the speed I mentioned matters: we are racing forward faster than ever before, and faster than our society can absorb this technology.
This is not a commentary on the people behind OpenAI, but really one about power and humanity. Because so few people are involved, we are being asked to trust the humanity of these few.
I didn’t name my podcast and newsletter “Technically Optimistic” for nothing. As I said in the final episode of Season One of the podcast, I am optimistic because I believe in people. I strongly believe that we can contribute to — and ultimately change — the conversation. We just need to be educated to be able to understand the nuances of this technology. We need to understand the potential benefits for each of our lives, while also understanding the societal costs of how this technology is being deployed.
Technology is a language within everyone’s reach. It’s essential that we all educate ourselves about AI. Where to start? You could listen to our podcast. You could look at materials from programs like Stanford’s AI Congressional Bootcamp. But most of all, remember that none of this is inevitable. As we all start to think critically, it becomes harder for such a small number of people to dictate the rollout, and more possible to embed our values into these systems. This must be a conversation, not a dictate.
Worth the Read
Things may have changed (again!) by the time you read this, but I found this to be one of the best summaries of the Sam Altman v. Board saga.
Sports Illustrated published articles written by AI “reporters.” At some point these models will be good enough to write content indistinguishable from a human reporter’s, or even worthy of a peer of Bob Ryan. But right now, we’re living through the enshittification of these publications. When Futurism called them out, the articles, along with the stock-image author photo and bio for “Drew Ortiz,” were taken down. (Sample bio copy: “Nowadays, there is rarely a weekend that goes by where Drew isn’t out camping, hiking, or just back on his parents’ farm.”) Remember the liar’s dividend? We’re rapidly reaching a time when we have to question more of what we read.
The Information covered a press briefing given by Hugging Face — specifically, some hot takes on what’s going on in the industry. Hugging Face is one of the larger companies building and supporting open source artificial intelligence tools. The two most interesting things: a prediction that open source models will close the gap (as of right now, the open source alternatives to ChatGPT and GPT-4 are still inferior to their closed source brethren – but catching up), and some thoughts on what may come next after transformers (the ’T’ in GPT). That is especially interesting as these large language models are all the rage right now.
Meta, which has called for more regulation over itself, is now challenging the constitutionality of the FTC’s structure – so that it can block the FTC’s ability to regulate it. And, really, to regulate its ability to monetize the data of kids under the age of 18. At the heart of the case is the fact that many US federal agencies have their own internal judicial processes, known as administrative courts or tribunals, to handle specific types of disputes within their regulatory domains. We’ll see how this plays out. We know that the conservative-leaning Supreme Court is looking for opportunities to curtail the power of the executive branch.
And, circling back to Ryan’s question up top: AI will even play a role in your holiday shopping. Amazon announced that it is using AI software both to weed out fake reviews and to summarize customer reviews into a concise paragraph, allegedly to save you time slogging through all 7,586 reviews of the new Sodastream. AI creating fake reviews, and AI weeding them out, too.