Techno-optimism as a term just hit the big time, thanks to Marc Andreessen’s sprawling Techno-Optimist Manifesto. There is a lot to parse here, but the crux is that we should embrace technology at all costs, given its alleged benefits to our society and economy, and to the foundation of human happiness itself. He even lists those who think about “social responsibility,” “trust and safety,” “tech ethics,” and the like as explicit enemies of this train of thought. This kind of thinking — profit now, revise only when the negative effects are too visible (or costly) to ignore — reminds me of something: leaded gasoline.
In the 1920s, companies started selling gasoline with a new additive, tetraethyl lead, which made engines run more powerfully and quietly. But it came with a serious tradeoff. Public health experts questioned the decision immediately, and kept questioning it until leaded gasoline was finally banned in the U.S. in 1996. As a result of this unchecked “progress,” researchers have estimated that millions of premature deaths occurred, the soil in many cities became toxic, and there was a measurable society-wide decline in IQ. It is because of these socially responsible individuals in public health that we finally worked our way out of a grow-or-die cycle — one that perhaps we could have stopped earlier.
I created the Technically Optimistic podcast and now this newsletter because I believe in people. I am optimistic because I believe we can see and solve problems. Just because new technology brings problems doesn’t mean technology is a “failure” — and perhaps that is what people like Andreessen are concerned about. Technology is never going to be perfect. But we can figure out how to make technology beneficial to all, not a benefit to some at a cost to others. And that, I believe, starts with talking about technology and educating all of us to be technically literate.
Andreessen’s long-scrolling screed includes only one reference to education, which struck me, since it’s a topic that’s been on my mind: How do we educate students not only to harness technology but to build it? And how can AI enhance, rather than simply automate, education?
A few weeks ago, I was lucky to have the chance to participate in a summit on AI and Education put together by the TUMO Center for Creative Technologies, where I was able to chat with Daron Acemoglu. Acemoglu, an Institute Professor of Economics at MIT, is most recently the co-author of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” which looks at how policy around technology can ultimately shift the balance of power in a society. I couldn’t resist asking him to do a quick interview on education — kind of a warmup for a longer conversation in Season Two of the podcast, which will air early next year. Here’s the beginning of our conversation, with much more to come!
I’d love to know what you think about education and AI. Write to me at us@technicallyoptimistic.com.
Raffi Krikorian: The core idea behind your book, “Power and Progress,” if I understand it correctly, is that technologies like AI create a dynamic that concentrates power in a few people's hands, which depresses wages and leaves large segments of society disadvantaged. However, mechanisms like regulation and civic action can possibly shorten or ideally prevent those periods. Given that, what's the role of education in this AI world that we live in now?
Daron Acemoglu: Education, of course, has a very important role, but I hesitate to jump into education as the main solution, partly because both within economics and in policy circles for the last 50 years, we've heard, “Technology is running ahead. The only thing you can do is education.”
First of all, education is only one of the tools. And second, the perspective that we have to educate the workers differently so that they can adapt to technology — though it has some grain of truth — is too one-sided. We have to make sure that technology also is appropriate and develops in a way that's coherent with the skills and priorities of the workforce.
For developing countries, I think education is more important, because a danger is that they'll fall behind in this process. Their organizations are not ready for it. Their education system is not ready for it. And quite honestly, their politicians are ignoring all of the opportunities and the dangers that AI poses for them.
Raffi: Okay, so maybe we can separate education into two different forms. One is that we need to educate everyday people, and we need to educate politicians. How should we be thinking about that? And let's assume we can do it.

Then, what do we need to be doing with students, both in the sense of preparing them to use the technology and of helping them be more mindful about these technologies, so that when they become builders in the future, they're already thinking about these issues?
Daron: You’ve got it exactly right. It's a really multi-layered problem. One layer is the education of the public and politicians, and that's a very high-level thing. I mean, it's okay if politicians become good at prompt engineering [laughs], but that's not what we are aiming at. We want them to understand the capabilities, the dangers, and the ethical questions involved in AI.
For the students, I think there are a couple of important issues and important dimensions. First, we want the vast majority of the students to be comfortable with the technology. So that requires them to learn some basics of artificial intelligence, including prompt engineering and how to use some of these new tools that are developing.
“I am very optimistic that we have it in us to remake society in a way that can benefit from technology in a fair, equal way.” — Daron Acemoglu
They need some of the do's and don'ts of using these technologies, and enough of an understanding of the background, from computer science, of where reliable knowledge comes from, so that they can be good and socially mindful users, while at the same time having the skills and the knowledge to create good opportunities for themselves in the labor market.
Second, we want all of the students to be aware of what types of skills are going to be in demand in the labor market. If students don't know which types of jobs are likely to be eliminated and which types of skills employers, and the community in general, are going to demand, that's not going to be good for them. The evidence from the U.S. is that certain jobs are not going to be hiring as much: very routine knowledge work, for example, like collating information or account-keeping. I think we have to prepare the students for that.
On the other hand, we know that a lot of creative tasks, both in the entertainment industry and the knowledge industry, are still going to be in demand. We also know employers want much more flexibility and a lot of social and communication skills, because humans are still going to want human interaction.
So I think that knowledge has to be imparted to the students, and they need to get ready for that new labor market. Finally, a small minority of students will have the aspirations and skills to be at the cutting edge as engineers or designers or computer scientists. We need to make sure that they get the necessary background to be competitive with students anywhere, whether from China, the UK, or the U.S., who want to jump into that race as well. But here it is also very important to provide a holistic picture, so that when they become users of the technology, employers who depend on it, or designers of it, they know what the socially responsible and ethical choices are and where the pitfalls lie.
And I think that requires something much broader than technical education.
“Many people within the tech world are still working on things that do not leverage the skills and the importance of teachers.”
Raffi: A decade or so ago, you wrote a paper arguing that the internet has the ability to create superstar teachers. But you also posited that we needed to create a system to equalize that again.
Daron: Absolutely. Is that going to happen again with AI? I think it is. How do we stop it this time? It happened a little bit with massive open online courses and things like Khan Academy, which started taking away some teaching tasks. But the slight optimism of that paper came from the belief that most students need to be taught by other humans. You cannot have an education system that's purely online, and that remains true.
I think this is going to be both an opportunity and a pitfall for AI, because that's a lesson that I think the AI community has immediately forgotten. And many people within the tech world are still working on things that do not leverage the skills and the importance of teachers.
Online courses, automated grading, automated teaching, self-teaching: those have their role, but they need to be complemented by teachers. And I think what's missing is AI tools that will make teachers much more skilled, for example, at personalized education — customizing the curriculum to the real-time difficulties and needs of a diverse body of students. There is very valuable evidence that this kind of personalization helps, for example, students from low socioeconomic backgrounds.
So it is exactly the kind of tool that could create an equalizing effect. I think this is the space where we need new technologies, but also for the teachers to get ready for AI. You know, the teachers can ignore AI, but that's not going to work.
Raffi: I am very thankful for your time. One last question: What are you optimistic about?
Daron: There are two things I'm optimistic about, but extremely cautiously. One is that we have been here before. The book I wrote with Simon Johnson, “Power and Progress,” is very historical precisely because we've been here before: there have been episodes in which we've made the wrong choices and technologies have been bad for many people in society.
And there have been periods in which our institutions have adapted; people have risen up when the technologies were used against them, and better outcomes were obtained. So I am very optimistic that we have it in us to remake society in a way that can benefit from technology in a fair, equal way. I also am optimistic that the new technologies are very capable, and they have the potential to be very pro-worker, pro-humanity, pro-democracy.
So it's not true that AI can only work with authoritarians. It's not true that AI is going to destroy all jobs. That potential for helping workers is there. But I am very, very cautiously optimistic, because I think right now, both politically and economically, we are at a point, in the United States especially, where the tech industry is headed in the wrong direction and is very powerful, so changing course is very difficult.
Raffi: Daron, thank you so much for your time. Thank you for showing up and supporting TUMO. I really appreciate it.
Daron: Thank you, Raffi. It was great talking to you.
Worth the Read
“Scrolls That Survived Vesuvius Divulge Their First Word” - Purple. That was the first word decoded from the ancient scrolls that survived the eruption of Vesuvius. “Survived” is used loosely here, though: these charred scrolls are so fragile that they crumble when opened. Using CT scans along with some machine learning techniques, a few researchers have been able to start reading the scrolls. Here’s hoping they say something interesting!
“Scoop: AI Executive Order Expected Monday” - The White House is apparently holding an event on "safe, secure and trustworthy artificial intelligence” on Monday. Perhaps this will coincide with the launch of a job board to bring AI talent into government. The White House has been leaning on voluntary commitments as its AI regulatory framework, which is akin to people grading their own homework, and Congress is quite busy, with very few days left in session and a budget deadline looming. So, I’m excited to see if one branch of government can actually do something in the AI space.
“The Future of AI Is GOMA” - There is a shift in the tech company landscape from MANGA (Meta, Amazon, Netflix, Google, Apple) to GOMA (Google, OpenAI, Microsoft, Anthropic). Yes, these are the new companies to be tracking.
“23andMe User Data Stolen in Targeted Attack on Ashkenazi Jews” - I get very wary of biometric and genetic data being stored on companies’ servers. Genetic data is one of those things that you can’t change! Here, attackers used passwords leaked in other breaches to break into a bunch of users’ accounts, downloaded their genetic information, and posted it all online. Please be very careful with this kind of data. If it is stored somewhere, really consider two-factor authentication. But, honestly, step back and ask: do I really need somebody else to have this? And read the privacy policies, too, to understand how and where your unique data may flow.
“Kids on Roblox Are Hosting Protests for Palestine” - Politics completely aside, it is fascinating to watch kids use online spaces as a way to organize and find their own voices.