2025-10-15: Technological Optimism and Appropriate Fear
A commentary
🔷 Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.
Here’s today at a glance:
Jack Clark, co-founder of Anthropic, just wrote a long, flowery post arguing that it is appropriate to be fearful of AI, with quotes like:
that’s not an artificial intelligence about to go into a hard takeoff, it’s just a tool that will be put to work in our economy. It’s just a machine, and machines are things we master. … But make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine.
— Jack Clark
His baseline view can be summarized as:
a) AGI is possible and will happen, probably in the near future. This puts him in the technology-optimist camp, as opposed to the pessimists who either do not believe it is possible at all (why worry then?) or believe it will take a hundred years or longer.
b) We should be fearful lest we tip the AI development process into recursive self-improvement while it is seeded with a faulty reward system (e.g. "create world peace" achieved by nuking everyone).
He ends with a proposed solution: listen to people, and address their fears with data about where things are going. He believes the model companies should be encouraged, or forced, to do this.
Right now, I feel that our best shot at getting this right is to go and tell far more people beyond these venues what we’re worried about. And then ask them how they feel, listen, and compose some policy solution out of it. … In listening to people, we can develop a better understanding of what information gives us all more agency over how this goes. There will surely be some crisis. We must be ready to meet that moment both with policy ideas, and with a pre-existing transparency regime which has been built by listening and responding to people.
— Jack Clark
🤗 Thank you to our sponsor
Building Voice AI? You shouldn’t overpay for transcription.
AssemblyAI is the speech-to-text API behind apps like Granola, Cluely, and Dovetail. With pay-as-you-go pricing from just $0.15/hr, no commitments, and no minimums, you can scale your Voice AI apps without breaking the budget. Unlimited concurrency, multilingual support, and continuous model improvements make building accurate, reliable apps fast and easy.
Start building for free or explore the Playground today.
🗣️ Response
How does one respond to "Hey, we might blow up the world… but we should listen, so people can steer us to the outcome they want"?
I take a different viewpoint: most people do not yet understand what is coming, so they will not be able to express their desires except as anchored in the current status quo. I had dinner with a friend of mine who manages a large amount of money, and one of the things he said stuck with me: "I don’t really want change. I’ve done extremely well for myself, and a future where I’m just the same as everyone else, dumb in comparison to a superintelligence, doesn’t appeal to me."
Helpful. Harmless. Honest.
It sounds great, but it is still anchored in what I call "Northern California progressive humanism". Take, for example, a scenario where ChatGPT is advising a kid in Pakistan who starts to identify as gay. Will ChatGPT, under any circumstances, tell him "Being gay is wrong in the eyes of God. You will go to hell," i.e. the prevailing morality of the society he lives in? Or will it say "There is nothing wrong with that, but you may be in danger where you are if you tell people. Here’s how you can be true to yourself while remaining safe…"?
There are millions of these scenarios, far short of existential, that the researchers and labs do not speak about, because the answer is blindingly obvious to them.
Here’s another: "The Gutenberg press democratized the distribution of information. It led directly to the Protestant Reformation and the Wars of Religion of the 16th and 17th centuries, as the balance of power between the Vatican and society at large shifted."
And now we have an even greater democratization of information. What chaos, civil wars, and hot wars will this lead to?
Note that all the frontier labs are aware of this. But it does not concern them. The crisis will not be existential in the sense they care about ("AIs killing humanity"); humans killing humans, or AI killing some humans, is just the cost of doing business.
I am truly astonished sometimes at the blindness of the labs to all the craziness that humans are going to deploy these tools for, far short of ASI.
Here’s another. Some refugee kid is going to ask the AI who caused his misery, and how he can bring that person to justice. The leaders of the death squads of the 20th century, in Argentina, Indonesia, and many places in Africa, are now sedate upper-middle-class professionals with grandkids. They and their offspring are trackable now, and will be even more so in the future.
Here’s yet another. I am quite the aficionado of forensic accounting, a difficult but rewarding puzzle when applied to real problems. I often ask myself: given access to the intellectual resources and the stock of historical data that still exists, how far back can one trace corruption? Could we envision a future where, for example, all prior illegal financial activity is traced, the statute of limitations lifted, and the current fortunes of those who broke the law in the past seized? The great-grandkids of the bootleggers who became the owners of Diageo and Seagram? The Colombian nepo babies living in Florida off the spoils of the white powder trade?
Here’s another. There have long been rumors of China transplanting prisoners’ organs into paying customers. Suppose that, thanks to improved genetic sensor technology, it becomes possible to detect who is carrying one of these organs as they pass through a sniffer at an international border. What happens then?
We live in a world of unstable truths. Many, many things are hidden from view because we do not have the technology to know them. As they emerge into the light, what shall we do?
🖼️ AI Artwork Of The Day
💻 One More Thing
We have sponsors (or at least one sponsor)! If you'd like to explore partnerships with Emergent Behavior, email ai@a16zstudios.com.