2025-10-29: Some confirmations from OpenAI
OpenAI's Lukasz Kaiser discusses AI's future, dispelling fears of an "AI Winter" and revealing key insights into reasoning, costs, and multimodality.
Here’s today at a glance:
🗒️ Lukasz Kaiser
is one of the co-authors of the “Attention Is All You Need” paper and has been at OpenAI since June 2021, after leaving Google Brain, which he’d joined in October 2013.
Lukasz gave an interview to Jon Hernandez, a Barcelona-based YouTuber and AI enthusiast. It contained quite a bit of alpha, confirming many things that had long been suspected but never fully acknowledged by executives at the firm.
❄️ No Winter
In his opinion, there is not likely to be an AI winter for several reasons:
still very early in reasoning paradigm
lots of low hanging fruit and places they could optimize but have skipped over for now
similar to Moore’s Law, which held for four decades by exhausting each sigmoid before finding another; he believes AI will follow the same path, finding new breakthroughs when they are needed to extend the runway
He confirmed that the “predict the next token using transformers trained on the entire Internet” paradigm is dead. Yes, there will be some uplift from scaling, but the real path forward is reasoning. He points out that in “Training Verifiers to Solve Math Word Problems” (Oct 27, 2021), OpenAI had shown that naive scaling would require thousands of trillions of parameters to solve a high-school-level math dataset, and that this dataset is now solved by some of the smallest, simplest models.
He strongly disagreed with Richard Sutton: yes, predict-the-next-token LLMs are not thinking, but reasoning models are fundamentally different and are trained on a tiny amount of data compared to the entire Internet.
One API. End-to-end Voice AI Stack
Everything you need to build intelligent Voice AI, all in one API. AssemblyAI combines transcription, speaker identification, PII redaction, and LLM integration into a single, production-ready platform. No need for five separate tools: just one API to build, test, and scale powerful voice products fast.
Try it free.
⏱️ Release Speed
How quickly an innovation can be released depends on where it sits in the pipeline. Pre-training only happens once a year and takes months, so research discoveries earlier in the training pipeline are released far more sporadically than those later in the pipeline.
💰 Cost
He confirmed that they retrain models solely for cost, using GPT-4o as an example of a model that was not better than GPT-4 but much, much cheaper. He also confirmed that GPT-5 Pro executes multiple chains of thought in parallel before consolidating them into an answer, which makes it expensive.
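As a rough illustration of the parallel-chains idea (not OpenAI's actual implementation, which is undisclosed), here is a minimal sketch: sample several independent answers concurrently, then consolidate. The `sample_chain` stub stands in for a real model call, and majority voting stands in for whatever consolidation pass GPT-5 Pro uses; both are assumptions for illustration.

```python
# Illustrative sketch only: parallel chains of thought + consolidation.
# `sample_chain` is a hypothetical stand-in for one model call.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def sample_chain(question: str, seed: int) -> str:
    """Stand-in for one sampled reasoning chain's final answer.
    A real system would return a full model-generated trace here."""
    # Toy behavior: most seeds agree, a few dissent.
    return "42" if seed % 4 != 0 else "41"

def consolidate(question: str, n_chains: int = 8) -> str:
    """Run n_chains samples concurrently and keep the majority answer.
    (The real consolidation step is reportedly another model pass;
    majority vote is the simplest proxy.)"""
    with ThreadPoolExecutor(max_workers=n_chains) as pool:
        answers = list(pool.map(lambda s: sample_chain(question, s),
                                range(n_chains)))
    return Counter(answers).most_common(1)[0][0]

print(consolidate("What is 6 * 7?"))  # majority answer across 8 chains
```

The cost implication follows directly: every query pays for `n_chains` full generations plus a consolidation step, rather than one.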
⛔ Bottlenecks
He sees the main bottleneck as the ability to execute on research ideas, whether due to GPU shortages or research manpower. While models are helping with coding and scheduling experiments, the bottleneck just shifts from place to place, ultimately capped by the number of GPUs they possess.
They could run more experiments in parallel, but they are limited by GPUs, just like all the other labs.
🤖 Multimodality
GPT-5 is already natively trained on image and audio and can generate both in response, and video is coming very soon. However, he does not think much video data is relevant to tasks like mathematics. Outside of robotics and physics, he does not think the volume of video data is useful except to fill gaps, and he does not think video data used to build world models will generalize. “We live physically in this world, and that’s what is in the videos. In our heads we have a lot of different worlds, and these are represented in text. The language models already have a model of our abstract worlds.”
🤼 Competition
He does not think competition with the other labs is a big stressor. People change jobs frequently, so no secrets remain buried for long. Researchers at all the firms are working toward more powerful AI; they don’t disclose trade secrets, but they are fairly collegial.
🧑‍💻 Current Research
Lukasz’s main research focus is reasoning from arbitrary data, not just data with a verifiable correct answer.
📣 On Slop and Ads
He’s hopeful that we apply the lessons learned from social media to AI, but is worried less about AI slop than AI weapons. “Any research you do will be used in different ways, and you cannot control how it will be used, whether you personally like it or not. AI is a very capable method, and we need to come to terms with the fact that it will be used in ways that we do not want it to be used.”
Having said that: “There is a strong ethos at OpenAI, at least among the employees, and at least parts of the leadership, to [not maximize engagement for ads].”
“You know people, I think also at Anthropic and Google and certainly OpenAI, are very committed. So we’ll try to not have ads”
“The problem is optimizing for engagement... you’re optimizing for people putting their time into this digital device. I think across the labs [there is a strong commitment not to do that].”
“OpenAI is still a small company, if a lot of employees feel that [we have a certain power].”
🛒 On ChatGPT Shopping
“Now you can do shopping from ChatGPT, you can immediately buy things, but we don’t need to show you ads, but the partners decided that it’s still OK for us to take some money.”
“The deal is very much that it’s not affecting any of the shopping recommendations. It’s very hard to affect a language model to show you things like ranking. It’s an artifact of the technology... in a language model, if you post-train for something, you may get something very weird.”
“In the deal with the partners, it’s very boldly written that it will not affect anything.”
💭 On Gary Marcus, Geoff Hinton, Paul Krugman and other naysayers
“I think we’re way too busy to follow it”
📰 Summary
This has been one of the best podcasts I’ve listened to this year. We rarely get confirmation of the things floating around on social media, so this was illuminating. Anyway: no AI Winter.
🖼️ AI Artwork Of The Day
💻 One More Thing
We have sponsors (or at least one sponsor)! If you'd like to explore partnerships with Emergent Behavior, email ai@a16zstudios.com.