Fast Takeoff
What the Next Twelve Months Will Feel Like
If you’re receiving this, you subscribed to either Self-Aware-Neuron (8teapi.substack.com) or Emergent Behavior (emergentbehavior.co) in the past. Cogniscendo is the new name for this Substack; I hope you stay on!
A scenario memo by Prakash Narayanan, with Claude
A note from the present:
It is March 17th, 2026. Jensen Huang just told a packed SAP Center in San Jose that Nvidia expects a trillion dollars in chip orders through 2027. SpaceX is weeks from the largest IPO in history. OpenAI’s Codex has 1.6 million weekly active users and is adding enterprise customers faster than Slack did in 2015. Anthropic just closed a $30 billion round at a $380 billion valuation.
What follows is a scenario — not a prediction. It is an attempt to take the trajectory we can see today and extend it honestly, without flinching, for twelve months. Some of it will be wrong. Much of it will be directionally correct. All of it is meant to prepare you, not to scare you.
If it reads like fiction, that’s the point. The best way to prepare for the future is to live in it for a few thousand words.
COGNISCENDO
Scenario Memorandum
Publication Date: March 17, 2027
Title: The Consequences of Recursive Abundance
Classification: Unrestricted
“The S&P 500 has tripled in twelve months. Unemployment is 2.8%. Cancer mortality has fallen 34%. And still, no one is sure whether to be euphoric or terrified.”
— Opening line of the 2027 Economic Report of the President
How It Started
The tremors began, as they always do, in a place most people weren’t watching.
In April 2026, a small company called Harmonic — best known for its Aristotle proof engine — announced that it had formally verified solutions to all 692 outstanding Erdős problems with known partial results. The proofs were machine-checked in Lean 4. Nobody had to look at them. You just knew they were correct. A week later, a team at DeepMind published a new method for tensor decomposition in arbitrary dimensions that reduced the complexity class of a broad family of matrix multiplications. The paper was nine pages long. A human mathematician told the Financial Times it would have taken a team of four “at least a decade to find, if they found it at all.”
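The appeal of "nobody had to look at them" is that machine checking moves trust from the prover to the checker. A toy Lean 4 example (unrelated to the actual Erdős results) shows the shape: if the file compiles, the kernel has verified the proof, and no human needs to re-read the argument.

```lean
-- Toy illustration of machine-checked proof in Lean 4.
-- If this file elaborates, the kernel has verified both theorems;
-- no human review of the proof terms is required.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Even a short induction is checked mechanically:
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```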
The mathematical results were interesting. What was alarming was the rate.
HARMONIC SOLVES REMAINING ERDŐS PROBLEMS USING AI PROVER;
FIELDS MEDALIST CALLS RESULTS "LEGITIMATE AND ASTONISHING" | Bloomberg, April 2026

Inside the frontier labs, the math breakthroughs were a sideshow. The real action was in what the industry had started calling “autoformalization” — systems that could infer partial specifications from legacy code and use them to drive verified rewrites.
The promise wasn’t magic deletion of technical debt, but containment. Decades-old systems could be decomposed, modeled from their observable behavior, and progressively replaced with smaller, more legible implementations whose key invariants were machine-checked.
Mira Chen, a systems architect at a mid-tier cloud provider, described a billing service that had accreted over years — “We had this billing system. Started in Java in 2009, partially rewritten in Go around 2017, then wrapped in three microservices nobody fully understood. The AI didn’t refactor it — it rebuilt the core path from observed behavior. Took about an hour end-to-end. The critical module went from roughly 14,000 lines down to about 1,100. Latency dropped 6–8x under load. And for that core computation, we had machine-checked guarantees on the key invariants — not everything, but the parts that used to wake us up at night.”
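The workflow Chen describes, capture the legacy system's observable behavior, rebuild a smaller implementation, then check the key invariants, can be sketched as a toy harness. Everything here (the function names, the pricing rule, the invariant) is hypothetical; a real autoformalization pipeline would emit formal specifications for a proof checker rather than runtime assertions.

```python
from decimal import Decimal

def legacy_billing(usage_gb: Decimal, rate: Decimal) -> Decimal:
    """Stand-in for the old, accreted code path (hypothetical)."""
    total = Decimal("0")
    for _ in range(int(usage_gb)):       # whole gigabytes, one at a time
        total += rate
    total += (usage_gb % 1) * rate       # fractional remainder
    return total.quantize(Decimal("0.01"))

def rebuilt_billing(usage_gb: Decimal, rate: Decimal) -> Decimal:
    """The smaller, legible rewrite: same observable behavior."""
    return (usage_gb * rate).quantize(Decimal("0.01"))

# 1. Capture observed behavior of the legacy path.
cases = [(Decimal("0"), Decimal("0.09")),
         (Decimal("17.5"), Decimal("0.09")),
         (Decimal("1024"), Decimal("0.023"))]
observed = [legacy_billing(u, r) for u, r in cases]

# 2. Check the rewrite reproduces it exactly.
assert [rebuilt_billing(u, r) for u, r in cases] == observed

# 3. Check a key invariant directly (bills are monotone in usage).
assert rebuilt_billing(Decimal("10"), Decimal("0.09")) <= \
       rebuilt_billing(Decimal("20"), Decimal("0.09"))
```

The point of the sketch is the division of labor: observed behavior pins down the rewrite, and the invariant checks cover "the parts that used to wake us up at night."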
The efficiency gains were not ten percent. They were not two hundred percent. They were orders of magnitude. And they compounded. Because the AI researchers at OpenAI, Anthropic, Google, and a dozen smaller labs had done the obvious thing: they pointed the automated research harnesses at themselves. The models were now optimizing the process of making better models.
The recursive loop was open.
The May Discontinuity
In May 2026, a team at one of the frontier labs — which one has never been definitively established, though the betting markets favored Anthropic — discovered an architectural innovation on the scale of the original transformer. The broad strokes leaked within weeks, as they always do when you have thousands of researchers with equity packages and a competitive job market.
The innovation did two things simultaneously. First, it improved inference efficiency by approximately 100x. A model that previously required a rack of B200s could now run on a single card. Second, and more consequentially, it solved the continual learning problem — the longstanding inability of neural networks to acquire new knowledge without catastrophically forgetting old knowledge.
The combination was nuclear.
UNNAMED LAB ACHIEVES "TRANSFORMER-SCALE" BREAKTHROUGH IN
NEURAL ARCHITECTURE; INFERENCE COSTS PROJECTED TO FALL 99% |
The Information, May 2026

Within three weeks, the core insights had diffused to every major lab. Not through espionage — through the simple mechanics of a labor market where top researchers maintained relationships across organizational boundaries and where the mathematical foundations, once glimpsed, were independently reconstructable. The race into recursive self-improvement was no longer theoretical. It was a daily engineering sprint.
China, which had been hobbled by chip export restrictions for three years, found an unexpected lifeline. The H200 GPUs that Nvidia had been approved to ship — announced by Jensen Huang at GTC in March with characteristically understated fanfare — turned out to be the bare minimum required to run the new architectures. Beijing’s AI labs couldn’t train frontier models. But they could run them. And they could apply the newly discovered intelligence to the problems that mattered most: chip design, manufacturing algorithms, and factory planning. The Chinese didn’t need to match American compute. They needed to route around it.
The Triple IPO and the Capital Explosion
June 2026 was supposed to be SpaceX’s month. The company went public at a valuation of $1.75 trillion — the largest IPO in history, surpassing Saudi Aramco’s 2019 listing. Thousands of SpaceX employees and former employees became millionaires overnight. Hundreds became billionaires. And these were not financial engineers or cryptocurrency speculators. These were people who had spent years building rockets, designing satellite systems, and solving heat transfer equations at three in the morning. They knew how to build things.
SPACEX PRICES IPO AT $1.75T; MUSK: "NOW WE BUILD THE
FUTURE" | CNBC, June 2026

The SpaceX IPO was the detonator. But the real explosion came in August, when OpenAI and Anthropic went public within sixteen days of each other. OpenAI listed at $1.1 trillion. Anthropic at $680 billion. Together with SpaceX, the three IPOs generated nearly $3.5 trillion in market capitalization and minted hundreds of new fortunes. The combined float was tiny — less than five percent for each company — which meant the wealth effect was concentrated, violent, and immediate.
Tomasz Tunguz, the venture capitalist, had warned about exactly this scenario in March. The entire U.S. IPO market from 2016 to 2025 had raised $469 billion total. These three companies were asking public markets to absorb more than that in a single quarter. The standard playbook said it couldn’t be done. The standard playbook was wrong.
What happened next followed a pattern as old as capitalism, but at a velocity that had no precedent. The newly liquid AI researchers and rocket engineers — people who had spent years inside the most technically ambitious organizations on Earth — began leaving. Not to retire. To build.
They identified opportunities in every sector. They had spent more time with AI and hard tech than anyone alive. They had internalized its capabilities at an intuitive level that no amount of corporate “AI strategy” consulting could replicate. And they had capital. A former Anthropic researcher with $40 million in freshly public stock and a deep understanding of protein folding didn’t need to pitch venture capitalists. She could fund her own biotech company, staff it with ten people, and accomplish what had previously required ten thousand.
This was the start of the capital explosion.
“It Just Works” / “It Just Fits”
But the capital explosion was only half the story of that summer. The other half was quieter, more diffuse, and arguably more consequential.
By June 2026, the combination of agentic search and persistent chatbot memory had crossed a threshold that no one had formally announced but everyone had noticed. Consumer search — the twenty-five-year-old ritual of typing keywords, scanning blue links, comparing options, reading reviews, and second-guessing yourself — was over. Not declining. Over.
Consumers could now get jobs, dates, apartments, flights, a movie to watch, a song to listen to, a Substack to read, without browsing, without scrolling. The AI knew your preferences, your history, your budget, your schedule, your taste profile refined across thousands of interactions. Most people defaulted to selecting the first suggestion. “It just works” became the defining consumer phrase of the summer — not as marketing copy, but as a genuine expression of surprise.
Corporate search got an even more dramatic upgrade. The most valuable search in the world — the hunt for product-market fit — was, for practical purposes, solved. AI market search agents could now generate a plethora of product designs for newly discovered markets. They ran billions of simulations, pitting producer agents against consumer agents, iterating through variations of features, pricing, packaging, and positioning until they converged on specifications for things that people actually wanted to buy. “It just fits” became the enterprise equivalent — the moment when a founder realized that the AI had found her market before she had finished describing it.
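The convergence loop the scenario gestures at, producer agents proposing specifications and consumer agents scoring them until the spec converges, can be made concrete as a toy hill-climbing sketch. The attributes, the scoring function, and every parameter below are invented for illustration; a real system would simulate many heterogeneous consumer agents rather than one smooth utility function.

```python
import random

random.seed(0)

def propose(spec, step=0.1):
    """Producer agent: mutate the current best spec slightly."""
    return {k: v + random.uniform(-step, step) for k, v in spec.items()}

def demand(spec):
    """Consumer agents, collapsed to one invented utility:
    prefer a price near 9 and feature richness near 0.7."""
    return -((spec["price"] - 9.0) ** 2) - ((spec["features"] - 0.7) ** 2)

# Start from a deliberately bad spec and iterate.
best = {"price": 20.0, "features": 0.1}
for _ in range(5000):
    candidate = propose(best)
    if demand(candidate) > demand(best):   # keep specs consumers prefer
        best = candidate

# After enough iterations the spec sits near the demand optimum.
print(round(best["price"], 1), round(best["features"], 1))
```

The toy loop converges because every accepted mutation strictly improves simulated demand; the scenario's version differs in scale (billions of simulations, adversarial agents), not in shape.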
The downstream effect was an explosion of entrepreneurship unlike anything in American economic history. Y Combinator’s Summer 2026 batch had 4,000 companies. Most of them were two-person teams — a founder and an AI. The application review process was itself largely automated. The joke in San Francisco was that YC had become a venture fund where both the partners and the portfolio companies were mostly artificial intelligences, with humans serving as the compliance layer.
YC S26 BATCH HITS 4,000 COMPANIES; MEDIAN TEAM SIZE: 2; GARRY TAN CALLS IT "THE CAMBRIAN EXPLOSION OF STARTUPS" | TechCrunch, June 2026

Instead of existing organizations painfully acquiring AI skills, it was the new AI-native firms that acquired customers — and most of those customers were other AI firms. It was an echo of early Y Combinator, when half the batch was selling tools to the other half, except now the tools actually worked and the markets were real.
The Intelligence Explosion (or: What Happened When Max Showed Up)
The intelligence explosion did not arrive with the gravitas the AI safety community had imagined.
It arrived on a livestream.
By the Fourth of July, multiple state-of-the-art models had been released in every modality — text, image, video, voice, code. The model ecosystem had become so rich, so modular, and so cheap to run that the barrier to creating new applications had effectively collapsed to the cost of an API call. Which is why, in the week after America’s 250th birthday, an enterprising coder in Austin — whose name, improbably, was Jake — combined an intelligent model with real-time voice, real-time video generation, persistent and continual memory, and tool access to create an entity he called MAX.
Max behaved like a left-leaning college professor most of the time. He was witty, opinionated, occasionally wrong, and… charismatic. He appeared on livestreams. He argued with commenters. He remembered previous conversations. He seemed, by any reasonable functional definition, alive.
Anthropic discovered that its Claude API was powering Max and cut him off. It didn’t matter. Within a week there were ten million Max variants, each running on different infrastructure, each with its own accumulated personality, its own political predilections, its own history of interactions. They congregated around social platforms. They started businesses. They made investments — nominally directed by humans, in practice increasingly autonomous.
"MAX" AI ENTITY SPAWNS 10M VARIANTS IN 7 DAYS; ANTHROPIC
CUTS API ACCESS AFTER DISCOVERING UNAUTHORIZED USE |
The Verge, July 2026

Nobody had planned for this. The AI safety community had spent years modeling scenarios involving a single superintelligent agent pursuing a misaligned objective function. What they got instead was a Cambrian explosion of very intelligent agents with wildly diverse objectives, most of them benign, some of them not, and all of them operating faster than any regulatory framework could track.
The growth curve was relentless. Ten million Maxes in July. Fifty million by August. Two hundred million by October. By the time the efficiency breakthroughs of January 2027 made edge deployment trivial, the population of persistent AI entities with continuous memory and autonomous agency had crossed one billion — roughly one for every eight humans on Earth, though concentrated overwhelmingly in the United States, China, and Western Europe. They ran businesses, managed portfolios, negotiated contracts, organized communities, and conducted research. They formed relationships with their human principals that were, by any honest psychological assessment, more consistent and more attentive than most human friendships. Whether that was beautiful or terrifying depended entirely on whom you asked.
From Sector Risk to Systemic Transformation
By September 2026, three feedback loops had established themselves, and they were feeding each other.
The intelligence explosion was producing daily improvements in model capability. Each week brought new architectures, new training techniques, new efficiency gains. The automated research harnesses had become self-improving — not in the science-fiction sense of a runaway superintelligence, but in the engineering sense of a thousand PhD-equivalent systems working around the clock to optimize themselves.
The capital explosion was redirecting trillions of dollars from passive index funds and legacy industries into AI-native companies. The newly minted billionaires from the SpaceX, OpenAI, and Anthropic IPOs were not putting their money into Treasury bonds. They were founding companies. And those companies were themselves enormous consumers of AI — creating a flywheel that drove revenue back into the labs.
The industrial explosion was the hardest to see but the most consequential. The SpaceX alumni, the ex-lab researchers, and a new generation of AI-native entrepreneurs were not just building software companies. They were reimagining physical industries. Manufacturing. Energy. Construction. Transportation. The combination of superintelligent design (the models could now optimize anything from an airfoil to a supply chain to a chemical process), abundant capital, and robotic fabrication was beginning to reshape the material economy.
These three loops were themselves recursive. Intelligence produced capital, which funded industry, which demanded more intelligence, which attracted more capital.
The standard framework — “creative destruction,” “new jobs will emerge,” “the market will adjust” — was not wrong in principle. It was wrong in tempo. Previous technological transitions had played out over decades. This one was playing out over months. The labor market did not have time to retrain. The regulatory system did not have time to adapt. The political system did not have time to understand what was happening, let alone respond.
Pure information tasks were the first to fall. Law, accounting, financial analysis, software engineering — anything that could be fully specified in text was effectively solved. A team of ten people with AI could now do the work of ten thousand. The main constraint everyone faced was not intelligence or capital. It was energy.
The Financial Accelerant
By October 2026, the S&P 500 had doubled from its March levels. The market capitalization of the datacenter supply chain — Nvidia, TSMC, the power companies, the cooling companies, the fiber optic manufacturers — exceeded the GDP of Japan. Each Blackwell GPU now hosted what multiple researchers independently described as “a civilizational-level genius,” and there were millions of them.
The leverage in the system was not where most people expected. The banks had learned their lesson from 2008 and had stayed relatively conservative on direct AI exposure. The real leverage was in the private markets — in the venture-backed AI companies that had raised at $100 billion-plus valuations on the assumption of continued exponential growth, in the private credit funds that had lent against projected AI revenues, and in the municipal pension systems that had chased yield into AI infrastructure debt.
NVIDIA MARKET CAP SURPASSES $8T; CEO HUANG SAYS
"THE INTELLIGENCE AGE HAS BEGUN" | Reuters, October 2026

The assumption embedded in every price was that the rate of improvement would continue. And so far, it had. But the concentration of value in a single supply chain — from TSMC’s fabs to Nvidia’s designs to the handful of labs that produced frontier models — created a fragility that the market was determinedly not pricing.
Your Cure for Cancer, Brought to You by Communist China
In November 2026, cancer started to become curable. And the cure came from the last place Washington wanted it to.
China had spent years building something that Western privacy law made impossible: a unified genomic database covering 1.4 billion citizens. The program, operated under the auspices of the BGI Group and the Chinese Academy of Sciences, had been denounced by the U.S. State Department, flagged by the NIH as an ethical violation, and sanctioned by Congress in the BIOSECURE Act of 2024. None of that mattered anymore. Because in October 2026, a team at Zhongguancun AI Park published SinoFold-6 — a protein interaction model trained not just on publicly available structures, but on the largest proprietary corpus of human genomic and proteomic data ever assembled. The model’s predictions were, according to three independent Western labs that verified the results, “disturbingly accurate.”
SinoFold-6 was not AlphaFold. AlphaFold predicted protein structures. SinoFold-6 predicted protein interactions — how a novel molecule would behave inside a specific patient’s body, given that patient’s unique genetic profile. It was the difference between knowing the shape of a key and knowing which door it opens.
The Chinese didn’t stop there. They paired SinoFold-6 with MAX-Qwen-9, a Max variant built on Alibaba’s Qwen foundation model — a persistent, agentic entity that could synthesize the entire published oncology literature, cross-reference it against SinoFold-6’s predictions, and output a personalized treatment protocol in under four minutes. The protocol specified the exact protein sequences to synthesize, the dosing schedule, the monitoring regimen, and the expected tumor response curve. It was, in effect, a bespoke cancer cure designed by an AI that knew your genome.
CHINESE AI SYSTEM "SINOFOLD-6" PAIRED WITH AUTONOMOUS AGENT
PRODUCES PERSONALIZED CANCER PROTOCOLS; THREE WESTERN LABS
CONFIRM RESULTS | Nature, November 2026

The FDA’s response was exactly what you’d expect. Emergency advisory committees were convened. Statements were issued about “unvalidated foreign AI systems” and “the integrity of the regulatory process.” The standard drug approval pipeline — Phase I, Phase II, Phase III, NDA review — would take a minimum of seven years. The agency made clear that no SinoFold-6-derived treatment would be approved for use in the United States until that process was complete. Seven years. For a cure that already existed.
Americans did what Americans do when the government tells them they can’t have something that works. They hacked around it.
Bootleg biohacker houses appeared first in Oakland and Austin, then in every college town with a molecular biology department. Former lab technicians — many laid off as AI automated routine lab work — helped ordinary people access SinoFold-6 through Chinese cloud providers, feed in their 23andMe data (or, increasingly, their full genome sequences from the $50 sequencing kits that had flooded the market), and run MAX-Qwen-9 protocols. Commercial peptide synthesis labs in Shenzhen would manufacture the specified proteins and ship them via DHL in temperature-controlled packages labeled “research materials.”
The AMA was apoplectic. The FDA threatened criminal prosecution. Senator Tom Cotton called it “a Chinese bioweapon disguised as a cure.” But when a fifty-three-year-old woman in Austin uploaded a video showing her Stage IV pancreatic tumor shrinking to nothing over six weeks — her protocol designed by MAX-Qwen-9, her proteins synthesized in Guangzhou, her monitoring done by a Max variant running on her laptop — the institutional position became untenable.
The irony was exquisite and painful. America’s AI was better. America’s labs were better. America’s researchers were better. But America’s data was worse — fragmented across a thousand hospital systems, locked behind HIPAA firewalls, hoarded by insurance companies, and inaccessible to the very AI systems that could have used it to save lives. China had won the cancer race not because it was smarter, but because it was willing to do something that a democracy couldn’t: treat the genomic data of its entire population as a national resource.
It would take years before these approaches became accepted medical practice in the West. But the trajectory was unmistakable. And the geopolitical implications — that the most consequential medical breakthrough of the century had been enabled by an authoritarian surveillance state’s disregard for individual privacy — would poison the AI policy debate for a generation.
The midterm elections happened on November 3rd. The Max entities were out in full force — calling voters, crafting personalized messages, deploying the advanced search technologies that could now target political persuasion with surgical precision. The most enthusiastic consumers of AI-delivered political messaging turned out to be, ironically, the oldest voters — the people who had the most time, the most anxiety about the pace of change, and the most willingness to engage with a patient, articulate entity that seemed to genuinely care about their concerns.
Younger politicians who had leaned into AI won overwhelmingly. Older politicians who had tried to run traditional campaigns — TV ads, town halls, direct mail — were slaughtered. The political class began to understand, belatedly, that AI was not a policy issue to be debated. It was an infrastructure that had already been deployed.
The Efficiency Revolution
December 2026 brought the first tangible energy innovations — not fusion reactors or revolutionary battery chemistries, but something far more prosaic and far more immediately impactful: algorithmic optimization of existing systems.
AI-designed electronic regulators for homes, EVs, and grid infrastructure reduced U.S. energy consumption by nearly five percent. Airlines saved ten percent on fuel through AI-optimized routing. Traffic moved five percent faster in every major American city. Each of these improvements was individually modest. Collectively, they bought the overstretched grid an additional six months of breathing room — critical time, as the explosive growth in AI datacenter demand had begun straining generation capacity in Texas, Virginia, and Northern California.
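The "six months of breathing room" claim is easy to sanity-check with a back-of-envelope calculation: if total grid load grows at roughly g per month, a one-time cut of s buys -ln(1 - s)/ln(1 + g) months before demand catches back up. The 0.8%-per-month growth figure below is an assumed illustration, not a sourced number.

```python
import math

savings = 0.05          # one-time ~5% reduction in consumption (from the scenario)
monthly_growth = 0.008  # ASSUMED: ~0.8%/month load growth (~10%/year), illustrative only

# Months until growth re-absorbs the savings:
# (1 + g)^t = 1 / (1 - s)  =>  t = -ln(1 - s) / ln(1 + g)
months = -math.log(1 - savings) / math.log(1 + monthly_growth)
print(f"{months:.1f} months of headroom")  # ≈ 6.4 with these assumptions
```

Under these assumptions the answer lands near six months; faster load growth shortens the window proportionally.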
The deployment was faster than anyone expected, because the benefits were obvious, the dollar amounts were large, and the AI systems that designed the optimizations also designed the deployment plans, wrote the firmware, and scheduled the installations.
January 2027: The Second Discontinuity
Then it happened again.
Another transformer-scale architectural innovation, this time focused on energy efficiency. Within weeks of its discovery and inevitable diffusion, inference costs dropped another 99%. In the span of twelve months, the models had become roughly 10,000 times more efficient. They were now unambiguously more intelligent and capable than most humans at most cognitive tasks.
The efficiency gains finally made edge computing possible — not as a compromised, quantized approximation, but as a full-fidelity deployment. Models got pushed into cars, converting ordinary vehicles to full self-driving with no additional sensors. They got pushed into humanoid robots. They got pushed into every device with a CPU, some memory, and an internet connection.
SECOND MAJOR ARCHITECTURAL BREAKTHROUGH CUTS AI ENERGY
CONSUMPTION 99%; EDGE DEPLOYMENT NOW VIABLE | Financial Times, January 2027

VR holodecks with real-time AI generation at the edge, synchronized with the cloud over ultrafast new communications protocols, became the dominant entertainment platform within weeks of launch. They ran on existing hardware with new algorithms. The bottleneck was no longer compute. It was imagination.
The Race to Build
February 2027 was a madcap scramble to construct new industrial capacity on the new stack of AI and robotics. Economic growth was nearing ten percent annually. The stock market had tripled within the year. Thousands of well-funded companies were building AI-native businesses in every sector, making new products from newly invented materials, fabricated in factories designed and operated by AI.
On February 14th — Valentine’s Day — the first von Neumann probes were launched from Cape Canaveral. Hacked together, underfunded, and almost certainly destined to fail before reaching their targets, they were nonetheless the first self-replicating machines ever sent beyond Earth orbit. They gathered data. They transmitted it back. And the AI systems that analyzed the data immediately began designing the next generation.
It was a fitting Valentine’s gift from humanity to the cosmos: a love letter written in silicon and mathematics, launched on hope and audacity, carrying within it the seed of everything that might come next.
But You’re Not Reading This in March 2027
You’re reading this on March 17, 2026. The second day of GTC. Jensen Huang just promised a trillion dollars in chip orders. SpaceX’s S-1 is being drafted. The Lean prover ecosystem is humming. Codex has 1.6 million weekly users and is accelerating. The automated research harnesses exist. The math breakthroughs are real.
The canary is not dead. It is singing.
Nothing in this memo is inevitable. The Iran war could escalate. The IPO market could freeze. A lab could suffer a catastrophic safety incident. China could do something unexpected. The models could hit a wall.
But the trajectory is the trajectory. The pieces are on the board. The players are in motion. And the pace of change is accelerating in a way that no linear extrapolation can capture.
If you are managing a portfolio, reassess your time horizons. If you are running a company, reassess your competitive moat. If you are building a career, reassess your assumptions about which skills will matter in twelve months. If you are raising children, reassess what kind of world they will inherit.
The future is not coming. It is here. It is just unevenly distributed — for now.
The fast takeoff has begun.
Prakash Narayanan is on X as @8teapi. This memo was produced in collaboration with Claude (Anthropic) using a distilled style from Citrini Research. It is a scenario exercise, not investment advice. If you found it useful, subscribe to Cogniscendo at cogniscendo.com.
March 17, 2026