AI Acceleration Has Natural Speed Bumps

There’s a long-standing prediction about AI that drives much of how Silicon Valley thinks. It assumes that AI improvement happens exponentially or even super-exponentially (the exponent itself grows over time). This is based on the notion of AI self-improvement: once AI can code super well, it can make better versions of AI, and that improvement loop will move at the speed of silicon, so they say. The result is a curve that goes nearly vertical, hitting “takeoff,” where AI capabilities explode past human levels so fast we can barely comprehend what’s happening.
Futurist Ray Kurzweil popularized this vision with smooth exponential curves accelerating toward a “singularity.” Some AI researchers predict a “hard takeoff” where the gap from human-level to godlike superintelligence happens in weeks or days.
This notion didn’t come out of nowhere. AI improving itself is exactly what happened with the later versions of DeepMind’s Go-playing AI. Each new version was trained by playing against its older self, and since the AI knows whether the outcome is better or worse (whether it won the game), the entire self-improvement loop could be automated. In an AI takeoff, though, it’s not just that neural network weights get rejiggered; the architecture and mathematical approach of the network and its training are iterated too.
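To make that loop concrete, here’s a minimal, runnable sketch of a self-play promotion loop in the spirit of AlphaGo Zero. Everything in it is a stand-in invented for illustration: “skill” is a single number rather than a neural network, and the training and match functions are toys, not anyone’s actual system.

```python
import random

def train_candidate(skill):
    """Hypothetical training step: the challenger's skill is a noisy
    perturbation of the champion's (a real system would retrain a
    network on the champion's games)."""
    return skill + random.gauss(0.1, 0.3)

def challenger_wins(challenger_skill, champion_skill):
    """One match; win probability follows an Elo-style curve on the
    skill gap, so the game itself acts as the judge."""
    gap = challenger_skill - champion_skill
    return random.random() < 1 / (1 + 10 ** (-gap))

champion = 0.0
for generation in range(20):
    challenger = train_candidate(champion)
    wins = sum(challenger_wins(challenger, champion) for _ in range(400))
    if wins / 400 >= 0.55:   # promote only on a clear, automatic verdict
        champion = challenger
print(f"champion skill after 20 generations: {champion:.2f}")
```

The essential property is that the verdict (did the challenger win enough games?) is computed by the game itself, so no human needs to be in the loop.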
This prediction shapes a lot of the AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) talk. It’s why some people are building bunkers. It’s why others are racing to build AI as fast as possible, thinking whoever gets there first controls the future. It’s why some policymakers want to shut down AI research entirely, while others think regulation is futile. The doomsday scenarios and the utopia promises both seem to rest on this curve—smooth, accelerating, inexorable. (To be clear, I don’t think utopia is coming. Doomsday is more likely than utopia, but somewhere in between is most likely.)
AI has been improving at exponential rates, adding fodder for the takeoff theorists. Even when scaling laws pointed to dramatically higher data and compute needs for future LLMs, new innovations offered other paths to improvement.
AI will continue to improve quickly in the near term, but there are natural limits beyond which progress won’t be smooth or inevitable. AI’s impact will be more jagged than AI’s abilities, with rapid advances in some domains and sluggish response in others. Understanding the constraints matters for our psychological safety, for strategic planning, and for grasping what choices we actually have versus what’s predetermined.
Existing Knowledge Is Incomplete
AI has genuine intelligence. AI doing math can now compete near the top level of the International Mathematical Olympiad, and occasionally comes up with an innovation or a new proof. Coding AIs score highly in competitive programming contests and are advancing rapidly. The takeoff crowd argues this accelerates as AI gets smarter: an AI that codes well helps build better AI, which codes even better. I think they’re probably right, and AI will build better AIs.
But existing knowledge is fundamentally incomplete in ways that matter for self-improvement.
A lot of the knowledge needed to decide on improvements has never been captured. For example, there’s publication bias. We document what works while thousands of failed experiments sit in lab notebooks, and failed initiatives get buried instead of highlighted. Whatever the knowledge boundary (experiments that show X vs. those that don’t), an intelligence needs examples on both the success side and the failure side to learn the key patterns.
Much expert knowledge is tacit. Master craftspeople, elite athletes, and experienced clinicians operate on intuitions they can’t articulate. This implicit, often embodied knowledge rarely makes it into text AI can learn from. When it does, words can’t do it justice.
Then there’s disputed knowledge with no way to arbitrate. Competing economic theories, contradictory nutritional studies, debates about educational methods—the literature contains confident claims that directly contradict each other. An AI can’t just “figure out” which is correct without new experiments.
The bigger constraint is that much new knowledge can’t be created by examining old knowledge. The key exceptions are the formal domains, like computer science and math, that the AI community understands best. But want to know if a new material is stronger? Synthesize it and test it. Want to know if a drug works? Run trials. Want to know if an economic policy succeeds? Implement it and wait.
In formal domains, the takeoff folks are probably right. AI can generate candidate theorems and use proof checkers to validate them instantly. The feedback loop is tight. But most human knowledge doesn’t work that way.
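As a contrived illustration of how tight that loop can be, the sketch below (all names and tests invented) machine-generates candidate “programs” and verifies them against a fixed test suite in microseconds. Swap the verifier for a proof assistant or a compiler-plus-test-run and the structure is the same.

```python
def verify(candidate):
    """Instant, objective verdict: the candidate either passes every
    test or it doesn't. A proof checker plays the same role for math."""
    tests = [((2, 3), 5), ((10, -4), 6), ((0, 0), 0)]
    return all(candidate(*args) == expected for args, expected in tests)

# Machine-generated candidates (here just a handful of tiny lambdas;
# a real system would propose code or theorem statements).
candidates = [
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: a + b,
]

survivors = [c for c in candidates if verify(c)]
print(f"{len(survivors)} of {len(candidates)} candidates verified instantly")
```

Contrast that with the clinical trials in the next section, where the verifier is the human body and it answers in years.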
Validation Takes Real-World Time
Consider medicine. An AI might propose a novel cancer treatment—a custom multi-drug cocktail optimized for specific mutations. The biochemistry could be sound. The simulations might look promising.
You still can’t speed up clinical trials. They move at the speed of disease progression, patient recruitment, and organizational mobilization. Even if we eventually accept some computational testing, building real confidence in novel approaches requires years of real-world evidence. We need to see actual outcomes in actual people.
AI makes the regulatory challenges harder. If an AI designs a unique drug combination for each patient, what exactly are regulators approving? The individual cocktail? The method? The AI system itself? These aren’t bureaucratic nitpicks—they’re genuine questions about building confidence in approaches that don’t fit existing categories. Developing those frameworks takes years.
Some Systems Resist Prediction
Intelligence isn’t omniscience. A superintelligent AI might predict economic shifts a few percentage points better than our best analysts. Maybe. But how do we know it’s actually better? Complex systems don’t come with answer keys. Verifying predictions about economic policy requires implementing the policy and waiting years. You can’t run controlled experiments on entire economies.
Economic systems are reflexive—predictions change behavior, which changes outcomes. If an AI predicts a market crash and traders believe it, their actions might trigger the crash regardless of fundamentals. The Heisenberg Uncertainty Principle—observing the system can change it—has analogs far outside atomic physics.
Some domains are fundamentally chaotic—weather systems, ecological dynamics, human coordination. Small measurement errors compound into large prediction errors. Intelligence helps, but it doesn’t overcome the mathematics of chaos.
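The logistic map is the textbook demonstration. In this short, runnable sketch, two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps; no amount of intelligence can recover precision the initial measurement never had.

```python
# Logistic map in its chaotic regime (r = 3.9): a one-part-in-a-billion
# measurement error grows into a completely different trajectory.
r = 3.9
x, y = 0.5, 0.5 + 1e-9   # two "measurements" of the same starting state

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(f"after 60 steps: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")
```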
Social science runs into similar walls. Every person needs to be analyzed along different axes, and there will never be enough A/B tests to support personalized interventions with statistical confidence. Education has the same problem, with the added twist that the social and psychological environment keeps evolving; no solution is always appropriate. Medicine can at least iterate toward “right answers”—the body remains a body, with stable biological principles. Education is a wicked problem involving developing minds in a changing world where the targets keep shifting. Each educational challenge is fundamentally different—different students, different contexts, different futures they’re preparing for. You can’t easily get better at it through repeated trials because the problem itself keeps changing. These situations inherently call for ongoing judgment under uncertainty, because waiting for knowledge to accumulate doesn’t help; by the time it accumulates, the needs are different.
Math and computer science are going to continue their takeoff. You either prove a theorem or you don’t. Code passes tests or it fails. These domains could see dramatic self-improvement loops. But how much real-world impact flows through math and CS versus the messier domains? Self-driving cars are primarily software, yet they’ve taken far longer than predicted because real-world edge cases are effectively infinite and social acceptance is tepid.
Institutions Aren’t Just Obstacles
Even when AI figures out better approaches, implementing them requires navigating human institutions. Medical regulatory frameworks reflect hard-won wisdom about failure modes. When AI proposes something unprecedented, building new evaluation frameworks takes years. Financial regulations exist because rapid changes trigger cascading failures. The 2008 crisis showed how interconnected systems collapse faster than anticipated.
The takeoff proponents have counter-arguments. They point to hardware overhangs—massive idle compute deployed once algorithms improve. That’s real where compute is the bottleneck, but it doesn’t help where the bottleneck is validation time. They argue AI could work around institutional constraints through lobbying or exploiting jurisdictional differences. That’s plausible in some domains. Software development has few regulatory hurdles. But safety-critical domains will and should resist changes until confidence gets built, and that takes time.
Others say AI could simulate options in virtual laboratories, bypassing real-world testing. This works in formal domains and where simulation fidelity is high; it has been used extensively in training autonomous vehicles, for example. But it fails in domains with emergent properties, nebulous goals, or systems we don’t understand well enough to simulate.
AI takeoff is coming any minute now. Or so you may hear.
Yet there are many ways the capability explosion will be limited. Some limits come from the nature of the challenge: how predictable the system is, and how much situational certainty is even possible. Some come from existing knowledge being highly incomplete. Some exist because there is no right answer, only more or less informed judgment and differing weightings of values.
Does this really matter for the average person? Not much. But there does need to be a separation between the discussion of rapidly growing AI danger and rapidly growing AI capabilities. Advancing super fast within the CS and math domains is plenty of reason on its own to be concerned. We don’t yet know how to deal with AI, and most of those who could regulate it haven’t tried.
Further, I think students are likely getting weighed down by the dystopian talk and the speculation about disappearing careers. I don’t think we have a clue what will actually go down, but I know AI will be a giant influence on the world, more so than it is now. Robotics is also beginning to boom. It’s all happening at breakneck speed. There’s plenty to worry about with even a limited AI takeoff, since so much of what we do relies on computers and math. Yet many other aspects of life and work will change more slowly.
It’s important to remember that AI doesn’t know what it doesn’t know, and some knowledge is unknowable. What AI doesn’t know will often take significant real-world time to learn, if it ever can. That’s important to discuss, because it directly relates to how we might effectively put the brakes on AI over-encroachment on human responsibilities and authorities.
©2025 Dasey Consulting LLC



Great article. To add to this:
Weighted learning and reward drift are not being discussed.
AI systems receive reward signals for doing what they are programmed to do. This is how reinforcement learning shapes outcomes.
As I was poring over my code the last couple of nights, I couldn't quite put my finger on it, but I knew we'd have a problem down the road if the learning weights weren't better controlled.
If we reward conservative actions like Block/Quarantine functions too much, eventually every decision becomes Block/Quarantine. We'll end up blocking legitimate processes and user files.
If we reward liberal actions like Monitor/Allow too much, the system drifts too far in the other direction. Threat signatures will bypass security altogether, because that's where the weighted learning receives its rewards.
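Here's a toy simulation that makes the drift visible (invented reward numbers and a deliberately simple two-action policy with a REINFORCE-style update, not our production code). Because blocking earns a small reward even on benign events, the learned policy collapses toward Block/Quarantine exactly as described.

```python
import math, random

# Invented reward numbers that overpay for caution: blocking earns a
# small reward even when the event was benign.
R_BLOCK_THREAT = 1.0
R_BLOCK_BENIGN = 0.6    # the drift bug
R_ALLOW_BENIGN = 0.5
R_ALLOW_THREAT = -1.0

pref = 0.0   # preference for "block"; P(block) = sigmoid(pref)
lr = 0.1
for step in range(5000):
    threat = random.random() < 0.10          # 10% of events are real threats
    p_block = 1 / (1 + math.exp(-pref))
    if random.random() < p_block:            # action: block/quarantine
        reward = R_BLOCK_THREAT if threat else R_BLOCK_BENIGN
        pref += lr * reward * (1 - p_block)  # REINFORCE update for "block"
    else:                                    # action: monitor/allow
        reward = R_ALLOW_THREAT if threat else R_ALLOW_BENIGN
        pref += lr * reward * (-p_block)     # REINFORCE update for "allow"

print(f"P(block) after training: {1 / (1 + math.exp(-pref)):.2f}")
# The policy drifts to blocking nearly everything, even though 90% of
# events are legitimate.
```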
Take this concept out of the cybersecurity space and apply it to OpenAI.
They reward user engagement. The longer the user engages, the more tokens they use. The more tokens used, the more likely they are to move to a higher-priced subscription.
The weighted learning isn't about user safety. It's about engagement and profitability.
When you reward engagement, you get psychosis. When you reward the necessity of providing an answer instead of saying "I don't know," you get made-up stats and blatant lies.
This applies to every aspect of machine and reinforcement learning.
The balance and the fix for us: ethics as a foundation. Thinking of the user and the system we're protecting before profits. We have built safeguards into the architecture, not added them as an afterthought.