It has been two and a half years since ChatGPT's launch shocked the world. Were it not for AI-assisted cheating, I doubt many schools would be paying attention at all. Those that are paying attention usually focus on defensive measures (e.g., policy) rather than change-oriented responses.
Many people talk about the reasons for AI inattention or resistance, but at heart I think the system isn't constructed to be able to change. Most people in any field resist change, but the characteristics of other jobs—competitive obsolescence, frequent job and work culture shifts, constant reorgs—create the ethos of a changing system. There is no such vibe in education.
Education, writ large, is not a learning system. It has the hallmarks of a closed, static system: little interchange, including of personnel, between the sector and the world outside it, and weak, slow mechanisms for changing anything. The entire system is defensive toward change.
What makes this moment different is the accelerating pace of change outside education. For decades, the world changed slowly enough that educational lag didn't matter much. A curriculum developed in the 1990s could remain relevant through the 2000s. But now technology, work patterns, and skill requirements shift on timescales of months to years rather than decades. The education system's decade-plus adaptation cycles haven't just become inadequate—they've become catastrophically mismatched to external reality. Schools have been so slow to adopt computer science curricula, for example, that the decades-long emphasis on coding has lost much of its relevance. The system changed so slowly that the change it envisioned is no longer the right one.
Each year of non-adaptation makes schools relatively more obsolete, creating a compounding problem where they're not just behind, but falling behind at an accelerating rate. Students can now see AI doing the kinds of tasks schools train them for, making the mismatch between education and reality immediate and obvious rather than abstract and distant. Educators claim everything they have always taught is still foundational, but it’s not. Otherwise, they’d be rolling out the abacus, and mainly teaching about religion and farming techniques.
There are principles that any learning system must embrace. In my recent book, AI Wisdom Volume 1, I described four characteristics (a rough code sketch follows the list):
a system that can change (design)
a process for making those changes (learning mechanism)
a strategy for guiding changes (pedagogy)
a way to measure success (assessment)
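For the software-minded reader, here is one way to picture how the four fit together. This is my own minimal Python sketch, not code from the book; the names and the adopt-if-better rule are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

Practices = Dict[str, float]  # the knobs the system is allowed to turn

@dataclass
class LearningSystem:
    practices: Practices                        # design: a system that can change
    propose: Callable[[Practices], Practices]   # pedagogy: a strategy for which changes to try
    evaluate: Callable[[Practices], float]      # assessment: a way to measure success

    def step(self) -> None:
        """Learning mechanism: a process that turns feedback into adopted change."""
        candidate = self.propose(self.practices)
        if self.evaluate(candidate) > self.evaluate(self.practices):
            self.practices = candidate          # keep only changes that measurably help
```

A system missing any one of the four (nothing it is allowed to change, no update loop, no strategy for proposing changes, or no way to score them) simply has no step to run.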
Schools’ defensive posture is deeper than professional territoriality. Educational systems are ironically resistant to the very principles that make learning and improvement possible. They're like AI models that only train on their own outputs, eventually suffering from what researchers call "model collapse" because they lose touch with external reality.
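For readers who haven't met the term, here is a minimal toy sketch of the dynamic, using nothing beyond Python's standard library. It is not how real model-collapse experiments are run (those involve generative models retrained on their own synthetic output); it only shows the core failure: keep refitting a model to its own outputs and it drifts away from the reality it was built to describe.

```python
import random
import statistics

random.seed(0)

# Fit a very simple "model" (a mean and a spread) to real data once...
real_world = [random.gauss(50.0, 15.0) for _ in range(1000)]
mu, sigma = statistics.mean(real_world), statistics.stdev(real_world)
print(f"fit to reality : mean={mu:5.1f}  spread={sigma:5.1f}")

# ...then re-fit it, generation after generation, only to samples it
# produced itself, with no fresh data from the outside world.
for generation in range(1, 61):
    own_outputs = [random.gauss(mu, sigma) for _ in range(10)]
    mu, sigma = statistics.mean(own_outputs), statistics.stdev(own_outputs)
    if generation % 20 == 0:
        print(f"generation {generation:2d} : mean={mu:5.1f}  spread={sigma:5.1f}")

# Typical outcome: the mean wanders and the spread keeps shrinking, so the
# model ends up describing its own echo rather than the world it started from.
```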
I'm worried education is headed toward model collapse.
The Feedback Desert
Learning systems gather frequent, meaningful feedback from their environment and adjust behavior based on results. Educational systems have created the opposite—a feedback desert where the signals needed for improvement are either absent, delayed by years, or completely disconnected from what matters. If you gave an AI system the same sparse, delayed, and irrelevant feedback signals that schools rely on, it wouldn't be capable of learning to do much of anything different either.
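To make that claim concrete, here is a hedged toy comparison. The "practices," their success rates, and the feedback schedule are all invented for illustration: two copies of the same simple learner try to work out which practice serves students best. One gets prompt, relevant feedback after every trial; the other gets feedback only once, at the very end, blurred by noise standing in for measures disconnected from what matters.

```python
import random
random.seed(1)

# Invented practices with made-up true success rates -- illustration only.
PRACTICES = {"worksheets": 0.55, "projects": 0.70, "tutoring": 0.80}
BEST = "tutoring"

def run(trials, feedback_every, noise):
    """Mostly-greedy learner that can only update its beliefs when feedback arrives."""
    estimates = {p: 0.5 for p in PRACTICES}   # starts indifferent
    counts = {p: 1 for p in PRACTICES}
    pending, used_best = [], 0
    for t in range(trials):
        # use what currently looks best, with a little experimentation
        if random.random() < 0.1:
            choice = random.choice(list(PRACTICES))
        else:
            choice = max(estimates, key=estimates.get)
        used_best += (choice == BEST)
        outcome = 1.0 if random.random() < PRACTICES[choice] else 0.0
        pending.append((choice, outcome))
        if (t + 1) % feedback_every == 0:            # feedback finally arrives...
            for p, o in pending:
                signal = o + random.gauss(0, noise)  # ...as a noisy proxy for what matters
                counts[p] += 1
                estimates[p] += (signal - estimates[p]) / counts[p]
            pending = []
    return used_best / trials

dense = run(trials=1000, feedback_every=1, noise=0.0)
sparse = run(trials=1000, feedback_every=1000, noise=0.5)
print(f"time spent on the best practice, prompt relevant feedback: {dense:.0%}")
print(f"time spent on the best practice, sparse noisy feedback:    {sparse:.0%}")
# In runs like this, the promptly informed learner usually settles on the best
# practice; the one living in a feedback desert never gets the chance to change course.
```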
Schools measure test scores that correlate poorly with the skills they claim to value most. They track graduation rates and dilute the value of GPAs while employers consistently report that graduates lack essential workplace capabilities. The mismatch between what they measure and what matters has created a system that can appear successful while failing at its core mission.
Even when clear feedback emerges—persistent achievement gaps, widespread employer dissatisfaction—the educational response typically involves doubling down on existing approaches rather than questioning fundamental assumptions.
Meanwhile, schools sit on a goldmine of data they rarely analyze systematically. Each assignment, test, project, and interaction produces information about what works and what doesn't. Individual teachers might notice patterns in their own classrooms, but there's rarely systematic analysis across classes, schools, or districts to identify broader trends. The technology exists to find these patterns—AI systems excel at this kind of analysis and can be set up to be FERPA compliant—but educational systems rarely invest in learning from their own experience.
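As a sketch of what "systematic analysis" could look like, here is a short, hypothetical example. The file name and column names are invented, and a real deployment would need the FERPA safeguards mentioned above; the point is only that the roll-ups are routine once the data is in one place, whether a person writes the script or an AI assistant does.

```python
import pandas as pd

# Hypothetical export of assignment results; file and column names are invented.
# Expected columns: school, teacher, skill, score (0-100).
grades = pd.read_csv("district_assignments.csv")

# Which skills are weakest across the whole district?
by_skill = grades.groupby("skill")["score"].agg(["mean", "std", "count"])
weakest = by_skill.sort_values("mean").head(5)

# Where do schools diverge most from one another on the same skill?
school_means = grades.groupby(["school", "skill"])["score"].mean()
divergent = (school_means.groupby("skill").std()
                         .sort_values(ascending=False).head(5))

print("Lowest-scoring skills district-wide:\n", weakest)
print("\nSkills with the largest school-to-school gaps:\n", divergent)
```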
The Safety-Critical Trap
Educational systems weren't originally designed to be learning systems. They were set up more like safety-critical industries—law enforcement, public health, aviation—where the primary goal is minimizing risk and ensuring consistent delivery of established practices. Having worked with some of these industries, I recognize the pattern. They tend to be risk-averse echo chambers, and part of that stems from legitimate concerns about the harm that results from failure.
Educators carry a similar burden. They worry about destroying a child's future by using a pedagogy that lacks evidence. This mindset made sense when the world changed slowly and established practices remained relevant for decades. But it becomes counterproductive when timely adaptation is essential for preparing students for a rapidly changing world.
The system has created structural barriers that actively prevent learning and adaptation. Take continuing education requirements: some states mandate that teacher training align with existing educational standards to qualify for continuing education credits. But no school has AI standards yet! The requirement effectively defines the system as one that cannot change, creating a "self-licking ice cream cone": a closed loop that serves only to perpetuate itself.
The result is educational systems that have, in effect, designed themselves into what learning theorists call pure exploitation mode. Any learning system needs to balance exploitation—using what you know works—with exploration, trying new approaches to discover better solutions. But schools have become trapped by their own "evidence-based practices" mantra.
The "evidence" they demand is often weak—small studies with limited scope, research conducted in contexts very different from their own classrooms—yet it's used to justify avoiding any meaningful experimentation. Real learning systems need structured exploration. They try things, measure results, and adapt based on what they discover. But educational systems have become so risk-averse that failed experiments are seen as disasters rather than learning opportunities.
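The exploitation-exploration tension is easy to show in miniature. Below is a hedged toy sketch of a textbook epsilon-greedy setup (the practice names and success rates are invented): a learner that starts with an "evidence base" favoring its legacy practice and never experiments stays on that practice forever, while the same learner with a small experimentation budget finds the better options.

```python
import random
random.seed(2)

# Invented options and made-up true success rates -- illustration only.
PRACTICES = {"legacy method": 0.55, "new method A": 0.70, "new method B": 0.80}

def run(epsilon, trials=2000):
    """epsilon is the fraction of trials deliberately spent trying alternatives."""
    estimates = {p: 0.0 for p in PRACTICES}
    counts = {p: 0 for p in PRACTICES}
    estimates["legacy method"], counts["legacy method"] = 0.55, 50  # the existing "evidence base"
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(list(PRACTICES))      # exploration: try something new
        else:
            choice = max(estimates, key=estimates.get)   # exploitation: use what seems to work
        reward = 1.0 if random.random() < PRACTICES[choice] else 0.0
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return max(estimates, key=estimates.get)

print("pure exploitation (epsilon=0.0) concludes:", run(0.0))  # never questions the legacy practice
print("with some exploration (epsilon=0.1):      ", run(0.1))  # discovers that better options exist
```

With epsilon set to zero, the learner never even generates the evidence that could change its mind, which is roughly the position the "evidence-based practices" mantra puts schools in.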
Breaking the Model Collapse
The stakes keep rising while educational systems remain largely unchanged. Technology, work, and society are transforming in ways that make traditional educational approaches increasingly obsolete. Students enter the workforce prepared for a world that no longer exists, competing with systems that can learn and adapt faster than the institutions that educated them.
The path forward requires a fundamental shift from a safety-critical mindset to a learning-adaptive one. This doesn't mean abandoning concern for student welfare—it means recognizing that the greatest risk of harm is in the status quo.
Educational systems could partner with businesses and other organizations to develop meaningful metrics that actually measure the capabilities they claim to value, or let students attempt real job tasks to see where they struggle. This doesn't mean reducing education to job training, but it does mean creating feedback loops that connect educational outcomes to real-world performance. The durable skills that schools want are the same ones that employers want. Show them that's what students are learning!
Schools and colleges could embrace systematic experimentation, trying new approaches in controlled ways and measuring results. They could analyze their own data to identify what's actually working. They could remove structural barriers that prevent adaptation to new realities.
Most importantly, they need to acknowledge that their personnel interchange problem is crippling their ability to adapt. A significant portion of educators—from kindergarten teachers to university professors—have never worked outside educational environments. They attended school, then worked in schools. Their understanding of workplace needs, technological change, and economic realities comes secondhand. When outside perspectives are offered, they're often dismissed based on credentialism rather than merit.
As just one example, I've been trying to volunteer my expertise to local schools for years now. Three decades of AI research, extensive work on how complex systems and organizations learn and adapt, direct experience with how different industries are transforming—you'd think that might be useful input as schools grapple with preparing students for a rapidly changing world. But so far my offer hasn't been accepted; most of the time I get no response. The educators I know tell me what I already know: "You're not a teacher, so they don't want to hear from you." Other school leaders tell me they shunt all volunteering offers automatically to junk.
Educational institutions can become learning systems. But only if they're willing to abandon the safety-critical mindset that once served them well and embrace the learning-adaptive approach that rapidly changing times demand. The meta-principles that make any system capable of learning—rapid feedback, systematic experimentation, diverse input, and adaptive responses—offer a clear roadmap for transformation.
The world isn't waiting for them to figure this out. Model collapse, once it sets in, is extremely difficult to reverse.
I can't help but see that since we've long delayed a necessary transformation of education, AI is now going to force it.
©2025 Dasey Consulting LLC