Stop Pushing, Start Catalyzing (2 of 2) - Designing AI-Learning Institutions

In part 1, I argued that education leaders should function like catalysts rather than heat sources regarding AI—removing barriers to change rather than adding pressure to force it. Much of the emotional resistance to AI stems from incomplete or incorrect information about what these tools actually do. Teachers who don't understand AI's capabilities and limitations default either to worst-case scenarios or to dismissing the tools as weak. Knowledge and emotion can't be separated: emotions built on wrong assumptions persist until the assumptions are corrected.
Getting that knowledge foundation right is essential, but it's only the beginning. Once people can think clearly about AI—when they understand what it can and can't do, when their emotions are grounded in reality rather than speculation—then the real design work begins. You need institutional structures that make good experimentation easier and problematic uses appropriately difficult.
Don’t expect emotion to disappear. The goal is to have more constructive conversations grounded in common knowledge and to reduce the moral judgment often tacitly applied within peer networks. Those who abstain from AI still matter in the process. They can offer benchmark comparisons, since too much has changed about students and their world to rely entirely on pre-AI data. They can also help design the experiments; understanding their views of success and evidence is critical to a healthy culture. Emphasize “we need to learn” over “we need to do.”
Many leaders would jump straight from educating about AI to policies and rollout plans, or already did so before they knew enough to do it well. They're pushing by adding heat to the “reactions.” Others actively decide to slow-roll, or stall things entirely, but they’re only cooling the “reaction” inside their own walls. Students and educators live in the fast-moving real world too. What’s needed instead is selective barrier reduction: the work of a catalyst.
Freedom to Act, Permission to Fail
Teachers who understand AI don't need more supervision—they need more latitude. Most school systems do the opposite. The moment they hear "AI experimentation," they start writing restrictions, observing classes, creating approval processes, and building oversight mechanisms. They're designing friction into exactly the activities they claim to want more of.
The legal latitude exists, but the practical reality is often very different. Forty-three U.S. states allow local entities to develop curriculum. Standards explicitly state that they "do not prescribe how teachers should teach or which materials they should use." The federal government is prohibited by law from controlling curriculum.
But walk into many schools and you'll find teachers required to document every learning objective of every lesson against specific state standards, or whose lesson plans face other scrutiny, or who are handed curricula imposed by leadership. These are administrative choices presented as mandates.
Higher education has similar patterns. Tenured professors typically control their syllabi and teaching methods with minimal interference. But many adjuncts and lecturers receive standardized syllabi with explicit instruction to follow the established course syllabus and college-generated teaching materials. At some institutions, adjuncts need prior approval to deviate from established evaluation criteria.
Teachers internalize messages about rigid requirements while leaders claim they want innovation. The result is learned helplessness about experimentation, even in areas where significant freedom actually exists.
States primarily monitor test performance, not daily curriculum choices. Much of the micromanagement serves symbolic rather than practical purposes. It signals control more than it ensures quality. Smart leaders start removing these imaginary barriers. They give explicit permission.
But permission isn't enough. Teachers need protected time and space for experimentation. This means exempting volunteers from other duties during AI exploration phases. Creating collaboration periods where teachers can share what's working and what isn't. Providing substitutes so educators can observe each other using these tools. Making resource investments that signal institutional commitment rather than just individual initiative.
Design for safe-to-fail rather than fail-safe. Safe-to-fail acknowledges that some experiments won't work but ensures the risks are contained. A teacher trying AI-generated writing prompts with one class isn't betting the farm—they're gathering data. Fail-safe thinking requires extensive piloting, approval committees, and risk assessments before anyone can try anything. By the time fail-safe processes conclude, the technology has usually moved past whatever was being tested.
Most importantly, you need evaluation systems that support rather than punish innovation. Teachers won't experiment meaningfully if their performance reviews penalize any deviation from established practices. If your evaluation criteria reward compliance while your initiatives ask for innovation, you've designed institutional schizophrenia.
Mechanics That Generate Learning
Knowledge and freedom create necessary conditions for change, but they don't guarantee it happens systematically. Learning systems need specific mechanisms that capture insights, share discoveries, and adapt practices based on what's working.
Most schools treat professional development like vaccination campaigns—a one-time intervention that provides permanent immunity. Learning systems require ongoing feedback loops. Teachers need regular opportunities to share what they're discovering, challenge each other's assumptions, and iterate on promising approaches.
Create structures for productive randomness. Having different teachers try different approaches with different student populations generates far more useful information than everyone implementing the same pilot program. Encourage diverse experimentation, but build mechanisms to capture and share results. Some districts are creating "AI learning communities" where volunteers meet monthly to demo tools, discuss challenges, and refine approaches.
Make failure analysis routine and non-punitive. When AI-assisted lessons fall flat, that's data, not disaster. What specifically didn't work? Was it the tool, the prompt design, the integration with existing curriculum, the way students were prepared for the activity, or a mismatch between the challenge and learner capabilities? These insights prevent others from making the same mistakes and help refine approaches that show promise.
Most importantly, measure what you actually want, not what's convenient to measure. Schools routinely claim they want critical thinking, creativity, collaboration, and adaptability—but their measurement systems focus on standardized test performance and compliance metrics. If you want to see whether AI integration is improving student engagement, measure engagement. Survey students about their interest levels, track participation patterns, and observe classroom dynamics.
This doesn't mean every experiment needs rigorous quantitative analysis. Teachers can often tell what works. A math teacher knows when students are more willing to tackle challenging problems. An English teacher can sense when discussions become more thoughtful. A science teacher notices when lab work becomes more investigative rather than cookbook-following. The key is having structured ways to share these observations, not control them.
When teachers try AI-assisted approaches, create simple frameworks for capturing insights. What was the goal? What actually happened? What would you do differently? What surprised you? These questions generate useful institutional knowledge without requiring elaborate research protocols.
Focus on trends across multiple experiments rather than individual successes or failures. If several teachers report that AI-generated discussion questions work better when students help refine them, that's worth sharing. If multiple attempts to use AI for automated feedback consistently frustrate students, investigate why.
Pay attention to the quiet successes, not just the vocal failures. Teachers who are successfully integrating AI often don't announce it.
Use AI to learn about how AI is being understood and used. These tools excel at pattern recognition across large datasets. Have them analyze survey responses about teachers' AI experiences, identify themes in student feedback about AI-assisted assignments, or spot trends in which approaches work best for different student populations. The meta-learning opportunities are substantial if you're systematic about data collection. Of course, schools must be mindful of data privacy and related issues, but there are increasingly AI tools that can examine Learning Management System (LMS) content, for example.
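As a concrete illustration of that kind of meta-analysis, here is a minimal Python sketch of having a language model surface themes in anonymized, free-text survey responses about teachers' AI experiences. It assumes an OpenAI-style chat API; the file name, column name, model, and prompt wording are placeholders, not a recommendation, and any real use would need the privacy review noted above.

```python
# Minimal sketch: ask a language model to group anonymized survey
# responses into recurring themes. File name, column name, model,
# and prompt are illustrative placeholders.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_responses(path: str, column: str = "response") -> list[str]:
    """Read non-empty free-text answers from an anonymized survey export."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row[column].strip() for row in csv.DictReader(f) if row[column].strip()]


def summarize_themes(responses: list[str], model: str = "gpt-4o-mini") -> str:
    """Ask the model for theme labels, summaries, and representative quotes."""
    joined = "\n".join(f"- {r}" for r in responses)
    prompt = (
        "These are anonymized teacher survey responses about using AI in class.\n"
        "Group them into 3-6 recurring themes. For each theme, give a short label,\n"
        "a one-sentence summary, and one representative quote.\n\n" + joined
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


if __name__ == "__main__":
    print(summarize_themes(load_responses("teacher_ai_survey.csv")))
```

The point is not the particular tooling; it's that a few dozen lines can turn scattered anecdotes into a shared, discussable summary of where AI is and isn't working.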
Most importantly, separate learning from evaluation. Teachers won't share honest insights about what's not working if those admissions feed into performance assessments. Create spaces where experimentation results inform institutional learning without affecting individual accountability measures.
In higher education, department meetings often serve this function naturally. Faculty discuss teaching challenges, share successful approaches, and troubleshoot problems together. But these conversations rarely include systematic data collection or institutional memory. A chemistry professor's breakthrough with AI-generated problem sets might never reach the physics department, even though the approach could transfer easily.
Learning Systems Think Long-Term
Schools that successfully integrate AI won't just have teachers who use these tools effectively. They'll have developed institutional capacity to thoughtfully integrate new capabilities as they emerge. The competitive advantage goes to organizations that can experiment thoughtfully, learn quickly, and adapt continuously.
Most school change efforts fail because they're improvement projects disguised as transformation initiatives. Improvement assumes you know what you're trying to get better at and just need to execute more effectively. Transformation requires acknowledging you're not entirely sure what you should become, so you need systems that can explore possibilities and adapt based on what you discover.
AI forces transformation thinking because the optimal integration approaches aren't obvious yet. No one has definitive answers about the best ways to use these tools for learning. Teachers are discovering that AI works better for some subjects than others, that certain prompting approaches are more effective with particular student populations, and that integration strategies that seemed promising in theory fall apart in practice.
This means measuring different things. Instead of just tracking AI adoption rates, measure the quality of experimentation. Are teachers trying diverse approaches or all copying the same template? Are they reflecting systematically on results or just implementing randomly? Are successful practices spreading across the institution or staying isolated with individual innovators?
Focus on developing institutional wisdom, not just individual expertise. Wisdom emerges from the interaction between knowledge and experience over time. Individual teachers might become AI experts, but institutional wisdom requires systems that capture insights, transfer knowledge across personnel changes, and maintain learning momentum through leadership transitions.
Universities have advantages here. Academic culture already values experimentation, publication of results, and peer review. Faculty naturally document their innovations, share findings at conferences, and build on each other's work. The infrastructure for institutional learning exists—it just needs to be directed toward pedagogy as systematically as it's applied to research.
K-12 systems struggle more with institutional memory. Teachers often work in isolation. Successful innovations rarely get documented systematically. Personnel changes frequently wipe out accumulated knowledge. Building learning systems requires more intentional design in these environments.
The ultimate goal is having an educational system that can intelligently integrate new capabilities as they emerge. This requires moving beyond safety-critical approaches toward learning-adaptive ones. It means accepting that some experiments will fail while building systems that learn from those failures. If you design it right, both teachers and students will learn from the failures as much as the successes. It means privileging adaptation speed over approval processes when external change is accelerating.
Many leaders are still thinking about AI and education as a problem to solve rather than a capability to develop. They want policies that minimize risk rather than systems that maximize learning. They're designing for the world that was instead of the world that's emerging.
Leaders who get this right won't just successfully integrate AI—they'll build educational institutions capable of continuous adaptation. They'll create competitive advantages that persist beyond any single technology transition. Most importantly, they'll serve students by preparing them for a world where learning and adapting quickly is essential.
The catalyzing work isn't finished when people understand AI or even when they start using it effectively. Schools become learning systems when they can intelligently evolve alongside external change. Everything else is just improvement theater.
©2025 Dasey Consulting LLC


