Stop Pushing, Start Catalyzing (1 of 2) - How AI Knowledge Can Erode Obstinacy

Ask whether computers are "good or bad" for learning and you've already missed the point. The question reveals a fundamental misunderstanding about how technology works in educational settings. Computers aren't inherently good or bad—they amplify existing practices, create new possibilities, and disrupt old assumptions. Whether that leads to better learning depends entirely on how we use them.
Most leaders approaching AI in education make this same category error, but with an added twist. They don't just want to know if AI is good or bad—they want to force it to be good through policies, mandates, and initiatives, or force it to be bad through bans and stigma. Often, what’s being forced rests on weak knowledge of AI itself.
Those leaders are trying to push change rather than create the conditions where change can happen naturally.
In chemistry, this is the difference between adding more heat to a reaction versus adding a catalyst. Heat forces molecules to move faster, creating more collisions, but it's inefficient and often destructive. Molecules can resist changing until the heat gets so high the bonds themselves are destroyed. A catalyst works differently—it lowers the activation energy needed for the reaction to occur. It doesn't add force; it removes barriers.
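For anyone who wants the chemistry behind the metaphor spelled out, the textbook Arrhenius rate law (included here only to ground the analogy, nothing AI-specific) relates the reaction rate constant k to the activation energy E_a and the temperature T:

\[ k = A \, e^{-E_a / (RT)} \]

Raising T is the brute-force route; a catalyst lowers E_a, which makes the exponential term larger and the reaction faster at the same temperature. That is the move this article argues for.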
Leaders who understand this distinction stop asking "How do I make teachers use AI?" and start asking "What barriers are preventing teachers from wanting to explore AI?" The first question leads to mandates and resistance. The second leads to sustainable change.
Of course, there is a third question, also driven by naiveté. I have gotten it directly from several education leaders, in various forms of “How can we raise the activation energy barrier so we can keep doing what we’ve been doing?” They ask about moats and crocodiles, not bridges.
The problem is that many leaders jump straight to policies and procedures without addressing the foundational issues that make change feel impossible. They're adding more barriers and requirements instead of removing them. They think they're being catalysts, but they're actually increasing the activation energy needed for change. This happens because schools fundamentally aren't designed as learning systems, so they default to control mechanisms rather than adaptive responses when faced with uncertainty.
Knowledge and Emotion are Inseparable
Teacher resistance to AI has multiple sources, and oversimplifying it guarantees failure. Some teachers are genuinely change-averse—they've found approaches that work and see no reason to complicate things. Others have legitimate pedagogical concerns about student dependency, critical thinking skills, or academic integrity that deserve attention. Still others are operating at capacity, drowning in existing responsibilities.
But much of the emotional resistance stems from incomplete or incorrect information about what AI actually does. Fear fills knowledge gaps. When teachers don't understand AI's capabilities and limitations, they default to worst-case scenarios. When they or their students use it unskillfully and get generic output, they conclude it must all be hype. When they hear "AI," they imagine either magic that does everything or a threat that replaces everything.
This is why knowledge and emotion can't be separated. Emotions based on wrong assumptions persist until the assumptions get corrected. A teacher who believes AI will make students stop thinking entirely will resist any AI integration, no matter how thoughtfully designed. A teacher who thinks AI is cheating by definition won't engage with conversations about appropriate use. A teacher who only sees the downsides of AI isn’t going to have the energy to join the club.
The most emotionally charged concern, the one that doesn’t get front-row attention, is professional relevance. Teachers who've spent years crafting assignments, providing feedback, and building relationships with students through their work suddenly wonder: "Does what I do matter anymore?" When students can generate essays that might earn decent grades, the fundamental value proposition of traditional teaching feels threatened. This is happening in many fields. Professional identity and purpose are under pressure.
Multiple surveys show that the more educated people are about AI, the more their opinions of it improve. I don’t think it’s because they’re less terrified of where this will go; rather, knowledge and experience with AI bring nuance to the menu. Computers can be used for good things too, not only cyberattacks. People realize that AI output still requires human steering and many judgments about accuracy, appropriateness, and effectiveness. An AI trained on stolen creative work might still be ethically acceptable if it is used to help disadvantaged people improve their lives. They might never want AI to replace actors, but an AI that plays a scene partner so a human can better practice the craft? Maybe that’s OK. The possibilities are endless, but not if the mentality is narrow.
Perhaps teachers start to see AI as potentially stress-reducing rather than stress-adding. Realistically, it isn’t going to save time in the near term, and even later only when the task is somewhat repeatable. Make that clear. Learning takes extra energy up front to get over the activation energy hump. The chief advantage is that AI can create forms of learning that would have been workload-prohibitive pre-ChatGPT.
Much of teachers’ stress isn't about time management—it's about managing adversarial audiences. Students who are bored act out. Disengaged learners become discipline problems. Classes that feel like battlegrounds drain teachers emotionally in ways that no amount of reduced planning time can fix.
When teachers understand that AI might help create more engaging learning experiences, provide personalized feedback, or generate practice problems adapted to individual student needs, suddenly the "extra work" of learning these tools starts to look like an investment. The activation energy shifts because the potential outcome changes from "more work" to "easier work."
This is why effective professional development focuses on specific, practical applications rather than general AI literacy. Show a history teacher how AI can help create primary source analysis activities that students actually find interesting. Show an English teacher how AI can provide targeted feedback that helps students revise more effectively. Show a math teacher how AI can generate problem sets that adapt to individual student understanding.
The goal isn't to create AI evangelists or eliminate all concerns. It's to build the institutional capacity for nuanced thinking—people who can recognize both possibilities and pitfalls, who can experiment thoughtfully rather than rejecting or adopting blindly. You will still have critics and traditionalists. Challenge them too. If you don’t want to use AI in your teaching, then maybe teach about AI in screenless ways, à la my last book.
Getting Educated is Harder Than It Seems
Building that foundation of knowledge and emotional readiness sounds straightforward, but most schools will struggle with the execution. The barriers to getting properly educated about AI are significant, and ignoring them leads to the kind of superficial understanding that reinforces resistance rather than reducing it.
Most schools don't have anyone with enough AI expertise to lead this education internally. Unless you already have educators or administrators who spend a substantial chunk of their job working with AI and actively learning from outside sources, you're operating blind. What passes for AI knowledge in many schools is a collection of rumors, misinterpretations, and outright falsehoods passed along through informal networks.
Most educator knowledge about AI starts with "I hear that" or "I think that"; it's secondhand information at best. This creates a dangerous cycle in which uninformed opinions get treated as facts, and those "facts" shape policy decisions. I have seen biases and not-quite-right pronouncements in consultant-run courses, institutional policy documents, AI curriculum frameworks, and, many times, in output from AIs themselves.
Sure, get some outside help. I do some training myself and can recommend others when they’re better suited to your needs. A workshop isn’t sufficient, though. It can get some momentum started, but that momentum dies out quickly, perhaps as soon as people leave the room and reenter the craziness.
Get volunteer help. Your community likely has parents, board members, or local professionals who understand AI far better than anyone on your teaching staff. Even if they don’t, if they want to investigate and coordinate expertise, let them. Give them some assignments you don’t have time for. Make them responsible for a weekly update to staff on AI changes that week. Whatever!
It is shocking to me that for two years I have volunteered my time to schools and colleges in my area, and although there are a few tentative bites this year, for the most part I get no response. I have had superintendents tell me that offers to volunteer go straight to junk mail. Uh-uh. Not OK. You don’t have to bring volunteers all the way in, but this is a time when it takes a village. I’m more welcome when I charge money. That’s messed up.
The kids themselves may know more than any adult in the building about how these tools work in practice. Instead of treating learning as something that flows from administration down to teachers, consider reversing that direction or creating opportunities for co-learning.
One-off workshops or pilot programs won't cut it. This technology changes too rapidly for "send one teacher to training and have them share what they learned" approaches. You need ongoing education, and you need to use AI to learn about AI. Teachers need hands-on experience with these tools, not just demonstrations. They need to see how AI responds to different prompts, where it succeeds, where it fails, and how its outputs change based on operator expertise.
The goal isn't to turn every teacher into an AI expert. It's to build enough genuine understanding that emotions can be based on reality rather than speculation, and decisions can be grounded in experience rather than hearsay.
Getting the knowledge and emotional foundation right is essential, but it's only the beginning. Once people can think clearly about AI—when they understand what it can and can't do, when their emotions are grounded in reality rather than speculation—then the real design work begins. How do you create institutional structures that support thoughtful experimentation? That's where the catalyzing really happens. That’s part 2 of this article series.


