I have a fundamental issue with distilling education about AI into the label “AI literacy.” It's not because there's anything wrong with the existing scope of AI literacy, which I see as focused on selecting and using tools effectively and safely.
No, it’s because the label implies a false choice: either you learn how to use the tools effectively, or you become a computer scientist (CS) and learn how to build AI. The implication is that anyone not interested in a CS track need not understand what’s going on under the hood, or the challenges an AI developer faces. In that sense, AI literacy is like digital literacy.
But AI isn’t a tool you just use; it’s a tool you shape. It's like working with transfer students on day one. They arrive with existing knowledge, experiences, and biases but little clue about the school context. The job isn't simply to use their minds as they are, but to shape and develop them through teaching.
We’re all AI developers. The minute we forget that, safety and ethical vulnerabilities result. If others use the AI we develop, then we’re product developers too. And developing AI means designing something that thinks and learns.
AI development is built on math, but the math and the associated CS approach address issues that need no math to understand. Planning, applying pedagogy, managing learners and actors, and assessing abilities in school don’t require math, except in simple ways on the assessment end. We’re teaching AI, but I can talk about it like a teacher. And in many ways, those AI teaching lessons will be pretty easy for educators to understand, since we’re speaking similar languages.
I wrote about those AI roots in my new book. One thing is for sure: what I talk about in the book will feel intuitive, because much of it is what you already do. Now I want you to teach that to your students so they can apply it to their AI interactions.
AI Wisdom Volume 1: Meta-Principles of Thinking and Learning
One of the first uncomfortable realities is that anthropomorphism will help you understand AI much better than our historical notions of machines. I’ll lean on that when describing the book.
When a manager gets a new employee, they should want to know two fundamental things about them. One is how the employee can best be applied to business challenges, along with the potential risks and costs of those choices. But underlying that is understanding how the new employee thinks. How were they educated? How adaptable are they? What are the limits of their thinking? What biases might they have?
AI Wisdom Volume 1 answers what is at AI’s core: not the “neuroscience” of it, but the “psychology.” What are the challenges any intelligence must face? How does AI approach those challenges? How do humans? And how does that understanding supply critical thinking fuel for AI interactions?
AI Wisdom Volume 2 comes out late 2025-ish. (It’s mostly written, but this book-writing stuff is slow.) It discusses additional aspects of “what is AI?” not covered in Volume 1, such as AI “reasoning” and other cognitive additions, AI teams, and agents. The rest is about meta-principles for AI/human tasking, common challenge approaches, and ways to analyze for safety and ethics. If Volume 1 is about AI psychology and teaching, then Volume 2 is about AI tasking, management, and leadership.
AI’s Durable Skills
Understanding AI's psychological foundations provides a more durable framework than focusing only on how to use current tools.
The other challenge of AI literacy is the ephemeral nature of the knowledge. Prompt strategies are already different from last year’s, and AI safety and ethics issues keep evolving. AI literacy must chase a very fast-moving train. To some degree, that’s unavoidable.
But there is a more durable knowledge base, and those roots are pervasive not only in understanding AI but also in understanding people and society. AI’s neural nets are not conceptually different from the emergent behavior in ant hills or economies. AI training adds randomness to the process, which improves the ultimate learned performance; a small sketch below shows one common way that randomness gets injected. That also shows up in various forms of human pedagogy. The crossovers, albeit often with key distinctions between brains and AI, are extensive. For that reason, teaching about AI is also teaching about how we think.
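For readers who do want a peek under the hood, here is a minimal, hypothetical sketch (not from the book) of one well-known way randomness enters training: “dropout,” where a network randomly ignores a fraction of its own units on every training pass. PyTorch and the toy data are assumptions chosen purely for illustration.

```python
# Minimal sketch: deliberate randomness (dropout) during training.
# Assumes PyTorch is installed; the tiny dataset below is made up.
import torch
import torch.nn as nn

torch.manual_seed(0)  # even the starting weights are drawn at random

# Made-up data: 200 examples, 10 input features, one target value each
X = torch.randn(200, 10)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(200, 1)

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),   # the deliberate randomness: ignore 20% of units per pass
    nn.Linear(32, 1),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # dropout is active while training
    loss.backward()
    optimizer.step()

model.eval()  # dropout is switched off when the trained model is used
print(f"final training loss: {loss.item():.3f}")
```

Remove the dropout line and the network will typically fit its training examples a bit more tightly but generalize a bit less well, which is the counterintuitive point: deliberately injected noise can make the learner stronger.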
Every class, at every age group, is an opportunity to blend in AI education. Educators are trying to get students to think and learn. Bringing students into the methods behind their own classroom experiences, or getting them to design learning experiences, is a ubiquitous opportunity.
We quickly hit a wall in AI literacy instruction, with the difference between an OK and an exceptional prompter being chalked up to writing, critical thinking, creativity, collaboration…the usual list of durable skills. It feels like pointing to some mystical spirit we don’t quite understand.
In fact, critical thinking has long been discussed in the academic literature as domain specific. The critical thinking I need to evaluate oil paintings is not very similar to what a chemist applies. Not only is the underlying knowledge very different, but even the thinking processes are driven by very different objectives.
There’s something better than a hand wave toward durable skills. We can teach students AI’s durable skills.
Check out the book, and please leave a review if you like it. I’m hitting a few snags in the self-publishing process, most notably that my old book outcompetes my new book in searches for “AI Wisdom,” and I also found out about another book called AI Wisdom Vol. 1 that came out recently. For those reasons, it might be easiest to get to the book on Amazon through the link on my website. Amazon is the only place to find the e-book. If you want a print version through your local bookstore, Barnes & Noble, etc., they can find it through the ISBN codes below. Enjoy!
Paperback ISBN: 979-8-9882386-4-5
Hardcover ISBN: 979-8-9882386-5-2
©2025 Dasey Consulting LLC. All rights reserved.