Skill With AI is More About Critical Doing Than Critical Thinking
AI literacy efforts are training people to judge AI outputs. That's less than half the job.

The moniker most commonly used for AI skill development is AI literacy. I have hated this terminology from the get-go, principally because it carries a mistaken notion of what we’re dealing with.
Open just about any prominent AI literacy framework and you’ll find a familiar emphasis: critical thinking, evaluating bias, assessing truthfulness, ethical reasoning, spotting hallucinations. It’s digital literacy and academic research given a slight facelift. These frameworks treat the human as the audience for AI output, not as the director of AI work.
Most AI literacy framings discuss only half the picture, and arguably the less important half.
Using AI effectively requires two distinct orientations, not one. Yes, you need to critically evaluate what AI produces. But you also need what I’ll call critical doing—the judgment involved in directing, designing, managing, and deciding when AI should be involved in the first place. Current AI literacy efforts are heavily focused on the former and nearly silent on the latter. That imbalance is going to cost students.
“Critical Thinking” Was Always a Fuzzy Term
Before unpacking what critical doing is, it’s worth examining the critical thinking foundation because it’s shaky.
The term “critical thinking” as an educational goal is most associated with John Dewey’s How We Think (1910), though Dewey himself more commonly called the concept “reflective thinking”—slow, deliberate, evidence-weighing reasoning. Progressive educators popularized the term, and it leapt to the forefront of U.S. education policy in the 1970s and 1980s—driven by Cold War anxieties and declining test scores—then got thoroughly fused with information literacy concerns as the Internet arrived. The “evaluate what you read online” version of critical thinking that now dominates AI literacy frameworks is essentially that Cold War-era education reflex applied to a new information environment.
The problem is that “critical thinking” has always been a catch-all education-sector term more than a brain-science term. Cognitive scientists, neuroscientists, and judgment researchers tend to work with more precise concepts like dual-process reasoning, executive function, metacognition, and judgment and decision-making. When brain scientists do use “critical thinking,” they’re usually describing slow, deliberate analytical reasoning, the kind you use when you have time to carefully weigh evidence. That’s only one slice of the cognitive work that actually matters in real life. The vast majority of cognitive work that brains do is intuitive, and intuitive skills are taught very differently than deliberative ones.
I’ve argued in prior writing that what educators call “critical thinking” is really better understood as judgment skill, and that instruction for developing judgment looks almost nothing like what schools actually do. Real judgment develops through varied experience, feedback, pattern recognition under constraints, and exposure to authentic decisions with real consequences. Analyzing source credibility in a worksheet develops none of that. Applied to AI literacy, the frameworks are building on a concept that education never fully figured out how to teach in the first place.
What Critical Doing Actually Means
Critical doing is the active, generative, managerial side of working with AI. It includes things that have nothing to do with evaluating AI outputs, and some that happen before AI ever produces anything.
The most obvious piece is design and task allocation—deciding what you want AI to do and how. That’s not a trivial evaluation task. It requires understanding how to break a complex goal into components, which components benefit from AI involvement, what the work product at each stage should look like, and what success looks like overall. Effective AI use often requires architecting the work differently than you would with humans. That’s not critical thinking about an output. That’s prospective design before any output exists.
Then there’s the decision of whether to use AI at all. That choice requires its own form of judgment, including weighing the nature of the task, understanding acceptable error rates, and attending to process. A teacher deciding whether to use AI to draft parent communications isn’t evaluating AI output. They’re making a work allocation judgment about what the task is really for and what gets lost when AI does it. Most AI literacy curricula offer essentially nothing on this.
There’s also the back-and-forth of refining, redirecting, and pushing back on AI work in progress. This is less about spotting errors after the fact and more about teaching and managing the AI.
None of this is covered by “evaluate AI outputs for bias and accuracy.”
Why This Gap Matters More as AI Gets More Capable
The evaluation skills that AI literacy emphasizes are increasingly the things AI itself can help with. I continue to see teachers and professors highlight how error-prone AI-generated citations are, all while using AI in 2023 ways. Deep research modes have built-in verification steps, and though I would still check the citations, I rarely encounter fictitious ones anymore.
Newer reasoning models already perform rudimentary self-critique. The human comparative advantage is shifting away from verification and toward direction, design, and judgment about the work itself. Training people primarily to be good evaluators of AI output is training them for the part of human-AI collaboration that AI is most aggressively absorbing.
Critical doing, on the other hand, is where human judgment remains essential and difficult to automate. Deciding whether a given task should involve AI. Determining how work should be structured. Knowing when an AI-generated direction is technically correct but fundamentally wrong for the situation. Recognizing that an AI is optimizing for the stated goal in a way that undermines the actual goal. These are judgment calls that require understanding of context, values, and stakes, and they require practice in authentic settings, not instruction in detection techniques.
Educators have defaulted to teaching people to be better receivers of information. That instinct made sense when few in the workforce needed to teach and manage, and those skills could be acquired gradually across a career arc. AI has changed the production side of the equation so dramatically that receive-only training is now preparation for a world that no longer fully exists. Every worker now has to be able to teach and manage AI, at a minimum.
Critical thinking still matters. But the people who will use AI most effectively aren’t primarily the ones who can best scrutinize its outputs. They’re the ones who know how to put AI to work, and that skill has barely made it into any curriculum I’ve seen.
©2026 Dasey Consulting LLC
