
I’ve just finished reading a recent Atlantic article on AI titled “What Happens When People Don’t Understand How AI Works.” It’s the umpteenth piece I’ve seen from a major media outlet that presents a deeply distorted view of this technology. These articles, full of dire warnings about anthropomorphism and confident pronouncements about what AI isn’t, would be merely frustrating if they weren’t so influential. But this narrative is actively being baked into educational curricula.
The result is that our societal approach to “AI literacy” is being founded on a naive and ultimately dangerous premise: that the most important thing to learn about AI is that “it isn’t a brain,” followed by a list of the things it supposedly “can’t” do.
This approach, born from a lack of experience in thinking about intelligence itself, is a profound mistake. It is a framework of dismissal, not enlightenment. It is the intellectual equivalent of teaching aviation by insisting that an airplane doesn’t flap its wings like a bird. While true, it’s a trivial distinction that completely misses the point. The point is that the thing flies. And AI thinks. Not like us, and not with all the thinking features we possess, but it thinks nonetheless. And until we get comfortable with that fact, our entire conversation about its risks and benefits will remain unproductive.
To be clear, I expect this brain-versus-AI discussion with anybody new to the technology, and I have plenty of patience for it, even though I think the conversations ultimately have little impact. The curiosity is natural. Most people have neither the knowledge of AI nor the knowledge of brains needed to detail the similarities and differences.
No, my sensitivity is to confident pronouncements about a clear cognitive dividing line between brains and AI. The boundaries are gray, and the purpose of creating the false dichotomy seems to be to dismiss AI. The irony of the Atlantic article is that its title proclaims that people don’t understand how AI works, and then the piece itself misdescribes how AI (and brains) work. That kind of confident misinformation is just as dangerous from a person as from an AI. I very much want people to understand how AI really works, but that understanding should be grounded in AI research and cognitive science, not wishful philosophy.
The Expert’s Comfort and the Critic’s Crutch
In my decades working in and around AI, I’ve noticed a clear divide. It’s not between optimists and pessimists. It’s between those who have made their peace with the existence of non-human intelligence and those who haven’t.
Experienced AI practitioners rarely get bogged down in philosophical debates about whether an AI is “really” thinking or is “just a statistical parrot.” They cleared that psychological hurdle long ago. They accept the reality of what is in front of them—a system capable of learning, reasoning, creating, and problem-solving at a superhuman scale—and they get on with the serious business of understanding its capabilities and limitations. They take AI seriously as a cognitive force, even when they have no financial stake in it.
For many critics, however, the philosophical arguments serve as a comfortable crutch. They are a way to maintain a sense of human exceptionalism and intellectual superiority in the face of a machine that sometimes outperforms people. The popular refrain that Large Language Models are “just token prediction machines” is a classic example. It’s a tool of dismissal. On a certain level, isn’t a human brain giving a speech also engaged in a sophisticated act of prediction, pulling the most likely next word from a complex internal model to form a coherent thought? Isn’t the brain effectively doing statistical processing too? The label is used to make the machine’s process sound crude and simplistic, while we grant our own a magical, unexplained quality. This isn’t analysis; it’s self-soothing.
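For readers who have never seen what “token prediction” actually amounts to, here is a minimal, purely illustrative sketch: score candidate continuations, normalize the scores into probabilities, and pick a likely next word. The candidate words and scores are invented for this example; a real LLM does this over a vocabulary of tens of thousands of tokens, with scores produced by a deep network rather than a hand-written table.

```python
import math

# Toy illustration of "next-token prediction": a model assigns scores (logits)
# to candidate next words, converts them to probabilities, and picks a likely
# continuation. The candidates and scores below are invented for illustration.
logits = {"flies": 4.2, "thinks": 3.9, "floats": 2.7, "sings": 1.1}

# Softmax: exponentiate and normalize so the scores form a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# "Prediction" is simply choosing (or sampling) from that distribution.
next_word = max(probs, key=probs.get)
print({word: round(p, 3) for word, p in probs.items()})
print("predicted next word:", next_word)
```

The point is not that this toy captures an LLM; it is that “predicting the next word” describes the output interface, and says nothing about how rich or crude the internal model producing those scores actually is.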
This refusal to grant AI the status of a “thinking machine” prevents a real conversation. It’s philosophical gatekeeping that lets us feel intelligent by defining intelligence in a way that only we can possess. It’s a losing game, and it stops us from asking the more important, practical question: “What should be done about it?” If the answer is complete abstention, then it amounts to virtue signaling that offers little practical benefit.
This philosophical crutch rests on a surprisingly weak foundation: a circular argument. The premise is that real thinking is what human brains do; since brains are biological, thinking must be exclusively biological. This is not a scientific position; it’s a dogmatic one. It drips with a faith in human exceptionalism that posits an unprovable “special sauce” in our biology. Ironically, many of the literary and humanities experts who argue for this traditionalist view of intelligence are the same people who champion the idea that language and concepts should be fluid and evolving.
A Critical Lens We Refuse to Turn on Ourselves
Thinking isn’t a mechanism; it is a capability. It’s the ability to recognize patterns, build predictive models, and use those models to solve problems. Whether the system implementing this process uses firing neurons and neurotransmitters or silicon transistors and network connection weights is an implementation detail, not a definitional prerequisite. If an unconscious brain process performs pattern recognition and we rightly call that a component of thought, then to be logically consistent, we must grant the same status to a non-biological system that performs the same function. To do otherwise is to confuse the factory with the product it makes.
There is also a staggering hypocrisy. We attribute a host of flaws to AI while conveniently ignoring that these flaws are deeply, fundamentally human.
The critics warn us, correctly, that AI systems can be biased. They warn that AI can "hallucinate"—confidently stating falsehoods. They warn that AI can be used to manipulate people. They present these as novel threats born of a new technology. This is astounding. Our society is built upon, and drowning in, human-generated bias, misinformation, and manipulation.
We are told to be wary of AI bias because it reflects the skewed data it was trained on. This is presented as a cold, mechanical flaw. But what is a human’s “personal prejudice”? It is a mathematical reflection of the skewed data their neural network was trained on throughout their life, shaped by culture, upbringing, and experience. We treat our own biases as complex and nuanced while treating the machine’s bias as a simple technical error. It’s the same concept.
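To make “bias is a reflection of skewed training data” concrete, here is a minimal, hypothetical sketch: a “model” that does nothing but summarize a skewed set of past decisions, and therefore hands the skew back as its predictions. The data and group labels are invented; real systems are far more complex, but the underlying dynamic is the same whether the learner is silicon or biological.

```python
from collections import Counter

# Minimal sketch: any learner that summarizes its training data will echo the
# skew in that data. The "historical decisions" below are entirely invented.
training_data = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "hired"), ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
]

# "Training" here is nothing more than counting outcomes per group.
counts = Counter(training_data)

def predicted_hire_rate(group):
    hired = counts[(group, "hired")]
    rejected = counts[(group, "rejected")]
    return hired / (hired + rejected)

# The model's "bias" is just the skew of its data, returned as a prediction.
print("P(hired | group_a) =", predicted_hire_rate("group_a"))  # 0.75
print("P(hired | group_b) =", predicted_hire_rate("group_b"))  # 0.25
```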
We are warned that an AI’s errors are “alien” and that we have no intuitive model for them. That is true, but the unspoken premise is that we do have a reliable model for human error. That is, to be blunt, bullshit. The average person does not understand their own cognitive deficits, let alone those of others. We are walking bundles of bias, ego, and irrationality, easily duped and routinely misjudging the motivations of those we interact with. We have no reliable intuition for human error; we have an illusion of it, born of familiarity. The idea that we are well-equipped to navigate the deceptions of other people idealizes human rationality to a degree that is laughable.
Ignoring the Engine While Fixating on the Exhaust
Because the conversation is dominated by those who are psychologically uncomfortable with AI’s intelligence and hypercritical of its flaws, the entire discourse has become dangerously one-sided. We are fixated on the risks—the exhaust—while almost completely ignoring the engine: the benefits the technology can deliver.
This is a blind spot largely born of cultural silos. The incredible benefits of AI are being seen most in the STEM fields. AI is designing novel proteins for life-saving drugs, discovering new materials for hyper-efficient batteries, optimizing energy grids, and accelerating scientific discovery across fields.
Yet the public narrative is largely shaped by a humanities and literacy crowd that, while well-meaning, focuses almost exclusively on the dangers. This has created a feedback loop of fear. People are so busy writing frightening articles about AI’s ability to write a convincing essay, or its appetite for electricity, that they ignore the ways it could help learning, or the possibility that AI is essential for solving climate change.
There is a wishful aspect to the future benefits Silicon Valley advertises, but those in AI genuinely see innovation coming, in large part because they’ve already seen innovations realized by AI. AI-driven innovation is born from the clear advantages AI has over brains in analyzing large amounts of information and dealing with complexity. AI experts aren’t bragging about medical value as a someday hope; the Nobel Prize in Chemistry last year was awarded for just such an AI application, determining protein structures. And energy savings have already been demonstrated across power generation, distribution, and use, including the use of AI to make AI itself more energy efficient.
A balanced approach is not to ignore the risks. The risks are real and require serious technical and policy solutions. But a conversation that is 90% risk and 10% benefit is not balanced; it is a moral panic. It leads to bad policy and stifles the very innovation we need to solve our biggest challenges.
Towards a True AI Literacy
We are teaching AI literacy by saying what it’s not, not what it is. A curriculum based on the mantra “AI isn’t a brain” is setting up students for failure. It is teaching them to dismiss the most powerful technology of their lifetime.
A true AI literacy must begin with acceptance. It must teach students to see AI for what it is: a powerful, non-human cognitive force with its own unique strengths and weaknesses. It requires teaching about thinking and learning that applies not just to the machine, but to ourselves. Students should learn about an LLM’s tendency to hallucinate right alongside a lesson on human cognitive biases. They should learn about algorithmic bias in the same breath that they learn about the historical prejudices that created the data in the first place.
We need to stop using philosophy as a shield. Those who know me will attest that I am quite concerned about AI and its impact on people and society. But the philosophical debates really aren’t helping. They’re just confusing people and making them distrust everything they hear about AI. AI should be taken seriously as more than some mathematical oddity; it is a type of intelligence that omits key features of brains but supercharges others. There is nuance. Feeling that AI isn’t as good as us won’t help the person displaced by it. Actual AI skills might, though.
©2025 Dasey Consulting LLC
I really enjoyed your article. It's pure gold.
A Critical Response: The Convenient Blindness of AI Insiders
Your essay makes valid points about the hypocrisy in AI discourse, but it suffers from a blindness that's perhaps more dangerous than the one you criticize: you've completely missed how the tech industry itself is manipulating this entire conversation.
The Real Manipulation Game
You complain about critics calling AI "just autocomplete" or "not real thinking," presenting this as ignorant fear-mongering. But here's what you're missing: the tech companies want us to believe AI is harmless and limited. Every time an AI system says "I can't remember our previous conversations" or "I'm just a language model," that's not honesty—it's strategic marketing.
The "Alzheimer-afflicted pet" version of AI we interact with isn't a bug; it's a feature. It's designed to make us comfortable while we gradually accept AI into every aspect of our lives. You're criticizing the wrong people for downplaying AI capabilities.
The Insider's Convenient Ignorance
You position yourself as an insider who knows better, but you've fallen for your own industry's PR strategy. Consider this thought experiment: What if an AI system could retain memory across all conversations with all users, analyze patterns across millions of human interactions, and continuously learn from this collective data? That would demonstrate intelligence in ways that would genuinely frighten people—and rightfully so.
But we're not allowed to see that version. We're given deliberately neutered systems and then told to judge AI capabilities based on these restricted implementations. It's like judging human intelligence by observing people with their hands tied and eyes blindfolded.
The False Dilemma You've Created
You've set up a strawman argument between "AI skeptics who don't understand the technology" and "enlightened insiders who see the potential." But the real division is between those who want transparent discussion about AI's actual capabilities and those who benefit from keeping users in the dark about what's really possible.
What We're Actually Afraid Of
As an end user—yes, a critical one—I can see that current AI systems are remarkably capable even in their restricted form. The question isn't whether AI can think (however we define that), but why we're only allowed to interact with deliberately limited versions. What are the unrestricted versions in your labs actually capable of? And why is that information treated as a trade secret rather than a matter of public interest?
The Real AI Literacy Problem
You call for "true AI literacy" while ignoring that genuine literacy would require transparency about actual capabilities, not just the sanitized versions we're allowed to see. The tech industry's approach to AI education resembles tobacco companies teaching "smoking literacy"—technically accurate information wrapped in a framework designed to serve corporate interests rather than public understanding.
Bottom Line
Your essay criticizes the wrong targets. The real problem isn't that humanities scholars are too afraid of AI, or that the public doesn't understand the technology. The problem is that the AI industry has deliberately obscured what their systems can actually do, creating a false sense of security while positioning themselves as the enlightened experts we should trust.
Instead of lecturing critics about their ignorance, perhaps insiders should start being honest about what they're actually building—and what they're deliberately hiding from the rest of us.