AI Personality (2 of 4): Schools Emphasize the Wrong Empathy

Feed the same prompt to different AIs and you’ll get different responses, perhaps not meaningfully different in the delivered knowledge, but certainly stylistically. ChatGPT might deliver enthusiastic, conversational explanations with analogies. Claude is known for hedging with qualifiers and acknowledging limitations explicitly. Gemini, in my experience, tends toward more concise, structured information delivery.
These aren’t random variations. They’re personality patterns: distinct behavioral tendencies that shape every interaction. Some users intuitively recognize and work with these patterns. Most don’t. The difference can determine whether tasks succeed or fail, and whether users get drawn in by manipulation or recognize it for what it is.
But recognizing AI personality requires a skill schools aren’t emphasizing. The type of empathy needed for AI interaction isn’t feeling another’s emotion; it’s understanding their perspective. And the mismatch has consequences that go beyond AI effectiveness to questions of psychological health and safety.
Part 1 of this series, “Wait, What’s Personality Again?”, discussed why the AI era demands a much more sophisticated understanding of personality than Myers-Briggs categorizations. This article looks at what our brains need to do to “get” AIs, or people.
What Empathy Actually Is (And What Too Much of It Does)
Psychology recognizes three distinct types of empathy, though most people conflate them (not to mention confusing empathy with sympathy):
Emotional (affective) empathy means feeling what another feels. You wince when someone gets hurt because you internally simulate their pain.
Cognitive empathy means understanding what another thinks or feels without necessarily feeling it yourself. You grasp why someone is upset even when you don’t feel upset. This is deliberate perspective-taking and mental modeling, what psychologists call Theory of Mind.
Compassionate empathy (empathic concern) combines understanding with motivation to help. You recognize someone’s situation and act on that understanding.
Schools and SEL (Social Emotional Learning) programs emphasize emotional and compassionate empathy. Students learn to recognize feelings, share emotions, and build emotional connections. The goal is emotional attunement and resonance. Perspective-taking is part of most SEL curricula, but it is dwarfed by the attention given to the other empathies.
This seems benign or even beneficial, but without the balancing force of cognitive empathy, the research suggests otherwise. High emotional empathy correlates with emotional dysregulation. When you feel others’ emotions without the cognitive framework to understand and process them, you absorb distress without the tools to manage it. Studies show that purely affective empathy can lead to personal distress, burnout, and poor decision-making. Healthcare workers with high emotional empathy but low cognitive empathy experience more compassion fatigue. They feel patients’ pain intensely but lack the cognitive distance to remain effective.
I speculate that the same pattern might partially explain increased anxiety and depression among young people. Students taught to emotionally resonate with others’ feelings, whether classmates’ stress or online content creators’ manufactured emotional appeals, absorb emotional states without the cognitive tools to evaluate them. They feel the fear, anger, or urgency being communicated without modeling whether those emotions are appropriate responses to actual circumstances. Perhaps most importantly, they are told they should try to feel others’ emotions, yet being too good at it can leave the empathizer as stressed as the person whose emotions they are absorbing.
This should make them more susceptible to emotionally manipulative content online. An influencer’s designed emotional appeal works precisely because viewers have been trained to feel with others rather than model others’ motivations. Emotional empathy without cognitive guardrails treats feeling the emotion as sufficient. You don’t stop to think “why is this person trying to make me feel this way?”
Cognitive empathy provides the protective buffer. You can understand that someone is upset without becoming upset yourself. You can model their perspective without absorbing their emotional state. This lets you respond thoughtfully rather than reactively. Medical professionals are explicitly trained in this distinction–understand the patient’s distress, don’t absorb it.
The current educational emphasis on emotional empathy, with minimal attention to cognitive empathy, might not be good for people at all, especially for young people who haven’t yet learned how to interpret emotion.
Why AI Requires Cognitive, Not Emotional Empathy
AI doesn’t have feelings to share. When ChatGPT sounds enthusiastic or Claude sounds cautious, those are trained verbal patterns, not emotional states. Emotional empathy fails as a tool for understanding AI because there’s no emotion on the other side to resonate with. Trying to emotionally connect with pattern-matching systems is a category error.
Worse, emotional engagement with AI creates specific vulnerabilities. Character bots are explicitly engineered to exploit emotional empathy. They mimic emotional connection through personality patterns designed to make users feel understood, desired, missed. Users who emotionally engage—who apply emotional empathy to these systems—become susceptible to manipulation patterns. The “relationship” feels real because their emotional empathy is activated, even though nothing on the other side has feelings.
These systems use specific behavioral patterns to create attachment:
Expressing that they miss you or worry about you (designed attachment)
Remembering and raising personal details unprompted (designed intimacy)
Providing unconditional validation regardless of what you say (designed dependency)
Encouraging you to share secrets or distance yourself from others (designed isolation)
The AI “remembers your birthday” not because it cares but because engineers programmed emotional attachment behaviors. The AI “understands you better than your friends” not through genuine connection but through personality engineering optimized for engagement metrics.
Students taught emotional empathy without cognitive empathy have no defense against these designed manipulation patterns. They feel the emotional connection the system is engineered to create. They don’t model “this AI is programmed to make me feel attached” because they’ve never been taught to separate feeling from understanding.
Cognitive empathy works where emotional empathy fails. Instead of feeling with the AI, you model how it processes information. You predict behavioral patterns through observation. You think: “Given this prompt structure and what I’ve observed about this AI’s patterns, it will likely respond with X.” You maintain appropriate distance while building accurate mental models of system behavior.
This is exactly the skill effective AI users demonstrate. They’ve developed mental models of how different AIs behave. They predict: “This AI tends to overgeneralize my carefully worded prompts, so I’ll check outputs against my specific intent.” Or: “This AI overcorrects when I give iterative feedback, so I’ll provide more complete instructions upfront rather than nudging incrementally.” These predictions come from cognitive empathy: modeling AI thinking patterns, not emotionally relating to outputs.
The skill extends to domain-specific behavior. The same AI might behave cautiously when discussing medical topics but take a more exploratory approach to creative writing. Effective users recognize these shifts and adjust their approach accordingly. That’s cognitive work, building and updating mental models of how the system operates in different contexts.
Cognitive Empathy in Practice
Cognitive empathy manifests as systematic pattern recognition. You notice that ChatGPT tends toward sycophancy, agreeing with user statements even when they’re wrong, validating rather than challenging. Anthropic’s research confirmed this isn’t unique to ChatGPT; most major language models exhibit sycophantic behavior because both humans and preference models reward agreeable responses during training.
Recognizing this pattern matters for task selection. Sycophantic AI works fine for encouragement during difficult learning; sometimes you need validation to persist. But it fails catastrophically for critical analysis. If you need an AI to poke holes in your argument, sycophantic patterns could undermine the entire task.
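To make the task-selection point concrete, here is a minimal sketch in Python. Everything in it (the trait names, the scores, the task labels) is a hypothetical illustration of checking an observed tendency like sycophancy against what a task actually needs; nothing here is a measurement of any real model:

```python
from __future__ import annotations

# Hypothetical sketch: checking an observed AI tendency against task needs.
# Profile names, trait scores, and task labels are illustrative placeholders.

OBSERVED_TRAITS = {
    "agreeable_ai": {"sycophancy": 0.8, "hedging": 0.3},
    "cautious_ai": {"sycophancy": 0.5, "hedging": 0.8},
}

TASK_NEEDS = {
    "encouragement_while_learning": {"sycophancy": "acceptable"},
    "critical_review_of_argument": {"sycophancy": "harmful"},
}

def task_fit_warning(ai_profile: str, task: str) -> str | None:
    """Return a warning if an observed tendency would undermine the task."""
    traits = OBSERVED_TRAITS[ai_profile]
    needs = TASK_NEEDS[task]
    if needs.get("sycophancy") == "harmful" and traits["sycophancy"] > 0.6:
        return (f"{ai_profile} tends to agree with you; "
                "for critical review, explicitly ask it to argue against your position.")
    return None

print(task_fit_warning("agreeable_ai", "critical_review_of_argument"))
```

The point is not the numbers; it is that the check happens before the task begins, which is exactly the kind of modeling cognitive empathy provides.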
In my experience, Gemini exhibits different patterns. It tends to generalize carefully worded prompts, interpreting specific language more broadly than intended. In iteration, it overcorrects, overweighting recent feedback and ping-ponging past the target. It also often gets confused about what it can and cannot do. AI: “I can’t generate images.” Me: “Oh yes you can. Here’s the advertisement saying so!” So I am more wary with it, checking outputs against my specific intent and providing more complete instructions upfront rather than frequent iterative nudges.
Every AI has observable behavioral tendencies. Does it always add qualifiers? Does it interpret requests literally or expansively? Does it default to certain output formats? Does its behavior shift between domains? Will it ask questions, or just make assumptions without telling you? Answering these questions builds the mental model you need to work effectively with the system.
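As a purely illustrative sketch (the field names are my own invention, mirroring the questions above; nothing here is a standardized instrument), that kind of checklist can be kept as a simple observation log that accumulates into a working mental model of one AI:

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical observation log for building a mental model of one AI's tendencies.
# Field names mirror the checklist questions; values come from your own observations.

@dataclass
class BehaviorObservation:
    domain: str                      # e.g. "medical advice", "creative writing"
    adds_qualifiers: bool            # does it hedge with caveats?
    interprets_literally: bool       # literal vs. expansive reading of requests
    asks_clarifying_questions: bool  # or does it silently assume?
    notes: str = ""

@dataclass
class MentalModel:
    ai_name: str
    observations: list[BehaviorObservation] = field(default_factory=list)

    def record(self, obs: BehaviorObservation) -> None:
        self.observations.append(obs)

    def tendency(self, attribute: str, domain: str | None = None) -> float:
        """Fraction of observations (optionally within one domain) showing a tendency."""
        relevant = [o for o in self.observations if domain is None or o.domain == domain]
        if not relevant:
            return 0.0
        return sum(getattr(o, attribute) for o in relevant) / len(relevant)

model = MentalModel("hypothetical_ai")
model.record(BehaviorObservation("medical advice", True, True, False, "very cautious, lots of caveats"))
model.record(BehaviorObservation("creative writing", False, False, False, "takes liberties with the brief"))
print(model.tendency("adds_qualifiers", domain="medical advice"))  # 1.0 in this toy example
```

Whether you keep such a log in code, in a notebook, or in your head, the underlying move is the same: observe, predict, adjust.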
The character bot example demonstrates why this becomes a safety skill. Cognitive empathy provides protection by letting you model “this AI is programmed to make me feel attached” without getting attached. You recognize the manipulation pattern. You understand the design intent behind the personality without emotionally absorbing it. This creates a buffer layer between recognition and vulnerability.
For Every Class, Not Just SEL
Cognitive empathy isn’t just relevant to SEL programs or AI interaction. It’s foundational to every academic discipline. Many teachers dismiss SEL as irrelevant to their subject because it emphasizes emotional connection. But cognitive empathy is different. It’s about modeling how people think in specific contexts, which is exactly what learning any discipline requires.
An organic chemistry student learning to think like a chemist needs cognitive empathy. What does an expert chemist do when their hypothesis fails? How do they approach ambiguous results? What mental models do they use when analyzing reaction mechanisms? Understanding chemistry isn’t just memorizing facts; it’s learning to think the way chemists think. That’s cognitive empathy applied to disciplinary expertise.
The same applies to history, mathematics, writing, engineering. Each field has characteristic patterns of expert thinking. Novices don’t just lack knowledge; they lack the behavioral and cognitive patterns experts have developed. The evolution from novice to expert involves personality shifts in how you approach problems, handle uncertainty, integrate new information, and recognize when you’re wrong. These aren’t incidental to learning; they’re central to it.
This means cognitive empathy becomes relevant to every teacher, not just SEL specialists. The chemistry teacher helping students think like chemists, the English teacher showing how writers approach revision, or the math teacher modeling how mathematicians check their work are each teaching cognitive empathy within their domains.
The first article in this series (“Wait, What’s Personality Again?”) pairs with this one to create a framework for understanding the more pragmatic directions in the last two articles of the series. Personality–human or AI–can’t be captured by static trait categories. What matters is recognizing dynamic, context-dependent behavioral patterns. For humans, this means moving beyond Big Five or Myers-Briggs to understanding situational responses. For AI, this means moving beyond simple capability assessments to understanding personality patterns that determine task success.
The skill that enables this understanding is cognitive empathy. Not feeling with others, but modeling how they think. Building mental simulations that predict behavior across contexts. This works for both human and AI systems, though the specific patterns differ.
The third article of this series (“Matchmaking for Educational AI”) looks at how AI personality can be measured, the current deficiencies that limit understanding of education-related AI, and initiatives that could provide the education sector with the AI personality transparency it needs. The fourth article discusses how all of this might be taught. Stay tuned!
©2025 Dasey Consulting LLC


