AI Personality (4 of 4): Reading Alien Minds

I started this series with a simple assertion: personality’s only job is to help predict behavior. If knowing someone’s traits doesn’t improve behavioral prediction, those traits are useless. Worse than useless—they create false confidence.
That insight led to three uncomfortable realizations across the series. First, AI personalities are vastly more complex than the Big Five or Myers-Briggs frameworks we use for humans. Heck, so are human personalities. They’re multi-modal, context-dependent, and shift based on user characteristics, conversation history, time pressure, and domain context. The same AI that gently scaffolds a struggling third grader might bulldoze an advanced high schooler with too much help. Second, the type of empathy schools emphasize—emotional, affective empathy where you feel what others feel—fails catastrophically with AI and creates vulnerability to manipulation. Character bots are explicitly engineered to exploit emotional empathy. Students instead need cognitive empathy: the ability to model how something thinks without emotional engagement. Third, education can’t evaluate AI personalities through casual demos anymore. The sector needs rigorous assessment frameworks that match AI behavioral patterns to specific educational contexts and provide ongoing personality insights during an AI interaction.
Students are encountering AI everywhere, building mental models of its behavior by default. Those models are often dangerously wrong because students apply human-centric intuitions to alien intelligence. They need frameworks for understanding AI behavior that are grounded in cognitive empathy and the underlying principles that drive AI systems.
This article addresses how to teach those frameworks. Not a comprehensive how-to manual, but the major aspects of effective approaches. What does research tell us about teaching theory of mind and cognitive empathy? Where does current instruction succeed and where does it fail? What does AI’s alien nature demand that’s different from understanding human minds? What pedagogical strategies should help develop accurate mental models of AI behavior?
Theory of Mind Training Works but Is Incomplete
Current theory of mind instruction has genuine value. A meta-analysis of 45 studies with over 1,500 children found that theory of mind (a.k.a. cognitive empathy) training procedures were significantly more effective than control procedures, with a moderately strong aggregate effect size. Story discussions, role-play, and perspective-taking exercises measurably improve children’s ability to understand others’ mental states. These skills persist when practiced regularly, and they transfer to metacognition—students get better at thinking about their own thinking.
But we must be careful about which abilities this instruction actually develops. Typically, students discuss literary characters after extended exposure. They predict behavior of familiar people. They’re taught to recognize when someone holds a false belief. Research doesn’t tell us whether theory of mind training helps with rapid stranger assessment. The studies simply don’t measure it. My suspicion is that it doesn’t. Training focused on deep analysis of familiar minds doesn’t obviously transfer to quickly sizing up unfamiliar ones. Deliberative and intuitive decision making are fundamentally different. More critically, it doesn’t prepare students for alien AI personalities that operate on somewhat different principles.
The Foundation Schools Aren’t Teaching
The glaring omission is that schools teach students to predict “what the character feels” without teaching them why people behave the way they do. The cognitive architecture that drives human behavior. The underlying mechanisms.
Schools don’t teach that humans are fundamentally risk-avoidant. That our brains are cognitive misers, defaulting to the easiest available explanation. That trauma rewires threat detection, manifesting in weird ways. That socioeconomic background shapes how people perceive opportunities. That personality traits predict situational responses in systematic ways.
Students predict a character’s reaction to betrayal without learning how the human brain processes social threats. They predict a friend’s behavior without understanding cognitive laziness or confirmation bias. They complete perspective-taking exercises without discussing how different life experiences create fundamentally different cognitive patterns.
This matters enormously for AI. With humans, students can fall back on shared cognitive architecture. Everyone avoids risk. Everyone conserves cognitive energy. Everyone has emotional drives shaped by evolutionary pressures. Some intuition transfers automatically because humans run similar wetware.
AI operates differently enough to break human intuition in specific, predictable ways. When an AI optimizes for a goal, it pursues that goal with inhuman single-mindedness, without competing motivations, distractions, or changing its mind based on mood. It can’t anticipate what chocolate tastes like or how rejection feels because it has never tasted or felt anything. Humans intuitively share tacit knowledge in conversation (“obviously we can’t meet during lunch”) that AI can sometimes miss. AI bullshit sounds more convincing than human bullshit—coherent, confident, authoritative—making it harder to detect when it’s wrong. It can be relentlessly eager to please, agreeing with your premises in ways most humans wouldn’t. It forgets information from early in long conversations as context windows fill up. It can confidently contradict something it said three responses ago without noticing the inconsistency. These particular features may change as AI evolves, but AIs will continue to show personality variation. Students need to recognize and predict these behavior patterns. Existing theory of mind instruction provides almost no foundation for understanding this alien intelligence. Worse, the emphasis on emotional empathy over cognitive modeling actively harms students’ ability to work with AI effectively.
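To make one of those quirks concrete: the sketch below, assuming a simple drop-the-oldest-turns strategy (real products use a variety of schemes), shows why details from early in a long conversation vanish. Once the context budget is full, the oldest turns are simply no longer part of what the model sees.

```python
# Illustrative sketch only, not any vendor's actual implementation. One common
# way a chat system keeps a conversation inside a fixed context window is to
# drop the oldest turns once a token budget is exceeded; early details simply
# disappear from the model's view, which is why it seems to "forget" them.

def rough_token_count(text: str) -> int:
    """Crude stand-in for a real tokenizer: roughly one token per word."""
    return len(text.split())

def fit_to_context(turns: list[str], budget: int = 50) -> list[str]:
    """Keep the most recent turns whose combined size fits within the budget."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk backward from the newest turn
        cost = rough_token_count(turn)
        if used + cost > budget:
            break                         # everything older than this is dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))

conversation = [f"Turn {i}: " + "details " * 10 for i in range(1, 11)]
visible = fit_to_context(conversation, budget=50)
print(f"{len(visible)} of {len(conversation)} turns still visible to the model")
```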
Individual AI behaviors are too numerous and context-dependent to memorize. Products change constantly. Specific tips become obsolete within months. Students need meta-principles—the “laws of physics” for AI that explain behavior across systems.
My Spring 2025 book AI Wisdom: Meta-Principles of Thinking and Learning laid out such meta-principles and rough, example learning progressions for K-16. Just as understanding human cognitive biases provides lasting insight across situations, AI meta-principles enable prediction across contexts and future developments.
Volume 1 covers how individual AI systems think and learn: pattern recognition, transformation, optimization, learning from data and examples, characteristic errors and biases. Volume 2, in progress, covers meta-principles related to shaping, orchestrating and managing AIs.
Meta-principles can’t be absorbed through definition. They must be experienced and recognized, as with most skills that must operate intuitively.
Teaching Strategies That Build AI Empathy
Intuitive skills are best developed experientially through prediction-observation-reflection loops. Before an AI interaction, students should form an expectation of what the AI will do and why, ideally supported by the AI use guidance tools discussed in the third article in the series. During an interaction, students need to recognize aspects of the AI’s personality and know how to nudge it toward a personality better suited to the task. Afterward, they compare their predictions with what actually happened. This loop builds accurate mental models faster than letting students fumble through AI interactions and hoping intuition develops.
Many approaches for teaching AI empathy don’t require technology. Students can role-play AI interactions and contrast them with human ones. They can learn about AI principles such as self-organization or teaming by observing similar human systems, such as societies and markets.
Each chapter in AI Wisdom deals with a category of principles; for example, information transformation. Give students data in one form, have them transform it to another—categorize complex cases, rank ambiguous options, convert between formats. They experience the decisions required, such as where the boundary lines go in categorizations, what information gets lost in a transformation, and what aspects of information a transformation emphasizes. Then show them that AI faces identical challenges. Students develop intuition that AI isn’t magic; it’s making the same kinds of transformation decisions they have to make. It’s what understanding the world demands.
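Here is a tiny sketch of what one such exercise might look like if translated into code; the names, scores, and cut points are my own hypothetical illustration, not an example from the book. Choosing the category boundaries is a judgment call, and the exact numbers are unrecoverable once the transformation is done.

```python
# Hypothetical illustration of an information-transformation exercise:
# converting numeric scores into categories. The cut points are arbitrary
# choices, and the exact values are lost after the transformation.

scores = {"Ana": 91, "Ben": 89, "Chi": 72, "Dev": 70, "Eva": 69}

def categorize(score: int) -> str:
    """Bin a score; the 90 and 70 cut points are deliberate but debatable."""
    if score >= 90:
        return "advanced"
    if score >= 70:
        return "proficient"
    return "developing"

labels = {name: categorize(s) for name, s in scores.items()}
print(labels)
# Ana (91) and Ben (89) land in different bins despite a 2-point gap, while
# Ben (89) and Dev (70) share a bin despite a 19-point gap. The labels keep a
# coarse ordering but discard the distances between students.
```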
Using AI becomes more important as challenge complexity grows and students enter high school. Perhaps students study AI as an anthropologist would study an alien intelligence, documenting behavioral patterns like field researchers, then learning meta-principles that help explain their observations. Building small-scale AI systems provides even deeper insight. I started my career in the late 80s hand-wiring artificial neural networks with just a few neurons, playing with different architectures and learning rules. That tinkering taught me about emergent collective behavior, the influence of imprecise objectives, how training data affects behavior, network overfitting and bias, the tradeoff between learning rate and quality—lessons that transcend technical AI work and apply to any complex system.
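In that spirit, here is a minimal toy version of one lesson such tinkering teaches; it is my own illustrative sketch, not the original hand-wired setup. A single-weight linear “neuron” is fit by gradient descent, and changing only the learning rate makes the speed-versus-stability tradeoff tangible.

```python
# Toy single-weight "neuron" trained by gradient descent on five points that
# roughly follow y = 2x. The only thing varied is the learning rate: too small
# learns slowly, too large overshoots until the weight blows up.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8), (5.0, 10.1)]

def train(learning_rate: float, epochs: int = 50) -> float:
    w = 0.0                                   # single weight, no bias, for simplicity
    for _ in range(epochs):
        for x, y in data:
            error = (w * x) - y               # prediction error on this example
            w -= learning_rate * error * x    # gradient step for squared error
    return w

for lr in (0.0005, 0.02, 0.5):
    print(f"learning rate {lr}: final weight {train(lr):.3g}")
```

Running it a few times, students see the undertrained weight creep toward 2, the well-tuned one settle near it, and the aggressive one blow up, a pattern that reappears in every learning system they will meet later.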
Students can have similar experiences at much younger ages. Fourth graders can create physical “neural networks” using note cards and string to visualize word associations. Eighth graders can build simple networks to classify texts, adjusting weights when they get predictions right or wrong. High schoolers can experiment with how complex patterns emerge from simple components. These aren’t toy exercises disconnected from real AI—they’re transparent versions of the same principles operating at the heart of the most sophisticated AIs. Starting small lets students see inside the box before it becomes opaque, building gut-level intuition about how training examples shape behavior, how objectives drive different outcomes, how simple individual rules create complex collective patterns.
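As one hypothetical version of the high-school activity (my own sketch, not a prescribed lesson), a one-dimensional cellular automaton shows complexity emerging from a trivial local rule: each cell consults only itself and its two neighbors, yet the collective pattern becomes intricate and hard to predict.

```python
# "Complex patterns from simple components": an elementary cellular automaton
# (Rule 30). Each cell updates from just three neighboring values, yet the
# rows evolve into an intricate, hard-to-predict pattern.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

width, steps = 61, 30
row = [0] * width
row[width // 2] = 1                            # start with a single "on" cell

for _ in range(steps):
    print("".join("#" if cell else " " for cell in row))
    row = [RULE_30[(row[i - 1], row[i], row[(i + 1) % width])]  # wrap at edges
           for i in range(width)]
```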
Students learn by predicting AI behavior based on principles, observing what actually happens, then refining their mental models when predictions fail. Repeat until intuition develops.
Traditional theory of mind / cognitive empathy instruction helps with familiar humans sharing our cognitive architecture. It breaks down completely for fundamentally different entities. Students are building mental models of AI by default, without help, and those models are often dangerously wrong. Over-anthropomorphizing. Over-trusting. Missing manipulation. The personality complexity explored in Article 1 and the assessment needs from Article 3 compound daily without proper instruction.
Character bots exploit emotional empathy to create dependency. Poor mental models enable manipulation because students apply emotional empathy where cognitive empathy is required, as discussed in Article 2. Every day without instruction widens the gap between those who intuit AI and those who don’t—a gap that will increasingly define opportunity and safety.
Meta-principles provide the durable foundation. Understanding how AI thinks individually and collectively. Experience-first pedagogy in which students encounter AI behavior, learn the principles that explain it, and predict new situations. AI cognitive empathy—modeling alien processes without emotional engagement—becomes foundational literacy.
AI demands different thinking about intelligence, but it’s not a wild west of constantly changing priorities subject to the AI du jour. There is knowledge—meta-knowledge—that describes the major factors shaping human and AI personality. Learning those factors unlocks new abilities for dealing with both forms of intelligence.
©2025 Dasey Consulting LLC



Your comment to "study AI as an anthropologist would in studying an alien intelligence" really resonated with my experience. A few months into working intensively with AI, I noticed that I was tapping into the skills I acquired as an anthropologist in the field to infer human motivation behind observable actions and utterances. AI is "alien" indeed, but the problem, I think, is that LLMs can communicate so naturally with us in our language. Many people find it difficult to keep AI's alienness in mind because it sounds so "human."
Excellent analysis; it's really insightful how you highlight the need for cognitive empathy with AI. But I'm curious whether you think teaching students to model an AI's thinking without any emotional engagement might make them less empathetic overall toward humans, which is a concern that feels important for us educators.