
In Orson Scott Card's Ender's Game, the military of the future values commanders who combine strategic thinking with psychological balance—leaders who can make split-second judgments under pressure while maintaining empathy for their teams and emotional control. Rather than throwing young officers into real battles to learn through trial and error, they create increasingly sophisticated training simulations that develop these complex leadership qualities through extensive practice. To a large degree that’s what military leadership has always done—practice a lot—but in Ender’s world that practice was quite realistic, even physical.
AI is making all cognitive workers into managers and leaders. What does that mean practically? According to Jeff Bezos, "As a senior executive, what do you really get paid to do? You get paid to make a small number of high-quality decisions. Your job is not to make thousands of decisions every day." By his account, if he makes three good decisions a day, "that's enough."
Yes, Ender’s Game is science fiction, and there are ethically disturbing aspects of that book and its series. I’m not trying to create an analogy to the book as a whole; just to the training aspect. And I usually hate what comes from Jeff Bezos’ mouth, but he’s right, and most leaders don’t seem to understand that a few great strategic decisions beat a bazillion tactical ones that your subordinates can handle. Subordinates like AI.
This is where work is heading. As AI handles more of the doing, humans will become commanders of AI systems, making fewer but much more complicated and consequential decisions. And just like Ender's training, preparing for these judgment calls will require extensive practice in virtual environments that mirror real-world complexity.
When AI Does the Work, Humans Should Do the Judging
The shift is already underway. AI systems can write code, design graphics, analyze legal documents, and diagnose medical conditions with increasing accuracy. What they can't do—yet—is decide which code is worth writing, exercise taste, or weigh the nuances of context and personality that determine whether a legal strategy fits a client's values.
These aren't just "bigger picture" tasks—they're qualitatively different cognitive challenges. They require wisdom skills, or, if you wish, judgment skills: the ability to synthesize information from multiple domains, consider competing values and stakeholder interests, anticipate second- and third-order consequences, and make decisions where there's no clear right answer.
Unlike the expertise-oriented tasks that AI can excel at, wisdom-based judgments rarely repeat exactly. Each situation has unique constraints, contexts, and human factors. A marketing director deciding whether to pivot a campaign doesn't analyze data the same way every time—they must read the cultural zeitgeist, anticipate competitor responses, balance short-term metrics against long-term brand equity, and consider team morale. These decisions shouldn't be automated, though AI might advise in some way, because they're fundamentally about human values, priorities, and interpretation.
Work then becomes orchestrating AI systems to execute the vision these judgments create. Tell an AI to "write compelling social media content for our spring product launch" and it can generate a bevy of options. But deciding which voice resonates with your brand, which timing maximizes impact, and which messaging aligns with your company's values? That's still distinctly human territory.
Why Strategic Judgment Skills Resist AI Replacement
The jobs that will remain human aren't staying human by accident. They have specific characteristics that make them AI-resistant, at least for the foreseeable future.
First, they involve one-off decisions that don't generate enough similar examples for AI training. How do you train an AI to make the judgment call about whether to recall a product, restructure a team, or enter a new market? Each situation is unique, with different stakeholders, market conditions, and organizational constraints.
Don’t get me wrong; you will see products of this sort marketed by tech companies. But they can’t easily know how well the judgments will play out in the real world, and what they are banking on is that even if the AI stinks at making those decisions, people stink at them too.
Second, the most complex decisions often can't be scored objectively. Did the CEO make the right call in delaying the product launch? Success depends on counterfactuals we'll never know and outcomes that unfold over years. Unlike chess, where you can definitively say who won, the highest-level organizational judgment calls exist in a world of competing metrics and delayed feedback.
Third, they're inevitably entangled with human values, cultural context, and ethical considerations that don't have technical solutions. Deciding how to handle employee layoffs, whether to serve a controversial client, or how to balance profit against environmental impact requires weighing values that algorithms can't prioritize.
These high-stakes, low-frequency decisions are the hardest to prepare for through traditional experience. Being great at your job means being able to handle the unusual edge cases well, and that might take an entire career to experience. What if those moments are the main contributions of your career? Will you be ready? Will the judgments be high quality?
The Virtual Apprenticeship Age
If the most important skills are judgment skills, and judgment improves through varied experience, we need ways to compress decades of rare situations into months or years of intensive practice.
AI changes this completely. Whereas creating realistic professional simulations once required massive resources—think flight simulators or medical cadavers—AI can now generate infinite scenarios tailored to any profession, any level of complexity, any specific weakness you need to work on. (And yes, I expect quality control is applied before students get access!)
But practice doesn't have to start with the most complex, ambiguous situations that resist easy scoring. It can begin with scenarios that professions already know how to evaluate—situations where experienced practitioners can clearly identify better and worse approaches. A marketing crisis simulation might start with clear metrics: response time, stakeholder communication, brand sentiment recovery. Even when the ultimate "right answer" is debatable, specific judgment principles can be practiced and scored: "Don't commit all your resources at the beginning when the situation might evolve," or "Gather input from legal before making public statements."
As competence builds, scenarios can become more nuanced, and scoring can shift from algorithmic to peer-based. Other professionals can evaluate decision quality, AI can aggregate expert opinions, and learners can defend their reasoning in ways that build both judgment skills and the ability to articulate their thinking process. Medical residents already do this through case presentations—explaining their diagnostic reasoning to attending physicians who evaluate both the conclusions and the thought process.
Virtual situations offer more flexibility, can focus on specific aspects of judgment or process, and provide safe ways to fail and iteratively improve. I have sometimes called them “judgment playgrounds.” Imagine a marketing professional practicing crisis communication by facing AI-generated scenarios (see “Wise Up to What’s Most Important in a Career”): a product defect discovered days before a major launch, a viral social media backlash, a celebrity endorser involved in scandal. Each scenario can test different aspects of judgment—stakeholder prioritization, communication timing, damage assessment, team coordination. The AI can adapt difficulty based on performance, introduce monkey wrenches, and provide feedback on decision quality.
Or consider a school principal practicing discipline decisions with virtual students, parents, and teachers who have realistic personalities, diverse cultural backgrounds, and competing interests. The scenarios could escalate from routine disruptions to complex situations involving special needs, family trauma, or community politics. Each case builds pattern recognition for the kinds of nuanced decisions that determine whether a school thrives or fragments.
Enhanced case study learning builds knowledge, but experiential judgment development builds skills: active decision-making under pressure, with consequences (even virtual ones) that unfold based on your choices. You can't think your way through a hostile school board meeting or a product recall crisis. You have to practice your way through them.
AI-generated practice can safely expose people to the full range of professional challenges, including the ones too dangerous, expensive, or rare to encounter naturally. A surgeon can practice difficult procedures, a teacher can work through classroom disruptions, a manager can navigate team conflicts—all without real-world stakes while the skills are developing.
Education's Resistance to Reality
Most educators tell me that education will continue much as it has, with perhaps some AI-assisted lesson planning or automated grading thrown in. This bewilders me. We're talking about the most fundamental shift in human work since industrialization, and the response is to tweak the current system?
The skills educators say they want to develop—critical thinking, creativity, collaboration, adaptability—are precisely the judgment skills that resist traditional instruction. You can't lecture someone into making good decisions under pressure. You can't assign reading that teaches empathy. You can't give multiple-choice tests that measure leadership.
These skills develop through experience, and experience has always been education's scarce resource. How many students get authentic leadership opportunities? How often do they face real consequences for their decisions? How much practice do they get with ambiguous problems that don't have clear solutions?
The traditional answer has been "they'll learn this on the job," but that's increasingly inadequate. Entry-level positions now require judgment skills that used to develop over decades. And with work changing so rapidly, waiting years to practice the skills that matter most is catastrophic.
Games and simulations offer the only scalable solution. They can provide the experiential learning that wisdom skills require, at the volume and variety that traditional education can't match. They're essentially virtual apprenticeships, allowing students to practice professional judgment long before they have professional responsibilities.
AI makes sophisticated job simulations cheap and accessible, so employers won't need degrees as proxies for competence. They can directly assess candidates through realistic scenarios that reveal actual judgment capabilities. Companies could partner with specialized training organizations to create alternative pathways to professional readiness—ones focused entirely on demonstrable skills rather than credit hours and general education requirements. Higher education faces either its greatest opportunity to lead workforce development or its biggest threat to relevance.
You don’t have to believe AI will become superhuman to recognize that the tools already out there will, over time, change the fundamental nature of valuable human work. The future belongs to those who can make excellent judgments in novel situations, coordinate AI systems toward complex goals, and navigate the distinctly human elements of any profession.
Schools can either embrace this reality and become training grounds for professional judgment, or they can continue preparing students for jobs that won't exist. The tools to create engaging, sophisticated practice environments are finally here.
Just as Ender practiced from the beginning for real command, students practicing professional judgment through AI-generated scenarios are preparing for real professional responsibility. The difference is they'll know it from the start, and they'll be ready when their moment comes.
©2025 Dasey Consulting LLC