Educators Must Get Ready for Another Paradigm Shift – AI Agents
Everyone Needs Management Skills

Huge swaths of the education world are grappling with ChatGPT and other Generative AI (GenAI). AI is changing the world. Education has barely budged.
The AI world isn’t waiting; the next major change is the emergence of AI agents, and it will probably require just as big a mindset shift as GenAI did.
AI agents have some degree of autonomy. They make decisions and take actions without constant human oversight. Agentic AI doesn't just respond to commands but proactively pursues goals. More advanced agents could adapt those goals, or even set their own. That is a fundamentally different AI world.
People will sometimes still need to prompt an individual AI to get what they want, and learning to direct a single AI probably remains an important step in a learning progression. But often the better choice will be multi-AI teams (e.g., an author-critic pair), and more commonly we will be controlling AI behavior at a higher level.
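To make the author-critic idea concrete, here is a minimal sketch of the loop. The `call_llm` helper is a hypothetical stand-in for whatever model client is actually in use; the loop structure, not the stub, is the point.

```python
# Minimal author-critic loop. `call_llm` is a hypothetical stand-in for a
# real model API client; it returns canned text so the sketch runs as-is.

def call_llm(instruction: str, content: str) -> str:
    """Hypothetical model call; replace with a real API client."""
    return f"[model output for: {instruction[:30]}...]"

def author_critic(task: str, rounds: int = 2) -> str:
    draft = call_llm("Write a first draft for this task.", task)
    for _ in range(rounds):
        # The critic attacks the draft; the author revises against the critique.
        critique = call_llm("List concrete flaws in this draft.", draft)
        draft = call_llm(f"Revise the draft to fix these flaws:\n{critique}", draft)
    return draft

print(author_critic("Explain photosynthesis to a 7th grader."))
```

The human's job shifts from wording a single prompt to setting the roles, the number of rounds, and the stopping criteria.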
Once AI agents are out there, they will necessarily interact with one another. My calendar-scheduling agent will have to deal with yours, and each has different goals. That sets off negotiation, power dynamics, and much of the other messiness that arises in human cultures. Agents start a domino fall toward AI “societies” that could be highly beneficial or highly problematic, depending on the skill of the people managing them.
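A toy sketch of that negotiation, with invented preference scores and acceptance thresholds, shows how quickly even two agents become a bargaining problem:

```python
# Toy negotiation between two calendar agents with different preferences.
# Each agent scores candidate slots 0-1; a slot is acceptable above its threshold.

SLOTS = ["Mon 9am", "Mon 3pm", "Tue 10am", "Wed 1pm"]

class CalendarAgent:
    def __init__(self, name, prefs, threshold=0.5):
        self.name, self.prefs, self.threshold = name, prefs, threshold

    def propose(self, rejected):
        # Offer the best remaining slot from this agent's point of view.
        open_slots = [s for s in SLOTS if s not in rejected]
        return max(open_slots, key=lambda s: self.prefs[s], default=None)

    def accepts(self, slot):
        return self.prefs[slot] >= self.threshold

def negotiate(a, b):
    rejected = set()
    proposer, responder = a, b
    while len(rejected) < len(SLOTS):
        slot = proposer.propose(rejected)
        if slot is None:
            break
        if responder.accepts(slot):
            return slot
        rejected.add(slot)
        proposer, responder = responder, proposer  # alternate who offers
    return None  # no mutually acceptable slot; escalate to the humans

alice = CalendarAgent("Alice", {"Mon 9am": 0.9, "Mon 3pm": 0.2, "Tue 10am": 0.7, "Wed 1pm": 0.4})
bob = CalendarAgent("Bob", {"Mon 9am": 0.1, "Mon 3pm": 0.8, "Tue 10am": 0.6, "Wed 1pm": 0.3})
print(negotiate(alice, bob))  # -> Tue 10am, the first mutually acceptable slot
```

Even here, the outcome depends on who proposes first and where each threshold sits, which is exactly the kind of dynamic the managing human has to anticipate.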
Everyone managing. How do you think that will go? I won’t even bother to look up the studies on human management effectiveness and potential. We anecdotally know the answer. Most people won’t be good people managers, or at least it’ll take a long time to become good. Now everyone will need many of the same management skills.
AI companies know a lot can go wrong with agents, which is why they’re taking their time releasing them. But all the big AI companies and tons of startups are configuring their next-generation software to have the right hooks for agents, and are designing platforms for AI agents all the way down to the operating-system level.
It’s not if, but when.
Simple Agents to Agent Societies
The challenges of autonomous systems aren't new. In July 1997, days after the Mars Pathfinder landed on Independence Day, NASA engineers faced a crisis millions of miles away: the lander kept resetting itself because of a priority inversion in its scheduling system. A seemingly sensible rule (always run the most urgent task first) left a high-priority task blocked on a resource held by a low-priority task, which was itself endlessly preempted by medium-priority work, producing a cascade of watchdog resets. Even this relatively straightforward autonomous system demonstrated how sensible-looking rules can lead to unexpected failures.
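For readers who want to see the pattern, here is a toy, discrete-time sketch of priority inversion. It is illustrative only, not flight code:

```python
# Toy rendering of the Pathfinder failure mode (priority inversion). LOW holds
# a shared lock that HIGH needs; the rule "always run the highest-priority
# ready task" lets a stream of MED work preempt LOW forever, so HIGH stays
# blocked until a watchdog resets the system.

WATCHDOG_LIMIT = 5        # ticks HIGH may stay blocked before a reset
med_work_ready = True     # medium-priority work keeps arriving
blocked_ticks = 0

for tick in range(10):
    # Scheduler rule: highest-priority *ready* task runs. HIGH is blocked on
    # LOW's lock, and MED outranks LOW, so MED runs every tick and LOW never
    # gets the CPU time it needs to finish and release the lock.
    running = "MED" if med_work_ready else "LOW"
    blocked_ticks += 1
    print(f"tick {tick}: running={running}, HIGH blocked for {blocked_ticks} ticks")
    if blocked_ticks > WATCHDOG_LIMIT:
        print("watchdog: HIGH missed its deadline -> system reset")
        break

# The actual fix was priority inheritance: while holding the lock, LOW
# temporarily runs at HIGH's priority, finishes quickly, and HIGH proceeds.
```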
Today's AI capabilities make these challenges far more complex. Consider an AI study assistant with access to a student's school accounts and the simple directive to "help get better grades." Without proper constraints, such an agent might autonomously complete homework assignments, send 3AM emails to teachers with advanced questions, enroll in unauthorized online courses, or share private academic concerns with counselors and parents without permission. The agent, interpreting its goal literally, could create chaos while technically pursuing its objective. What seems like a helpful tool could quickly become an unwanted source of academic and personal disruption.
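One hedge against that failure mode is an explicit, default-deny action policy the agent must clear before acting. The action names and rules below are invented for illustration, not any vendor's API:

```python
# Default-deny policy check for a hypothetical study assistant. Anything not
# explicitly allowed is blocked; sensitive actions wait for a human.

ALLOWED = {"summarize_notes", "draft_practice_quiz", "suggest_study_schedule"}
NEEDS_APPROVAL = {"email_teacher", "share_with_counselor", "enroll_in_course"}

def is_quiet_hour(hour: int) -> bool:
    return hour >= 22 or hour < 7          # no 3AM surprises

def vet(action: str, hour: int) -> str:
    if action in NEEDS_APPROVAL:
        return "queue for student/parent sign-off"
    if action not in ALLOWED:
        return "block"                     # anything unlisted never runs
    if is_quiet_hour(hour):
        return "defer until morning"
    return "run"

print(vet("draft_practice_quiz", 15))      # run
print(vet("email_teacher", 3))             # queue for student/parent sign-off
print(vet("submit_homework", 15))          # block
```

Writing that policy, and noticing what it fails to cover, is precisely the management skill at issue.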
These individual agent challenges pale in comparison to what happens when multiple AI agents interact. In 2019, OpenAI demonstrated this through a simple hide-and-seek game where AI agents developed sophisticated strategies without explicit instruction. Hider agents learned to build fortifications using available objects, while seeker agents discovered how to use ramps to overcome these defenses. An arms race of strategies emerged purely from agent interactions, showing how complex behaviors can arise from simple rules and competing objectives.
Even more striking, Stanford researchers recently created a simulated town where AI agents developed social behaviors reminiscent of human society. The agents formed friendships, organized events, and coordinated group activities without direct human input. One agent even planned a Valentine's Day party, working with others to spread the word and prepare for the celebration. These examples show how AI agents can develop their own cultures, norms, and social dynamics.
The implications extend beyond virtual environments. In the real world, we're already seeing autonomous systems interact in financial markets, where algorithmic trading agents make split-second decisions that can ripple through the global economy. Smart-city initiatives like Singapore's Smart Nation are working to deploy networks of AI agents to manage traffic flow, energy usage, and emergency responses. There are both opportunities and risks as agents interpret their goals and interact with other autonomous systems.
Desired Core Competencies
Agents will be controllable through natural language, just like current AI, but that accessibility makes it even more critical that all students understand how to properly configure and manage them. Four key competencies stand out:
1. Strategic Thinking and Problem Selection
Students need to learn when AI agents can help and when human judgment is essential. This involves task delegation, but also an understanding of the limitations and potential consequences of autonomous systems. For example, when might an AI study assistant's suggestions undermine genuine learning versus when might they enhance it? How do we balance automation with human oversight in critical decisions?
2. Resource Allocation and Optimization
AI agents consume resources—not just computing power, but attention, data, and oversight capacity. Managing these resources means understanding tradeoffs and setting appropriate constraints. Students must learn to balance agent autonomy with control, much like managing a team requires balancing supervision with independence. This includes understanding how resource constraints affect agent behavior and system-wide outcomes.
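A minimal sketch of what hard resource caps on an agent loop might look like; the budget categories and per-step costs are invented for illustration:

```python
# Hard caps on an agent's resources. The agent works until any budget runs
# out, then must stop and report rather than continue unsupervised.

class Budget:
    def __init__(self, tokens=10_000, tool_calls=20, human_reviews=3):
        self.tokens, self.tool_calls, self.human_reviews = tokens, tool_calls, human_reviews

    def charge(self, tokens=0, tool_calls=0, human_reviews=0) -> bool:
        """Deduct one step's cost; False means the agent must halt."""
        if (tokens > self.tokens or tool_calls > self.tool_calls
                or human_reviews > self.human_reviews):
            return False
        self.tokens -= tokens
        self.tool_calls -= tool_calls
        self.human_reviews -= human_reviews
        return True

budget = Budget()
steps = 0
while budget.charge(tokens=1_500, tool_calls=2):   # assumed cost per step
    steps += 1                                     # ...one unit of agent work
print(f"agent halted after {steps} steps with its token budget exhausted")
```

Choosing those numbers is itself a tradeoff exercise: tight budgets limit damage but also limit usefulness.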
3. Performance Monitoring and Quality Control
Like human teams, AI agents can drift from their objectives or develop unwanted behaviors. Students need to learn how to track performance, identify issues, and make corrections before small problems become big ones. This includes understanding how agents might interpret goals differently than intended and how to detect when agent behavior is misaligned with desired outcomes.
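As a sketch, monitoring can be as simple as scoring each agent action against a rubric and flagging sustained drops. The scores below are simulated, and the thresholds are arbitrary knobs:

```python
# Drift detection over a rolling window of quality scores. A sustained drop
# below baseline-minus-tolerance triggers intervention before it compounds.

from collections import deque

BASELINE, TOLERANCE, WINDOW = 0.80, 0.10, 5
recent = deque(maxlen=WINDOW)

def check(score: float) -> bool:
    """Record one action's quality score; True means pause the agent."""
    recent.append(score)
    return len(recent) == WINDOW and sum(recent) / WINDOW < BASELINE - TOLERANCE

scores = [0.85, 0.82, 0.78, 0.70, 0.66, 0.61, 0.58]   # a slow slide off-goal
for i, s in enumerate(scores):
    if check(s):
        print(f"action {i}: rolling average fell below {BASELINE - TOLERANCE:.2f}; pause the agent")
        break
```

The point students should absorb: no single bad output trips the alarm; it's the trend that signals misalignment.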
4. Ethical Boundary Setting
AI agents don't come with built-in moral compasses. Clear guidelines about acceptable behavior and methods for enforcing them are essential. This includes preventing manipulative behavior and ensuring agents remain aligned with human values and goals. Students must learn to anticipate ethical implications and design appropriate constraints before deploying autonomous systems.
Educating About Agents, and Educating with Them
The beauty of teaching about AI agents lies in their power as a learning tool for understanding all complex, networked systems. Students can use agent simulations to explore how economies function, how ecosystems maintain balance, how societies evolve, and how political systems respond to change. These simulations allow students to get feedback on "what if" scenarios that are impossible with traditional teaching methods.
Through agent-based models, students can essentially play with history and the future. What if different rules had governed financial markets during the 2008 crisis? How might varying environmental policies affect climate change outcomes? What happens to traffic patterns when autonomous vehicles become prevalent? By adjusting agent rules and observing emergent behaviors, students gain intuition about complex systems that static examples can't provide.
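Even a few dozen lines of code can exhibit emergence. The sketch below is in the spirit of Schelling's classic segregation model: each agent follows one tiny rule, and street-level patterns appear. All parameters are arbitrary knobs for students to turn:

```python
# Schelling-style agent-based model on a 1-D "street". Agents move if fewer
# than half their occupied neighbors match their type; clustering emerges
# from that single local rule.

import random
random.seed(1)

SIZE = 40                  # cells in the street
WANT_SIMILAR = 0.5         # min fraction of like neighbors to stay put

grid = [random.choice(["A", "B", None]) for _ in range(SIZE)]

def unhappy(i: int) -> bool:
    agent = grid[i]
    if agent is None:
        return False
    neighbors = [grid[j] for j in (i - 1, i + 1) if 0 <= j < SIZE and grid[j]]
    return bool(neighbors) and neighbors.count(agent) / len(neighbors) < WANT_SIMILAR

print("before:", "".join(c or "." for c in grid))
for _ in range(500):
    movers = [i for i in range(SIZE) if unhappy(i)]
    empties = [i for i in range(SIZE) if grid[i] is None]
    if not movers or not empties:
        break                              # everyone content, or nowhere to go
    i, j = random.choice(movers), random.choice(empties)
    grid[j], grid[i] = grid[i], None       # one unhappy agent relocates
print("after: ", "".join(c or "." for c in grid))
```

Students who raise or lower WANT_SIMILAR can watch mild individual preferences produce stark global segregation, an emergent outcome no single agent intended.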
Teaching agent management skills doesn’t require sophisticated technology. Role-playing scenarios where students alternately act as AI agents and managers, or play out human roles, help build intuition about autonomous and human systems. History classes can explore how autonomous systems might have affected past events. Science courses can examine emerging AI agent ecosystems. Ethics classes can debate the implications of AI societies developing their own norms and cultures.
We're rapidly approaching a world where managing AI agents will be as fundamental a skill as using a smartphone is today. Yet most educators think AI is about prompting. Sure, I guess. But the prompts will be increasingly abstract, and more about constraints, goals, and interaction and control paradigms than detailed requests.
The term “AI literacy” strongly implies a similarity to reading and writing, and the literacy and humanities communities are touting how AI highlights the need for clear communication and creative collaboration. Those are important, but not sufficient. That’s the view of AI as a tool and the human as a user.
Technically, the tool and user labels are appropriate, but deeply insufficient, as one of my recent articles addressed.
Working with AI is much closer to supervising a human than to working with prior tools, with a few caveats. For one, AI is really bad at some things and great at others, in a way that’s less predictable than for a person. A second is that we must actively teach AIs to do what we want, not accept their output passively. And a third is that we need to manage AI much more tightly, since it’s like a really quirky employee who might do something stupid if left on its own.
Many students are not ready to do any of that upon graduation because, for the most part, schools don’t teach all students much about psychology, sociology, teaching, management, complex systems, or control paradigms. Oh, and they don’t teach about AI.
The future belongs to those who not only understand AI but are prepared to lead it safely, productively, and ethically.
©2024 Dasey Consulting LLC