
Heard the term "AI agent" and found yourself a bit puzzled? You're not alone. For many educators, "agent" might bring to mind their financial or real estate assistant—someone to whom you hand responsibility for executing complex tasks. Working with future AI agents will draw on skills surprisingly similar to managing a very capable, highly responsive, and closely monitored assistant.
Most AI agents today perform too unreliably to trust. But improvement has been rapid, and as with everything else in AI, it's not wise to bet against continued progress. While many are writing down their "AI Literacy" frameworks, standards, and curricula, few properly address the learning needs that agents will create.
Educators are getting comfortable with Generative AI tools that "respond"; AI agents represent the next step: systems designed to "operate" with a degree of autonomy to achieve goals, make decisions, and take actions. This requires a shift in thinking—away from simply using AI on a task-by-task basis and towards guiding and managing an ongoing process.
The Properties of AI Agents
An AI agent is a system designed to perceive its environment, act within that environment with some degree of autonomy, and do so in a goal-directed manner. More advanced agents can also plan and adapt their actions.
Many everyday technologies exhibit some of these basic agent-like characteristics:
Perceiving and Acting: A motion-sensor light perceives movement (its environment) and acts by turning on. An automatic door perceives an approaching person and acts by opening.
Goal-Directed: A simple thermostat is goal-directed: its goal is to maintain a set temperature. It perceives the room temperature and acts by turning the heat or air conditioning on or off to achieve that goal.
Autonomy: An email spam filter operates with autonomy. It perceives incoming emails, applies rules to identify spam (its goal is to keep the inbox clean), and acts by moving suspected spam to a separate folder, all without direct intervention for each email.
Planning and Adapting: A GPS navigation system in a car demonstrates basic planning by calculating a route (its plan) to a destination (its goal). It can also adapt if a turn is missed or if it perceives new traffic conditions, recalculating the route.
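The perceive–decide–act cycle behind these examples can be made concrete with a minimal sketch. The code below models a thermostat as a goal-directed agent; the class name, setpoint, and tolerance values are illustrative, not drawn from any real device.

```python
# Minimal sketch of a goal-directed agent: a thermostat.
# It perceives its environment (room temperature), compares it
# to its goal (the setpoint), and acts (heat on/off) autonomously.

class Thermostat:
    def __init__(self, setpoint, tolerance=0.5):
        self.setpoint = setpoint      # the goal: target temperature
        self.tolerance = tolerance    # acceptable deviation from the goal
        self.heating = False          # current state of its one action

    def step(self, room_temp):
        """One perceive-decide-act cycle."""
        if room_temp < self.setpoint - self.tolerance:
            self.heating = True       # act: turn the heat on
        elif room_temp > self.setpoint + self.tolerance:
            self.heating = False      # act: turn the heat off
        return self.heating

t = Thermostat(setpoint=20.0)
print(t.step(18.0))   # room too cold -> True (heat turns on)
print(t.step(21.0))   # room too warm -> False (heat turns off)
```

Everything the thermostat "knows" is a number and a rule; the contrast with the agents described next is how much richer each of these three ingredients becomes.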
AI agents take these characteristics to a far more sophisticated and complex level. The difference isn't just in degree, but often in kind.
Imagine an AI agent tasked with planning a complex event, like a school fundraiser.
Its perception isn't just a single data point like temperature or motion; it might perceive and process information from hundreds of vendor websites, volunteer availability spreadsheets, weather forecasts, and budget documents simultaneously.
Its actions aren't simple on/off switches; they could involve drafting and sending personalized emails to potential donors, negotiating prices with suppliers in natural language, or dynamically adjusting marketing campaigns based on real-time engagement data.
Its autonomy is far greater. It wouldn't be told every email to send or every call to make. It would be given the overall objective ("organize a successful fundraiser within this budget by this date"), and it would make many of the intermediate decisions itself.
Its goal-directedness is more nuanced. The "goal" of a successful fundraiser involves multiple sub-goals (maximizing attendance, minimizing costs, achieving fundraising targets) that the AI agent must balance.
Its planning and adaptation capabilities will be significantly more advanced. It might create a detailed project plan, identify potential risks (e.g., a key volunteer becoming unavailable), develop contingency plans, and adapt its entire strategy if early ticket sales are lower than expected, perhaps by reallocating marketing spend or suggesting new promotional activities.
This leap is from simple, often single-purpose automated systems to AI agents capable of handling multifaceted, dynamic, and information-rich tasks with a much higher degree of independent decision making. If a Generative AI right now is akin to a brilliant intern, an AI agent is more like an enterprising project coordinator assigned a complex, ongoing goal. It isn't just given a single prompt; it's given an objective, resources, and boundaries, and it then proactively works towards that objective, making many decisions along the way. This is a world away from an AI that waits for the next prompt. The human role inherently shifts from that of a direct prompter or user to that of a strategist, a manager, and an overseer.
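The "objective, resources, and boundaries" pattern described above can be sketched as a control loop: the agent plans toward an objective, acts only within its boundaries, observes progress, and replans. Every function name here is hypothetical—real agent frameworks differ—and the loop is a conceptual outline, not an implementation.

```python
# Illustrative sketch of an agentic control loop. The agent is given
# an objective and a boundary check, then repeatedly plans, acts,
# observes results, and adapts. All names are invented for illustration.

def run_agent(objective, plan, execute, observe, within_bounds, max_cycles=10):
    history = []
    for _ in range(max_cycles):
        for action in plan(objective, history):   # plan (or replan) from feedback
            if within_bounds(action):             # guardrail check before acting
                history.append(execute(action))
        if observe(history) >= 1.0:               # progress measure: goal met
            break                                 # stop when the objective is achieved
    return history
```

The human's leverage points are visible in the signature: defining the objective, supplying the `within_bounds` rule, and choosing how `observe` measures progress—which is exactly the managerial role the next section describes.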
Beyond Prompting: Essential Skills for an Agent-Infused World
If students (and educators themselves!) are to thrive in a future populated by AI agents, the ability to "prompt" an AI for an immediate response is only the beginning. The more profound skills will revolve around effectively and responsibly managing these autonomous systems. It's about learning to direct AI at a higher level, much like a good manager guides a team or an individual.
Here are some of the crucial "agent management" skills that will be needed:
Strategic Goal Setting and Task Delegation: There will be a need to clearly define what an AI agent should achieve overall, not just the output of a single task. This involves breaking down larger objectives into manageable tasks for the agent and understanding what can be safely and effectively delegated versus what requires human judgment and intervention. Students need to become good "managers" who know how to assign work to very capable, but literal-minded, "employees."
Designing Controls and Ethical Boundaries: AI agents don't possess innate common sense or human values. Clear operational rules, constraints, and ethical guardrails must be set. This means thinking through potential unintended consequences and defining what an agent should and should not do while pursuing its goals. An AI study assistant agent instructed to "help get better grades" shouldn't decide to complete homework autonomously—but only if its boundaries of action are properly set.
Performance Monitoring, Evaluation, and Adaptation: An AI agent can't just be set and forgotten. Like any ongoing project or supervised team member, its performance needs to be monitored. This involves skills in tracking an agent's actions, evaluating its effectiveness and alignment with the original goals, identifying when it's going off-course, and knowing how to intervene or adjust its instructions.
Technology Empathy: Working effectively with agents will require developing an intuition for how they "think" or interpret instructions. This "technology empathy" involves anticipating how an agent might misunderstand a goal, what its limitations are, and how its pre-programmed "personality" or behavioral style might influence its actions. It’s about understanding the quirky nature of an AI "assistant" and how to best communicate with it.
Resource Management: AI agents consume resources, including computational power, access to data, and even attention for oversight. A basic understanding that these resources aren't infinite and that agent use needs to be efficient and justified will be important.
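Two of the skills above—designing boundaries and monitoring performance—can be sketched together as a small supervisory wrapper around an agent's proposed actions. The action names, the allowlist, and the escalation rule are all invented for illustration; a real deployment would need far richer policies.

```python
# Hedged sketch: checking an agent's proposed actions against explicit
# boundaries (an allowlist) while keeping a monitoring log and an
# escalation queue for a human to review. All names are illustrative.

ALLOWED_ACTIONS = {"draft_email", "summarize_notes", "suggest_schedule"}

def supervise(proposed_actions):
    log, needs_review = [], []
    for action in proposed_actions:
        if action in ALLOWED_ACTIONS:
            log.append(("approved", action))
        else:
            needs_review.append(action)        # escalate to the human manager
            log.append(("blocked", action))    # record the intervention
    return log, needs_review

log, review = supervise(["draft_email", "submit_homework"])
print(review)   # ['submit_homework'] -> requires human judgment
```

Note that the boundary (the allowlist) and the monitoring record (the log) are separate artifacts: the first is set before the agent runs, the second is inspected afterwards—mirroring the before-and-after halves of the manager's job.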
These competencies are about strategic oversight, critical thinking applied to dynamic systems, and a form of digital-age leadership. In essence, as AI agents become more prevalent, everyone will need a core set of skills analogous to those of a good manager.
The main difference is that most of the people we work with can be trusted to get a task done well. AI might go off the rails, so much more rigor is needed in making sure the agent's behavior stays within bounds and is closely monitored.
It's More Familiar Than You Think
The idea of managing autonomous AI agents might sound daunting, but the skills involved are often an extension of competencies many educators already possess or value.
When students are given a long-term project, clear goals are set, resources provided, checkpoints established, and progress monitored – this is akin to strategic goal setting and performance monitoring for an AI agent. When classroom rules and expectations for behavior are explained, controls and ethical boundaries are being designed. When teaching styles are adapted to a student's learning needs, a form of technology empathy is being exercised – understanding how an "other" processes information and responds best.
The core difference is that an AI agent will follow instructions with incredible precision but lacks human intuition, common sense, and inherent ethical understanding. Therefore, the "management" involves being exceptionally clear in instructions, vigilant in oversight, and thoughtful in setting up the rules of engagement.
The skills needed to interact with current AI—clear communication, critical thinking, and ethical awareness—are still vital. Working with agents builds upon this foundation, adding a layer of managerial thinking. The AI education vision must be expanded to include the skills for a more interactive and ongoing relationship with AI.
Traditionally, schools don’t explicitly teach fundamental concepts like control theory (understanding how to guide and regulate systems, set boundaries, and manage feedback loops), different management structures (like hierarchical vs. collaborative, or centralized vs. decentralized decision-making), or the psychology of delegation and oversight to every student. These topics have often been reserved for specialized business or engineering tracks, if taught at all.
Yet, as AI agents become accessible personal assistants and collaborators for everyone, a foundational understanding of how to direct, constrain, and evaluate autonomous systems becomes a universal need, not a niche skill. The encouraging part is that these concepts can be taught through analogies and practical exercises that relate to everyday experiences.
©2025 Dasey Consulting LLC