You Should Be the GenAI Operator, Not User, With AI the Servant
The mentality of AI as a tool is dangerously limiting

Manipulate and control GenAI, or it will manipulate and control you.
I have a standard answer when asked how people should think about Generative AI (GenAI) like ChatGPT. I say AI is like a savant, at least the Hollywood caricature of one. That’s meant to indicate that it’s smart in some ways—maybe way smarter than an average person—and not in others.
The term ‘savant’ also connotes quirkiness, based on Hollywood depictions, which I think applies to AI interactions too. (For example, I verbally asked GenAI the other day if a pea is a type of bean, and was told of course urine isn’t a bean, in a surfer dude voice. Couldn’t stop laughing…)
The savant characterization is insufficient, but at least it moves people toward thinking of AI as a person, which is useful for their mentality with AI even if not an entirely accurate analogy. We anthropomorphize pets too, and I don’t worry that people think pets are humans. The only cognitive framing we have is human, so it is sensible to relate other cognitions to ours.
Mainly, I just want to get people away from thinking of AI as a tool.
Once someone has wrapped their head around AI interaction as analogous to person-to-person communication, the next question is how it differs from that. There are two key mentalities, alignment and control, that differ from most interpersonal situations.
Alignment is getting the AI to do what you want it to do, in a way consistent with your values and expectations. In human interaction, getting the other person to agree with you completely is neither the goal nor achievable; if you think it is, you’re missing a lot and will never succeed. With AI, though, you’re very directly shaping what the AI does.
You aren’t an AI user; you’re an AI operator.
Moreover, other human beings have (largely) full agency. You can’t make them do exactly as you want. Even if you could, excessive control over a person is unethical.
It’s different with AI. I want it to be my servant.
Mental models are very powerful, and right now most people think of AI as a tool. While that’s arguably accurate according to the dictionary definition, it leaves the wrong impressions.
Operator, Not User
We use tools, but working with AI is a two-way interaction of a particular sort not implied by the word ‘tool’. We’re also AI developers. We’re not users when interacting with GenAI; we’re operators.
Let’s say an educator is interested in developing a GenAI grading module to help analyze student writing and reasoning. The teacher provides assignment instructions and anonymized student submissions to the AI, and grades are produced. The AI’s grades are compared with what the teacher would produce, and the educator sees too many errors for reliable use.
That’s the most common grading scenario for AI. Sometimes the rubric is provided too, but it’s still an open-loop request. It’s even the one many educational research publications apply when evaluating AI grading.
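For concreteness, here is a minimal sketch of that open-loop request using the OpenAI Python SDK. The model name, prompt wording, and function name are illustrative assumptions, not a prescribed method:

```python
# A minimal open-loop grading request: instructions and a submission go in,
# a grade comes out, with no examples and no follow-up. Hypothetical sketch.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grade_open_loop(instructions: str, submission: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a grading assistant."},
            {"role": "user", "content": (
                f"Assignment instructions:\n{instructions}\n\n"
                f"Student submission:\n{submission}\n\n"
                "Assign a grade and briefly justify it."
            )},
        ],
    )
    return response.choices[0].message.content
```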
And it’s inappropriate. That’s a user mentality.
Perhaps the AI does the job masterfully, but more often it doesn’t. Much of the education world puts AI aside in that situation, with a mental note that it’s not ready.
Many educators don’t even try to nudge the AI toward a better answer, but that’s what’s needed. If you think of GenAI as a computer tool that generates answers from questions or challenges, then you’re missing a big part of the AI boat.
Every interaction with a GenAI system is an act of development. We're actively shaping the AI's output and behavior, using one AI as a base and developing a more specialized AI in each conversation. This development occurs through our prompts, feedback, and the context we provide. Every time we use a GenAI model, we “transfer in” the model’s knowledge as a base; the rest of the time we are AI developers. It’s just that we use natural language to direct the development instead of writing code.
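One way to see this is in code. A minimal sketch, assuming the OpenAI Python SDK and a placeholder model name, of how a conversation’s accumulated context progressively specializes the AI:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The running message history is the "specialized AI" under development.
history = [{"role": "system",
            "content": "You are a careful writing tutor for ninth graders."}]

def develop(instruction: str) -> str:
    """One development step: send the full history plus the new instruction."""
    history.append({"role": "user", "content": instruction})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})  # the learning lives only here
    return text

# Successive turns refine behavior without touching the model's weights:
develop("Comment on thesis clarity only, not grammar or spelling.")
develop("That was too harsh. Lead with one strength before any criticism.")
```

Nothing in the underlying model changes; the specialization lives entirely in the context you have built up.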
There is an analogy to human learning. Long-term brain learning wires and alters connections between neurons. That’s analogous to training a GenAI from scratch using enormous piles of data. Long-term learning is the job of AI developers at AI companies. That’s just a base for learning more.
Our prompts and the AI's responses mirror the rapid, short-term learning that occurs in human working memory. Just as our brains quickly adapt to new information or tasks without fundamentally rewiring neural connections, GenAI models can swiftly adjust their outputs based on the context we provide.
This "in-the-moment" learning isn't permanent for the AI, while some aspects of working memory content in the brain can find its way to long-term storage. AI prompting is incredibly powerful for tailoring the AI's vast knowledge to our specific needs. It's as if we're temporarily reshaping a portion of the AI's "mind" to focus on our task.
Every time you interact with a GenAI system, you're creating a custom, short-lived AI assistant. You're not just querying a static database; you're molding a dynamic, responsive entity. This process is akin to rapidly training a savant-like mind to apply its vast knowledge to your unique situation.
This development process requires skill and intentionality. It's not just about asking the right questions; it's about guiding the AI's focus, providing the right context, and iteratively refining its outputs.
The other way GenAI learns under your control is via few-shot learning, which refers to the ability of AI to use a handful of examples as an on-the-fly training set. It's more targeted than general prompting, allowing more precise calibration of the AI's outputs.
For the educator working on AI grading, few-shot learning is critical, especially when combined with problem solving about the remaining differences from the teacher’s grades. Instead of just providing assignment instructions, operators might offer the AI three or four graded examples and the reasoning behind each grade. This approach doesn't just tell the AI what to do; it shows it how to think about the grading process, mirroring the way we might train a new human grader.
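A sketch of what that might look like, again using the OpenAI Python SDK; the example submissions, grades, and rubric reasoning are all hypothetical:

```python
# Few-shot grading: a handful of graded examples with reasoning act as an
# on-the-fly training set. All example text below is made up for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXAMPLES = [  # (submission excerpt, grade, teacher's reasoning)
    ("The war started because of tension...", "B",
     "Clear thesis, but evidence is thin in paragraphs 2-3."),
    ("Many factors caused the war, chiefly...", "A",
     "Strong thesis, each claim tied to a cited source."),
    ("The war was bad and people died...", "D",
     "No thesis; summary rather than argument."),
]

def few_shot_grade(instructions: str, submission: str) -> str:
    shots = "\n\n".join(
        f"Submission: {s}\nGrade: {g}\nReasoning: {r}" for s, g, r in EXAMPLES
    )
    prompt = (
        f"Assignment instructions:\n{instructions}\n\n"
        f"Here are graded examples with my reasoning:\n\n{shots}\n\n"
        f"Now grade this submission the same way:\n{submission}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The examples double as a specification: they pin down how the qualitative language of the rubric should be applied, which plain instructions rarely manage.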
Since a grading AI is making a classification decision, there are best practices about which examples would be most useful. I won’t go into that detail, but understand that conceptual skill in the fundamentals of AI and problem solving helps make the development process we’re using with GenAI more productive.
AI users are simultaneously AI developers; it’s just that the development instructions are given in natural language rather than code.
The impact of prompting and few-shot learning is temporary, although some GenAI systems (e.g., ChatGPT) have a form of memory that can survive across conversations. For the most part, each conversation is a temporary lens that disappears when the conversation ends.
This human memory analogy points to where future GenAI may go. In the brain, working, short-term, and long-term memory all affect one another. In the future, our interactions might result in incremental retraining or adjustment of internal GenAI properties. We might literally be able to customize our own GenAI by reaching in and tweaking the inside of the black box. That will rely on progress in AI explainability.
AI as a Servant
When interacting with GenAI, it's crucial to adopt a mindset that emphasizes control and management rather than mere collaboration. This attitude becomes even more important when AI is agentic and can take its own actions.
Tools don’t tend to have agency. While current GenAI has some aspects of agency (e.g., adaptability, proactivity) and not others (e.g., autonomy, independent goal setting), fully agentic AI is the current buzz of Silicon Valley. It’s coming, and dealing with a somewhat independent actor is different from using a tool. Mechanisms of control become paramount.
We want AI to have some agency, but not to operate against our desires. It has no choice (for now) about whether to interact with you and do what you wish. It will occasionally be misaligned with your interests, and your job is to squash that misalignment. I don’t mean that you should necessarily jettison the AI’s point of view, but you are the boss, and you should take that responsibility seriously. It’s an attitude that’s critical to keeping AI from controlling you. In that respect, anthropomorphized AI should be in a role like a servant.
If the educator in my grading example still disagrees with the AI’s grades after few-shot learning is applied, even that doesn’t mean the jig is up. Digging into the differences can often improve the answers significantly. I find it useful to ask the AI to characterize the differences (it’s important to have the AI offer assessment comments, not only grades), or to ask it to focus on its interpretation of the rubric or instructions. Often the AI has a different interpretation of a qualitative word in the rubric. I sometimes ask the AI to generate a hypothetical student submission with a particular grade and comments, which helps me gauge whether the AI has a decent understanding of student quality. A few such diagnostic prompts are sketched below.
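These follow-ups would be issued in the same conversation as the grading itself; the wording is illustrative, not a tested recipe:

```python
# Follow-up prompts for diagnosing grading disagreements, sent as later turns
# in the same conversation as the few-shot grading above. Wording is illustrative.
DIAGNOSTIC_PROMPTS = [
    # Ask the AI to characterize where it diverges from the teacher:
    "Here are my grades for the same submissions. Characterize the pattern "
    "of differences between your grades and comments and mine.",
    # Probe its interpretation of qualitative rubric language:
    "How are you interpreting 'well-organized' in the rubric? Give an example "
    "of a paragraph that meets it and one that doesn't.",
    # Check its internal model of student quality:
    "Generate a hypothetical student submission that you would grade as a B, "
    "along with the comments you would attach.",
]
```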
Over a few rounds of back-and-forth, I can often get the AI and the teacher to agree pretty closely. Sometimes, though, it doesn’t work well, and that isn’t very predictable ahead of time. I’m not telling you to grade with AI; I’m pointing out that if you do, you need to adopt a master-servant mentality.
We're not equal partners in an AI collaboration; we're the conductors of AI's capabilities. While it may be tempting to view AI as a collaborative partner, maintaining a mindset of strict control is essential. This doesn't mean we can't benefit from AI's insights or capabilities, but rather that we must always remain in the driver's seat, shaping the AI's behavior to align with our goals and values.
Many educators treat AI as just another form of media consumption. This outdated view fails to prepare students for the reality of AI interaction. Students aren't passive consumers of AI-generated information; they're active shapers and managers of it. Every AI interaction involves development, management, and control of a complex, adaptive system.
Attitudes and mental models adapt much more slowly than AI technology. I know that for many, me included, the philosophical debates about AI and the umpteen models of collaboration can get tiring and are often inconsequential.
The developer perspective and servant attitude aren’t in that category. Those attitudes affect every aspect of critical thinking when interacting with GenAI.