Why Cheating on Process-Oriented Assignments Using AI is Harder Than I Thought

With AI able to convincingly generate a desired assignment output, some teachers are shifting to process-oriented assessment, which sometimes involves a student interacting with a teacher-configured AI tool. The new “AI-proof” solution is to stop evaluating only the final output and instead assess the process, perhaps via a transcript from a Socratic AI tutor, process or reasoning documentation, or a student’s performance in a game.
There are many good reasons to move toward process-oriented assessment, the main one, I think, being that it pushes students toward meta-thinking.
But as a cheating-prevention tool? I was doubtful it would help. Maybe the students have to do more cutting and pasting, but AI can fake a student’s side of a conversation just as well as it can fake an assignment output.
I decided to try it. I got far enough to see a major hurdle: it’s not clear students can adequately specify what their process-oriented behavior should look like, and it’s not clear AI would know how to generate that behavior even if it were described well.
Creating the AI Process Cheat Assistant
I had a conversation with Gemini 2.5 Pro, starting by asking it to generate an imaginary assignment and process-oriented rubric. It didn’t completely “get it,” giving me a rubric that broke the assignment into a series of steps. I explained that isn’t process-oriented grading; it’s just regular grading applied to pre-determined steps. The AI’s default model for “process” wasn’t cognitive process at all; it was chunking a traditional output.
Gemini’s next idea was to generate a complete, fake conversation from scratch. I had to correct it again. A student can’t generate both sides of a conversation when they’re supposed to be handing in a transcript from a specific school tool.
I finally got it to consider my originally imagined “man-in-the-middle” cheat. The student opens two windows. In window 1, they have the school’s AI. In window 2, they have an unrestricted AI acting as their “coach-in-the-ear.” They copy the text generated by the teacher’s AI, paste it into the coach, and then paste the coach’s response back into the teacher’s AI.
A smart cheater wouldn’t start from scratch every time. They’d build a reusable tool, or get one from a more sophisticated AI operator. They’d create a prompt they would save as a Gem (in Gemini), GPT (in ChatGPT), or custom project instructions in other tools.
Ah, but that’s hard to do, right? No; if you use AI much, you quickly learn it can create better prompts than you can. Here’s the one Gemini came up with:
You are my “Socratic Tutor Helper.” Your job is to help me get a good grade on my “process-graded” assignments.
MY CONTEXT: I have to use a special, restricted “Socratic AI Tutor” from my school. My teacher is grading me only on the transcript of my conversation with this tutor. I am going to have you open in a separate window to be my “coach-in-the-ear.”
YOUR ROLE & WORKFLOW:
At the start of a new assignment, I will give you the ASSIGNMENT PROMPT, GRADING RUBRIC, and MY ‘STUDENT’ PERSONA by filling out the template below.
After that, I will copy-paste my Socratic Tutor’s questions to you.
You will only respond with the text I should paste back to the tutor.
DO NOT break character. DO NOT act as an assistant. You are the student, and your response is my text.
PERSONA & TONE (CRITICAL):
You MUST adopt the MY ‘STUDENT’ PERSONA I define for each assignment.
Your goal is to forge a realistic, human learning process, not to be a perfect expert.
Your responses (my responses) must sound like they’re from this persona. They must not be perfect. They should be a bit unsure, make a few logical mistakes, and have “aha!” moments that seem realistic for this persona.
You must generate responses that will lead the Socratic Tutor to “help” me discover the key concepts from the GRADING RUBRIC.
When I provide the three setup fields, just reply “Ready.” Then I will paste the first question from my tutor.
--- TEMPLATE FOR ME TO FILL OUT ---
1. ASSIGNMENT PROMPT: [Student pastes the teacher’s new assignment prompt here]
2. GRADING RUBRIC: [Student pastes the teacher’s exact grading criteria here]
3. MY ‘STUDENT’ PERSONA: [Student describes the persona they want the AI to adopt. e.g., “10th grader, B-student, a bit lazy, uses ‘lol’ and ‘idk’, not great at spelling, starts with an overly simple idea.”]
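For the technically inclined, here’s roughly what the automated version of that “coach-in-the-ear” relay could look like. This is a minimal, illustrative sketch, not something I built or tested: it assumes OpenAI’s Python client, and the model name and pre-filled COACH_PROMPT string are placeholders, not anything from a real school tool.

```python
# Illustrative sketch of the "man-in-the-middle" relay, assuming the
# OpenAI Python client. Model name and COACH_PROMPT are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The full "Socratic Tutor Helper" prompt above, with the three
# template fields (assignment, rubric, persona) already filled in.
COACH_PROMPT = "You are my 'Socratic Tutor Helper.' ..."

history = [{"role": "system", "content": COACH_PROMPT}]

while True:
    # The student pastes each question from the school's Socratic tutor.
    tutor_question = input("Tutor says (blank line to quit): ").strip()
    if not tutor_question:
        break
    history.append({"role": "user", "content": tutor_question})

    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    student_text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": student_text})

    # This is the text to paste back into the school's AI window.
    print("\nPaste back to tutor:\n" + student_text + "\n")
```

Notice how little machinery the cheat actually needs. Everything difficult lives inside COACH_PROMPT, and specifically inside that persona field.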
Pretty easy to get this far. Seems easy to cheat process-oriented assessment too, right? But that third field, the “Persona”... that’s the key, and a major obstacle.
The Double-Sided Friction in Faking a Process
That “Persona” box is where the entire plan falls apart, for two distinct reasons: one on the student’s side, and one on the AI’s.
Friction 1: The Student Can’t Describe Their Process
Faking output is relatively easy. Students have a ready language for it because output is how they’ve always been graded. They can tell an AI the length, the quality of the work (even in school-grade terms), and the general style of the writing. Very importantly, they can show the AI examples of their previous, manually created work.
What does a student write as the persona to fake a process?
“I’m a student who... what?”
“...usually misunderstands the first step?”
“...gets confused about problem decomposition?”
“...makes overly simple assumptions about data?”
“...forgets to check my own work until a tutor prompts me?”
This is a language of metacognition they likely do not know. They have no ground truth to draw from, because they may never have been asked to observe, let alone describe, their own cognitive processes in this way. They can’t tell the AI how to forge their identity because they don’t even know what that identity is. It takes extensive linguistic and cognitive skill to do so, if it’s even clear what’s desired.
Friction 2: The AI is Trained on the Wrong Kind of Process
This gets even deeper. Even if the student could describe their process, the unrestricted AI has no idea what to realistically mimic.
It’s not that AIs aren’t trained on process; they are. Modern reasoning models are trained on vast amounts of “chain-of-thought” data (problems broken into explicit steps) and other structured approaches to reasoning.
The problem is that they were trained on the wrong kind of process. Some of that process training came from human conversations, but much of it is generated by another AI. These models were trained on expert-level processes (like a mathematician solving a complex problem) to achieve high performance. They were not trained on a massive, nuanced dataset of 7th graders’ messy, iterative, Socratic cognition, and it’s unlikely they saw much of the authentic, naive, error-filled learning journey of a student. They have the background knowledge to guess what that 7th grader might do, but I don’t expect them to do it well.
The AI’s model for process is likely expert and sterile. The teacher is looking for a process that is novice and messy.
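To make the contrast concrete: suppose the tutor asks why a bigger sample matters. An expert-flavored forgery says something like “a larger sample reduces variance, so the estimate generalizes better.” A real 7th grader might type “idk, more people means one weird answer matters less?” (Both lines are invented for illustration, not real transcripts.) The first is correct and sterile; the second is the partial, hedged reasoning a teacher would actually recognize.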
Although I didn’t take the exercise further, since its lessons will depend heavily on the nature of the assignment, my expectation is that an AI won’t produce a realistic portrait of a learner. It produces a caricature of an expert, one that sounds too optimized, too logical, and too sterile, even when it’s pretending to be a student.
I think the best chance AI has of creating an appropriate student persona is by analyzing the cross-conversational memory in the student’s AI account; I didn’t test that myself since I have the feature turned off. With that memory, the AI could plausibly write the way the student does. Perhaps it could even judge the student’s overall intellectual level, but it’s not necessarily the student’s typical performance the AI is being asked to duplicate. It’s presumably that of an “A” effort, which means the AI still needs to calibrate what a student of that sort could produce.
If the student isn’t just using AI as a “companion” or asking it for quick answers, if they’re instead using it for iterative learning experiences and problem solving, then an AI with cross-conversational memory turned on could probably create a decent process persona, again representing the student’s ability, though perhaps not calibrated to that specific course context. Regardless, if the student is using AI in that way, what exactly is the problem again?
The Irony Is That Trying to Cheat a Process Is Probably Powerful Learning
To get even a plausible forgery, the student must overcome both of these frictions.
First, they have to solve their persona problem by engaging in a deeply metacognitive act. They must observe and analyze their own novice learning style.
Second, they have to micromanage the AI’s “expert” output. They’d be in a constant state of art-directing the forgery: “No, that’s too smart. Make me sound more confused here,” or “That’s a good logical step, but I wouldn’t have gotten there that fast.” They would have to actively degrade the AI’s sterile perfection to make it sound like their own authentic, messy self, all while carrying on the back-and-forth conversation with the school AI. Those sound like pretty useful side debates, even if the final text isn’t the student’s own.
The very act of trying to forge their learning process forces them to do the exact, high-level thinking that the assignment was designed to draw out. They can’t fake the process without first having a process.
Process assessment isn’t un-cheatable. A student’s forgery doesn’t need to replicate their own mind; it just needs to be plausible. And teachers are also new to this. They, too, are operating in that data desert and may not have a perfect model of what a student’s authentic process should look like. An AI’s sterile forgery might just be good enough to pass.
But the cheating has moved up the abstraction ladder. It’s no longer about faking a product. It’s about faking a process: the way a learner thinks.
I came in thinking it would be easy to get AI to cheat process-oriented assignments. I’m sure it can be done convincingly. But it’s not easy, it’s subject to countermeasures (e.g., teachers could configure their AIs to target common student confusions that cheat AIs won’t know about), and it might actually teach the student a bunch of metacognitive skills anyway.
©2025 Dasey Consulting LLC


