Discussion about this post

John Holman

Haha, well said. There is a big difference between admiring problems and actually designing something to fix them. Most of the discourse I see around AI in education stops at scary stories and blanket refusals (“we’ll never use AI for X”), which feels principled but leaves all the real design decisions to whoever built the default tools.

The concrete practices your piece points toward, if we actually adopted them, seem almost boringly simple in the best way:

– sketching influence diagrams before turning something loose in a class (“if we optimize for zero cheating, what happens to feedback speed, trust, digital literacy?”),

– red-teaming every new AI workflow before it touches students (“how could this be abused, what assumptions is it making about context?”),

– and using sliding-scale risk instead of all-or-nothing (“brainstorming with AI has light guardrails; grading and citation get much tighter verification”), sketched roughly after this list.
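
To make that last point concrete, here is a minimal sketch of how such tiering might be encoded. Every task name, tier, and verification step below is hypothetical, my own illustration of the shape of the idea, not anything from your piece:

```python
# Hypothetical sliding-scale policy: each AI use case gets a risk tier,
# and higher tiers demand stricter human verification before output is trusted.
from dataclasses import dataclass

@dataclass
class Policy:
    tier: str            # "light", "moderate", or "strict"
    verification: str    # what a human must do before accepting the output

# Illustrative mapping only; real tiers would come out of the red-teaming step.
POLICIES = {
    "brainstorming": Policy("light", "skim for obvious nonsense"),
    "draft_feedback": Policy("moderate", "instructor spot-checks a sample"),
    "grading": Policy("strict", "human reviews every AI-assisted score"),
    "citations": Policy("strict", "verify every cited source actually exists"),
}

def guardrails_for(task: str) -> Policy:
    """Look up the policy for a task, defaulting to the strictest tier."""
    return POLICIES.get(task, Policy("strict", "full human review"))

for task in ("brainstorming", "grading", "unknown_task"):
    p = guardrails_for(task)
    print(f"{task}: tier={p.tier}, verification={p.verification}")
```

The design choice that matters is the default: anything not explicitly classified falls into the strictest tier, so new uses of AI start tightly guarded and get loosened deliberately, not the other way around.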

None of that requires a new model or a grand alignment breakthrough. It just requires exactly what you’re calling for: people in the loop who are willing to think like designers instead of critics, and to treat “we won’t use AI here” as a carefully placed brake rather than a total abdication.

Really appreciate how you framed this. “We need students to understand and help build the brakes, not just analyze the crash” is going to be rattling around in my head for a while.
