Ethical AI Decisions Require Architects, Not Critics

When AI ethics and safety come up in education, the conversation usually drifts toward the apocalyptic or the corporate. Deepfakes destroying democracy. Data centers drinking cities dry. Algorithms radicalizing voters. These are valid, massive issues.
But focusing on these headlines treats ethics and safety as a spectator sport—something that happens to us, decided by billionaires in Silicon Valley. Much of the responsibility does rest on them. But it doesn’t end with the corporations. AI is also configured within every conversation, in the prompt box. It happens every time a student or teacher presses enter.
Ethics and safety seem like different domains, but they both demand systems thinking rather than binary judgments.
The people who teach students about ethics and safety tend to be trained in identifying and articulating problems. They’re less often trained in problem solving. That’s a gap. When was the last time you heard a humanities or science educator describe great interdisciplinary work they collaborated on with an engineer? Both care about expression, just in different forms: the educator through language, the engineer through design. In the AI age, we need both forms of expression in each brain.
How and when to use AI is a design process. We need to move students from passive critics who admire the problem to active architects.
The Single-Issue Trap
I have never understood why people collapse perspectives into simple narratives. Oh, I get why they do it psychologically; simple stories are more persuasive and less cognitively and emotionally taxing. Political moderates are treated as people with no principles rather than as people whose values have to be weighed in situationally dependent ways. Those who advocate for some uses of AI in education are quickly cast into the “hypester” bucket, as if they see it as a panacea, and the cons of AI are presented as the only relevant aspect. (And yes, I understand the reverse is true in the AI tech community.) But in academia I have less patience with it, because a staple of academia’s value is going beyond the surface.
Take the climate argument. There are lots of articles about the massive investment in data centers and big increases in future energy consumption. The reasoning goes: AI uses more energy, therefore AI is bad for the climate. The former is indisputable. The latter is questionable.
It ignores that AI is simultaneously energy-intensive and energy-saving. These systems optimize power grids, reduce manufacturing waste, improve logistics routing, and accelerate battery and solar technology. Fixating on kilowatt-hours per query while ignoring everything else the system touches isn’t complete ethical reasoning. It’s understandable, but it misses the system.
Another example: many teachers have declared they will never use AI for assessment because assessment is central to their relationship with students. That’s a valid value. But it collapses all assessment into one category.
Formative feedback—comments on a draft, a quick check on whether a student grasps a concept—has different goals than summative evaluation. Formative feedback benefits from speed and frequency. A student revising an essay learns more when they get responses in ten minutes instead of ten days. It’s especially important when a student is practicing outside the classroom, when the teacher isn’t available. The teacher’s distinctive voice and judgment matter most in summative moments.
By treating assessment as monolithic, the teacher optimizes for one value (relationship, voice) while potentially sacrificing another (timely iteration that improves learning).
This isn’t just an individual pattern. It’s baked into how we train experts. Consider the autonomous vehicle debate. New data shows self-driving cars save lives. The health researcher might see such vehicles as a public health imperative, saving many thousands of lives per year. The labor economist could see jobs threatened, plus entire industries disrupted. (If you can summon any car whenever you want, why own one?) The environmentalist might see energy conservation potential: optimal routing, decongestion algorithms, replacement of the most polluting vehicles.
Each specialist is right within their frame. Each raises their discipline’s concerns. But few are trained to consider them simultaneously and navigate the actual trade-offs. We’ve trained generations of experts who can optimize within their silo but struggle to think across domains. AI demands the opposite.
There’s also the problem that responses to issues are often unhelpful because the issue is considered too simplistically. Articulating a problem is far easier than judging competing values and deciding what to do. If teachers conclude that total abstinence addresses the problem, they aren’t reckoning with the fact that AI companies are making enormous decisions about the future regardless of whether educators participate. Opting out doesn’t slow anything down. It may make someone feel better, like they’re not contributing to the problem, but then that must be weighed against other values, like education. An all-or-none stance eliminates the ability to distinguish an energy-hogging Sora video with no societal value from thoughtful AI use that might help a student learn.
Systems thinking isn’t just about understanding complexity. It’s about making decisions that are actually useful given that complexity.
Designers Instead of Spectators
If AI ethics and safety are taught as a collection of scary stories, or avoided altogether because it’s easier to do nothing, the result is probably students who can’t handle real-life decisions about AI use and configuration.
Teach AI ethics as corporate villainy and catastrophic headlines, and students learn to identify problems. They don’t learn to navigate trade-offs. They graduate and encounter real AI decisions at work, like whether to automate a process, what data is appropriate to use, or how to evaluate a vendor’s claims. They either freeze or make binary choices. That’s all they were taught.
Stop ethics instruction at “here’s what could go wrong,” and you produce excellent critics who can’t build anything. They admire problems. They might someday write about the dangers. But when it’s their turn to make a decision—and it will be their turn—they have no framework for action.
The system effect of treating AI ethics as a spectator sport is students who become spectators. They learn that ethical people watch and comment. They don’t learn that ethical people also design, constrain, and build. And we add to the pile of narrow specialists who see through one lens.
What does it look like to teach students as architects rather than critics?
Get them creating influence diagrams. Before making a decision about AI use, students map the competing goals, the effects of different choices, and how those effects connect. This makes trade-offs visible. If a school optimizes for preventing cheating, what happens to digital literacy? Drawing these connections—literally—forces the systems thinking that gut reactions skip.
Train them in “red team” analysis. Before sharing an AI tool or prompt with others, students should try to break it. How would someone with different motivations exploit it? What happens if a user doesn’t share their assumptions? Then document the appropriate uses. This isn’t cynicism; it’s stress-testing. Engineers do this before shipping products. Teachers and students should do it before handing an AI tool off to peers.
Teach sliding-scale risk thinking. AI mistakes aren’t a binary show-stopper. They’re a concern calibrated to task and stakes. Brainstorming is low stakes because a bad idea costs seconds. Citing facts for a research paper is higher stakes because hallucinated information damages credibility. Students need practice estimating the cost of an error, then adjusting their verification accordingly. Instead, students are being taught to chase down every fact an AI states, without the judgment to recognize that sometimes the error is immaterial to the situation.
These skills are teachable now, in any subject, without waiting for curriculum committees to catch up.
The future doesn’t need more people who can write essays about why AI is scary. It needs students who can map which valid goals are colliding and build constraints to keep applications as ethical and safe as possible. Sometimes that will mean not using AI, but arriving at that decision is an exercise in judgment, far more sophisticated than a blanket policy.
We need students to build the brakes, not just analyze the crash.
©2025 Dasey Consulting LLC


