Mutually Assured Creation
Typical AI Engineers Aren't Sociopaths; They're Ordinary People Doing Their Best

I get it all the time from educators online, and probably from silent others in the audiences I address. “He’s an AI guy.” Or sometimes, an “AI bro.”
Often the implication is used to discount anything I might say about education, to neatly put me in a little box that belongs on the other side of some imaginary fence.
And what about the AI bro tribe? Well, they’re exemplified by Sam Altman, Elon Musk, Mark Zuckerberg, and the other AI company leaders who tout AI as solving the world’s problems, while underplaying the serious dangers it brings to society. I’m thrown into that camp.
I do feel it personally, and I feel it with outrage. Isn’t this the community that speaks so highly of inclusiveness? Do they want me judging teachers based on the least caring ones?
Do they understand, or even care about, the sacrifice I’m making? That I intentionally chose the path to help education at significant career risk? Giving up not only money but prestige, making a high-risk pivot into a world where I am not known? Do they know what it feels like to know you have expertise to offer, but have almost every aspect of the system tell you that you don’t?
This isn’t about me, but about all the AI engineers and managers who are being scapegoated too. Soon after ChatGPT came out, I said it was only a matter of time before the entire AI community was vilified. The problem is that the typical AI worker isn’t like the high-profile CEOs. They’re ordinary people with many different views, doing their best to help people and society. They’re conflicted about AI too.
The entire field is in a reinforcing spiral I call mutually assured creation.
The nuclear era has been stabilized, so far, by the notion of mutually assured destruction: anyone who used such weapons would likely face the same fate. That stability relied on two tenets: that, from a global perspective, there is no positive use of nuclear weapons, and that without a significant deterrent, bad actors would someday use them.
AI presents a very different situation. The first AI I developed, in graduate school, was for better detection of multiple sclerosis. My first job assignment was to develop an AI to detect the dangerous wind shear that made planes crash. That one is still used at the largest U.S. airports, and wind shear hasn’t crashed a plane since (though I believe that is largely because of pilot training, not my algorithm). Throughout my career I managed AI-infused work in medicine, public health, disaster management, transportation logistics and security, and law enforcement. At no time did I feel the slightest bit morally conflicted. Those technologies had clear value to people and weren’t replacing them. AI isn’t nuclear weapons.
Still, a few years before ChatGPT came out, I began questioning the direction of the entire field, and more specifically its pace of advancement. I remember asking one of my best AI people, “When are we going to stop?”, with “we” meaning humanity. Her response was simple: “You can’t stop curiosity.”
She’s right. Has humanity ever stopped curiosity?
Maybe curiosity can be channeled away from ever-accelerating AI advancement, but only if the world decides together. Given the incredible power advanced AI can bring to companies and governments, that isn’t likely unless society forces it. Even then, there’s little way to enforce a slowdown or ban, even one limited to certain uses. It’s hard even to define those uses clearly.
I chose to leave AI development and focus on the human aspect. People don’t know how to deal with AI; they don’t even really understand it. I know AI will advance. I’m concerned people will not, and the combination of a technology that can have such impact and a society that doesn’t understand it means really bad decisions will be made about AI.
Yet I don’t think the people I left behind are evil; far from it. I think they see a runaway train, one they didn’t choose to be on, and are doing the best they can with the decisions they have control over. The people developing the frontier AI models are a tiny fraction of the world’s AI community. The vast majority of the rest are doing the best they can to put their skill to positive uses within a framework that doesn’t allow braking. If they don’t do it, someone else will.
They can try to help from the inside, by being part of the decision making, or from the outside. Both are important. Only one of them feeds your family. Career change is hard.
Be all over the Pollyannaish AI CEOs. I’ll join you. But don’t label the whole AI community the same way. If you want constructive ways out of this mess, you need their help. Academia promotes going to experts over novices to build understanding. AI expertise shouldn’t be an exception.
Somehow educators have convinced themselves they are in a fight between good and evil every time they interact with an edtech representative or with someone who points out a way AI might help.
But those people aren’t your enemy. In their own way, according to their own talents and ethics, they’re trying to help. You need to teach one another, find slivers of common ground, and do your best to help what is a difficult situation.
Engagement, not stigmatization, is the way forward.