
ChatGPT premiered on November 30, 2022, so we’re coming up on the two-year anniversary. That means a slew of opinions is coming on what has changed in that time. In the U.S., the anniversary coincides with the Thanksgiving holiday weekend, so I’m getting mine out early. This one has a unique spin, though.
In many ways, we can measure tech-driven progress not by the emergence of a technology but by people’s ability to understand how to leverage it appropriately. Big mindset shifts don’t happen often. The arrival of personal computers was one in my lifetime, and it took a real shift in mentality: many workers whose expertise rested on memorized knowledge had to reevaluate their value. Most did not. It took the next generation of workers to gradually replace the preexisting mentalities.
Unfortunately, I just don’t think many people are getting what we’re dealing with in Generative AI. Not just the concept of it, but how to interact with it. There are a whole lot of blindfolded people touching different parts of the elephant, but few come away with a holistic view. It’s a big motivation for my upcoming book AI Wisdom.
Some of that limited perspective comes from lack of experience, and some from the fact that people are busy. I’m in a role where I play with AI a lot, and I know intuitively, without doubt, that AI isn’t going to slow down.
Even if few educators use AI regularly, it’s hard to believe they haven’t at least peered over a shoulder or watched an online video or two over the past two years. And it’s difficult for me to understand how they could come away from the experience believing it isn’t relevant to them and their students.
For a while I assumed people weren’t trained on how to best use it. Now I don’t think that’s the reason. People who are trained still aren’t using it right, with notable exceptions of course.
People think they know what it is. They ask it questions or ask for a dinner recipe. It’s meh, they think. Gets it wrong a lot. They put it aside and attribute the rest to hype. They haven’t considered that maybe AI is not what they think.
Some youths are learning a different lesson. They can have deep conversations with some AIs, talking about personal thoughts in what they think is a safe way that doesn’t expose them to other people who might be judgmental or insensitive. They aren’t yet old enough to understand that AI is not what they think.
But there are some mindsets that hold us back even more. Many of them, in my view, are mindsets that education systems helped entrench.
Discomfort with Uncertainty and Imprecision
We are in a transition from a black-and-white world to a complex and nuanced one, and most adult brains aren’t handling that well.
It’s pretty easy to spot. Complex challenges, which almost everything at an organizational, societal, or governmental level is, are overwhelmingly viewed as having simple answers. Different simple answers, depending on who you listen to, but all predicated on the ability to shrink the problem into an elevator pitch or campaign slogan.
And people buy it. Our brains love simple justifications. It takes training on uncertain, complex, and imprecise endeavors to get good at dealing with them and to recognize the oversimplification flaw. To get comfortable with discomfort, students have to practice it, and teachers need to model it.
We know only a few things for sure:
Simple approaches to complex problems are usually not effective and can easily make things worse.
Most problems are uncertain and complex, and you can’t wish that away.
Brains get better at something when there’s practice, and atrophy when there is not.
Students will only get better at dealing with complexity and uncertainty if they get a lot of exposure to those situations, not shielding from them.

Humanity on a Pedestal
In a host of ways, people are holding AI to a different standard than they hold humans. See one dumb AI error and, since we would never do that, write it off as useless. Talk about error prevalence, or bias, as if dealing with another human were error and bias free. Select the most outstanding human output as the AI comparison, rather than the talent in the labor pool that businesses can actually access.
It is hugely uncomfortable to consider an alien intelligence like AI. It’s Copernican or Darwinian in its impact on our view of the primacy of humanity, or of where our value lies.
Like those two intellectual revolutions, we can adjust our mindset. We can still feel singularly important, even while acknowledging that AI knows stuff. We can accept Fido knows things too, humans mess up a lot, and that AI is only as useful as we allow it to be, just as my hires were.
In my view, none of that takes away from human specialness.
Wishful Thinking
People often think I must be a Silicon Valley apologist and an AI Pollyanna, which is another example of black-and-white thinking. I have disdain for much of what’s going on, and massive concerns about the future.
Yet it’s more than a bit curious that the academic community that preaches the value of expertise, and that would think it silly to ask a person on the street how education should work instead of going to educational experts, is unwilling to listen to AI experts.
Those AI experts, including but not limited to those in massive AI companies, are telling you the big changes have just begun. I do not consider myself anywhere near the deepest AI expert; I haven’t built an AI in a couple of decades, though I managed those who did. My contribution is in having decent expertise in multiple areas. But even I can see a huge number of paths to even better AI. We’ve entered an era of AI cognitive engineering, and the advancements will probably accelerate, not slow.
Why aren’t they being listened to? Not just by educators, but by most of the public and most public officials? We just had a U.S. presidential election in which AI was barely mentioned, despite the massive influence it will have in the coming years.
It’s all wishful thinking. It’s a way for us to feel more comfortable. Inside, I am jump-up-and-down urgent about educators learning about AI and teaching it to their students (not necessarily using AI in the teaching process). This isn’t because I think their adult or student lives should revolve around AI; it’s because I think they will.
Plugging ears is easier than engaging, especially when the solutions are murky.
There is some indication that some of the better AI users are experienced managers. When was the last time you heard that older managers were better with a technology than the younger employees?
Maybe it’s because their experience helps overcome these mindset traps. That role is fundamentally about dealing with the uncertainty and imprecision of situations, and with people whose varying abilities must be deployed wisely. Managers do not have the luxury of ignoring the complexity.
Mindset is a huge part of interacting with people, of understanding how to get the best from a conversation or tasking. We accept that people are flawed, that life is complex and uncertain, and that things don’t go a certain way just because we wish it. We accept it, but if AI is in the same conversation, we don’t admit it.
There isn’t a person who has used Generative AI effectively who hasn’t had to change their perspective along the way. Perhaps that mindset evolution is the most important aspect of AI skill. But because modern AI is so different from what came before, the mental models that have to be reshaped are pretty central to many people’s thinking.