
[In my book Wisdom Factories: AI, Games, and the Education of a Modern Worker, I explore how artificial intelligence might change the nature of work and what this means for education. The book argues that as AI becomes capable of handling more knowledge-based tasks, workers will need different skills - ones focused on wisdom rather than expertise. This excerpt from Chapter 1 presents a fictional scenario showing how one profession might evolve, illustrating broader patterns that could emerge across many fields.]
Mona entered the field because she had been through tough times of her own. She had made some bad decisions. Each time, a psychotherapist had been there to help her crawl out. She came to consider her therapist a dear friend, though she knew the friendship was destined to remain one-sided.
It wasn’t a hard choice for Mona to become a therapist. It was the only part of her life that she was always sure of.
At first it was intimidating. New things always are. It didn’t take long for her to realize the patients couldn’t see her nervousness or uncertainty. They were in their own minds. She got confident quickly.
No patient was a textbook case. They often wanted some labeled condition they could look up, but labels are simplifications of real people. People are lots of things. The officially blessed diagnoses are like sticks dropped haphazardly into a giant, disorganized pile: the outline of the pile defines the scope of the field, but the sticks themselves occupy only a small fraction of that space. Most of the pile is air. Patients were usually in the empty space, somewhere between the official diagnoses. Mona was proudest of her ability to apply her expertise flexibly, to treat clients as individuals rather than categories. She looked for patterns and indicators but found each person a unique puzzle.
Then everything started to change. Her halcyon early career became something entirely different. By the end of the transition, Mona wasn’t sure what her job was anymore. She was still customizing for the individual but no longer had the connection with patients that she most enjoyed. All because of AI.
It started with patient monitoring outside the therapist’s office. The fitness industry had accumulated lots of data on what people did, felt, and sometimes thought. That data reveals a lot. The way a phone jiggles when it is with you (which it inevitably is) gives away not just when you step, but the fine details of how you’re moving. The data hints at your activities and typical movements. Couple that with other signals like heart rate, blood pressure, respiration, voice, and perhaps text, and AI could tell the difference between exercising and getting high. AI could detect depression, panic attacks, PTSD reactions, and sometimes psychotic episodes. It could assess whether a person was calm, stressed, or ill, and probably how firm their poop was!
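To make that kind of inference concrete, here is a minimal sketch of how such a system might work under the hood: raw sensor streams get summarized into features, and a trained model maps those features to a state. Everything in it (the simulated signals, the feature choices, the two labels) is an invented assumption for illustration, not a description of any real product.

```python
# Illustrative sketch (not from the book): a toy classifier that maps
# phone-accelerometer "jiggle" plus heart rate to an activity state.
# All signals, features, and labels here are invented assumptions;
# real systems use richer sensors and clinically validated models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(accel: np.ndarray, heart_rate: np.ndarray) -> np.ndarray:
    """Summarize a window of raw samples into a small feature vector."""
    return np.array([
        accel.std(),                     # jiggle intensity
        np.abs(np.diff(accel)).mean(),   # movement roughness
        heart_rate.mean(),               # average heart rate
        heart_rate.std(),                # heart-rate variability proxy
    ])

rng = np.random.default_rng(0)

def simulate(label: str):
    """Fake one sensor window for a hypothetical state."""
    if label == "exercising":            # rhythmic motion, elevated HR
        accel = 3 * np.sin(np.linspace(0, 40, 50)) + rng.normal(0, 0.5, 50)
        hr = rng.normal(140, 8, 50)
    else:                                # "intoxicated": sluggish, low HR
        accel = rng.normal(0, 0.2, 50)
        hr = rng.normal(65, 5, 50)
    return accel, hr

# Build a labeled training set from simulated windows, then train.
X, y = [], []
for label in ("exercising", "intoxicated"):
    for _ in range(200):
        accel, hr = simulate(label)
        X.append(features(accel, hr))
        y.append(label)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new, unlabeled window from its features alone.
accel, hr = simulate("exercising")
print(model.predict([features(accel, hr)])[0])  # -> "exercising"
```

The point of the sketch is only that once enough labeled windows exist, telling states apart becomes routine pattern matching; the hard part in practice is collecting trustworthy labels at scale.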
You may think few people would agree to such monitoring, but carrots and sticks both drove acceptance.
Many patients felt they could benefit from seeing their time-evolving mental health patterns; that was the carrot. It helped that data privacy concerns were met with increasingly sophisticated cyber protections, which kept patients in control of access to their own data.
The stick eventually came from the insurance companies. They had long wanted to move from a fee-for-service payment model to a fee-for-value paradigm. In other words, “we’ll pay you (the health care provider) if you’re doing it the right way.” They wanted the right way to include health monitoring outside the therapist’s increasingly virtual office. App-based homework became standard.
Psychiatrists were the first to benefit from the collected information. Medication choices and adjustments improved. Therapists later found value, too: they could see clear anomalies in the wearables data and compare them against patient reports.
With insurance-company enforcement, a huge amount of data accumulated (where patients allowed such access) on therapy practices and their more continuous effects on patients. A lot more was learned about what was working, and it was AI that found those patterns. Mona started getting automated newsletters on advancements in the field as more and more data was analyzed. The sheer volume of literature was oodles more than it had ever been, and it had always been beyond what one brain could hold.
Those data discoveries needed two ingredients: the patient monitoring, to assess impact, and the therapist’s treatment strategies or visit summaries, to tie causes to effects. The AI didn’t stop at analyzing therapist notes. Patient records now included audio and sometimes video recordings of counseling sessions. The recording was initially resisted, but people got used to it. Medical records already held a ton of sensitive content anyway.
It turned out that therapy became cheaper, and thus more widely accessible, when AI analyzed counseling conversations. It became a business advantage. At first AI used the voice and video information to estimate emotional and cognitive states during the therapy session, but that was just the beginning. The automation began to extract the therapeutic approach from the session audio. AI assessed the patient’s fit to various psychological conditions and states from the content of the conversation and from subtle audio and facial cues. Other AI could then pose comments to the therapist or suggest questions for the patient during the counseling session.
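For a flavor of the conversational side, here is a deliberately naive sketch: it scores transcript turns against a tiny emotion lexicon and proposes a follow-up question. The lexicon entries, weights, and question templates are invented assumptions; a real system would rely on trained speech, language, and vision models rather than keyword counts.

```python
# Toy sketch: estimate a patient's emotional state from transcript turns
# and suggest a follow-up question. The lexicon, weights, and question
# templates are invented for illustration only.

# Hypothetical mini-lexicon: word -> (emotion, evidence weight)
LEXICON = {
    "hopeless": ("depression", 1.0),
    "tired": ("depression", 0.4),
    "panic": ("anxiety", 1.0),
    "worried": ("anxiety", 0.6),
    "racing": ("anxiety", 0.5),
}

FOLLOW_UPS = {
    "depression": "When did you last feel energized by something?",
    "anxiety": "What was happening right before the worry started?",
}

def score_turn(text: str) -> dict:
    """Accumulate emotion evidence from one patient utterance."""
    scores: dict[str, float] = {}
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in LEXICON:
            emotion, weight = LEXICON[word]
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

def suggest(transcript: list[str]) -> str:
    """Pick the dominant emotion across turns and propose a question."""
    totals: dict[str, float] = {}
    for turn in transcript:
        for emotion, s in score_turn(turn).items():
            totals[emotion] = totals.get(emotion, 0.0) + s
    if not totals:
        return "Tell me more about your week."
    dominant = max(totals, key=totals.get)
    return FOLLOW_UPS[dominant]

transcript = [
    "I've been so worried about work, my thoughts keep racing.",
    "Last night I felt a panic coming on.",
]
print(suggest(transcript))  # -> the anxiety follow-up question
```

Even this crude version hints at the tension Mona felt: the machine picks whichever question its evidence favors, not necessarily the one the moment calls for.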
Mona didn’t always like the AI’s dialog recommendations. Often the AI chose whichever question best illuminated a diagnosis or treatment plan. Therapists frequently knew better. Information maximization wasn’t always what the patient needed in that moment. Sometimes key information would take a while to tease out.
The AI’s summary of a session was informative, though. At least it was an alternative view, and rarely did Mona think it was very far off-base. She learned from the automation’s assessments far more than from digging through voluminous literature. AI gave her a short list of relevant research findings regarding each patient in case she needed it.
She had a bit of time to read up because the documentation workload was way down. Mona was reviewing documentation rather than creating it from scratch. Usually she added a few more insights, but the AI summary was mostly right. It saved her a ton of time. AI-generated dashboards for each patient would suggest an approach for the next session and point out “what if” pivots for unforeseen issues the patient might bring up.
Eventually, Mona didn’t need to be at counseling sessions at all. Therapy with a computer avatar was preferred by some patients. By the time AI had permeated all aspects of her practice, it could create an avatar that was hardly distinguishable from a real person.
For those who wanted a real flesh-and-blood therapist, the counseling role could be filled by lower-cost, less-credentialed workers. They didn’t need to understand much psychology, or even know what to say during a session. AI took care of that. Their job was to read the person and occasionally override the AI when a better connection with the patient was needed. Mona called them Empathists.
Patients were better served by this increased scrutiny and attention to their health, and more of them could be served because costs were lower and availability was higher.
Mona felt the AIs were now the experts. Before AI, many psychologists were not up to speed on the latest advances and treatment approaches; it’s a lot of work to stay current. AI never fell behind. Its massive knowledge networks were updated frequently with the latest and greatest.
Everything was different. Now Mona spends most of her time managing the AI and the Empathists. She knows she’s still providing an important service, but oh how she misses the old days.
Mona is of course fictional, but the evolution of her job is consistent with AI’s expertise-chewing march. Some of the capabilities, such as mining the data from wearable devices, are works in progress (though little of that work has gone through FDA oversight).[i]
The notion that counseling might use an AI therapist may seem the biggest stretch. Who would want to talk to a computer over a real person? It turns out virtual therapists are already in development, and some people (e.g., military personnel) prefer the computer one![ii] The convenience of being able to talk to a virtual therapist at any time is also a competitive advantage over the human service. Plus, I suspect the trepidation over talking to a virtual therapist will decrease over time if the technology provides trustworthy guidance. It is reasonable to think Mona wouldn’t want a fake therapist to look or behave too much like her, but AI that talks, looks, and behaves indistinguishably from real people is already a thing. Such mimics are called “deepfakes.”
It isn’t likely that psychotherapy will evolve in precisely this way. One can’t discount technical obstacles, human acceptance, slow rollouts, and regulatory constraints, but the ingredients and methods all point in the direction of Mona’s fictional future job. My crystal ball will necessarily be off in the details, but that future is plausible.
One thing is for sure—AI will change jobs. A lot. Nearly every job.
[i] G. Doherty, M. Musolesi, A. Sano, and T. Vaessen, “Mental State, Mood, and Emotion,” IEEE Pervasive Computing, vol. 21, no. 2, pp. 8-9, 2022. This special issue contains several articles representative of the current state of the art in wearable physiological monitoring.
[ii] R. Gonzalez, “Virtual Therapists Help Veterans Open Up About PTSD: An artificially intelligent therapist named Ellie helps members of the military open up about their mental health,” WIRED, Oct. 17, 2017. [Online]. Available: https://www.wired.com/story/virtual-therapists-help-veterans-open-up-about-ptsd. [Accessed: Jan. 21, 2023].
©2025 Dasey Consulting LLC. All Rights Reserved.