I’m used to hearing from people who disagree with me about addiction. I wasn’t expecting to hear from them about artificial intelligence.
I host a podcast about addiction, where disagreement is part of the job. When I interview someone in recovery, listeners tell me I was too sympathetic to 12-step programs — or not sympathetic enough. When we discuss medications, some argue they save lives; others insist recovery should be “drug-free.”
But the reaction to a recent episode about artificial intelligence was different.
I had interviewed Tim Requarth, a neuroscientist at New York University who studies how AI affects cognition and education. We were trying to understand what happens when a tool becomes capable of participating in thinking itself. Among other things, we discussed whether relying on AI to reduce the effort of thinking might come with hidden costs.
Some listeners objected to the premise. Calling that dependence on AI “addictive,” they said, was alarmist. Others insisted AI was too useful to question.
What struck me wasn’t who was right. It was how familiar the conversation felt.
In addiction medicine, this pattern appears whenever a powerful and effective substance becomes widely available. Early on, the benefits are obvious. Opioids relieve pain. Benzodiazepines relieve anxiety. Alcohol relieves social inhibition. For many people, these tools genuinely help.
Addiction rarely begins with harm. It begins with relief.
What Tim described didn’t sound like intoxication. It sounded quieter: people gradually relying on AI to reduce the discomfort of thinking.
Students told Tim that they began by using it to improve grammar, then to clarify ideas, then to generate outlines. Eventually, some used it to prepare for conversations, presentations, or decisions. Several said they felt uneasy about how much they relied on it. They wanted to use it less. But they found themselves returning to it anyway.
As a psychiatrist, I recognize that moment — not as proof of addiction, but as an early warning sign: when someone begins to doubt their ability to function without help.
That may sound similar to concerns about social media addiction. But AI is different in the kind of relief it provides.
Social media captures attention by exploiting social reward — approval, outrage, belonging. Its effects are external. You’re reacting to other people.
AI, by contrast, operates internally. It organizes your thoughts. It resolves uncertainty. It reduces the strain of not knowing what to say or how to begin.
That strain is uncomfortable. But it is also essential.
Writing is not just a way to communicate knowledge. It is a way to develop it. Explaining something forces clarity. Decision-making strengthens judgment. Conversation builds emotional awareness. These processes shape the brain through use.
When we consistently outsource them, we risk weakening them.
Neuroscientists have long understood that unused abilities diminish over time. GPS makes navigation easier, but many people have lost their sense of direction. Calculators make arithmetic effortless, and mental math has faded from everyday use.
AI extends this dynamic into more intimate territory: judgment, creativity, and communication.
None of this makes AI harmful in itself. It is, in many ways, extraordinary. It allows people to access expertise they could not otherwise afford. It helps patients understand medical information. It assists clinicians in processing complex data. It reduces barriers that once excluded people from education and opportunity.
I use AI every day, including in editing this article.
The challenge is to use it without allowing it to quietly replace capacities we value.
Addiction medicine offers a useful framework. Many people use substances without developing addiction. The difference often lies in patterns of use and the role the substance plays in someone’s life. When something becomes the primary way a person manages discomfort — emotional or cognitive — risk increases.
AI can easily become that kind of solution.
The discomfort it relieves is subtle: the blank page, the uncertain decision, the difficult conversation, the effort of organizing thought. These moments are frustrating. They are also how competence develops.
Tim described setting boundaries for himself. He avoids using AI for early drafts because he believes the struggle helps him think. He is cautious about using it when tired or stressed, when self-monitoring is weaker. These are not moral rules. They are protective ones.
In addiction treatment, we help patients establish similar boundaries — not necessarily to eliminate a substance entirely, but to preserve autonomy.
Which brings me back to the reaction to our conversation.
When people hear the word “addiction,” they often assume it implies catastrophe — intoxication, loss of control, destruction. But addiction medicine describes a process long before those outcomes appear: the gradual shift from optional use to psychological reliance.
Framing AI that way makes people uncomfortable for a simple reason.
It suggests that something extraordinarily useful — something many of us already depend on — could quietly reshape how we think. And history shows that when a powerful tool offers relief from discomfort, questioning it often sounds like criticism of the people who use it.
The most transformative technologies are rarely dangerous because they are obviously harmful. They are powerful because they work so well that we stop noticing what they are replacing.
Jonathan Avery, M.D., is vice chair for addiction psychiatry at Weill Cornell Medicine and host of the podcast “Thriving With Addiction.”
Source: www.statnews.com
