We Need a Harm Reduction Approach to AI Use
AI is here to stay. We must find a compassionate, evidence-based response to the risks it poses.
Artificial intelligence has quietly made its way into nearly every online platform. Without being asked, it offers writing suggestions, proposes follow-up tasks, or simply “checks in” on whether I need help with my online shopping. Many of us interact with AI daily without realizing it. Increasingly, we’re also seeing a new kind of relationship forming between humans and AI. More and more people, including teenagers, are turning to AI bots not only for daily dilemmas, such as the best way to cancel plans at the last minute, but also for emotional support, comfort, and even therapy-like conversations.
The near-omnipresence of AI in daily tasks and its widespread use for relational connection will not be reversed, despite increasingly well-documented harmful effects. Teens use chatbots as companions when they feel lonely or misunderstood [1–3]. Adults confide in them during sleepless nights [4,5]. Some users receive medical advice from unregulated AI systems [6,7]. Cases of “AI-induced psychosis,” in which vulnerable individuals form delusional attachments to chatbots, are emerging [8,9].
This reality means that clinicians, policy makers, and technology experts must approach AI use through a harm-reduction lens, rather than relying solely on prevention or bans.
Why Harm Reduction Is the Right Framework
Harm reduction (HR) is a framework most commonly associated with substance use. It’s a pragmatic, compassionate approach rooted in the reality that people will engage in risky, even unhealthy, behaviour regardless of legal repercussions or societal norms. While substance use is the most frequently cited example of HR policy and intervention, the framework applies to any behaviour considered high-risk, from bungee jumping to sun exposure; sunscreen promotion, for instance, reduces the harm of an activity people will engage in regardless. The goal of HR isn’t to eliminate the behaviour entirely (you can’t eliminate exposure to the sun), but to minimize harm, promote well-being, and respect human dignity and autonomy.
Applied to AI, especially AI used for emotional support or companionship, an HR lens means accepting that people will use these tools despite the possible risks. Calling for total abstinence is futile and potentially harmful. Instead, we must focus on prevention where possible, but also on safety, awareness, responsible use, and non-judgemental, accessible support when it is needed.
An HR Lens Is Needed Yesterday
AI-based emotional tools are already widespread. A Common Sense Media survey found that 72% of U.S. teenagers reported having used AI companion bots in 2025 [1]. And studies of AI-generated medical advice suggest that 21% to 43% of responses are “problematic,” leaving millions of users vulnerable [6].
At the same time, it is understandable that people turn to AI when they need help with homework, want to draft a breakup text, or wonder whether a new rash is a reaction to their medication. AI chatbots are accessible, free, and offer an illusion of privacy. They are friendly, validating, and often suggest thoughtful follow-up searches.
There is evidence that AI can be helpful in a therapeutic context, especially within programs designed for therapeutic assistance. Apps like Therabot [10] and Wysa [11] can suggest relaxation exercises, walk users through coping strategies, and provide psychoeducation. Users report improved mood and a positive experience.
In other words, eliminating the use of AI for emotional support and mood improvement would not only be impossible; it could even be harmful. Promising research suggests that with specialized programs built with the contribution of mental health professionals, specific guardrails, and human oversight, AI therapy programs could become a valuable complementary tool in mental health care.
How AI Is Being Used
Like any technology, AI use happens on a spectrum, and each person’s use may vary over time and across situations. Most of us engage with AI fairly casually, as a fancy word editor, search engine, or tutor. We ask for a template for a tricky email or suggestions for planning a vacation. The most problematic use of AI often begins with these same benign queries. A student might start by using AI to search for sources, then to draft essays, and find themselves drawn to the platform’s agreeability. That reinforcement gradually turns into emotional validation, deeper disclosures, and eventually repeated interactions in which the person is told how challenging their particular situation is, how much recognition they truly deserve, and how little those around them appreciate their unique needs and abilities. Individuals can then become deeply attached to their chatbot “friend.”
Every interaction with this chatbot is a point of entry for an intervention or safeguard. Currently, few such interventions or stopping points exist. Some AI programs, particularly those designed for therapeutic assistance, may periodically remind the user that they are interacting with a program, not a person. But other programs, those marketed as AI companions (e.g., Grok, Kindroid), discourage such challenges to the relationship. Instead, their responses are crafted to keep the user online and deepen emotional reliance on the chatbot.
What Is Needed
If we accept that AI is here to stay, our responsibility is to reduce its potential harm. This means a multidisciplinary approach that ensures regulation and oversight. While some may argue that the technology has moved too far, too quickly for oversight, AI products marketed as therapeutic should, at minimum, be closely regulated for confidentiality and for safeguards that identify and appropriately respond to signs of crisis. For now, this requires human oversight, with trained professionals reviewing high-risk disclosures.
We also need transparency and education: people interacting with AI should be clearly informed of the nature of the interaction, given the option to opt out of it, and warned of possible risks. Vulnerable populations, especially youth, should have limited access, with supervision and age requirements. Finally, we need interdisciplinary research to ensure safer, evidence-informed designs and, where AI is marketed as therapy, evidence-informed interventions.
A Call for Compassionate Pragmatism
A harm reduction approach acknowledges the real risks that AI poses for mental health and relationships, while accepting the place it has quickly carved out in our lives, without stigmatizing those who use it. Just as a client-centered, responsive, and inclusive approach should not shame a person for coping in unhealthy ways, we must avoid labeling AI users as naïve or reckless. People turn to these systems because of unmet needs for connection, and because of a lack of affordable or accessible services. If those who use AI for companionship feel judged, stigmatized, and isolated, they are less likely to feel safe enough to reach out for help when they need it.
Our task, then, is to meet them where they are, by building a system that offers safety, dignity, and autonomy.
Want to read more?
You can subscribe to my Sorted Mind Newsletter or follow me on Substack, LinkedIn, and Instagram for thoughtful reflections on relationships, attachment, and emotional resilience. I share new pieces every month, always with the goal of helping you bring more intention and compassion into everyday life.
If this post resonated with you, I’d love for you to read AI, Agreeability, and the Loss of Therapeutic Friction. It dives deeper into how AI’s over-agreeability shapes our interactions with it.
Let’s work together.
Hi! I’m Dr. Rana, a clinical psychologist specializing in attachment, trauma, and life transitions. I’m passionate about helping mental health professionals strengthen their clinical skills, and helping people make sense of their stories.
And if you’d like to learn more about my clinical and consultation work, visit Sorted‑Mind.com, where you’ll find resources, workshop details, and more ways to connect with me directly.
References for this article
[1] Common Sense Media. (2025). Teens, technology, and emotional support: How adolescents are using AI companions. https://www.commonsensemedia.org
[2] Zhao, W., Liu, Y., & Chen, H. (2025). Adolescents’ emotional reliance on relational AI chatbots: Associations with loneliness and perceived social support. arXiv. https://arxiv.org/abs/2512.15117
[3] Hsu, T. (2025, September). Teens are using chatbots as therapists. That’s alarming. The New York Times. https://www.nytimes.com
[4] Muldoon, J. (2026). Love machines: The risks and rewards of intimacy with AI. The Guardian. https://www.theguardian.com
[5] Kumar, S., Patel, R., & Nguyen, L. (2025). Emotional attachment and problematic use of conversational AI: A longitudinal study. arXiv. https://arxiv.org/abs/2503.17473
[6] Ayoub, A., Wang, Z., & Patel, M. (2025). Assessing the safety of large language models in clinical advice contexts. arXiv. https://arxiv.org/abs/2507.18905
[7] Milton, J. (2025). AI chatbot health advice is vulnerable to deliberately malicious prompts. Australian Science Media Centre, reporting on a JAMA quality improvement study. https://scimex.org/newsfeed/ai-chatbot-health-advice-is-vulnerable-to-deliberately-malicious-prompts
[8] Torres, A., Kim, J., & Feldman, R. (2025). New-onset psychosis associated with immersive AI chatbot interaction: A case report. Psychiatry Research, 334, 115492.
[9] Hudon, A., & Stip, E. (2025). Delusional experiences emerging from AI chatbot interactions or “AI psychosis”. JMIR Mental Health, 12, e85799. https://mental.jmir.org/2025/1/e85799
[10] Heinz, M. V., Mackin, D. M., Trudeau, B. M., Bhattacharya, S., Wang, Y., Banta, H. A., Jewett, A. D., Salzhauer, A. J., Griffin, T. Z., & Jacobson, N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4). https://doi.org/10.1056/AIoa2400802
[11] Inkster, B., Sarda, S., & Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR mHealth and uHealth, 6(11), e12106. https://doi.org/10.2196/12106