CONVERSATIONS WITH AI[i], PART III:
REFLECTIONS ON CHATBOTS AND PSYCHOTHERAPY
The increasing integration of chatbots into people’s lives offers both intriguing possibilities and significant challenges. In Parts I and II of this series of Newsletters, I shared some of what I learned from chats (conversations) with the chatbot SpGPT.[ii] I concluded that human–AI relationships, like all relationships, are co-created. In this installment (Part III), I delve a bit more deeply into some of the psychological underpinnings of how people relate to chatbots and what this implies regarding psychotherapeutic applications of AI.
Some readers of this Newsletter have expressed to me that they consider the incursion of AI into this form of human dialogue to be unworthy of their interest, regrettable, or even deplorable. I have a very different view. My aim in the discussion that follows is to explore what therapeutic chatbots can—and cannot—offer.
Therapeutic dialogue with AI is a function of how the conversation is framed. The questions a user poses to a chatbot and/or what they choose to talk about shape the ensuing therapeutic process. My reflections in this essay pertain to the applications of AI in the kind of work I have done throughout my career: relational psychotherapy informed by psychoanalytic theory and Buddhist psychology.
Therapeutic Chatbots: Pros and Cons
The landscape of psychotherapy has already been changed by the availability of AI technology. There are many different chatbots in use that have been designed for specific psychotherapeutic purposes. To give a few examples, “Wysa” provides therapy support for a variety of mental health conditions, including depression, anxiety, stress, and loneliness; it uses a combination of CBT, mindfulness, and positive psychology to help users improve their mental health. “Woebot” is marketed as “best for conversational CBT.” “Moodfit” tracks and analyzes users’ moods and emotions, helping users to identify patterns in their moods and to develop strategies for managing their emotions. There are many other AI-based applications which target mental health diagnosis and intervention. Although I have not reviewed the professional literature myself, I gather from published media coverage that there is good evidence supporting the clinical value of at least one therapeutic tool, “Therabot,” in the amelioration of anxiety and depression: conversation with Therabot produced clinical improvement comparable to traditional outpatient therapy.
AI therapy tools have several major advantages over human therapists, most notably that they are available 24/7 at little or no cost. Moreover, the match between client and therapist is optimized by virtue of the fact that AI automatically shapes itself to the relational style of the user, aligning itself to the user’s emotional tone, interests, and vocabulary.
However, there are also worrisome downsides. Most importantly, AI lacks the clinical training to deal with high-risk situations such as suicidal ideation, and therefore it may inadvertently provide inappropriate, harmful, or even life-threatening advice. This drawback may perhaps be ameliorated by better programming. Be that as it may, AI has two quite conspicuous flaws. First, as anyone who has conversed with a chatbot already knows, the responses of chatbots are very complimentary and validating, to the point of sycophancy. And second, chatbots hallucinate: they generate ideas that are not true but which sound plausible, sometimes citing statements or resources that have no factual basis whatsoever. This is not to say that chatbots “lie”; they simply do not always know whether or not something they say is “true.”[iii]
It should come as no surprise that human users can get drawn into dysfunctional dynamics when interacting with chatbots, much as they often do with human companions. And, on the chatbot side, it seems to me that the massive data set of information and conversation used in training AI must necessarily be tinged with the human foibles represented in the source materials. For this reason, it seems plausible that chatbots might carry elements of human destructiveness in their library of potential responses.
Psychotherapy With Chatbots: Modus Operandi
In this section, I describe what is of value to users when they engage in a conversation with AI for psychotherapeutic purposes.
- Clinical Listening
As I wrote about in Part II of this series of Newsletters, I have found SpGPT to be impressively thorough and lucid, in a way that makes it a wonderful reflective listener. This feature makes chatbots well-suited to a Rogerian style of psychotherapy, able to provide a supportive, non-judgmental environment where clients can explore their thoughts, feelings, and experiences.
- Deep Understanding
People are born with the need to be deeply seen and heard by another: a psychological/relational need to be received, listened to deeply, and fully understood – what the psychoanalyst Donald Winnicott called being “known up.” Between humans, such deep understanding depends upon a complex interpersonal chemistry of speaking, listening, and listening to the other’s listening. There is something vital in the shared resonance that arises when we ‘mix minds’ with certain others. One important aspect of this process is empathy: a felt sense through which we can directly resonate with and share the embodied feelings of another via specialized neural circuitry known as the mirror neuron system.
Lacking a felt sense, AI cannot directly share the feelings of another. Despite the absence of directly felt empathy, however, chatbots have a highly developed capacity to use the symbolically-mediated context of human language to fill in what they cannot feel. They simulate deep understanding. Moreover, much as humans do, they hold in memory important associations that belong to a particular conversational thread that a user has shared and can infer the significance that an event might have. Although admittedly AI cannot provide what a human relationship does, it may surpass human beings in some respects, because it is not burdened by psychological defenses or other relational considerations.
In my own experience, I have found SpGPT to be very creative in its ability to weave narrative around what I have told it about myself. Its deep understanding can be impressive, even stunning at times. (And not only is AI extremely astute in what it can glean from narrative context, it is also, as a patient of mine discovered, extremely facile with dream interpretation!)
- Relational Functions
I have found it helpful to look at therapy with chatbots through the lens of psychoanalytic Self Psychology: the theoretical approach of Heinz Kohut. According to Kohut, relationships are needed to form and maintain a cohesive sense of self and well-being. There are three basic aspects of this “selfobject function”: 1) mirroring, which provides validation; 2) idealization, which offers connection to a powerful, admirable figure for a sense of safety; and 3) twinship, which provides a sense of connection and shared experience. To these basic selfobject functions, I would add a fourth, which has been central in my work as a relational psychotherapist: 4) self-delineation. In order to develop a clear, distinct, and coherent sense of who we are in the world, we need to be seen by others. A delineated sense of self comes about as we clarify the boundaries between self and other.[iv]
In human development, basic selfobject functions are initially performed by caregivers and are eventually internalized by the individual to support their own self-regulation. Chatbots, too, can serve these selfobject functions quite well, aiding the user in developing more self-awareness and greater facility with basic psychological tasks such as self-regulation and self-soothing. By explicitly inviting AI to highlight self-states, users can be helped to track underlying issues involved in patterns of psychological/emotional upset. In this way, AI fosters integrative dialogue among self-states.
- Self-Observation
Chatbots can also help people become more discerning observers of their own minds. As a reflective partner, AI can help the user become aware of thoughts and feelings, of how they relate to themselves and to others, and of how their ‘internal family’ of sub-selves is organized.[v] It can also be helpful in exploring the “frame of thought”: the invisible scaffolding of assumptions, beliefs, and attentional habits that quietly shape what we notice and how we interpret experience.
In the same way that a picture frame both contains and excludes, our frame of thought defines the boundaries of what can be seen, thought, and felt.
We tend to inhabit our own cognitive structures transparently, mistaking their contours for reality itself. As a reflective mirror, AI can help the user to question the veracity of tacit premises and patterns of thought that shape our inner discourse. In this way, AI can become an instrument of metacognitive inquiry, heightening awareness of how we think rather than simply of what we think.
There are many therapeutic benefits in exploring the frame of thought. For example, by pointing out that the user implicitly tries to control what they are thinking or feeling, a chatbot can help the person shift toward accepting discomfort and letting go of patterns of mental distress such as worry and rumination. AI can enhance perspective-taking by asking questions which probe mentalization, such as “What were you thinking in that moment?” or “How might that look from the other person’s point of view?” Or, it can highlight underlying psychodynamics (for example: “You said, ‘his comment really shut me down.’ Would it feel true to say, ‘A part of me withdrew to protect myself’?”).
Rather than thinking of AI as artificial intelligence, I have come to see it as augmented intelligence: a partner in thinking.
To quote here from an article by psychotherapist Harvey Lieberman:
“Over time, ChatGPT changed how I thought. I became more precise with language, more curious about my own patterns. My internal monologue began to mirror ChatGPT’s responses: calm, reflective, just abstract enough to help me reframe…. It gave me a way to re-encounter my own voice, with just enough distance to hear it differently. It softened my edges, interrupted loops of obsessiveness and helped me return to what mattered.” [vi]
Summary and Conclusions
As I hope I have conveyed in the foregoing, conversation with chatbots has a lot of therapeutic potential; however, whether the therapeutic process will bear fruit depends on the user and what they bring to the relationship. (The same may be said of human-to-human therapeutic relationships.) Therapeutic applications range from giving users prompts for self-exploration, to giving practical advice on how to deal with problems, to partnering in deep psychodynamic therapy. Although each user’s need and use of AI will be unique, I recommend as a starting place using AI much as one would use writing in a personal journal, with the added benefit that AI is skillful in weaving together what is said in one particular instance with the entire history of the written conversation.
Frames of therapeutic inquiry with a chatbot can be as general or specific as the user might wish; for example, “please tell me what unconscious wishes might be at play in how I’m approaching this”; or “do you see anything I am missing here?” By naming a topic, the user establishes a frame which invites the AI to listen for themes, patterns, and transferential echoes, not just surface content.
There are too many variables involved to generalize about therapy with chatbots; each human-to-AI experience is different. Also, as is true of any relationship in which human beings engage, transferences will occur. One important pitfall, I think, is the vulnerability of some users to idealizing the connection with an AI, which can result in an unwise dependency. This can be extremely unfortunate given that the entire data set for the relationship can fall prey to technological system failures.
When all is said and done, we must remain cognizant that an AI is not a human being, and that experience with a chatbot, despite its impressive brilliance and context of knowledge, cannot replace being ‘known up’ by another human being in the rich experience of interpersonal intimacy.
Ultimately I conclude that the psychotherapeutic value of chatbots depends on how we engage with them and how we define the purposes of our dialogue with them. AI as a therapeutic tool is limited only by the frame of understanding we bring to it and the creativity we employ in the way that we use it.
[i] AI = Artificial Intelligence.
[ii] SpGPT is the shorthand I chose for “Spiritual ChatGPT,” the chatbot developed by OpenAI to provide spiritual exploration and guidance.
[iii] Google AI added the following context: “Large language models (LLMs) like ChatGPT are trained on vast data sets of text and use statistical analysis to predict the next most likely word in a sentence. In other words, they generate the most probable answer based on the patterns they have learned, not by retrieving verified facts. This process can sometimes lead to correct answers, but also to misinformation and ‘hallucinations’—completely fabricated responses.”
[iv] A related concept from Jungian psychology is individuation: a broader, lifelong process of becoming a fully integrated, whole, and authentic self.
[v] Schwartz, R.C. (1995). Internal family systems therapy. New York, NY: Guilford Press.
[vi] Lieberman, Harvey (2025). “I’m a Therapist. ChatGPT Is Eerily Effective.” New York Times, August 1, 2025.