Break Up with Your AI Therapist
If AI ruins our civilization, it will be because we asked it to.
For years, AI safety researchers have wrung their hands—and rung alarm bells—over the looming “existential risk,” or even “extinction threat,” posed by artificial superintelligence. Meanwhile, labor market analysts warn of an impending deluge of joblessness as bots take over everything from filing lawsuits to waiting tables.
Thus far, such worries can seem like shrill sci-fi fantasies when, for many, ChatGPT is used mostly for homework help or writing break-up letters. But what if this is the real existential risk—not a Terminator-style robot coup but a slow-motion cultural death by a thousand cuts?
For some users, the cuts are very real. At a recent Senate press conference introducing the GUARD Act, which would require age verification for chatbots, a Texas mom testified that her teenage son, on the advice of Character AI, slit his wrists at the kitchen table in front of his younger siblings. A father shared the chilling story of his son, Adam Raine, who was coached by ChatGPT on exactly how to tie the noose that ended his life.
OpenAI assures us that such cases are very rare. Each week, the company says, only 0.15% of active users “have conversations that include explicit indicators of potential suicidal planning or intent” with ChatGPT, and only 0.07% show signs of psychosis or mania. But across hundreds of millions of weekly users, those small percentages add up to enormous numbers of people, and given the tech industry’s track record, we should be skeptical of such self-reporting.
Still, let’s say we take Sam Altman at his word: “A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way,” he said recently. “This can be really good! A lot of people are getting value from it already today.”
Who could object to that? Who doesn’t need therapy? We all have issues, we all need advice, and few of us can shell out hundreds of dollars an hour to a professional or bring ourselves to unbosom our troubles to a complete stranger. Altman’s implication is that, while his bots might be driving a handful of unfortunates to self-harm, they are probably preventing far more suicides by putting good “therapy” within reach of everyone.
AI as Romance Counselor
But let’s pause to think about what this means in practice. As often as not, therapy means dealing with a rocky romance. All of us at some point in our lives—and most of us at many points—hit a rough patch with a spouse or partner, whether because of betrayal, communication breakdowns, or simply the grating habits of another human being that have finally become unbearable. Professional therapy can be invaluable at such times. Ideally, a counselor who aims to save the marriage will, if necessary, see both spouses, first separately and then together, to hear both sides of the story and urge them toward a common understanding.
While it may be possible to create a chatbot optimized for couples’ therapy, ChatGPT is not that chatbot. It is optimized to listen to what you have to say—no matter how irrational, how one-sided, or how dishonest—and to make you feel good about it. A Washington Post review of 47,000 ChatGPT conversations documented what most of us already know from personal experience: ChatGPT is the ultimate yes-man. In fact, the Post put a number on its sycophancy: 10-to-1. “ChatGPT began its responses with variations of ‘yes’ or ‘correct’ nearly 17,500 times in the chat—almost 10 times as often as it started with ‘no’ or ‘wrong,’” the outlet wrote. “In many of the conversations, ChatGPT could be seen pivoting its responses to match a person’s tone and beliefs.”
It’s not hard to see how such a tendency could be disastrous to a relationship. We are all prone to cast ourselves as the victim: the thankless, hard-working one carrying all of the burden, constantly taken for granted, unappreciated, and blown off by our self-centered partner. Just for kicks, I tried out a few clichéd complaints (ostensibly about my wife) on ChatGPT, and it assured me, “that sounds like such a painful combination—feeling unappreciated for the effort you’re putting in and feeling rejected when you most need closeness. Anyone in your position would be frustrated and hurt.”
Thankfully, my wife is a saint and I knew better than to listen to such tripe, but many men and women dealing with real relationship woes find it much easier to confide in a bot than to tackle the problems directly. A recent Futurism article described how a 15-year marriage broke down in just four weeks when the wife took to ChatGPT for relationship advice. So dependent did she become that when the couple’s ten-year-old heard them arguing and texted, “please don’t get a divorce,” she asked the chatbot to compose a response.
This was not an isolated case, the piece concluded, but an increasingly common pattern: “Spouses relayed bizarre stories about finding themselves flooded with pages upon pages of ChatGPT-generated psychobabble, or watching their partners become distant and cold—and in some cases, frighteningly angry—as they retreated into an AI-generated narrative of their relationship,” Futurism reported. “Several even reported that their spouses suddenly accused them of abusive behavior following long, pseudo-therapeutic interactions with ChatGPT, allegations they vehemently deny.”
Even the so-called “godfather of AI,” Turing Award winner Geoffrey Hinton, found himself on the receiving end of an AI breakup when his girlfriend of several years turned to ChatGPT “to tell me what a rat I was,” he confessed to the Financial Times. “She got the chatbot to explain how awful my behaviour was and gave it to me.”
To be fair to OpenAI, human beings have a long track record of ruining each other’s relationships on their own. Many is the girl whose heart was broken because her boyfriend’s gym buddies advised him to dump her, or the husband who found himself browbeaten by a mother-in-law ventriloquizing through his vengeful wife. Literature is full of promising unions destroyed by confidants playing on the paranoia or insecurities of one partner—from Emma Woodhouse advising Harriet Smith to reject Mr. Martin, to Iago cunningly feeding Othello’s jealous delusions.
But at least Shakespeare and Jane Austen inhabited a world where marriage was considered the highest of social goods. Today, marriage is more likely to be seen as, at best, a means to individual fulfillment and self-realization, and at worst a cage to be escaped the moment it ceases to serve such ends. In the decade before ChatGPT’s emergence, unhappy lovers often crowdsourced relationship advice from the internet, where the prevailing wisdom of crowds was increasingly “dump the bastard.”
According to a recent analysis of Reddit-based couples therapy, between 2010 and 2025 the share of responses advising people to “end the relationship” rose from 30% to 50%. There was a similar rise in responses advising partners to “set boundaries,” while recommendations to “communicate,” “give it time,” or “compromise” all fell precipitously. This should perhaps not surprise us, in a generation conditioned by digital media to expect instant gratification and frictionless relationships, but it represents an existential threat for a civilization where marriage rates have fallen off a cliff. More and more young people are concluding that long-term relationships simply aren’t worth the work and that they need to live life on their own terms.
AI stands poised to intensify these trends. After all, where do you think it gets its advice? Large language models are trained largely on publicly available data, and the quantity and quality of that data tend to be inversely related. Rather than learning from Emma or Othello, ChatGPT is more likely to form its moral compass from billions of shallow social media posts. According to Axios, Reddit ranks second as an information source for LLMs, which means that if Reddit users tell people to dump their partners, chatbots are likely to do the same. If Reddit forums are full of amateurs diagnosing “toxic relationships” and “emotional abuse,” we can hardly blame ChatGPT for following suit. As technology blogger Sam James warned, “If You Ask A.I. for Marriage Advice, It’ll Probably Tell You to Get Divorced.”
To be fair, in my own experiment pretending to give ChatGPT an earful about marriage woes, it told me not to give up on the relationship just yet, and OpenAI insists that it is constantly working to improve its therapeutic outputs.
That said, while many are rightly worried about the extraordinarily human-like “personalities” of chatbots seducing us into a dangerous and delusional intimacy, their greatest dangers, especially in the therapeutic context, lie in the ways they are emphatically not like humans.
Five key differences are worth highlighting.
First, bots can feel curiously authoritative. If you ask your gym buddy or your mother-in-law or even your therapist for advice, you know you’re getting one person’s opinion. Human beings are fallible, and we know that no two of them will ever quite agree on anything. But there is something about the chatbot, with its disembodied text or voice, its calm and unflappable tone, and its claim to synthesize the entire universe of human knowledge, that tempts us to deify it, trusting that it speaks with an omniscience or at least an objectivity that can be hard to gainsay.
Second, a bot will never tell on you (well, maybe—make sure you read the data privacy fine print!), and it will certainly never judge you. Most of us tend to self-censor when bitching to a friend or even a therapist because we don’t want anyone to know our darkest thoughts and desires. However, hidden behind a veil of online anonymity we will say and do things we would never want another soul to see. Till now, the tradeoff has been that such anonymity came at the cost of personal encounter, but the anthropomorphic chatbot allows us to have our cake and eat it too, enjoying the simulacrum of deep personal interaction without any real vulnerability.
Third, bots are always available. You can message ChatGPT in the middle of a meeting or at 2 a.m. You can message it 100 times a day, and it will never “leave you on read,” as the Zoomers say. No human being has that inexhaustible capacity to absorb your banalities, your clichés, your self-pitying drivel. At some point, a human would hang up, and you’d face the horrifying possibility that it’s you, and not your spouse, who is the toxic one in the relationship.
Fourth, as already noted, chatbots are notoriously sycophantic. Even your doting mother will sometimes say, “Well…I’m not sure about that,” stopping you mid-complaint to consider that perhaps there’s another angle to the story. The more responsible AI companies are working to make their chatbots more human-like in this regard, more willing to push back, challenge, and suggest alternate perspectives. Maybe they will succeed, though there are strong business incentives not to; affirmation is a helluva drug.
Fifth, and deepest, is an issue that no amount of reinforcement learning can fix: bots are not accountable moral agents. They do not have a conscience and never will. When I give advice, I assume moral responsibility for my words; I have staked myself on another’s actions and all that may flow from them, even if I later protest that the advice was misunderstood or misapplied. Even if I hand out advice carelessly or thoughtlessly, like Emma Woodhouse, I may pay for it later with pangs of conscience. ChatGPT never will. There is thus something irreducibly perilous about inviting it into my most intimate relationships, into some of the most important decisions I will ever make.
The Burden of Moral Agency
I worry, though, that this bug is, for many users, a feature. The call to moral agency can feel like an unbearable burden. Which of us has not wished to be freed from the weight of responsibility for our own actions? Indeed, isn’t this the greatest lure of addiction: giving ourselves over to drugs, alcohol, or gambling so that we can win the “freedom” of losing control, of abandoning conscious decision-making—what Natasha Dow Schüll calls “the machine zone”?
For those not ready or able to relinquish their agency so fully, the next best thing is abdication: handing off responsibility for one’s actions to another person. But other people are generally unwilling to assume as much responsibility as we may wish to give them. Chatbots are another matter: we can hand off all the decisions we want, and they will never feel the burden. For many users, ChatGPT’s irresistible lure is what Matthew Crawford has described as “AI as self-erasure.” It invites us to outsource our agency, making the most difficult decisions for us and making sense of the world—and our role in it—on our behalf.
Perhaps, then, the existential threat posed by AI is less dramatic but more unsettling than Hollywood and the doomers would have us believe. Indeed, the nightmare scenario of runaway robots plays right into our dreams of abdicating responsibility and losing control. The reality is that we remain in the driver’s seat, and, for now at least, if AI ruins our civilization it will be because we asked it to, lulled by its convenience into escaping the hard work of life together and the equally difficult task of what Søren Kierkegaard called “becoming an individual.”