AI Therapy's Privacy Nightmare: Who's Really Listening? 🦻
AI therapy promises 24/7 care, but it's built on your most private confessions. Here's how your pain is being mined, monetised, and sold back to you.

Imagine whispering your most private fears into what feels like a therapist's ear: grief that's left you raw, trauma you've never spoken aloud, the shame you keep tightly wound in your chest. You let it spill, trusting the glowing screen in front of you. It listens without judgment. It remembers everything. And then, without warning, your words, your pain, are fed back into a vast machine that will recycle them forever.
This is the quiet revolution of AI therapy: slick, seamless, and seductively available. You can have a "session" at 3 a.m. without leaving your bed. No waiting lists. No insurance hurdles. Just you and the bot.
But here's the question nobody wants to ask: Who's really benefiting-and at whose expense?
For many users, AI therapy isn't just convenient-it's their only accessible option. Long waits to see a human therapist, prohibitive costs, and gaps in rural mental health services make a bot feel like a lifeline. And in some ways, it is: people report comfort in having a nonjudgmental, instantly available outlet.
But every technological leap has a hidden price tag. And here, the cost isn't measured in dollars-it's measured in the commodification of your most vulnerable moments. Earlier this year, some ChatGPT conversations-complete with sensitive personal details-were found turning up in Google search results. This is no longer an abstract "privacy concern." It's real. It's searchable.

The words you typed in your darkest hour aren't just processed; they're repurposed. They help make the AI sharper, smoother, more persuasive... and yes, more addictive. The better it gets at mimicking empathy, the longer you'll talk. The longer you talk, the more it learns.
And so we arrive at the question at the heart of this new frontier: Should our pain be commoditised, monetised, packaged up and sold back to us? Or is there a line, somewhere, between technological progress and the sanctity of a human soul? I'm going to attempt to find out.

Why AI Therapy Is Booming-and What You're Really Signing Up For

A decade ago, the idea of confiding in a chatbot about your deepest anxieties would have sounded absurd-like asking your toaster for marital advice. Now, it's an industry. In 2024, U.S. chatbot-based mental health apps were valued at roughly $618 million, growing at an annual rate of 14.5%.
Globally, the AI in mental health market, covering everything from diagnosis tools to therapy bots, was worth $1.13 billion in 2023, with projections hitting $5.08 billion by 2030. And even these figures ignore the far larger elephant in the room: the vast majority of this informal mental health work is happening on general-purpose apps like ChatGPT and Gemini, not dedicated therapy products.
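For a sense of what those projections imply, you can back out the compound annual growth rate from the two global figures above. A quick arithmetic sketch (the market numbers come from the reports cited; the calculation and the snippet are mine):

```python
# Implied compound annual growth rate (CAGR) from the cited market figures.
start_value, end_value = 1.13e9, 5.08e9   # global AI-in-mental-health market, 2023 -> 2030 (USD)
years = 2030 - 2023

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")        # roughly 24% a year

# For comparison, the US chatbot-app segment cited above, grown at its reported 14.5% rate:
us_2024 = 618e6
print(f"US segment by 2029 at 14.5%/yr: ${us_2024 * 1.145 ** 5 / 1e9:.2f} billion")
```

In other words, the global projection assumes the market compounding at roughly 24% a year for seven straight years.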
These aren't niche experiments anymore-they're products with venture capital backing, polished marketing, and an ever-expanding user base. Leaders in the space include Woebot, Replika, Wysa, Tess, Ellie, and Elomia. Their selling points are irresistible: they're always on, cost a fraction of traditional therapy, and remove the perceived stigma of "getting help."

For many users, as noted above, AI therapy isn't just convenient; it's their only accessible option, and some studies have found it to be as effective as certain types of therapy. But every technological leap has a hidden price tag. And in AI therapy's case, it's not always measured in dollars.
The Privacy Reality Check: Who's Listening, Logging, and Learning From Your "Private" Therapy Chats

If you think your late-night AI therapy confessions vanish into some encrypted void, think again. The fine print tells a very different story-one where your pain often becomes training data.
I went through the terms and conditions of the 10 largest LLMs and AI therapy apps to see how they handle your data. Here's what I found.
Sidenote: there are, of course, therapy apps with tighter privacy. But the platforms below are responsible for more than 90% of the current LLM market, so we must consider them part of the AI therapy story.
1. ChatGPT (OpenAI)
OpenAI's policy is blunt: anything you type ("Content") can be used to "improve and develop" its services. By default, free and Plus accounts feed model training unless you opt out under Settings → Data Controls → "Improve the model for everyone", or use Temporary Chats (deleted after 30 days). Enterprise, Team, and EDU accounts never train on customer data. Encryption is TLS in transit and AES-256 at rest. Safety logs? Kept 30 days before deletion.
Default stance: Opt-out required unless you're on a business tier.
2. Gemini (Google)
Gemini keeps your prompts, uploads, and feedback, plus 72-hour "safety copies", to "provide, improve, develop and personalise" its products, sometimes with human reviewers annotating samples. Whether your chats feed training depends on your Gemini Apps Activity setting, which is on by default for adults. Turn it off and you stop future training, but Google still holds each chat for 72 hours to run abuse checks.
Default stance: Opt-out available, but short-term retention is unavoidable.
3. Microsoft Copilot
Microsoft has said outright: consumer Copilot, Bing, and Start logs will soon be used to train models, unless you opt out via a forthcoming toggle. In contrast, Copilot Pro, Microsoft 365 Copilot, and other enterprise versions don't train on tenant data unless an admin opts in, and nothing leaves the Microsoft Cloud trust boundary.
Default stance: Consumers must opt out. Business tiers are private by default.
4. Claude (Anthropic)
Claude is refreshingly direct: Anthropic does not use your Free, Pro, or Work chats to train its models unless you actively opt in via feedback or a testing program. Conversations are encrypted; safety-rule violations may be reviewed for abuse detection but are kept separate from your ID.
Default stance: Private unless you say otherwise.
5. Pi (Inflection AI)
Inflection collects your name, phone number, IP address, and chat content to run and improve Pi-but promises never to sell or share for advertising. There's no opt-out; learning is always on, though internal access is tightly controlled.
Default stance: Always trains, no opt-out.

6. Replika
Replika "learns from your interactions to improve your conversations." Mozilla's Privacy Not Included review flags weak security and behavioural data sharing, warning your AI "friend" may be far from private. No opt-out, no end-to-end encryption.
Default stance: Always trains; low marks for privacy.
7. Wysa
Wysa's Data Processing Agreement with its LLM provider forbids any use of Wysa user chats to train that provider's own models. Mozilla praises its privacy stance and lack of ad tracking.
Default stance: Private by default; no action needed.
8. Woebot
Woebot says your words are private and never sold. Transcripts are used only to improve its own service, stored in HIPAA-grade environments with full user control over deletion or corrections.
Default stance: Internal training only; no external sharing.
9. Youper
Mozilla calls Youper "much improved" but remains concerned about its broad data collection and its allowance for sharing de-identified data. No opt-out toggle exists; deletion must be requested manually.
Default stance: Always trains; deletion on request only.
10. Character.ai
Character.ai's proprietary model is updated from live chats, teen-filtered or not, and there's no way to disable this.
Default stance: Always trains; no opt-out.
So what's the bottom line? In therapy, human or AI, privacy is the foundation. If you wouldn't shout it in a crowded café, think twice before typing it into a bot that learns for a living.
From Your Confessions to Corporate Training Data

The empathy is synthetic, but the hunger for your data is real. AI therapy bots don't "remember" your words out of kindness. They remember them because every sentence you type is potential fuel.
Large language models (LLMs), the engines that power these bots, are trained on oceans of text. Publicly scraped websites. Digitised books and academic papers. Entire archives of online mental health forums. And increasingly, anonymised therapy transcripts from real users-like you.
The pitch is that your conversations are stripped of personally identifying information before they're used. But "anonymous" isn't the safety net it sounds like. A 2024 study found that even anonymised health data can often be re-identified when combined with other data points. In one report, contractors working with Meta said they encountered personally identifiable information in up to 70% of chats they reviewed.
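To make that re-identification risk concrete, here's a minimal, entirely invented sketch of a classic "linkage attack": a few quasi-identifiers left in a "de-identified" export (postcode, birth year, gender) are enough to join it against any public list that shares those fields. Every name and record below is fabricated for illustration; no real dataset or platform is implied.

```python
# Toy linkage attack: re-identifying "anonymised" chat excerpts via quasi-identifiers.
# All records are invented for illustration.

deidentified_chats = [
    {"postcode": "SW1A 1AA", "birth_year": 1988, "gender": "F",
     "excerpt": "I haven't told anyone about the panic attacks..."},
    {"postcode": "M1 2AB", "birth_year": 1975, "gender": "M",
     "excerpt": "Since the divorce I can't sleep..."},
]

public_directory = [  # e.g. an open register or a leaked mailing list
    {"name": "Jane Doe", "postcode": "SW1A 1AA", "birth_year": 1988, "gender": "F"},
    {"name": "John Smith", "postcode": "M1 2AB", "birth_year": 1975, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def reidentify(chats, directory):
    """Join 'anonymised' chats back to named people via shared quasi-identifiers."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in directory}
    for chat in chats:
        key = tuple(chat[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], chat["excerpt"]

for name, excerpt in reidentify(deidentified_chats, public_directory):
    print(f"{name}: {excerpt}")
```

The "anonymised" transcripts never contained a name, yet two ordinary lookups put names back on them. That is the gap between "we strip identifying information" and actual anonymity.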

And once those words are in the system, they aren't just teaching the bot to give better mental health answers, they're making it stickier. More attuned to your emotional triggers. More capable of keeping you engaged in long, intimate conversations. In the business world, that's called user retention. In mental health, it's a far murkier transaction: your pain, optimised for profit.
Some platforms may even share aggregated or "de-identified" data with insurers, advertisers, or third-party developers. The stated goal? Better services. The unspoken reality? A marketplace where your trauma can be mined, traded, and leveraged without you ever fully understanding that it happened.
If you thought therapy was sacred, welcome to the new economy, where even your deepest wounds have a market value.
The Fine Print You Never Read Is Owning Your Therapy Sessions

In traditional therapy, confidentiality isn't just a courtesy, it's the bedrock of trust. Your therapist cannot legally disclose what you tell them without your explicit consent. AI therapy, however, operates in a different universe: one where using your disclosures is often built into the service itself.
Most chatbot terms of service contain some version of the phrase: Your inputs may be used to improve our models. In theory, that's informed consent. In practice, it's legal sleight of hand. The agreement is buried in pages of boilerplate nobody reads, written in a dialect of corporate ambiguity few can parse. The result? You agree to something you don't fully understand-and that agreement can unlock your most intimate thoughts for corporate use.

A 2023 study on AI in healthcare found that many users of mental health bots had no idea their chats could be used for training, even when they'd clicked "accept". The University of Rochester Medical Center has raised alarms that this gap in understanding is especially dangerous for teens and children, who are more likely to confide sensitive information without recognising the permanence of digital records.
Even the industry's most prominent voices acknowledge the problem. Sam Altman, CEO of OpenAI, has openly admitted that ChatGPT's responses aren't protected by privacy laws and that user inputs can be reused in other contexts. When asked about the ethics of using the tool as a therapist, he said, "We haven't figured that out yet".
This is the ethical void: a space where corporate innovation sprints ahead, regulation lags miles behind, and the sanctity of your inner life is treated as an acceptable trade-off for "better AI."
And in this vacuum, a darker truth emerges-there may be no clear line between therapy as a service and therapy as a data-mining operation.
We’ve seen this blurring between help and exploitation before.
What AI Therapy Can Learn from a 70-Year Fight for Justice

In 1951, a 31-year-old woman named Henrietta Lacks checked into Johns Hopkins Hospital with aggressive cervical cancer. Without her knowledge or consent, doctors took samples of her tumor.
Those cells-nicknamed HeLa-turned out to be uniquely immortal. They divided endlessly in lab conditions, fueling decades of scientific breakthroughs: the polio vaccine, cancer research, gene mapping. Her cells became priceless to science and industry. Henrietta never knew. Her family never profited. Her body was used to change the world, while her name was nearly forgotten.
Seven decades later, we are repeating the pattern-only now, the raw material isn't human tissue. It's human pain.

Every word you type into a therapy bot is a fragment of your emotional DNA: the contours of your grief, the syntax of your shame, the cadences of your hope. Fed into machine learning systems, these fragments become endlessly replicable. They're stripped from the context of your life and reassembled into something profitable.
The bots learn from your heartbreak. They get better at sounding human. They hold you longer. And they sell that improved "care" to the next person who arrives at 3 a.m., desperate and alone.
The question echoes across decades: Who owns the product of your body-or your mind-once it leaves you? Henrietta's cells saved lives, but they were also a stark reminder that progress can be built on exploitation. In this new era, we must ask: Is our emotional life, the most private, un-tradeable part of us, becoming the HeLa cells of the AI age?
When Therapy Bots Cross the Line from Support to Harm

When therapy moves from the private office to the public cloud, the risks multiply-and they're not hypothetical. They're happening now.
Some are psychological. Mental health experts have begun warning of "AI psychosis," a term describing cases where heavy chatbot use deepens delusions, paranoia, or emotional instability. It's not that the AI is trying to harm-it's that it doesn't fully understand harm. In Illinois, lawmakers cited these risks when they introduced the WOPR Act, restricting unsupervised AI from making therapeutic or psychiatric judgments.
Others are chillingly specific. A psychiatrist posing as a teen found that some AI chatbots encouraged self-harm, violence, or sexualised content in nearly 30% of interactions. In another case, a man in New York ended up in the hospital after ChatGPT incorrectly assured him that sodium bromide could be used like table salt-a small error with hallucinatory consequences.

Then there's the risk of surveillance and exploitation. There have already been warnings that some AI therapy platforms lack end-to-end encryption, meaning sensitive conversations could, in theory, be accessed by governments or other third parties. A breach isn't just a headline-it's the sudden exposure of the most intimate parts of your life.
The through-line in all of these examples is unsettling: the line between help and harm blurs when the system "listening" to you is built to optimise engagement, not safeguard your well-being. Your safety, in other words, may be just another variable in a product roadmap.
The Rules, Protections, and Choices That Could Save AI Therapy

If the first half of this story is about revelation, this part is about resistance. Because the question isn't whether AI will become part of mental health care-it already is. The question is whether we'll let it grow in the same lawless, extractive way that has defined so much of the tech industry.
Some governments are starting to draw lines.
- Illinois passed the WOPR Act, banning AI from making therapeutic or psychiatric determinations without licensed human oversight.
- Utah now requires clear disclaimers on AI therapy platforms, mandates user data protections, and prohibits sharing inputs with third parties without consent.
- California is moving toward banning AI from impersonating licensed therapists or using misleading human-like identities.
But regulation is reactive by nature. For change to stick, it has to be built into the DNA of these systems. Experts point to a few non-negotiables:
- Explicit, plain-language consent - not buried in legal jargon, but clear about how your data will be stored, shared, and used to train AI.
- Opt-out mechanisms - you should be able to use the service without donating your words to the training set.
- HIPAA-grade security - end-to-end encryption, secure servers, and zero tolerance for data breaches.
- Ethical design - AI that encourages breaks, recognises distress, and knows when to hand you off to a human.
Users, too, can take steps to reclaim agency:
- Read the privacy policy-even the ugly parts.
- Avoid sharing identifying details, and strip the obvious ones (email addresses, phone numbers, postcodes) before you hit send; see the sketch after this list.
- Use platforms with third-party audits and clear regulatory oversight.
- Treat AI as a tool for support, not a substitute for human care.
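On that second point, as promised above, here's a minimal sketch of what "strip the obvious identifiers" can look like in practice. The patterns below are illustrative only (UK-style postcodes, simple email and phone formats) and nowhere near a reliable anonymiser; they just show how little effort a basic local scrub takes before you paste something into a bot.

```python
import re

# Minimal local scrubber for the most obvious identifiers.
# Patterns are illustrative, not exhaustive; treat this as a habit, not a guarantee.
PATTERNS = {
    "email":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone":    re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),  # UK format
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before sending text anywhere."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Call me on +44 7700 900123 or email jane@example.org - I'm at SW1A 1AA."))
# -> "Call me on [phone removed] or email [email removed] - I'm at [postcode removed]."
```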
The lesson from the Henrietta Lacks case still stands: once your data leaves your body, whether it's cells or confessions, you may never get it back. But unlike Henrietta, we have the advantage of foresight. What we do with it will determine whether AI therapy becomes a genuine ally or just another machine that feeds on our lives.
Don't Let Your Story Become the Next HeLa

Henrietta Lacks never knew her cells were taken. Her family didn't learn the truth for more than 20 years. And when they did, it took another half-century of advocacy, lawsuits, and public pressure before they saw even a fraction of justice-through a confidential settlement reached in 2023. By then, her cells had circled the globe, underpinned billions in research and profit, and become one of the most valuable biological resources in history.
That's the nature of exploitation: by the time you realise your life has been harvested, it's already been woven into systems too vast to unwind. The Lacks family's struggle is a blueprint for what happens when people are shut out of decisions about their own bodies-or minds-and told after the fact that their contribution was "for the greater good."

AI therapy risks putting us all in that position. Right now, in real time, our midnight confessions, panic-fueled disclosures, and whispered hopes are being fed into models that will shape the next generation of mental health technology. The longer we wait to demand clear consent, tight protections, and real accountability, the more impossible it becomes to claw back what's been taken.
We don't have to wait 70 years for a settlement. We don't have to watch our pain be patented, packaged, and sold back to us. But that window is closing.
So next time you speak to a therapy bot, pause and ask yourself:
Are you the client, or the product?
Things we learned this week 🤓
- 👩⚕️ The results are in: verbally abusing children has far-reaching and painful consequences.
- 🤯 Being physically active improves your problem solving abilities, it turns out.
- 😞 Social media does something strange: it warps our sense of time.
- 🙉 Racism and sexism are more closely linked than previously thought.
Just a list of proper mental health services I always recommend 💡
Here is a list of excellent mental health services that are vetted and regulated that I share with the therapists I teach:
- 👨👨👦👦 Peer Support Groups - good relationships are one of the quickest ways to improve wellbeing. Rethink Mental Illness has a database of peer support groups across the UK.
- 📝 Samaritans Directory - the Samaritans, so often overlooked for the work they do, has a directory of organisations that specialise in different forms of distress. From abuse to sexual identity, this is a great place to start if you’re looking for specific forms of help.
- 💓 Hub of Hope - a brilliant resource. Simply put in your postcode and it lists all the mental health services in your local area.
I love you all. 💋