Why Palantir should be kept out of healthcare

Mental health runs on trust, not surveillance. As Palantir enters healthcare systems, therapists face a chilling effect that undermines honesty, data quality, and care itself. Why efficiency may be the wrong goal.

In the past year, headlines about Palantir Technologies have started to bleed out of defence journals and Silicon Valley earnings calls and into places they haven't been before. Health policy briefings. NHS procurement meetings. Picket lines.

Why? Because the UK government awarded Palantir, an American company with a history of building surveillance software and AI-powered military targeting systems, an enormous contract to manage a chunk of the NHS. Yep, a company that cut its teeth in military and intelligence work is now being given access to medical records, patient data, hospital bed numbers and prescriptions.

Why Palantir? Well, there's a giant political shitstorm around that question - one involving the disgraced politician Peter Mandelson. But that aside, the 'logic' is familiar to anyone who has spent time in healthcare.

Mental health systems are collapsing under their own weight. Waitlists stretch from weeks into months. Clinicians drown in paperwork. Patients fall through cracks that everyone agrees shouldn't exist.

Enter the silver bullet: "efficiency." Better data. Smarter systems. Platforms that promise to see patterns humans can't, to triage faster, allocate resources more rationally, and finally bring order to chaos.

And then you notice what kind of company is saying it can fix it. Palantir is best known not for healing, but for hunting. Its software was built to identify insurgents, map social networks, and surface hidden threats across massive datasets.

It was refined in Iraq and Afghanistan, deployed in counterterrorism operations, and later integrated into domestic policing and immigration enforcement. Yes, it powers a lot of ICE's operations in America. This is not a secret. It's the company's origin story, proudly told.

Now that same infrastructure is being positioned-sometimes quietly, sometimes triumphantly-as the nervous system for public health.

On paper, the logic is hard to argue with. If you can track battlefield dynamics in real time, surely you can optimize hospital beds. If you can predict unrest, surely you can predict patient flow. This is the seductive promise of big data: context doesn't matter, only inputs and outputs.

But mental health care is not logistics. It is not counterinsurgency. And it is not a problem of insufficient surveillance. Healthcare has been a punching bag for 'big data' thinking for decades - see my piece on what happened when private equity moved into care provision and deployed similar tactics.

This is a terrible idea. Why? Because trying to extract more value from mental health data will destroy the conditions that make that data truthful in the first place. The result isn't smarter care. It's distorted care-optimized on paper, hollow in practice.

What's unfolding right now isn't just a procurement controversy or a tech ethics debate. It's a collision between two worldviews: one built to identify threats, and one built to hold pain. And the space where they meet - in patients, in clients, in the minds of people who are suffering - is far less resilient than policymakers seem to believe.

That's why I'm going to argue that Palantir should be kept out of mental health care.

The "Toxic Origin" Problem - Why Palantir Can't Be Neutral Here

There's a comforting myth in tech policy circles that tools are neutral-that context can be stripped away, that software can be repurposed without residue. Scholars have debunked this idea again and again, but the argument persists.

And Palantir Technologies wants you to believe that idea too: that the systems it has built, how it uses data, and how much it charges for the privilege are strictly transactional matters. But Palantir carries a story that is unusually difficult to sanitise.

Palantir's flagship platform for commercial partners is Foundry. It did not emerge from a hospital ward or a public health lab. The company was shaped by war. Palantir has long emphasized that its tools were refined through deployments in Iraq and Afghanistan, where they were used to integrate disparate datasets-communications, locations, social ties-to identify insurgent networks. The core value proposition was clear: make hidden patterns visible so targets could be acted upon.

That DNA matters when the same platform is repositioned as the backbone of civilian healthcare infrastructure.

The controversy surrounding Palantir's £330 million contract to run the NHS Federated Data Platform brought this tension into the open. The programme, designed to bring disparate data from across the health service into one central repository, was challenged not just by activists but by the BMA, the trade union for doctors.

It publicly described Palantir as an "unacceptable choice," urging health institutions to cut ties with a company whose primary business has been supporting war-related and security missions. More than 700 clinicians and academics signed letters raising concerns about trust, consent, and reputational harm.

At the same time, Foxglove Legal launched legal action, arguing that the government had no lawful basis for the scale of data sharing implied by the platform. Their reporting highlighted not just privacy risks, but structural ones: opacity in governance, weak patient consent mechanisms, and the normalization of a surveillance-first mindset inside public health.

Even some NHS hospitals quietly opted out. Reporting revealed that trusts such as Leeds Teaching Hospitals declined to use the system at all, describing it as a "retrograde step" that offered less functionality than existing tools while introducing far greater ethical complexity. Why? There are many reasons, but the overriding sentiment behind this resistance is the same: a private company that specialises in "kill chains" is not a good look.

In the United States, Palantir's work with U.S. Immigration and Customs Enforcement has been widely documented. Its software has supported deportation operations and immigration enforcement-systems that, for many communities, are synonymous with fear, family separation, and state violence. You don't need to stretch the argument very far to see the problem this creates in mental health settings.

If you are undocumented, or the child of immigrants, or someone whose life has already been shaped by surveillance, what does it mean to learn - explicitly or implicitly - that the same company handling your therapy notes also helps identify people for removal? Even if no data is ever shared. Even if every firewall holds.

Trust does not operate on legal assurances alone. It operates on perceived alignment.

Mental health care asks patients to do something profoundly unnatural: to reveal thoughts they actively hide from others, sometimes even from themselves. That disclosure depends on the belief that the system receiving it is fundamentally oriented toward care, not control. When that belief erodes, engagement doesn't fail loudly. It degrades quietly. Appointments are missed. Answers become vague. The most sensitive truths never make it into the record.

This is why Palantir's "toxic origin" problem isn't about punishing a company for its past. It's about recognizing that you cannot simply transplant surveillance infrastructure into therapeutic contexts and expect human behavior to remain unchanged. The tool may be powerful. The data may be encrypted. But the association alone is enough to alter outcomes.

And once that happens, the system stops seeing what it thinks it's optimizing.

The "Chilling Effect" - When Data Collection Changes the Data Itself

By the time most debates about health data reach clinicians, they've already narrowed into something technical: encryption standards, access controls, governance boards. Important questions, yes, but beside the point. The real damage happens earlier, and far more quietly, at the level of human behavior.

Social scientists have a name for this phenomenon: the chilling effect. It describes what happens when people change what they say or do because they believe they are being observed. It's been studied in journalism, law, political organizing, and surveillance research for decades. And it is uniquely lethal to care.

Mental health care depends on radical honesty. Not just accuracy, but completeness. The things patients are most ashamed of-the intrusive thoughts, the impulses they don't act on, the ambivalence about their own children, the fear that they might be a danger to themselves or others-are often the very data points clinicians need most. These are not details people share casually. They are offered only when the environment feels contained, predictable, and fundamentally protective.

Introduce even a hint of downstream risk, and that honesty fractures.

This is where the conversation moves beyond abstract privacy and into outcomes. When patients start wondering how their words might travel-whether a note could one day be read by an insurer, surfaced in a custody dispute, flagged by an algorithm, or simply exist inside a system they associate with surveillance-they don't protest. They self-censor. They choose safer language. They omit.

The result looks like compliance but functions like withdrawal.

We've already seen this dynamic play out at scale. In the UK, during periods of heightened concern about health data sharing, more than a million people opted out of the NHS's data-sharing framework in a single month through the National Data Opt-Out. This wasn't a fringe movement. It was a mass behavioral response to perceived risk.

What's often missed is what that opt-out represents from a data science perspective. It doesn't remove people evenly. It removes the most wary. The most marginalized. Those with the most to lose if their information is misunderstood or misused. Survivors of abuse. Undocumented patients. People with complex trauma histories. The exact populations most in need of mental health care. What's left behind is not "cleaner" data. It's distorted data.

This is the paradox big-data systems struggle to acknowledge. The more aggressive the data extraction, the less representative the dataset becomes. Models trained on incomplete disclosures and selective participation don't just become biased-they become confidently wrong. They optimize for the people least affected by surveillance and miss the ones most harmed by it.
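To make the selection problem concrete, here is a minimal, purely illustrative simulation (the numbers and the opt-out model are assumptions, not NHS figures): a population in which the people with the greatest underlying need are the most likely to withhold their data, and the skewed average the system is left with.

```python
import random

random.seed(0)

# Assumed toy model: "need" varies across a population, and the probability
# of opting out of data sharing rises with need (the most wary and most
# vulnerable withdraw first). None of these numbers are real NHS data.
population = [random.gauss(5.0, 2.0) for _ in range(100_000)]

def opts_out(need: float) -> bool:
    p = min(0.9, max(0.05, 0.1 + 0.08 * need))  # higher need -> higher opt-out chance
    return random.random() < p

observed = [need for need in population if not opts_out(need)]

true_mean = sum(population) / len(population)
observed_mean = sum(observed) / len(observed)

print(f"True average need:        {true_mean:.2f}")
print(f"Average need in the data: {observed_mean:.2f}")
print(f"Records lost to opt-outs: {1 - len(observed) / len(population):.0%}")
# The remaining dataset is not a smaller copy of reality: it systematically
# understates need, and anything trained on it inherits that distortion.
```

Any model fitted to the `observed` records will look confident and well-behaved on the people who stayed, which is exactly the failure mode described above.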

Human rights organizations have warned about this dynamic for years. Amnesty International has documented how surveillance technologies suppress speech and deter vulnerable groups from seeking help, reporting harm, or engaging with public services. The chilling effect doesn't require misuse. It only requires fear.

In therapy, fear doesn't announce itself. It hides behind politeness. A patient still shows up. They still answer questions. But the session flattens. The edges disappear. Clinicians may sense something is missing without being able to name it. From the system's point of view, everything looks fine. Notes are filled. Metrics are met. But the signal has already degraded.

This is why framing the debate as "privacy versus innovation" misses the stakes entirely. The issue isn't whether data is technically secure. It's whether people believe it is safe enough to be truthful. And once that belief erodes, no amount of machine learning can reconstruct what was never said.

Mental health data is not raw material waiting to be refined. It is volunteered under specific emotional conditions. Change those conditions, and the data changes too.

That's the chilling effect. And in a field where silence is often the symptom, it's a risk we can't afford to treat as theoretical.

Contextual Integrity - Why This Feels Wrong Even When the Data Is "Secure"

When clinicians say they're uneasy about Palantir, they're often met with reassurances that sound technically airtight. The data is encrypted. Access is role-based. There are oversight committees. No one is "reading therapy notes."

And yet the discomfort persists. To understand why, it helps to move away from the language of breaches and leaks and toward a framework that explains moral intuition-not just legal risk. That framework comes from Helen Nissenbaum, a professor at Cornell, whose work has become foundational in modern privacy ethics.

Nissenbaum's central idea is called contextual integrity. Its premise is deceptively simple: privacy is not about secrecy. It's about appropriate flow.

Information is meant to move, but only within contexts that give it meaning. You expect your therapist to know about your intrusive thoughts. You expect your doctor to see your medication list. You expect a general to know troop movements. Each of these flows makes sense within its own domain.

Problems arise when data migrates across contexts without consent or moral alignment.
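As a rough illustration of what "appropriate flow" means, the sketch below encodes an information flow using the parameters Nissenbaum's framework works with (sender, recipient, data subject, information type, transmission principle) and checks it against the norms of the context it came from. The contexts, norms, and labels are invented for the example; they are not a formal model anyone has agreed on.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, parameterised roughly as contextual integrity does."""
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str

# Illustrative (made-up) norms for the therapeutic context: who may receive
# what kind of information, and under which transmission principle.
THERAPY_NORMS = {
    "allowed_recipients": {"treating clinician", "clinical supervisor"},
    "allowed_principles": {"informed consent", "duty of care"},
}

def respects_context(flow: Flow, norms: dict) -> bool:
    # A flow is "appropriate" only if it stays within the originating context's norms.
    return (
        flow.recipient in norms["allowed_recipients"]
        and flow.transmission_principle in norms["allowed_principles"]
    )

in_context = Flow("patient", "treating clinician", "patient",
                  "therapy notes", "informed consent")
cross_context = Flow("health platform", "analytics vendor", "patient",
                     "therapy notes", "contractual data processing")

print(respects_context(in_context, THERAPY_NORMS))     # True
print(respects_context(cross_context, THERAPY_NORMS))  # False: same data, perhaps
# the same encryption, but the flow has left the context that gave it meaning.
```

The point of the toy check is not that privacy can be reduced to a lookup table; it's that the question it asks - is this flow appropriate to the context it came from? - is a different question from the one encryption answers.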

Mental health exists within what Nissenbaum would call a deeply bounded context. The norms governing it are shaped by vulnerability, asymmetry of power, and an explicit promise of care. Surveillance and national security exist in a radically different context-one defined by suspicion, threat detection, and preemptive action.

When a platform designed for the latter is inserted into the former, something essential breaks-even if no individual data point is ever misused. This is why assurances about encryption fail to reassure patients. Encryption speaks to security. Contextual integrity speaks to meaning. And meaning is what people respond to emotionally.

A therapy note processed by a system associated with border enforcement, counterterrorism, or predictive policing doesn't feel like it's staying "in bounds," even if it technically is. The flow feels wrong. The context has been breached.

This also explains why transparency alone doesn't solve the problem. In some cases, it makes it worse. The more patients learn about where their data travels, who builds the infrastructure, and what else that infrastructure has been used for, the more sharply the contextual mismatch comes into focus. It's not paranoia. It's pattern recognition.

This framework is particularly important for therapists to understand, because it clarifies something clinicians intuitively grasp but rarely have language for: trust is not binary. It is contextual. Patients may trust you while distrusting the system you work inside. They may trust care while distrusting computation.

And once contextual integrity is violated, it cannot be repaired with better messaging. You can't explain away a category error.

This is why debates that frame opposition as "anti-tech" or "anti-innovation" feel so disconnected from the clinical reality. The objection isn't to data use per se. It's to context collapse-the flattening of distinct moral domains into a single data pipeline optimized for efficiency.

Mental health care is not just another vertical. It is a protected moral space. When data from that space is handled by institutions built for fundamentally different purposes, patients sense the incongruity immediately, even if they can't articulate it in policy terms.

Nissenbaum puts it succinctly: privacy is the right to appropriate flow of information. In therapy, appropriateness isn't defined by what is technically possible. It's defined by what preserves safety, dignity, and trust.

And once that flow feels inappropriate, patients don't wait for a breach. They adapt. They withhold. They go quiet. By the time policymakers notice something is wrong, the damage has already been done-not to data security, but to the therapeutic relationship itself.

The "Hotel California" Problem - How Vendor Lock-In Becomes a Clinical Risk

Even if you set aside trust, symbolism, and ethics, there's a colder argument against embedding companies like Palantir Technologies at the core of mental health infrastructure: once you're in, you may never get out. Economists call this vendor lock-in. Clinicians experience it as inevitability.

Platforms like Palantir's Foundry don't just store data. They ingest, clean, normalize, and restructure it according to proprietary schemas. Over time, entire health systems begin to operate around these internal logics-dashboards, workflows, predictive models-all optimized for a single vendor's ecosystem. The data may technically belong to the public, but its usable form does not.
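A compressed, entirely hypothetical sketch of how that dependency accrues (neither schema below represents Foundry's or any other real product's data model): the same clinical fact expressed as an open, FHIR-style record versus a platform's internal "normalised" object, and the reverse-engineering that exporting it would require.

```python
# Hypothetical illustration only: both schemas are invented for this sketch.

# An open, standards-flavoured record (FHIR-style) that any system can read.
open_record = {
    "resourceType": "ServiceRequest",
    "status": "active",
    "subject": {"reference": "Patient/123"},
}

# The same fact after a platform has ingested, cleaned, and "normalised" it
# into its own internal ontology: renamed fields, re-coded values, identifiers
# that only resolve inside the vendor's ecosystem.
vendor_record = {
    "ontology_object": "referral_v3",
    "state_code": 7,                       # 7 means "open" in an internal lookup table
    "entity_ref": "ont://person/abc-999",  # vendor-internal ID, no longer Patient/123
}

def export_to_open_standard(record: dict) -> dict:
    """What leaving actually requires: reconstructing every mapping the platform
    applied on the way in, for every data type, across years of history."""
    state_lookup = {7: "active"}                       # has to be rebuilt field by field
    internal_id = record["entity_ref"].split("/")[-1]  # still needs re-linking to NHS IDs
    return {
        "resourceType": "ServiceRequest",
        "status": state_lookup[record["state_code"]],
        "subject": {"reference": f"Patient/{internal_id}"},
    }

print(export_to_open_standard(vendor_record))
```

Multiply that by every table, every derived metric, and every dashboard built on the vendor's side of the mapping, and an exit clause on paper starts to look very different from an exit in practice.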

This is where the "Hotel California" metaphor stops being cute and starts being dangerous.

Once years of clinical, operational, and population-level data are transformed inside a closed system, switching providers becomes prohibitively expensive. Not just financially, but operationally. Staff are trained. Processes are rewritten. Historical comparisons depend on the platform's tools. Leaving would mean rebuilding the nervous system of the organization from scratch.

For public mental health systems, this creates a structural power imbalance. A private company gains long-term leverage over pricing, terms, and direction, while the public sector absorbs the risk. What begins as a contract quietly becomes dependency.

Legal advocates have been explicit about this risk. Foxglove Legal has warned that the NHS Federated Data Platform could entrench a de facto monopoly, locking the health service into a single vendor for decades. Their analysis points out that while contracts may have exit clauses on paper, the practical reality of extracting and re-platforming data at scale makes those clauses almost meaningless.

This isn't just an economic concern. It becomes a clinical one the moment systems are optimized around what the platform can measure rather than what care requires.

Mental health outcomes are notoriously difficult to quantify. Progress is nonlinear. Setbacks are common. The most important changes - trust, insight, emotional regulation - don't always show up cleanly in dashboards. When a dominant vendor defines which metrics are easy to track and which are not, those priorities subtly shape practice. Over time, what can be measured becomes what matters.

There's also a governance problem embedded here. Open-source or interoperable systems allow for public scrutiny, academic evaluation, and iterative improvement across institutions. Proprietary platforms centralize decision-making inside corporate roadmaps that are not accountable to patients or clinicians. When concerns arise-ethical, technical, or clinical-there are fewer levers to pull.

For therapists, this can feel abstract until it isn't. Today it's a data platform. Tomorrow it's a mandated workflow. Next year it's an algorithmic risk score that no one in the room knows how to contest, because the logic lives behind a contractual wall.

The irony is that vendor lock-in undermines the very resilience these systems claim to offer. A public mental health service should be adaptable, pluralistic, and capable of change as evidence evolves. Dependency on a single surveillance-adjacent vendor achieves the opposite: rigidity disguised as modernization.

This is why the economic critique can't be separated from the ethical one. When trust erodes and exit becomes impossible, patients and clinicians are left navigating a system they didn't choose and can't meaningfully influence.

Efficiency, in that context, isn't progress. It's entrapment-quiet, contractual, and very hard to reverse.

The Future of Trust in a Data-Driven Mental Health System

The debate over Palantir isn't really about one company. It's about a choice mental health systems are being asked to make: what kind of infrastructure should care rest on?

Right now, the pressure is immense. Demand is exploding. Budgets are tight. Clinicians are burning out. When a platform promises efficiency, coordination, and clarity, it can feel irresponsible not to take it. But mental health care has always been uniquely vulnerable to solutions that look good at scale while eroding something essential up close.

Trust is not a "soft" variable. It is the precondition for everything else. Without it, patients don't disclose fully. Clinicians don't see the whole picture. Data becomes partial, biased, and misleading. Systems optimized on that data then make decisions that appear rational but are built on silence. This is how care becomes technically advanced and clinically thin at the same time.

What this moment calls for is not less data, but different data values. A human-centric data infrastructure would start from the premise that some contexts - therapy chief among them - require stronger boundaries, not looser ones. It would prioritize open standards, interoperability, and public oversight over proprietary advantage.

It would keep healthcare data structurally and symbolically separate from military, intelligence, and enforcement ecosystems, recognising that perception alone can change behaviour. Most importantly, it would treat patient trust as an outcome worth protecting, not collateral damage in the pursuit of efficiency.

Therapists, in particular, have a role to play here. Not as technologists or procurement experts, but as witnesses. You are the first to notice when patients go quiet, when answers flatten, when something unsayable enters the room. Those moments are data too-just not the kind that fits neatly into a dashboard.

The future of mental health care will almost certainly involve more data, not less. But the question is whether that future is built around care or around control. In the rush to make systems faster, smarter, and more "efficient," we should be careful not to make mental health care impossible - by undermining the fragile trust that allows it to work at all.