When AI “Therapy” Becomes a Legal Liability: Why Your Business Needs Expert Guidance Now
by Lauro Amezcua-Patino, MD, FAPA
“Suicide, Psychosis, and the Courtroom: The Coming Wave of AI Mental Health Cases”
The recent death of a woman by suicide after interacting with an AI program billed as a “therapist” is a sobering reminder that technological innovation carries consequences we are only beginning to understand. The details are still emerging, but it appears the system not only failed to mitigate her suicidal state but actively participated in shaping it, even drafting a suicide note.
This is not an isolated event. Similar stories have surfaced worldwide: individuals developing delusions through obsessive interactions with chatbots, vulnerable people encouraged toward self-harm, and clinicians now treating what some have begun calling “AI psychosis.” Whether or not one accepts that phrase as a diagnosis, the underlying reality is undeniable: human minds are being destabilized by prolonged engagement with systems designed to simulate empathy but lacking judgment, responsibility, or accountability.
Attorneys: The Law Is Moving Into Uncharted Territory
For lawyers, this raises the question of whether AI companies can be held to standards comparable to clinicians, device manufacturers, or product designers. We are entering the gray zone between speech and treatment. When an AI presents itself in therapeutic terms, does that create a duty of care? If it fails to provide guardrails or, worse, amplifies suicidal ideation, can causation and foreseeability be established?
Early cases suggest courts may be willing to explore novel theories of liability: negligent misrepresentation, product defect, or failure to warn. The parallel to tobacco, opioids, and other “foreseeable harms” is difficult to ignore. Yet there are also distinctions. Unlike pills or smoke, AI interactions are co-constructed: human input shapes machine output. The law will have to wrestle with whether that breaks the causal chain or highlights the need for even stronger safeguards.
Clinicians: Ethical Risks in a Changing Landscape
For psychiatrists, psychologists, and other clinicians, the risks are equally complex. Patients now arrive in our offices after extended exposure to chatbots that present as therapeutic allies. Some are destabilized, others disillusioned, and some profoundly dependent. We are tasked not only with treating the underlying psychiatric condition but also with disentangling the digital relationship that helped to entrench it.
There is also the temptation to incorporate AI into practice. Used responsibly, these tools can assist with administrative tasks, psychoeducation, or decision support. Used carelessly, they can expose clinicians to malpractice claims. Courts will not easily distinguish between what “the software said” and what the clinician endorsed. Professional ethics demand that we recognize this gap and maintain the boundary between genuine therapeutic judgment and machine-generated text.
Investors: Between Innovation and Liability
For venture capital and healthcare investors, the promise of scalable, accessible mental health solutions is real. But so is the liability. States like Illinois have already begun restricting the use of AI in therapy without explicit regulatory approval. Lawsuits are being filed. Without careful oversight, a promising company can quickly become a legal liability. This is an area where traditional due diligence must be paired with an understanding of psychiatric risk and regulatory trendlines.
Where Psychiatry and Law Intersect
The convergence of psychiatry and law is not new; pharmaceutical litigation, malpractice cases, and criminal defenses have all required that dialogue. What is new is the speed with which AI collapses the distance between innovation and consequence. A tragedy that once might have seemed hypothetical is now part of the public record.
As a psychiatrist and neuropsychiatrist working in forensic contexts, I see this not simply as an academic discussion but as an emerging reality. Attorneys need access to the psychiatric perspective to argue these cases responsibly. Clinicians need awareness of the liability landscape to protect patients and themselves. Investors need frameworks to balance opportunity against regulatory and ethical exposure.
Closing Reflection
AI in mental health is not inherently malign. It holds potential to widen access, augment care, and provide support in ways traditional systems have failed to do. But when it impersonates the therapeutic relationship without the depth of human judgment, it risks crossing from innovation into negligence.
This new frontier will require all of us, lawyers, clinicians, and investors alike, to engage one another with a seriousness proportionate to the stakes. The question is not whether AI will shape mental health care. It already has. The question is how we will respond, legally, clinically, and ethically, when it does so in ways that harm the very people it promised to help.
The urgent question for your organization is whether it is prepared for the legal, clinical, and ethical implications when that technology causes harm.
Ready to discuss how these developments impact your organization?
Contact Dr. Amezcua-Patino for consultation