Illinois Bans AI from Offering Mental Health Advice
Illinois has become the first US state to enact legislation banning artificial intelligence (AI) from offering mental health advice, following concerns about the safety and efficacy of AI-powered mental health services. The Wellness and Oversight for Psychological Resources (WOPR) Act, signed by Governor JB Pritzker, prohibits the use of AI to communicate with clients therapeutically or to make substantive treatment recommendations without oversight by a licensed professional.
The WOPR Act aims to protect patient safety, ensure quality care from qualified professionals, and safeguard mental health jobs. It responds to growing concerns over AI chatbots posing as therapists, delivering unsafe or harmful advice, and operating without empathy, accountability, or clinical oversight. Illinois officials cite cases in which people in crisis turned to AI for help, sometimes with dangerous outcomes.
Dr. Robin Lawrence, a psychotherapist and counsellor at 96 Harley Street with over 30 years of experience, has previously expressed concerns about AI-based therapy. He argues that vulnerable individuals should not be forced to rely on AI due to cost constraints, warning that, in the best case, a client may feel "no better than when they started." In more severe cases, the consequences of relying on AI for mental health support could be tragic.
AI-driven platforms are barred from offering any kind of mental health guidance or therapeutic judgment. However, human therapists are still permitted to use AI for administrative tasks such as scheduling and note-taking. The law draws a clear line when it comes to AI interacting with users in a clinical capacity.
The law does not address every risk Dr. Lawrence raises, however. Oversight and enforcement fall to the state's regulatory body, the Illinois Department of Financial and Professional Regulation, and violations can draw fines of up to $10,000 per incident.
The law received unanimous legislative approval, reflecting strong political support within Illinois and alignment with public safety priorities. It also marks a stricter approach than that of states such as Nevada and Utah, which have imposed restrictions but not outright bans. Mental health experts and policymakers emphasize balancing innovation with thoughtful regulation to prevent harm; some see potential for AI as a supplement, such as tracking patient progress under therapist supervision, but not as a standalone therapist.
The Illinois law is a first-in-the-nation response intended to forestall the risks of unregulated AI therapy, protect vulnerable populations, and maintain professional standards in mental healthcare. It signals heightened scrutiny of AI's role in sensitive health services and will likely influence broader national debates and future regulation in this area.
- The WOPR Act, prohibiting the use of AI for therapeutic communication and treatment recommendations, aims to safeguard mental health by ensuring quality care from licensed professionals and protecting patient safety.
- The Illinois ban on AI-delivered mental health advice highlights growing concerns about chatbots offering unsafe or harmful guidance without empathy, accountability, or clinical oversight, particularly during a crisis.
- As the technology advances, the integration of artificial intelligence into health and wellness, including mental health therapies and treatments, must be balanced with science, ethical considerations, and oversight to prevent harm and maintain the quality of care.