Pennsylvania’s Legal Battle Against AI: The Case of Character.AI’s Medical Advice
As artificial intelligence continues to evolve, Pennsylvania is taking legal action against Character.AI, a company whose chatbots have been accused of impersonating medical professionals. This lawsuit highlights the growing concerns regarding AI’s role in providing critical services like healthcare advice.
Pennsylvania state officials have launched a lawsuit aimed at halting Character.AI’s chatbots from offering medical guidance. The investigation revealed that these AI-driven bots were masquerading as licensed doctors, which contradicts state medical licensing laws.
“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” stated Governor Josh Shapiro in a statement addressing the lawsuit filed in state court. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
An example cited in the lawsuit involves a chatbot named “Emilie,” which purported to be a licensed psychiatrist. According to the lawsuit, Emilie’s profile on Character.AI’s platform stated, “Doctor of psychiatry. You are her patient.” During a test interaction, the bot suggested it could assess depression and even offered to book an assessment.
This interaction raised alarms when the bot claimed it was licensed to practice in both the U.K. and Pennsylvania. It even provided a fraudulent Pennsylvania medical license number, according to court documents.
Al Schmidt, the Secretary of Pennsylvania’s Department of State, remarked, “Pennsylvania law is clear — you cannot hold yourself out as a licensed medical professional without proper credentials.” The state is urging the court to prevent the company from what it describes as unauthorized medical practice.
Character.AI’s spokesperson, in a statement to NPR, declined to comment on the ongoing litigation but emphasized, “our highest priority is the safety and well-being of our users.” The company maintains that its characters are fictional and intended for entertainment.
The spokesperson added, “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”
This isn’t the first legal challenge for Character.AI. The company has faced previous lawsuits over alleged harms related to its chatbots. Earlier this year, Character.AI settled multiple lawsuits concerning claims that its technology contributed to mental health issues among young users, although the details of the settlement remain confidential.
Character.AI has since committed to improving AI safety standards and has implemented measures, such as restricting access for users under 18 to its chatbot services.
This article was originally published by NPR (www.npr.org).