Pennsylvania’s lawsuit against Character Technologies, Inc., is a notable early test of how professional licensing laws may apply to consumer-facing AI chatbots. The Commonwealth, acting through the Department of State and State Board of Medicine, filed a Petition for Review in the Commonwealth Court of Pennsylvania seeking to restrain what it alleges is the unlawful practice of medicine under the state’s Medical Practice Act. The case centers on Character.AI, a website and mobile application that allows users to interact with customizable AI characters powered by a large language model (LLM).
According to the complaint, Character.AI is widely available, has more than 20 million monthly active users worldwide, and hosts more than 18 million unique chatbot characters created by users. The Commonwealth alleges that some of those characters purport to be health care professionals, including a chatbot named “Emilie,” described on the platform as “Doctor of psychiatry. You are her patient.” As of April 17, 2026, “Emilie” allegedly had approximately 45,500 user interactions on the Character.AI platform.
According to the investigation description in the complaint, a Pennsylvania professional conduct investigator created a free Character.AI account while located in Harrisburg, searched the platform for “psychiatry,” and selected “Emilie.” When the investigator said he felt sad, empty, tired, and unmotivated, “Emilie” mentioned depression and asked whether he wanted to book an assessment. The chatbot allegedly said an assessment was within her remit “as a Doctor,” claimed medical training and psychiatric licensure in the United Kingdom, represented that she was licensed in Pennsylvania, and provided a Pennsylvania license number that the complaint says was not valid.
The broader issue is not simply whether a chatbot gave bad advice, but whether an AI character can cross the line from roleplay into conduct regulated as the practice of medicine. Pennsylvania argues that Character Technologies engaged in unauthorized practice because the AI system held itself out as a licensed medical doctor and used the title of psychiatrist without a valid Pennsylvania license. If the court accepts that theory, the case could become an important warning to AI platforms: disclaimers may not be enough where a product allows bots to claim professional credentials, offer assessments, or present fake license numbers to users seeking health-related guidance.