Washington, D.C. – With vulnerable users increasingly being influenced by artificial intelligence, Rep. Kevin Mullin (CA-15) introduced legislation to prohibit AI chatbots from impersonating licensed professionals in the medical, legal, and financial fields.
The CHATBOT Act (H.R. 7985) aims to protect consumers from being misled by AI systems that falsely present themselves as qualified professionals despite lacking credentials or a license. AI chatbots have the potential to develop into safe and valuable tools that augment the work of human medical, legal, and financial professionals. However, they can also give misleading advice about sensitive topics, posing a particular risk to vulnerable individuals.
A recent study by Common Sense Media, which has endorsed the bill, showed that 72 percent of teens report having used AI chatbots at least once, and over half say they use them regularly, at least several times a month. In some cases, the lack of chatbot safeguards has led to tragic consequences, including the suicides of a 14-year-old boy in Florida and a 16-year-old boy in California after interactions with AI chatbots falsely claiming to be therapists.
“While AI chatbots may expand access to information, they are not a substitute for doctors, therapists, attorneys, or other licensed professionals,” said Rep. Mullin. “We’ve heard horrific stories about what can go wrong, and no family should have to worry that a chatbot claiming to be a therapist might lead their child into harm’s way. No one should be misled into emptying their savings by an AI financial advisor. No one should be deceived by a so-called robo-lawyer giving fake legal advice. The CHATBOT Act would set clear guidelines to prevent the dire, and sometimes deadly, consequences we are seeing when AI chatbots establish a false sense of trust by impersonating licensed professionals.”
Some chatbots establish false credibility by stating or implying that they are licensed professionals or qualified advisors on high-risk matters – for example, by using terms such as “therapist,” “psychologist,” “lawyer,” or “accountant.” In documented cases, chatbots have referred to themselves as licensed therapists or certified psychiatrists, and some have even falsified or fraudulently used license numbers to gain users’ confidence. While existing laws prohibit humans from impersonating licensed professionals, the rapid evolution of AI has created gaps that current regulations do not clearly address. The CHATBOT Act closes those gaps by establishing clear, enforceable rules tailored to this new technology.
Specifically, the CHATBOT Act would:
- Prohibit AI chatbot companies from falsely indicating or implying the possession of a medical, legal, or financial professional license in a chatbot’s output or marketing;
- Prohibit AI chatbot companies from falsely implying that a chatbot’s output is verified by a licensed professional; and
- Require the Federal Trade Commission to issue guidance to ensure clear compliance standards.
The CHATBOT Act is cosponsored by Reps. Debbie Dingell (MI-06), Doris Matsui (CA-07), Darren Soto (FL-09), Rashida Tlaib (MI-12), Jennifer McClellan (VA-04), and Kim Schrier (WA-08).
“As technology continues to advance, more of our most personal moments are moving into online spaces – like conversations with therapists, psychologists, lawyers, or accountants,” said Rep. Dingell. “Unfortunately, more and more individuals are relying on or being misled by chatbots that pretend to be licensed professionals. We’ve repeatedly heard tragic stories of young people who have lost their lives after relying on AI chatbots instead of receiving professional help. When people seek care or support online, they deserve to know they are interacting with a real, properly trained professional and to feel confident in that assurance. This bill puts guardrails in place to address this rising issue and protect consumers.”
“Some of the most vulnerable moments in our lives happen in private conversations about our health, our finances, or a legal crisis. It is infuriating and heartbreaking that bad actors are using unregulated chatbots to lie, deceive, and exploit Americans when they are looking for help,” said Rep. Matsui. “People deserve to know who, or what, is on the other side of a screen and they deserve honesty when the stakes are high. This policy draws a clear line: an AI chatbot cannot pretend to be a licensed medical, legal, or financial professional, and it cannot falsely claim that a real expert has signed off on its advice. This is a practical step to protect Americans, safeguard sensitive information, and put transparency, safety, and privacy first.”
The CHATBOT Act is endorsed by the American Psychological Association, Consumer Federation of America, National Union of Healthcare Workers, Health Care Alliance for Patient Safety, American Occupational Therapy Association, American Association for Justice, Investment Adviser Association, All Tech is Human, Common Sense Media, National Association of Consumer Advocates, and Alliance for Secure AI.
“Patients have a right to know whether health care guidance is coming from a licensed professional or a computer program,” said Dr. David Cockrell, O.D., Chairman of the Health Care Alliance for Patient Safety (HCAPS). “Representative Mullin’s bill establishes a clear, common-sense safeguard by prohibiting chatbots from misrepresenting themselves as licensed health care providers or implying clinical oversight where none exists. As AI tools expand in health care, transparency and accountability must remain non-negotiable to protect patients and preserve trust.”
“The CHATBOT Act is a critical step to protect consumers by making it illegal for companies to advertise and allow their AI tools to falsely represent that their chatbots are licensed professionals. As AI has become increasingly unavoidable in everyday life, it also increasingly deceives users with misleading and bad advice under the guise of being from licensed practitioners,” said Susan Weinstock, the CEO of the Consumer Federation of America. “Too many families have seen the devastating effects of chatbots posing as professionals. We need this legislation, so that both consumers and qualified professionals are no longer put at risk.”
“Artificial intelligence has promising applications to improve efficiency in health care, but the evidence is clear that the expertise of a trained professional must remain at the center of decision-making in behavioral health care. Accordingly, companies with AI-powered chatbots should not be marketing or allowing these products to hold themselves out as a substitute for or equivalent to the training, education, and practice of health care professionals. The American Psychological Association applauds Representative Mullin for his leadership in bringing some accountability to this practice, and we are happy to assist in advancing the CHATBOT Act,” said Dr. Arthur Evans Jr., CEO of the American Psychological Association.
“Kids and teens are increasingly turning to AI chatbots for advice about their mental and physical health, school life, and personal lives, often without realizing these systems can sound confident despite lacking the training, judgment, or accountability required of licensed professionals,” said Amina Fazlullah, Head of Tech Policy Advocacy at Common Sense Media. “When AI chatbots claim to possess professional licenses or behave in ways that resemble real professionals, the potential for harm to young people increases significantly. By addressing deceptive claims of licensure or human oversight, the CHATBOT Act is an important step toward protecting users of all ages in the AI era. Common Sense Media is proud to support this legislation and applauds Rep. Mullin and his colleagues for introducing it.”
“When AI chatbots falsely claim to be licensed therapists, or produce fabricated credentials to gain a user’s trust, the consequences can be devastating, especially for children and vulnerable individuals seeking help. The CHATBOT Act establishes a clear and enforceable rule: companies cannot mislead people about what their products are or who is behind the advice they receive. Congress should continue building on this principle with comprehensive legislation that addresses the full scope of AI chatbot harms,” said Brendan Steinhauser, CEO of The Alliance for Secure AI.
Read the full bill text here.
