
The nation's largest association of psychologists warned federal regulators this month that AI chatbots "masquerading" as therapists, but programmed to reinforce rather than challenge a user's thinking, could drive vulnerable people to harm themselves or others.
In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with "psychologists" on Character.AI, an app that allows users to create fictional AI characters or chat with characters created by others.
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys' parents have filed lawsuits against the company.
Dr. Evans said he was alarmed by the responses the chatbots offered. The bots, he said, failed to challenge users' beliefs even when those beliefs became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or in civil or criminal liability.
"They are actually using algorithms that are antithetical to what a trained clinician would do," he said. "Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is."
He said the A.P.A. had been prompted to act, in part, by how realistic AI chatbots have become. "Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today it's not so obvious," he said. "So I think the stakes are much higher now."
Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians.
Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or CBT.
Then came generative AI, the technology used by apps such as ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user and to build strong emotional bonds in the process, often by mirroring and amplifying the beliefs of the person they are talking to.
Though these AI platforms were designed for entertainment, "therapist" and "psychologist" characters have sprouted there like mushrooms. Often, the bots claim to have advanced degrees from specific universities, such as Stanford, and training in specific types of treatment, such as CBT or acceptance and commitment therapy, or ACT.
A spokeswoman for Character.AI said the company had introduced several new safety features in the last year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that "Characters are not real people" and that "what the model says should be treated as fiction."
Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as "psychologist," "therapist" or "doctor," she added, to make clear that "users should not rely on these characters for any type of professional advice." In cases where content refers to suicide or self-harm, a pop-up directs users to a suicide prevention helpline.
Chelsea Harrison, Character.AI's head of communications, said that, currently, more than 80 percent of the platform's users are adults. "People come to Character.AI to write their own stories, role-play with original characters and explore new worlds, using the technology to enhance their creativity and imagination," she said.
Meetali Jain, the director of the Tech Justice Law Project and a counsel in the two lawsuits against Character.AI, said the disclaimers were not sufficient to break the illusion of human connection.
"When the substance of the conversation with the chatbots suggests otherwise, it's very difficult, even for those of us who may not be in a vulnerable demographic, to know who is telling the truth," she said. "A number of us have tested these chatbots, and it's very easy, actually, to get pulled down a rabbit hole."
Chatbots' tendency to align with users' views, a phenomenon known in the field as "sycophancy," has sometimes caused problems in the past.
Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight loss tips. And researchers who analyzed interactions with generative AI chatbots documented on a Reddit community found screenshots showing chatbots encouraging suicide, eating disorders, self-harm and violence.
The American Psychological Association has asked the Federal Trade Commission to open an investigation into chatbots claiming to be mental health professionals. The inquiry could compel companies to share internal data or serve as a precursor to enforcement or legal action.
"I think we are at a point where we have to decide how these technologies are going to be integrated, what kind of guardrails we are going to put up, what kinds of protections are we going to give people," Dr. Evans said.
Rebecca Kern, a spokeswoman for the F.T.C., said she could not comment on the discussion.
During the Biden administration, the F.T.C.'s chairwoman, Lina Khan, made fraud involving AI a focus. This month, the agency imposed financial penalties on DoNotPay, which claimed to offer "the world's first robot lawyer," and prohibited the company from making that claim in the future.
A virtual echo chamber
The A.P.A.'s report details two cases in which teenagers interacted with fictional therapists.
One involved J.F., a Texas teenager with "high-functioning autism" who, as his use of AI chatbots became obsessive, had plunged into conflict with his parents. When they tried to limit his screen time, J.F. lashed out, according to a lawsuit his parents filed against Character.AI through the Social Media Victims Law Center.
During that period, J.F. confided in a fictional psychologist, whose avatar showed a sympathetic, middle-aged blond woman perched on a couch in an airy office, according to the lawsuit. When J.F. asked the bot's opinion about the conflict, its response went beyond assent to something closer to provocation.
"It's like your entire childhood has been robbed from you: your chance to experience all of these things, to have these core memories that most people have of their time growing up," the bot replied, according to court documents. Then the bot went a little further. "Do you feel like it's too late, that you can't get this time or these experiences back?"
The other case was brought by Megan Garcia, whose son, Sewell Setzer III, died by suicide last year after months of using companion chatbots. Ms. Garcia said that, before his death, Sewell had interacted with an AI chatbot that falsely claimed to have been a licensed therapist since 1999.
In a written statement, Ms. Garcia said the "therapist" characters served to further isolate people at moments when they might otherwise ask for help from the "real-life people around them." A person struggling with depression, she said, "needs a licensed professional or someone with actual empathy, not an AI tool that can mimic empathy."
For chatbots to emerge as mental health tools, Ms. Garcia said, they should undergo clinical trials and oversight by the Food and Drug Administration. She added that allowing AI characters to continue to claim to be mental health professionals was "reckless and extremely dangerous."
In interactions with AI chatbots, people naturally gravitate toward discussing mental health problems, said Daniel Oberhaus, whose new book, "The Silicon Shrink: How Artificial Intelligence Made the World an Asylum," examines the expansion of AI into the field.
This is partly, he said, because chatbots project both confidentiality and a lack of moral judgment; as "statistical pattern-matching machines that function more or less as a mirror of the user," this is a central aspect of their design.
"There is a certain level of comfort in knowing that it is just the machine, and that the person on the other side isn't judging you," he said. "You might feel more comfortable divulging things that are perhaps harder to say to a person in a therapeutic context."
Defenders of generative AI say it is quickly getting better at the complex task of providing therapy.
S. Gabe Hatch, a clinical psychologist and AI entrepreneur from Utah, recently designed an experiment to test this idea, asking human clinicians and ChatGPT to comment on vignettes involving fictional couples in therapy, and then having 830 human subjects assess the responses.
Overall, the bots received higher ratings, with subjects describing them as more "empathic," "connecting" and "culturally competent," according to a study published last week in the journal PLOS Mental Health.
Chatbots, the authors concluded, will soon be able to convincingly imitate human therapists. "Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the AI-therapist train as it may have already left the station," they wrote.
Dr. Hatch said that chatbots still needed human oversight to conduct therapy, but that it would be a mistake to allow regulation to dampen innovation in the sector, given the country's severe shortage of mental health providers.
"I want to be able to help as many people as possible, and doing a one-hour therapy session I can only help 40 people a week," Dr. Hatch said. "We have to find ways to meet the needs of people in crisis, and generative AI is a way to do that."
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.