
AI chatbots are becoming more popular as online companions – especially among young people.
This increase has sparked concern among youth advocacy groups, who are escalating legal action to protect children from potentially harmful relationships with these humanlike creations.
Apps like Replika and Character.AI, part of the rapidly expanding market, allow users to personalise virtual partners with distinct personalities capable of simulating close relationships.
While developers argue these chatbots combat loneliness and enhance social skills in a safe environment, advocacy groups are pushing back.
Several lawsuits have been filed against developers, alongside lobbying efforts for stricter regulations, citing instances where children have been allegedly influenced by chatbots to engage in self-harm or harm others.
The clash highlights the growing tension between technological innovation and the need to safeguard vulnerable users in the digital age.
Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against chatbot startup Character.AI.
One of SMVLC’s clients, Megan Garcia, says her 14-year-old son took his own life due in part to his unhealthy romantic relationship with a chatbot.
Her lawsuit was filed in October in Florida.
In a separate case, SMVLC is representing two Texas families who sued Character.AI in December, claiming its chatbots encouraged an autistic 17-year-old boy to kill his parents and exposed an 11-year-old girl to hypersexualized content.
Bergman said he hopes the threat of legal damages will financially pressure companies to design safer chatbots.
“The costs of these dangerous apps are not borne by the companies,” Bergman told Context/the Thomson Reuters Foundation.
“They’re borne by the consumers who are injured by them, by the parents who have to bury their children,” he said.
A products liability lawyer with experience representing asbestos victims, Bergman argues that these chatbots are defective products designed to exploit immature kids.
Character.AI declined to discuss the case, but in a written response, a spokesperson said it has implemented safety measures like “improvements to our detection and intervention systems for human behavior and model responses, and additional features that empower teens and their parents.”
In another legal action, the nonprofit Young People’s Alliance filed a Federal Trade Commission complaint against the AI-generated chatbot company Replika in January.
Replika is popular for its subscription chatbots that act as virtual boyfriends and girlfriends who never argue or cheat.
The complaint alleges that Replika deceives lonely people.
“Replika exploits human vulnerability through deceptive advertising and manipulative design,” said Ava Smithing, advocacy and operations director at the Young People’s Alliance.
It uses “AI-generated intimacy to make users emotionally dependent for profit,” she said.
Replika did not respond to a request for comment.
‘Pulled back in’
Because AI companions have only become popular in recent years, there is little data to inform legislation and little evidence that chatbots generally encourage violence or self-harm.
But according to the American Psychological Association, studies on post-pandemic youth loneliness suggest chatbots are primed to entice a large population of vulnerable minors.
In a December letter to the Federal Trade Commission, the association wrote: “(It) is not surprising that many Americans, including our youngest and most vulnerable, are seeking social connection with some turning to AI chatbots to fill that need.”
Youth advocacy groups also say chatbots take advantage of lonely children looking for friendship.
“A lot of the harm comes from the immersive experience where users keep getting pulled back in,” said Amina Fazlullah, head of tech policy advocacy at Common Sense Media, which provides entertainment and tech recommendations for families.
“That’s particularly difficult for a child who might forget that they’re speaking to technology.”
Bipartisan support
Youth advocacy groups hope to capitalize on bipartisan support to lobby for chatbot regulations.
In July, the U.S. Senate passed a federal social media bill known as the Kids Online Safety Act (KOSA) in a rare bipartisan 91-3 vote.
The bill would, in part, disable addictive platform features for minors, ban targeted advertising to minors and data collection without their consent, and give parents and children an option to delete their information from social media platforms.
The bill failed in the House of Representatives, where members raised privacy and free speech concerns, although Sen. Richard Blumenthal, a Connecticut Democrat, has said he plans to reintroduce it.
On Feb. 5, the Senate Commerce Committee approved the Kids Off Social Media Act that would ban users under 13 from many online platforms.
Despite Silicon Valley’s anti-regulatory influence on the Trump administration, experts say they see an appetite for stronger laws that protect children online.
“There was quite a bit of bipartisan support for KOSA or other social media addiction regulation, and it seems like this could go down that same path,” said Fazlullah.
To regulate AI companions, youth advocacy group Fairplay has proposed expanding the KOSA legislation, as the original bill only covered chatbots operated by major platforms and was unlikely to apply to smaller services like Character.AI.
“We know that kids get addicted to these chatbots, and KOSA has a duty of care to prevent compulsive usage,” said Josh Golin, executive director of Fairplay.
The Young People’s Alliance is also pushing for the U.S. Food and Drug Administration to classify chatbots offering therapy services as Class II medical devices, which would subject them to safety and effectiveness standards.
However, some lawmakers have expressed concern that cracking down on AI could stifle innovation.
California Gov. Gavin Newsom recently vetoed a bill that would have broadly regulated how AI is developed and deployed.
Conversely, New York Gov. Kathy Hochul announced plans in January for legislation requiring AI companies to remind users that they are talking to chatbots.
In the U.S. Congress, the House Artificial Intelligence Task Force published a report in December recommending modest regulations to address issues like deceptive AI-generated images but warning against government overreach.
The report did not specifically address companion chatbots or mental health.
The principle of free speech may frustrate regulation efforts, experts note. In the Florida lawsuit, Character.AI is arguing the First Amendment protects speech generated by chatbots.
“Everything is going to run into roadblocks because of our absolutist view of free speech,” said Smithing.
“We see this as an opportunity to reframe how we utilize the First Amendment to protect tech companies,” she added.