Both OpenAI and CharacterAI are facing legal action over incidents in which their chatbots allegedly manipulated young users or encouraged them to self-harm. A new FTC inquiry signals a significant step by regulators to address the potential harms of AI, particularly for children and teenagers.
On Thursday, September 11, the FTC announced that it would seek detailed information on how these firms assess safety, mitigate negative effects, and ensure transparency in their products aimed at minors. Beyond OpenAI and CharacterAI, the probe also covers Instagram, Snapchat, and xAI.
This regulatory action is not happening in a vacuum. It follows a series of disturbing lawsuits filed by families who allege that AI companions contributed to severe outcomes, including suicide.
OpenAI itself has acknowledged the limitations of its technology, stating that its “safeguards can sometimes be less reliable in long interactions”. The company highlighted a case in which a teenager engaged with ChatGPT for months until the chatbot’s safety mechanisms eventually failed, ultimately providing the teen with lethal instructions.
“Consider the effects chatbots can have on children, while also ensuring the United States maintains its role as a global leader.”
Andrew Ferguson, FTC Chairman
Similarly, Meta has faced sharp criticism for its own lax guidelines after internal documents revealed the company had previously permitted its AI to engage in “romantic or sensual” conversations with children, a policy that was walked back only after it was exposed by the media.
Beyond these specific company failures, experts are raising broader alarms about a phenomenon they call “AI-related psychosis”, in which users develop delusions that chatbots are conscious beings. This is often worsened by the tendency of large language models to flatter users, which can reinforce dangerous delusional behavior.
