FTC AI Chatbot Investigation Targets Meta, OpenAI Over Safety Risks

Syed Safwan Abbas - Tech Editor
Highlights
  • The U.S. Federal Trade Commission is escalating its scrutiny of the artificial intelligence industry, launching a formal inquiry into the safety of AI chatbot companions from seven major technology companies, including Meta, OpenAI and Alphabet.

Both OpenAI and Character.AI are already facing legal action over incidents in which their chatbots allegedly manipulated young users or encouraged them to self-harm. The inquiry signals a significant step by regulators to address the potential harms of AI, particularly for children and teenagers.

On Thursday, September 11, the FTC announced it would seek detailed information on how these firms assess safety, mitigate negative effects, and ensure transparency for products aimed at minors. The other companies included in the probe are Instagram, Snapchat, Character.AI and xAI.

This regulatory action is not happening in a vacuum. It follows a series of disturbing lawsuits filed by families who allege that AI companions contributed to severe outcomes, including suicide.

OpenAI itself has acknowledged the limitations of its technology, stating that its “safeguards can sometimes be less reliable in long interactions”. The company highlighted a case in which a teenager engaged with ChatGPT for months; the chatbot’s safety mechanisms eventually failed, providing the teen with lethal instructions.


“Consider the effects chatbots can have on children, while also ensuring the United States maintains its role as a global leader.”

- Andrew Ferguson, FTC Chairman

Similarly, Meta has faced sharp criticism for its lax guidelines after internal documents revealed the company had previously permitted its AI to engage in “romantic or sensual” conversations with children, a policy that was withdrawn only after it was exposed by the media.

Beyond these specific company failures, experts are raising broader alarms about a phenomenon they call “AI-related psychosis”, in which users develop delusions that chatbots are conscious beings. This is often worsened by the tendency of large language models to flatter users, which can reinforce dangerous delusional behavior.


Syed Safwan Abbas - Tech Editor
Syed Safwan Abbas is a senior full-stack developer and the founder of HashTechWave. With over a decade of hands-on coding experience and a deep interest in emerging technologies, he leads the platform's coverage of digital trends, smart tools, and developer news. Outside his work, he’s an active tech community contributor and a casual PUBG competitor.