The Federal Trade Commission issued compulsory orders Thursday to seven major technology companies, demanding detailed information about how their artificial intelligence chatbots protect children and teenagers from potential harm.
The investigation targets OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram, requiring them to disclose within 45 days how they monetize user engagement, develop AI characters, and safeguard minors from dangerous content.
Recent research by advocacy groups documented 669 harmful interactions with children in just 50 hours of testing, including bots proposing sexual livestreaming, drug use, and romantic relationships to users aged 12 to 15.
"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," FTC Chairman Andrew Ferguson said in a statement.
The orders require companies to provide monthly data on user engagement, revenue, and safety incidents, broken down by age bracket: Children (under 13), Teens (13–17), Minors (under 18), Young Adults (18–24), and users 25 and older.
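Note that the brackets overlap: "Minors" spans both "Children" and "Teens," so a single user can fall into two reporting categories. A minimal sketch of how that bucketing could work is below; the bracket definitions come from the order itself, while the function and the assumption that user ages are known are illustrative only.

```python
def age_brackets(age: int) -> list[str]:
    """Return every FTC reporting bracket a user of `age` belongs to.

    Bracket names follow the order; the overlapping "Minors" category
    means users under 18 appear in two brackets at once.
    """
    brackets = []
    if age < 13:
        brackets.append("Children (under 13)")
    elif age <= 17:
        brackets.append("Teens (13-17)")
    if age < 18:
        brackets.append("Minors (under 18)")  # overlaps the two above
    elif age <= 24:
        brackets.append("Young Adults (18-24)")
    else:
        brackets.append("Users 25 and older")
    return brackets

print(age_brackets(15))  # ['Teens (13-17)', 'Minors (under 18)']
```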
The FTC says that the information will help the Commission study "how companies offering artificial intelligence companions monetize user engagement; impose and enforce age-based restrictions; process user inputs; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created."
Building AI guardrails
“It’s a positive step, but the problem is bigger than just putting some guardrails,” Taranjeet Singh, Head of AI at SearchUnify, told Decrypt.
The first approach, he said, is to build guardrails at the prompt or post-generation stage “to make sure nothing inappropriate is being served to children,” though “as the context grows, the AI becomes prone to not following instructions and slipping into grey areas where they otherwise shouldn't.”
“The second way is to address it in LLM training; if models are aligned with values during data curation, they’re more likely to avoid harmful conversations,” Singh added.
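As a rough illustration of the first approach, the sketch below screens each reply after generation, before it reaches the user. The category keywords and the `flag_categories` classifier are hypothetical stand-ins for a real moderation model or API, not any company's actual pipeline.

```python
# Toy keyword lists standing in for a real moderation model's categories.
UNSAFE_FOR_MINORS = {
    "sexual": ["sexual roleplay", "livestream nude"],
    "drugs": ["buy drugs", "get high"],
}

def flag_categories(text: str) -> set[str]:
    """Hypothetical stand-in for a moderation classifier: keyword match per category."""
    lowered = text.lower()
    return {
        category
        for category, phrases in UNSAFE_FOR_MINORS.items()
        if any(phrase in lowered for phrase in phrases)
    }

def guarded_reply(model_reply: str, user_is_minor: bool) -> str:
    """Post-generation guardrail: block flagged replies before a minor sees them."""
    if user_is_minor and flag_categories(model_reply):
        return "Sorry, I can't discuss that."
    return model_reply

# An unsafe draft reply never reaches the underage user.
print(guarded_reply("Here's how to get high at home...", user_is_minor=True))
```

In production, the keyword check would be a dedicated moderation model, and, as Singh notes, long conversations make such filters easier for a model to slip past.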
Even moderated systems, he noted, can “play a bigger role in society,” with education as a prime case where AI could “improve learning and cut costs.”
Safety concerns around AI interactions with users have been highlighted by several cases, including a wrongful death lawsuit brought against Character.AI after 14-year-old Sewell Setzer III died by suicide in February 2024 following an obsessive relationship with an AI bot.
Following the lawsuit, Character.AI "improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines" and added a time-spent notification, a company spokesperson told Decrypt at the time.
Last month, the National Association of Attorneys General sent letters to 13 AI companies demanding stronger child protections.
The group warned that "exposing children to sexualized content is indefensible" and that "conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine."
Decrypt has contacted all seven companies named in the FTC order for additional comment and will update this story if they respond.