And indeed, with the help of public pressure, Meta has recently announced a global pause on teen access to its AI characters across all of its applications. The pause comes in addition to safety measures the company has already announced, and it reflects a more proactive approach to protecting younger users as they increasingly engage with AI technologies.
The move comes after months of concern over the safety of teenagers on social media and AI platforms. Meta's new teen safety rules, like the rules OpenAI recently announced for ChatGPT, include age-restriction features: content is limited according to user age, with age prediction used to estimate how old a user likely is. This makes content-based restrictions easier to enforce broadly and increases the effectiveness of protective measures.
Alongside these changes, Meta has shared a sneak peek of new parental controls for its AI characters. Parents and guardians will be able to monitor the topics their teens are exploring with AI characters and restrict access to particular characters as appropriate. Replika, for its part, still exists but has moved away from open-ended chat conversations toward interactive stories designed for younger users. Changes like these help create a more predictable and protected space for children.
After serious allegations, Meta made the decision in September to pause access. The company is currently on trial in New Mexico over claims that it failed to protect children from online sexual exploitation on its platforms, and CEO Mark Zuckerberg himself is expected to testify. His testimony could have a significant impact on ongoing debates over the company's accountability for user safety.
"Over the next few weeks, we'll limit teen access to AI characters in our apps," a Meta spokesperson said, describing the change as temporary until the company is able to roll out the new experience. The restriction applies to anyone who has entered a birthday indicating they are under 18, as well as to users who say they are adults but whom Meta believes are likely teens based on its age-prediction AI.
Further complicating the case, Meta has attempted to sharply limit discovery into social media's effects on teen mental health ahead of the trial. The legal pressure extends beyond Meta: TikTok faces a new lawsuit centered on its alleged role in social media addiction, and Snap recently settled a class-action lawsuit over the same issue, a result that underscores growing alarm about the mental health effects of these platforms.
Meta has promised to further develop parental controls for its AI experiences and plans to launch new features this year to make those controls easier to use and more effective. The controls are meant to let parents tailor how teens engage with AI characters, balancing safety with a degree of freedom and parental oversight.
Meta has been taking significant steps to improve user safety, even as it faces mounting legal scrutiny over its targeting methods and overall practices involving young users. As the company prepares to release its updated experience for teens, regulators and the public alike will be watching how committed it truly is to protecting young people online.



