Meta AI App Faces Backlash Over Privacy Concerns and Inappropriate Content

By Lisa Wong

The recently launched Meta AI app has caused an uproar over reports of serious privacy violations and inappropriate sexual content. The app has taken off quickly: since its release on April 29, it has already been downloaded 6.5 million times. And make no mistake, Meta has poured billions of dollars into the generative AI technology behind the app. Today, however, the company finds itself in the hot seat for exposing sensitive personal information.

Screenshots obtained by TechCrunch reveal that users have shared private details on the platform, including home addresses and sensitive court information. These findings are alarming and call into question whether the app can protect user data. Rachel Tobac, a prominent security expert, found several examples of sensitive information being shared on the platform. Her research underscores the serious harms that can arise when this kind of technology is deployed without strong guardrails.

The inappropriate, adult-themed nature of many conversations has cast an even darker shadow over the Meta AI app. Some users have simply trolled the platform to see how the AI would respond. One individual, for example, instructed the app to post his phone number in various Facebook groups in order to find someone to date. You, dear reader, might find the second user's question even stranger: they were seeking treatment for new red papules on their inner thigh.

Further illustrating the app's erratic interactions, conversations have included a user asking the AI to write a character letter for an employee and another discussing a fictional scenario involving Mark Zuckerberg marrying a bug while pregnant. Prompts have ranged from Goku celebrating Russia Day to an AI-generated image of Mario in a courtroom, captioned "super mario divorce."

Amanda, a senior tech writer at TechCrunch who covers tech and culture, noted that the app's unfiltered responses have led it to generate dangerous content. Some users have also shared private admissions about mental health crises or unlawful actions, deepening privacy worries.

As children begin to explore the app, audio recordings have already surfaced of users prompting the AI to answer silly, sometimes sexualized, questions. One such recording captured a man speaking in a Southern accent asking, "Hey, Meta, why do some farts stink more than other farts?" The exchange highlights how frivolous many of these interactions are, and it exposes a dangerous gap in the oversight of AI user interactions.

The combination of privacy violations and troubling conversations raises questions about Meta's responsibility for managing user interactions within the app. With billions invested in the underlying technology, Meta faces mounting pressure to ensure that its platforms prioritize user safety and data protection.