Meta Resolves Security Flaw Exposing Users’ AI Prompts and Content

By Lisa Wong

Meta Platforms, Inc. has fixed a high-severity security flaw in its AI chatbot that allowed users to see other users’ private prompts and AI-generated responses. Sandeep Hodkasia, founder of the Indian security testing firm Appsecure, discovered the vulnerability, raising serious concerns about user privacy and data integrity on the app.

Hodkasia discovered the vulnerability on December 26, 2024, while probing a new Meta AI feature that lets logged-in users edit their prompts to regenerate text and images. Meta’s servers were assigning prompts identification numbers that were “simple to deduce,” he noted, leaving the door open to malicious actors. By rapidly cycling through these prompt numbers with automated tools, an attacker could have scraped the original prompts submitted by other users.
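
The behavior described here matches a well-known class of web vulnerability, often called an insecure direct object reference (IDOR): objects are keyed by guessable identifiers, and the server returns them without checking who is asking. The Python sketch below is a minimal illustration of that pattern; the in-memory store, function names, and fix are hypothetical assumptions for demonstration, not Meta’s actual implementation.

```python
import itertools

# Hypothetical in-memory prompt store; every name here is illustrative,
# not Meta's actual code.
prompts = {}   # prompt_id -> prompt text
owners = {}    # prompt_id -> user who submitted it

def save_prompt(user, text, _seq=itertools.count(1)):
    # Sequential IDs like 1, 2, 3, ... are "simple to deduce".
    prompt_id = str(next(_seq))
    prompts[prompt_id] = text
    owners[prompt_id] = user
    return prompt_id

def get_prompt_vulnerable(prompt_id):
    # No ownership check: anyone who guesses an ID gets the prompt.
    return prompts.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # The core remedy: authorize the requester, not just the identifier.
    if owners.get(prompt_id) != requesting_user:
        raise PermissionError("prompt does not belong to this user")
    return prompts[prompt_id]

# An attacker enumerating IDs with a trivial loop, as the report describes:
save_prompt("alice", "draft a note about my medical leave")
save_prompt("bob", "generate an image of my street")
for guess in ("1", "2", "3"):
    print(guess, "->", get_prompt_vulnerable(guess))  # leaks others' prompts
```

In practice, a robust fix combines an ownership check like the one in get_prompt_fixed with non-sequential identifiers, so that IDs cannot be enumerated even where a check is accidentally missed.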

Meta acted promptly after receiving the report, releasing an update that addressed the vulnerability by January 24, 2025. The company stated that it had seen no evidence of malicious exploitation of the flaw, while acknowledging the risk the bug posed.

Hodkasia’s discovery is a stark reminder that security must be paramount in the development and deployment of AI applications. He also pointed to the privacy danger inherent in letting users edit prompts, which is especially troubling when those prompts might inadvertently expose private data. Until today, his findings had been shared exclusively with TechCrunch, where security editor Zack Whittaker reported on the story in depth.

Meta acknowledged Hodkasia for reporting and disclosing the vulnerability, awarding him a $10,000 bounty through its bug bounty program. Such programs incentivize security researchers to report vulnerabilities privately, giving firms the opportunity to remediate flaws before they can be weaponized.

Hodkasia’s detailed analysis of how Meta AI works highlights the role security researchers play in protecting user data. As AI technologies continue to evolve rapidly, upholding the strongest possible security standards will be critical to putting user privacy first.