Meta Patches Bug That May Have Exposed Users' AI Prompts and Outputs
Meta has resolved a security vulnerability that allowed users of its AI chatbot to access and view the private prompts and AI-generated responses of other individuals.
Figure 1. Meta Fixes Flaw That Exposed Private AI Chats and Prompts.
Sandeep Hodkasia, founder of the security testing firm AppSecure, exclusively revealed to TechCrunch that Meta awarded him a $10,000 bug bounty for privately reporting the issue on December 26, 2024.
According to Hodkasia, Meta implemented a fix on January 24, 2025, and determined there was no indication the bug had been exploited for malicious purposes. Figure 1 illustrates the flaw that exposed private AI chats and prompts.
Hodkasia explained that he discovered the flaw while analyzing how Meta AI permits logged-in users to modify their prompts in order to regenerate text and images. During this process, he noticed that Meta’s backend servers assigned a unique identifier to each prompt and its corresponding AI-generated response. By inspecting browser network activity while editing a prompt, he realized he could manipulate that identifier, causing the servers to return another user’s prompt and response.
This flaw existed because Meta’s servers failed to verify whether the requesting user had authorization to view the specific prompt and response. Hodkasia noted that the prompt identifiers were “easily guessable,” raising concerns that a malicious actor could exploit this by using automated tools to rapidly cycle through identifiers and harvest private user prompts.
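This class of flaw is commonly called an insecure direct object reference (IDOR). The following Python sketch uses hypothetical names (Meta's actual backend code is not public) to contrast a lookup that trusts the identifier alone with one that performs the missing server-side authorization check:

```python
# Hypothetical in-memory store of prompt/response records, keyed by the
# kind of unique identifier Hodkasia observed in network traffic.
PROMPTS = {
    101: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    102: {"owner": "bob", "prompt": "summarize my notes", "response": "..."},
}

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> dict:
    """The flawed pattern: any valid ID returns the record,
    regardless of who is asking."""
    return PROMPTS[prompt_id]  # no ownership check

def get_prompt_fixed(requesting_user: str, prompt_id: int) -> dict:
    """The repaired pattern: the server verifies that the requester
    actually owns the record before returning it."""
    record = PROMPTS[prompt_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record
```

With the vulnerable version, a request by "bob" for ID 101 returns alice's private prompt; the fixed version rejects the same request with a `PermissionError`.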
When contacted by TechCrunch, Meta confirmed the issue had been resolved in January. “We found no evidence of abuse and rewarded the researcher,” said Meta spokesperson Ryan Daniels.
The revelation comes amid a broader industry race among tech giants to launch and improve AI products—efforts that often raise concerns over user privacy and security.
Meta AI’s standalone app, introduced earlier this year to rival platforms like ChatGPT, faced early challenges after some users unintentionally shared what they believed were private interactions with the chatbot.
The Discovery of the Bug
In December 2024, security researcher Sandeep Hodkasia, founder of AppSecure, uncovered a flaw in Meta AI’s prompt-editing system. While exploring how users can regenerate text and images by editing AI prompts, he noticed a vulnerability in how the platform handled backend request IDs.
By changing a unique identifier tied to a prompt, Hodkasia found he could access another user’s prompt and its AI-generated response — a clear privacy breach.
How the Bug Worked
Meta AI assigned each prompt-response pair a numeric ID on its servers. However, the system did not verify whether the user requesting that data was authorized to access it.
These ID numbers were “easily guessable,” allowing anyone with technical knowledge and browser tools to manipulate request traffic and retrieve content belonging to others.
A malicious actor could have used automated scripts to cycle through prompt numbers and scrape private conversations.
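The danger of guessable identifiers can be illustrated with a short, hypothetical sketch: sequential numeric IDs let an attacker walk the entire ID space with a simple loop, whereas randomized identifiers (such as UUIDs) make blind enumeration impractical. Note that random IDs alone are a mitigation, not a fix; only the server-side authorization check actually closes the hole.

```python
import uuid

# Two hypothetical stores: one keyed by sequential integers (guessable),
# one keyed by random UUID strings (not practically enumerable).
sequential_store = {i: f"prompt-{i}" for i in range(1000, 1010)}
random_store = {str(uuid.uuid4()): f"prompt-{i}" for i in range(10)}

def enumerate_sequential(store: dict, start: int = 0, end: int = 2000) -> list:
    """Brute-forces a numeric ID range, collecting every record found --
    the kind of automated harvesting Hodkasia warned about."""
    return [store[i] for i in range(start, end) if i in store]

leaked = enumerate_sequential(sequential_store)
# All 10 sequential records are recovered by brute force. The UUID-keyed
# store cannot be walked the same way: its key space is ~2**122, so a
# numeric scan finds nothing.
```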
Meta’s Response and Fix
Upon being privately notified by Hodkasia on December 26, 2024, Meta investigated and rolled out a fix by January 24, 2025.
The company stated there was no evidence of abuse and acknowledged the researcher’s responsible disclosure by awarding him $10,000 through its bug bounty program.
Broader Implications for AI Security
The incident highlights growing security and privacy concerns in the AI space, where platforms collect sensitive user inputs and generate potentially personal or confidential outputs.
As tech companies race to launch AI products, robust access controls and server-side validation must keep pace to prevent similar leaks.
A Rocky Start for Meta AI
Meta AI’s standalone app, launched earlier in 2025 as a competitor to ChatGPT and other AI tools, has already faced criticism after some users unintentionally shared what they believed were private conversations with the chatbot.
Despite this setback, Meta continues to invest in expanding its AI offerings — but the incident serves as a cautionary tale about balancing innovation with user trust and security.
Source: TechCrunch
Cite this article:
Priyadharshini S (2025), Meta Patches Bug That May Have Exposed Users' AI Prompts and Outputs, AnaTechMaz, pp. 752.