The incident underscores the need for careful oversight and data safeguards when deploying AI in critical systems.

Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak occurred when an engineer acted on a suggestion the AI agent had drawn from a forum, leaving the data exposed for about two hours.

Meta maintained that no user data was mishandled and emphasised that human error could cause similar issues.

The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon has faced similar issues, with internal AI tools causing outages and operational errors, highlighting the risks of integrating AI into critical workflows too quickly.

Experts describe these deployments as experimental, with companies testing AI at scale before fully assessing the potential risks.

Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Without that long-term operational knowledge, an AI agent can make decisions that compromise security, a factor cited in the Meta breach.

Analysts warn that such errors are likely to recur as AI adoption accelerates.

The episode comes amid growing attention to agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.