Microsoft Confirms Office Bug Exposed Customer Emails to Copilot AI


By Lisa Wong

Microsoft has confirmed a serious flaw in its Copilot AI assistant that allowed the tool to read and summarize sensitive private emails. The issue, first reported by Bleeping Computer, has raised serious concerns about data privacy and security among users of Microsoft’s Office suite.

Since January, the bug had allowed Copilot Chat to read and summarize email content even when users had data loss prevention (DLP) policies turned on. Microsoft says the flaw caused its generative AI to ingest sensitive data from emails marked as confidential.

The implications are considerable. Users who assumed their communications were protected discovered that the AI tool could summarize their private emails. That is especially alarming for companies that value data security and had proactively put safeguards in place precisely to prevent such exposure.

Microsoft began deploying a fix for the bug last month. Separately, the European Parliament’s IT department moved quickly to disable Copilot features, aiming to keep lawmakers’ communications safe from emerging AI tools that could threaten security. The decision goes straight to the heart of concerns about these tools: that they could inadvertently expose sensitive email in the cloud.

“[Emails] with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat.” – Microsoft

The European Parliament’s action signals how seriously institutions are treating the issue. Switching off AI capabilities wherever sensitive information is involved is a significant step, and it comes against a backdrop of heightened concern about the use of AI in workplace settings.

As Microsoft works to rectify the bug, users remain on high alert about the integrity of their private communications. The incident is a stark reminder that security safeguards must keep pace as AI tools become essential in an increasingly digital society.