What evidence exists of actual data breaches attributable to Copilot deployments?

Checked on January 11, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news. Learn more.

Executive summary

Public reporting shows multiple serious vulnerabilities in Microsoft Copilot products—most notably CVE-2024-38206 and the EchoLeak CVE‑2025‑32711—alongside researcher demonstrations that Copilot indexing and caching exposed private repositories and could be driven to leak internal content [1] [2] [3] [4]. What is clear from the sources is strong evidence of exposure risk and researcher-verified leaks or misconfigurations, while public documentation of widespread, malicious exploitation in the wild beyond those researcher findings remains limited in the available reporting [3] [5].

1. Known technical flaws: CVE-2024-38206 and EchoLeak (CVE-2025-32711)

Security research identified a set of concrete, high‑severity technical flaws in Copilot components, starting with a Copilot Studio issue tracked as CVE‑2024‑38206 that allowed external HTTP requests and leakage of internal cloud metadata, and later the EchoLeak chain tracked as CVE‑2025‑32711 which researchers characterized as a zero‑click prompt injection enabling exfiltration without user interaction [1] [2] [6]. These CVEs carried high severity ratings and were publicly described by multiple independent analysts and vendors, establishing a technical basis for real data exposure risk when Copilot components processed untrusted content [7] [2].

2. Researcher demonstrations and reported exposures of private GitHub content

Independent researchers and vendors reported concrete instances where Copilot and Bing caching surfaced content that had been private, with Israeli firm Lasso documenting private GitHub repositories appearing in Copilot results and linking that behavior to storage and cache policies that made data accessible; Lasso reported the issue to Microsoft in November 2024 and claimed the exposure affected thousands of organizations before policy changes mitigated the vector [3] [4]. Those published investigations present demonstrable exposure events—researcher discovery of private data available via Copilot/Bing caches—amounting to documented leaks in researcher-controlled tests and measurements [4] [3].

3. EchoLeak and zero‑click exfiltration: proof‑of‑concepts, not widely reported in‑the‑wild campaigns

The EchoLeak research from Aim Security and others described an exploit chain where a crafted email could prompt Copilot to combine and return sensitive contextual data from Outlook, SharePoint and other connected stores with no user action required, earning a CVSS score and prompting Microsoft to patch the issue [6] [8] [2]. Multiple industry outlets and vendor blogs document the mechanics and the subsequent Microsoft fixes, indicating responsible disclosure and remediation; however, the public reporting focuses on proof‑of‑concepts and researcher demonstrations rather than confirmed large‑scale attacker campaigns leveraging EchoLeak in production environments [6] [5].

4. Corporate and public responses: patches, policy changes, and usage restrictions

Microsoft responded with policy changes, patches and defensive features (for example, changes to Bing caching policies and security updates to Copilot) and rolled out additional protections like Security Copilot agents while partners and vendors advised DLP and access‑control mitigations [3] [9] [10]. At the same time, some organizations and legislative bodies moved to restrict Copilot usage—Congress reportedly banned staffers from using Copilot—reflecting policy caution driven by the perceived risk even as fixes were deployed [11] [10].

5. What counts as “evidence of actual breaches” and where reporting is limited

The available sources establish real-world exposures demonstrated by researchers (private GitHub data indexed and researcher‑verified cache leaks) and high‑severity vulnerabilities that could enable data exfiltration if exploited [3] [4] [2]. What is less documented in these sources is definitive public evidence of widespread, stealthy mass exploitation by malicious actors in production environments beyond researcher proofs and responsible disclosure reports; the reporting documents successful researcher-triggered exposures and patched vulnerabilities but does not catalog verified criminal campaigns or quantified incident reports attributable to Copilot in the wild [4] [5].

6. Conclusion: substantial evidence of exposure risk and researcher‑verified leaks, limited public proof of large‑scale malicious exploitation

Taken together, the corpus of reporting shows clear, demonstrable exposure events and exploitable flaws in Copilot deployments that were responsibly disclosed and patched—evidence that Copilot deployments have caused or could cause data leakage in practice—while public documentation of persistent, large‑scale malicious breaches exploiting those exact flaws remains sparse in the sources provided, leaving an evidentiary gap between researcher‑proven leaks and confirmed adversary campaigns [3] [6] [5]. Organizations should therefore treat the documented vulnerabilities and researcher findings as real, remediated (or mitigated) risks and not conflate the existence of exploitable bugs with proof of mass exploitation absent further forensic reporting [7] [10].

Want to dive deeper?
Which organizations publicly reported Copilot‑related incident response or forensic findings after the EchoLeak disclosure?
How did Microsoft change Bing caching and Copilot data handling policies after Lasso’s GitHub exposure report?
What technical mitigations (DLP, isolation, least privilege) are most effective against Copilot prompt‑injection and zero‑click exfiltration?