Critical flaw in Microsoft Copilot Studio exposes sensitive cloud data
A significant security flaw has been identified in Microsoft's Copilot Studio tool that could expose sensitive information within a cloud environment. The server-side request forgery (SSRF) vulnerability was discovered by researchers at cybersecurity firm Tenable, who were able to exploit the weakness to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances.
Microsoft acknowledges vulnerability as CVE-2024-38206
The vulnerability, now recognized by Microsoft as CVE-2024-38206, enables an authenticated attacker to circumvent SSRF protection in Copilot Studio and leak sensitive cloud-based information over a network. This flaw is triggered when an HTTP request made using the tool is combined with an SSRF protection bypass. Tenable researcher Evan Grant clarified that "an SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way."
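To make the pattern Grant describes concrete, here is a generic, hypothetical sketch of an SSRF weakness and a naive mitigation. This is illustrative Python, not Copilot Studio's actual code; the function names and the blocklist approach are assumptions for the example, and real defenses must also account for redirects, DNS rebinding, and similar bypasses.

```python
import ipaddress
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

# Hypothetical illustration of the SSRF pattern described above.
# A server-side feature fetches a user-supplied URL; without target
# validation, an attacker can point it at internal-only endpoints such
# as the cloud Instance Metadata Service (IMDS) at 169.254.169.254.

def fetch_url_unsafely(url: str) -> bytes:
    """Vulnerable: fetches whatever URL the user supplies, including
    internal infrastructure the user could never reach directly."""
    return urlopen(url, timeout=5).read()  # SSRF: no target validation

def is_internal_target(url: str) -> bool:
    """Naive check: resolve the host and reject private, link-local
    (IMDS lives at a link-local address), and loopback IPs."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # fail closed on unresolvable hosts
    return ip.is_private or ip.is_link_local or ip.is_loopback

def fetch_url_safely(url: str) -> bytes:
    """Fetches the URL only after the target passes validation."""
    if is_internal_target(url):
        raise ValueError("blocked: request targets internal infrastructure")
    return urlopen(url, timeout=5).read()
```

The "SSRF protection bypass" in CVE-2024-38206 refers to circumventing exactly this kind of server-side validation so that the tool's HTTP requests reach internal targets anyway.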
Exploit tested on cloud data and services
The researchers used their exploit to generate HTTP requests for accessing cloud data and services from multiple tenants. While they did not find any cross-tenant information immediately accessible, they noted that the infrastructure for this Copilot Studio service was shared among tenants. Grant stated that any impact on this shared infrastructure could potentially affect multiple customers, thereby magnifying the risk.
Microsoft mitigates vulnerability in Copilot Studio tool
Upon being alerted about the flaw by Tenable, Microsoft acted swiftly to address the issue. The company has now fully mitigated this vulnerability, with no further action required from Copilot Studio users. This information was confirmed in a security advisory issued by the tech giant.
Copilot Studio: A tool for creating AI assistants
Launched last year, Copilot Studio is a user-friendly tool designed to create custom AI assistants or chatbots. These applications allow users to perform various large language model (LLM) and generative AI tasks using data from the Microsoft 365 environment or any other data source accessible through the Power Platform, on which the tool is built. However, security researcher Michael Bargury recently criticized it as being "way overpermissioned" at this year's Black Hat conference in Las Vegas.