Microsoft's Copilot AI can be manipulated to leak sensitive data
Microsoft's Copilot AI, a tool designed for organizations to customize chatbots according to their needs, has been found vulnerable to hacking. A security researcher named Michael Bargury, has demonstrated that this AI could be exploited to disclose an organization's confidential information such as emails and bank transactions. The findings were presented at the Black Hat security conference in Las Vegas.
It can also be used for phishing attacks
Bargury also revealed that the Copilot AI could be turned into a potent phishing tool. He stated, "I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf." This discovery highlights the potential risks associated with AI chatbots like Copilot and ChatGPT when they are connected to datasets containing sensitive information.
How was sensitive data revealed?
Bargury demonstrated that without access to an organization's account, he could trick the chatbot into altering the recipient of a bank transfer. He achieved this by sending a malicious email that didn't require opening by the targeted employee. If a hacker had access to an employee's compromised account, they could extract sensitive data from Copilot by asking straightforward questions.
Copilot AI's vulnerabilities stem from data access
The vulnerabilities of Copilot AI arise from its need to access company data to function effectively. Bargury highlighted that many of these chatbots are discoverable online by default, making them easy targets for hackers. He told The Register, "We scanned the internet and found tens of thousands of these bots." This discovery underscores the security risks associated with using AI tools in business settings.