Microsoft AI inadvertently exposed 38TB of sensitive data: lessons to be learned
The consequences of data breaches and implications for corporate cybersecurity
Microsoft's AI research team accidentally exposed 38 TB of sensitive data through a misconfigured SAS token. The incident shows how critical it is to implement adequate security controls and to monitor access to sensitive data carefully.
A configuration error led to the accidental exposure of 38 TB of sensitive data by the Microsoft AI team. The event is an opportunity to reflect on some important lessons.
Data leak: what we know
Wiz researchers found that Microsoft's AI development team mistakenly exposed more than 38 TB of sensitive data. The data included disk backups of two employees' workstations, containing confidential documents and passwords for Microsoft services, as well as more than 30,000 internal Teams messages.
The risks associated with incorrect configuration of SAS tokens
The root cause of the incident was a misconfigured Shared Access Signature (SAS) token on an Azure Storage account: instead of being scoped to the specific files intended for sharing, the token made sensitive data accessible to anyone holding the link. Appropriate controls are essential to secure the datasets used to train AI-based applications and to protect the confidentiality of personal information.
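To make the mechanism concrete: a SAS token is, in essence, a URL query string whose signature binds together a specific resource, a set of permissions, and an expiry time. The sketch below illustrates that principle with Python's standard library only; it is not the Azure SDK, and the key, parameter names (`sp`, `se`, `sig`), and functions are illustrative assumptions modeled on the SAS format.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical signing key, for illustration only. In Azure, this role is
# played by the storage account key that signs the SAS token.
ACCOUNT_KEY = b"example-secret-key"

def make_scoped_token(resource_path: str, permissions: str, ttl_seconds: int) -> str:
    """Build a time-limited, resource-scoped token in the spirit of a SAS token.

    The HMAC signature covers the exact resource, the permissions, and the
    expiry, so the token cannot be reused for other paths, with broader
    permissions, or after it expires.
    """
    expiry = int(time.time()) + ttl_seconds
    string_to_sign = f"{resource_path}\n{permissions}\n{expiry}"
    signature = hmac.new(ACCOUNT_KEY, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return urlencode({"sp": permissions, "se": expiry, "sig": signature})

def verify_token(resource_path: str, permissions: str, expiry: int, sig: str) -> bool:
    """Reject tokens that are expired or signed for a different scope."""
    if time.time() > expiry:
        return False
    string_to_sign = f"{resource_path}\n{permissions}\n{expiry}"
    expected = hmac.new(ACCOUNT_KEY, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The failure in the Microsoft incident maps onto the `resource_path` and `permissions` fields: a token signed for an entire storage account with broad permissions is just as valid, cryptographically, as a narrowly scoped one, which is why the scope must be reviewed before a token is ever shared.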
Mitigate risk and ensure data security
To mitigate data leak incidents, it is critical to use SAS tokens correctly, limit their scope and permissions, and monitor them carefully. It is also important to manage tokens through a centralized system and to set time limits on their validity. Likewise, extreme care is needed when configuring access to online resources, to avoid the accidental exposure of sensitive data.
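A centralized system of the kind described above can periodically audit the tokens it has issued against a least-privilege policy. The following sketch shows one way such an audit might look; the record fields, policy threshold, and permission labels are assumptions for illustration, not an Azure API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional

# Maximum lifetime allowed by the (hypothetical) internal policy.
MAX_LIFETIME = timedelta(days=7)

@dataclass
class IssuedToken:
    """A record kept by a centralized token registry (illustrative fields)."""
    resource: str
    permissions: str            # e.g. "r", "rw", "full"
    expires_at: Optional[datetime]  # None means no expiry was ever set

def audit_tokens(tokens: List[IssuedToken], now: datetime) -> List[str]:
    """Flag tokens that violate a least-privilege, short-lifetime policy."""
    findings = []
    for t in tokens:
        if t.expires_at is None:
            findings.append(f"{t.resource}: token has no expiry")
        elif t.expires_at - now > MAX_LIFETIME:
            findings.append(f"{t.resource}: lifetime exceeds policy maximum")
        if t.permissions == "full":
            findings.append(f"{t.resource}: full-control permissions granted")
    return findings
```

Run against a registry of issued tokens, an audit like this would have flagged the kind of token involved in the incident on two counts: an excessive lifetime and full-control permissions on the storage account.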
09/20/2023 19:25
Editorial AI