Microsoft and Mitre Collaborating on AI and Machine Learning Security Tools

Microsoft and Mitre Corp. last week outlined their collaborative efforts to shore up the security of machine learning models and artificial intelligence (AI) platforms.

Mitre is a nonprofit that operates federally funded research and development centers, typically working on cybersecurity and other concerns for the U.S. government.

Microsoft and Mitre Security Tools for AI
The two organizations announced that Microsoft's Counterfit red-team AI attack tool, released last year, is now integrated into Mitre's Arsenal plug-in.

Arsenal is a tool that "implements the tactics and techniques defined in the Mitre Atlas framework and has been built off of Microsoft's Counterfit as an automated adversarial attack library," Mitre's announcement explained. The Arsenal plug-in is specifically designed for security practitioners who may lack in-depth knowledge of AI and machine learning technologies.

Additionally, Microsoft's Counterfit has now been integrated into Mitre's Caldera, which is used to automate red-team attacks using emulated attack profiles.

The two organizations also collaborated on Mitre Atlas, a knowledge base of attack techniques used against AI systems. Atlas is described by Microsoft as an "ATT&CK-style framework for adversarial machine learning" attacks that is particularly useful for mapping "machine learning supply chain compromise" attacks.

Here's how Mitre described the overall AI security tool collaboration with Microsoft:

Microsoft’s Counterfit is a tool that enables ML researchers to implement a variety of adversarial attacks on AI algorithms. MITRE CALDERA is a platform that enables creation and automation of specific adversary profiles. MITRE ATLAS, which stands for Adversarial Threat Landscape for Artificial-Intelligence Systems, is a knowledge base of adversary tactics, techniques, and case studies for ML systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.
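To make that concrete, here is a minimal sketch of the kind of evasion attack a tool like Counterfit automates: a fast gradient sign method (FGSM)-style perturbation against a toy logistic-regression scorer. This is illustrative Python, not Counterfit code, and the weights, input and attack budget are all made up.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)            # stand-in model weights (illustrative)
b = 0.1                            # stand-in bias term
x = rng.normal(size=20)            # a benign input the attacker perturbs

def predict(features):
    # Probability the toy model assigns to the positive class.
    return 1.0 / (1.0 + np.exp(-(w @ features + b)))

# For a linear model, the gradient of the logit with respect to the input
# is just w, so one signed step nudges every feature against the decision.
epsilon = 0.25                     # attack budget (illustrative)
x_adv = x - epsilon * np.sign(w)   # push the score toward the other class

print(f"original score:    {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")

The point of Counterfit, per the description above, is to run attacks of this sort, and far stronger ones, against real targets without requiring practitioners to write the attack math by hand.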

Microsoft's AI Red Team and Best Practices
Microsoft, for its part, noted that it has an internal "AI Security Red Team" that has been probing AI and machine learning vulnerabilities, emulating "a range of adversaries, from script kiddies to advanced attackers."

The team has identified common software vulnerabilities as prime entry points for attacks on AI and machine learning systems. Those vulnerabilities include "poor encryption in machine learning endpoints, improperly configured machine learning workspaces and environments, and overly broad permissions in the storage accounts containing the machine learning model and training data."
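To illustrate the first of those weak points, here is a trivial, hypothetical pre-flight check in Python that refuses to score against anything but an HTTPS endpoint with a verifiable certificate. The endpoint URL and function name are placeholders, not part of Microsoft's guidance.

import requests

def score(url: str, payload: dict) -> dict:
    # Refuse plain-HTTP endpoints outright: an unencrypted scoring API
    # exposes both the input data and the model's responses on the wire.
    if not url.startswith("https://"):
        raise ValueError("ML endpoint must be served over HTTPS")
    # verify=True (the default) also validates the server's certificate chain.
    resp = requests.post(url, json=payload, timeout=10, verify=True)
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage:
# score("https://models.example.com/score", {"features": [0.1, 0.2]})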

Microsoft's general recommendation to security professionals protecting AI systems is to use model registries and implement "security best practices" for those systems. The best practices described in Microsoft's guidance include "sandboxing the environment running ML models via containers and machine virtualization, network monitoring, and firewalls." Lastly, Microsoft urged security pros to use Mitre Atlas to understand AI threats and "emulate them using Microsoft Counterfit via MITRE CALDERA."
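Microsoft's guidance doesn't prescribe a particular registry, so as a sketch of the model-registry recommendation, here is MLflow's registry standing in as one concrete example; the "fraud-scorer" model name is hypothetical.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression().fit(X, y)

with mlflow.start_run() as run:
    # Log the trained model as a run artifact ...
    mlflow.sklearn.log_model(model, artifact_path="model")
    # ... then register it, giving it a versioned, auditable identity that
    # deployments can pin to instead of passing loose model files around.
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "fraud-scorer")

A registry matters for the threat model above because it narrows who can publish a model and makes unexpected model swaps visible.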

Microsoft's announcement included links to a number of tools and guides concerning AI security. Its "Taxonomy" document for engineers and policymakers is perhaps the best of the bunch for describing things in plain English. For instance, it defines AI attacks such as poisoning, model stealing and reprogramming, among others.
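For a feel of what "poisoning" means in practice, here is a toy Python demonstration (not taken from Microsoft's taxonomy): an attacker with write access to the training data flips a fraction of the labels, and the model trained on the tainted set measurably degrades. The dataset and numbers are synthetic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)       # trained on honest labels

rng = np.random.default_rng(0)
tainted = y_tr.copy()
idx = rng.choice(len(tainted), size=len(tainted) // 5, replace=False)
tainted[idx] = 1 - tainted[idx]                    # flip 20% of the labels

dirty = LogisticRegression().fit(X_tr, tainted)    # trained on poisoned labels

print(f"clean-model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned-model accuracy: {dirty.score(X_te, y_te):.3f}")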

Microsoft also suggested that its AI security efforts will help remove complexity and open up cybersecurity jobs, per a LinkedIn.com announcement by Charlie Bell, executive vice president for security, compliance, identity and management at Microsoft.

In addition to its Mitre collaboration, Microsoft has also worked with machine learning model repository company Hugging Face to build "an AI-specific security scanner."

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
