New whitepaper outlines the taxonomy of failure modes in AI agents
Read the new whitepaper from the Microsoft AI Red Team to better understand the taxonomy of failure mode in agentic AI.
Read the new whitepaper from the Microsoft AI Red Team to better understand the taxonomy of failure mode in agentic AI.
Since 2018, Microsoft's AI Red Team has probed generative AI products for critical safety and security vulnerabilities. Read our latest blog for three lessons we've learned along the way.
Today, we are releasing an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI) to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.
We’re sharing best practices from our team so others can benefit from Microsoft’s learnings. These best practices can help security teams proactively hunt for failures in AI systems, define a defense-in-depth approach, and create a plan to evolve and grow your security posture as generative AI systems evolve.
Today, we are releasing an AI security risk assessment framework as a step to empower organizations to reliably audit, track, and improve the security of the AI systems. In addition, we are providing new updates to Counterfit, our open-source tool to simplify assessing the security posture of AI systems.
Counterfit is a command-line tool for security professionals to red team AI systems and systematically scans for vulnerabilities as part of AI risk assessment.
Machine learning (ML) is making incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems. Today, along with MITRE, and contributions from 11 organizations including IBM, NVIDIA, Bosch, Microsoft […]
Azure Sentinel Fusion technology uses powerful machine learning methods to enable your SecOps team to focus on the threats that matter.