The Cloud Security Alliance (CSA) has announced a significant expansion of its CSAI Foundation’s mission, unveiling a suite of initiatives designed to bolster the security and governance of what it terms the "agentic control plane." These pivotal milestones, revealed on April 29 at the CSA Agentic AI Security Summit, include the launch of a new catastrophic risk initiative, official authorization as a CVE Numbering Authority, and the strategic acquisition of two critical agentic AI specifications: the Autonomous Action Runtime Management (AARM) and the Agentic Trust Framework (ATF). This concerted effort underscores CSA’s commitment to providing robust governance and assurance frameworks essential for the safe and responsible deployment of increasingly autonomous AI systems.
The core of these announcements revolves around the escalating need to secure agentic AI systems, which are characterized by their ability to act autonomously, make decisions, and execute tasks without direct human intervention at every step. Unlike traditional AI models that primarily process data and offer insights, agentic AI systems possess a degree of agency, interacting with environments and initiating actions. This enhanced autonomy, while offering immense potential for innovation and efficiency across industries, simultaneously introduces novel and complex security, ethical, and control challenges. The "agentic control plane" refers to the underlying infrastructure and mechanisms that govern these autonomous AI agents, ensuring their actions align with human intent and established safety parameters.

The Imperative for Agentic AI Security
The rapid evolution and "viral, bottom-up adoption of agents inside the business," as described by Jim Reavis, CEO and co-founder of CSA, highlight a critical juncture. Enterprises are eager to leverage the transformative power of agentic AI, from automating complex business processes to enhancing decision-making capabilities. However, this enthusiasm is tempered by legitimate concerns about maintaining control, preventing unintended consequences, and ensuring accountability. The global AI market is projected to grow exponentially, with estimates often placing its value in the trillions of dollars within the next decade. As AI systems become more integrated and autonomous, the financial, reputational, and societal costs of security breaches or uncontrolled behavior could be catastrophic. CSA’s initiatives aim to equip organizations, auditors, and regulators with the necessary technical specifications and assurance scaffolding to confidently embrace agentic AI without relinquishing control.

Deep Dive: STAR for AI Catastrophic Risk Annex
A cornerstone of CSA’s expanded focus is the launch of the STAR for AI Catastrophic Risk Annex. This initiative is being developed with crucial support from Coefficient Giving, a philanthropic organization dedicated to advancing long-horizon AI safety research. The annex represents a vital extension of CSA’s existing AI Controls Matrix (AICM) and STAR for AI assurance program. Its primary objective is to address scenarios involving the gravest potential outcomes of AI system failures, including the loss of human oversight, uncontrolled system behavior, and other large-scale, irreversible, society-wide consequences. Such scenarios could range from an autonomous AI agent making critical infrastructure decisions leading to widespread disruption, to self-replicating agents exhibiting unforeseen emergent behaviors, or even sophisticated AI systems inadvertently amplifying societal biases with profound and lasting negative impacts.
The annex is meticulously designed to focus on controls that are not merely theoretical but can be rigorously tested and validated in real-world production environments. A related CSA blog post elaborates on the project’s methodology, stating that it will initially identify existing AICM controls relevant to catastrophic risk, introduce new controls where current frameworks exhibit gaps, and define precise evidence requirements and testing criteria suitable for independent assessment. This methodical approach is crucial for building verifiable trust in AI systems that operate with significant autonomy.
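Since the announcement deliberately leaves the control text to the phased rollout, the following is purely a sketch of the shape such a control might take once evidence requirements and testing criteria are attached. Every identifier, field name, and value here is an assumption for illustration, not draft annex content:

```python
from dataclasses import dataclass, field

@dataclass
class CatastrophicRiskControl:
    """Hypothetical shape of an annex control entry.

    All field names and example values are illustrative assumptions;
    CSA has not yet published control text for the annex.
    """
    control_id: str                 # hypothetical AICM-style identifier
    objective: str                  # what the control must guarantee
    evidence_required: list[str] = field(default_factory=list)
    test_criteria: list[str] = field(default_factory=list)

# Sketch of how "loss of human oversight" might translate into auditable terms:
interruptibility = CatastrophicRiskControl(
    control_id="CRA-01",  # invented ID, not from the annex
    objective="Autonomous agents remain interruptible by an authorized operator.",
    evidence_required=[
        "Documented interrupt (kill-switch) architecture and access policy",
        "Production logs from periodic interrupt drills",
    ],
    test_criteria=[
        "An independent assessor can halt a running agent within a set time bound",
        "Interrupt capability persists across agent restarts and updates",
    ],
)
```

The point of such a structure is that each abstract risk scenario bottoms out in evidence an independent assessor can actually inspect, which is exactly the translation work Phase 1 is scoped to perform.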
The rollout of the Catastrophic Risk Annex is planned in four distinct phases, spanning from June 2026 through December 2027, demonstrating a long-term, structured commitment:
- Phase 1 (June – September 2026): Translating Risk to Controls. This initial phase will focus on translating abstract catastrophic risk scenarios into concrete, auditable control language. This involves defining specific technical and procedural safeguards that can mitigate identified risks.
- Phase 2 (October – December 2026): Developing Validation Protocols. Following the definition of controls, this phase will concentrate on developing robust validation protocols. These protocols will outline how the implemented controls can be effectively tested and verified for their efficacy in preventing or mitigating catastrophic risks.
- Phase 3 (January – June 2027): Real-World Implementation and Pilots. This critical phase will involve bringing the annex into practical application. Pilot assessments will be conducted in real-world environments, assessor training programs will be developed and delivered, and reference implementations will be created to demonstrate best practices.
- Phase 4 (July – December 2027): Public Benchmarking and Reporting. The final phase will culminate in the production of public STAR for AI registry entries, enabling benchmarking of AI systems against the catastrophic risk controls. This phase will also include the publication of a "State of Catastrophic AI Risk Controls Report," offering insights into the industry’s progress and ongoing challenges.
A key aspect of the annex’s design is its commitment to global alignment. CSA has confirmed that the framework will align with leading international AI governance standards, including the NIST AI Risk Management Framework (AI RMF), the European Union’s AI Act, and ISO/IEC 42001. This interoperability is vital for ensuring that the annex is broadly applicable and can contribute to a harmonized global approach to AI safety and governance. The absence of specific control text in the initial announcement indicates that these details will be meticulously developed and refined throughout the phased rollout, inviting collaboration and feedback from the wider AI and cybersecurity communities.

Authorization as a CVE Numbering Authority (CNA)
Another significant milestone for the CSAI Foundation is its authorization as a CVE Numbering Authority (CNA) by MITRE. This authorization empowers the CSAI Foundation to assign Common Vulnerabilities and Exposures (CVE) IDs to newly discovered vulnerabilities within AI systems, particularly those that are agentic. The CVE program, managed by MITRE, is a globally recognized standard for identifying, defining, and cataloging publicly disclosed cybersecurity vulnerabilities. Becoming a CNA signifies the CSAI Foundation’s official role in contributing to the global cybersecurity ecosystem, specifically for the burgeoning domain of AI security.
The implications of this authorization are profound. For years, the cybersecurity community has relied on CVEs to track and address vulnerabilities in software, hardware, and operating systems. Extending this established framework to AI systems, especially complex agentic ones, is crucial for several reasons:
- Standardized Reporting: It provides a standardized mechanism for reporting and tracking vulnerabilities unique to AI, such as adversarial attacks, data poisoning, model inversion, or prompt injection vulnerabilities that can compromise an agent’s integrity or lead to unintended actions.
- Enhanced Transparency: It fosters greater transparency in the AI ecosystem by creating a public record of known weaknesses, enabling developers to patch them and users to assess risks.
- Improved Risk Management: Organizations deploying agentic AI systems can better understand and manage their attack surface, integrating AI-specific vulnerabilities into their broader security risk assessments.
- Facilitating Collaboration: It encourages collaborative efforts among researchers, vendors, and users to identify and mitigate AI vulnerabilities, leveraging a common language and framework.
This move acknowledges that AI systems, like any complex software, are susceptible to flaws and exploits. Given the potential autonomy and impact of agentic AI, a formal, globally recognized system for vulnerability disclosure and management is not just beneficial but essential for maintaining trust and safety.
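Because CVE IDs assigned by any CNA feed the same public catalogs, existing tooling can already surface AI-specific entries as they appear. As a minimal illustration (not a CSA or CSAI Foundation API), the sketch below queries NIST's public NVD REST API (v2.0) by keyword; the keyword choice and result handling are assumptions for demonstration:

```python
# Sketch: AI-specific CVEs flow into the same public feeds as any other CVE.
# This queries NIST's NVD REST API (v2.0) by keyword; the keyword and the
# field handling below are illustrative assumptions, not a CSA service.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return id/description pairs for CVEs matching a keyword search."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        # Take the first English description, if one is present.
        desc = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
            "",
        )
        results.append({"id": cve["id"], "description": desc})
    return results

if __name__ == "__main__":
    for record in search_cves("prompt injection"):
        print(record["id"], "-", record["description"][:80])
```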

Acquisition of Autonomous Action Runtime Management (AARM) and Agentic Trust Framework (ATF)
Further bolstering its technical specifications, the CSAI Foundation has acquired two pivotal agentic AI specifications: the Autonomous Action Runtime Management (AARM) specification and the Agentic Trust Framework (ATF). These acquisitions represent a strategic move to integrate practical, actionable frameworks directly into CSA’s assurance programs, providing developers and operators with concrete tools for managing and securing agentic AI.
- Autonomous Action Runtime Management (AARM): This specification addresses the critical need for granular control over autonomous AI agents during their operational runtime. It defines mechanisms to monitor, govern, and potentially intervene in the actions of AI agents as they execute tasks in real-world environments. This is particularly vital for preventing agents from deviating from their intended objectives, violating ethical guidelines, or causing harm due to unforeseen circumstances or internal miscalculations. AARM provides the technical blueprint for establishing guardrails, monitoring agent behavior, and enabling human-in-the-loop oversight where necessary, without stifling the agent’s autonomy for routine tasks. It is about ensuring that the "agentic control plane" remains firmly within human purview, even as agents operate independently (a minimal sketch of this pattern follows this list).
- Agentic Trust Framework (ATF): The ATF specification focuses on building verifiable trust in agentic AI systems. It outlines criteria and mechanisms for assessing the trustworthiness of AI agents, their underlying models, and the data they process. This includes aspects such as data provenance, model explainability, bias detection, reliability, and security posture. In a world where AI agents will increasingly interact with sensitive data and critical systems, establishing a clear framework for trust is paramount. The ATF provides a structured approach for evaluating and communicating the trustworthiness of an agent, which is essential for regulatory compliance, auditability, and user confidence. It allows organizations to demonstrate that their agentic AI systems meet defined standards of integrity and reliability (a companion sketch appears after the following paragraph).
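The announcement does not reproduce the AARM specification text, so the sketch below is only a conceptual illustration of the runtime-mediation pattern described above: every action an agent proposes is checked against policy, with routine actions allowed, high-impact ones escalated to a human, and prohibited ones blocked. All names and thresholds are hypothetical:

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()     # routine action: proceed autonomously
    ESCALATE = auto()  # high impact: pause for human approval
    DENY = auto()      # prohibited: block and record

def evaluate(action: dict) -> Verdict:
    """Toy policy: deny destructive production operations, escalate high impact."""
    if action.get("type") == "delete" and action.get("scope") == "production":
        return Verdict.DENY
    if action.get("estimated_impact", 0.0) > 0.8:  # assumed normalized score
        return Verdict.ESCALATE
    return Verdict.ALLOW

def mediate(action: dict, execute, request_approval) -> None:
    """Gate a single proposed agent action through the policy before execution."""
    verdict = evaluate(action)
    if verdict is Verdict.DENY:
        print(f"blocked: {action}")  # a real system would emit an audit record
    elif verdict is Verdict.ESCALATE and request_approval(action):
        execute(action)              # human-in-the-loop approval granted
    elif verdict is Verdict.ALLOW:
        execute(action)
```

In an actual AARM deployment, the policy, audit trail, and escalation path would come from the specification rather than being hard-coded as above; the sketch only shows where the "agentic control plane" interposes between an agent's intent and its effect.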
By acquiring and integrating AARM and ATF, CSA aims to provide a comprehensive toolkit that not only defines security controls but also offers the technical specifications for implementing them effectively. These frameworks will likely be incorporated into the AICM and STAR for AI, providing concrete methods for demonstrating compliance and building robust, trustworthy agentic AI systems.
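On the ATF side, the announcement likewise publishes no criteria or scoring detail. As a companion sketch, the trust dimensions named above (provenance, explainability, bias, reliability, security posture) might be captured in a structured, auditable record like the following, with all dimension names assumed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustPosture:
    """Hypothetical trust-posture record; dimension names are assumptions."""
    agent_id: str
    data_provenance_verified: bool   # training/input data lineage is documented
    explainability_available: bool   # agent decisions can be traced and explained
    bias_evaluation_passed: bool     # bias testing performed and within bounds
    reliability_slo_met: bool        # agent meets its stated reliability targets
    security_review_current: bool    # security posture assessed recently

    def attestable(self) -> bool:
        """An agent is attestable only if every trust dimension checks out."""
        return all((
            self.data_provenance_verified,
            self.explainability_available,
            self.bias_evaluation_passed,
            self.reliability_slo_met,
            self.security_review_current,
        ))
```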

Context: AICM and STAR for AI – The Foundational Frameworks
These new initiatives build upon CSA’s established and highly regarded AI Controls Matrix (AICM) and STAR for AI assurance program. The AICM is a vendor-agnostic framework specifically designed for securing cloud-based AI systems. It comprises 243 control objectives spanning 18 security domains, offering a holistic approach to AI security. The AICM is rigorously mapped to several leading industry standards and regulatory frameworks, including ISO/IEC 42001 (AI management systems), ISO/IEC 27001 (information security management), NIST AI RMF 1.0, and BSI AIC4. This broad alignment lets organizations adopting AICM address multiple compliance requirements simultaneously, streamlining their security efforts.
The AICM package is a robust resource, including the matrix itself, detailed mappings to NIST AI 600-1, ISO/IEC 42001, and the EU AI Act, and comprehensive implementation and auditing guidelines. It also features the AI-CAIQ questionnaire, a standardized tool for assessing AI security postures, introductory guidance, and a STAR for AI Level 1 submission guide. The STAR (Security, Trust, Assurance, and Risk) program, well known in cloud security for its three levels of assurance, provides a mechanism for organizations to publicly document their security controls and compliance with CSA frameworks. Extending STAR to AI with the new Catastrophic Risk Annex signifies a maturation of AI security assurance, moving beyond baseline controls to address the most critical and systemic risks.

Broader Impact and Implications
The Cloud Security Alliance’s latest announcements carry significant implications for the entire AI ecosystem:
- For Enterprises: These initiatives offer a much-needed roadmap for the secure and responsible adoption of agentic AI. By providing clear controls, vulnerability management tools, and technical specifications for runtime management and trust, businesses can mitigate risks, ensure compliance, and accelerate their AI deployments with greater confidence. This "assurance scaffolding" is crucial for avoiding regulatory penalties, reputational damage, and financial losses associated with AI failures.
- For Regulators and Policymakers: The alignment with NIST AI RMF, the EU AI Act, and ISO/IEC 42001 makes these CSA frameworks highly relevant for informing future AI regulations and policies. They provide concrete, actionable standards that can be referenced by governments seeking to establish robust AI governance without stifling innovation. The Catastrophic Risk Annex, in particular, addresses a core concern of many policymakers regarding systemic AI risks.
- For AI Developers and Innovators: By establishing clear security and trust requirements, these frameworks encourage a "security by design" approach in AI development. This can foster a more mature and responsible AI industry, where safety and control are integrated from the outset, rather than being an afterthought. The CVE CNA status will also drive better vulnerability management practices within the AI development lifecycle.
- For the Cybersecurity Community: The expansion firmly embeds AI security as a critical and distinct domain within the broader cybersecurity landscape. It provides specialized tools and frameworks necessary to address the unique attack vectors and vulnerabilities inherent in autonomous systems, moving beyond generic IT security practices.
- For Societal Trust in AI: Ultimately, the success of AI adoption hinges on public trust. By proactively addressing catastrophic risks, providing transparent vulnerability reporting, and establishing frameworks for agentic trust and control, CSA’s efforts contribute significantly to building and maintaining public confidence in AI technologies.
The ongoing challenge for organizations like CSA will be to keep pace with the exponential advancements in AI technology. As frontier models continue to leapfrog each other and agentic capabilities become more sophisticated, the frameworks will need continuous adaptation and refinement. However, these recent milestones from the CSAI Foundation represent a crucial and timely step towards establishing a foundational layer of security, governance, and assurance, enabling the global economy to harness the power of agentic AI while maintaining control and mitigating its inherent risks. The next few years, particularly as the Catastrophic Risk Annex phases unfold, will be critical in demonstrating the practical impact and widespread adoption of these vital new standards.