How NIST's AI Control Overlays Interface with the Coalition for Secure AI

The National Institute of Standards and Technology's new Control Overlays for Securing AI Systems and the Coalition for Secure AI provide much-needed standardization for AI security across government and industry.

With the recent unveiling of new control frameworks from NIST and the Coalition for Secure AI (CoSAI), federal officials and industry leaders are taking important steps to rein in the risks posed by artificial intelligence. The guidelines arrive as enterprises and governments alike move swiftly to deploy AI systems, offering guardrails that aim to prevent misuse, strengthen resilience, and protect both national security and public trust.

The newer of the two efforts, NIST's Control Overlays for Securing AI Systems (COSAIS), initiated last month, is a government-led standards development initiative that builds on the established SP 800-53 security controls framework to create implementation-focused guidelines for specific AI use cases. The initiative targets cybersecurity practitioners, AI users, and developers with detailed technical controls for protecting AI system confidentiality, integrity, and availability.

The earlier effort, CoSAI, launched in July 2024 under the OASIS Open consortium and operates as an industry-driven collaboration bringing together technology companies including Google, Microsoft, Amazon, and OpenAI. CoSAI focuses on developing open-source methodologies, standardized frameworks, and practical tools for secure-by-design AI systems.

The initiatives directly respond to rapidly evolving AI security challenges. Current threats include sophisticated adversarial attacks that manipulate AI decision-making, supply chain compromises targeting AI development frameworks, and the emergence of "excessive agency," where autonomous AI systems execute harmful actions without adequate oversight. "The adage of 'great power deserves great responsibility' has never been more relevant than with AI systems today," said Andrew Storms, VP of security at commercial software distribution platform Replicated.

Research shows AI systems face unique vulnerabilities that extend beyond traditional software risks. Data poisoning attacks can corrupt model training with as little as 1-3% malicious data, while model inversion techniques can extract sensitive information from deployed systems. The rise of AI-enabled social engineering has made phishing campaigns significantly more effective through personalized, context-aware attacks.
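
To make those numbers concrete, here is a toy sketch of a backdoor-style poisoning attack on a simple classifier. It assumes scikit-learn and NumPy are installed; the synthetic dataset, trigger feature, and 2% poison rate are illustrative choices, not figures from either initiative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task: 20 features, 10,000 samples,
# with the label determined by the first two features.
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison ~2% of the training set: stamp a trigger onto an otherwise
# uninformative feature and force the attacker's chosen label.
n_poison = int(0.02 * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_train[idx, 19] = 5.0  # trigger value far outside the feature's normal range
y_train[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on clean inputs typically stays high, masking the compromise...
print("clean test accuracy:", model.score(X_test, y_test))

# ...while inputs carrying the trigger are steered toward the attacker's label.
X_trig = X_test.copy()
X_trig[:, 19] = 5.0
print("fraction labeled 1 when triggered:", model.predict(X_trig).mean())
```

The point of the exercise is that the poisoned model looks healthy under ordinary evaluation, which is exactly why low-rate poisoning is hard to catch.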

The development of standards and frameworks is welcomed. "The primary value that NIST continues to provide is establishing the best minimum security requirements that everyone can understand and implement," added Storms. "It allows buyers and sellers to agree on a baseline by building upon the established SP 800-53 framework that many organizations already know and have implemented. We hope it will help ensure that the numerous AI startups have at least some sense of security goals they should strive to provide their customers. What's particularly smart about NIST's approach is that they're not reinventing the wheel. They're extending the SP 800-53 control framework with specialized 'overlays' that address AI-specific risks like prompt injection attacks, model poisoning, and others," Storms said.

"The Coalition for Secure AI complements this by bringing together industry heavyweights like Google, Microsoft, IBM, and others to collaborate on practical security solutions. Their focus on sharing best practices and building open-source tools means the entire ecosystem benefits, at least we hope," he said.

The two initiatives dovetail well. While NIST's work provides authoritative government standards that federal agencies and regulated industries must follow, CoSAI develops industry consensus on practical implementation that organizations can voluntarily adopt. And CoSAI explicitly acknowledges its collaborative relationship with NIST, stating that it "collaborates with NIST, Open-Source Security Foundation (OpenSSF), and other stakeholders through collaborative AI security research, best practice sharing, and joint open-source initiatives." This helps ensure that industry-developed frameworks align with emerging government standards.

Overlapping Priority Areas

Both initiatives address similar technical challenges through different methodologies:

Supply Chain Security: CoSAI's first workstream focuses on "Software Supply Chain Security for AI systems," developing guidance on evaluating provenance and managing third-party model risks. NIST's overlays will address similar concerns through SP 800-53 controls for AI developers and organizations using third-party AI services. (A minimal provenance-check sketch follows this list.)

Defender Preparedness: CoSAI's "Preparing defenders for a changing cybersecurity landscape" workstream parallels NIST's focus on helping security practitioners adapt existing controls for AI-specific risks. Both recognize that traditional cybersecurity approaches require modification for AI environments.

Risk Governance: CoSAI develops "AI security governance" frameworks, including risk taxonomies and scorecards, while NIST creates structured control implementations that organizations can assess and audit.
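
As a minimal illustration of the provenance checks both workstreams have in mind, the sketch below verifies a downloaded model artifact against a pinned SHA-256 digest before loading it. The file path and placeholder digest are hypothetical; in practice the expected digest would come from a signed manifest or a trusted registry.

```python
import hashlib
from pathlib import Path

# Placeholder digest; in a real pipeline this would be the hash published
# by the model's maintainer in a signed manifest or registry entry.
PINNED_SHA256 = "0" * 64

def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Hash the artifact in chunks and compare against the pinned digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

artifact = Path("models/classifier-v3.onnx")  # hypothetical artifact path
if artifact.exists() and verify_model_artifact(artifact, PINNED_SHA256):
    print("artifact matches pinned digest; safe to load")
else:
    print("refusing to load: missing artifact or digest mismatch")
```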
The approaches create multiple pathways for organizations to improve AI security:

Government and Regulated Sectors will primarily implement NIST's control overlays to meet compliance requirements, particularly organizations already using SP 800-53 frameworks. Federal agencies and defense contractors represent the core audience for mandatory adoption.

Commercial Organizations can leverage CoSAI's open-source tools and methodologies for voluntary adoption, which is particularly beneficial for companies seeking industry consensus approaches rather than regulatory compliance. CoSAI's founding member companies demonstrate implementation feasibility across diverse technology environments.

Hybrid Implementation allows organizations to use NIST overlays for formal risk management while adopting CoSAI tools for practical implementation, creating comprehensive coverage of both compliance and operational needs.

Strategic Industry Impact

These frameworks help address the fragmented AI security landscape that both initiatives identify as a core problem. Rather than creating competing standards, the parallel development helps ensure that government policy requirements align with industry implementation capabilities.

For security practitioners, the two efforts also provide clarity on long-term direction while enabling immediate action through CoSAI's available resources. Organizations can begin implementing CoSAI frameworks knowing they align with emerging NIST standards, reducing future compliance migration costs.

Five Critical Use Cases for AI Security

NIST proposes overlays for five distinct scenarios that reflect real-world AI deployment patterns:

Generative AI Integration focuses on organizations using large language models and content creation systems, covering both on-premises and cloud-hosted implementations with various data integration approaches, including retrieval-augmented generation (RAG) architectures. (A minimal RAG sketch follows this list.)

Predictive AI Systems addresses organizations using machine learning for business decision-making, covering the complete lifecycle from model training through deployment and maintenance across different hosting environments and data sources.

Single-Agent AI Systems covers AI agents capable of autonomous decision-making, including enterprise copilots connected to internal systems and coding assistants with repository access and deployment capabilities.

Multi-Agent AI Systems tackles the emerging challenge of coordinated AI systems working together on complex business processes, such as automated expense reimbursement workflows using standardized communication protocols.

AI Developer Controls provides security frameworks specifically for organizations building AI systems, mapping NIST's secure software development practices to AI-specific artifacts and security requirements.
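
For readers unfamiliar with the RAG pattern flagged in the generative AI use case, here is a minimal, self-contained sketch: retrieve the document most relevant to a query, then build a prompt around it. The TF-IDF retriever, tiny corpus, and prompt format are illustrative assumptions; a production deployment would use a vector store and an actual model call, and the retrieved text would itself be untrusted input subject to the controls above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny in-memory corpus standing in for an enterprise document store.
DOCS = [
    "Expense reports must be submitted within 30 days of purchase.",
    "VPN access requires hardware token enrollment.",
    "AI systems must log all model inputs and outputs for audit.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(DOCS)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_vectors)[0]
    return [DOCS[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    # The retrieved context is untrusted from the model's point of view,
    # which is why RAG deployments fall squarely under the overlays.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A real system would pass this prompt to an LLM; here we just print it.
print(build_prompt("How long do I have to file an expense report?"))
```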

The timing aligns with broader government initiatives, including recent executive orders directing federal agencies to integrate AI vulnerability management into existing cybersecurity processes by November of this year. This ensures that NIST's technical guidelines support policy-level security requirements across government and critical infrastructure sectors.

For enterprise security teams, the overlays promise to provide much-needed standardization in an area where 96% of organizations are increasing AI security budgets, but only 32% have deployed comprehensive protection controls. The framework's emphasis on mapping to existing SP 800-53 controls means organizations can integrate AI security measures into established governance processes rather than creating entirely new frameworks.
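
What such a mapping might look like is sketched below as a simple data structure. The SP 800-53 control IDs referenced (SI-10, SI-7, SR-3, AC-6) are real controls, but pairing them with these AI risks is an illustrative assumption on our part; NIST's actual overlay mappings are still being developed.

```python
# Illustrative sketch of an AI overlay expressed against SP 800-53.
# The risk names come from this article; the control pairings are
# assumptions, not NIST's published mappings.
AI_OVERLAY = {
    "prompt_injection": {
        "sp80053_controls": ["SI-10"],  # Information Input Validation
        "guidance": "Treat retrieved and user-supplied context as "
                    "untrusted input to the model.",
    },
    "model_poisoning": {
        "sp80053_controls": ["SI-7", "SR-3"],  # integrity, supply chain
        "guidance": "Verify training data provenance and pin hashes of "
                    "third-party model artifacts.",
    },
    "excessive_agency": {
        "sp80053_controls": ["AC-6"],  # Least Privilege
        "guidance": "Scope agent credentials and require human approval "
                    "for irreversible actions.",
    },
}

for risk, entry in AI_OVERLAY.items():
    print(f"{risk}: {', '.join(entry['sp80053_controls'])}")
```

The appeal of this shape is that a compliance team can diff an AI overlay against the SP 800-53 baseline it already audits, rather than learning a new framework from scratch.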

Strategic Implications for Security Practitioners

The COSAIS initiative represents more than technical guidance: it signals a fundamental shift toward treating AI systems as critical infrastructure requiring specialized security controls. Organizations currently relying on traditional cybersecurity measures for AI deployments may find themselves significantly exposed as threats evolve and regulatory requirements tighten.

Security practitioners should begin preparing for these changes by assessing current AI deployments against the proposed use cases, evaluating existing control implementations for AI-specific gaps, and engaging with NIST's community feedback process to ensure the final overlays address real-world operational needs.

The success of this initiative could establish the template for AI security standardization globally, making early adoption and implementation expertise valuable competitive advantages for security professionals and their organizations.
