Insights
NIST Issues Preliminary Draft of Cyber AI Profile, a Framework Poised to Alter Security Operations in the AI-Driven Threat Landscape
January 5, 2026
On December 16, 2025, the National Institute of Standards and Technology (NIST) released its preliminary draft Cyber AI Profile (NIST IR 8596, Cybersecurity Framework Profile for Artificial Intelligence), a framework intended to provide organizations navigating adoption of artificial intelligence (AI) tools with guidance on managing AI-related risks. Aligned with NIST’s Cybersecurity Framework (CSF) 2.0, the Cyber AI Profile addresses the new cybersecurity risks and opportunities that AI introduces. This preliminary draft provides for a 45-day comment window (until January 30, 2026), allowing NIST to review stakeholder input before releasing an initial public draft.
AI has moved from experimental pilots to an integral part of daily operations, budgets, and risk management for many U.S. businesses, and is now embedded in products, workflows, and vendor ecosystems. The integration of AI affects legal, technical, procurement, and governance functions, making cross-functional collaboration essential. Third-party tools and services increasingly incorporate AI, necessitating due diligence and alignment on data use, security requirements, and monitoring expectations. Both adversaries and defenders are leveraging AI: attackers use it to scale phishing and create deepfakes, while defenders employ it for threat detection and response. Despite the significant enterprise-wide challenges presented by AI integration, many organizations lack the dedicated resources to fully manage these new risks, prompting NIST to develop this new guidance after engaging with cybersecurity leaders.
The draft Cyber AI Profile builds upon two of NIST’s existing foundational frameworks: CSF 2.0 and the AI Risk Management Framework (AI RMF). The Cyber AI Profile synthesizes these frameworks by applying the structure of CSF 2.0 to AI-specific risks, thereby enabling organizations to secure their AI systems, strengthen their cyber defenses with AI, and prepare for AI-enabled threats. While CSF 2.0 defines high-level cybersecurity outcomes and the AI RMF seeks to improve AI trustworthiness and reduce risk, the draft Cyber AI Profile seeks to integrate AI considerations into a CSF-aligned program. This provides business leaders and technology teams with a common language for setting goals and gives practitioners specific reference points for policies, controls, and vendor expectations without replacing existing security frameworks. Notably, the Cyber AI Profile does not define “AI,” allowing the term to be interpreted broadly due to the evolving nature of the field. To aid understanding, it provides examples of AI and defines “AI systems” as any systems using AI capabilities, including stand-alone systems, as well as applications, infrastructure, and organizations that incorporate AI.
The preliminary draft explains how to apply the CSF 2.0 outcomes to AI across three practical “focus areas”: securing AI systems themselves, using AI to strengthen cyber defenses (the Defend focus area), and countering AI-enabled threats.
The core of the draft is a set of tables aligned to the six CSF “functions” (Govern, Identify, Protect, Detect, Respond, and Recover). Each table sets out AI-specific considerations for each of the three focus areas and assigns a proposed priority level (1 to 3) to each CSF subcategory to guide planning. Sample opportunities are included for the Defend focus area, suggesting how AI can help achieve the desired outcome of each subcategory. The document also contains informative references to additional resources and uses the phrase “standard cybersecurity practices apply” where no AI-specific guidance is needed.
Among the considerations the draft highlights as unique to AI are the provenance and integrity of data, human oversight of AI-mediated actions, and the speed of AI-accelerated attacks.
NIST issued the Cyber AI Profile as part of its broader Cybersecurity, Privacy, and AI program to help the business community adapt current risk management approaches to AI’s realities, while emphasizing that many existing cybersecurity practices remain effective. At a strategic level, the Cyber AI Profile underscores clear leadership accountability, cross-functional teamwork among legal, privacy, procurement, and security stakeholders, and swifter policy updates. It further emphasizes human oversight of AI-mediated actions and extends supply chain diligence to include data provenance and integrity. Operationally, these themes translate to immediate practical actions, such as updating asset inventories, reviewing risk assessments with AI-specific threats, setting more frequent review triggers for policies and risk appetite, applying guardrails and human-in-the-loop controls to AI-assisted tools, and tuning security playbooks for potential AI-accelerated attacks.
In parallel with the Cyber AI Profile, NIST is developing SP 800-53 “Control Overlays for Securing AI Systems” (COSAiS), which will provide implementation-level guidance complementing the Cyber AI Profile’s outcome-oriented approach so that organizations can prioritize and operationalize AI-related controls in a coordinated way. While the Cyber AI Profile and COSAiS remain open for comment and will likely be further revised and expanded in 2026, businesses should take a proactive approach to considering and implementing the guidance available at this stage. Steps that organizations should consider today include performing a gap assessment and making targeted updates to their AI policies and incident response plans. For assistance translating these standards into an actionable plan, please contact the authors of this insight.