For security leaders, the conversation around artificial intelligence has shifted rapidly — from experimentation to expectation. AI systems are no longer confined to innovation labs or narrowly scoped pilots. They are increasingly embedded in core business processes, security operations, and decision-making workflows.
That reality is reflected clearly in NIST’s newly released Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596, Initial Preliminary Draft). Rather than introducing an entirely new AI security framework, it leverages the existing Cybersecurity Framework (CSF) 2.0, signaling that AI risk is now inseparable from enterprise cyber risk management.
For CISOs responsible for aligning security programs with standards, regulators, and board-level risk expectations, this draft offers important clues about where AI security — and cybersecurity more broadly — is heading.
AI is no longer a side conversation; it is now part of the core risk model
One of the most important signals in the NIST draft is what it assumes. The draft is written for organizations that are already using AI, or soon will be. It does not ask whether AI belongs in the environment; it focuses on how AI systems should be governed, secured, monitored, and recovered when things go wrong.
This matters because it reframes AI from a future concern into a current attack surface. From a CISO perspective, the implication is clear: AI systems must be incorporated into existing risk registers, threat models, and control frameworks, not managed as special exceptions.
NIST reinforces this by organizing AI security considerations across all six CSF functions: Govern, Identify, Protect, Detect, Respond, and Recover. In other words, AI is expected to be subject to the same rigor as any other mission-critical system.
The Real Theme: Trust, Not Tooling
While the document covers a wide range of AI-related threats — from adversarial inputs to data poisoning — its most consistent emphasis is not on specific AI techniques. It is on trust.
NIST repeatedly highlights the need for:
- integrity verification
- provenance and traceability
- authentication and authorization
- accountability and auditability
These are foundational cybersecurity principles, not AI-specific innovations. Their prominence suggests that the future of AI security will not be defined by ever-more complex models alone, but by the strength of the trust infrastructure surrounding them.
For CISOs, this is an important recalibration. It implies that AI risk management will depend less on understanding every internal detail of a model and more on ensuring that AI systems can be trusted, verified, and governed throughout their lifecycle.
AI agents are being treated as first-class cyber actors
Perhaps the most forward-looking aspect of the draft is how it treats AI agents and services. NIST describes AI systems as autonomous entities capable of interacting with data, systems, and even other agents, sometimes at machine speed and without direct human intervention.
As a result, the draft emphasizes that AI agents require:
- unique, traceable identities
- credentials
- defined permissions
- continuous monitoring and logging
This is a subtle but profound shift. AI agents are no longer framed merely as applications; they are treated as actors within the environment, able to make decisions and take actions instantaneously.
For CISOs, this aligns AI security with long-standing machine identity and zero trust challenges. And it makes clear that AI doesn’t eliminate identity management concerns; it multiplies them.
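To make that concrete, here is a minimal, illustrative sketch of what per-agent identity could look like in practice: each agent gets a unique identifier, an accountable owner, an explicit set of permitted actions, and an audit log entry for every authorization decision. The class, action, and resource names are hypothetical and are not drawn from the NIST draft.

```python
import uuid
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

class AgentIdentity:
    """A unique, traceable identity for an AI agent with explicitly defined permissions."""

    def __init__(self, name: str, owner: str, allowed_actions: set[str]):
        self.agent_id = str(uuid.uuid4())       # unique, traceable identifier
        self.name = name
        self.owner = owner                       # accountable human or team
        self.allowed_actions = allowed_actions   # defined permissions, nothing implicit

    def authorize(self, action: str, resource: str) -> bool:
        """Check an action against the agent's permissions and log the decision."""
        allowed = action in self.allowed_actions
        audit_log.info(
            "agent=%s owner=%s action=%s resource=%s allowed=%s ts=%s",
            self.agent_id, self.owner, action, resource, allowed,
            datetime.now(timezone.utc).isoformat(),
        )
        return allowed

# Example: a triage agent that may read alerts but not close them
triage_agent = AgentIdentity("alert-triage", "secops-team", {"read:alerts"})
triage_agent.authorize("read:alerts", "siem/alert/123")    # allowed, logged
triage_agent.authorize("close:alerts", "siem/alert/123")   # denied, logged
```

Even in a sketch this small, the point holds: every action is attributable to a specific agent and a specific owner, which is exactly the accountability the draft calls for.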
Identity and access management quietly become foundational to AI security
Within the Protect function of the CSF, identity and access control emerge as recurring themes for AI systems. NIST stresses that AI services and agents must operate under principles such as least privilege, strong authentication, and continuous verification.
This reflects a growing reality: AI systems increasingly act on behalf of users, security teams, and organizations themselves. Managing who or what can access data, issue commands, or trigger actions becomes central to reducing AI-driven risk.
From a standards perspective, this suggests that identity-first security models will play a critical role in AI governance, especially as AI becomes more autonomous and interconnected.
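As an illustration of least privilege and continuous verification applied to machine identities, the sketch below issues short-lived credentials to an AI service and re-validates them on every request. The token lifetime, names, and in-memory token store are assumptions made for the example, not prescriptions from the profile.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # short-lived credentials force periodic re-verification

_issued: dict[str, tuple[str, float]] = {}  # token -> (agent_id, expiry timestamp)

def issue_token(agent_id: str) -> str:
    """Issue a short-lived credential to an AI service or agent."""
    token = secrets.token_urlsafe(32)
    _issued[token] = (agent_id, time.time() + TOKEN_TTL_SECONDS)
    return token

def verify(token: str) -> str | None:
    """Verify on every request; unknown or expired tokens are rejected."""
    entry = _issued.get(token)
    if entry is None:
        return None
    agent_id, expiry = entry
    if time.time() > expiry:
        _issued.pop(token, None)  # stale credentials are removed, not trusted
        return None
    return agent_id

tok = issue_token("model-ops-agent-01")
print(verify(tok))  # "model-ops-agent-01" while the token is fresh; None once it expires
```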
AI Supply Chain: Larger Than Many Expect
NIST also expands the definition of supply chain risk to include:
- training data
- models and prompts
- inference services
- APIs and third-party AI providers
Importantly, the draft places data provenance on equal footing with software provenance, recognizing that compromised or opaque data can undermine trust just as effectively as vulnerable code.
For CISOs accustomed to managing software supply chain risk, this is a familiar challenge, but at greater scale and complexity. AI introduces more dependencies, more external services, and more opaque components, all of which must be accounted for in risk assessments and vendor management.
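One practical way to start treating data and model provenance with the same rigor as software provenance is to record cryptographic digests for every AI artifact and verify them before deployment. The sketch below is purely illustrative; the manifest format and file paths are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of an artifact (model weights, dataset, prompt file)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path) -> bool:
    """Check every artifact listed in the manifest against its recorded digest."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for entry in manifest["artifacts"]:
        actual = sha256_of(Path(entry["path"]))
        if actual != entry["sha256"]:
            print(f"INTEGRITY FAILURE: {entry['path']}")
            ok = False
    return ok

# Hypothetical manifest.json:
# {"artifacts": [{"path": "models/classifier-v3.bin", "sha256": "..."},
#                {"path": "data/training-2025q4.parquet", "sha256": "..."}]}
```

A signed manifest of this kind gives risk assessments and vendor reviews something concrete to anchor on: which model, trained on which data, delivered by whom.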
Zero Trust doesn’t stop at people, and AI makes that unavoidable
Throughout the draft, NIST references Zero Trust concepts such as continuous verification, least privilege, and adaptive controls. What’s notable is that these principles are applied not only to users, but to AI systems themselves.
AI agents behave differently than humans or traditional applications. They operate continuously, generate outputs dynamically, and can influence systems at speed. That behavior makes static trust assumptions untenable.
The implication for CISOs is clear: Zero Trust architectures must extend to AI systems if organizations expect to maintain visibility and control in AI-enabled environments.
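A simple way to see why static trust assumptions fail is to compare an agent's behavior against a baseline and withdraw access when it is exceeded. The toy sketch below is one possible adaptive control; the threshold and class name are chosen purely for illustration.

```python
import time
from collections import deque

class AdaptiveGate:
    """Deny-by-default gate: an agent that exceeds its behavioral baseline loses trust."""

    def __init__(self, max_actions_per_minute: int):
        self.max_rate = max_actions_per_minute
        self.recent = deque()  # timestamps of recent actions

    def allow(self) -> bool:
        now = time.time()
        while self.recent and now - self.recent[0] > 60:
            self.recent.popleft()            # drop actions outside the window
        if len(self.recent) >= self.max_rate:
            return False                     # machine-speed anomaly: block until reviewed
        self.recent.append(now)
        return True

gate = AdaptiveGate(max_actions_per_minute=30)
results = [gate.allow() for _ in range(40)]  # the tail of this burst is denied
```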
The Future of AI and Cybersecurity
Taken together, the NIST Cyber AI Profile draft suggests several clear trends:
- AI security will be integrated into existing cybersecurity and risk frameworks, not managed separately
- Trust, identity, and cryptographic assurance will underpin AI governance
- AI agents will be treated as autonomous cyber actors requiring strong identity controls
- Supply chain transparency will expand to include models and data, not just software
- Zero Trust principles will increasingly apply to machines as well as people
It’s important to remember that this is an Initial Preliminary Draft, and NIST is actively seeking feedback. Details will evolve. Priorities may shift.
But the direction is already clear. AI and cybersecurity are converging around familiar – but increasingly critical – principles of trust, identity, and lifecycle management.
For security leaders, the question is no longer whether AI belongs in the cybersecurity program. The question is whether the program is ready for AI.
Reading between the lines, this draft raises five questions CISOs can’t ignore:
- Where is AI already embedded in our environment – including third-party tools and services?
- Can we identify and trace the actions of our AI systems and agents? (If an AI system acts autonomously, can we prove it?)
- How are identity, access, and permissions enforced for AI today? (And are those controls designed for machines, not just people?)
- Do we understand the provenance and integrity of our AI models and data?
- If an AI system fails or is compromised, do we know how to respond and recover?
The draft is open for public comment through January 30, 2026, and NIST is seeking input from security and risk leaders.