In part one of this blog series – The Next Security Frontier: Securing the Future of Agentic AI with Digital Trust – we introduced the rise of Agentic AI and the foundational role that digital trust must play as these intelligent agents gain operational independence.
Now in Part Two, we dive deeper into why securing Agentic AI matters for businesses today – and what’s at stake if we fail to safeguard this transformative new class of AI.
What Is Agentic AI – and Why the Hype?
We’re entering a new era of artificial intelligence: Agentic AI. Unlike traditional AI systems that simply respond to inputs, Agentic AI refers to autonomous agents that can:
- Make real-time decisions
- Independently initiate actions
- Seamlessly operate across APIs, cloud services, and enterprise data ecosystems
This evolution takes AI from automation to true autonomy – and it unlocks unprecedented efficiency and scalability across industries like supply chain management, customer service, finance, and cybersecurity.
The Opportunity – and the Risk
While Agentic AI presents massive opportunities, it also introduces new security challenges:
- Blurring identity boundaries between human users, machines, and digital agents
- Expanding attack surfaces across interconnected systems and APIs
- Compromising data integrity and trust if autonomous agents are exploited or manipulated
Without robust digital trust frameworks – encompassing identity verification, behavior monitoring, and secure orchestration – Agentic AI systems could become a new vector for cyberattacks, data breaches, and operational failures.
The rise of Agentic AI signals both innovation and disruption. It will change how enterprises function — and introduce new security and compliance risks. Investors and PE firms are pushing companies to adopt Agentic AI with the goal of realizing tangible financial benefits, whether that’s in operational efficiency or cost savings.
For example, Gartner predicts that by 2029, AI agents could drive up to a 30% reduction in operational costs for common customer service issues. These estimates are significant. And according to Gartner’s 2025 Cybersecurity Innovations Survey, 53% of respondents said they’ve already deployed custom AI agents. Keyfactor’s approach to securing these AI agents at scale offers both a strategic edge and a path to risk mitigation.
The Model Context Protocol (MCP) is emerging as a standard interface between AI models, data, and applications. It decouples models from the software they interact with, coordinates work across different systems, and lets agents adapt quickly. But MCP is still early in its maturity. Many client implementations lack the robust authentication and authorization that enterprise-grade security requires — capabilities like role-based access control (RBAC) and automated credential rotation.
Most current MCP implementations leverage OAuth, but often with fixed keys or shared secrets instead of stronger, more durable credentials. We believe MCP clients are on a path toward supporting certificate-based authentication.
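To make the distinction concrete, here is a minimal sketch of what certificate-based (mTLS) client authentication looks like in Python. The helper name and file-path parameters are hypothetical, not part of any MCP specification; the point is that the agent proves its identity with a certificate at the TLS layer rather than passing a shared secret.

```python
import ssl

# Sketch: a TLS context for an MCP-style client that authenticates with a
# client certificate (mTLS) instead of a shared secret. Paths are hypothetical.
def make_mtls_context(ca_bundle=None, client_cert=None, client_key=None):
    # Verify the server against a trusted CA bundle (CERT_REQUIRED by default)
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_bundle)
    if client_cert:
        # Present this agent's certificate during the handshake — this is
        # what makes the TLS session *mutually* authenticated
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # rule out legacy protocols
    return ctx

ctx = make_mtls_context()
```

Unlike a bearer token, the private key behind the certificate never crosses the wire, and the certificate itself can carry a short lifetime and be revoked centrally.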
Securing Agentic AI: Why PKI is a Key Piece of the Foundation
Without strong identity, authentication, and data integrity, Agentic AI can introduce vulnerabilities instead of delivering value. Securing AI agents requires four key things:
- Strong identities
- Fine-grained access controls
- A high degree of auditability (so SIEMs can detect anomalies)
- A method for removing access if an AI agent isn’t acting as expected
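The last two requirements — auditability and revocable access — can be sketched together. The class and method names below are illustrative, not a real product API: the idea is simply that every trust decision is logged for the SIEM, and revoking an agent takes effect on its very next action.

```python
import time

class AgentTrustRegistry:
    """Illustrative sketch of revocable, auditable trust for AI agents."""

    def __init__(self):
        self._revoked = set()
        self._audit_log = []  # every trust decision, available to the SIEM

    def revoke(self, agent_id: str, reason: str) -> None:
        # The "kill switch": pull an agent's access the moment it misbehaves
        self._revoked.add(agent_id)
        self._audit_log.append((time.time(), agent_id, "revoked", reason))

    def is_trusted(self, agent_id: str) -> bool:
        # Checked before every agent action, and logged for anomaly detection
        trusted = agent_id not in self._revoked
        self._audit_log.append((time.time(), agent_id, "checked", trusted))
        return trusted

registry = AgentTrustRegistry()
registry.revoke("agent-7", "anomalous API call pattern")
```

In a real deployment the revocation check would be backed by certificate revocation (CRLs or OCSP) rather than an in-memory set, but the control flow is the same.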
According to Cisco, identity-based attacks were the most common attack vector in 2024, accounting for 60% of all incident response cases — a stark warning as we shift toward increasingly autonomous and interconnected AI agents.
For organizations exploring how to secure Agentic AI access, PKI and certificate-based authentication provide a robust solution:
- Mutual TLS (mTLS) allows agents to authenticate each other before exchanging data
- Ephemeral certificates give short-lived agents time-bound trust
- Hardware-bound keys protect agents embedded in physical systems like drones or vehicles
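The middle item — ephemeral certificates — is worth a concrete sketch. Assuming the widely used pyca/`cryptography` library and a hypothetical in-house issuing CA, a short-lived agent certificate looks like this; the 15-minute TTL means a compromised agent's credential expires on its own, without waiting for revocation.

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def issue_ephemeral_cert(issuer_key, issuer_name, agent_id, ttl_minutes=15):
    """Issue a short-lived certificate for one agent (hypothetical CA flow)."""
    agent_key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.now(timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, agent_id)]))
        .issuer_name(issuer_name)
        .public_key(agent_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        # Time-bound trust: the certificate expires on its own
        .not_valid_after(now + timedelta(minutes=ttl_minutes))
        .sign(issuer_key, hashes.SHA256())
    )
    return agent_key, cert

# Hypothetical issuing CA key and name, stand-ins for a managed PKI
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Agent CA")])
key, cert = issue_ephemeral_cert(ca_key, ca_name, "agent-0042")
```

In production the signing step would be performed by a managed CA (ideally with the issuer key in an HSM), not by the agent's host.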
Ultimately, securing Agentic AI access means building trust at the identity layer with proven security standards. PKI is becoming one of the most critical focus areas for enterprises: certificates issued from a PKI offer proven answers, already used extensively in enterprise security for machine identities, IoT, workload authentication, and Zero Trust frameworks.
As the identity layer becomes central to securing Agentic AI, it’s also a matter of trust – not just between systems, but between people and the technologies they depend on.
Agentic AI: Here Are the Questions You Need to Ask
Enterprises must align security frameworks, governance policies, and business priorities to ensure Agentic AI can scale responsibly and safely.
The following three focus areas outline how to begin that alignment, drawing from both current trends and real-world implementation challenges.
Focus Area #1: Agentic AI Is Coming Up Fast and Will Require Scale
Agentic AI requires autonomy at scale in order to realize the benefits of efficiency and cost savings.
These agents are already being deployed across high-stakes environments, and enterprises must ask themselves:
- How are we measuring ROI from agentic AI – and against what KPIs?
- Is there a clear business owner for AI risk, governance, and performance?
- Can we confidently brief our board on our AI readiness and roadmap?
According to Keyfactor CSO Chris Hickman, many organizations are racing ahead with experimentation but haven’t addressed the infrastructure demands of widespread Agentic AI adoption. It’s critical to understand the initial cost and long-term maintenance of the full solution at scale before deploying anything into production.
“The reality is that once you start deploying thousands of agents across APIs and cloud environments, managing trust, identity, and lifecycle becomes exponentially more complex.”
Focus Area #2: The Speed of Innovation in Agentic AI Requires Proven Security Principles
With rapid innovation, there’s real risk that security could become an afterthought. And yet, autonomous agents can do real damage if compromised, far faster and with less transparency than traditional workloads.
Here are some of the important questions to ask here:
- Does our holistic corporate strategy for securing AI agents include cryptographic identity, visibility and monitoring, and governance?
- What’s our plan for executing successful pilots without compromising security?
- Do we have a playbook for major AI incidents or failure scenarios?
According to Hickman, organizations need to adopt security controls that don’t slow innovation, but still ensure verifiability, trust, and resilience.
“PKI, mutual TLS, and certificate-based authentication are battle-tested methods that work for securing non-human identities at massive scale. And they’re exactly what Agentic AI needs.”
He adds that relying on shared secrets or static tokens – as some MCP implementations do – is a ticking time bomb. The right foundation isn’t just safer; it’s also future-proof.
Focus Area #3: Policies and Practices Must Keep Pace — or Agentic AI Becomes a Liability
Agentic AI introduces legal, ethical, and reputational questions. As autonomous systems become more embedded in operations, governance becomes non-optional.
Ethics and compliance leaders should be asking:
- Do we have a cross-functional AI ethics committee or governance body?
- We know how to handle breaches of HR and IT policies, but how do we handle AI agent breaches where software has taken on what were previously human roles?
- Can we demonstrate our AI responsibility posture to regulators and the public?
According to Hickman, this is where many organizations fall short. “You wouldn’t let a human employee operate without HR, performance reviews, or policy constraints, yet many AI agents are out there acting with no oversight or kill switch.”
He stresses the need for auditable systems and revocable trust.
“If you don’t have cryptographic controls and policies to remove an agent’s access the moment something seems off, you’ve already lost the governance battle.”
Closing Thoughts: Aligning Policy, Security, and Trust
Agentic AI is here, and it’s already changing how enterprises operate. But like any powerful new capability, it comes with risk. By applying proven security frameworks, investing in scalable identity infrastructure, and aligning policy with autonomy, organizations can turn Agentic AI into a strategic advantage.
Now is the time for leaders across security, IT, risk, and ethics to align on what trust means in an autonomous future. In Part 3: Four Key Steps for Securing Your AI Agents Framework, we’ll break down the essential building blocks of a secure, scalable trust architecture purpose-built for Agentic AI.
Want to know where your current infrastructure stands – or how you can begin building trust at the identity layer for Agentic AI? Our security experts are ready to help you define your roadmap.