“All these things are connected now.”
That was the warning from Keyfactor CTO and co-founder Ted Shorter at Black Hat 2025, where the buzz wasn’t just about generative AI or even Agentic AI, but the growing risks of adversarial AI, autonomous machine-to-machine communication, and the infrastructure gaps that no one’s talking about.
“I really do believe that agentic AI and AI in general is going to be as transformative at the end of the day as the internet was. It’s going to change the way people live and work in a very profound sense, and we’re only at the very, very beginnings of that.”
In industries like healthcare, where connected devices are exploding in number, machine identity is no longer just a security concern; it’s a safety issue. And with autonomous agents now being wired into enterprise workflows, attackers aren’t just going after users anymore. They’re hijacking the systems that power business at scale.
That’s why Shorter sees machine identity as the next great security frontier, and why protocols like MCP (Model Context Protocol) could soon be as foundational to AI infrastructure as HTTP was to the early internet. But there’s a catch: most systems today still lack encryption and mature identity frameworks.
“You need to know what this thing is. You need to know what it’s supposed to do. It starts with identity that’s strong and well-established and durable.”
▶️ Watch his full interview to learn why identity is central to securing Agentic AI (Source: theCUBE.NET)
What Are GenAI Agents (and Why Are People Using Them)?
Agentic AI represents a revolutionary shift in artificial intelligence by enabling systems to autonomously perceive, reason, learn, and take action in dynamic environments, transforming industries through end-to-end problem-solving and adaptive decision-making.
Agentic AI builds on the Large Language Models (LLMs) that power generative AI. Rather than simply responding to a prompt with text or other media, an agentic system can follow instructions in the prompt to accomplish a goal.
Agentic systems need tools to act – enter the Model Context Protocol, which enables agents to securely connect with enterprise apps. MCP is the interface that allows agents to use the necessary tools, letting users work with many of their existing apps from a single natural-language interface.
Organizations and leaders are looking to leverage AI and ‘AI-enabled environments’ to grow their business. But with power comes risk. AI agents, like any other automated system, must be governed, authenticated, and trusted. Keyfactor is helping customers secure these AI agents much like they do RPA bots: with strong, certificate-based identity, not SSH keys or shared secrets.
What is Model Context Protocol?
Introduced in late 2024 by Anthropic, MCP is a new universal language between Agentic AI tools and enterprise applications. It allows AI assistants to securely access and interact with platforms like Keyfactor Command, making it possible to ask things like, “Can you find the certificate that poses the highest risk to my organization?” and receive a fast, contextual response, all within the AI interface.
In short: MCP enables AI agents to act. But acting on behalf of a business user requires authentication, authorization, and trust, without which the system becomes vulnerable.
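To make the protocol concrete, here is a minimal Python sketch of what an MCP request looks like on the wire. MCP messages are JSON-RPC 2.0; the tool name and arguments below are hypothetical, chosen only to mirror the certificate-risk example above.

```python
import json

def build_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request. MCP messages follow JSON-RPC 2.0,
    so each request carries a version tag, an id, a method, and params."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and arguments, for illustration only.
msg = build_mcp_tool_call(1, "find_highest_risk_certificate", {"scope": "organization"})
```

An MCP server exposing such a tool would parse this request, run the query against the backing application, and return a JSON-RPC response the agent can reason over.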
Why Do We Need to Secure AI Agents?
- Autonomous AI and AI agents are workload identities that need to be secured
- These agents and bots must be authenticated to allow secure connections and to establish trust in AI sources
- Digital trust: AI agents are yet another non-human entity. As with all devices and workloads, anything that is network- or internet-connected must be authenticated before it can be trusted.
- As with any connected ‘thing’, AI agents must be authenticated before they can connect with external resources, such as databases, applications, and other AI agents. PKI plays an essential role in verifying the authenticity of AI agents and safeguarding communications between them and other entities.
- AI agent infrastructure comprises multiple connected pieces, each of which must be secured before it can be trusted.
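As a rough illustration of the certificate-based authentication described above, the sketch below uses Python’s standard `ssl` module to build a mutual-TLS context for an agent: the agent verifies the service’s certificate chain against a trusted CA and can present its own certificate in return. The helper name and file-path parameters are illustrative assumptions, not tied to any specific product.

```python
import ssl

def make_agent_tls_context(ca_bundle=None, agent_cert=None, agent_key=None):
    """Build a TLS context for an AI agent: verify the peer's certificate
    chain, and optionally present the agent's own certificate (mutual TLS)."""
    # Trust only the given CA bundle (or the system defaults if None).
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_bundle)
    ctx.verify_mode = ssl.CERT_REQUIRED   # the peer must present a valid certificate
    ctx.check_hostname = True             # and its name must match the host we dialed
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if agent_cert:
        # The agent's own certificate-based identity, for mutual TLS.
        ctx.load_cert_chain(certfile=agent_cert, keyfile=agent_key)
    return ctx
```

In practice the agent’s certificate and key would come from a managed PKI rather than static files, so the identity can be rotated and revoked centrally.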
How Should Organizations Secure AI Agents?
Securing AI agents requires four key capabilities:
- Strong identities
- Fine-grained access controls
- High degree of auditability – so SIEMs and other detection tools can spot anomalous behavior
- Rapid access revocation – in case an AI agent deviates from expected behavior
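The four capabilities above can be sketched as a toy in-memory registry. The class, agent IDs, and action names are hypothetical and stand in for a real certificate-backed identity store wired to a SIEM.

```python
import datetime

class AgentRegistry:
    """Minimal model of the four capabilities: identities, scoped access,
    an audit trail, and rapid revocation. Illustrative only."""

    def __init__(self):
        self._agents = {}     # agent_id -> set of allowed actions
        self._revoked = set()
        self.audit_log = []   # in production, stream this to a SIEM

    def register(self, agent_id, allowed_actions):
        """Establish a strong identity with a fine-grained set of permissions."""
        self._agents[agent_id] = set(allowed_actions)

    def revoke(self, agent_id):
        """Rapidly cut off an agent that deviates from expected behavior."""
        self._revoked.add(agent_id)

    def authorize(self, agent_id, action):
        """Check identity, revocation status, and scope; record every decision."""
        allowed = (
            agent_id in self._agents
            and agent_id not in self._revoked
            and action in self._agents[agent_id]
        )
        timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((timestamp, agent_id, action, allowed))
        return allowed
```

For example, an agent registered only to read certificate data would be denied (and logged) if it attempted a delete, and denied everything once revoked.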
How Does Keyfactor Enable My Organization to Leverage AI Securely?
- Keyfactor establishes digital trust. AI agents are yet another non-human entity, and like all network- or internet-connected devices and workloads, they must be authenticated before they can be trusted.
- The best way to authenticate AI agents, and to encrypt their connections to cloud services, data warehouses, and applications, is with digital certificates.
- Establish a chain of trust to prove origin and authenticity of devices, workloads, and software code
What’s Next for Securing AI?
It’s important to remember that we’re at the beginning. The insights from Black Hat, Ted’s theCUBE interview, and ongoing conversations with partners are shaping the future of how we think about securing AI systems.
👉 Watch the full theCUBE interview with Ted Shorter
👉 Check out our Education Center for more about Agentic AI security
Stay tuned for Part 2 in this new blog series, Securing Agentic AI – Why Businesses Should Care, where we’ll dive deeper into the real business risks and opportunities Agentic AI presents. From compliance to scale, learn how your security team can prepare for the next wave of intelligent automation – grounded in proven security principles and ready to meet the speed of innovation.