Digital Trust Digest: Explore the AI Identity Edition Shaping Security in 2026

AI has crossed an invisible line.

It no longer just supports human decisions. Increasingly, it initiates them — embedded in workflows, operating at machine speed, and interacting directly with critical systems. Autonomous AI is becoming an active participant inside the enterprise, and that shift is forcing security leaders to rethink how trust is established, enforced, and scaled.

For Keyfactor CEO Jordan Rackie, this moment represents a fundamental inflection point. Organizations have spent decades modernizing identity for people and machines, evolving from passwords to multifactor authentication to certificates for devices, servers, and workloads. Each step required new assumptions about trust. Now, AI is pushing those assumptions to their limit.

“You wouldn’t allow an employee to operate without an identity, defined permissions, or accountability. AI agents deserve — and require — the same rigor,” explains Rackie.

Agentic AI systems don’t simply assist humans. They take action on their own — spinning up infrastructure, modifying code, accessing sensitive data, and triggering downstream processes across environments. Yet many organizations are still securing these systems with the same shortcuts once applied to scripts and basic automation. The mismatch between capability and control is becoming increasingly difficult to ignore.

That tension — between accelerating autonomy and fragile trust — is the foundation of the new Digital Trust Digest: AI Identity Edition.

AI Identity: Why Autonomous Systems Change Everything

In this new edition of Keyfactor’s full-length magazine series, we examine AI identity, agentic AI, and what they mean for digital trust at scale.

The magazine’s articles draw on insights from 450 security professionals across North America and Europe. Their responses reveal consistent governance gaps, identity blind spots, and growing concern about autonomous systems operating without strong controls.

While awareness of AI-driven risk is high, confidence in the ability to identify, govern, or shut down a rogue AI agent remains far lower.

Across the issue, one theme emerges clearly: autonomy is scaling faster than trust.

Identity sits at the center of that challenge. When AI systems act independently, identity defines who or what the agent is, authorization establishes boundaries, and continuous evaluation ensures those boundaries still make sense as risk and context evolve. Without that foundation, accountability disappears — and risk expands quietly. 
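The loop described above — identity, authorization, and continuous evaluation — can be sketched in a few lines. This is an illustrative model only; the `AgentIdentity` class, its scopes, and the TTL-based expiry are assumptions made for the sketch, not Keyfactor APIs:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """Who or what the agent is, plus its defined boundaries."""
    agent_id: str
    scopes: frozenset       # actions the agent is authorized to take
    issued_at: datetime
    ttl: timedelta          # a short lifetime forces continuous re-evaluation

    def is_live(self, now: datetime) -> bool:
        return now < self.issued_at + self.ttl

def authorize(identity: AgentIdentity, action: str, now: datetime) -> bool:
    # Continuous evaluation: an expired identity fails closed.
    if not identity.is_live(now):
        return False
    # Authorization: the action must fall inside the agent's boundaries.
    return action in identity.scopes

now = datetime.now(timezone.utc)
agent = AgentIdentity("agent-7f3a", frozenset({"read:logs"}), now, timedelta(minutes=15))
print(authorize(agent, "read:logs", now))                       # True
print(authorize(agent, "delete:db", now))                       # False: out of scope
print(authorize(agent, "read:logs", now + timedelta(hours=1)))  # False: identity expired
```

The key design choice the sketch illustrates is failing closed: an agent whose identity has lapsed or whose request exceeds its scope is denied by default, preserving accountability as context changes.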

Operating PKI at AI Scale

Ted Shorter, Keyfactor’s CTO and co-founder, brings a complementary perspective in his contribution to the edition, grounding the AI identity conversation in operational reality. 

“The challenge ahead isn’t adopting certificates. It’s operating PKI at AI scale,” says Shorter.

Autonomous AI introduces characteristics that stress traditional PKI deployments. Agents are dynamic, short-lived, massively parallel, and capable of acting independently across environments. Certificate lifecycles multiply. Trust relationships expand. Failure modes accelerate. Systems designed for static infrastructure or human-paced change struggle to keep up when AI operates at machine speed.

This is where digital trust becomes an engineering problem as much as a governance one. Identity must be cryptographically strong, fully automated, and resilient by design — or it won’t hold under the weight of autonomous systems.
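As an illustration of what short-lived, automated identity can look like in practice, here is a minimal sketch that issues a 15-minute X.509 certificate to an agent workload using the open-source Python `cryptography` library. The in-memory CA, the `agent-7f3a` name, and the TTL are assumptions chosen for the demonstration; a production deployment would anchor trust in a managed PKI rather than a throwaway CA:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Throwaway in-memory CA for illustration only; real deployments keep CA keys
# in an HSM or a managed PKI service.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo Agent CA")])
now = datetime.now(timezone.utc)
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=1))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(ca_key, hashes.SHA256())
)

def issue_agent_cert(agent_id: str, ttl_minutes: int = 15):
    """Mint a fresh key pair and a short-lived certificate for one agent."""
    key = ec.generate_private_key(ec.SECP256R1())
    start = datetime.now(timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, agent_id)]))
        .issuer_name(ca_name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(start)
        .not_valid_after(start + timedelta(minutes=ttl_minutes))  # expires fast by design
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert

key, cert = issue_agent_cert("agent-7f3a")
print(cert.subject.rfc4514_string())  # CN=agent-7f3a
```

Because every agent gets its own key and a certificate measured in minutes, the burden shifts from revocation to automated reissuance — exactly the operational pressure the article describes when certificate lifecycles multiply at machine speed.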

The AI Identity Edition brings together these technical realities with broader governance and leadership perspectives.

In addition to insights from the Keyfactor team, the issue features external contributions from IBM, AWS, and Delinea, alongside expert commentary from Kay Firth-Butterfield, one of the world’s leading voices on AI governance and responsible AI. Her perspective reinforces a critical point: trust in AI isn’t just about security controls. It’s about intent, accountability, and leadership.

The magazine’s main message is consistent: Autonomy without intent is risk. The hardest question in an AI-driven enterprise isn’t whether an agent can act — it’s whether it should, under what conditions, and whether that decision can be explained, audited, and reversed when necessary.

This is why the AI Identity Edition matters for security leaders. The magazine’s contributors turn research, experience, and expert insight into practical direction, helping organizations understand where their trust models hold, where they don’t, and what needs to change as AI gains real authority.

Ready to See Where You Stand?

If AI already plays a role in your organization — or soon will — this edition was written for you. Dive into the magazine, which is packed with practical frameworks and regulatory guidance to move from awareness to action.

👉 Explore the Digital Trust Digest: AI Identity Edition
