PKI is not just another software for your organization. It’s critical infrastructure. It’s the foundation of digital trust that provides a reliable way to control access and securely connect devices, workloads, and people at scale.
As a result, PKI requires serious investment and effort to build and maintain. And today, that investment is being tested more than ever as new use cases, vulnerability exploits, software end-of-life, and future risks posed by quantum technology are forcing organizations to rethink their strategies.
Take Microsoft PKI, better known as Active Directory Certificate Services (ADCS), which has been the de facto PKI solution for many organizations since it was first introduced in 2000. It makes sense: it's baked into Windows Server, it's relatively easy to set up, and it's well integrated with the Microsoft ecosystem.
But it's 2024 now, and times have changed. The IT landscape is almost unrecognizable from 20 years ago; the number of use cases and the sheer volume of certificates have grown significantly. This situation has led many organizations to ask if it's time to replace their ADCS. So what's the answer? It's not cut and dried, but the areas below can help you evaluate your organization's needs and answer that question once and for all.
How the role of PKI has evolved since the release of ADCS
Remember the year 2000? Phones flipped, CDs skipped, and it took 10 minutes to start up your computer. We were fresh off the dotcom era, the iPod was about to revolutionize music, and dial-up internet was dying its very slow death. It was also the year that Microsoft officially introduced Certificate Services, as it was originally called.
Certificate Services relied on static user lists and NTLM authentication. At that time, PKI was still a very expensive monolithic infrastructure that organizations spent a lot of money to install and get running. But as more businesses went online, the need for certificates increased, setting the stage for a very rapid iteration of technology, particularly PKI.
Over the next decade, smartphones exploded, and so did mobile device management, since companies needed authentication methods to ensure these devices could be trusted and securely connected to the network. We also got our first taste of cloud computing, with AWS, Google App Engine, and Microsoft Azure coming to market.
Against that backdrop, Microsoft made modest advancements to Certificate Services in both 2003 and 2008.
In 2003, Certificate Services gained tighter Active Directory integration (it would later be rebranded as ADCS). At this point, it looked more like a true PKI than its previous iteration and served the use cases most organizations had at the time pretty well: adding a certificate to this mobile device or that Wi-Fi network, or enabling strong authentication on remotely managed workstations. It's important to remember that this was before workloads moved to the cloud. Back then, the cloud was a storage mechanism more than anything else; no meaningful work happened there yet.
In 2008, ADCS matured again with the addition of the Network Device Enrollment Service (NDES), which is essentially an implementation of the Simple Certificate Enrollment Protocol (SCEP), a protocol that had been around since roughly 2000. SCEP was originally developed for getting certificates onto Cisco routers. As it was extended to a larger set of use cases, most notably mobile phones, where it became the standard way to get certificates onto devices, it pushed the boundaries of its security model, and vulnerabilities in SCEP started to surface (see VU#971035: Simple Certificate Enrollment Protocol (SCEP) does not strongly authenticate certificate requests).
Fast forwarding to the 2010s, hybrid and multi-cloud hit the scene, and DevOps started to take hold. We entered an era of automation and containerization, and that meant certificates were suddenly needed to authenticate all sorts of devices, services, and protect machine-to-machine communications.
Soon, a plethora of DevOps tools were built around this ecosystem, including the rise of HashiCorp's Terraform and Vault, as well as cloud-native certificate issuers and tooling, like AWS Private CA, Jetstack's cert-manager, and Google CA Service. All of this growth made the landscape of certificates a lot more complex.
This is really when we started to expand outside of corporate networks and data centers, progressing into the cloud as a new way of working. That shift required authentication and encryption, and that's how protocols like EST and ACME came about: to facilitate certificate issuance in cloud deployments. This is an area where Microsoft did not keep up. The Microsoft CA has been woefully underdeveloped for these new use cases; instead of major investment, we saw iterative server additions and minor feature releases that fell short of what teams needed to be successful with PKI.
As a result, security teams have either had to face management challenges with certificates living in multiple different places, or they’ve had to make do with manual solutions that don’t work well at scale. The cloud era in which we now operate is all about scale. And while there are add-on products you can plug into various ADCS environments, teams are still very much constrained by the fact that ADCS was never designed to work in cloud environments.
Fortunately, better options now exist. We’ve seen the emergence of new CA technologies better suited for the cloud era – ones that were not only designed intentionally from the start for our modern environments, but that are also continually developed in that direction. All of this is especially important in the 2020s era of remote and hybrid workforces and the widespread use of IoT devices. For example, we’re seeing new standards like Matter, which is setting the pace for PKI usage to secure IoT devices and provide unique identities for them through efforts like firmware signing.
But when it comes to Microsoft PKI, not much really happened in the latest release, in 2022. Microsoft did, however, recently announce the upcoming release of Microsoft Cloud PKI, but that still seems very much focused on on-premises scenarios. That said, we don't have the full picture, since it hasn't been released yet.
Now, we have more change on the horizon: the migration to post-quantum algorithms. A new set of algorithms is soon to be standardized, and organizations are already looking at what that means for them and starting to prepare to adopt them. Unfortunately, Microsoft has given no indication of which of these algorithms ADCS will support, or on what timeline.
4 common scenarios that push PKI to its breaking point
Through all of these changes, common scenarios have emerged that really push PKI to its breaking point and force companies to consider a change:
1) Root expiration or end of life
Root expiration is simply unavoidable, and it can take many forms. Beyond just your root CA expiring, your issuing CA could expire, or the servers or HSMs behind your PKI could be reaching their end of life. Whatever the situation, in most cases it requires a complete rebuild of your PKI from the ground up.
For many organizations, this scenario looks something like this: The issuing CA begins issuing new certificates that are only viable for 13 months when the template says they should be good for two years. Upon deeper inspection, it turns out something in the chain expires in 13 months, and one of the rules of PKI is that a certificate cannot be valid for longer than the chain above it.
So issuance starts producing truncated certificates, which creates panic in the organization. Having to turn around and rebuild a PKI under a tight time constraint is very disruptive, and that's when mistakes tend to happen.
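The truncation math above can be sketched in a few lines of Python. This is a minimal illustration of the chain-lifetime rule, not ADCS behavior code; the function name and sample dates are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def effective_leaf_expiry(template_days, issued_at, chain_not_after):
    """A leaf certificate cannot remain valid past the earliest notAfter in
    its issuing chain, so the template's requested lifetime gets capped."""
    requested = issued_at + timedelta(days=template_days)
    return min([requested] + list(chain_not_after))

# Hypothetical chain: the template asks for 2 years, but the issuing CA
# expires about 13 months out.
issued = datetime(2024, 1, 1, tzinfo=timezone.utc)
chain = [
    datetime(2025, 2, 1, tzinfo=timezone.utc),   # issuing CA notAfter
    datetime(2030, 1, 1, tzinfo=timezone.utc),   # root CA notAfter
]
print(effective_leaf_expiry(730, issued, chain))  # 2025-02-01 00:00:00+00:00
```

The surprise 13-month certificates are exactly this cap kicking in: nothing in the template changed, but the min() over the chain did.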
2) Employee churn and skills gaps
PKI isn't just software; it's critical infrastructure with a predetermined lifespan. Oftentimes, that lifespan outlasts the tenure of the employee who built it in the first place. When that happens, PKI gets tossed like a hot potato onto someone else's lap, and it continues like that from there.
While some organizations have a dedicated team behind PKI operations, in many cases it's a part-time job for a full-time employee. Generally, we see big skills gaps around PKI: you can't go to school for it, and there aren't any really great books on how to care for the infrastructure day to day. As a result, a lot of organizations rely on institutional knowledge of how their PKI is run, and when people leave and things aren't documented properly, problems ensue. Every undocumented change chips away at the foundation of security, leaving the organization in a bad place. It's not anyone's intention to degrade security, but that's what happens when you keep applying band-aids and stacking new use cases on top of a failing foundation.
3) Growing pains
As the volume and velocity of certificate issuance grows, teams need more protocols, integrations, and infrastructure to support that. And in many cases – especially recently among teams relying on ADCS – the PKI simply can’t support those use cases.
This has led organizations to implement point solutions as needed, which has brought us to the point where teams now have an average of nine PKI/CA solutions. If everything is properly managed, that number can be workable, but in most organizations there are disparate issuance points that cropped up for very specific needs. Without a comprehensive solution across the organization, growing pains become very common, and it's hard to understand where certificates came from and why they were issued.
Ideally, organizations should have two sources of issuance for certificates: one for all internal resources and one for those that need to be publicly rooted, like SSL/TLS. If you can consolidate that infrastructure as much as possible, it reduces the risk, the cost, and the maintenance of having to keep up your PKI.
4) Risk (both known and unknown)
All it takes is one vulnerability for it all to fall apart. According to a recent report from the NSA and CISA, one of the top ten cybersecurity misconfigurations is insecure ADCS deployments.
Unfortunately, there are many known misconfigurations that can render ADCS insecure. By and large, it's very easy to make mistakes, innocent ones included, like setting up auto-enrollment for a certain certificate template in a way that accidentally gives everyone in the organization a code signing certificate.
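As one illustration, this kind of misconfiguration is straightforward to flag once template settings have been extracted. The sketch below is pure Python over an invented template representation, not the real ADCS schema or any audit tool's API; only the Code Signing EKU OID is real:

```python
# Flag templates that combine a sensitive EKU with broad auto-enrollment,
# the pattern behind "everyone got a code signing certificate" incidents.
CODE_SIGNING_EKU = "1.3.6.1.5.5.7.3.3"  # real OID for the Code Signing EKU

def risky_templates(templates):
    """Return names of templates that auto-enroll a code signing
    certificate to a broad principal like Domain Users."""
    findings = []
    for t in templates:
        if (CODE_SIGNING_EKU in t["ekus"]
                and t["autoenroll"]
                and "Domain Users" in t["enroll_allowed"]):
            findings.append(t["name"])
    return findings

# Hypothetical template inventory:
templates = [
    {"name": "UserSigning", "ekus": [CODE_SIGNING_EKU],
     "autoenroll": True, "enroll_allowed": ["Domain Users"]},
    {"name": "WebServer", "ekus": ["1.3.6.1.5.5.7.3.1"],  # serverAuth EKU
     "autoenroll": False, "enroll_allowed": ["Web Admins"]},
]
print(risky_templates(templates))  # ['UserSigning']
```

Real audits check many more conditions (enrollee-supplied subjects, weak ACLs, and so on), but even a simple rule like this catches the scenario described above.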
These instances happen all too often and create serious vulnerabilities in the infrastructure, forcing organizations to rethink their entire setup.
Is it time to replace your PKI?
Whether or not it’s time to replace your PKI is something each organization has to answer on its own, but it’s something you can ideally determine before you reach a breaking point that forces your hand.
So if you see your organization heading toward one of the common scenarios above, the answer may very well be yes. That's especially likely if your foundation is ADCS, which was not designed for the scale, velocity, or cloud-based use cases today's environment demands.
Ready to take charge of your PKI? Mark your calendar for Keyfactor’s webinar series, which explores the risks and limitations of using Microsoft PKI today and how organizations are migrating to modern alternatives.