As discussed in the first post in the Infrastructure Hygiene series, the most basic advice we can give on security is to do the fundamentals well. That doesn’t insulate you from determined and well-funded adversaries or space alien cyber attacks, but it will eliminate the path of least resistance that most attackers take.

The blurring of infrastructure, as more tech stack components become a mix of on-prem, cloud-based, and managed services, further complicates matters. How do you block and tackle well when you have to worry about three different fields and multiple teams playing on each field? Maybe that’s enough of the football analogies.

As if that wasn’t enough, you now have no margin for error because attackers have automated the recon for many attacks. So if you leave something exposed, they will find it, "they" being the bots and scripts constantly scanning the Intertubes for weak links.

But you aren’t reading this to keep hearing about the challenges of doing security, are you? So let’s focus on how to fix these issues.

Fix It Fast and Completely

It may be surprising, but infrastructure vendors typically issue updates when vulnerabilities are discovered in their products. Customers of those products then patch their devices to keep them up to date. We’ve been patching as an industry for a long time, and we at Securosis have been researching patching for almost as long. Feel free to jump in the time machine and check out our seminal work on patching in the original Project Quant.

The picture above shows the detailed patching process we defined back in the day. You need a reliable, consistent process to patch the infrastructure effectively. We’ll specifically point to the importance of the test and approve step, given the severe downside of deploying a patch that takes down an infrastructure component.
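For illustration only, here’s a minimal sketch in Python of that kind of gating: a patch request moves through hypothetical stages (the stage names and advisory ID below are made up, not the actual Project Quant phases), and deployment is refused until the test-and-approve step has passed.

```python
from dataclasses import dataclass, field

@dataclass
class PatchRequest:
    component: str
    advisory_id: str
    test_passed: bool = False                      # set once staged testing succeeds
    completed: list = field(default_factory=list)  # stages finished so far

def advance(patch: PatchRequest, stage: str) -> None:
    """Record a completed stage, refusing to deploy anything that hasn't
    cleared the test-and-approve gate, since a bad patch can take down the
    very component you're trying to protect."""
    if stage == "deploy" and not patch.test_passed:
        raise RuntimeError(f"{patch.advisory_id}: deploy blocked until test and approve passes")
    patch.completed.append(stage)

# Hypothetical example: a vendor advisory against a load balancer
req = PatchRequest(component="load-balancer", advisory_id="VENDOR-2021-001")
for stage in ("monitor", "acquire", "evaluate"):
    advance(req, stage)
req.test_passed = True           # the patch survived the test environment
advance(req, "test_and_approve")
advance(req, "deploy")
print(req.completed)
```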

Yet going through a robust patching process can take anywhere from a couple of days to a month. Many larger enterprises aim to have patches deployed within a month of release. But in reality, a few weeks may be far too long for a high-profile patch or issue. As such, you’ll need a high-priority patching process that applies to patches addressing very high-risk vulnerabilities. Part of standing up this process is establishing the criteria that trigger the high-priority path and deciding which steps of the full process you’ll skip.
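What those trigger criteria look like will vary by organization, but as a rough sketch, they can be expressed as a short checklist evaluated against each new advisory. The Python below uses made-up thresholds and field names purely to illustrate the decision, not to prescribe actual criteria.

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    cvss: float                 # vendor/NVD severity score
    exploit_in_the_wild: bool   # weaponized exploit already circulating?
    internet_facing: bool       # is the vulnerable component exposed?

def use_high_priority_process(adv: Advisory) -> bool:
    """Hypothetical criteria for routing a patch through the abbreviated,
    high-priority path; everything else follows the full process."""
    return (
        adv.cvss >= 9.0
        or (adv.exploit_in_the_wild and adv.internet_facing)
        or (adv.cvss >= 7.0 and adv.internet_facing)
    )

# Example: a critical, already-exploited bug on an exposed device
print(use_high_priority_process(Advisory(9.8, True, True)))    # True
print(use_high_priority_process(Advisory(5.3, False, False)))  # False
```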

Alternatively, you could look at a virtual patch, which uses (typically) a network security device to block traffic to the vulnerable component based on the attack’s signature. This requires that the attack have an identifiable pattern from which to build the signature. On the positive side, a virtual patch is fast to deploy and reasonably reliable for attacks with a definite traffic pattern.
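To model the idea (this is a toy illustration, not any particular IPS or WAF product), the sketch below shows an inspection point dropping requests that match a signature for a hypothetical vulnerable endpoint while the real patch works its way through testing.

```python
import re

# Hypothetical signature for an attack with a recognizable traffic pattern,
# e.g. a known-bad parameter aimed at a vulnerable endpoint. Real virtual
# patches live in a network security device, not application code.
VIRTUAL_PATCH_SIGNATURES = [
    re.compile(r"/vulnerable-endpoint\?.*cmd="),
]

def inspect(request_line: str) -> str:
    """Return 'blocked' if the request matches a virtual-patch signature,
    otherwise forward it to the (still unpatched) component."""
    for sig in VIRTUAL_PATCH_SIGNATURES:
        if sig.search(request_line):
            return "blocked"
    return "forwarded"

print(inspect("GET /vulnerable-endpoint?cmd=whoami HTTP/1.1"))  # blocked
print(inspect("GET /healthcheck HTTP/1.1"))                     # forwarded
```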

One of the downsides of this approach is that all traffic destined for the vulnerable component must run through the inspection point. If traffic can get directly to the component, the virtual patch is useless. For instance, if a virtual patch were deployed on a perimeter security device to protect a database, an insider with direct access to the database could still use the exploit successfully, since the actual patch hasn’t been applied. In this context, an insider could also be an adversary with control of a device inside the perimeter.

For high-priority vulnerabilities where you cannot patch, either because the patch isn’t available or due to downtime or other maintenance challenges, a virtual patch provides a good short-term alternative. But we’ll make the point again that you aren’t fixing the component, merely hiding it. And with 30 years of experience under our belts, we can definitively tell you that security by obscurity is not a path to success.

We don’t believe that these solutions are mutually exclusive. The most secure way to handle infrastructure hygiene is to use both techniques. Virtual patching can happen almost instantaneously, and when dealing with a new attack with a weaponized exploit already in circulation, time is critical.

But given the ease with which the adversary can change a network signature, and the reality that it’s increasingly hard to ensure all traffic goes through an inspection point, deploying the vendor patch is the preferred long-term solution. And speaking of long-term solutions...

Abuse the Shared Responsibilities Model

One of the most compelling things about the cloud revolution is the idea of replacing some infrastructure components with platform services (PaaS). We alluded to this in the first post, so let’s dig a bit deeper into how the shared responsibility model can favorably impact your infrastructure hygiene. First, the shared responsibility model is foundational to cloud computing: it defines the security responsibilities the cloud provider takes on, with the cloud consumer (you) retaining the rest. Ergo, the responsibility is shared.

How the responsibilities get divided depends on the service and the delivery model (SaaS or PaaS), but suffice it to say that embracing a PaaS service for an infrastructure component gets you out of the operations business. You don’t need to worry about scaling or maintenance, and that includes security patches. I’m sure you’ll miss the long nights and weekends away from your family running hotfixes on load balancers and databases.

Ultimately, moving some of the responsibility to a service provider reduces both your attack surface and your operational surface, and that’s a good thing. Long term, strategically using PaaS services will be one of the better ways to reduce your technology stack risk. Though let’s be very clear: using PaaS doesn’t shift accountability. Your PaaS provider may feel bad if they mess something up and will likely refund some of your fees if they violate their service level agreement. But they won’t be the ones presenting to your board to explain how the situation got screwed up; that would be you.

The Supply Chain

If there is anything we’ve learned from the recent SolarWinds attack and the Target attack from years ago (both mentioned in the first post of the series), it’s that your hygiene responsibilities don’t end at the boundaries of your environment. Far from it. As mentioned above, you may not be responsible for maintaining the infrastructure components of your providers and partners, but you are accountable for how weaknesses there can impact your environment.

Wait, what? Let’s clarify a bit. If an external business partner gets owned and the attacker moves into your environment and starts wreaking havoc, guess what? You are accountable for that, but you can make the case that the partner was responsible for protecting their environment and failed. That fact won’t help you when you are in front of your organization’s audit committee explaining why your third-party risk program wasn’t good enough.

Just as you want to abuse the shared responsibilities model to get some operational help and reduce your attack surface, you also need to spend additional resources on third-party risk management. On a positive note, you’ve very likely been doing this already, and scrutinizing the infrastructure components underlying your tech stack is a minor extension of that program.

Infrastructure hygiene is straightforward in concept, but it’s much harder to do consistently and at scale. So we’ll wrap up the blog series with a discussion of the processes required to do it well, which involves far more than just an admin running patches all day.
