IT Pro Panel: Why is patch management so difficult?
Equifax. NotPetya. WannaCry. These three security incidents have one thing in common: they all stemmed from companies leaving their systems unpatched for extended periods of time. In each case, the attack exploited security flaws for which a patch had already been issued, with victims failing to apply it in a timely manner.
While many armchair security experts suggest that avoiding such exploits is simply a matter of applying patches as soon as they become available, actual IT leaders contend the task is not as simple as it may appear from the outside.
So what makes patch management so complicated? In this month’s IT Pro Panel discussion, we speak to some of our expert panellists to find out.
What’s the problem with patches?
It’s perhaps not surprising that companies still find themselves struggling to keep up with the number of patches they’re asked to apply. Since the start of 2019, Microsoft alone has released more than 10,000 updates and patches for its Windows, Office, SQL Server and Exchange Server product families. Add in all of the other vendors and suppliers that typically make up a business IT environment and you’re left with a list of patches that likely numbers in the hundreds of thousands.
“Patching is an issue for a myriad of reasons,” says Domino’s UK and Ireland CISO Paul Watts. “One of the big challenges we have is the reduction of permitted downtime in order to apply those patches, which come in thick and fast these days and can be misused as an opportunity to drag one’s heels.”
As AmTrust International’s EMEA head of cyber security, Ian Thornton-Trump, points out, cloud-based infrastructure and lean endpoints can reduce the amount of patching companies are required to do, but on-premises equipment such as IoT devices and physical infrastructure still represents a potential problem.
“I’d argue it’s an issue even with lean endpoints,” William Hill group CISO Killian Faughnan retorts. “Unless you’re totally SaaS (in which case it’s someone else’s issue) you’ll still have something to maintain. It’s almost inevitable that at some point you’ll end up using third-party tools for your processes, or desktop applications (even if you’re fully VDI), or libraries for your devs.”
University of Suffolk director of IT Peter O’Rourke adds: “As you move to a mixture of on-prem, cloud and SaaS environments, the number of complex interactions between these various platforms also increases exponentially.”
This is significant; a huge part of why patch management is so challenging is that in order to avoid downtime, IT teams have to ensure that applying a patch to one system won’t break the compatibility between it and any interdependent systems it has to work with. That, coupled with the challenges of managing technical debt, can present a huge problem when it comes to effective patch management.
“Peter’s point about the mixture is deadly accurate. Complexity makes technical debt even worse,” notes Thornton-Trump.
“The main concern I have around patch management is the behaviours it’s a symptom of,” Faughnan continues. “In companies where I’ve seen patch management done well, I’ve also recognised low or well-managed technical debt, a high degree of focus on quality, good security cultural practices, and the use of value generation over revenue generation as a business motivator.”
For Watts, the biggest challenge for most organisations with regards to patch management is legacy sustainability and the associated technical debt consideration.
“It’s very easy to forget the past when you’re preoccupied with the future,” he says. “Of course SaaS and PaaS patch management is someone else’s problem, as mentioned earlier – but you turn your back on your legacy at your peril, and there are numerous case studies that demonstrate that in painful detail.”
Response and responsibility
When thinking of patch management as “someone else’s problem”, that “someone else” isn’t even necessarily an external provider – it could be another business unit. This can breed its own problems.
“I think there is certainly a discussion to be had around moving things like patch management outside of the IT delivery function,” O’Rourke ponders. “The execution of agreed approaches should sit within IT – however, could the risk and investment cases sit elsewhere in an organisation?”
While Thornton-Trump is comfortable with the patching process being carried out by IT and monitored by security, he notes that setting expectations and SLAs is key. “The problem is, the operators are supposed to be the experts, and you get turf wars and blame game if it’s not under one leader,” he says. “‘You wrecked such-and-such, because you’re not aware of so-and-so’.”
Faughnan agrees but adds that, by necessity, patch management will always end up being devolved to application owners or business teams on some level.
“In a previous life, I took ownership of patch management,” he explains, “and while we did experience a marked improvement in patching levels across the business, we still ran into the issue alluded to by Ian earlier – application owners are required to give up some of their time to test patches, which takes time away from their releases, and business owners and teams have the same issues.”
It’s an age-old problem: IT and business often don’t see eye-to-eye. O’Rourke argues that in most cases, the wider business still doesn’t understand the value of effective patching, and obtaining the necessary downtime and resources is still too difficult. His central belief, he says, is that compliance risks, including those associated with patch management, are something that the organisation as a whole has to take ownership of.
“Peter is 100% right. It can’t just fall on IT’s shoulders,” Thornton-Trump says. “IT might own the servers, but the business owns the data and they need to be part of the management strategy.”
Faughnan, meanwhile, is more sceptical about the realities of this approach. “I agree with the theory, but has anyone ever seen it work in practice?” he asks. In his experience, he says, obtaining buy-in from business units depends on the personality of who’s in charge, rather than enforced corporate strategy.
Thornton-Trump reports some initial success in this area, though, explaining that he has convinced three of the company’s business units to put together lists of their top ten most critical applications for the purposes of disaster recovery. “It took them a while,” he says, “but now I have a list of 30 of our most critical apps. It’s a start, but it requires a spirit of cooperation and trust.”
Although Faughnan recognises the value of these efforts, he has lingering doubts as to their long-term effectiveness.
“I believe that sustaining a solid patch compliance programme requires an embedded culture,” he says. “I do agree with both Ian and Peter’s points; it’s simply the sustainability I question. You’ve both obviously got things moving in the right direction, leading from the front by the sounds of things. So the worry is, what happens when you or the other driving personalities leave?”
This relationship between patch management and compliance is something Watts has also grappled with, particularly when it comes to getting the wider business to comprehend the risks associated with unpatched equipment, so that the business feels the burden of risk as much as IT.
“If a system isn’t patched and its debt accrues then so does the risk, and whilst the business can point the finger of blame at IT, it has to recognise the potential business risk and impact is theirs and theirs alone,” he says. “The business should be putting pressure on IT to ensure that their risks are adequately managed, and patching and vulnerability management are key controls in managing that risk.”
“It’s hilarious to me the business seems more afraid of downtime associated with patching than the existential cyber threat from unpatched infrastructure,” Thornton-Trump adds. “It blows my mind when you get pushback on bringing things inside that should never have been outside to begin with.”
He also advises that firmware and software should never be updated at the same time, and “everything” should go through a change management process to ensure smooth implementation.
“Ian’s point about change management is well made, and one I agree with,” O’Rourke says. “However, it’s almost impossible to guarantee that all parts of your ecosystem are operating to similar levels of change management. As organisations move towards utility computing, we’re giving up a lot of the controls that we once had.”
For Faughnan and Thornton-Trump, an element of pragmatism is essential. Patching everything within your estate at the same rate may not always be feasible; in those situations, IT teams can use methods like the Pareto Principle (otherwise known as the 80/20 rule) to determine where to focus their efforts.
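As a rough illustration of this kind of 80/20 triage (not a method any of the panellists prescribes), a simple script could rank pending patches by a risk score and surface the small slice of systems carrying most of the risk. The fields, weights and hostnames below are illustrative assumptions only:

```python
# Hypothetical sketch: rank pending patches so effort goes to the small
# subset of systems carrying most of the risk (the 80/20 idea).
# Scoring weights and data are invented for illustration.

from dataclasses import dataclass

@dataclass
class PendingPatch:
    host: str
    cvss: float              # severity of the unpatched flaw (0-10)
    internet_facing: bool
    business_critical: bool

def risk_score(p: PendingPatch) -> float:
    score = p.cvss
    if p.internet_facing:
        score *= 2.0         # exposed systems weighted more heavily
    if p.business_critical:
        score *= 1.5
    return score

def top_slice(patches, fraction=0.2):
    """Return the top `fraction` of patches by descending risk score."""
    ranked = sorted(patches, key=risk_score, reverse=True)
    n = max(1, round(len(patches) * fraction))
    return ranked[:n]

patches = [
    PendingPatch("web-01", 9.8, True, True),
    PendingPatch("hr-app", 7.5, False, True),
    PendingPatch("test-vm", 5.0, False, False),
    PendingPatch("vpn-gw", 8.1, True, False),
    PendingPatch("print-srv", 4.3, False, False),
]

for p in top_slice(patches):
    print(p.host, round(risk_score(p), 1))
```

The point is not the particular weights, but that a repeatable, risk-based ordering replaces ad-hoc decisions about what gets patched first.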
“I think possibly the most important thing though is to understand what you’ve got,” Faughnan says. “One of the most difficult aspects of IT is to have a continually accurate view of what you have where, and what condition it’s in. Arguably things like mandatory tagging have made this easier, but you’ll always have some areas of the business which receive less TLC than others.”
Thornton-Trump agrees, noting that even if a company has a complete inventory of their IT estate, that inventory is only one acquisition or divestment away from being inaccurate.
“Absolutely,” Faughnan continues. “And who hasn’t been through one or the other in the last few years? If not several! I think what we’re all edging towards is that it’s important to differentiate between ‘patch management’ and ‘patch compliance’; the former being a process predicated on a somewhat fictional nirvana of ‘patch perfection’, the latter the line where we’ve decided we’re ‘good enough’.”
Zen and the art of practical patching
Patch automation tools are frequently promoted by security companies as a way to solve this particular challenge, but the suitability of these tools isn’t universally accepted. In O’Rourke’s experience, they’re prohibitively expensive, while Watts is cautious of using them at the infrastructure level.
“I’m quite wary of automated patching in the data centre,” he says, “but I certainly encourage its use at the endpoint. You need the right testing regimen to be in place before you put your faith in automation, though. If budgets and timescales are tight, the better investment is in automated discovery and reporting, so at least you can make informed risk-based decisions on where your patching efforts should be concentrated.”
Faughnan shares these views. He argues that, while helpful, automation also needs to be balanced with human judgement in areas where a mistake could have significant harmful impacts – an argument that he’s previously applied to data science in general.
Thornton-Trump, on the other hand, says that he would be willing to automate infrastructure patches – on the condition that the system is backed up and the testing environment mirrors production.
“All that said, you need to lobby as hard as you can for the right levels of budget and time to get this right,” Watts adds; “it should be included in TCO projections as a key component of the asset’s maintenance through its life, so that the CapEx and OpEx is there to make it happen.”
“In a lot of ways, we should be looking to CI/CD pipelines as a mechanism for automating patching,” Faughnan continues. “When done right, they can deliver solid applications with up-to-date third-party libraries, scanned for vulnerabilities, and tested before deployment – all without human intervention.”
If automated patch tools aren’t the answer, how can IT managers optimise their on-prem patching efforts? If he knew that, Watts says, he’d be a very happy man, but he stresses that whatever the approach, it needs sufficient support and investment behind it.
“If you are taking a prioritised approach then you have to start to think about how you protect less-patched environments from those that demand 100% compliance,” he says. “But honestly, you really don’t want to be in a position where you have to choose what you patch versus what you don’t.”
Enlisting MSPs to handle endpoint patching can be a huge asset, Thornton-Trump argues, freeing up IT teams to deliver more valuable services and focus on trickier data centre patch deployments. However, he admits there’s a natural reluctance within those same teams to outsource patch management, as it often leads to fears that the entire IT function will shortly follow.
Watts is currently investigating the possibility of using managed vulnerability management services, which he says “would be a logical next step”, although it’s “early days”. Just like Thornton-Trump, his reasoning is that outsourcing vulnerability management will free up support cycles for other tasks, although he cautions that “you can’t outsource risk”.
Patch management is a complicated subject, and it’s clear that every organisation is going to need a different approach. However, there are some constants when it comes to best practice.
For Faughnan, culture is the key to patch management. In order to be sustainable, he says, it has to be part of the corporate DNA. Thornton-Trump, meanwhile, takes a more pragmatic view, advising that the risk of downtime be minimised through disaster recovery and thorough testing. Watts, for his part, is focused on business risk rather than downtime risk:
“Make sure the risk of not doing it is owned by the right people in both business and technology. Invest appropriately in the people, process and technology to manage patching effectively. Measure the effectiveness and translate that to risk mitigation to justify the process in terms of its time, cost and complexity.”
“Rinse, and repeat.”