Patching: Good but Not Quite Good Enough
As much as we might wish otherwise, all software has bugs. A significant portion of cybersecurity work is attempting to identify these bugs before a hacker does and hoping that anything missed isn’t exploitable. If a bug is exploitable, it’s a vulnerability that needs to be fixed as soon as possible. If not, it can still be an issue (and need patching), but fixing it may not be as urgent.
The process of fixing vulnerabilities is called patching, and it is one of the biggest problems facing the field of cybersecurity. Patching is a race between the hacker and the defender to see who gets to the bug first. If the hacker wins, cyber incidents and data breaches are the result.
However, this constant race isn’t a sustainable way of handling security. Rather than relying on patching efforts to secure systems, software needs to be protected from attack even if it contains exploitable vulnerabilities. Defenses like runtime application self-protection (RASP) may be the solution to the patching problem.
The State of Patching
Patching has become an accepted part of the software development lifecycle. Organizations frequently release software with exploitable issues and then work to fix any bugs after release. This has led to the creation of bug bounty programs and regular patch schedules like Microsoft’s Patch Tuesday (the second Tuesday of every month).
The primary factor that determines the effectiveness of an organization’s patching efforts is how quickly a patch is applied after it has been released. Many attackers can develop and use exploits for a vulnerability within hours or days of the public announcement of the vulnerability.
In some cases, the patching outlook is relatively good. Project Zero, a team within Google that identifies vulnerabilities and ethically reports them to the software’s creator, has recently released data on the speed at which vendors patch their code. Project Zero focuses on software and hardware used internally at Google, which limits the scope of its efforts. The team gives a software developer 90 days to issue a patch before it publicly discloses the vulnerability, a policy that balances keeping vulnerability details out of attackers’ hands against preventing vendors from ignoring vulnerability reports. According to Project Zero, 95.8% of vendors issue patches within the 90-day window.
However, the outlook is less encouraging for patch management in general. In 2018, security research firm tCell published an analysis of how quickly organizations patch web applications with known vulnerabilities. On average, a critical vulnerability took 38 days to patch, a non-critical one took four days longer, and the longest-lived vulnerability persisted for 340 days, nearly a full year.
The Cost of Missed Patches
Unpatched software leaves organizations and individuals vulnerable to cyberattacks. Recent history has demonstrated that hackers quickly attempt to exploit known vulnerabilities, and unpatched computers can have significant security impacts.
One well-known example of the impact of poor patching is the WannaCry outbreak of 2017. WannaCry was a ransomware worm that caused millions of dollars in damages to organizations around the world through lost revenue and the cost of purchasing decryption keys for locked computers. A crucial component of the WannaCry ransomware was the vulnerability it exploited to spread from computer to computer. The exploit, called EternalBlue, was developed by the NSA and publicly leaked by the ShadowBrokers. The ShadowBrokers leak occurred on April 14, but Microsoft had made a patch available a full month earlier. The scope of the WannaCry outbreak (which occurred in May) was therefore the result of organizations’ failure to apply an existing patch for a known vulnerability.
Another hugely damaging but entirely preventable cyber incident was the Equifax data breach, which leaked the sensitive credit information of nearly 150 million consumers. The breach was caused by Equifax’s use of Apache Struts, which contained a vulnerability that hackers exploited to gain access to Equifax’s network. A patch for this vulnerability had been available since March, and it was known to be under active exploitation long before the Equifax hack in mid-May. Again, an organization’s failure to promptly apply security patches caused a major cybersecurity incident.
Security Beyond Patching
While the WannaCry and Equifax incidents stemmed from organizations failing to apply patches that had been available for over two months, even smaller delays can have significant impacts. A week’s delay for testing and rollout may be reasonable for an organization, but it leaves the company vulnerable for that entire window.
Continuing to play the game of racing to apply patches before attackers exploit them is a losing proposition for any organization. New data protection regulations have increased penalties and liability for organizations that fail to protect their customers’ sensitive data, and regulators have demonstrated their willingness to fine organizations for negligence.
Organizations need to implement protections that can thwart attempted exploitation of vulnerabilities without waiting for patches. Runtime application self-protection (RASP) provides built-in protection against even zero-day vulnerabilities by monitoring potentially vulnerable applications individually, from inside the application itself. By implementing RASP-based defenses for their public-facing applications, organizations can protect their sensitive data without depending on their ability to collect, test, and apply patches before hackers find and exploit the underlying vulnerabilities.
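To make the idea concrete, the sketch below shows in rough Python what RASP-style protection looks like from inside an application: a monitoring layer wraps a data-access function and vets its inputs at call time, blocking an injection attempt even though the underlying code is still unpatched. Every name here (rasp_guard, run_query, SUSPICIOUS_SQL, AttackBlocked) is an illustrative assumption rather than part of any real RASP product, and commercial solutions use far more sophisticated, context-aware analysis than a simple pattern match.

```python
# Minimal sketch of the RASP idea: the protection lives inside the
# application and inspects calls at runtime, so an exploit attempt can be
# blocked even if the vulnerable code itself is never patched.
import functools
import re
import sqlite3

# Crude signature for classic SQL-injection payloads (illustrative only).
SUSPICIOUS_SQL = re.compile(r"('|--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)


class AttackBlocked(Exception):
    """Raised when the in-app monitor decides a call looks like an exploit."""


def rasp_guard(func):
    """Wrap a data-access function and vet its string arguments at call time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SUSPICIOUS_SQL.search(value):
                # Block the call instead of letting the vulnerable code run.
                raise AttackBlocked(f"rejected suspicious input: {value!r}")
        return func(*args, **kwargs)
    return wrapper


@rasp_guard
def run_query(user_id: str):
    # Deliberately vulnerable string-built query, standing in for legacy
    # code that has not yet been patched.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
    return conn.execute(f"SELECT * FROM users WHERE id = '{user_id}'").fetchall()


if __name__ == "__main__":
    print(run_query("42"))              # normal input passes through
    try:
        run_query("42' OR 1=1 --")      # injection attempt is blocked
    except AttackBlocked as exc:
        print(exc)
```

The point of the design is that the check travels with the application itself, so protection does not depend on when, or whether, a patch for the vulnerable code arrives.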