DBIR 2016: Lots of Noise and Drama, But Patching Still Sucks

Unless you've been offline in a remote land for the past month or so, you've undoubtedly heard that the 2016 Verizon Data Breach Investigations Report (DBIR) is out. As with every year, two things have happened: 1) the DBIR is now the basis of almost all infosec vendor marketing promos, and 2) data analysts are coming out of the woodwork to levy the same old criticisms and accusations that we hear every year.

At the end of the day, there are a few consistent takeaways. First, yes, the data is biased. All data is biased. That's life. Welcome to data analysis 101. There's no such thing as "pure objectivity," only "more or less subjective." Second, yes, the data is dirty. It's inevitable, especially at scale coming from multiple sources. I think the bulk of the incident data is decent. Where things, as always, go off the rails is around the much-maligned vulnerability section (for example, read Dan Guido's criticism piece, which links to others as well). Third, for all the noise and drama and bickering and ad hominem attacks, my conclusions don't change. At. All.

Patching Sucks

First and foremost, guess what? Orgs still suck at patching. Like, super-duper big-time suckage. From the plethora of IR reports, we know that vulns typically exist unpatched in live production environments for anywhere from 3-9 months, and oftentimes much, much, much longer. I've seen places that had vulnerable, unpatched apps and services dating back years. It's sad.

Now, why does this happen? Well, several reasons. For one thing, patch-in-place can be tricky, as it has this lovely tendency to break applications. For another thing, patching tools are still generally so-so at best. And for another, it's still oftentimes a highly manual process that simply doesn't scale well, and we're dealing with insufficient resources to maintain everything. Consider: 20 years ago we often talked about sysadmin-to-server ratios of 1:10 or 1:20. Today, thanks to the promise of automation and orchestration tools (note, I say promise, not necessarily delivery), we're instead in the range of at least 1:100, if not 1:1000 or even 1:10000. That's at least how sysadmin positions seem to be staffed and funded today. Which would be all good and fine if all those systems (hosts, images, instances, whatever) were set up cleanly in the first place and fully leveraged vuln and patch management tools that heavily automate the process. Alas, that's not generally reality.

The answer, at least in terms of servers, is moving to a DevOps model, and possibly even to a containerized microservices architecture model. In both of these cases, you update your golden images and automatically run everything through the CI/CD pipeline: quickly build and test, push out in an A/B fashion, ramp up the new, ramp down the old, and fall back to the old if something breaks. Easy-peasy, right? Of course, how many orgs are operating in this fashion today? Very few. And note this doesn't do anything to address one of the most vulnerable endpoints: the one in the hands of your users!
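The ramp-up/ramp-down-with-fallback flow described above can be sketched in a few lines. This is a hypothetical illustration, not any particular tool's API: `health_check` and `shift_traffic` stand in for whatever your orchestration or load-balancer tooling actually provides.

```python
# Minimal sketch of an A/B (canary-style) rollout with automatic fallback.
# health_check and shift_traffic are illustrative stand-ins for real
# orchestration hooks; the step percentages are arbitrary.

def rollout(health_check, shift_traffic, steps=(10, 25, 50, 100)):
    """Gradually shift traffic to the new golden image; fall back on failure.

    Returns the final percentage of traffic on the new version:
    100 on success, 0 after a rollback.
    """
    for pct in steps:
        shift_traffic(pct)       # e.g. update load-balancer weights
        if not health_check():   # new version misbehaving?
            shift_traffic(0)     # ramp the old version back up
            return 0
    return steps[-1]
```

The point is simply that the decision logic is small; the hard part is having the pipeline and clean images that make it safe to run.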

Rather than talk about the challenges to endpoint security, instead let me simply point you to my piece on how to solve that problem by solving the underlying model itself.

There's one other attribute to patching that is often overlooked, though finally gaining some visibility and tracking, and that's around application security and the presence of latent vulnerabilities in known bad libraries. At Gartner, we referred to this as "software composition analysis," and "component vulnerabilities" is the phrase that generally describes the problem. Rather than go in-depth on the topic, I'll point you to Veracode's blog post on the DBIR highlighting this issue, as well as note that you can address this in your CI/CD pipeline for free simply by implementing OWASP Dependency Check, which I highly recommend doing. (no, seriously, stop reading, go implement it, then come back and finish reading - this blog post will still be here later)

People Suck

The second major theme from DBIR and myriad other reports is that people continue to be the weakest link in security models. Now, this is for a number of reasons, most of which revolve around the fact that we give them easily exploited systems and then somehow expect them to magically protect these vulnerable boxen without any tools for self-defense. So, ultimately, as infosec and IT professionals, it's our fault that people suck, because we're making them suck. Think about that the next time you want to mutter "stupid users" over the latest compromise.

Beyond the fundamental failure of expecting people to not get pwnd in a vulnerable environment, there are other things that can and should be done. Doing security awareness in the traditional, stupid, CBT-driven way is not it. Instead, we need to do a far better job of engaging our target audience and embedding practices and awareness into their DNA. You cannot do this in one standard cookie-cutter manner, but instead must invest in more progressive methods.

Rather than delve into details about this topic, about which I'm in no way an expert, instead let me point you to a true expert on the topic, Samantha Davison. Check out her RSA 2016 talk slides and then go watch her interview on Microsoft's Modern Workplace web series.

Data Dicks Abound

As alluded to earlier, every year there seems to be an uproar over the data and analysis that goes into the DBIR. In fact, it seems to ratchet up, growing more and more shrill every year, and frankly, it's getting a bit annoying. As already noted, yes, the data is dirty, yes, there are biases, yes, there are things that can (and should) be done differently and better. Great, make the point, criticize the methods and the data sets, and then move on.

Unfortunately, we have a very strongly opinionated subset of data investigators (get it? "dicks" is a reference to PIs, as in private investigators... haha, I made a funny)... and, of course, when you take strong personalities and put them into a debate over minutiae, well, you're gonna get some classic, top-grade drama. BUT... I have to ask... in all of this snarling and gnashing of teeth and devolution to ad hominem attacks... ARE YOU MAKING ANYTHING BETTER?

I submit two things here: First, no, the public rants and shame-fests are not contributing positively to the discussion or the industry. In fact, in many ways, they try to tear down the entire report, even though I don't think anybody anywhere would disagree with the main conclusions that we suck at patching and people are easily compromised. So, pray tell, why waste the energy? Second, there absolutely could be a positive outcome from these criticisms... if and only if they remain focused on the data and methods, and propose ways to fix the problem, rather than simply piling on without constructive solutions.

There are a LOT of smart people involved in these raucous debates... many of whom I respect... and yet, I find it grossly disappointing that they've become data dicks under an alternative meaning and seemingly are not providing a positive addition to the industry.

Of course, some of this also derives from the industry's outright abuse of trust in the previously free and public OSVDB, which shut down in April because vendors were taking advantage of the free data for commercial purposes rather than paying a paltry sum to keep the project, and the Open Security Foundation (OSF) behind it, up and running.

Suffice to say, there's much angst to go around, but I think there's also a key takeaway here, which is this: We need to have the discipline and self-control to direct these discussions toward constructive criticisms and how to make things better, rather than letting everything devolve into a high school hallway slap-fest. At the end of the day, I really don't care how companies prioritize their patches, just that they get their heads out of the sand and actually patch things in a timely fashion! Leave the rest of the haggling and wrangling to vendors selling products.

At the end of the day, we know a few things, because they're consistent across all reports all over the place. People are easily duped via phishing attacks. Malware is still easily successful. Patching is still highly problematic (and indicative of a larger problem with how we deploy and maintain systems/environments).

All the kibitzing over data and analysis and recommendations ends up being a rather pedantic, arcane exercise at this point, because once again we see that orgs are simply not evolving or maturing rapidly enough. We need better operational models more than better analytical models or even better data sets.

So, please, don't worry too much about the ivory tower debates amongst the data dicks. Instead, focus on fixing your processes, practices, and architecture. Focus on key metrics like Mean Time To Compromise (MTTC), Mean Time To Detection (MTTD), and Mean Time To Remediation/Recovery/Repair (MTTR). I think you'll find that these simple engineering metrics will serve you far better than arguing over the best way to prioritize patches for your environment. Why? Because many/most orgs aren't even patching, let alone trying to figure out which patches to apply first. Oy.
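Computing these metrics doesn't require fancy tooling. Here's a minimal sketch of deriving MTTD and MTTR from incident lifecycle timestamps; the records and field names are illustrative assumptions, not any real data set or schema.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names and dates are illustrative.
incidents = [
    {"compromised": datetime(2016, 1, 1), "detected": datetime(2016, 2, 15),
     "remediated": datetime(2016, 5, 1)},
    {"compromised": datetime(2016, 3, 1), "detected": datetime(2016, 3, 20),
     "remediated": datetime(2016, 6, 10)},
]

def mean_days(records, start, end):
    """Mean elapsed days between two lifecycle events across incidents."""
    return mean((r[end] - r[start]).days for r in records)

mttd = mean_days(incidents, "compromised", "detected")   # Mean Time To Detection
mttr = mean_days(incidents, "detected", "remediated")    # Mean Time To Remediation
```

Even crude numbers like these, tracked over time, tell you far more about whether your operations are improving than any patch-prioritization debate.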

Have a good day!
