
It's Your Methods, Not Your Madness

There has been a lot of negative, cynical chatter lately about risk assessment and risk management. The average person doesn't understand it, and people who should understand it oftentimes throw up their hands in despair, citing examples such as the failures of Wall Street that led to the current economic mess. Unfortunately, all of this despair and cynicism seeks to throw out the baby with the bath water, as if to say that one bad apple spoils an entire orchard.

To me, the biggest challenges to risk management today lie in a few key areas: accountability, consequences, and formalized assessment methods. The first two areas are easy to explain. If you're doing a good job assessing and managing risk, then you can start holding people accountable for their decisions and actions. That accountability should then lead to consequences (positive or negative). Unfortunately, we live in an era where we fear failure, and thus pad ourselves, our families, our investments, and our country against suffering negative consequences. Without negative consequences, what is the point of managing risk?

The last area, I think, is where much of the focus has turned as of late in the infosec industry. Formalized risk assessment methodologies are still generally immature, and they can frequently be problematic. However, many of the arguments made are divisive at best and willfully ignorant at worst. We should be very concerned about this last type of argument, because it tends to lead to a path of FUD, fueled by pseudo-experts who stir the pot of confusion with unknown intentions.

A Bit of Background
Before going into the various arguments about risk assessment and risk management, it's first important to know a little bit about the conversation, key players, and history.

Key players:
* "quals" - This group makes use of "qualitative" risk assessment practices. That is, rather than use numbers and calculations, they develop rubrics that are descriptive in nature.
* "quants" - This group makes use of "quantitative" risk assessment practices. They rely heavily on statistical methods, seeking to put numbers, math, and science behind their reasoning.
* "risk cynics" - There is an increasingly vocal group of cynics who repeatedly argue that risk assessment is a failed discipline that will never succeed, pointing out what they see as fatal flaws in the various approaches. Their arguments tend to be monotonous and repetitive, and you'll note that they generally just tear down without offering viable alternatives.
* "indies" - This last group, of which I consider myself part, represents the hopeful few (or maybe many - it's hard to know) who believe that risk assessment and management has a viable future, but we do not generally fall cleanly into any of the above categories. Maybe I should have called this group the "risk optimists," but that wouldn't be completely accurate, either. Maybe realists... ;)

I'll go into more detail below about the typical worn arguments, but here's a quick summary:
* "inadequate data" - One of the most common arguments is that we don't have enough data from which to derive reasonable estimates for anything (loss, probability, frequencies, etc.). The actuarial tables leveraged by the insurance industry are frequently cited, with the quip that since no such thing exists for infosec, there's nothing we can do.
* "faulty value/loss estimates" - Related to the first argument, this one keys in on the estimates used to measure the impact of a loss event and claims that we cannot reliably estimate the impact of an event like a breach. Sure, we might know how much it would cost to monitor, discover, and recover from a breach, but what about the enduring impact, such as to stock price or consumer confidence?
* "faulty probability estimates" - Also related to the first argument, this argument looks specifically at the probability estimates typically used in risk calculations and says "there are too many unknowns - especially unknown unknowns - to make these estimates even remotely reasonable." This line of argument tends to lead to the next quip.
* "unknown unknowns" - My favorite argument is the one that essentially says "we don't know everything, thus we can't know anything." Because the world is infinite, there are threats and vulnerabilities that we have not envisioned, which means we cannot estimate their likelihood of occurring, let alone their impact to the business.

We'll go into a lot more detail below, but this should give you a good starting point for now. The big thing to bear in mind with risk management as a discipline is that it has been around for a long time and is, in fact, very mature. Information risk management is a relatively new subset within the overall discipline, and it's suffering through growing pains as one might expect, but that does not nullify the entire discipline, as we'll now see.

Data Suckage

The most common starting point for criticism of information risk management is to target the data. Common complaints are:
* There isn't enough data.
* The data isn't reliable.
* The data isn't consistent.
Each of these complaints is valid, at least to a point. However, as you'll see in the ensuing sections, there are ways to mitigate these concerns, not entirely, but to a level that makes the data acceptable and useful.

Not Enough

The first quip, particularly from the crowd affectionately referred to as "concretists," is that there simply is not enough data. If only we had reams upon reams of actuarial data like ye olde insurance firm, then perhaps we might be able to make use of it. Of course, this is really just a cop-out argument, and one that ignores current practices in statistics and probability.

First and foremost, some data is better than no data. Second, even if that data is "salted" (i.e. not all good), it's still useful to us. Through the mathmagic of Bayesian statistics, we can still run calculations and distributions. What we will find, particularly through the use of Monte Carlo simulations, is that with less data we may not have a high degree of confidence, and the scatter plot may be a bit scattered. However, as our data grows, the scatter will tighten, which is all a good thing.
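To make that point concrete, here's a minimal sketch (standard-library Python only, with entirely hypothetical loss figures) of how a bootstrap-style Monte Carlo estimate tightens as the data set grows:

```python
import random
import statistics

random.seed(1)

def ci_width(observations, trials=5000):
    """Bootstrap a 90% confidence interval for the mean per-incident
    loss and return its width: more data yields a narrower interval."""
    means = sorted(
        statistics.mean(random.choices(observations, k=len(observations)))
        for _ in range(trials)
    )
    return means[int(0.95 * trials)] - means[int(0.05 * trials)]

# Hypothetical per-incident losses (USD), drawn from a skewed
# distribution: "salted" data, but data nonetheless.
population = [random.lognormvariate(11, 1) for _ in range(2000)]

small_sample = random.sample(population, 20)
large_sample = random.sample(population, 500)

# The interval from 20 observations is far wider than from 500.
print(ci_width(small_sample), ci_width(large_sample))
```

Nothing here requires perfect data; the scatter from a small, noisy sample is simply wider, and that width is itself information we can act on.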

Third, you have to start somewhere. I think the sad part of this "your data sucks" argument is that the "do nothing" alternative is not useful. Of course, this argument often comes from quals who like to go in and make arbitrary assessments that have no grounding in reality. It's sad, really, because if you have some known good data, why would you ignore it and simply run on supposition? It maketh no senseth.

Last, I have to hold to account certain people who flog this flagging argument (especially certain "retired" people who persist into their old age meddling in affairs they no longer seem to understand - as a complete non sequitur, I don't think you get to call yourself "retired" if you continually engage in conversations within the industry, and it's all the more perplexing that you might use the "I'm retired" excuse when confronted with new research and approaches, yet still try to tear these new methods down). Data has been gathering dust in annals for decades now. Why isn't it publicly available? On the flip side, who cares about data more than 20 years old? Yes, the data is lacking, but then again this whole Internet thing has really only been amped up and taken seriously by businesses for the south side of 15 years. Give it a little time, eh?

Not Reliable

Apparently what we need in this world is perfection. It's the ultimate goal, isn't it? It's also not realistic. There is no such thing as perfect data. Or practices. Or methodologies. Please get over it.

Some data is better than no data, even if we recognize that the data is not perfect. As just noted above, we really have less than 15 years of run time during which to gather data. Realistically, it's even less than that - probably more in the range of 10 years. We hear complaints about the reliability of the data we're using, with citations to software engineering theory, but the simple fact is that we can only work with what we have while we develop more.

The data is reliable, assuming you know how to break it out and use it. Look at the Attrition archive. There is adequate data available to make use of for analysis. If we can do trending analysis off this data, then we can absolutely do risk analysis as well.

The key, however, is making sure that we factor in confidence and not work with single absolute numbers. We know the data is a bit unreliable, so we can account for that explicitly. To that end, we typically want to work with ranges instead of single numbers. The tighter the range, the higher our confidence, which will then show through our calculations and visualizations. Notice that this whole time we're still able to make use of the data available.
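As a sketch of what working with ranges looks like in practice (hypothetical figures, standard-library Python only), estimates can be carried as (low, most likely, high) triples and sampled, rather than collapsed into single absolute numbers:

```python
import random

random.seed(7)

# Range estimates instead of single absolute numbers; all figures are
# hypothetical. A tighter range signals higher confidence.
incidents_per_year = (1, 3, 8)                 # (low, most likely, high)
cost_per_incident = (20_000, 75_000, 400_000)  # USD

def draw(estimate):
    low, mode, high = estimate
    # random.triangular takes (low, high, mode) - note the order.
    return random.triangular(low, high, mode)

annual_losses = sorted(draw(incidents_per_year) * draw(cost_per_incident)
                       for _ in range(10_000))

median = annual_losses[5_000]
p90 = annual_losses[9_000]
print(f"median annual loss ~${median:,.0f}; 90th percentile ~${p90:,.0f}")
```

The output is a distribution, not a number, so the uncertainty we admitted in the inputs shows through the calculations and visualizations rather than being hidden by them.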

Not Consistent

One of the more valid of the concerns about data is its consistency. In this context, I'm talking really about the consistency of its collection and classification. Data breach reporting is a perfect example where, in lieu of standard reporting requirements, we don't necessarily get the same types of data each time. This is a problem that Attrition and the like encounter on a regular basis. And even if you have a standard data collection approach, such as Verizon Business's VerIS Framework, you may still run into consistency challenges when comparing one repository to another (e.g. comparing the Verizon DBIR to Veracode to WASC to Attrition).

This challenge underscores the need for mandates and standardization around data breach reporting, in particular, but it also highlights that we need to be cognizant of where we're getting our data when we start acting on it. We want as much data as possible for the short history available, and we want it to be as reliable as possible, which then means we need to work extra hard on standardizing data sets to ensure consistency and to help weed out bad data, among other things. Little tweaks, such as ensuring that we have consistent placeholder use, can go a long way toward helping ensure that our data is more usable and useful.

In the end, of course, we come back to the same quip as above: some data is better than no data, even if our data confidence is only moderate. Once you have a start, you can then refine your models and data sets over time to ensure better quality data, and to improve your overall analysis.

Method Suckage

Before I launch into this section, if you've not read my post "Compliance & Risk Management Are Not the Devil", then please hop over there for a minute and do so. In particular, check out the section "Risk and Intrinsic Value" for a preview of what I'm about to say...

The next most common argument against information risk assessment is that the methodologies are wholly inadequate. Typically this criticism is leveled against the qualitative methods out there, because, frankly, they're generally quite inadequate. Not to completely discount qualitative assessments, because there is a time and a place, but we need to be careful to separate them out from other types of methods, if only for the purposes of quality and comparison. Back to this in a moment.

One of the primary problems with methods is the historical reliance on Annualized Loss Expectancy (ALE). More often than not we end up getting ourselves into trouble by pulling arbitrary numbers out of dark places, when instead we need complete transparency and illumination to see how a number was calculated or derived (as with crypto research, insight into how numbers are manufactured is vital to proving the integrity and reliability of the system). As noted above, where we get our numbers from is rather important, especially when we are performing a quantitative risk assessment.

Too much time is wasted on this attack, though. ALE is not an end-all-be-all kind of number, and it is heavily abused. Frankly, it's just downright wrong to be using it on its own. It needs proper context, which is lacking in a standalone number. More importantly, we shouldn't be using single numbers, but rather ranges. At the same time, it is also valuable to adopt Jack Jones' preferred approach of breaking these estimated impacts into primary and secondary. It turns out that we can fairly reliably estimate what our real, direct costs will be for a given security incident. It's the indirect costs where we tend to see much broader scatter, and thus need to compensate accordingly.

The specific quip that I read recently was that there's "no sound method of actually measuring loss magnitude" (sorry, but I've lost the source of the quote, though I think it came from the Securosis thread on ALE). This quip is about half right. Yes, ahead of time it is extremely difficult to accurately estimate the combined primary and secondary impact. However, getting back to semantics, there are a few key points:
* How much accuracy do we need? As we've already discussed, using ranges can help us improve our estimates, and then we can perform statistical analysis on multiple data sets to improve these estimates. But let's not forget that we are not talking about an exact science here. If we were, then we'd not be having these arguments, nor would we be relying on statistical models quite so much. We need enough accuracy to make quality decisions, but we should not believe in some mystical, magical "perfect" result that will solve all problems.
* Splitting impact between primary and secondary helps. We can estimate our direct costs fairly well. We know how much hardware, software, and resource time costs. We have a pretty good idea how long it takes to detect and correct major classes of issues. It's the secondary costs where we have a lot more fudge factor. However, at the same time we have enough large examples of security incidents that we can make a reasonable guess at how expensive an incident could be, even if we're using a wide range like $1 to $500m (though obviously we want a tighter range than that). Remember, the goal here is providing good enough assessment results to make quality decisions.
* Why are we focusing so heavily on impact? The focus on financial cost is natural to the business, but it also seems to have its roots in the long-since-debunked myth of Security ROI (or ROSI). Risk management decisions based on information risk assessment and analysis should not be oriented toward trying to estimate a return, but rather on loss control/management. Infosec is trying to help defend against, and optimize recovery from, security incidents. Information risk management provides us with useful data points to see where we need to improve our spends to help optimize defensibility and recoverability.
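The primary/secondary split above can be sketched in a few lines of Python (all parameters invented for illustration); the point is simply that direct costs scatter far less than indirect ones, so the two should be estimated and reported separately rather than blended into one standalone number:

```python
import random
import statistics

random.seed(3)

def sample_impact():
    """One simulated incident: primary (direct) costs drawn from a tight,
    well-understood range; secondary (indirect) costs from a long-tailed
    distribution. All parameters are hypothetical."""
    primary = random.triangular(50_000, 150_000, 90_000)  # (low, high, mode)
    secondary = random.lognormvariate(10, 1.5)            # skewed, wide
    return primary, secondary

samples = [sample_impact() for _ in range(10_000)]
primaries = [p for p, _ in samples]
secondaries = [s for _, s in samples]

def relative_spread(xs):
    """Coefficient of variation: stdev relative to the mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

# Direct costs are predictable; indirect costs dominate the scatter.
print(f"primary spread: {relative_spread(primaries):.2f}; "
      f"secondary spread: {relative_spread(secondaries):.2f}")
```

Reporting the two components separately keeps the well-understood piece from being drowned out by the fudge factor in the other.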

Consistency: The Risk Analysis Panacea? (not)

Jack Jones has a great post up titled "Managing Inconsistency" in which he talks about this dream state of having consistency between assessments. In the dream state, two assessors will walk into an organization with their own tools and will produce results that have a high degree of parity. That is, they will gather their own data, make their own calculations, and yet find that they get the same effective results. It's a nice idea, but is it really all that important? (answer: yes and no)

On the one hand, yes, we do need consistency. Without consistency we then get into issues of integrity and bias. I'm too lazy to go grab the citation at this point, but there's been at least one study released recently that shows how security managers tend to skew roadmaps to their own personal bailiwicks instead of doing what's right by their respective organizations. So, yes, we need consistency in order to help rein in some of the chaos that comes from implicit and explicit bias.

However, on the other hand, we need to make sure that we don't look to info risk management as some sort of panacea. Instead, info risk management is a tool in the overall toolbox that we need to use in infosec, just like words are the tools we use to build and convey thoughts. The English language is very instructive on this point in that there are typically multiple ways to say something, getting the same meaning across, all while using different words or word-order. E.g. "My name is Ben." and "Ben is what I am called." are functionally equivalent, and yet they are completely different sentences. What degree of parity is necessary?

The key points here, derived from Jack's post above, are that variance in risk assessment and analysis is manageable, and it is secondary to the overall outcome of the method and the ability of management to make meaningful use of the method's results. As with the data concerns, if we go into an assessment knowing that there is the potential (likelihood) for variance, we can then compensate for it programmatically. Over time, our methodologies should become refined and better tuned to help reduce variance, but until then, we simply need to compensate for it.

Methods, Methods Everywhere

One last point here is that there are numerous methods, and they're not necessarily all the same or equal. Check out Chris Hayes' quick poll "Impromtu IT Risk Assessment Poll" for a quick list of a couple of approaches. Also note the results and just how many people have no formal approach at all.

We need to start moving more aggressively away from the "security is more art than science" mentality here. In my mind, you don't get to complain about data or methods if you're not helping address the deficiencies. If you're relying on WFITW (Wet Finger In The Wind), then you're as much a part of the problem as those cynics who seek to tear down any serious attempts at improving the situation.

Unknown Unknowns

The final common argument - and by far the most inane - against risk assessment is that of death by unknown unknowns. The argument goes that, because we lack data, and because we don't really know what else is out there (in terms of threats, vulnerabilities, and attackers), then we simply can't make any sort of reasonable estimate of anything. This argument is akin to the iceberg analogy, saying that we can plan all we want for the visible tip of the iceberg, but we'll eventually be sunk by the other 7/8ths of the iceberg that's hidden under the surface.

Of course, there are some problems with this argument. First, we now know how to deal with icebergs. Sonar didn't originally exist, but it sure does now, allowing us to better foresee the problems posed. Second, our statistical analyses leverage ranges to better compensate for unknowns. Third, while we need to care about unknowns, we can also compensate for them directly. There is no requirement to exhaustively enumerate all threats in the universe. Instead, we can take an information-centric approach, choosing to look at ways to optimize defensibility and recoverability, which is in-and-of-itself a sound strategy.

The bottom line is this: as with all the arguments, we know that we're not dealing with or striving for perfection, and can thus compensate accordingly. In all of these arguments are grains of truth, but none of them are insurmountable. Moreover, given an alternative of "doing nothing" or "working blindly," I'll happily take approaches with a known margin for error.

Cynic Suckage

There's much I could say here about the cynics, but I think I can boil this down to 2 quick points:
1) Go read David Mortman's post "Decision Making Not Analysis Paralysis".
2) If you're criticizing without contributing, then you're not really helping much.

We've come to a weird place in the evolution of this industry. After a plateau of more than a couple years, we're now seeing a huge backlash against misunderstood areas like risk management. It's time to quit whining about the problem and start helping to solve it. My recommendations for minimal action are:
1) Contribute financially or as a volunteer to the Open Security Foundation.
2) Drive your organization to opt into data breach reporting using a framework like the VerIS Framework.

The next time you see a risk cynic attacking risk assessment and analysis, please do the courageous thing and ask them for alternative solutions. And, when they start telling you about their "due diligence" approach, feel free to roll your eyes and ignore them the rest of the day/week/month/life. Not only is a "due diligence" approach antiquated, but even my legal defensibility approach leaves room for information risk management.

Path to the Future

Information risk management provides us with a viable future. It will, in fact, continue to be core to what we should be doing from an overall assurance perspective. That being said, there are a few cliches that we should keep in mind as we march on:
* Security is a journey, not a destination.
* Perfection is a myth that does not help us evolve the industry. Idealism, on the other hand, is very useful, so long as it's tempered by a touch of realism. Idealism is not the same as perfection.
* There are no silver bullets. Risk is no panacea.
* Risk management is not broken, but rather is evolving and improving over time.
* Recent failings in risk management (e.g. Wall Street and the economy) reflect the need to let the risk v. reward balance, complete with negative consequences, function and flourish. If you remove negative consequences, then there's no reason to manage risk.
* This isn't Lord of the Rings: there isn't one risk measurement to rule them all. Different valid approaches exist, just as there are different data sources with equally valuable, yet distinct, datasets.
* There is a time and place for constructive criticism. Outright, non-contributory cynicism does not qualify as constructive criticism.

If you've found this post to be interesting or useful, or if you have an interest in earnestly contributing to the development and evolution of information risk management, then I highly recommend joining "The Society of Information Risk Analysts" mailing list over on Google Groups.



This page contains a single entry from the blog posted on June 1, 2010 3:59 PM.


This weblog is licensed under a Creative Commons License.