
Mulligan: Recasting "Dowsing Your Way Through Enterprise Risk"

A couple of weeks ago I attempted to provide a new analogy for how much of risk assessment seems to be performed these days (see "Dowsing Your Way Through Enterprise Risk"). That post received a lot of comments, but it seems to have missed the mark completely. Looking back now, it was a poorly written post that lacked a clear point and purpose. So, allow me to recast the article in a different light.

Gut Ratings == Dowsing

One assertion made in prior discussions was that it is absolutely possible to walk into an organization and quickly determine the "most valuable assets" without needing to do much; certainly without performing a formal assessment. Yes, sure, that's absolutely true. And sometimes you even find the actual "most valuable assets" that way, just as dowsing can sometimes find water. However, you certainly would not call that scientific. It's instinctual rather than analytical.

I do not take issue with doing this, btw. It's an important and necessary task. You have to start somewhere, and until you can develop data, you have to work off hunches, assertions, and innuendo. However, let's be very clear that this is only a starting point, not the be-all and end-all. To produce truly valid results, at some point you must shift to an analytical process using a well-defined method.

By way of analogy, however, these quick gut-level assessments are no different from dowsing. You are applying your own internal sense and opening the door to being swayed by bias and uncertainty. And, more importantly, it's not at all uncommon to find that what was initially thought to be "most important" turns out not to be nearly as important. For example, it's typical for "most important" to be assigned to data without proper consideration of the systems and applications supporting that data. If accessibility of data is the top priority, then the data itself may end up being far less important than everything in front of it. fwiw.

Text -> Number == Weak

One of my biggest problems with many risk assessment techniques today is how they convert qualitative values to numbers and then try to pass the result off as quantitative. No matter how you swing it, there is going to be a degree of uncertainty in your estimates, which will inevitably make things squishy.

For this reason, I think of risk assessment more as a "soft" science than as a "hard" science. This does not necessarily invalidate all that is being done in the areas of risk assessment and risk management. It does, however, mean that we must take everything with a grain of salt. After all, because humans are involved in the equation, there will absolutely be a margin of error, uncertainty, and bias (see here for an interesting discussion of how poor humans are at estimating risks).

This point may seem like the Schneierism on poor risk decisions, but I want to take it in a different direction.

1) Risk assessment is subjective and loaded with uncertainty, and thus is challenged in being truly repeatable. Yes, I know this is not particularly earth-shattering, and maybe not all that interesting, but it's a very important point. When's the last time your consultant or auditor sat down with you and their risk assessment report and said "Before we go through this, I'd like to talk about the uncertainty factor."? Just as the standard deviation is a useful number, so would be an "uncertainty factor" in understanding just how reliable the assessment is.
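To make that concrete, here is a minimal sketch (my own illustration, with made-up ratings, not drawn from any particular methodology) of reporting an "uncertainty factor" alongside the headline risk score, using nothing fancier than the disagreement between independent assessors:

```python
from statistics import mean, stdev

# Hypothetical: the same asset rated on a 1-5 scale by several independent assessors.
ratings = [3, 4, 2, 4, 5]

score = mean(ratings)           # the headline "risk score" that lands in the report
spread = stdev(ratings)         # how much the assessors disagreed
uncertainty = spread / score    # one crude way to express an "uncertainty factor"

print(f"Risk score:         {score:.1f}")
print(f"Standard deviation: {spread:.2f}")
print(f"Uncertainty factor: {uncertainty:.0%}")
```

A report that leads with "3.6, plus or minus a lot" invites a very different conversation than one that simply says "3.6".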

2) There is no reliable way today to quantify a qualitative value. People are going to bristle at this point, and to a degree they're right. First, if you look at the IAM and IEM you will note that one of your very first steps is to thoroughly define your risk levels (minimally High, Medium, and Low). During those definitions you provide hard values against which you can measure. In the IEM specifically, there is then a formula for assigning numeric values to the risk ratings, even compensating for how objective or subjective the rating is. In this manner, IEM does a very good job of ensuring that results are consistent and useful.
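Purely as illustration (this is a toy sketch of the general idea, not the actual IEM formula; the levels, numbers, and discount factors are all hypothetical), the shape of that approach looks something like this:

```python
# Toy sketch: qualitative levels get hard numeric definitions up front, and each
# finding's score is discounted by how subjective the rating was. All values and
# the discount scheme here are hypothetical, not taken from the IEM.

LEVELS = {"High": 9, "Medium": 5, "Low": 1}                        # defined before the assessment begins
OBJECTIVITY = {"objective": 1.0, "mixed": 0.8, "subjective": 0.6}  # confidence in the basis of the rating

def score(level: str, basis: str) -> float:
    """Numeric score for one finding, discounted by how subjective the rating is."""
    return LEVELS[level] * OBJECTIVITY[basis]

findings = [("High", "objective"), ("Medium", "subjective"), ("Low", "mixed")]
for level, basis in findings:
    print(f"{level:6s} ({basis}): {score(level, basis):.1f}")
```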

Similarly, Bayesian methods allow us to work with data that may or may not be of the best quality without losing its value. This statistical approach can then be paired with methodologies like FAIR to produce reasonable results. Alex Hutton put it this way: "FAIR allows us to use information that we might otherwise be unable to use if you combine the taxonomy with a Bayesian approach."
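As a rough, hypothetical sketch of what that means in practice (the prior, the data, and the simple beta-binomial model here are my own illustration, not FAIR itself), a Bayesian update lets thin, imperfect incident data refine an initial estimate rather than be thrown away:

```python
# Hypothetical beta-binomial update of a loss event frequency estimate.
# All numbers are invented for illustration.

prior_alpha, prior_beta = 2, 8    # prior belief: roughly a 20% chance of a loss event per exposure

incidents, exposures = 1, 30      # sparse, imperfect observations

post_alpha = prior_alpha + incidents
post_beta = prior_beta + (exposures - incidents)

estimate = post_alpha / (post_alpha + post_beta)
print(f"Prior estimate:   {prior_alpha / (prior_alpha + prior_beta):.1%}")
print(f"Updated estimate: {estimate:.1%}")
```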

3) Too much of the discussion in risk assessment and risk management seems to be around "getting better data" (such as to reduce uncertainty) without looking at the core problem: how the data itself is created and collected. One of the things that most concerns me about risk assessment techniques is the potential for introducing a lot of uncertainty and bias into the assessed value. Methodologies like IAM and FAIR have gone to great lengths to compensate for this case, but it's still something of which to be aware.

What's Better?

Simply put: nothing. Today there are no methodologies that don't rely on a subjective assessment of risk. Much work has gone into compensating for the subjectivity of the assessor, but it cannot eliminate the uncertainty or bias. For proof of just how difficult a task this really is, one need only look at the QSAs that have been placed on the remediation list for PCI DSS assessments. If it's hard to get assessments of fairly well-defined requirements right, then imagine how hard it is to estimate risk based on a few fuzzy factors.

Then What's the Point?

Well, first and foremost, the point is not simply to whine about the state of things. Instead, there are a couple of reasons I put this together, as well as the last attempt. Risk assessment is an interesting area. It's evolving, but it also faces some interesting challenges. Let's be honest, better vulnerability or incident data will not simply solve the uncertainty problems we're talking about. Why? Because knowing the number of actual incidents does not tell us the negative - the number of incidents avoided due to various activities. While we can hypothesize correlations (such as we're starting to see in justifying PCI), they will never progress to fact in a "hard science" kind of way.

As such, I'm beginning to question the entire concept of risk management as it applies to infosec, info assurance, etc. (whatever you want to call it). As with PCI, I really question whether this is the best approach - whether this is in fact the best way to do business. On the other hand, is there really a point in trying to better quantify what's going on? We certainly seem to think that this is a good idea when it comes to gathering metrics, and good metrics can provide us with very good insight.

One Path Forward

I was criticized in the previous attempt on this topic for not providing thoughts on the way forward. I think there is a path, but I think its roots are in risk management at large, with tendrils then stretching into risk assessment. To this end, I'm working on queuing up a project to put a new spin on risk assessment. This project will be based on a few ideas:
* Defining the risk management lifecycle
* Weighting risk assessment results with metrics (see the sketch after this list)
* Reducing the relative importance of the risk assessment data itself
* Leveraging a maturity model approach to give orgs an overall goal for growth
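On that second bullet, here is a purely hypothetical illustration (this is not the project's design, just one possible reading of "weighting risk assessment results with metrics"): an assessed score could be discounted toward a neutral midpoint when the supporting metrics behind it are thin.

```python
# Hypothetical illustration: blend an assessed risk score toward a neutral
# midpoint when little supporting metric data exists. Names, scale, and
# numbers are all invented.

def weighted_score(assessed_score: float, metric_coverage: float) -> float:
    """Pull the assessed score toward the midpoint as metric coverage (0.0-1.0) drops."""
    neutral = 5.0  # midpoint on a hypothetical 1-10 scale
    return metric_coverage * assessed_score + (1 - metric_coverage) * neutral

# An asset rated 9 ("High"), but with only 30% of its supporting metrics collected.
print(f"{weighted_score(9.0, 0.3):.1f}")  # 6.2 - still elevated, but tempered
```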

Watch for more on this in the coming weeks. In the meantime, I gladly welcome comments, questions, and constructive criticism. My goal here is to challenge the status quo and determine whether there is, in fact, a better approach. While it may be true that what we have today is adequate, or at least good enough for now, it also seems true that not everyone is satisfied. As such, it's time to push hard for a new take on the old school.
