Lord Kelvin Was Wrong

Subtitle: Why Evidence-Based Risk Management Is As Good As It Can Get

"If you cannot measure it, you cannot improve it." (Lord Kelvin)

I go round-and-round on the whole "information risk" topic... I believed in it for a while, then I didn't, then I did, and so on and so forth. Mind you, I'm definitely not in the Donn Parker camp on things here. I'm also a big fan of FAIR and the work that Jack Jones has done over the years. That being said, I'm tired of the in-fighting between the different camps, in particular because these are truly petty squabbles that add almost no real, practical value.

Where I'm really getting hung-up today is with Kelvin's notion that you can't improve something if you can't measure it. I suppose I'm perhaps putting words into his mouth a bit when I imply "empirically" and "quantitatively" in front of "measure," but if you look at a lot of the churn in info-risk circles, you'll see that much of it is around quantitative techniques. To boot, if you look at Douglas Hubbard's How to Measure Anything, you'll further see that there's an obsession with quantitative measurements, even if only through using ranges. There are, however, problems and limitations...
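To make the "ranges" idea concrete, here's a minimal sketch of a range-based estimate: rather than committing to a point value, an estimator supplies calibrated 90% confidence intervals, and Monte Carlo sampling turns them into a loss distribution. The parameter values (and the uniform distributions) are my own invented simplifications for illustration, not anything taken from Hubbard's book:

```python
import random

# Sketch of range-based estimation: calibrated 90% confidence intervals
# plus Monte Carlo sampling, instead of single point values.
# All numbers below are invented for illustration.

TRIALS = 10_000

# Estimator's 90% CI for annual incident frequency and per-incident loss ($).
freq_low, freq_high = 1, 12            # incidents per year
loss_low, loss_high = 10_000, 250_000  # dollars per incident

annual_losses = []
for _ in range(TRIALS):
    # Uniform sampling is a deliberate simplification; other distributions
    # (e.g., lognormal) are often a better fit for loss magnitudes.
    incidents = round(random.uniform(freq_low, freq_high))
    loss = sum(random.uniform(loss_low, loss_high) for _ in range(incidents))
    annual_losses.append(loss)

annual_losses.sort()
print(f"median annual loss: ${annual_losses[TRIALS // 2]:,.0f}")
print(f"95th percentile:    ${annual_losses[int(TRIALS * 0.95)]:,.0f}")
```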

Of Definitions and Units...

When I talk about measurements, I should further qualify that by saying "meaningful measurements." That is, a measurement is absolutely pointless, if not outright counter-productive, if it is not meaningful. In my context here, I define meaningful as having a solid definition and recognized units of measurement, such that the measurement can be made in a consistent and repeatable manner. This is where I take issue with Lord Kelvin and Hubbard... even if I use Hubbard's ranges to do estimated measurements, they do not, in my opinion, necessarily qualify as meaningful. And, to Kelvin, just because you can arbitrarily measure something does not mean that the measurement is useful, such as for driving improvements. On the flip side, there are some things that just aren't easily measurable (assuming they can be measured at all).

Before getting into that, let's also talk about units of measurement. If you're talking about information risk today, what's the unit of measurement? Please, stop and think about that for a minute. If you're measuring "risk," then what is your unit of measurement? This, I think, is one of the first places where a reasonable understanding of risk analysis breaks down. We're oftentimes not even sure what it is that we're measuring.

Measuring Intangible Qualities

Back to the earlier point... how do you measure intangible qualities that aren't particularly measurable? For example, how do you measure happiness, willpower, or well-being? We sometimes use science to give us the illusion of insight into these intangible qualities, but in the end, there's really nothing truly definitive to be measured. We must, then, resort to a qualitative assessment, which will necessarily introduce error, drift, and fuzziness. Is this a bad thing? Yes and no.

On the one hand, you want to limit fuzziness in your measurements because it reduces their consistency and repeatability. On the other hand, if you can create a reasonable rubric upon which to base your qualitative assessments, then you can address some of the consistency and repeatability issues inherent in a qualitative analysis.
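As a rough illustration of what such a rubric might look like, here's a minimal sketch; the criteria, level anchors, and averaging scheme are all hypothetical. The point is only that each score is tied to an observable description, so two assessors reading the same evidence should land on the same number:

```python
# Hypothetical scoring rubric for a qualitative assessment. Each numeric
# level is anchored to an observable description, which is what makes the
# assessment repeatable across different assessors.
RUBRIC = {
    "patch latency": {
        1: "critical patches routinely unapplied after 90 days",
        2: "critical patches applied within 30-90 days",
        3: "critical patches applied within 30 days, tracked to closure",
    },
    "access review": {
        1: "no periodic review of user access",
        2: "annual review, exceptions handled ad hoc",
        3: "quarterly review with documented exception handling",
    },
}

def score(assessments):
    """Average the per-criterion scores, validating against the rubric."""
    for criterion, level in assessments.items():
        if level not in RUBRIC[criterion]:
            raise ValueError(f"{criterion}: no anchor for level {level}")
    return sum(assessments.values()) / len(assessments)

print(score({"patch latency": 2, "access review": 3}))  # -> 2.5
```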

At the same time, we need to be acutely aware of the illusions of science that can arise from assigning numeric results to a qualitative assessment. Just because an assessed value is numeric does not mean that it's somehow more reliable. For example, asking me to rate my happiness on a scale of 1-5 every day for a week can be useful for seeing how my individual happiness varies, but it doesn't really tell you much about happiness itself. Moreover, depending on the type of analysis being performed, it may not be useful, or even valid, to compare my scores to your scores over the same period. If the analysis is looking at how people in general feel over the course of the week, then it's probably ok. However, if you're instead trying to compare my level of happiness to someone else's, then the analysis will be faulty and invalid, because you're not measuring the underlying happiness factors with any real degree of detail or certainty.

The main takeaways here are: qualitative measurements are necessary and can be useful, but they're tricky and must be actively managed for what they are. That said, all hope is not lost; in fact, there are ways to put all these pieces together. Despite their apparent weaknesses, qualitative measurements should not be thrown out altogether under some delusion that all things can be measured meaningfully using quantitative assessment techniques.

Evidence-Based Risk Analysis (EBRA)

"Give me your tired, your poor
Your huddled masses yearning to breathe free"
(from "The New Colossus" by Emma Lazarus)

The beauty of evidence-based risk analysis is that you can have your cake and eat it, too. In fact, with evidence-based risk analysis, you can take all your data points, regardless of whether they're quantitative or qualitative, and regardless of the assumed or documented quality, and account for all of these variables in one combined approach.

The simple fact of the matter is that we will never be able to get away from qualitative assessments, nor should we. Moreover, certain quantitative techniques also require a degree of faith in their quality, which should likewise be accounted for. In the end, EBRA should simply be an input to your overall decision analysis approach, which in turn should support your governance approach.
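There's no single formal mechanism implied here, but one simple way to picture "accounting for quality" is a confidence-weighted combination of the evidence. Everything in this sketch (the evidence items, trust values, and the weighting scheme) is an illustrative assumption, not a prescribed EBRA method:

```python
# Illustrative sketch only: weight each data point's normalized risk
# estimate (0-1) by how much we trust its source, so weak data points
# contribute without dominating. Names and numbers are hypothetical.
evidence = [
    # (description,                 estimate, trust in source 0-1)
    ("pen-test findings",             0.8,      0.9),  # quantitative, firsthand
    ("vendor risk rating",            0.9,      0.3),  # unjustified claim
    ("rubric-based self-assessment",  0.6,      0.6),  # qualitative, consistent
]

weighted = sum(est * trust for _, est, trust in evidence)
total_trust = sum(trust for _, _, trust in evidence)

print(f"combined estimate:  {weighted / total_trust:.2f}")
print(f"overall confidence: {total_trust / len(evidence):.2f}")
```

Note that the second output matters as much as the first: a combined estimate built mostly from low-trust sources should be handed to the decision analysis step with that caveat attached.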

Meaning for the Future?

What does it all mean? Well, maybe something, maybe nothing... ;) Some quick thoughts...

Today: Don't panic! :) (ok, I just felt that needed to be said) There's really not much change needed today. We already informally do evidence-based risk analysis. The biggest change needed is to stop accepting weak data points as somehow being supremely authoritative. Don't just accept vendor statements on risk. Make them justify their findings! I'll stop short of saying that we should remove "risk" from our lingo, but we're almost at that point.

Near-Term: Quantitative approaches need to be refined and improved. Where possible, they should be used. Where not, decent rubrics need to be constructed to capture qualitative results more consistently. A new focus should be placed on decision analysis and how to better incorporate it into overall governance approaches.

Long-Term: We need to get away from the petty squabbles that mire us today and move our focus to survivability. When you move to a survivability mentality, it becomes much easier to define and capture reasonable metrics for performance (see my post "Survivability Rather Than Security Metrics"). Beyond this change in mentality, we need to continue improving our measurements, including dropping those that simply don't work or make sense. This means learning how to focus on what's important. And, as part of this, we need to make sure that we work to equip and empower people to make good decisions.


Posted by Ben Tomhave on March 29, 2011, 1:53 PM.
