
Notes on "The Psychology of Security"

I recently read Daniel Gilbert's Stumbling on Happiness (I blog briefly about it here), which got me thinking about the tricks the brain plays on us and how that might apply to security. Not long after, Bruce Schneier posted a paper titled The Psychology of Security, which he presented at the 2007 RSA Conference. Reading through his paper, I found considerable overlap with Gilbert's book. More interesting, though, were the insights I gained into how we as infosec practitioners might better present security concepts to consumers and customers so that they welcome what we offer, rather than resist security improvements.

Following are my notes from reading Schneier's paper, plus some additional follow-up.

* Security needs to address the primitive constructs our brains use for risk assessment; human brains have not evolved to account for modern threats.

* Findings (e.g., assurance results) should always be framed in terms of gains rather than losses, because people are risk averse when it comes to gains (and risk seeking when it comes to losses). An example from the paper:

Subjects were divided into two groups. One group was given the choice of these two alternatives:

* Alternative A: A sure gain of $500.
* Alternative B: A 50% chance of gaining $1,000.

The other group was given the choice of:

* Alternative C: A sure loss of $500.
* Alternative D: A 50% chance of losing $1,000.

The study showed that people were far more likely to choose A over B, but strongly preferred D over C. In other words, when facing a gain, people prefer the sure thing; when facing a loss, they are willing to gamble if it means they might not lose as much (or anything at all). Studies also showed that people responded more favorably to the A-B style gain comparison than to the C-D style loss comparison.
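
It's worth noting that each pair of alternatives has the same expected value: a sure $500 and a 50% chance at $1,000 both work out to $500, whether gained or lost. So the observed preferences reflect risk attitude, not payoff. Here's a purely illustrative sketch in Python (the function and variable names are mine, not from the paper) that makes the arithmetic explicit:

    def expected_value(outcomes):
        """outcomes: a list of (probability, payoff) pairs."""
        return sum(p * payoff for p, payoff in outcomes)

    alternatives = {
        "A (sure gain of $500)":      [(1.0, 500)],
        "B (50% chance of +$1,000)":  [(0.5, 1000), (0.5, 0)],
        "C (sure loss of $500)":      [(1.0, -500)],
        "D (50% chance of -$1,000)":  [(0.5, -1000), (0.5, 0)],
    }

    for name, outcomes in alternatives.items():
        print(f"{name}: expected value = {expected_value(outcomes):+.0f}")

    # A and B both come out to +500; C and D both to -500, so the preferences
    # observed in the study (A over B, D over C) come down to framing, not math.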

For findings, this means that we should state requirements in terms of a sure gain in security, rather than as a chance of a gain. If this can't be done, then the next best thing is to frame the preferred choice as a chance of loss rather than as a sure loss.

* Along these same lines, people are more likely to accept incremental gains, but not incremental losses, and they're more willing to trade away security in order to keep what they already have. As such, we need to phrase findings as a win-win scenario as much as possible, in an effort to prevent the feeling that something is being lost.

* People inherently assume that there is a greater probability of a good outcome than a bad outcome.

* People are far more likely to accept risks if they feel they have control over them, and to downplay those risks as a result. As such, findings should frame risks in a manner that suggests they can't be controlled (minimized), only prepared for. While we're definitely trying to remove vulnerabilities, what we're really trying to do overall is develop resilient systems. The trick is to get people to think about threats the way they think about natural disasters, rather than as technical challenges that can be manipulated. It's not a matter of if we'll be attacked, but when. Not a matter of if we'll have failures, but when. And so on.

* Easily remembered data carries greater weight than hard-to-remember data. Research also shows that examples need to be personal, detailed, and vivid. Even if an example does not have an immediate impact, studies have shown that over time vivid examples stand out far more clearly than staid ones. This means that statistics alone are not adequate; we essentially need testimonials that can be used to reinforce threats. Making these examples personal is the big key: stories about other companies will be less effective than stories about our own company. We also need to work to bring security stories to the forefront, much like headline news. The more people hear about, and think about, security incidents, the more likely they are to be concerned about them and to act accordingly.

* When presenting risks, it is more effective to frame them individually than as a group. In other words, each finding in a report should be addressed independently if possible; findings addressed as a group convey a lower overall sense of risk than findings addressed one at a time.

* More information is not the key. Studies have shown that people are already overwhelmed by the amount of information available. Thus, the more we can boil things down, the better. Sure, it doesn't hurt to have the data to back assertions up. However, it's probably not information the average person needs to see.

Some additional reading, drawn from the end notes of the paper, includes:

Daniel Gilbert's LA Times article

Don Norman's Being Analog

I've also ordered some of the books listed in the end notes, and have picked up a couple of essays by Freud for comparison. My main concern is to avoid putting too much weight on likely "pop psychology" sources, and instead to ferret out better academic resources.


