Dan Geer (CISO at In-Q-Tel) last week posted two articles and the text of a recent speech he'd given. The two articles cover chunks of the speech, but in a nice, easily-consumed format. His comments urge (or predict) nothing short of a major sea change that, at first blush, seems to feed into the 2012 mythos, but in reality may represent a watershed realization for the industry (as such, expect it to be another 20 years before the world collectively realizes the wrong turn(s) made ;).
Before talking about this further, let me point you to his pieces:
* "Power. Law."
* "More or Less"
* "People in the Loop: Are They a Failsafe or a Liability?"
A few quick thoughts on the pieces:
* "Power. Law." - discusses the need for automation for content mgmt and sysadmin tasks... humans can't keep up, so machines will be increasingly relied upon to catch up and keep on top of things... there simply aren't enough people in the field to match the pace of change...
* "More or Less"- discusses what roles will become automated, what will remain "human," and the potential cost of human "failsafe" roles (liability)... also, secondarily, he poses an interesting (political) conundrum about how to redistribute wealth when there are no human jobs/roles to be filled by those out of work...
* "People in the Loop: Are They a Failsafe or a Liability?" - the whole big picture, pulling in thoughts from the two articles, plus elaborating on these ideas and providing several examples and analogies
At OWASP USA 2010 (my notes), David Rice talked about the parallels between the anti-pollution movement and application security. He suggested that we're still at the "green movement" stage of things (at best), trying to reduce the overhead costs associated with application security, but not yet to the point of "going blue"; that is, moving to sustainable practices that can help grow top-line revenue. Those thoughts seem even more prescient today in light of Geer's articles and speech. What we're doing is not sustainable, and will require a major paradigm shift. More importantly, we're so far behind, and falling ever farther behind on a daily basis, that any notion of catching up along the same trajectory is simply unrealistic and irrational.
Instead, what's becoming increasingly obvious is that we need to completely change the problem space. Today, the problem space is still largely defined by the traditional mindset of systems and network administrators slogging it out in the trenches, trying to keep up with the ever-evolving and accelerating attacks being thrown at them. This approach rests on the zero-sum mentality that all attacks must (and can) be stopped. In reality, this approach is failing precisely because it relies on humans to counter attacks and threats, rather than allowing systems to automate and evolve the defenses themselves. Attackers can evolve attacks exponentially, while defenders can only respond linearly. This is not a good situation to be in!
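To make that exponential-vs-linear gap concrete, here's a toy Python model. The growth rates are illustrative assumptions of my own, not measured data; the point is only the shape of the curves:

```python
# Toy model: attacker capability compounds (exponential), while
# human-driven defense adds a roughly fixed amount per cycle (linear).
# The rates below are illustrative assumptions, not measured data.

def capability_gap(cycles, attack_rate=1.1, defense_step=1.0):
    """Return the attacker-minus-defender capability gap after each cycle."""
    attacker, defender = 1.0, 1.0
    gaps = []
    for _ in range(cycles):
        attacker *= attack_rate      # attacks compound each cycle
        defender += defense_step     # humans add roughly constant effort
        gaps.append(attacker - defender)
    return gaps

gaps = capability_gap(100)
# Early on, the linear defender can actually stay ahead;
# compounding eventually dominates, and the gap grows without bound.
```

Tweak `attack_rate` downward all you like; as long as it compounds at all, the crossover merely moves out in time — it never goes away.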
For some time now, some of us have advocated two key changes. First, we've talked about the need to shift from the zero-sum approach to a survivability mentality. Instead of believing that all attacks can be rebuffed, it's imperative to assume that attacks will succeed, and then build out monitoring and response capabilities that equal or exceed defensive capabilities in order to ensure continued operations despite degraded conditions, and to reduce the overall (negative) impact of an event.
As an example that furthers my thinking here, consider my response to the post "Insecure at any speed", which talks about the findings in the recent breach data report from Trustwave SpiderLabs:
Perhaps the problem here is focusing on the occurrence of breaches rather than on the recovery from them and the subsequent impact on the bottom line. If the focus shifts away from the traditional "all breaches are bad, they must all be stopped" to "the business must continue to operate and survive despite degraded conditions," then the overall approach can be revised accordingly. I found the TW report to be unnecessarily inflammatory and derogatory, seemingly implying that people are stupid for not having specific technologies in place. Yet, who are they to make that call? It's up to the business to assess and understand the risks in those decisions and account for them accordingly. There are many tools that /can/ be used, but the right question is "what tools will help us continue functioning as a business?" That's where this security industry has gone so terribly wrong, and where we still see continued resistance to change.
Second, we've talked about the need to dissolve "infosec," splitting it between operational responsibilities and GRC (as a discipline). We're starting to see cases where this very thing has been done, resulting in a far more effective risk management program and allowing operations to become better optimized. In both cases, these changes lead to more legally defensible approaches and decisions, which means that risk and legal liability are better managed (see a bit more on legal defensibility here and here).
The tie-in to the Geer pieces is this: once you split out the operational duties, you can then start looking at methods to automate those activities. Where we have people sifting through logs, SIEM reports, etc., today (as well as pushing out patches, hardening servers, improving elasticity, etc.), we can instead start looking at sentient systems (beyond simple AI) that can manage all those tasks. The downside to this reality is that it means losing a lot of those human operators (admins) as they're replaced by machines. The upside is that machines (in theory) will be better able to detect, respond to, and evolve defenses as threats evolve.
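Even the humblest of those operator chores — eyeballing logs for the odd line out — is automatable today, well short of anything "sentient." Here's a minimal sketch; the log format, normalization trick, and threshold are all hypothetical choices of mine, purely for illustration:

```python
from collections import Counter

# Hypothetical sketch: automatically flag "rare" log message templates,
# the kind of sifting a human operator would otherwise do by eye.
# The normalization and threshold below are illustrative assumptions.

def normalize(line):
    """Crudely template a log line by masking digits (IPs, PIDs, ports)."""
    return "".join("#" if c.isdigit() else c for c in line)

def flag_anomalies(lines, threshold=2):
    """Return lines whose normalized template occurs fewer than `threshold` times."""
    counts = Counter(normalize(l) for l in lines)
    return [l for l in lines if counts[normalize(l)] < threshold]

logs = [
    "sshd[101]: accepted password for bob from 10.0.0.5",
    "sshd[102]: accepted password for bob from 10.0.0.5",
    "sshd[103]: accepted password for bob from 10.0.0.5",
    "kernel: possible SYN flood on port 443",
]
print(flag_anomalies(logs))   # only the rare SYN-flood line is surfaced
```

A real system would obviously need smarter templating and learned baselines, but the principle scales: the machine does the sifting, and humans (if any remain in the loop) see only the exceptions.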
On the flip side, the GRC (as a discipline) program remains largely human. Where automation comes into play is in helping collect, aggregate, and correlate data, and automate reporting. However, overall, you still need humans to follow processes, make (risk management) decisions, etc. These are business leadership responsibilities, supplemented to a degree by systems, but not completely replaceable. This situation, by the way, is one that we encounter frequently in the GRC (software) industry, where customers ask us for solutions that, ultimately, map to processes that humans have to follow, rather than to activities that can simply be automated. Tools are great for collecting, aggregating, and correlating data. They're even great for helping build reports. However, when you then start talking about topics like enterprise risk management, you quickly find that there are significant limits to what can and cannot be automated. Finding the right balance is, of course, the challenge; and, I think that things are still too unsettled to know what that "right balance" might be (though that may change soon, and quickly, if Geer is correct).
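To illustrate where that line tends to fall in practice, here's a minimal sketch (all field names, the severity scale, and the risk categories are hypothetical): the tool aggregates findings and builds the report, but the accept/mitigate call is modeled as an explicit human input that the tool can only flag as missing, never supply:

```python
from dataclasses import dataclass

# Hypothetical sketch of the automatable half of a GRC workflow:
# collect/aggregate/correlate findings and build a report, while the
# risk-acceptance decision remains a required human input.

@dataclass
class Finding:
    asset: str
    severity: int      # 1 (low) .. 5 (critical); illustrative scale

def aggregate(findings):
    """Roll findings up to a per-asset maximum severity."""
    rollup = {}
    for f in findings:
        rollup[f.asset] = max(rollup.get(f.asset, 0), f.severity)
    return rollup

def report(rollup, decisions):
    """Pair each asset's severity with the human decision; flag any gaps."""
    return {
        asset: {"severity": sev,
                "decision": decisions.get(asset, "PENDING HUMAN REVIEW")}
        for asset, sev in rollup.items()
    }

findings = [Finding("billing-db", 4), Finding("billing-db", 2),
            Finding("web-frontend", 3)]
rollup = aggregate(findings)
# Only a person makes the call; the tool just surfaces what's undecided.
print(report(rollup, {"web-frontend": "accept risk"}))
```

The design point is the `decisions` parameter: the software refuses to default it, because defaulting it would be the tool making a risk management decision — exactly the part that stays with business leadership.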
At the end of the day, this is a thought piece, and a compelling one at that. We are already aware of sound alternative approaches, and they will integrate well with the vision Geer has espoused. The question is whether we'll have the intestinal fortitude necessary to make the hard decisions and move forward. I can't help but wonder if the socio-politico-economic question of wealth redistribution won't end up being the linchpin question to resolve. Certainly here in the US, the right has no interest in addressing the problem in an egalitarian way. Yet, what if the problem space comes down precisely to the fact that the old models do not fit the modern reality? What if the old economics have no relevance to current fiscal realities? These are heady questions, and they ultimately come back to the need to aggressively reach consensus on the "right" (i.e., correct) path forward and vision for the future. It seems increasingly likely that we're at a proverbial crossroads that could take us to either Utopia or Dystopia, depending on whether we pull together or fly apart. It's clear that we live in interesting times.