If attending Defcon and meeting some of the hacker crews taught me anything this year, it's that we've pretty much already lost the war and are just squabbling in petty battles. Continuing to think and talk as though the war is winnable just delays the inevitable. It's not a question of if your organization will be compromised, but when. If you don't understand and accept this argument, then you're just setting yourself up for a lot of unnecessary pain and suffering.
As cynical as these comments may seem, the simple fact is that the security industry isn't working right. For that matter, business isn't working all that well, either. Whether it's an obsession with signature-based tools, shoddy accounting practices, a fallacious belief that compliance equals security, or simply deluding yourself into thinking you won't be a target, the outcome is always the same: exposure leads to compromise, which leads to loss. Yet why be bound by this outmoded way of thinking? Why simply accept that things are broken?
It's long been my belief that there are proactive measures that can be taken to reduce exposure. The problem, however, has been one of gauging when you've done "enough." When are your efforts complete? The general fallback is "industry best practices," but what is the objective being achieved? I've come to believe that there are two key objectives that should be targeted through security, governance, and risk management practices: defensibility and recoverability. Neither of these terms, nor their associated concepts, is new, but I felt it was important to highlight them again, particularly with a twist that you may find new or different.
Historically, when thinking about defensibility, we've usually correlated the concept with something like Defense in Depth (DiD). DiD talks about setting up layered controls within an environment so that if one control fails, then the others will compensate and contain or stop the resultant incident. This notion is all well and good until you encounter Pete Herzog's Möbius Defense presentation and realize, "wow, my DiD isn't really very deep."
The flaw in logic is thinking about defensibility in technical terms. It's not really about the tools you use to defend yourself - or, at least, it should not be. Instead, it should be about legal protection for your organization. Defensibility in the legal community is about establishing a position from which arguments (or accusations), such as those alleging negligence, liability, or criminal wrongdoing, can be countered from sound footing. That is, when your organization has a breach, it can come to the table and demonstrate that it had in fact performed due diligence and can prove that it had upheld a reasonable standard of care.
Using this perspective of legal defensibility changes the game a bit, or at least it should for the techies in the crowd. On the surface, it may seem like it's regressing to a stance of focusing on "best practices" (which Scott Adams, via Dilbert, termed "mediocrity"), which to a point would be true, but beyond that it asks the question: Did you do all that was reasonable in protecting your corporate assets? It's a subjective question that will draw a subjective answer, but it's an important point to carry forward.
Perhaps the best recent example that I can provide of failing to establish a defensible position is that of embattled Heartland CEO Robert Carr (see here and here for some good summaries of commentary). From Carr's account we learn that Heartland was compromised by a relatively old-school attack, by a known perpetrator, through systems deemed outside the scope of PCI DSS. More importantly, however, Carr reveals that he thought PCI compliance equaled security, and thus thought enough was being done. Whether or not this is in fact true can be debated endlessly, but the biggest point to take away is that his arguments are weak and indefensible. That is, he admitted to focusing only on certain systems - and even on a limited set of protections for those systems - rather than working to establish a defensible position for the entire enterprise.
Defensibility should be one of your top goals in making arguments for implementing changes in your environment. Forget about risk for a minute as the sole focus of your program (don't forget about it permanently, because you will need a risk management program to help demonstrate defensibility) and instead consider this: if you cannot make a coherent argument in front of 12 angry men that your organization is doing what is reasonable to protect company assets, then you need to identify those gaps and start making real changes.
If the basis of your proactive initiatives is seeking a defensible position, then at the same time you must invest in recoverability. Again, it's not a question of if, but when, you will be compromised. If you work from this assumption, then the second key focus needed is on recoverability. This topic can stretch into multiple directions and layers, about which much has been written, but it cannot be stressed enough. If you do not have regular backups in place for data important to the business, then you are setting yourself up for failure. If you do not have alternative location and communication plans in place addressing natural disasters and epidemics, then you are setting yourself up for failure. If you do not have personnel employed or on retainer to assist with forensics and incident response, then you are setting yourself up for failure.
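To make the backup point concrete, here's a minimal sketch in Python of a timestamped, checksummed backup. The function name and file layout are my own illustration of the idea (an archive plus a recorded SHA-256 so restores can be verified), not any particular product's approach - and a real program would also test restores and ship copies off-site.

```python
import hashlib
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def backup(source_dir: str, backup_dir: str) -> Path:
    """Create a timestamped, compressed archive of source_dir and
    record its SHA-256 checksum so a later restore can be verified."""
    src = Path(source_dir)
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # UTC timestamp in the archive name keeps backups sortable and unique.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = dest / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)

    # Record a checksum alongside the archive to detect corruption.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_name(archive.name + ".sha256").write_text(
        f"{digest}  {archive.name}\n")
    return archive
```

A backup you have never restored from is only a hope, so any such script should be paired with a routine that extracts the archive, recomputes the checksum, and compares the restored files against the originals.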
A key part of recoverability also ties into architecture. How compartmentalized are data and systems within your organization? Can there be more compartmentalization? Also, what other controls are in place that would hasten recovery and lessen the impact of a breach? Data encryption? Remote wipe for mobile devices? LoJack for Laptops? Central data storage that has secure access and strong authentication? A legal team that understands tech issues?
In traditional language, we're talking about business continuity planning and disaster recovery plans. The objective is clear: identify what's important and then model threats against those things to build contingency plans. Because, again, the assumption should always be that something bad will happen some day. If it never does, fine, but the chances of that being true are slim to none.
And, by the way, don't forget about logging and monitoring as a key component of recoverability. If you don't have extensive logs of what's happening with your apps and systems, then your options for recovery will be highly limited. How can you prevent the same bad thing from happening again if you don't know what happened in the first place?
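As a small illustration of the logging point, here's a sketch using Python's standard logging module. The logger name, format, and rotation settings are assumptions of mine, not a standard - the idea is simply that security-relevant events get timestamped, structured entries, and that log files rotate with enough history retained for an investigation.

```python
import logging
import logging.handlers

def configure_audit_logging(log_path: str = "app-audit.log") -> logging.Logger:
    """Set up a rotating audit log with timestamps so incident
    responders can reconstruct what happened, and when."""
    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    # Rotate at ~10 MB, keeping 30 old files so history survives.
    handler = logging.handlers.RotatingFileHandler(
        log_path, maxBytes=10_000_000, backupCount=30)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Usage: record security-relevant events as structured key=value pairs.
# audit = configure_audit_logging()
# audit.info("login user=%s src=%s result=%s", "alice", "10.0.0.5", "success")
```

In practice you'd also forward these entries to a central collector so an attacker who owns the box can't quietly erase them - logs that live only on the compromised host are the first thing to disappear.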
The point to this rambling is quite simple: it's time to change our thinking and approach to assurance management. Instead of getting sucked into the black hole of fighting against each new threat and vulnerability, we instead need to structure programs around a defensible argument of due diligence and reasonable care. And, because we know that something bad will happen some day, we must have refined and thorough plans in place to ensure effective and efficient recoverability from incidents.
None of this should sound new, but it should give you pause to reconsider current initiatives. If the war is lost, then perhaps it's time to change the battlefield to one where we can at least feel a little better about our efforts and rekindle hope that perhaps there's a reasonable cause for action. Perhaps this is little solace, but it at least gives us an opportunity to regroup and refocus efforts (such as around data encryption and key management).
(Note: cross-posted from the Truth to Power Association Practical Security Core.)