My attention was drawn this morning to an ISC Diary "guest post" by Dr. Eric Cole ("What are the 20 Critical Controls?"). In it, he points to the SANS "20 Critical Security Controls - Version 3.0," which was released in August. In the ISC Diary post, Cole talks about using these controls for "quick wins" and in the controls list itself SANS says "These controls allow those responsible for compliance and those responsible for security to agree, for the first time, on what needs to be done to make systems safer."
Unfortunately, while the list isn't technically inaccurate in terms of the capabilities it describes, it has a few problems. And, contrary to their assertion that compliance and security people can finally agree on something, I don't think these controls are actually controls, let alone a source of true consensus.
1. They're not "controls"
I come from the IT UCF school of thinking on controls. When I think of a control statement, I think of something that is discrete (in the mathematical sense) and actionable. The SANS 20CSC list is nothing more than a wish list of products and practices, and they do not immediately translate to actions.
One thing that you might be thinking here is "Sure, that's great, but there are sub-controls under each of those 20 controls, so that's where you get your actionable requirements." I can understand why someone might think that, but it's just not right. Let me give you an example from IT UCF. Here's a list of 4 controls that stack hierarchically under the IT UCF in the "Technical Security" impact zone:
* Establish and maintain an access classification scheme policy and standards.
* Establish and maintain a security access classification model that limits confidential access to only those individuals who require access.
* Establish and maintain a business security requirement and an access classification statement for users and service providers for the different systems and networks.
* Establish clear and consistent guidance between controls, information classification, systems, and networks.
Notice that even the top-level control statement is directly actionable. The SANS list, in contrast, is not nearly so actionable. Really, what they're describing is 20 impact zones (at best), and even that is a bit of a stretch.
2. They're not scalable
This list of "controls" is more like a shopping wish list, which is all well and good if you're the largest organization with the deepest pockets. But how do you go to the smaller firms and tell them they need to spend a few million dollars on new technologies, or at least a million or two on outsourcing? We often forget that small firms represent 99.7% of all employer firms in the U.S. and employ more than half of all private-sector employees (source: "How important are small businesses to the U.S. economy?").
More importantly, the "controls" advocate practices that simply cannot be met by the average small firm. DLP for everybody? A well-trained security staff that is expert in secure network engineering? If nothing else, this list should encourage small firms to simply outsource everything, even if it costs more. However, where's the risk analysis? Is it really sound thinking to arbitrarily tell organizations that they need to adopt certain products and practices?
Certainly, there are places where this makes sense (e.g., AV, patching, basic firewalls, backups). Beyond that, though, where do you draw the line? I strongly disagree with the assertion that this list somehow represents all the practices every organization should adopt. And while I'm sure it maps reasonably well to most major standards and regulations (e.g., FISMA, PCI DSS), it's certainly not the be-all and end-all.
3. They're designed to sell product
Really, they should call this list "20 Pseudo-Critical Faux Controls for Technology Adoption." It's clear that this list exists to push more product (just see the "View the User Vetted Tools" link), and not just to improve security. Unfortunately, while tools are useful when deployed properly, it is irresponsible and inappropriate to advocate specific technologies across the board (especially specific vendors).
Does this list represent useful information? Sure.
Is it the absolute minimum list of things that every org should do? No way.
Is the approach reasonable or comprehensive? Not at all.
What's missing? Context and program development.
The list is technology-centric. It overlooks policies, governance, and a reasonable risk management approach. Perhaps they feel these things are implicit, but if you know anything about controls frameworks, you know that controls must be discrete and actionable, and they should be independent of specific technologies. That this has been done isn't surprising, but it is disappointing.