It's quite possible (likely?) that this is not an original idea, but it's something that occurred to me while we were discussing presentation planning for the year at the most recent OWASP NoVA chapter meeting. To me, application security can be neatly divided into three key domains: Architecture, Secure Coding, and Testing. Each of these domains represents a distinct set of practices that need to be addressed to have a truly complete appsec program.
What I find fairly interesting about these domains is that they're not always all considered, whether it be in a software security program, or a Computer Science curriculum, or in contributions from key organizations like OWASP. In fact, it seems that Architecture, in particular, doesn't get much love at all, while a nominal amount of time is spent on secure coding (still less than we need), and the lion's share of time is dedicated to the development of tools for testing.
When I say "architecture" here I'm really using the term loosely. In building construction, it is the job of architects to understand and define the requirements, and then translate them into a design that not only takes into consideration the customer's vision, but that also makes sound decisions about structure, support, stability, and so on.
So it also should be with software architecture. When done well, architecture should be a confluence of experience that brings together not just the customer and the developer, but also operations, security, and any other stakeholders who may have a valuable contribution. The goal here is to pre-vet the design very early in the process in order to gather as many requirements as possible, to produce as complete and sound a design as possible, and to try and ensure that the application will be a lasting success with adequate infrastructure, stability, and security.
It seems that oftentimes this does not happen at all. How many organizations bring everybody together when architecting solutions? How common is it for development teams to consult with operations, security, and other stakeholders at the outset of the process? From what I've seen, this is exceedingly rare. More often than not, operations sees the application and associated requirements on the eve of deployment. Security may not see the design at all, but will rather be expected to pull out a tool for testing, also on the eve of deployment, in hopes of adding some value.
I contend that this overlooked domain is the key to addressing many deficiencies in appsec today, if for no other reason than that it creates and documents dialog around the design process, so that each application moves forward with a written history explaining why tradeoff decisions were made (a key component of proving due diligence should something bad happen in the future).
The area of secure coding definitely receives attention, whether it be in the form of Top N lists, specialized training for developers, the availability of development tools and environments that help address security, or academic research into improved programming languages, environments, etc. However, despite these efforts, they are still not nearly as prevalent as is desirable, especially at the foundational levels.
This situation is the result of a paradox. It's what I think of as the Perfectionist's Dilemma. If all developers were true perfectionists, and if they were given the time necessary to be perfectionists in their coding, then it seems unlikely that we would have as many problems as we do with the code in production. Of course, at the same time, very little code would ever make it to production. The dilemma is the direct result of a couple of factors:
1) Perfection is unachievable: No matter how we might delude ourselves, there are simply too many factors that go into building complex systems, making it irrational and impossible to achieve perfection. Moreover, how do you define perfection? You may be able to write a single line of interpreted code (print("hello world");) that is semantically "perfect," but that cannot compensate for problems introduced in other areas, such as in the compiler, the OS, etc. If you can't define "perfection," then how can you achieve it?
2) Humans are fallible: Perhaps a single line of code is trivial and thus a bad example, but show me the developer who is writing a single line of code each year. How many lines of code does the average full-time developer write each year? 10,000? 100,000? 500,000? The larger the number, the greater the likelihood that the human will make a mistake (or, in reality, several).
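The second point can be made concrete with some back-of-the-envelope arithmetic. The sketch below assumes, purely for illustration, a small independent chance of introducing a defect on any given line; the specific rate is hypothetical, not a measured figure.

```python
# Hypothetical model: a developer introduces a defect on any given line
# with small independent probability p. The chance of at least one
# defect across n lines is then 1 - (1 - p) ** n.

def p_at_least_one_defect(p_per_line: float, n_lines: int) -> float:
    """Probability of making at least one mistake over n_lines."""
    return 1 - (1 - p_per_line) ** n_lines

# Even an improbably careful developer (one defect per 10,000 lines)
# is nearly certain to ship bugs over a 100,000-line year.
for n in (1_000, 10_000, 100_000):
    print(n, round(p_at_least_one_defect(0.0001, n), 4))
```

Under that (generous) error rate, the probability of at least one defect climbs from roughly 10% at 1,000 lines to near certainty at 100,000 lines, which is the point of the paragraph above: volume alone guarantees mistakes.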
Since perfection is not achievable, we then turn to tools and testing to compensate for these weaknesses. However, I have to wonder if we put too much faith in these tools. It reminds me of the monologue from early in the movie Top Gun that talks about the history of the US Navy "Top Gun" program and how pilots had become too reliant on the technology and lost some of their skills (see Wikipedia for more details). Have we reached a similar point in appsec where we need to reduce our dependency on our technology in order to take the art to a new level?
Here we have the bailiwick of the appsec industry. Vendors have products, and they clearly have the budget to market them. If you were to blindly go off what's prevalent, you'd think that appsec was almost exclusively about the tools available to catch the problems introduced by developer error, etc. Of course, nothing could be farther from the truth. Nonetheless, tools are important and useful.
There are several types and levels of testing, which I won't write about extensively. Suffice it to say, whether you want to analyze uncompiled code or run an automated scan against the resultant application, there's a tool for that. There are also tools to help developers write better code in the first place, not to mention libraries, techniques, APIs, etc., that work toward the same objective.
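As one small illustration of the "write better code in the first place" point, consider the classic SQL injection case: a parameterized API removes an entire class of bugs that a scanner would otherwise have to catch after the fact. This sketch uses Python's standard sqlite3 module with a made-up `users` table; the function names are mine, not from any particular tool.

```python
import sqlite3

# Toy in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable pattern: user input concatenated directly into SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Safer pattern: the driver binds the value, so the input is
    # treated as data rather than as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection returns every row
print(find_user_safe(payload))    # parameterized query returns none
```

The point is not this specific bug but the general pattern: an API designed with security in mind makes the insecure version harder to write than the secure one.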
Though not part and parcel of testing, there are also process-oriented tools, such as secure coding variants of SDLC or SDL, that help achieve some of the objectives outlined above. As an industry, we must be careful not to overlook any class of tools, while at the same time realizing that, as of today, no tool will catch 100% of problems 100% of the time, highlighting the importance of all three domains.
What About Metrics?
Now, I'm sure a few people out there are saying "yes, yes, fine, but you're clearly missing some things, like metrics!". This is, of course, quite true. Measuring the quality of software is very important, at least if you can find a reliable way to do that. The problem, of course, is that simply relying on a metric such as the number of bugs found per lines of code may seem like a good idea, but it may not actually be measuring what people think it's measuring (it's kind of like merit-based pay for teachers based on student performance - it's extremely difficult to do fairly and in a way that actually measures teacher performance rather than student performance).
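To see why bugs-per-lines-of-code can mislead, note that the numerator measures detection effort as much as code quality. The numbers below are entirely hypothetical, chosen only to show the shape of the problem.

```python
# Hypothetical sketch of the metric's pitfall: "defects found per KLOC"
# rises with how hard you look, not only with how bad the code is.

def defects_per_kloc(bugs_found: int, lines_of_code: int) -> float:
    """Defect density: bugs found per thousand lines of code."""
    return bugs_found / (lines_of_code / 1000)

# Two imaginary teams shipping code of identical underlying quality,
# but with very different levels of test effort.
team_a = defects_per_kloc(bugs_found=10, lines_of_code=50_000)   # tests lightly
team_b = defects_per_kloc(bugs_found=200, lines_of_code=50_000)  # tests hard

print(team_a, team_b)
# Naively, team A "looks" far better, though its code is no cleaner;
# it has simply found fewer of its own bugs.
```

This is the teacher-pay analogy in miniature: the metric conflates the thing you want to measure with the effort spent measuring it.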
It is important to remember that appsec is merely one part of an overall enterprise assurance program. As such, it needs to account for requirements, risk, operational security, audit, compliance, quality, performance, and many, many more things. However, if we can begin to bring balance to these three domains - particularly by increasing focus on collaborative architecture - then I believe that we can pave the way to jumping the curve to the next level of performance.