Which Came First: The Software or The Security?

A very common chant in the secure coding (or application security, or just plain infosec) community is "bake security in from the beginning." From the perspective of writing new code, building new apps and systems, and so on, this makes perfect, wonderful, logical sense. And, for that matter, while it may on the surface look like fixing vulnerabilities after the fact is cheaper, those calculations often leave out the cost of resolving a compromise (see "Mythbusting, Secure code is less expensive to develop"). But a chat with my good buddy yesterday raised an interesting question: what about all the software that's already out there?

Historically, it's pretty clear that software came well before software security. Systems were originally closed, and the level of expertise and access necessary to fool with them was generally quite high. This is not to say that security didn't exist - in fact, quite the opposite: security obviously existed long before software. The problem, however, is that security up until the advent of modern technology was oriented toward physical security, supplemented by policy and law. This problem plagues us to this day inasmuch as the human brain is not wired to handle non-physical threats very well.

As a result of this order of evolution, an interesting problem pops up: that of retroactively applying software security practice to legacy code and systems. Perhaps the best example of this type of problem is the Y2K rush to fix billions of lines of legacy code before the rollover to 2000. Millions - perhaps billions, perhaps trillions - of dollars were poured into addressing this one little bug. Now think about legacy software - such as SCADA networks - and consider that fixing those problems is not trivial. Now look at the PCI DSS. Here is a standard that expects companies to address a long list of problems in legacy systems and applications that may or may not be conducive to retrofitting security measures.
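
To make that "one little bug" concrete, here's a minimal sketch of the two-digit-year arithmetic that drove the Y2K effort, along with the kind of windowing retrofit that was commonly applied. This is my own illustration, not code from any actual remediated system:

```python
# Minimal illustration of the classic two-digit-year bug behind the Y2K effort.
# (Hypothetical example, not taken from any real legacy system.)

def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Age computed from two-digit years, as much legacy code did."""
    return current_yy - birth_yy

print(age_in_years(70, 99))  # 29 - correct in 1999
print(age_in_years(70, 0))   # -70 - the rollover: the year 2000 stored as "00"

def to_four_digit_year(yy: int, pivot: int = 50) -> int:
    """One common retrofit: 'window' two-digit years around a pivot value."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(age_in_years(to_four_digit_year(70), to_four_digit_year(0)))  # 30 - correct
```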

Flipping back to the Y2K comparison, the estimate at the end of 1999 was that $21B was spent fixing all that code. $21B - yes, billion. What's particularly interesting about those cost estimates, however, is who was generally stuck footing that bill. GE estimated $550M spent. And then there's the insurance industry, which spent an estimated $6.8B. Now consider who else was frequently stuck fixing this problem: vendors, banks, and other large corps.

Now let's compare this to compliance with the PCI DSS. No longer are the costs borne just by large corps. This isn't a problem addressed only at companies with billion-dollar budgets. In fact, just the opposite is true. Many affected organizations are small in comparison with the Y2K companies. And yet, when you get down to it, we're looking at requirements that are much more extensive than fixing a date calculation bug. Yes, Y2K had many challenging logic problems to fix, but that's not the point. The point is that small companies are oftentimes locked into platforms that are by definition legacy and that may never achieve PCI compliance - certainly not at the hands of the merchants (I'm somewhat excluding compensating controls here - please allow that to slide ;).

There is, of course, a counter-argument. Whereas many Y2K systems were mainframes coded in the '70s and '80s in such fun languages as Ada and COBOL, current applications are built on much more modern codebases. In fact, a significant number of in-scope systems are web-based and thus represent a fairly mild challenge to fix. But let's not forget that there are dozens of requirements to address in these environments, and the scope can very easily explode if simple things like network segregation do not exist.

None of this is to say that companies should not be doing what is required by PCI. In fact, just the opposite: they should be applying these principles - within a risk management framework - to their entire organization. That being said, it is important to consider regulations like PCI in a proper context, one weighted to include the cost of addressing legacy systems and applications. Then add in time estimates. Y2K was not addressed overnight. It took a few years of extensive work, and even then it was never fully done. To expect that organizations will be able to rapidly address all their software security weaknesses in short order is similar folly.

This situation then introduces an interesting challenge: does security testing, planning, and remediation include a time component? Should it? The answer, I believe, is a resounding "YES!" with a caveat. That caveat is - wait for it - risk assessment and management. *TADA!* The most reasonable way to factor time into addressing security concerns is to assess the risks and prioritize them accordingly. Bringing this back to PCI, I will point you at the "Prioritized Approach for DSS 1.2", which effectively acknowledges that it is not reasonable to expect instant full compliance with the DSS. Just the opposite: the card brands recognize the costs and challenges involved. Of course, it would be interesting to know how their risk assessment maps to yours, but that's a different story. :)
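
As a rough sketch of what that risk-based prioritization might look like in practice - with findings, weights, and scoring that are entirely hypothetical, not drawn from PCI or any standard - something as simple as a likelihood-times-impact score can drive both the remediation order and the time plan:

```python
# Hypothetical risk-scoring sketch: rank outstanding gaps by likelihood x impact,
# then work the list top-down, using effort estimates to build the time plan.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: float  # 0.0-1.0: chance of exploitation in the planning window
    impact: float      # 0.0-10.0: business impact if exploited
    effort_weeks: int  # rough remediation effort

    @property
    def risk(self) -> float:
        return self.likelihood * self.impact

# Entirely made-up findings, for illustration only.
findings = [
    Finding("Unsegmented cardholder network", 0.6, 9.0, 12),
    Finding("Legacy app stores card numbers unencrypted", 0.4, 10.0, 20),
    Finding("Missing input validation in web app", 0.7, 7.0, 6),
]

# Highest risk first; the effort column feeds the time component of the plan.
for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"{f.name}: risk score {f.risk:.1f}, ~{f.effort_weeks} weeks to fix")
```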

So which came first, the software or the security? I guess it depends on what kind of security we're talking about and when the software was created. :)
