If you're a startup trying to get a product off the ground, you've probably been told to build an "MVP" (a minimum viable product) as promoted by the Lean Startup methodology. In practice, this translates into products rapidly developed with the fewest features necessary to make an initial sale or two. Oftentimes, security is not one of the features that makes it into the product, and it quickly gets forgotten down the road.
In the world of DevOps we often like to talk about rapid iteration in relation to shortened feedback cycles, and yet something frequently gets lost in translation. Specifically, just because failure is ok (because failure leads to learning), it does not mean that we shouldn't be thinking at all. And, yet... it's all too common!
We all know there are problems with security. We all know that things aren't keeping pace or improving measurably and meaningfully at a rate or in a manner that most of us would deem sufficient or acceptable. Yet, all we seem to be doing is continuing to cast stones, castigate decision-makers, and pound the FUD drum. Why isn't anybody talking about addressing the core obstacles?
I've been reading a lot lately about generative culture at the suggestion of my boss. Apparently this topic has been popping up and circulating with frequency through DevOps circles in recent months, and seeing as I'm currently charged with doing "stuff" related to security and DevOps, it seemed like a good thing to research.
For those unfamiliar with generative culture, I recommend reading up on it. I found these pieces to be of particular value:
- "The Future of Generative Organizations"
- "What Makes a Culture Generative?"
- "Building Generative Cultures"
- "Safety Culture - Theory and Practice"
- "Creating a Generative Culture & Overcoming Barriers to Change"
What's most interesting about generative culture is that it fits well with the problems facing organizations today with respect to security. That is, infosec spend is still viewed as an overhead cost, infosec people are still viewed as obstacles (even when trying to play nicely with DevOps teams), and infosec tools continue to be undermined by the human element, which often sees security as an externality to their specific duties (even when it really oughtn't be).
I'm not completely sure why, but I've been holding off writing this post for a couple months now. Maybe, in part, I didn't want to jinx myself. Maybe, in part, I didn't want to open myself up to criticism or ridicule for Yet Another Job Change in such a short period of time. But... I think the time is now right to more publicly announce and talk about this transition, so here goes...
In mid-June I left Ellucian, where I'd been slamming my head against the wall for several months, and joined New Context as a "security architect" (or, as I put it on LinkedIn, "person of interest"). The title itself is somewhat irrelevant as it's not overly representative of my current responsibilities, which include biz dev, research / thought leadership, product management, and yes, likely, some consulting.
I plan to provide more updates in the coming months about some of the things I'm working on, such as around our Lean Security business management model, but I'll hold back on that for now.
In the meantime, if anybody wants to catch up, or if there's interest in bringing us in, please feel free to reach out! New Context has a very senior team that's very deep in areas like agile software development, DevOps engineering and infrastructure, as well as - of course - security. We have several awesome partners, too (a list that's constantly growing). If we can't help you out directly, then it's very likely we can connect you with someone who can.
In preparing for my Cloud Security World 2016 talk, "Automagic! Shifting Trust Paradigms Through Security Automation," I did a lot of thinking about what can be automated, how to automate, and how to demonstrate and measure value around all that jazz. It occurred to me, however, that perhaps I was looking at those questions all wrong. Is it really a question of whether or not something should be automated, so much as it's a question of what shouldn't be automated?
At first blush, this may seem like a silly way of thinking about things. After all, it's probably still too early to talk about automating, well, just about everything, right? As it turns out, this isn't the case. Not even close. There are so many ways to automate many of our standard development, operational, and security responsibilities that I'm actually surprised we're still hearing complaints about inadequate hirable resources and not instead hearing complaints about too much automation stealing jobs.
That said, there are certainly several areas where automation requires human involvement, either as a fail-safe, or as a manual process. Here are a few of those categories and a little information on why fully automating is at least premature, if not an outright bad idea.
(pre-comment: if you've never personally dealt with depression, you may not understand)
One of the things I'm coming to find in this life is that, at the end of the day, we're all alone in this world. Even if we're surrounded by friends, they're external to our lives and will never be inside the darkest of places: our own heads.
We see this play out in many ways. Maybe it's the unintentional neglect of a friendship that highlights loneliness. Maybe it's the negativity of "friends" in reaction to new ideas. Maybe it's just that inner voice, reminding you of the darkness within. No matter how you cut it, we are all alone with ourselves.
Unless you've been offline in a remote land for the past month or so, you've undoubtedly heard that the 2016 VzB DBIR is out. As with every year, two things have happened: 1) DBIR is now the basis of almost all infosec vendor marketing promos, and 2) data analysts are coming out of the woodwork to levy the same old criticisms and accusations that we hear every year.
At the end of the day, there are a few consistent takeaways. First, yes, the data is biased. All data is biased. That's life. Welcome to data analysis 101. There's no such thing as "pure objectivity," only "more or less subjective." Second, yes, the data is dirty. It's inevitable, especially at scale coming from multiple sources. I think the bulk of the incident data is decent. Where things, as always, go off the rails is around the much-maligned vulnerability section (for example, read Dan Guido's criticism piece, which links to others as well). Third, for all the noise and drama and bickering and ad hominem attacks, my conclusions don't change. At. All.
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."
George Bernard Shaw
The response to my most recent post has been intriguing insomuch as it was completely predictable and expected (though nonetheless disheartening). The few people who've commented have generally said things like "unrealistic" and "unimplementable" and "already been done, failed." Ironically, none of these criticisms are true, nor are they even necessarily knowable. Sure, there have been other attempts at strict compartmentalization (see Qubes OS), but those attempts aren't a true analog for what I suggested. I digress...
The purpose behind my post here is twofold. First, framing problems is imperative to solving them. Frame a problem in the wrong way and you'll either find no answer, or worse, you'll find a woefully inadequate (or even regressive) answer. Second, we as an industry need to stop being total a**holes when presented with new ideas and open our minds to future possibilities. There's nothing worse than hearing about a new approach, idea, technology, whatever, and immediately responding negatively. What's up with that? Rude, to say the least. Again, I digress...
Framing problems is really what I want to talk about today. The ability to shift our thinking to alternative viewpoints is incredibly critical when thinking about how to solve various problem states. In the example of my endpoint security post, the shift in thinking is to realize once and for all that the current framing of the problem makes it unsolvable. We have ample history now to clearly demonstrate that how we're attacking (traditional OS) endpoint security simply isn't reasonable, rational, or pragmatic. As such, time to pivot.
"Insanity: doing the same thing over and over again and expecting different results."
As a security architect, I've come to truly loathe the endpoint security space. The "answer" seems to be an unending stream of "yet another agent" to layer onto an endpoint, usually just to supplement another tool that's insufficient. Rarely, if ever, can I remove one of these tools (like AV! I still have AV after all this time?!), which means I get to encounter all sorts of conflicts and problems, and for what benefit? Why am I investing hundreds of thousands for incredibly small incremental gains? Insanity...
Part of the challenge with endpoint security is the problem state. As it stands today, we're typically stuck in a traditional general purpose OS environment with very little useful segregation. We deploy tools that live inside this general environment and then hope that a) they keep functioning, b) don't introduce more problems, and c) are somehow able to get enough visibility to assert reasonable control. Sheer folly. It's like trying to estimate the size of an infinite universe from an ant's perspective.
Putting aside specialized solutions deployed to endpoints for solving non-endpoint problems (like monitoring or controlling data movement)... the core focus of endpoint security /should/ be monitoring for state changes. Unfortunately, in a general OS environment, this is very difficult because there are rarely clean, clear boundaries that can be watched for these state changes. In the mobile world we see this problem moving to a slightly more tenable position wherein wrappers and containers can be deployed to better define boundaries, which then enables watching for state changes. We're also starting to see this in production applications that leverage a container-based micro-services architecture. All of which leads me to an interesting thought:
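To make the "watch a clean boundary for state changes" idea concrete, here's a minimal sketch (my own illustration, not any particular product's approach): if a container or wrapper gives you a well-defined filesystem boundary, detecting a state change can be as simple as hashing everything inside it and comparing against a known-good baseline. The `boundary` path and function names are hypothetical.

```python
import hashlib
import os

def snapshot(boundary):
    """Hash every file under a boundary (e.g. a container's root or an
    app wrapper's directory) into one digest representing its state."""
    digest = hashlib.sha256()
    for root, dirs, files in os.walk(boundary):
        dirs.sort()   # deterministic traversal order
        for name in sorted(files):
            path = os.path.join(root, name)
            digest.update(path.encode())          # include location...
            with open(path, "rb") as f:
                digest.update(f.read())           # ...and contents
    return digest.hexdigest()

def state_changed(boundary, baseline):
    """True if the boundary's current state differs from the baseline."""
    return snapshot(boundary) != baseline
```

The point is that this only works because the boundary is small and well-defined; against a sprawling general-purpose OS, the same trick drowns in churn and noise, which is exactly the framing problem above.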