Recently in infosec Category

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions: that policies were a means of articulating security direction, that you had to prescriptively spell out desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took the time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Since then, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations, and often they cycle between the two: someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to a "security as code" mentality and find ways to directly codify requirements so that they're naturally adapted and maintained.
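As a rough illustration of what codifying a requirement can look like (a minimal sketch using a hypothetical config file and setting names, not a prescription for any particular toolchain), a policy statement like "sessions must time out after 15 minutes of inactivity" can live as an automated check that runs on every build rather than as a paragraph in a standards document:

    # policy_checks.py - a hypothetical policy requirement expressed as an
    # automated test instead of a written standard. The config file name and
    # keys below are illustrative assumptions, not a real product's settings.
    import json

    MAX_IDLE_MINUTES = 15  # "sessions must time out after 15 minutes of inactivity"

    def test_session_timeout():
        with open("app_config.json") as f:
            config = json.load(f)
        # Fail the build if the deployed setting drifts from the requirement.
        assert config["session_idle_timeout_minutes"] <= MAX_IDLE_MINUTES

Because the check travels with the code and runs in the pipeline, the requirement gets maintained alongside the systems it governs instead of sitting in a document nobody reads.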

Design For Behavior, Not Awareness

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what many organizations undoubtedly just did over the past few weeks; namely, wasted a lot of time, money, and goodwill.

Anton Chuvakin and I were having a fun debate a couple weeks ago about whether incremental improvements are worthwhile in infosec, or if it's really necessary to "jump to the next curve" (phrase origin: Guy Kawasaki's "Art of Innovation"; watch his TEDx talk) in order to make meaningful gains in security practices. Anton even went so far as to write about it a little over a week ago (sorry for the delayed response - work travel). As promised, I feel it's important to counter his arguments a bit.

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years, largely coming from security awareness circles, though not exclusively (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which is that it perpetuates the bad old practices of days gone by.

I recently had the privilege of attending BJ Fogg's Behavior Design Boot Camp. For those unfamiliar with Fogg's work, he started out doing research on Persuasive Technology back in the 90s, which has become the basis for most modern uses of technology to influence people (for example, the use of Facebook user data to influence the 2016 US Presidential Election). The boot camp focused on "behavior design" and was suggested to me by a friend who's a leading expert in modern, progressive security awareness program management.

Thinking about how best to apply this newfound knowledge, I've been mulling opportunities for applying Fogg's models and methods. Suddenly it occurred to me: "Hey, you know what we really need? A new sub-field that combines all aspects of security behavior design, such as security awareness, anti-phishing, social engineering, and even UEBA." I concluded that maybe this sub-field would be called something like "behavioral security" and started doing searches on the topic.

RSA USA 2017 In Review

Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to attempt being reflective and proffer my thoughts on the event, its themes, what I saw, etc, etc, etc.

For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that size a decade ago. If you've never been to RSA, or if you only started attending in the last couple years, then it's really hard to describe how dramatic the change has been since ~2010, when the numbers started growing like this (to be fair, year-over-year growth from 2016 to 2017 wasn't all that huge).

With that... let's drill into my key highlights...

From my NCS blog post:

Despite the rapid growth of DevOps practices throughout various industries, there still seems to be a fair amount of trepidation, particularly among security practitioners and auditors. One of the first concerns that pops up is a blurted-out "You can't do DevOps here! It violates separation of duties!" Interestingly, this assertion is generally incorrect and stems from a misunderstanding about DevOps, automation, and the continuous integration/continuous deployment (CI/CD) pipeline.

Continue reading here...
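To make the separation-of-duties point a bit more concrete (this is a minimal sketch with hypothetical names, not code from the original post), the pipeline itself can enforce that nobody ships a change they alone approved:

    # deploy_gate.py - hypothetical pipeline step enforcing separation of
    # duties: a change cannot ship without sign-off from someone other than
    # its author. The Change structure here is an illustrative assumption.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Change:
        author: str
        approvers: List[str] = field(default_factory=list)

    def may_deploy(change: Change) -> bool:
        # Require at least one approver who is not the change's author.
        independent = [a for a in change.approvers if a != change.author]
        return len(independent) >= 1

    if __name__ == "__main__":
        print(may_deploy(Change("dev1", ["dev1"])))           # False - self-approval only
        print(may_deploy(Change("dev1", ["dev1", "lead2"])))  # True - independent sign-off

The control auditors care about doesn't disappear with DevOps; it moves out of a written procedure and into the automation, where it's actually enforced and easier to evidence.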

"If you're a startup trying to get a product off the ground, you've probably been told to build an "MVP" - a minimum viable product - as promoted by the Lean Startup methodology. This translates into products being rapidly developed with the least number of features necessary to make an initial sale or two. Oftentimes, security is not one of the features that makes it into the product, and then it gets quickly forgotten about down the road."
Continue reading here...

In the world of DevOps we often like to talk about rapid iteration in relation to shortened feedback cycles, and yet oftentimes something gets lost in translation. Specifically, just because failure is ok - because failure leads to learning - that doesn't mean we shouldn't be thinking at all. And yet, that kind of thoughtless iteration is all too common!

The Heart of DevOps Is Cooperation

I've been reading a lot lately about generative culture at the suggestion of my boss. Apparently this topic has been circulating with some frequency through DevOps circles in recent months, and since I'm currently charged with doing "stuff" related to security and DevOps, it seemed like a good thing to research.

For those unfamiliar with generative culture, I recommend reading up on it; I found several pieces on the topic to be of particular value.

What's most interesting about generative culture is that it fits well with the problems organizations face today with respect to security. That is, infosec spend is still viewed as an overhead cost, infosec people are still viewed as obstacles (even when trying to play nicely with DevOps teams), and infosec tools continue to be undermined by the human element, which often sees security as an externality to its specific duties (even when it really oughtn't be).

