What Shouldn't Be Automated, Really?

In preparing for my Cloud Security World 2016 talk, "Automagic! Shifting Trust Paradigms Through Security Automation," I did a lot of thinking about what can be automated, how to automate, and how to demonstrate and measure value around all that jazz. It occurred to me, however, that perhaps I was looking at those questions all wrong. Is it really a question of whether something should be automated, or rather a question of what shouldn't be automated?

At first blush, this may seem like a silly way of thinking about things. After all, it's probably still too early to talk about automating, well, just about everything, right? As it turns out, this isn't the case. Not even close. There are so many ways to automate many of our standard development, operational, and security responsibilities that I'm actually surprised we're still hearing complaints about inadequate hirable resources and not instead hearing complaints about too much automation stealing jobs.

That said, there are certainly several areas where automation requires human involvement, either as a fail-safe, or as a manual process. Here are a few of those categories and a little information on why fully automating is at least premature, if not an outright bad idea.

Forensics and Incident Response

Several security automation and orchestration vendors offer capabilities in support of forensics and incident response, but these functions tend to center around enrichment rather than automated response. There will be increasing opportunities to automate some responses, and we have in fact seen this sort of action already around certain classes of attacks (for example, DDoS protection and brute force login attack response).

That said, there will be a continuing need for humans in the loop for forensics and incident response for the foreseeable future. Really good forensic investigators and incident responders provide value that we cannot yet easily automate. There are times when it's necessary to have a human fail-safe making decisions about types of responses. Moreover, limitations in underlying IT architecture (network, endpoint, cloud) add complexity such that automating responses is not trivial. Some day this may change (for example, automated enforcement around containers), but it'll be a while.
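As a toy illustration of that division of labor, here is a minimal sketch of a triage engine that automates a well-bounded attack class (brute-force logins) and escalates everything else to a human queue. All class, field, and label names here are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Responder:
    """Toy triage engine: automate the well-understood cases, escalate the rest."""
    blocked_ips: set = field(default_factory=set)
    human_queue: list = field(default_factory=list)

    def handle(self, alert: dict) -> str:
        # Brute-force logins are a well-bounded attack class: safe to automate
        # when detection confidence is high.
        if alert["type"] == "brute_force" and alert.get("confidence", 0) >= 0.9:
            self.blocked_ips.add(alert["source_ip"])
            return "auto-blocked"
        # Anything ambiguous goes to the human fail-safe for investigation.
        self.human_queue.append(alert)
        return "escalated"
```

The design choice worth noting is that the automated path is an allow-list of attack classes, not a deny-list; anything the engine doesn't positively recognize defaults to human review.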

Legacy Patch Management

Much of the vulnerability management process can and should be automated. Vulnerability scans can be automated, feeding data into GRC systems, ticketing systems, and/or CMDBs. These systems can then further automate mapping identified vulnerabilities to system and application owners, and in turn automatically generate tickets (work orders) for resolution. However, that is typically where the human fail-safe has to intervene: to verify that the patch is safe to deploy, to approve it, and to schedule the deployment. Automation can take over again once a change is reviewed, approved, and scheduled, but there is still a need for a human.
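The flow just described might be sketched as follows. The function and field names are illustrative assumptions, not a specific GRC or ticketing product's API; the point is where automation stops and the human approval step begins:

```python
def generate_tickets(scan_results, owners):
    """Automated half: map each finding to its system owner, emit a work order."""
    return [
        {"cve": f["cve"], "host": f["host"],
         "owner": owners.get(f["host"], "unassigned"),
         "status": "awaiting_approval"}   # automation stops here
        for f in scan_results
    ]

def approve(ticket, approver, window):
    """Human fail-safe: confirm the patch is safe, then schedule the deployment."""
    ticket.update(status="scheduled", approved_by=approver, window=window)
    return ticket
```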

Now, that said, the DevOps world reduces the need for human involvement. In fact, in a heavily automated continuous integration / continuous deployment (CI/CD) pipeline where real-world A/B testing is always ongoing, it is more than feasible to automate the patching activities, pushing out updated images with the patches deployed, and then watching the deployment to ensure it lands successfully. Old images can then scale down as new ones scale up, dramatically reducing the need for human involvement. In fact, human involvement is then only necessary as a break/fix fail-safe.
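A minimal sketch of that rolling replacement, assuming a simplified fleet model and a stand-in health check (all names here are hypothetical, not a real orchestrator's API): new images scale up, old ones scale down, and a human is summoned only when a deployment fails.

```python
def rolling_patch(fleet, patched_image, health_check):
    """Replace instances one at a time; escalate to a human on any failed check.

    'fleet' is a list of running image names; 'health_check' is a callable
    standing in for real post-deployment monitoring.
    """
    for i, old in enumerate(list(fleet)):
        fleet[i] = patched_image          # new instance scales up
        if not health_check(patched_image):
            fleet[i] = old                # roll back and stop: break/fix fail-safe
            return "escalate-to-human"
    return "rolled-out"
```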

Break/Fix
Standard break/fix scenarios are largely the domain of humans, and seem likely to continue to be such for the foreseeable future. After all, break/fix is the very scenario where you want the human fail-safe to be involved. When bad things happen, recovery can be automated, but root cause analysis is still highly valuable, and typically necessitates human interaction.

As artificial intelligence (AI) and machine learning (ML) continue to advance, it is conceivable that much of the break/fix context and enrichment data can be collected and pre-analyzed, but I suspect that 100 years from now we'll still have engineers in the loop to review and analyze an event to determine what needs to be done to prevent it from occurring in the future.

Software Development
Even though an AI has apparently written a passable short-form novel, that's not to say computers are ready to write their own software just yet. This will undoubtedly change in the future, but until then we will have humans in the loop, and that means we'll almost certainly continue to have problems with application and software security. However, fear not, as it will not be surprising to see future-generation languages move increasingly toward natural language and abstract constructs that can then be more easily coded and manipulated. We're likely far closer to a brave new world and self-writing software than we may want to admit.

Baseline Builds

In talking about "builds" in this context, I'm not talking about software builds, which are frequently automated already. Instead, I'm talking about the underlying components that need to be assembled in order to get to that desired CI/CD pipeline state. Components include the CI/CD pipeline itself, standard images, language and IDE customizations, and the selection and deployment of various tools of preference, such as repositories, builders/packers, QA/testing tools, appsec testing tools, orchestration and automation tools, and so on.

Once all of these pieces are in place and properly chained together there will then be a decreasing set of responsibilities for humans. But, it doesn't go completely away. Standardizing images will still be necessary (at least overseeing updates, if not doing them manually). Building additional tooling (like Netflix's Simian Army) will also be necessary and provide an opportunity for human involvement. Similarly, security assessments of overall environments (higher level, not at the appsec testing or vuln scanning level) will continue to be manual as they tend to gauge "squishy" factors rather than simply identifying tools and practices that should be definitively present in a programmatic and queryable fashion.

Approvals and Authorizations

Humans will continue to need to review and approve certain requests. However, over time we will see standard patterns emerge that will reduce the amount of human involvement necessary to authorize/re-authorize things like access roles. On the flip side, automation may lead to an increased need for human interaction to review and approve certain activities. For example, out-of-band approval methods for code commits or code deployments could provide a degree of utility to ensure that unauthorized persons aren't submitting code into the CI/CD pipeline. Additionally, there may be cases where exfiltration of sensitive/restricted data may trigger an out-of-band fail-safe response for a human to review and approve the action (thus helping limit data exfiltration by attackers). The future state for approvals and authorizations will be a nice example of human fail-safes in action.
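An out-of-band approval gate of this sort can be sketched as follows, under the assumption of a simple allow-list of action types; the names are illustrative only. Routine actions proceed automatically, while sensitive ones (such as a bulk export of restricted data) are held for human review:

```python
def gate(action, auto_allowed, pending):
    """Route sensitive actions through an out-of-band human approval.

    'auto_allowed' is the set of action types automation may perform alone;
    anything else is held in 'pending' until a human approves it out-of-band.
    """
    if action["type"] in auto_allowed:
        return "allowed"
    pending.append(action)   # held for human review, limiting e.g. exfiltration
    return "held"
```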

Policy and Process

As automation increases, the need for humans to define policies and processes for governance and oversight will also increase. We can think of this as building policy and process guardrails into automated activities and workflows, not dissimilar from codifying Asimov's Three Laws of Robotics. In a heavily automated world, most human interaction will quickly tend toward these governance responsibilities, with other emphases on architecting and building solutions becoming more abstract (and less hands-on) in nature. Being able to clearly articulate the limits of automation, and to specify where a human fail-safe must be involved, will be an incredibly important task.
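As a sketch of what codified guardrails might look like, policy rules can be expressed as predicates that a proposed automated workflow must satisfy before it runs. Everything here (rule names, workflow fields) is an assumption for illustration, not an existing policy engine's format:

```python
def check_guardrails(workflow, policy):
    """Return the names of the policy rules a proposed workflow would violate.

    'policy' maps a rule name to a predicate over the workflow dict: a toy
    stand-in for codified governance guardrails.
    """
    return [name for name, rule in policy.items() if not rule(workflow)]
```

A usage sketch: a rule requiring human approval for production changes, plus a blast-radius cap, evaluated before any automated workflow is allowed to proceed.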

To conclude these thoughts: there will be a need for human involvement in many areas for the foreseeable future. However, what's also clear is that these positions will be increasingly specialized and will require a high degree of training and/or experience. This (not-so-distant) future state will pose interesting challenges to new workers as they enter various fields.

It will be interesting to watch as automation, ML, and AI continue to evolve and mature, and how they continue to impact the overall industry. The hope is that they will lead to greater efficiency and effectiveness, as well as more inherently secure environments. Many of the traditional problems will continue to exist, such as application and software security, identity and access management, and ensuring adequate audit trails are maintained for various activities and authorizations.

We indeed live in interesting times, midway through the digital industrial revolution.

(this article was originally published on the MISTI blog)

About this Entry

This page contains a single entry by Ben Tomhave published on July 21, 2016 2:24 PM.
