Operations (Ops) is getting a lot of attention these days as the next big thing in technology to fix. Why now? What changed? Well, for the most part, over the last ten or so years technology has been accelerating through successive levels of abstraction and virtualization. This evolution has lifted many of the technical barriers that were preventing the traditional infrastructure administrator from working more efficiently. It’s this ‘lifting of barriers’ that has shifted the spotlight from the technology itself onto the way the technology is being implemented.
As a side effect of infrastructure becoming more programmable, the opportunities to present infrastructure-based resources to teams outside of NetOps have gone from “extremely difficult” to “must have” seemingly overnight. This Programmable Infrastructure evolution is no secret: organizations are already conducting vendor ‘programmability capability’ surveys under the premise of “if you can’t be automated, you’re not part of our Mode 2 architecture”.
This has resulted in a rapid and significant shift in influence over infrastructure purchasing decisions: away from infrastructure leadership and towards the teams that look after continuous deployment tool-chains and application delivery automation systems.
Now, while this rapid shift towards the ‘new influencer’ on infrastructure purchasing decisions has accelerated the DevOps movement and, with it, some cool new solutions, it has also created some new problems. Many automation early-adopters quickly realized that NetOps and DevOps lack a common language. Not transport languages like HTTP, or even presentation/rendering languages like HTML. No, quite literally, they speak different languages.
A great example of this is a comparison of Blue/Green architecture and A/B testing. At a high level, they involve similar technical requirements–clever traffic steering–but for very different reasons. As per the blog of Red Hat’s Principal Middleware Architect, Christian Posta (Blue-green Deployments, A/B Testing, and Canary Releases), Blue/Green deployments are about standing up side-by-side systems to allow a safe, simple cut-over to a new service and, if needed, a simple roll-back to the previous, untouched system. On the other hand, A/B testing is about running simultaneous production systems and directing portions of internet traffic at each. The latter is used to test features and functionality while measuring and comparing user uptake.
In these examples we have different teams with different objectives but, in both cases, they are steering internet traffic across systems and resources in a controlled, timely fashion, and without service interruption.
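The contrast between the two steering models can be sketched in a few lines of code. This is a minimal illustration, not any vendor’s API: the pool names, addresses, and weights are all hypothetical, and a real deployment would steer at the load balancer rather than in application code.

```python
import random

# Hypothetical pools: "blue" is the live system, "green" the new one.
POOLS = {"blue": "10.0.0.10", "green": "10.0.0.20"}

def blue_green_route(active: str) -> str:
    """Blue/Green: ALL traffic goes to the active pool; cut-over
    (or roll-back) is a single switch of `active`."""
    return POOLS[active]

def ab_route(weights: dict, rng: random.Random) -> str:
    """A/B test: traffic is split across pools by weight, so feature
    uptake can be measured on a live slice of users."""
    pools, probs = zip(*weights.items())
    return POOLS[rng.choices(pools, weights=probs)[0]]

# Cut-over is instant and total...
assert blue_green_route("green") == "10.0.0.20"
# ...while A/B steering sends roughly 10% of requests to the new pool.
rng = random.Random(42)
sample = [ab_route({"blue": 90, "green": 10}, rng) for _ in range(1000)]
```

The point of the sketch: both teams need controlled traffic steering, but one wants an atomic switch while the other wants a measured split.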
Back to the language barrier: these two teams work in the same organization and require very similar functionality, but use very different tools to perform the same actions. Developers will solve traffic-steering requirements with software, either written themselves or pulled from open source, whereas infrastructure teams will use hardware to meet the same requirements. This brings us to the crux of the problem: why are they both investing time in similar solutions that will run parallel to each other? Well, this language barrier comes largely from the fact that there’s no skill overlap between NetOps and Developers. Both have been solving similar problems, side by side, for years.
As F5’s VP of product development, Dave Votava, has been heard to say: just as everything looks like a nail to a hammer, to a developer, infrastructure must look like an API. Unfortunately for the developer, it’s not as simple as strapping an API onto network infrastructure. This is because the API itself only permits remote execution of management tasks. It does not provide any explanation of the nuances and dependencies of the device it serves.
For example, let’s say we have a piece of infrastructure that supports internet traffic routing, and that this routing is based on HTTP profiles which are used to define and match traffic patterns. Now suppose that infrastructure requires the profiles to be created before the traffic-steering policy is applied, because the policy must reference the HTTP profiles at the time of its creation. That is a technical nuance specific to that piece of infrastructure: if the configuration is built out of order, it will fail. And to add complexity, each device within the infrastructure has its own nuances that do not translate across all other devices.
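One way to make that ordering nuance explicit, rather than leaving it in an engineer’s head, is to model it as a dependency graph and sort it before pushing any configuration. The object names below are illustrative, not any vendor’s, but the technique (a topological sort over declared dependencies) is standard.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical config objects mapped to their prerequisites: the
# steering policy references two HTTP profiles, so the profiles
# must exist on the device before the policy is created.
deps = {
    "steering-policy": {"http-profile-a", "http-profile-b"},
    "http-profile-a": set(),
    "http-profile-b": set(),
}

# static_order() yields prerequisites before the objects that need
# them; pushing config in this order avoids the out-of-order failure.
order = list(TopologicalSorter(deps).static_order())
assert order.index("steering-policy") > order.index("http-profile-a")
assert order.index("steering-policy") > order.index("http-profile-b")
```

Encoding the dependency once, in code, is exactly the kind of domain knowledge transfer this article is arguing for.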
This is the kind of thing that NetOps engineers are experts at. They know the technologies they work with extremely well because that’s what they do all day, work with those devices. So, what would it take to get that process automated and into a DevOps continuous deployment tool-chain?
This isn’t a technology problem. The issue is a lack of domain-specific knowledge. The dependencies, requirements, and object hierarchies of infrastructure configurations are not conveyed through the APIs. This is the domain-specific knowledge held by the NetOps engineers. So, what to do? Do we start teaching the tool-chain operators and architects about all of the infrastructure? NO. Definitely not. We mustn’t stop them coding. Their desire to automate is so they can remain productive at all times; creating apps and services that generate revenue or improve productivity.
A far simpler, and more logical, solution is to reduce the programmability skills gap. To create the Super-NetOps engineer!
What is a Super-NetOps engineer? Three parts sarcasm, two parts snark, and a splash of vermouth… I jest. It’s a shift of that domain-specific knowledge towards presenting infrastructure services via APIs. It’s a NetOps engineer who has taken the time to translate all of their familiar CLI commands into REST API calls that can, once tested and documented, be shared with teams outside of NetOps. The Super-NetOps engineer understands the JSON payload that must accompany an HTTP POST, and how to point out to the DevOps tool-chain operators which properties they should ‘Patch’ (a little joke there) for each deployment and which properties to preserve at all cost.
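To make the POST/PATCH distinction concrete, here is a minimal sketch of the kind of artifact a Super-NetOps engineer might hand to a tool-chain team. Every field name here is hypothetical (it is not any specific vendor’s schema), and the payloads are built locally rather than sent over the wire.

```python
import json

# Hypothetical POST payload for creating a virtual-server-like object.
# The NetOps engineer documents the full shape once, including the
# properties that must be preserved across deployments.
create_payload = {
    "name": "app1-vip",
    "destination": "192.0.2.10:443",     # preserve: owned by NetOps
    "profiles": ["http-profile-a"],      # preserve: must pre-exist on device
    "pool": "app1-pool-v1",              # per-deployment: safe to PATCH
}

def patch_for_deployment(new_pool: str) -> str:
    """Build a PATCH body that touches ONLY the per-deployment
    property; sending a partial document leaves the other,
    NetOps-owned properties untouched."""
    return json.dumps({"pool": new_pool})

# The tool-chain swaps the pool on each deploy without ever
# restating (or risking) the rest of the configuration.
body = json.loads(patch_for_deployment("app1-pool-v2"))
assert body == {"pool": "app1-pool-v2"}
```

The value isn’t in the code itself but in the annotations: which fields are the tool-chain’s to change, and which are off-limits.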
This isn’t a huge leap. In fact, a few others and I have been trekking about the country for the last few months offering free infrastructure automation training days, in addition to pushing vast amounts of content and tutorials to various web resources. The movement is real!
That said, like every good journey, it starts with the first step.