#24: Becoming Super-NetOps: Day 1

If you’re right at the beginning of your Super-NetOps journey, then this article is for you!

A common question from our NetOps pals is, “How do you get started?” Well, let me take a stab at that in today’s article. Before we begin, a few things to keep in mind.

  1. EVERYONE is starting from a different level. Focusing on who’s ahead of you in the learning path will only distract you from your success. This journey is about you. Go at YOUR pace.
  2. Making mistakes is good. Mistakes are evidence that you are learning new things and have the confidence in yourself to evolve. Learn more about a great culture for innovation from John Allspaw, here. <- A good one to share with the team!
  3. Beyond the tools and scripts, pay close attention to how you change your approach to troubleshooting and moving forward with a solution. Often overlooked, learning to automate without adopting and nurturing new practices and culture is almost pointless. Take the time to see how process changes and how you begin to look at challenges differently.

Ok, keeping these points in mind, here are some new concepts to look into:

1. Understand RESTful interfaces

If this is your first time venturing away from the GUI/CLI, or maybe you just want a refresh, I recommend you watch this great REST API introduction video posted by WebConcepts. In this video you’ll see how you can communicate with popular on-line services including Facebook, Google Maps, and Instagram via their REST APIs:
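If you want to poke at the mechanics yourself before touching any real service, you can stand up a throw-away “API” locally. This is just a sketch to show the request/response shape (the file name, port, and JSON content are all invented for the demo): python3’s built-in web server plays the part of the API, and curl plays the part of your script.

```shell
# Create a tiny JSON "resource" for our pretend API to serve
echo '{ "service": "demo", "status": "up" }' > status.json

# Serve the current directory over HTTP (port 8000 is arbitrary; pick a free one)
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1   # give the server a moment to start listening

# A REST-style GET: ask for the resource, get JSON back
curl -s http://localhost:8000/status.json

# Clean up the pretend API
kill $SERVER_PID
```

Real REST APIs layer authentication, other verbs (POST, PUT, DELETE), and status codes on top of this, but the HTTP mechanics are exactly what you just ran.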


2. Interact with a RESTful Interface

There are many tutorials on the internet that show how to communicate with a RESTful interface from a scripting language, like Python or JavaScript. But what if you don’t know the scripting languages they refer to? Sometimes it’s best to avoid an overload of too many new concepts to learn at once.

For this very reason, I tend to direct people to the awesome, multi-platform REST client, POSTMAN:

While their messaging does target ‘API Development teams’, it’s fantastic for API beginners, too. With POSTMAN installed, I recommend you watch the great tutorial “How to use the POSTMAN API Response Viewer”:


Once you’ve worked through the basics, I recommend going through the POSTMAN video tutorials to learn some of the time-saving features you’ll come to depend on:

3. Troubleshooting JSON

Now that you’ve had some interaction with a RESTful interface, you’ve probably had some experience with how a small error can break things. Fear not: while you’re on your path to becoming JSON-fluent, there’s always the great on-line JSON validator.

Simply paste your misbehaving JSON data into the text field and click ‘Validate JSON’. Below, it’s showing me that I missed a comma at the end of the second line:
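If you’d rather check your JSON without leaving the terminal, python3’s built-in json.tool module does much the same job as the on-line validator (assuming python3 is on your box, as it is on most Linux distros; jq works similarly if you have it installed):

```shell
# Valid JSON: json.tool pretty-prints it and exits 0
echo '{ "name": "app1", "port": 8080 }' | python3 -m json.tool

# Broken JSON (the missing-comma mistake from above): json.tool prints an
# error such as "Expecting ',' delimiter" and exits non-zero
echo '{ "name": "app1" "port": 8080 }' | python3 -m json.tool
```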


We’ll look at more data formats, like YAML, in future posts.

4. Conclusion

If you’ve worked through these exercises then congratulations are in order. You have already begun your journey towards becoming a Super-NetOps engineer! If you’re still feeling a little overwhelmed by these concepts, there is no shame in working through them again from the beginning. Repetition builds expertise, and being comfortable with change is all part of the journey!

Next in the series we’ll look at some more advanced POSTMAN features and then take what you’ve learned in POSTMAN and apply it to scripting languages.


REDtalks #18 – Enabling the docker TCP API in AWS

Not a traditional REDtalks post today (no interview/demo), but this took me a while to work out so I thought I’d share.

What’s this about?

It all started with me building REST extensibility solutions for F5 Networks in AWS. I created (launched) a new Amazon Linux AMI instance – yep, the very first one on the list: “Amazon Linux AMI 2017.03.0 (HVM), SSD Volume Type”.

Next, I followed the AWS instructions to install docker:

sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
docker info

NOTE: Full docs here:

This is where I got stuck!

As part of the solution I needed to issue a docker command on the docker host, from inside a container… Ok, Batman, to the Google-copter…

There’s loads of suggestions out there to map /var/run/docker.sock into the container using -v. For example:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock my_container sh

With this you can execute:

$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

HOWEVER, there are loads of forum posts saying to be real careful about mapping /var/run/docker.sock into all your containers…

What to do?

Enable the API over TCP! 

Back to the Google-copter: there are a few posts out there about getting it running on Ubuntu, but none for the Linux AMI distro…

A solution (hours later…)

1. Change some startup options:

The default ‘OPTIONS’ in /etc/init.d/docker is just the pass-through, with no ‘-H’ listeners defined:

OPTIONS="${OPTIONS:-${other_args}}"

we need to change this to:

OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

So, you need to go ahead and edit that to something like this:

$ sudo vi /etc/init.d/docker

OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
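If you’d rather script that edit than open vi, a sed one-liner will do it. The sketch below works on a scratch copy of the init script so you can inspect the result first (the fallback echo just fabricates the stock default line, so you can try this on a box without the real file):

```shell
# Work on a copy of the init script; fall back to a sample default line
cp /etc/init.d/docker ./docker.init 2>/dev/null \
  || echo 'OPTIONS="${OPTIONS:-${other_args}}"' > ./docker.init

# Rewrite the OPTIONS line to add the TCP and unix-socket listeners
sed -i 's|^OPTIONS=.*|OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"|' ./docker.init

# Eyeball the result before copying it over the real /etc/init.d/docker
grep '^OPTIONS' ./docker.init
```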

2. Restart docker for these options to take effect:

sudo service docker restart

Now you have enabled the docker API over TCP! #w00t

Test the API

Let’s get the API version:

curl http://<ip_address>:2375/version

NOTE: Replace <ip_address> with the IP address of the docker host, or its hostname!

The response will look something like this (an abridged sample; your exact values will differ):

{ "Version": "1.12.6", "ApiVersion": "1.24", "Os": "linux", "Arch": "amd64" }

Note the ‘ApiVersion’ value: 1.24 in this sample.

Now add that version number to the beginning of the URI, slap json on the end of it, and presto:

curl http://<ip_address>:2375/v1.24/images/json
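Once you’re getting JSON back, you can pull individual fields out of a saved response from the shell. Here’s a sketch using python3 on a fabricated sample of the /version response (the values are illustrative, not real output):

```shell
# Save a sample /version response (fabricated for illustration)
cat > version.json <<'EOF'
{ "Version": "1.12.6", "ApiVersion": "1.24", "Os": "linux", "Arch": "amd64" }
EOF

# Extract just the ApiVersion field
python3 -c 'import json; print(json.load(open("version.json"))["ApiVersion"])'
# prints: 1.24
```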




Now you can go read this:


CAUTION: One last step, and this is REALLY important! Don’t leave your Docker API open on the Internet!

Does DevOps need a “Super-NetOps”?

Operations (Ops) is getting a lot of attention these days as the next big thing in technology to fix. Why now? What changed? Well, for the most part, over the last 10-ish years technology has been accelerating through various levels of abstraction and virtualization. This evolution has lifted many of the technical barriers that were preventing the traditional infrastructure administrator from working more efficiently. It’s this ‘lifting of barriers’ that has shifted the spotlight from the technology itself onto the way the technology is being implemented.

As a side effect of infrastructure becoming more programmable, the opportunities to present infrastructure-based resources to teams outside of NetOps have transitioned from “extremely difficult” to “must have” in what seems like overnight. This Programmable Infrastructure evolution is no secret, with organizations already conducting vendor ‘programmability capability’ surveys under the premise of “if you can’t be automated, you’re not part of our Mode 2 architecture”.

This has resulted in a rapid, and significant, shift in influence over infrastructure purchasing decisions: away from infrastructure leadership, and towards the teams that look after continuous deployment tool-chains and application delivery automation systems.

Shifting Influence

Now, while this rapid shift towards the ‘new influencer’ on infrastructure purchasing decisions has accelerated the DevOps movement and, with it, some cool new solutions, it has also brought with it some new problems. Many automation early-adopters quickly realized that NetOps and DevOps lack a common language. Not transport languages like HTTP, or even presentation/rendering languages like HTML. No, quite literally, they speak different languages.

A great example of this is a comparison of Blue/Green architecture and A/B testing. At a high level, they are talking about similar technical requirements–clever traffic steering–but for very different reasons. As per Red Hat’s Principal Middleware Architect Christian Posta’s blog (Blue-green Deployments, A/B Testing, and Canary Releases), Blue/Green deployments are about standing up side-by-side systems to allow a safe, simple cut-over to a new service and, if needed, a simple roll-back to the previous, untouched system. On the other hand, A/B testing is about running simultaneous production systems and directing portions of internet traffic at each. The latter is used to test features and functionality while measuring and comparing user uptake.

In these examples we have different teams with different objectives but, in both cases, they are steering internet traffic across systems and resources in a controlled, timely fashion, and without service interruption.

Back to the language barrier: these two teams work in the same organization and require very similar functionality, but use very different tools to perform the same actions. Developers will solve traffic-steering requirements with software, either written themselves or adopted from open-source, whereas infrastructure teams will use hardware to meet their traffic-steering requirements. This brings us to the crux of the problem: why are they both investing time in similar solutions that will run parallel to each other? Well, this language barrier comes, largely, from the fact that there’s no skill-overlap between NetOps and Developers. Each has been solving similar problems, side-by-side, for years.

Skills Gap

As F5’s VP of product development, Dave Votava, has been heard to say: just as everything looks like a nail to a hammer, infrastructure must look like an API to a developer. Unfortunately for the developer, it’s not as simple as strapping an API onto network infrastructure. This is because the API itself only permits remote execution of management tasks. It does not provide any explanation of the nuances and dependencies of the device it serves.

For example, let’s say we have a piece of infrastructure that supports internet traffic routing, and that this routing is based on HTTP profiles which are used to define and match traffic patterns. Now, if that network infrastructure has a specific requirement that the profiles must be created before the traffic-steering policy is applied, because that policy must reference the HTTP profiles at the time of creation, then we have a technical nuance specific to that network infrastructure. If the configuration is built out of order, it will fail. And to add complexity, each device within the infrastructure has its own nuances that do not translate across all other devices.
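To make that ordering nuance concrete, here’s a toy shell sketch. Everything in it is invented for illustration (no real product API is involved): the pretend ‘device’ refuses to create a policy whose referenced profile doesn’t exist yet.

```shell
# Track which profiles "exist" on the pretend device
PROFILES=""

create_profile() {
  PROFILES="$PROFILES $1"
  echo "created profile: $1"
}

create_policy() {
  # The nuance: the policy must reference an existing profile at creation time
  case " $PROFILES " in
    *" $2 "*) echo "created policy: $1 (references $2)" ;;
    *)        echo "ERROR: profile $2 does not exist yet"; return 1 ;;
  esac
}

create_policy steer_mobile http_mobile || true  # wrong order: the reference fails
create_profile http_mobile                      # create the dependency first...
create_policy steer_mobile http_mobile          # ...and now the policy succeeds
```

A generic automation tool-chain has no way to discover that ordering on its own; it’s exactly the kind of domain-specific knowledge a NetOps engineer carries in their head.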

This is the kind of thing that NetOps engineers are experts at. They know the technologies they work with extremely well because that’s what they do all day, work with those devices. So, what would it take to get that process automated and into a DevOps continuous deployment tool-chain?

This isn’t a technology problem. The issue is a lack of ‘domain-specific’ knowledge. The dependencies, requirements, and object hierarchies of the infrastructure configurations are not conveyed through the APIs. This is the domain-specific knowledge held by the NetOps engineers. So, what to do? Do we start teaching the tool-chain operators and architects about all of the infrastructure? NO. Definitely not. We mustn’t stop them coding. Their desire to automate is so they can remain productive at all times; creating apps and services that generate revenue or improve productivity.

A far simpler, and more logical solution, is to reduce the programmability skills gap. To create the Super-NetOps engineer!

Skills Gap Reduction

What is a Super-NetOps engineer? Three parts sarcasm, two parts snark, and a splash of vermouth… I jest. It’s a shift of that domain-specific knowledge towards presenting infrastructure services via APIs. It’s a NetOps engineer who has taken the time to translate all of their familiar CLI commands into REST API calls that can, once tested and documented, be shared with teams outside of NetOps. The Super-NetOps engineer understands the JSON payload that must accompany an HTTP POST, and how to point out to the DevOps tool-chain operators which properties they should ‘Patch’ (a little joke there) for each deployment and which properties to preserve at all cost.

This isn’t a huge leap. In fact, a few others and I have been trekking about the country these last few months offering free infrastructure automation training days, in addition to pushing vast amounts of content and tutorials to various web resources. The movement is real!!!

That said, like every good journey, it starts with the first step.

… is live

It’s time REDtalks graduated from a basement project and into something a little more grown-up! So, here we are with the first post. The benefit to you, my valued reader/viewer/listener, is that all of the content will now be in one place, right here on

You can also subscribe to the REDtalks Audio podcast available at the following locations:

Everything will be migrated here over the next week, and new episodes are also on the way! So, come back soon, and come back often.

Thanks for listening.