REDtalks #18 – Enabling the docker TCP API in AWS

Not a traditional REDtalks post today (no interview/demo), but this took me a while to work out so I thought I’d share.

What’s this about?

It all started with me building REST extensibility solutions for F5 Networks in AWS. I created (launched) a new Amazon Linux AMI instance – yep, the very first one on the list: “Amazon Linux AMI 2017.03.0 (HVM), SSD Volume Type”.

Next, I followed the AWS instructions to install docker:

sudo yum update -y

sudo yum install -y docker

sudo service docker start

sudo usermod -a -G docker ec2-user

NOTE: Log out and back in (or start a new SSH session) so the ec2-user group change takes effect, then verify with:

docker info

NOTE: Full docs here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

This is where I got stuck!

As part of the solution I needed to issue a docker command on the docker host, from inside a container… Ok, Batman, to the Google-copter…

There are loads of suggestions out there to map /var/run/docker.sock into the container using -v. For example:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock my_container sh

With this you can execute:

$ curl --unix-socket /var/run/docker.sock http:/containers/json

HOWEVER, there are loads of forum posts saying to be really careful about mapping /var/run/docker.sock into all your containers…

What to do?

Enable the API over TCP! 

Back to the Google-copter: there are a few posts out there about getting it running on Ubuntu, but none for the Amazon Linux AMI distro…

A solution (hours later…)

1. Change some startup options:

The default ‘OPTIONS’ in /etc/init.d/docker is:

OPTIONS="${OPTIONS:-${other_args}}"

We need to change this to:

OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

So, you need to go ahead and edit that to something like this:

$ sudo vi /etc/init.d/docker

#OPTIONS="${OPTIONS:-${other_args}}"
OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
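If you’d rather script that change than edit the file by hand, here’s a minimal sketch using sed, assuming the stock /etc/init.d/docker with the default OPTIONS line shown above:

# Back up the init script, then rewrite the default OPTIONS line to add the
# TCP listener alongside the unix socket (assumes the stock default line).
sudo cp /etc/init.d/docker /etc/init.d/docker.bak
sudo sed -i 's|^OPTIONS="\${OPTIONS:-\${other_args}}"$|OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"|' /etc/init.d/docker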

2. Restart docker for these options to take effect:

sudo service docker restart
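Before testing from another machine, it’s worth a quick check that the daemon really is listening on the new port (and remember: to reach it from outside the instance, the instance’s security group also needs an inbound rule for that port):

# Confirm dockerd is now listening on TCP 2375.
# ss ships with the Amazon Linux AMI; 'sudo netstat -lntp | grep 2375' works too.
sudo ss -lntp | grep 2375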

Now you have enabled the docker API over TCP! #w00t

Test the API

Let’s get the API version:

curl http://<ip_address>:2375/version

NOTE: Replace <ip_address> with the IP address of the docker host, or its hostname!

The response will look something like this:

{"Version":"1.12.6","ApiVersion":"1.24","GitCommit":"7392c3b/1.12.6","GoVersion":"go1.6.3","Os":"linux","Arch":"amd64","KernelVersion":"4.9.20-11.31.amzn1.x86_64","BuildTime":"2017-03-07T20:34:04.601909006+00:00"}

Note the:

"ApiVersion":"1.24"

Now add that version number to the beginning of the URI, slap json on the end of it, and presto:

curl http://<ip_address>:2375/v1.24/images/json

Returns:

[{"Id":"sha256:80ee3a0f225e543668eca9922bf5642bce0e484403df665f3ac9b107d2895d40","ParentId":"","RepoTags":["npearce/ilxe-festivus:latest"],"RepoDigests":["npearce/ilxe-festivus@sha256:973dbf813f6a7f07929b8fd86da4fa9b79f613228e3942fb35d9d525fcfa61b0"],"Created":1495771390,"Size":85001082,"VirtualSize":85001082,"Labels":{}}]

 

Now you can go read this: https://docs.docker.com/engine/api/v1.24/

Enjoy!

CAUTION: One last step, and this is REALLY important! Don’t leave your Docker API open on the Internet!
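At the very least, lock down the instance’s security group so only trusted sources can reach port 2375. Better still, docker supports protecting the TCP API with client-certificate (TLS) authentication. A rough sketch of what the OPTIONS line could look like with TLS enabled, where the certificate paths are placeholders you’d generate yourself per docker’s TLS documentation (2376 is the conventional TLS port):

# Hardened OPTIONS line: client-certificate auth on the conventional TLS port 2376.
# The ca/cert/key paths below are placeholders for certificates you generate yourself.
OPTIONS="${OPTIONS:-${other_args}} -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H unix:///var/run/docker.sock"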

Does DevOps need a “Super-NetOps”?

Operations (Ops) is getting a lot of attention these days as the next big thing in technology to fix. Why now? What changed? Well, for the most part, over the last 10-ish years technology has been accelerating through various levels of abstraction and virtualization. This evolution has lifted many of the technical barriers that were preventing the traditional infrastructure administrator from working more efficiently. It’s this ‘lifting of barriers’ that has shifted the spotlight from the technology itself onto the way the technology is being implemented.

As a side effect of infrastructure becoming more programmable, the opportunities to present infrastructure-based resources to teams outside of NetOps have transitioned from “extremely difficult” to “must have” in what seems like overnight. This Programmable Infrastructure evolution is no secret, with organizations already conducting vendor ‘programmability capability’ surveys under the premise of “if you can’t be automated, you’re not part of our Mode 2 architecture”.

This has resulted in a rapid, and significant, shift in influence over infrastructure purchasing decisions: a shift away from infrastructure leadership and towards the teams that look after continuous deployment tool-chains and application delivery automation systems.

Shifting Influence

Now, while this rapid shift towards the ‘new influencer’ on infrastructure purchasing decisions has accelerated the DevOps movement and, with it, some cool new solutions, it has also brought with it some new problems. Many automation early-adopters quickly realized that NetOps and DevOps lack a common language. Not transport languages like HTTP, or even presentation/rendering languages like HTML. No, quite literally, they speak different languages.

A great example of this is a comparison of Blue/Green architecture and A/B testing. At a high level, they address similar technical requirements–clever traffic steering–but for very different reasons. As per Red Hat’s Principal Middleware Architect Christian Posta’s blog (Blue-green Deployments, A/B Testing, and Canary Releases), Blue/Green deployments are about standing up side-by-side systems to allow a safe, simple cut-over to a new service and, if needed, a simple roll-back to the previous, untouched system. On the other hand, A/B testing is about running simultaneous production systems and directing portions of internet traffic at each. The latter is used to test features and functionality while measuring and comparing user uptake.

In these examples we have different teams with different objectives but, in both cases, they are steering internet traffic across systems and resources in a controlled, timely fashion, and without service interruption.

Back to the language barrier: these two teams work in the same organization and require very similar functionality, but use very different tools to perform the same actions. Developers will solve traffic-steering requirements with software, either written themselves or open source, whereas infrastructure teams will use hardware to meet theirs. This brings us to the crux of the problem: why are they both investing time in similar solutions that will run in parallel to each other? Well, this language barrier comes, largely, from the fact that there’s no skill overlap between NetOps and Developers. Each has been solving similar problems, side by side, for years.

Skills Gap

As F5’s VP of Product Development, Dave Votava, has been heard to say: just as everything looks like a nail to a hammer, to a developer, infrastructure must look like an API. Unfortunately for the developer, it’s not as simple as strapping an API onto network infrastructure. This is because the API itself only permits remote execution of management tasks; it does not convey the nuances and dependencies of the device it serves.

For example, let’s say we have a piece of infrastructure that supports internet traffic routing, and that this routing is based on HTTP profiles which are used to define and match traffic patterns. Now, if that network infrastructure has a specific requirement that the profiles must be created before the traffic-steering policy is applied, because the policy must reference the HTTP profiles at the time of creation, then we have a technical nuance specific to that network infrastructure. If the configuration is built out of order, it will fail. And to add complexity, each device within the infrastructure has its own nuances that do not translate across all other devices.
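To make that ordering nuance concrete, here’s a purely hypothetical sketch (the endpoints and payloads are invented for illustration and don’t belong to any real device’s API):

# Hypothetical endpoints for illustration only; not a real device API.
# Step 1: create the HTTP profile first...
curl -X POST http://infra.example.com/api/http-profiles \
  -H "Content-Type: application/json" \
  -d '{"name": "mobile-clients", "match": "User-Agent: Mobile*"}'

# Step 2: ...then create the steering policy that references it.
# Reversing these two calls fails, because the policy's reference cannot resolve.
curl -X POST http://infra.example.com/api/steering-policies \
  -H "Content-Type: application/json" \
  -d '{"name": "mobile-steering", "profile": "mobile-clients", "pool": "app-v2"}'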

This is the kind of thing that NetOps engineers are experts at. They know the technologies they work with extremely well because that’s what they do all day, work with those devices. So, what would it take to get that process automated and into a DevOps continuous deployment tool-chain?

This isn’t a technology problem. The issue is a lack of ‘domain-specific’ knowledge. The dependencies, requirements, and object hierarchies of infrastructure configurations are not conveyed through the APIs. This is the domain-specific knowledge held by the NetOps engineers. So, what to do? Do we start teaching the tool-chain operators and architects about all of the infrastructure? NO. Definitely not. We mustn’t stop them coding. Their desire to automate is so they can remain productive at all times, creating apps and services that generate revenue or improve productivity.

A far simpler, and more logical, solution is to reduce the programmability skills gap. To create the Super-NetOps engineer!

Skills Gap Reduction

What is a Super-NetOps engineer? Three parts sarcasm, two parts snark, and a splash of vermouth… I jest. It’s a shift of that domain-specific knowledge towards presenting infrastructure services via APIs. It’s a NetOps engineer who has taken the time to translate all of their familiar CLI commands into REST API calls that can, once tested and documented, be shared with teams outside of NetOps. The Super-NetOps engineer understands the JSON payload that must accompany an HTTP POST, and how to point out to the DevOps tool-chain operators which properties they should ‘Patch’ (a little joke there) for each deployment and which properties to preserve at all cost.
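As a purely hypothetical illustration of that hand-off (again, made-up endpoints, not any real product’s API): NetOps documents the full POST used to create an object once, then tells the tool-chain operators which single property to PATCH per deployment and which properties to leave alone:

# Hypothetical example of a NetOps-documented service. One-time creation:
curl -X POST http://infra.example.com/api/virtual-servers \
  -H "Content-Type: application/json" \
  -d '{"name": "app-vip", "address": "10.0.1.50", "port": 443, "profile": "mobile-clients", "pool": "app-v1"}'

# Per-deployment change: PATCH only the property that varies (the pool),
# preserving everything else.
curl -X PATCH http://infra.example.com/api/virtual-servers/app-vip \
  -H "Content-Type: application/json" \
  -d '{"pool": "app-v2"}'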

This isn’t a huge leap. In fact, a few others and I have been trekking about the country these last few months offering free infrastructure automation training days. That’s in addition to pushing vast amounts of content and tutorials to various web resources. The movement is real!!!

That said, like every good journey, it starts with the first step.

REDtalks.live… is live

It’s time REDtalks graduated from a basement project into something a little more grown-up! So, here we are with the first REDtalks.live post. The benefit to you, my valued reader/viewer/listener, is that all of the content will now be in one place, right here on https://REDtalks.live

You can also subscribe to the REDtalks audio podcast.

Everything will be migrated here over the next week, and new episodes are also on the way! So, come back soon, and come back often.

Thanks for listening.

Nathan