REDtalks #11 – Anthony on Automation with Excel

UPDATED: Feb 9th, 2017

The beauty of REST is that you can call it from just about anywhere. REST doesn’t care which OS you are using or whether you prefer a command-line tool over a GUI, or vice versa. Most importantly, engineers can start benefitting from REST APIs without the need for huge corporate investments and monolithic orchestration projects.
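The point generalizes: any tool that can assemble an HTTP request and a JSON body can drive a REST API, whether that's curl, VBA in Excel, or Python. As a minimal sketch (the endpoint path and field names here are invented for illustration, not taken from any specific product's API):

```python
import json

# Hypothetical management endpoint -- illustrative only.
BASE_URL = "https://mgmt.example.com/api/v1"

def build_onboard_request(hostname, mgmt_ip):
    """Assemble the URL and JSON body for a device on-boarding POST.

    Any HTTP client on any OS could then send this request; REST
    doesn't care which one you use.
    """
    url = f"{BASE_URL}/devices"
    body = json.dumps({"hostname": hostname, "mgmtIp": mgmt_ip})
    return url, body

url, body = build_onboard_request("bigip-01", "192.0.2.10")
```

The same payload-building logic is what a spreadsheet macro would do row by row: read cell values, build the JSON body, and POST it.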

Proving this point, in episode #11 we have Anthony Gerace who demonstrates an accelerated device on-boarding solution using Microsoft Excel (and some VBA script).

Anthony has been at the foundation of MANY demonstrations and proof-of-concept projects. Watch this episode to see how he’s simplified his own workflows, and helped many others get started, with this fine example of how REST APIs are for everyone!

Thanks for joining us, Anthony!

And here’s the spreadsheet on GitHub: Platform-Builder

REDtalks #10 – Getting started with Ansible Tower and F5 iApps

There’s a lot of great content out there on Ansible playbooks. However, not so much on how to get started. So, this article is to share how I got from “never seen Ansible” to “automating L4–L7 service deployments”. I hope it’s useful to you!

WARNING: What are you actually installing?!

When I signed up for an Ansible Tower eval license I received a link to the Ansible Tower download (it’s a gzipped tarball – .tgz), which assumes you already have Ansible running. Ansible Tower is the nice web interface that uses the underlying Ansible (CLI) to do the work. Unless you’re an expert, don’t use the Ansible Tower tgz install!

Instead, use the Ansible Tower *bundle*, which installs both Ansible and Ansible Tower together, configured and integrated. This way you won’t have to mess with dependencies or configuration files afterwards… If you are new to Ansible, I promise you this will make a HUGE difference.

Lab Guide

The complete step-by-step lab guide, including all the playbooks, has been published to GitHub.

However, if video instructions are more your scene, I recorded all the steps in the various lab ‘README’ files into a single video, below.


REDtalks #09 – Joel on lab portability with Vagrant

Joel King of WWT is back with us again and in this episode we cover the importance of test/eval environment portability.

These days, few have the luxury of static, consistent environments, especially those working with automation and orchestration solutions. Consequently, the requirement to spin up, and even repurpose, test or evaluation environments is increasing in importance. Today, for example, I might be verifying that my firewall can work with Phantom Cyber. Tomorrow, it might be integrating my load-balancer with Ansible Tower. Different teams use different tools and it’s extremely difficult to constantly build out environments to stay on top of them all.

So, watch this episode to hear how Joel has been using Vagrant to create, and even share, his test/eval environments with other engineers, while eliminating the need to copy, or migrate, heavy virtual machines and their virtual environment configurations.

This kind of thinking is a key step in the evolution from traditional NetOps to Super-NetOps!

Thanks for joining us, Joel!

REDtalks #08 – Hitesh on Imperative vs Declarative (and sandwiches)

Our very first REDtalks guest, Hitesh Patel, is back again to help us understand a fundamental architectural shift in how systems must change to support a Mode 2 methodology. In this episode we talk about levels of abstraction and how they affect automation processes. We discuss the operational overhead involved in supporting infinite deployment options versus adopting a service templating process, and how this hinges on the abstraction of domain-specific knowledge.

Explained through the art of sandwich making versus microwaving a burrito, listen to this episode to understand the differences between imperative and declarative models.
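One way to picture the difference in code (a toy sketch of my own, not any particular product’s engine): an imperative workflow spells out each step, while a declarative one states the desired end state and lets a reconciliation loop work out the steps, the way a microwave works out how to cook the burrito.

```python
def reconcile(current, desired):
    """Compute the imperative steps needed to reach a declared state.

    The caller only declares *what* they want; this diffing logic
    figures out *how* to get there.
    """
    actions = []
    for key, value in desired.items():
        if current.get(key) != value:
            actions.append(("set", key, value))
    for key in current:
        if key not in desired:
            actions.append(("delete", key))
    return actions

# Declare the end state; the 'set monitor' and 'set profile' steps
# are derived, never written by hand.
current = {"pool": "web-pool", "monitor": "http"}
desired = {"pool": "web-pool", "monitor": "https", "profile": "tcp"}
steps = reconcile(current, desired)
```

In the imperative model the operator writes out `steps` themselves, in the right order, for every device; in the declarative model they hand over `desired` and the system does the rest.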

Thanks again for your time, Hitesh.

Does DevOps need a “Super-NetOps”?

Operations (Ops) is getting a lot of attention these days as the next big thing in technology to fix. Why now? What changed? Well, for the most part, over the last 10-ish years technology has been accelerating through various levels of abstraction and virtualization. This evolution has lifted many of the technical barriers that were preventing the traditional infrastructure administrator from working more efficiently. It’s this ‘lifting of barriers’ that has shifted the spotlight from the technology itself onto the way the technology is being implemented.

As a side effect of infrastructure becoming more programmable, the opportunities to present infrastructure-based resources to teams outside of NetOps have transitioned from “extremely difficult” to “must have” in what seems like overnight. This Programmable Infrastructure evolution is no secret, with organizations already conducting vendor ‘programmability capability’ surveys under the premise of “if you can’t be automated, you’re not part of our Mode 2 architecture”.

This has resulted in a rapid, and significant, shift in influence over infrastructure purchasing decisions. A shift out of infrastructure leadership and towards teams that look after continuous deployment tool-chains and application delivery automation systems.

Shifting Influence

Now, while this rapid shift towards the ‘new influencer’ on infrastructure purchasing decisions has accelerated the DevOps movement and, with it, some cool new solutions, it has also brought with it some new problems. Many automation early-adopters quickly realized that NetOps and DevOps lack a common language. Not transport languages like HTTP, or even presentation/rendering languages like HTML. No, quite literally, they speak different languages.

A great example of this is a comparison of Blue/Green architecture and A/B testing. At a high level, they are talking about similar technical requirements–clever traffic steering–but for very different reasons. As per Red Hat’s Principal Middleware Architect Christian Posta’s blog (Blue-green Deployments, A/B Testing, and Canary Releases), Blue/Green deployments are about standing up side-by-side systems to allow a safe, simple cut-over to a new service and, if needed, a simple roll-back to the previous, untouched system. On the other hand, A/B testing is about running simultaneous production systems and directing portions of internet traffic at each. The latter is used to test features and functionality while measuring and comparing user uptake.
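Stripped of the motivations, both patterns reduce to a routing decision. A toy sketch (my own illustration, not tied to any vendor’s implementation) makes the contrast concrete:

```python
import hashlib

def blue_green_route(active):
    """Blue/Green: ALL traffic goes to whichever environment is active.

    Cut-over (and roll-back) is just flipping the 'active' pointer
    between "blue" and "green".
    """
    return active

def ab_route(user_id, b_percent):
    """A/B: deterministically send a fixed share of users to variant B.

    Hashing the user id means each user sticks to one variant across
    requests, so uptake can be measured per variant.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "B" if bucket < b_percent else "A"
```

Blue/Green steers everyone at once; A/B splits the population. Same mechanism (traffic steering), very different objectives.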

In these examples we have different teams with different objectives but, in both cases, they are steering internet traffic across systems and resources in a controlled, timely fashion, and without service interruption.

Back to the language barrier: these two teams work in the same organization and require very similar functionality, but use very different tools to perform the same actions. Developers will solve traffic-steering requirements with software, either written themselves or open-source, whereas infrastructure teams will use hardware to meet their traffic-steering requirements. This brings us to the crux of the problem: why are they both investing time in similar solutions that will run parallel to each other? Well, this language barrier problem comes largely from the fact that there’s no skill-overlap between NetOps and Developers. Each have been solving similar problems, side-by-side, for years.

Skills Gap

As F5’s VP of product development, Dave Votava, has been heard to say: just as everything looks like a nail to a hammer, to a developer, infrastructure must look like an API. Unfortunately for the developer, it’s not as simple as strapping an API onto network infrastructure. This is because the API itself only permits remote execution of management tasks. It does not provide any explanation of the nuances and dependencies of the device it serves.

For example, let’s say we have a piece of infrastructure that supports internet traffic routing, and that this routing is based on HTTP profiles which are used to define and match traffic patterns. Now, if that network infrastructure has a specific requirement that the profiles must be created before the traffic steering policy is applied, because that policy must reference the HTTP profiles at the time of creation, then we have a technical nuance specific to that network infrastructure. If the configuration is built out of order it will fail. And to add complexity, each device within the infrastructure has its own nuances that do not translate across all other devices.

This is the kind of thing that NetOps engineers are experts at. They know the technologies they work with extremely well because that’s what they do all day, work with those devices. So, what would it take to get that process automated and into a DevOps continuous deployment tool-chain?

This isn’t a technology problem. The issue is a lack of ‘domain-specific’ knowledge. The dependencies, requirements, and object hierarchy of the infrastructure configurations are not conveyed through the APIs. This is the domain-specific knowledge held by the NetOps engineers. So, what to do? Do we start teaching the tool-chain operators and architects about all of the infrastructure? NO. Definitely not. We mustn’t stop them coding. Their desire to automate is so they can remain productive at all times; creating apps and services that generate revenue or improve productivity.

A far simpler, and more logical solution, is to reduce the programmability skills gap. To create the Super-NetOps engineer!

Skills Gap Reduction

What is a Super-NetOps engineer? Three parts sarcasm, two parts snark, and a splash of vermouth… I jest. It’s a shift of that domain-specific knowledge towards presenting infrastructure services via APIs. It’s a NetOps engineer who has taken the time to translate all of their familiar CLI commands into REST API calls that can, once tested and documented, be shared with teams outside of NetOps. The Super-NetOps engineer understands the JSON payload that must accompany an HTTP POST, and how to point out to the DevOps tool-chain operators which properties they should ‘PATCH’ (a little joke there) for each deployment and which properties to preserve at all cost.
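In other words, the Super-NetOps engineer publishes a documented template plus a whitelist of what may change per deployment. A sketch of the idea (the template fields and whitelist are hypothetical, invented for illustration):

```python
import json

# Documented service template: a full POST body, plus the short list of
# properties tool-chain operators are allowed to PATCH per deployment.
TEMPLATE = {
    "name": "web-service",
    "destination": "192.0.2.10:443",
    "pool": "web-pool",
    "httpProfile": "custom-http",   # preserve at all cost
}
PATCHABLE = {"name", "destination", "pool"}

def build_patch(overrides):
    """Build a PATCH body, rejecting any field not on the whitelist."""
    illegal = set(overrides) - PATCHABLE
    if illegal:
        raise ValueError(f"not patchable: {sorted(illegal)}")
    return json.dumps(overrides)

body = build_patch({"destination": "192.0.2.20:443"})
```

The whitelist is the domain-specific knowledge, encoded: the DevOps team can redeploy all day without ever touching the properties the NetOps engineer knows must stay put.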

This isn’t a huge leap. In fact, a few others and I have been trekking about the country these last few months offering free infrastructure automation training days, in addition to pushing vast amounts of content and tutorials to various web resources. The movement is real!!!

That said, like every good journey, it starts with the first step.