Jenkins and the Ansible plugin

I have been slacking on posting anything as of late, and it hasn’t been from a lack of topics; my latest client has been keeping me quite engaged. The project has been really cool. I’ve been leading an installation of Red Hat’s PaaS environment, OpenShift. Part of our engagement has been to demonstrate how DevOps and PaaS can increase developer productivity and code quality, decrease time to market, and improve confidence that the application will do what it is supposed to.

During any installation of OpenShift the recommendation is to always stand up a DNS server specific to OpenShift. The reason is that OpenShift dynamically adds and removes DNS records as containers are added and removed from the PaaS, and most network admins are not keen on allowing these dynamic updates on the enterprise DNS servers (and rightfully so). This causes a problem, though: any developer who logs into the OpenShift console and creates a container won’t be able to access that container, because their computer can’t resolve the new DNS entry. The fix is basically to have the enterprise servers forward traffic for the OpenShift domain to this dedicated server. However, since making such a change to a company’s DNS server(s) is not something to take lightly and takes time to evaluate the risks, you may need a workaround like adding the OpenShift DNS server to your computer’s list of DNS resolvers.
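On a developer workstation that workaround can be as simple as putting the OpenShift DNS server at the top of the resolver list. A rough sketch of what that looks like; the addresses here are placeholders, not the real environment:

$ cat /etc/resolv.conf
# OpenShift DNS server first, so the dynamically created app records resolve
nameserver 10.0.0.53
# existing enterprise resolvers
nameserver 192.168.1.10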

So, nice workaround, right? Well, yes and no. We have worked around the issue of the developer’s computer being able to resolve the correct FQDN, cool. But what happens when, as part of our DevOps process, we leverage the enterprise Jenkins server to push code directly to OpenShift? You got it: the FQDN won’t resolve, because Jenkins isn’t pointed at the correct DNS server either.

What do you do? I mean, the current setup is only temporary while the enterprise DNS servers are configured to forward to the OpenShift DNS. We really can’t blindly update the DNS servers on the Jenkins box, because a number of other development teams use it and we could quite possibly break their builds.

So what do we do… Yes, you got it. Ansible.

[Diagram: Jenkins–OpenShift bridge]

We will use Ansible to bridge the Jenkins server to a server running the correct DNS and, from there, remotely push an application to OpenShift. You may be asking yourself, why not just use SSH? There are a few other little tidbits I didn’t mention: OpenShift requires a special set of client tools that are not installed on the Jenkins server. There is an OpenShift plugin for Jenkins, which works, but only with SSH keys, and the Jenkins server doesn’t use SSH keys, so no OpenShift plugin. Even if we could use SSH keys, the Jenkins server still wouldn’t be able to resolve the hostname. Which brings us back to using Ansible to execute commands on the bridge VM, which has the OpenShift client tools installed and the correct DNS server configured, to resolve the FQDN and push the code.
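To make the idea concrete, the Jenkins job ends up invoking something roughly like the ad-hoc command below. The inventory group name, paths, and the push command are all hypothetical; the point is simply that the actual OpenShift client work happens on the bridge VM, not on Jenkins:

# Run the deployment step on the bridge host, which has the client tools and correct DNS
$ ansible bridge -i /etc/ansible/hosts -m shell \
    -a "cd /opt/builds/myapp && git push openshift master"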

We ran into other issues with using the Ansible plugin that I’ll talk about in another post.


Puppet and Ansible… you’re being weighed, Part 2

Okay, this is a follow-up to my other post.

Wow, okay, both configuration management tools do roughly the same type of thing, but I think they are geared toward different types of users.

Documentation:

I’m sure you, being a crawler of technology as I am, know how important it is to have good documentation. I think one of the first turn-offs with a tool is bad documentation, because it begs the question: does more than one person actually understand it, or is it so complex that there is no other way to represent it?

As I pointed out in my prior post, I had worked with Puppet before, so I knew the documentation was decent. What I quickly recalled, though, was that the tutorials suck; actually, there aren’t any unless you download their VM, which is for VirtualBox, and I use KVM. I didn’t really need it because I have awesome books, Pro Puppet by James Turnbull and Jeffrey McCune as well as Puppet Cookbook by Thomas Uphill and John Arundel, plus http://www.puppetcookbook.com/ . With these tools I was able to become fairly proficient quickly.

When I started looking at Ansible I was taken aback at just the amount of information they had, albeit well organized. That part felt kind of intimidating. However, HOWEVER, wow, what a pleasure to read! I really can’t say that enough. A PLEASURE. The author(s) wrote a story that slowly builds you up from knowing nothing to being a practically intermediate user of their tool in no time. The instructions were funny and had character that made me laugh as I was reading, with little anecdotes in just the right places: I’d start to ask myself a question and boom, they answer it. Really great stuff and kudos to those guys.

Setup:

Starting with Puppet, I stood up the standard Master/Agent topology using the instructions on puppetlabs. With Puppet, it is recommended for the Master to be a CA as well so that it can manage which agents can communicate with it. This part was not difficult, but I did not like the fact that I had to review and approve every certificate request (I think this goes away if you use a tool like Foreman). Anywho, next I started tinkering with the puppet.conf file, but that is elusive in its own right. I mean, they only have a handful of options in the conf file, but they are not really explained very well in the file itself, so I had some difficulty getting it tweaked just right. After I set up the master, the agent was much more of a breeze.
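For reference, the review-and-approve dance happens on the master with the cert subcommands, something along these lines (the agent hostname is just a placeholder):

$ puppet cert list                      # show pending certificate requests
$ puppet cert sign agent1.example.com   # approve a specific agent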

Ansible…. easy, easy, easy. Really the only thing you have to do is run


$ yum install ansible

and boom, you’re done. Now you can use it as-is and work with nodes; however, it is much nicer to leverage ssh-agent so that Ansible can communicate with nodes without prompting for a login. This is also very straightforward, though kind of annoying in that you have to manage SSH keys. I liken this process to the CA process above.
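The ssh-agent piece amounts to a few commands, roughly like the following (the node hostname is hypothetical):

$ ssh-keygen -t rsa                      # generate a key pair if you don’t already have one
$ ssh-copy-id user@node1.example.com     # push the public key to each managed node
$ eval $(ssh-agent)                      # start the agent
$ ssh-add ~/.ssh/id_rsa                  # load the key so Ansible connects without prompts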

Ease of use:

Ehhh. I’m really trying not to sound like an Ansible fanboy here… As I said, going through the documentation I got a fairly good understanding of the mechanics involved in using it. I would say I was up and running and testing stuff in an hour. Puppet, on the other hand, is a bit more complicated. You have to ensure that your master daemon is running, and then on your agent machine you have to make sure it is also running and can connect to the master. I was running it in onetime mode so that I could test each time I made a change on the agent (I could have tested on my master first, but what’s the fun in that). Each time I made a change to my Puppet script I’d hop over to my agent and run a CLI command.
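That agent-side test loop looked more or less like this:

$ puppet agent --test
# which is roughly shorthand for:
$ puppet agent --onetime --verbose --no-daemonize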

Now, the output from the Puppet command is quite cryptic, about what I would expect from a typical compiler. The difference with Ansible is that once I finished a script change I could run it right then and there. I also felt the errors were better captured and explained. I think this is really a function of Ansible’s decision to execute top-down versus Puppet’s dynamic ordering (it may also be a function of Python versus Ruby execution preferences), because it is easier to capture where things fail and to do so consistently. When running Puppet there is no guarantee that the execution ordering will be the same each time, so you may get an error on one pass, attempt a fix, run it again and get a different error, fix that error, only to run again and find out the original error you thought you fixed isn’t… which leads to grrrrr.
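The Ansible edit-and-rerun loop is just one command against an inventory (both file names here are hypothetical):

$ ansible-playbook -i hosts site.yml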

🙂

Time to code:

This kind of plays into ease of use, but I thought we should have a separate section for it. So, here was my approach before starting this journey of configuration management. I first stood up a separate server and documented all the steps I needed to run my configuration across a set of nodes, as if I was going to do it via bash or python. Then I took those steps and translated them into Ansible and Puppet speak. Easy, right? Hmmm…

Fan boy alert…

Well, I’d like to say my experience with both tools was awesome and quick, since I had really already done the hard part of figuring out how everything should be installed and configured, but it wasn’t. Ansible probably took me a full day of working with the tool, understanding how to debug, and learning the appropriate ways to leverage its tooling. Puppet, on the other hand, probably took me a minimum of 3 days. Mind you, I did Ansible first, the tool I had never used before, so you’d think that by the time I got to Puppet I would have had an edge. I didn’t. It turns out there are serious issues with dynamic execution when your configuration relies on a particular ordering of execution. Since Puppet is dynamic, there are ways you can chain execution of commands (resources) together, but it is actually difficult to do and, more importantly, to troubleshoot. Even if you set print (notify) statements before and after each resource, when a resource requires another one the execution drops into that required resource, and so on and so forth, all the while you’re still getting other messages as Puppet attempts to run all the attributes of each resource. So even with print statements before and after each resource (which you have to chain together!), the output is not clean and is difficult to trace. With Ansible’s top-down execution it was very easy to see what errors existed and what I needed to adjust to get things working.
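To illustrate the chaining point: Puppet only guarantees ordering if you declare it, for example with the -> arrow. A trivial, hypothetical example you can run inline with puppet apply:

# Without the -> arrow, Puppet is free to evaluate these resources in any order
$ puppet apply -e "notify { 'configure app': } -> notify { 'start app': }"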

Another thing I found daunting and worth a mention is the idea of environments. With Ansible you write a playbook, each playbook is composed of a block of configurations you want applied to a group of nodes, and to run a different playbook you simply reference it from the command line. This is awesome, because I want to set up and tear down my VM with my configurations so I can fully test that everything is working correctly… Well, that isn’t so easy with Puppet. Since Puppet has a master/agent relationship, we can’t have both setup and teardown as part of the same agent run, otherwise nothing would stay configured and you couldn’t guarantee which was executed when, so I had to set up what they call environments. This is analogous to dev, test, prod. The problem is that I have to make this declaration in my puppet.conf file, which got annoying real quick when trying to build this project. I won’t even begin to mention my discovery process to figure this out after my perspective from Ansible was skewed toward easy 🙂 .
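The contrast looks something like this on the command line (the playbook names and the environment name are hypothetical):

# Ansible: setup and teardown are just two playbooks
$ ansible-playbook -i hosts setup.yml
$ ansible-playbook -i hosts teardown.yml

# Puppet: the agent has to target a different environment, declared in puppet.conf
# or passed explicitly
$ puppet agent --test --environment teardown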

Summary:

I think there is a place for both of these tools to cohabitate, but I’m not sure my use cases would necessitate Puppet over Ansible, other than if I was forced to use it by a Red Hat product. Puppet is the big kid on the block (for now at least) and is the de facto standard, whereas Ansible is the new kid gaining popularity. I won’t recommend one over the other blanketly, because it honestly depends on the use case. What I will say is that if you are migrating scripts that do some type of configuration management, you would be wise to use Ansible, because you WILL have issues with execution order that you never really gave much thought to (unless of course you are the puppet master 🙂 ).

Happy configuration management!!!!

Puppet and Ansible… you’re being weighed, Part 1

I have been hearing a lot of buzz around Puppet and Ansible for some time. Many, many moons ago I tinkered with Puppet and did the little tutorials online, but never really had a reason to go much deeper. I figured it was an obligation, since most of the products I use day to day revolve around Red Hat, and Red Hat has more or less adopted Puppet (for better or worse). During the same time Ansible was starting to gain traction. I really didn’t see the advantages of using Ansible over Puppet other than the whole agent versus agentless approach.

Well, as it turns out, my client is in desperate need of a configuration management tool, and I truly mean desperate. My experience with configuration management tools is that they are really a nice-to-have: most shops have their own way of doing configuration management, and when they weigh all the assets they have invested in (software, people, processes, etc.) against migrating to a new way of doing things, it is just too expensive for them to undertake (for now!).

My client doesn’t have this luxury. They are understaffed and undertrained. Right off the bat they are behind. If they were trained, maybe they’d be able to just, just keep up with operational demand, but any snag will throw everything behind. So this is the case we all hear about, but possibly never experience in real life, where the work people do is not done well, so they have to spend additional time fixing what they did, putting them further behind on the next task.

So, when a team can’t write or manage complex scripts, and they need to troubleshoot every configuration and environment because everything has been done ad hoc, not to mention the deadlines, what do they do? They have no choice but to use a configuration management tool to alleviate them from having to troubleshoot work they’ve already done.

In roll Puppet and Ansible. My project [1] attempts to highlight the differences between Ansible and Puppet doing the same configurations. My hope is that it will give a good idea of the level of rigor involved in using each tool and help me pick the one that is most appropriate for my client.

My next post will talk about some things I found annoying and useful about both tools and what I will inevitably recommend to my client.

[1] https://github.com/jmarley/anisble-puppet-workshop