PaaS, Feelin' the Waters

You don’t know what you don’t know. Period.

What is the best way to find out what you don’t know? Doing it!

I have been on several engagements implementing OpenShift, and most companies completely underestimate the commitment needed to implement it successfully.

What is, or should be, the #1 goal of setting up a PaaS in your data center? The people and systems that need or want to use your services. If you don’t have customers that are interested in using your services, a PaaS solution probably isn’t a wise investment for your team. Intuitive, right? Most of us would like to think so, but more often than not leaders get caught up in the shiny new thing. Heck, I’m guilty of it. Whenever a new software tool comes out I’m all over it. It’s cool and exciting.

What is wrong with cool and exciting? Nothing, absolutely nothing. Actually, it is the grease that keeps technical innovation moving. There is a problem, though, when someone blindly says, “Hey, I just saw this new shiny thing that may work for us, so let’s go ahead and fully implement it and invest a lot of time and energy into this new thingy.” You can clearly see where I’m going, and that is intentional, but this sort of silly thing happens all the time.

Now, let’s talk PaaS. Given my basic example above, a talented IT team should have no issue standing up a new service or providing a new tool to the business. The problem is that PaaS is not just “a” service. It is a multitude of services all working together in concert, layered on top of different tooling and technologies. To configure, manage and maintain those services, the administrators need to be fluent in each of those technologies, and unless your team has some exceptionally clever people, expecting them to pick this up and run with it is foolhardy.

Okay, suppose we have a team of exceptional people, current with the latest tools and technologies. What next? Good question. We would ask: what is the PaaS doing here? What is it making more efficient, and which services will it be replacing or creating? Suppose the PaaS will be doing all of the above. Sweet! Are there transition, decommission and migration plans for the existing services?

It all boils down to expectations. The goal with something like this should be to test the waters and understand the commitment needed to run a successful PaaS. With that said, set expectations low. Not that you should expect the tool not to work as intended, but don’t jump in the water until you know what you’re jumping into.

Here are the steps I would follow if I were considering a PaaS environment for my data center (if I owned or led one):

  1. Justify the need; establish use cases. These may be directly related to current and recurring needs, issues or complaints.
  2. Define stakeholders.
  3. Review personnel resources (developers, administrators, testers, security, network, etc.): technical specialties, workloads and adaptability.
  4. Pick a simple use case to test the waters.
  5. Gather all requirements needed for the use case.
  6. Perform a PaaS bakeoff and pick a product.
  7. Hire consultants:
    1. Roadmap
    2. Installation
    3. Best practices
  8. Evaluate lessons learned and plan forward.

With this approach there is limited risk of over-committing resources (financial, personnel and otherwise), and it allows time to set appropriate expectations around rollout, adoption and sustainability.



Jenkins and the Ansible plugin

I have been slacking on posting anything as of late, and it hasn’t been from a lack of topics; my latest client has been keeping me quite engaged. The project has been really cool. I’ve been leading an installation of Red Hat’s PaaS environment, OpenShift. Part of our engagement has been to demonstrate how DevOps and PaaS can increase developer productivity and code quality, decrease time to market, and build confidence that the application will do what it is supposed to.

During any installation of OpenShift, the recommendation is to stand up a DNS server specific to OpenShift. The reason is that OpenShift dynamically adds and removes DNS records as containers are added to and removed from the PaaS, and most network admins are not keen on allowing these dynamic updates on the enterprise DNS servers (and rightfully so). This causes a problem, though: anyone who logs into the OpenShift console and creates a container won’t be able to access it, because their computer can’t resolve the new DNS entry. The fix is basically to have the enterprise servers forward queries for that zone to the OpenShift DNS server. However, since making such a change to a company’s DNS servers is not something to take lightly, and it takes time to evaluate the risks, you may need a workaround like adding the OpenShift DNS server to your computer’s list of DNS resolvers.
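For the workstation workaround, the resolver change looks something like this. A minimal sketch; the IP addresses and domain names are made-up placeholders, not the client's actual values:

```
# /etc/resolv.conf on the developer workstation (sketch; IPs are hypothetical)
# The OpenShift DNS server is listed first so the dynamic app records
# resolve; the enterprise server stays as a fallback for everything else.
search cloudapps.example.com example.com
nameserver 192.0.2.10   # hypothetical OpenShift DNS server
nameserver 192.0.2.2    # hypothetical enterprise DNS server
```

Keep in mind that resolv.conf is often rewritten by DHCP or NetworkManager, so on many systems this change has to be made in the network configuration rather than edited directly.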

So, nice workaround, right? Well, yes and no. We have worked around the issue of having the developer’s computer resolve the correct FQDN, cool. But what happens when, as part of our DevOps process, we leverage the enterprise Jenkins server to push code directly to OpenShift? You got it: without the correct DNS server, the FQDN won’t resolve.

What do you do? I mean, the current setup is only temporary while the new enterprise DNS servers are set up to forward to the OpenShift DNS. And we can’t blindly update the DNS servers for Jenkins, because a number of other development teams use it and we could quite possibly break their builds.

So what do we do? Yes, you got it: Ansible.

(Diagram: the Jenkins-to-OpenShift bridge)

We will use Ansible to bridge from the Jenkins server to our server running the correct DNS, and from there remotely push an application to OpenShift. You may be asking yourself, why not just use SSH? There are a few other little tidbits I didn’t mention: OpenShift requires a special set of client tools that are not installed on the Jenkins server. There is an OpenShift plugin which works, but only with SSH keys, and the Jenkins server doesn’t use SSH keys, so no OpenShift plugin. And even if we could use SSH keys, the Jenkins server still wouldn’t be able to resolve the hostname. Which brings us back to using Ansible to execute commands on the bridge VM, which then uses the OpenShift client tools and the correct DNS server to resolve and push the code.
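To make this concrete, here is a minimal sketch of the kind of playbook a Jenkins job could invoke. The inventory group, repo URL, and paths are assumptions, not the client's actual setup, and it assumes the bridge VM's checkout already has an `openshift` git remote configured by the rhc client tools:

```yaml
# push-to-openshift.yml (sketch): run from Jenkins via the Ansible
# plugin, targeting the bridge VM that has the rhc client tools and
# the OpenShift DNS server in its resolver list.
- hosts: openshift_bridge          # hypothetical inventory group
  gather_facts: false
  tasks:
    - name: Pull the latest code that Jenkins just built
      git:
        repo: "https://git.example.com/myteam/myapp.git"   # placeholder URL
        dest: /home/deploy/myapp
        version: master

    - name: Push the code to the OpenShift gear
      command: git push openshift master   # 'openshift' remote set up by rhc
      args:
        chdir: /home/deploy/myapp
```

From the Jenkins side this is just `ansible-playbook -i hosts push-to-openshift.yml`; if the push fails, the play fails and so does the build, which is exactly what we want.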

We ran into other issues with using the Ansible plugin that I’ll talk about in another post.

Installing OpenShift v2: devil’s in the details

These past several weeks I’ve been venturing into Red Hat’s cloud PaaS technology, OpenShift. I attempted to install both the community version (Origin) and the enterprise version, with mixed luck. The community version’s documentation is incredible; I mean, a monkey could do the installation based off their instructions. Really top notch, great job guys! I ran into some packages not being quite stable, though, so I decided to then try the enterprise distribution.

So, not being an expert (yet!) in installing and configuring OpenShift, I needed to rely heavily on the instructions, don’t judge 🙂. I had no real issues to speak of during the installation until I installed the broker console. Then I got this friendly error:

[root@broker ~]# service openshift-broker start
Starting openshift-broker: Unable to open /etc/scl/prefixes/v8314!
chown: cannot access `Gemfile.lock': No such file or directory
Bundle failed
Run 'bundle install --local' in /var/www/openshift/broker/ to see the problem.

Being a newbie to Red Hat’s Software Collections, I was not sure what the error was or why I was getting it. So I cracked open the code base and noticed that the version of Ruby I was using was calling this cryptically named collection, v8314, and without fail I couldn’t figure out what it was. I asked a few of my colleagues, but came up short.

Soooo, I checked to see if anybody else had come across this issue before. Entering the whole error: no dice. So I started taking bits off, leaving only the core part of the error, and like magic I found the bug had already been reported by someone else. Turns out the RPM accidentally left off that dependency, and manually installing v8314 with yum fixed my issue. All I can say is: awesome! Because I could finally move on.
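For anyone hitting the same wall, the fix amounted to this on the broker host (package name per the bug report; your mileage may vary on other releases):

```
[root@broker ~]# yum install -y v8314
[root@broker ~]# service openshift-broker start
```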

I was able to set up my host no problem, and now I’m ready for consumers… jk.

Lesson learned: always Google for bugs, trim the error down to its core, and/or go to the vendor’s site and search their known bugs.