Continuous Integration/Delivery for Operations… Whaaaa?

Continuous Integration (and yes, I mean Jenkins) is awesome, plain and simple. CI gives developers a way to continuously integrate and test their code, providing a degree of confidence that it will work in other environments and keep things stable. If this tool is so great, why aren't sys admins using it as well? Shouldn't sys admins test their code, measure code coverage, integrate with other sys admins' configs, and finally ensure it all works across environments?

If you're a sys admin, I know what you're thinking: I do test my code, and I do ensure it works in our environments. And you're probably also thinking: why use a CI tool? What's the point if we're just writing scripts and doing system configuration?

That's the point: it isn't just some script that is written once and forgotten about. The scripts sys admins write are often the backbone of an organization, yet they're tucked away in a file system somewhere. Some testing was probably performed when a script was written, but things change, and unless those tests are run again every time the script is applied, there is no sure way of knowing whether something is broken.

Enter the CI server. What does this get us? Well, let me make the assumption, silly as it may be, that all of the IT operations scripts are under source control (I know, I know, not a safe assumption, but that's another blog post down the road). With that in place, let's also assume the Ops team has built some basic unit testing around what their scripts do. Then we set up a job that polls the scripts for any changes; whenever a change occurs, the tests run against the script, verifying everything still works as expected.
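
What does "basic unit testing" look like for ops code? As a sketch, suppose a team script parses `df -h` output to watch disk usage. The helper below, `parse_df_line`, is hypothetical (not from any real script), but it shows the kind of small, testable unit a CI job can exercise on every commit:

```python
def parse_df_line(line):
    """Parse one data line of `df -h` output into a dict (hypothetical helper)."""
    fields = line.split()
    if len(fields) < 6:
        raise ValueError("unexpected df output: %r" % line)
    return {
        "filesystem": fields[0],
        "size": fields[1],
        "used": fields[2],
        "avail": fields[3],
        "use_pct": int(fields[4].rstrip("%")),  # "42%" -> 42
        "mount": fields[5],
    }

def test_parse_df_line():
    # A typical line parses into the expected fields
    row = parse_df_line("/dev/sda1  50G  20G  28G  42% /")
    assert row["mount"] == "/"
    assert row["use_pct"] == 42

def test_rejects_garbage():
    # Malformed input fails loudly instead of silently
    try:
        parse_df_line("not df output")
        assert False, "expected ValueError"
    except ValueError:
        pass

test_parse_df_line()
test_rejects_garbage()
```

A test runner like pytest would pick up the `test_` functions automatically; the point is simply that the check lives next to the script and runs on every change, not just the day it was written.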

What does this get the Ops team? Now, any time anybody updates a script, known tests run to verify everything still works as expected, and the entire team is notified of the script's status. There is now stability, repeatability, and an overall confidence in a company's infrastructure.
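
The poll-test-notify loop described above can be sketched as a Jenkinsfile (a sketch only: the repo layout, test entry point, and mailing address are all hypothetical):

```groovy
// Hypothetical Jenkinsfile: poll the scripts repo and run the team's tests
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // check source control for changes roughly every 5 minutes
    }
    stages {
        stage('Test scripts') {
            steps {
                sh './run_tests.sh'   // illustrative entry point for the team's test suite
            }
        }
    }
    post {
        always {
            // Notify the whole team of the result, pass or fail
            mail to: 'ops-team@example.com',
                 subject: "Script tests: ${currentBuild.currentResult}",
                 body: "Details: ${env.BUILD_URL}"
        }
    }
}
```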

And for those of us who hate writing documentation: this CI/CD process for scripts also acts as a living document. IT organizations can leverage it as a repository for intellectual capital. Yes, not all of the code will live on the Jenkins server, but in an ideal scenario every script is tested there, giving everybody one place to review the health and assets of the IT Ops team.

Short story long: there is a movement to bring more automation, rigor, and confidence to IT Ops, and the only way to get there is by writing good code, which implies good testing. That means less rework, less unnecessary troubleshooting, fewer lost intellectual artifacts, and a team free to focus on more interesting things than trying to figure out what you did a year ago and why the script is no longer working.

Get in the habit of using CI/CD; you and your IT organization will not regret it.


Docker and my initial thoughts

This past week I had an opportunity to tinker a bit with Docker, and I think it is really cool… but I'm not sure it is quite the silver bullet that everyone makes it out to be.

Will Docker streamline deployments across environments? Yes.

Will Docker ensure consistent configurations for the server environment? Yes.

Will Docker replace Maven? No.

Will Docker replace continuous build servers? No.
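
Where those two "yes" answers come from is easiest to see in a Dockerfile. This one is purely illustrative (the base image, Tomcat version, and file names are hypothetical), but it shows the idea: the OS, the application server, and the deployment steps are pinned in one place, so the same image runs identically in every environment:

```dockerfile
# Hypothetical image: same OS, JDK, and app server everywhere it runs
FROM centos:7

# Install a JDK from the distro repositories
RUN yum install -y java-1.8.0-openjdk && yum clean all

# Unpack the application server (version and path are illustrative)
ADD apache-tomcat-8.0.tar.gz /opt/
ENV CATALINA_HOME=/opt/apache-tomcat-8.0

# Deploy the application the same way in every environment
COPY myapp.war $CATALINA_HOME/webapps/

EXPOSE 8080
CMD ["/opt/apache-tomcat-8.0/bin/catalina.sh", "run"]
```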

Will Docker be used by developers or by administrators? Unclear. If a developer is working in a somewhat loose data center, the developer needs to document/script out the deployment steps so they can be replicated in each environment, which can be fraught with holes. Generally the developer scripts out the deployment with respect to the application server, not the OS. This presents an issue, because now we are expecting developers to have a more in-depth knowledge of Linux to properly understand the Linux container, and then the application server container on top of it.

Let's assume the developer has an in-depth knowledge of the OS and is capable of building images: networks, services, configuring the application server, and so on. As a system admin, I would have to wonder: what was enabled/disabled within the Docker image? Is it secure? Is it configured correctly? Am I responsible for reviewing it? And what about the images from every other development team? With a workload like this it sounds like a new position, which would eat into any revenue savings from switching to Docker.

If the system administrator is responsible for packaging the application, the outlook doesn't look much better. Creating Docker images well is challenging and takes some time to learn. No doubt, with this many layers in a Docker image, some tuning would be needed, and that tuning wouldn't necessarily be repeatable across teams/projects.

There is definitely a place for Docker, but I'm not sure it belongs at this phase of the development process. Docker seems to me to be better suited to managing the Linux container alone, rather than the Linux container plus the JEE container on top of it.

Or maybe it makes sense for a small company/team that has a lot of rock stars on it.

I'll post more as I work with it!