Docker: Dockerfile oddities

I have been itching for a reason to create a project using Docker. I wanted to get more hands-on experience than the [well] documented tutorials from Docker provide, because something always, always gets left out. A few weeks ago I was asked to give a demo of JBoss Data Virtualization (JDV) to some system integrators. Anybody familiar with JDV knows it is an extremely powerful tool with many use cases, and almost immediately I was thinking about leveraging Docker containers to isolate each use case. This would let me spin up these demos in a matter of seconds without worrying about the issues that always crop up when running demos on the host box.

Reason to use Docker…Check

Know how of Docker…ehhh Check

Use Cases… Check. It turns out Teiid, the community project behind JDV, has a slew of quickstarts that hit more or less all the use cases I would want to show, and they are quite thoroughly documented. I started here, trying to dockerize these quickstarts for the community.

Great! Things are going well, and I'm working with our technical marketing team, leveraging all their knowledge and minimizing the amount of rework.

Then, there is an oddity. Part of our instructions calls a script to configure the server; however, the server needs to be running before it can be configured. Clearly this is easy: start the server, then execute the script against the running server…

A snippet from my Dockerfile:


RUN $DV_HOME/jboss-eap-6.1/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0 && \
    $DV_HOME/jboss-eap-6.1/bin/jboss-cli.sh --connect --file=$DV_HOME/jboss-eap-6.1/teiidfiles/scripts/setup.cli

WRONG.

AFAIK, this will not work: Dockerfile layers (execution commands) are run in isolation, and a layer can't leave a process running for a later command to use. That kind of makes sense, since otherwise builds would pick up unneeded complexity and whoever is pulling and building the image could be in for a time and resource hog. But really, who cares? Once the image is built, it's not like it has to be built again and again once it is in your repository. Basically, due to this limitation I first need to run the server externally to create my custom server configuration, then take that file and simply replace the default configuration in the image.
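For reference, here's a rough sketch of what that ends up looking like in the Dockerfile, assuming the standalone.xml was already configured by running setup.cli against a server outside the build, and assuming DV_HOME is set with ENV earlier in the file (the source path here is illustrative, not my actual layout):

# Copy a standalone.xml that was pre-configured outside the build,
# replacing the default EAP configuration instead of running jboss-cli.sh.
COPY configured/standalone.xml $DV_HOME/jboss-eap-6.1/standalone/configuration/standalone.xml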

Now it isn’t a big deal, but instead of putting the burden on the container, I now have an additional step of preparation, maintenance, and documentation for this Docker image.

With that said, once the image is built, it is super nice to be able to spin up a container on the fly, and fast.
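Something like the following gets a demo environment running in seconds (the image name and tag here are made up, but the ports are the EAP defaults):

# Start a detached container from the demo image, exposing the HTTP and management ports.
docker run -d -p 8080:8080 -p 9990:9990 --name jdv-demo myrepo/jdv-quickstarts:latest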

Docker and my initial thoughts

This past week I had an opportunity to tinker a bit with Docker, and I think it is really cool… But I’m not sure it is quite the silver bullet that everyone makes it out to be.

Will Docker streamline deployments across environments? Yes.

Will Docker ensure consistent configurations for the server environment? Yes.

Will Docker replace Maven? No.

Will Docker replace continuous build servers? No.

Will Docker be used by developers or administrators? Unclear. If a developer is working in a somewhat loose data center, then the developer will need to document or script out the deployment steps so they can be replicated in each environment, which can be fraught with holes. Generally the developer will script out the deployments with respect to the application server and not necessarily the OS. This presents an issue, because now we are expecting developers to have a more in-depth knowledge of Linux to properly understand the Linux container, and then the application server container on top of it.

Let’s assume the developer has an in-depth knowledge of the OS and is capable of building images: networks, services, configuring the application server, etc. As a system admin, I would have to wonder: what was enabled or disabled within the Docker image, is it secure, is it configured correctly, and am I responsible for reviewing it? And what about each development team? With a workload like this, it sounds like a new position, which would eat into any revenue savings from switching to Docker.

If the system administrator is responsible for packaging the application, the outlook doesn’t look much better. Creating Docker images well is challenging and takes some time to learn. No doubt, with this many layers in a Docker image, some tuning would need to be done, and that tuning wouldn’t necessarily be repeatable across teams/projects.

There is definitely a place for Docker, but I’m not sure it belongs at this phase of the development process. Docker seems to me better suited to managing the Linux container alone, rather than the Linux container plus the JEE container on top of it.

Or maybe it makes sense for a small company/team that has a lot of rock stars on it.

I’ll post more as I work with it!