What is the deal with all these changes that are not backwards compatible?
Everybody loves Docker and how cool it is to spin up a container locally in a matter of seconds. But now that more products are adopting the Docker project (or at least container technology), the instability of the project is percolating downstream to the projects and products that depend on it.
Think about it. I have a customer who is not fluent in all of the technologies and projects that comprise OpenShift, and to them the OpenShift product is responsible for any issues, perceived or not, that arise while working with it. For example, let's say the Docker API changes in a major way, and previous versions are no longer allowed to pull images from an image repository unless those images were built with a particular version of Docker. When the customer then uses OpenShift and it doesn't work with their container images, they will blame OpenShift.
Well, that is exactly what happened. Now there is a major chasm: not only are users who have already built many of their images with an older version of Docker affected, but any products and projects tied to Docker's API are also broken between versions.
Cue the world's smallest violin…
Seriously though, this is a real problem.
What do you do? If your customer, client, project, or team needs the older version, you're going to have to use that older version to rebuild every artifact: the images themselves, all the way down to layer 1. All of the metadata associated with the layers needs to come from the older API.
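The rebuild loop can be sketched as a small shell script. This is a hypothetical example: the image names, tags, and registry are illustrative, and the version-check format string assumes a Docker client recent enough to support `--format` on `docker version`.

```shell
#!/bin/sh
# Sketch: rebuild images from layer 1 on a pinned, older Docker client
# so the layer metadata matches the old API. Names below are made up.

# Refuse to build on anything but the old 1.8.x client.
docker version --format '{{.Client.Version}}' | grep -q '^1\.8\.' || {
    echo "error: these images must be rebuilt with Docker 1.8.x" >&2
    exit 1
}

# Rebuild base layers first, then everything that stacks on top of them.
docker build -t registry.example.com/myapp/base:1.8 ./base
docker build -t registry.example.com/myapp/app:1.8  ./app

# Push the freshly rebuilt images so downstream consumers pick them up.
docker push registry.example.com/myapp/base:1.8
docker push registry.example.com/myapp/app:1.8
```

The point of the guard at the top is that a build run accidentally on a newer daemon would silently produce layers with the newer metadata, recreating the original problem.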
In my project docker-openshift-issues I created a CentOS Vagrant box with Docker 1.8.2 installed, and everything worked swimmingly. Yes, it was a pain to build and test. Yes, I'm going to have to explain to my client that the latest version of Docker is not supported. And yes, I'm going to get some sneers about it being unsupported and about having to rebuild everything against the Docker 1.8.2 API, remote developers included.
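The shape of that Vagrant box looks roughly like the fragment below. This is a sketch of the idea, not the actual contents of docker-openshift-issues: the box name and the yum version pin are assumptions.

```ruby
# Vagrantfile sketch: a CentOS guest with Docker pinned to 1.8.2
# so images rebuilt inside it carry the older API's layer metadata.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"  # assumed box; substitute your own

  config.vm.provision "shell", inline: <<-SHELL
    # Install the specific old release instead of whatever is latest.
    yum install -y docker-1.8.2
    systemctl enable docker
    systemctl start docker
  SHELL
end
```

Pinning the version in the provisioner (rather than installing it by hand) is what makes the environment repeatable for the rest of the team.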
Silver lining? Yup! As we all know, it is simply good practice to have all of your Docker images set up so that you can repeat the entire build all the way from layer 1, so in a sense this is a forcing factor for projects to know how their images are built and what is actually in them. The second, more important, lining is that we (the container community) need a common definition across container technologies so that there is no lock-in. We are working through that at the moment with the Open Container Initiative (awesome sauce), but it still has a ways to go.
I think a real competitor and innovator alongside Docker is CoreOS. They have a newish project called Rocket that has a lot of potential, and they are following open standards for everything. Another thing I like is their build tool (https://github.com/appc/acbuild), built around the idea that we should be able to use any build tool for our container images and then run them on any container runtime. Cool! More to come as I continue to test, but be on the lookout as Kubernetes rolls this into a container runtime offering.
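To make the "build anywhere, run anywhere" idea concrete, here is a minimal acbuild session following the flow in the appc/acbuild README. The image name and file paths are illustrative assumptions.

```shell
# Sketch of building an App Container Image (ACI) with acbuild.
# The resulting .aci is defined by the open appc spec, not tied to
# any one runtime. Names and paths below are hypothetical.

acbuild begin                        # start a new, empty build context
acbuild set-name example.com/myapp   # give the image a name
acbuild copy ./myapp /usr/bin/myapp  # copy our binary into the rootfs
acbuild set-exec /usr/bin/myapp      # command to run when it starts
acbuild write myapp.aci              # write out the finished image
acbuild end                          # tear down the build context
```

Because the output is just a spec-compliant image file, the same artifact can be handed to any runtime that understands the spec, which is exactly the lock-in escape hatch the post is asking for.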