As I've been working to improve my devops skills and knowledge, particularly in regard to using Docker, I have begun to wonder what the best way is to handle Dockerfiles and the rest of the Docker image build environment. Sometimes I feel that keeping them with the code makes the most sense, treating them like a Makefile or other build data. Now that I am looking for references suggesting this technique I am having trouble finding them, but I have seen it often. There are a couple of posts on the web which suggest it, including Amazon's ECS Continuous Deployment pipeline using CodePipeline here and here. Other times I feel that the Dockerfile and Docker build environment should be separate. Docker is not your application; it is just one way of deploying your application. You don't package Windows or OSX or Linux with your application for people who are going to deploy it to those directly.

This evening I decided I'd just take a look at some of the major repos on hub.docker.com. I looked at NGiNX, Apache HTTPD, ElasticSearch, Consul, MongoDB, and NodeJS. As you can see in those linked Dockerfiles, each one of them keeps the code separate and pulls it in from a remote repository of some sort. The Docker Registry container keeps the built binary with the Dockerfile, but the Dockerfile and build scripts are separate from the actual Docker Registry source code.
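As a sketch of that pattern, a build-repo Dockerfile fetches a pinned release of the application instead of copying in local source. The base image, version, and download URL below are hypothetical, not taken from any of the linked Dockerfiles (which also typically verify a checksum or GPG signature before unpacking):

```dockerfile
FROM alpine:3.18

# Hypothetical release version -- pinning a tag keeps the image build
# reproducible and independent of the application repo's HEAD.
ENV MYAPP_VERSION=1.2.3

# Pull the released tarball from a remote location rather than COPYing
# source from the build context. URL is a placeholder.
RUN apk add --no-cache curl \
    && curl -fsSL "https://example.com/myapp/myapp-${MYAPP_VERSION}.tar.gz" \
       | tar -xz -C /usr/local \
    && apk del curl

CMD ["/usr/local/myapp/bin/myapp"]
```

The point is only the shape: the Dockerfile's repository holds build instructions, and the application arrives as a versioned artifact from somewhere else.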

This has pretty much sealed it for me. Keeping the actual application code in one place and the Docker build environment in a separate repository is fine, and it seems to be the norm among major projects. I will continue to keep a Dockerfile, and in most cases a compose.yml, with my projects, though. That works great as a development environment where I can mount the local code into the container.
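A minimal sketch of that development setup, assuming a Node.js project (the service name, base image, port, and dev command here are all hypothetical):

```yaml
# compose.yml -- bind-mounts the local source tree into the container so
# edits on the host are visible inside the dev environment immediately.
services:
  app:
    image: node:20-alpine        # hypothetical base image
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app          # mount the local code into the container
    ports:
      - "3000:3000"              # hypothetical app port
    command: npm run dev         # hypothetical dev command
```

With something like this in the project root, `docker compose up` gives every contributor the same development environment without baking the source into an image.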