# Docker Compose save and load

I recently had the opportunity to extend the Docker Compose codebase to add the save and load subcommands.

## Why is this needed?

To deploy software in an airgapped environment, both the software and its configuration must be transferred between networks. When Docker Compose is used for deployment, the items to transfer include the docker-compose.yml config file and the Docker image for each service. There is no obvious existing tool to prepare these items for transfer between networks, so I decided to build my own.

## How it works

Since the Docker CLI comes with a mechanism for saving and loading images (docker save and docker load), it made sense to simply extend the docker compose command to support a similar interface.

save: The docker compose save command does the following:

  • Pulls any service images that can be pulled

  • Builds any service images that can be built

  • Builds a .tar archive with the following:

    • The project’s compose file

    • The service images (collected using Docker’s save endpoint)

load: The docker compose load command does the following:

  • Expands the .tar archive

  • Copies the compose file from the archive to disk

  • Extracts the service images from the archive

  • Calls Docker’s load endpoint to add the images to the local repository

To transfer a deployment into an airgapped environment, simply run the docker compose save command on an internet-connected host, copy the resulting archive to a transfer medium, move it to the airgapped host, and run docker compose load to import all the necessary items.

## Lessons Learned

This was my first time working with a large existing Go codebase. I’ve worked with a few that I started on my own, but never with an existing codebase, let alone one of this size. I must say, it was a very simple and enjoyable experience. I’ve come away from this project with a renewed appreciation for Go.

One Docker trick I learned along the way is that the docker save command actually supports dumping multiple images into one .tar archive; they can then be loaded in bulk using docker load. With that in mind, it probably would have been easier to write a short bash script to accomplish my goal, but where’s the fun in that?