
Lessons learned building Docker Compose

Lessons from the co-creator of Docker Compose on how the challenge of scaling containerized software from one machine to many shaped the abstractions that led to its creation.

[Cover image: Docker Compose and scalability]


This is a guest post by Docker Compose co-creator and friend of Shipyard, Aanand Prasad. You can find him at his website or on Twitter.


Disclaimer: While I was one of the original authors of Docker Compose, I do not own it, and I’m no longer involved with Docker’s development. Please don’t read this as anything more than what it is – some lessons about software design, learned from a strange time in my career.

From some time in early 2014 until some time in late 2016, my job was to design abstractions so powerful that they abstracted away the difference between running software on one machine and running it on thousands. I was far from the only one doing it, of course – my peers and I were doing it collectively, collaboratively, trying to figure out how to model this new landscape we were exploring.

It was strange and confusing work sometimes, but it appears to still be relevant today, because “containers” and “orchestration” don’t seem to be going anywhere. So in case it helps anyone, I’ll share some lessons I learned from both doing that work and watching others do it.

Links

Back in the day, the way to get two Docker containers communicating was with links. The process was as follows:

1 - Create your upstream container – let’s say it’s a database, named db.

$ docker run --name=db my-db-image

2 - Create your downstream container – for example a web app, named web – with a link to the db container.

$ docker run --name=web --link=db my-web-image

Code inside web can now access db at the hostname db.
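
To see what the link actually did, you could peek inside the downstream container. This is a quick sanity check, not from the original post, and it assumes the web image has a shell with cat available:

$ docker exec web cat /etc/hosts

The link mechanism added an entry mapping the hostname db to the db container’s IP address (along with a set of environment variables describing its exposed ports).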

This worked fantastically well for the simple case where you’ve just got one of each container, and one needs to connect to the other, and the upstream one is created before the downstream one. For quite a while, the plan was to stick with links, and improve them in various ways – make them resilient to one or both containers going down, make them work even when the containers were running on different hosts, and so on.

But we scrapped that plan, because links turned out to be a constraining abstraction in the first place. For example, let’s suppose you had 10 database containers (db01, db02, db03, etc) instead of just one. Which one are you going to link your web container to?

Suppose we say “just pick one according to some strategy”. OK, well what happens when that particular database container goes down? Does the web container go down too? If not, I suppose it’s your job to detect that condition and change the link to point to a different one.

Suppose we instead say “link to all of them and let web decide which one to talk to”. OK, but now you have to remember to create and remove links every time a database container comes online or goes offline (assuming links are dynamically creatable). And we haven’t started talking about scaling our web containers up: are we going to end up with a whole tangle of links, one per database container per web container?
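
To make the tangle concrete, here’s a hypothetical sketch (the container names are made up) of three web containers each linked to three database containers – nine links to create by hand, and to recreate whenever anything changes:

$ docker run --name=web01 --link=db01 --link=db02 --link=db03 my-web-image
$ docker run --name=web02 --link=db01 --link=db02 --link=db03 my-web-image
$ docker run --name=web03 --link=db01 --link=db02 --link=db03 my-web-image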

Of course, the basic problems of dealing with how to hook up particular services are the same regardless of the abstraction: something has to decide who connects to what, how they find each other, and what happens when something goes down. But as you can see, an ill-fitting abstraction just makes everything conceptually more difficult.

For these reasons and more, Docker deprecated the links abstraction, and replaced it with networks.

Here’s how the same basic setup works with networks:

1 - Create a network with a particular network driver.

$ docker network create --driver=some-network-driver my-network

2 - Create your database container, and attach it to the network.

$ docker run --name=db --network=my-network my-db-image

3 - Create your web container, and attach it to the network.

$ docker run --name=web --network=my-network my-web-image

As before, code inside web can now access db at the hostname db.

This isn’t a hugely different process to go through, but it’s profoundly different when you think about scaling up the database and web services. Of course, as before, something still needs to decide which web container’s going to talk to which database container, and something still needs to actually facilitate that communication, but the abstraction is no longer in the way: the user no longer has to manage an explosion of links. They just create containers and attach them to networks, and the network driver determines the run-time behaviour.
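
As a rough sketch of what that looks like in practice (the container names here are hypothetical), scaling up is just a matter of attaching more containers to the same network. Docker’s built-in DNS on user-defined networks lets them find each other by name, with no links to manage:

$ docker run --name=db01 --network=my-network my-db-image
$ docker run --name=db02 --network=my-network my-db-image
$ docker run --name=web01 --network=my-network my-web-image
$ docker run --name=web02 --network=my-network my-web-image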

Volumes

Around the same time, we ran into a conceptual problem with persisting data that I thought bore an odd resemblance to the problem with links. The user story went something like this: I’ve got a database, and I want to destroy and recreate the container (for example, to upgrade the image), but of course I want the data to stick around.

The official solution used to look like this:

1 - Create a “data-only container” to store the data:

$ docker run --name=data --volume=/data my-data-image

2 - Create the database container, and tell it to get its volumes from the data container:

$ docker run --name=db --volumes-from=data my-db-image

3 - When you destroy and recreate the database container, you just re-attach it to the data container:

$ docker rm db
$ docker run --name=db --volumes-from=data my-db-image

Even before you start to think about scaling up, this feels a bit weird. Why do I have to create a non-functional container – something I’ve been encouraged to think about as a running process – just to keep track of some data? This is the first sign that the abstraction is off.

But it gets even weirder when you scale from one host to N hosts. Do I have to worry about where my containers are now? Does my database container have to be on the same host as the data container? What happens if that host dies? Have I lost my data? All of these questions are answerable, of course, but it feels more and more like we’re cutting against the grain.

Instead, what if we treat volumes like we treat networks: something that you create independently of containers, where a driver encapsulates all of the decisions about where and how to store and move data around? Now we have an analogous process:

1 - Create a volume with a particular volume driver.

$ docker volume create --driver=some-volume-driver my-volume

2 - Create your database container, and attach the volume to it.

$ docker run --name=db --volume=my-volume:/data my-db-image

3 - When you destroy and recreate the database container, you just re-attach it to the volume:

$ docker rm db
$ docker run --name=db --volume=my-volume:/data my-db-image

Again, we haven’t solved the technical problems yet, but we’ve separated them from the user’s mental model: the volume driver is responsible for the difficult decisions about where and how data is stored – whether on a single machine or a multi-host cluster – and the user just creates containers and attaches them to volumes.
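
One way to convince yourself that the data really does live in the volume rather than in any particular container – a quick sketch, not from the original post, using alpine as an arbitrary throwaway image – is to delete the database container entirely and read the volume from a fresh one:

$ docker rm -f db
$ docker run --rm --volume=my-volume:/data alpine ls /data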

One versus Many

What jumped out at me with both of these design evolutions was how abstractions that work fine on one computer do not necessarily scale to many computers. It may seem obvious in retrospect, but it wasn’t at the time – you can dig yourself into the wrong abstraction for a long time before finally giving up on it.

It’s tempting to generalise and compress all of this down into something like: design for the “many computers” case first, and then the “one computer” case will be trivial. Certainly, this would insulate you from some bad abstractions. But it elides something else that’s crucial to making good software: people generally start small and scale up, and when you’re starting out, you need the simplest abstractions possible.

This was where Docker Compose came in. Instead of using all these fine-grained commands to build your application one piece at a time, you define it in one place: a Compose file.

services:
  web:
    image: my-web-image
  db:
    image: my-db-image
    volumes:
      - /data

After that, you run a single command:

$ docker-compose up

and it creates or updates all the networks, volumes, and containers it needs to.

Notice that you don’t even need to explicitly create networks or volumes, because Compose creates them by default. If you need special configuration or drivers, of course, you can be explicit, but that’s a power user feature.
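
When you do want that control, the Compose file lets you declare networks and volumes at the top level and attach services to them. Here’s a sketch of what that might look like, reusing the placeholder driver names from earlier (and assuming a Compose file format recent enough to support top-level networks and volumes):

services:
  web:
    image: my-web-image
    networks:
      - backend
  db:
    image: my-db-image
    networks:
      - backend
    volumes:
      - db-data:/data

networks:
  backend:
    driver: some-network-driver

volumes:
  db-data:
    driver: some-volume-driver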

The lesson I took away was that you want to be able to say this:

Despite being as simple as possible, this abstraction continues to work when you scale from one computer to many.

But it’s just as important to flip it around:

Despite being able to scale from one computer to many, this abstraction is as simple as possible.
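
As one concrete illustration – not from the original post, and assuming later tooling, a Compose file in a compatible format, and (for the second command) a Swarm-mode cluster – the same kind of file can be scaled up on one machine or handed to a multi-node scheduler without changing the abstraction the user works with:

$ docker-compose up --scale web=3
$ docker stack deploy --compose-file docker-compose.yml my-app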


Aanand is the co-creator of Docker Compose. You can find more of his code here.
