Docker
Building a Docker image is a common way to deploy all sorts of applications. However, doing so from a monorepo has several challenges.
Good to know: This guide assumes you're using create-turbo or a repository with a similar structure.

The problem
In a monorepo, unrelated changes can make Docker do unnecessary work when deploying your app.
Let's imagine you have a monorepo that looks like this:
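For illustration, assume a minimal layout with two applications (other workspaces and configuration files are omitted):

```
├── apps
│   ├── api
│   │   └── package.json
│   └── web
│       └── package.json
├── package.json
└── package-lock.json
```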
You want to deploy apps/api using Docker, so you create a Dockerfile:
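A minimal Dockerfile along these lines might look like the following sketch; the Node base image and the app's entry point are assumptions for illustration:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Copy the root manifest and lockfile, plus the app's own manifest
COPY package.json package-lock.json ./
COPY apps/api/package.json ./apps/api/package.json

# Install dependencies
RUN npm install

# Copy the app source and start the app
COPY apps/api ./apps/api
CMD ["node", "apps/api/index.js"]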
This will copy the root package.json and the root lockfile to the Docker image. Then, it'll install dependencies, copy the app source, and start the app.

You should also create a .dockerignore file to prevent node_modules from being copied in with the app's source.
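A minimal .dockerignore for this setup could be as simple as:

```
node_modules
**/node_modules
```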
The lockfile changes too often
Docker is pretty smart about how it deploys your apps. Just like Turborepo, it tries to do as little work as possible.
In our Dockerfile's case, it will only run npm install if the files copied into the image have changed since the previous build. If they haven't, it'll reuse the cached node_modules layer from before.
This means that whenever package.json, apps/api/package.json, or package-lock.json change, the Docker image will run npm install.
This sounds great - until we realize something. The package-lock.json is global for the monorepo. That means that if we install a new package inside apps/web, we'll cause apps/api to redeploy.
In a large monorepo, this can result in a huge amount of lost time, as any change to a monorepo's lockfile cascades into tens or hundreds of deploys.
The solution
The solution is to prune the inputs to the Dockerfile to only what is strictly necessary. Turborepo provides a simple solution: turbo prune.
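Assuming the target workspace is named api, the command looks like this (older Turborepo releases used a --scope flag instead, i.e. turbo prune --scope=api):

```bash
turbo prune api
```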
Running this command creates a pruned version of your monorepo inside an ./out directory. It only includes workspaces that api depends on. It also prunes the lockfile so that only the relevant node_modules will be downloaded.
The --docker flag
By default, turbo prune puts all relevant files inside ./out. But to optimize caching with Docker, we ideally want to copy the files over in two stages.
First, we want to copy over only what we need to install the packages. When running with --docker, you'll find this inside ./out/json.

Afterwards, you can copy the files in ./out/full to add the source files.
Splitting up dependencies and source files in this way lets us only run npm install when dependencies change - giving us a much larger speedup.
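Inside a Dockerfile, the two-stage copy might look like this sketch (the package manager and build command are assumptions):

```dockerfile
# First, install dependencies based only on the pruned manifests and lockfile.
# This layer is only invalidated when dependencies change.
COPY out/json/ .
RUN npm install

# Then, copy the source files and build
COPY out/full/ .
RUN npx turbo build
```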
Without --docker, all pruned files are placed inside ./out.
Example
Our detailed with-docker example goes into depth on how to use prune to its full potential. Here's the Dockerfile, copied over for convenience.
This Dockerfile is written for a Next.js app that is using the standalone output mode.
Remote Caching
To take advantage of remote caches during Docker builds, you will need to make sure your build container has credentials to access your Remote Cache.
There are many ways to handle secrets in a Docker image. Here, we'll use a simple strategy: a multi-stage build that receives the secrets as build arguments, which are kept out of the final image.
Assuming you are using a Dockerfile similar to the one above, we will bring in some environment variables from build arguments right before turbo build:
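Turborepo reads the TURBO_TEAM and TURBO_TOKEN environment variables to authenticate against the Remote Cache, so a sketch of that step might be:

```dockerfile
ARG TURBO_TEAM
ENV TURBO_TEAM=$TURBO_TEAM

ARG TURBO_TOKEN
ENV TURBO_TOKEN=$TURBO_TOKEN

RUN npx turbo build
```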
turbo will now be able to hit your Remote Cache. To see a Turborepo cache hit during a non-cached Docker build, run a command like this one from your project root:
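For example, assuming the Dockerfile lives at apps/web/Dockerfile (the path and placeholder values are illustrative):

```bash
docker build -f apps/web/Dockerfile . --no-cache \
  --build-arg TURBO_TEAM="your-team-name" \
  --build-arg TURBO_TOKEN="your-token"
```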