
Turborepo 1.6

Friday, October 21st, 2022
Matt Pocock (@mattpocockuk), Greg Soltis (@gsoltis), Nathan Hammond, Tom Knickman (@tknickman), Anthony Shew (@anthonysheww), Jared Palmer (@jaredpalmer), Mehul Kar (@mehulkar), Chris Olszewski

Turborepo 1.6 changes the game for Turborepo - you can now use it in any project.

Update today by running npm install turbo@latest.

Any codebase can use Turborepo

Turborepo helps speed up tasks in your codebase. Until now, we'd built Turborepo specifically for monorepos - codebases which contain multiple applications and packages.

Turborepo is fantastic in monorepos because they have so many tasks to handle. Each package and app needs to be built, linted, and tested.

But we got to thinking: lots of codebases that aren't monorepos run plenty of tasks. Most CI/CD processes do a lot of duplicated work that would benefit from a cache.

So we're excited to announce that any codebase can now use Turborepo.

Try it out now by starting from the example, or by adding Turborepo to an existing project:

Add Turborepo to your project

  1. Install turbo:
Terminal
npm install turbo --save-dev
  2. Add a turbo.json file at the base of your repository:
./turbo.json
{
  "pipeline": {
    "build": {
      "outputs": [".next/**", "!.next/cache/**"]
    },
    "lint": {
      "outputs": []
    }
  }
}
  3. Try running build and lint with turbo:
Terminal
turbo build lint

Congratulations - you just ran your first build with turbo. Try running the same command again - Turborepo will replay the cached results instead of re-running the tasks.

When should I use Turborepo?

Turborepo being available for non-monorepos opens up a lot of new use cases. But when is it at its best?

When scripts depend on each other

You should use turbo to run your package.json scripts. If you've got multiple scripts which all rely on each other, you can express them as Turborepo tasks:

turbo.json
{
  "pipeline": {
    "build": {
      "outputs": ["dist/**"]
    },
    "lint": {
      // 'build' should be run before 'lint'
      "dependsOn": ["build"]
    },
    "test": {
      // 'build' should be run before 'test'
      "dependsOn": ["build"]
    }
  }
}

Then, you can run:

Terminal
turbo run lint test

Because you've said that build should be run before lint and test, Turborepo will automatically run build for you when you run lint or test.

Not only that, but it'll figure out the optimal schedule: build runs first, then lint and test run in parallel. Head to our core concepts doc on optimizing for speed to learn more.

When you want to run tasks in parallel

Imagine you're running a Next.js app, and also running the Tailwind CLI. You might have two scripts - dev and dev:css:

package.json
{
  "scripts": {
    "dev": "next",
    "dev:css": "tailwindcss -i ./src/input.css -o ./dist/output.css --watch"
  }
}

Without anything being added to your turbo.json, you can run:

Terminal
turbo run dev dev:css

Like concurrently and similar tools, Turborepo will automatically run the two scripts in parallel.
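
One note: long-running watch tasks like these shouldn't be cached. A minimal sketch of how you might opt them out of caching in turbo.json, assuming the two scripts above:

turbo.json
{
  "pipeline": {
    "dev": {
      // long-running watch task - don't cache it
      "cache": false
    },
    "dev:css": {
      "cache": false
    }
  }
}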

This is extremely useful for dev mode, but can also be used to speed up tasks on CI - imagine you have multiple scripts to run:

Terminal
turbo run lint unit:test e2e:test integration:test

Turborepo will figure out the fastest possible way to run all your tasks in parallel.
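
If some of those tasks depend on one another, you can still declare that with dependsOn and let Turborepo parallelize whatever is left. A sketch, assuming each test task needs a build first:

turbo.json
{
  "pipeline": {
    "build": {
      "outputs": ["dist/**"]
    },
    "lint": {
      "outputs": []
    },
    "unit:test": {
      // 'build' should be run before each test task
      "dependsOn": ["build"]
    },
    "e2e:test": {
      "dependsOn": ["build"]
    },
    "integration:test": {
      "dependsOn": ["build"]
    }
  }
}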

Prune now supported on npm

Over the last several releases, we've been adding support for turbo prune on different package managers. This has been a challenge - turbo prune creates a subset of your monorepo, including pruning the dependencies in your lockfile. That means we've had to implement lockfile-handling logic for each package manager separately.

We're delighted to announce that turbo prune now works for npm, completing support for all major package managers. This means that if your monorepo uses npm, yarn, yarn 2+, or pnpm, you'll be able to deploy to Docker with ease.
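
For example, assuming your monorepo has a workspace named web, you can generate a pruned copy of the repo (and its lockfile) ready for a Docker build with:

Terminal
turbo prune --scope=web --docker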

Check out our previous blog on turbo prune to learn more.

Performance improvements in the cache

Before 1.6, Turborepo's local cache worked by recursively copying files from one place on disk to another. This was slow. For every file we needed to cache, we'd perform six system calls: open, read, and close on the source file; open, write, and close on the destination file.

In 1.6, we've cut that nearly in half. Now, when creating a cache, we create a single .tar file (one open), write to it in 1MB chunks (batched writes), and then close it (one close). The same halving of system calls happens on the way back out of the cache.

And we didn't stop there. Over the past month we've invested significantly in our build toolchain to enable CGO, which unlocks best-in-class libraries written in C. This enabled us to adopt Zstandard's libzstd, giving us an algorithmic 3x performance improvement in compression.

After all of these changes we're regularly seeing performance improvements of more than 2x on local cache creation and more than 3x on remote cache creation. This gets even better the bigger your repository is, or the slower your device is (looking at you, CI). This means we've been able to deliver performance wins precisely to those who needed it the most.