
Benchmarks

We created a test generator that produces an application with a variable number of modules so we can benchmark cold startup and file-update tasks. The generated app includes entries for these tools:

  • Next.js 11
  • Next.js 12
  • Next.js 13 with Turbopack
  • Vite

We include Vite as the current state of the art, alongside the Webpack-based Next.js solutions. All of these toolchains point to the same generated component tree, assembling a Sierpiński triangle in the browser, where every triangle is a separate module.

Screenshot of the test application used for these benchmarks: a Sierpiński triangle where each individual triangle is its own component, separated into its own file. In this example, 3,000 triangles are rendered to the screen.
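
To make the setup concrete, here is a minimal sketch of how such a generator could emit one component per triangle. The file names, the MODULE_COUNT variable, and the output layout are hypothetical; the actual generator in the Turbo monorepo differs in detail:

// generate.ts: illustrative sketch, not the actual Turbo benchmark generator.
// Emits one React component per triangle so the module count scales with depth.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const OUT_DIR = join(process.cwd(), "src", "triangles"); // hypothetical layout
const targetModules = Number(process.env.MODULE_COUNT ?? 1000); // hypothetical knob

mkdirSync(OUT_DIR, { recursive: true });
let id = 0;

// An inner node renders three child triangles; a leaf renders a single polygon.
// Every component lives in its own file, so each triangle is a separate module.
// (The enclosing <svg> and positioning transforms are omitted for brevity.)
function emit(depth: number): string {
  const name = `Triangle${id++}`;
  if (depth === 0 || id >= targetModules) {
    writeFileSync(
      join(OUT_DIR, `${name}.tsx`),
      `export default function ${name}() {\n  return <polygon points="0,1 1,1 0.5,0" className="triangle" />;\n}\n`
    );
    return name;
  }
  const children = [emit(depth - 1), emit(depth - 1), emit(depth - 1)];
  writeFileSync(
    join(OUT_DIR, `${name}.tsx`),
    children.map((c) => `import ${c} from "./${c}";`).join("\n") +
      `\n\nexport default function ${name}() {\n  return (\n    <g>\n` +
      children.map((c) => `      <${c} />`).join("\n") +
      `\n    </g>\n  );\n}\n`
  );
  return name;
}

// Pick a recursion depth deep enough to reach the requested module count.
const depth = Math.ceil(Math.log(targetModules) / Math.log(3));
const root = emit(depth);
console.log(`Generated ${id} modules, root component: ${root}`);

Each Next.js and Vite entry then imports the same root component from this shared tree.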

Metrics

Let's break down exactly what each of these metrics means and how it will impact your day-to-day developer experience.

Curious about how these benchmarks are implemented, or want to run them yourself? Check out our benchmark suite documentation in the Turbo monorepo.

Cold startup time

This test measures how fast a local development server starts up on applications of various sizes. We measure this as the time from startup (without cache) until the app is rendered in the browser. We do not wait for the app to be interactive or hydrated in the browser for this dataset.
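
For intuition, here is a rough sketch of what this measurement looks like, assuming Playwright as the browser driver and a hypothetical ".triangle" selector as the render signal. The real harness is driven via cargo bench in the Turbo monorepo and works differently in detail:

// measure-startup.ts: illustrative sketch only; the real harness is driven
// via `cargo bench` in the Turbo monorepo. Measures wall-clock time from
// spawning a dev server with no cache until the app has rendered in the page.
import { spawn } from "node:child_process";
import { rmSync } from "node:fs";
import { chromium } from "playwright"; // assumed to be installed

async function measureColdStartup(command: string, args: string[], url: string): Promise<number> {
  rmSync(".next", { recursive: true, force: true }); // drop any persistent cache (hypothetical path)

  const start = performance.now();
  const server = spawn(command, args, { stdio: "ignore" });

  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Retry until the server answers and the triangles are painted. We stop at
  // first render, not interactivity or hydration, matching the methodology above.
  for (;;) {
    try {
      await page.goto(url, { waitUntil: "domcontentloaded" });
      await page.waitForSelector(".triangle", { timeout: 1_000 });
      break;
    } catch {
      // server not ready yet; try again
    }
  }
  const elapsed = performance.now() - start;

  await browser.close();
  server.kill();
  return elapsed;
}

measureColdStartup("next", ["dev"], "http://localhost:3000").then((ms) =>
  console.log(`cold startup: ${(ms / 1000).toFixed(1)}s`)
);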

Cold startup time at 1,000 modules:

  • Next.js 13 with Turbopack: 1.1s
  • Next.js 12: 3.4s
  • Vite: 4.8s
  • Next.js 11: 7.7s

Chart: startup time by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).

Data

To run this benchmark yourself, clone vercel/turbo and then use this command from the root:

TURBOPACK_BENCH_COUNTS=1000,5000,10000,30000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "startup/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite CSR)."

Here are the numbers we were able to produce on a 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115):

bench_startup/Next.js 11 SSR/1000 modules              7.7±0.06s
bench_startup/Next.js 11 SSR/5000 modules             24.8±0.11s
bench_startup/Next.js 11 SSR/10000 modules            49.4±0.28s
bench_startup/Next.js 11 SSR/30000 modules           176.2±1.42s
bench_startup/Next.js 12 SSR/1000 modules              3.4±0.01s
bench_startup/Next.js 12 SSR/5000 modules             10.7±0.02s
bench_startup/Next.js 12 SSR/10000 modules            20.1±0.07s
bench_startup/Next.js 12 SSR/30000 modules            76.6±0.66s
bench_startup/Turbopack SSR/1000 modules          1125.3±44.54ms
bench_startup/Turbopack SSR/5000 modules               3.6±0.02s
bench_startup/Turbopack SSR/10000 modules              7.5±0.44s
bench_startup/Turbopack SSR/30000 modules             22.3±1.29s
bench_startup/Vite CSR/1000 modules                    4.8±0.02s
bench_startup/Vite CSR/5000 modules                   19.2±0.15s
bench_startup/Vite CSR/10000 modules                  37.2±0.29s
bench_startup/Vite CSR/30000 modules                 109.5±1.14s

File updates (HMR)

We also measure how quickly the development server applies updates: the time from when a change is applied to a source file until we receive a custom browser event confirming that the modified code was executed.

Note that executing modified code does not mean that the changes are visible to the user yet. When a React component changes, it still needs to be re-rendered by the browser. With this methodology, we are focusing only on the time the compiler takes to do its work and the time it takes for the browser to evaluate an updated module.

For Hot Module Reloading (HMR) benchmarks, we first start the dev server on a fresh installation with the test application. We then run five changes to warm up the tooling. This step is important as it prevents discrepancies that can arise with cold processes.

Once our tooling is warmed up, we measure the sixth file update. After repeating this 10 times, we use the average as our final number.
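
As a rough illustration of this "update to evaluation" timing, one could append a snippet to a source module that fires a custom browser event as soon as the updated module is evaluated. The event name, Playwright usage, and file layout below are assumptions; the actual harness is part of the Rust benchmark suite:

// measure-hmr.ts: illustrative sketch only; the real harness is part of the
// Rust benchmark suite. Times a single edit from file write to the custom
// browser event that signals the modified module was evaluated.
import { appendFileSync } from "node:fs";
import type { Page } from "playwright"; // assumed to be installed

export async function measureHmrToEval(page: Page, filePath: string): Promise<number> {
  // Start listening in the page before touching the file.
  const evaluated = page.evaluate(
    () =>
      new Promise<void>((resolve) =>
        document.addEventListener("bench-hmr-eval", () => resolve(), { once: true })
      )
  );

  const start = Date.now();

  // The appended line runs as soon as the dev server pushes the update and the
  // browser evaluates the modified module. This captures compile plus evaluate
  // time only, not React re-rendering, matching the methodology described above.
  appendFileSync(
    filePath,
    `\nif (typeof document !== "undefined") document.dispatchEvent(new Event("bench-hmr-eval"));\n`
  );

  await evaluated;
  return Date.now() - start;
}

Because the event dispatch is appended to the edited module itself, no polling is needed; the event fires exactly when the new code runs in the browser.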

File update (HMR) time at 1,000 modules:

  • Next.js 13 with Turbopack: 15ms
  • Vite: 87ms
  • Next.js 12: 134ms
  • Next.js 11: 273ms
Turbopack vs. Next.js (Webpack) vs. Vite HMR by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).
Turbopack vs. Vite HMR by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).

From our benchmarks, we find the following:

  • Turbopack HMR is 10x faster than Vite once the application scales above 30k modules. This gap continues to widen with more modules, reaching 20x faster above 50k modules.
  • Turbopack HMR is 700x faster than Webpack-based Next.js 11 for large applications with over 50k modules.

The takeaway: Turbopack performance is a function of the size of an update, not the size of an application.

We're excited by Turbopack's performance, but we also expect to make further speed improvements for both small and large applications in the future. For more info, visit the comparison docs for Vite and Webpack.

Data

To run this benchmark yourself, clone vercel/turbo and then use this command from the root:

TURBOPACK_BENCH_COUNTS=10,100,200,500,1000,2000,3000,4000,5000,10000,20000,30000,50000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "hmr_to_eval/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite CSR)"

Here are the numbers we were able to produce on a 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115):

bench_hmr_to_eval/Next.js 11 SSR/10 modules         77.9±21.03ms
bench_hmr_to_eval/Next.js 11 SSR/100 modules        100.5±2.04ms
bench_hmr_to_eval/Next.js 11 SSR/200 modules         98.7±4.48ms
bench_hmr_to_eval/Next.js 11 SSR/500 modules       140.0±14.06ms
bench_hmr_to_eval/Next.js 11 SSR/1000 modules      273.2±17.11ms
bench_hmr_to_eval/Next.js 11 SSR/2000 modules      404.0±24.81ms
bench_hmr_to_eval/Next.js 11 SSR/3000 modules      498.3±22.10ms
bench_hmr_to_eval/Next.js 11 SSR/4000 modules      698.8±54.46ms
bench_hmr_to_eval/Next.js 11 SSR/5000 modules      849.7±14.64ms
bench_hmr_to_eval/Next.js 11 SSR/10000 modules    1713.9±32.96ms
bench_hmr_to_eval/Next.js 11 SSR/20000 modules         5.0±0.12s
bench_hmr_to_eval/Next.js 11 SSR/30000 modules         6.9±1.75s
bench_hmr_to_eval/Next.js 11 SSR/50000 modules        11.6±0.50s
bench_hmr_to_eval/Next.js 12 SSR/10 modules          50.2±5.68ms
bench_hmr_to_eval/Next.js 12 SSR/100 modules         45.7±2.99ms
bench_hmr_to_eval/Next.js 12 SSR/200 modules         50.6±8.27ms
bench_hmr_to_eval/Next.js 12 SSR/500 modules        93.9±19.93ms
bench_hmr_to_eval/Next.js 12 SSR/1000 modules      133.9±12.69ms
bench_hmr_to_eval/Next.js 12 SSR/2000 modules      147.4±24.38ms
bench_hmr_to_eval/Next.js 12 SSR/3000 modules      210.7±33.00ms
bench_hmr_to_eval/Next.js 12 SSR/4000 modules      295.0±21.65ms
bench_hmr_to_eval/Next.js 12 SSR/5000 modules      386.2±84.54ms
bench_hmr_to_eval/Next.js 12 SSR/10000 modules    1067.1±133.67ms
bench_hmr_to_eval/Next.js 12 SSR/20000 modules         2.8±0.09s
bench_hmr_to_eval/Next.js 12 SSR/30000 modules         5.4±0.50s
bench_hmr_to_eval/Next.js 12 SSR/50000 modules         8.5±0.32s
bench_hmr_to_eval/Turbopack SSR/10 modules           13.3±0.49ms
bench_hmr_to_eval/Turbopack SSR/100 modules          13.8±1.10ms
bench_hmr_to_eval/Turbopack SSR/200 modules          14.5±2.18ms
bench_hmr_to_eval/Turbopack SSR/500 modules          14.3±1.63ms
bench_hmr_to_eval/Turbopack SSR/1000 modules         15.3±0.27ms
bench_hmr_to_eval/Turbopack SSR/2000 modules         14.1±0.14ms
bench_hmr_to_eval/Turbopack SSR/3000 modules         15.1±0.37ms
bench_hmr_to_eval/Turbopack SSR/4000 modules         16.0±0.31ms
bench_hmr_to_eval/Turbopack SSR/5000 modules         16.5±0.41ms
bench_hmr_to_eval/Turbopack SSR/10000 modules        14.9±1.35ms
bench_hmr_to_eval/Turbopack SSR/20000 modules        15.3±1.52ms
bench_hmr_to_eval/Turbopack SSR/30000 modules        15.1±1.03ms
bench_hmr_to_eval/Turbopack SSR/50000 modules        16.7±3.73ms
bench_hmr_to_eval/Vite CSR/10 modules                94.8±5.24ms
bench_hmr_to_eval/Vite CSR/100 modules              102.4±2.32ms
bench_hmr_to_eval/Vite CSR/200 modules              101.7±2.39ms
bench_hmr_to_eval/Vite CSR/500 modules              100.1±4.24ms
bench_hmr_to_eval/Vite CSR/1000 modules              86.5±9.13ms
bench_hmr_to_eval/Vite CSR/2000 modules             111.6±3.76ms
bench_hmr_to_eval/Vite CSR/3000 modules             105.0±2.51ms
bench_hmr_to_eval/Vite CSR/4000 modules             99.0±12.26ms
bench_hmr_to_eval/Vite CSR/5000 modules             92.5±22.77ms
bench_hmr_to_eval/Vite CSR/10000 modules           110.4±32.82ms
bench_hmr_to_eval/Vite CSR/20000 modules           204.4±61.72ms
bench_hmr_to_eval/Vite CSR/30000 modules           256.2±67.72ms
bench_hmr_to_eval/Vite CSR/50000 modules          509.8±137.92ms

If you have questions about the benchmark, please open an issue on GitHub.

Last updated on November 8, 2022.