
Docker process creation overhead when compiling

Categories: c c++
Keywords: build system c c++ cmake docker

I am by no means a docker expert, but as it is one of the buzzwords of the moment, I tried to find a use case for it in my daily workflows. As I’m interested in compiling a piece of code with different compilers, I researched how people manage to put docker and compiler together.

As far as I’ve seen, there is mainly one approach:

  • Take a base image

  • Put the compiler in it (skip if the base image already has the compiler)

  • Add the build system

  • Add other tools that can help during compilation (ccache, distcc, colorgcc, …)

  • Add other tools that can help for working (cppcheck, iwyu, …)
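As a sketch, these steps usually end up in a Dockerfile along the following lines (the base image and package names are illustrative, not taken from any particular project):

```dockerfile
# Illustrative sketch only of the "one fat image" approach:
# compiler, build system, compilation helpers, and other development
# tools all baked into a single image. Nothing here is pinned.
FROM debian:bullseye
RUN apt-get update && apt-get install -y --no-install-recommends \
        g++ \
        cmake ninja-build \
        ccache distcc \
        cppcheck \
    && rm -rf /var/lib/apt/lists/*
```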

At least for me, this approach does not scale well.

There are multiple issues, the first one being that it is not obvious what to choose as a base image. And to avoid known vulnerabilities, it is important to keep those images updated.

Secondly, every tool needs to be installed. How? Either through the package manager of the base image or manually. As one of the selling points of Docker is reproducibility, relying on the package manager, without further precautions (at least pinning packages), means that the images are not reproducible, as there is no assurance that the exact same packages will be available when recreating the image (look for example at gcc5).
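For reference, apt can pin a package to an exact version with the pkg=version syntax; a hedged sketch (the version strings below are placeholders and must match what the configured archive actually offers):

```dockerfile
# Illustrative: pin the base image tag and every package version.
# The version strings are placeholders, not guaranteed archive versions,
# and a pin breaks as soon as the archive drops that version.
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        g++-10=10.2.1-6 \
        cmake=3.18.4-2 \
    && rm -rf /var/lib/apt/lists/*
```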

Especially when you want to test a specific compiler version, a specific build system version, or whatever tool, you probably need to build it, package it, and/or install it manually.

At this point, packaging every compiler and tool, in all version variations, on the host system is probably easier and will most likely cause fewer headaches, not least because there is no need to learn a new tool (docker) that brings its own set of issues.

After those considerations, I noticed that GCC provides a set of images with every compiler version released.

Being able to use them would indeed provide a great advantage, as I could avoid managing all those compiler versions by simply using the docker image as a black box.

The first problem I had to tackle was the integration between different tools.

I do not want to use the GCC docker image as a base image and extend it. Putting every tool I use for working with C++ into the image looks like a lot of headaches and maintenance burden. Since I normally use an IDE for writing code, my file manager (and of course also the shell) for navigating files, and a dozen other programs (git, SVN, tmux, Valgrind, …), I don't see any benefit in replicating maybe 70% of my host environment in a docker container and at least doubling the number of tools I have to manage.

So maybe I got it all wrong, and docker is not (also) about packaging?

Package Software into Standardized Units for Development, Shipment, and Deployment

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

So docker should indeed fit my needs.

I just want to use the packaged compiler; other tools should be able to use it.

Turns out it is not that simple.

Of course, the first thing I did was research how other people work with it.

As mentioned earlier, the common approach is to repackage everything or extend the image with the given package manager.

I could not find an image that would work well if you use an IDE, or that integrates other tools like ccache, at least not without modifying the image.

So either I’m missing something, or everyone needs to write their own Dockerfile, because of course we do not all have the same needs. And the worst part is that you probably need to know how the base image was made, at least to know the directory structure, what tools are on it, what GNU/Linux distribution has been used, and so on.

If all I want is a single image with a bunch of tools, and I’m not that interested in its version, this might be fine.

As I wanted to test a library with GCC (at least versions 6, 7, 8, and 9), different clang versions, different CMake versions (3.10 to latest), and different Conan versions, I would already need to create and maintain, let us say, (4 GCC compilers + 4 clang compilers) x (5 CMake versions) = 40 different images, not taking into account other tools (like Conan, make, ninja, Cppcheck, …) or compilers that could help during development.

On the other hand, if I could "integrate" tools of different images without modifying them, I would only need (4 GCC compilers + 4 clang compilers) + (5 CMake versions) = 13 images.

This approach would then have other advantages like:

  • Better separation of tools: one image contains one application and thus has a single purpose; it’s easy to avoid installing unnecessary things and to reuse images unchanged.

  • I can reuse (and I really mean reuse, not adapt) official images as black boxes; I do not need to maintain them.

  • If I can make this integration happen, I can even use my local CMake version or IDE and work with an image. This makes using any tool I want much easier, as I do not need to modify the image every time I realize that I forgot to package something, or that there are other tools I could try out.

  • It scales much better: adding another tool means adding a new independent image, so it scales linearly. When extending images, I need to update every image I already have to ensure consistent environments.

So I tried to see if I could use an unmodified GCC docker image with my local build system and CMake.

My first attempt to integrate it with CMake was creating a toolchain file with the following content


# FIXME: instead of hard coding 1000, ask for real user id...
set(command "docker;run;-v;${CMAKE_SOURCE_DIR}:${CMAKE_SOURCE_DIR};-w;${CMAKE_CURRENT_SOURCE_DIR};--user;1000;gcc:9")
foreach(LANG C CXX ASM)
  set(CMAKE_${LANG}_COMPILER_LAUNCHER "${command}" CACHE STRING "")
endforeach()


and my second attempt looked like


# FIXME: instead of hard coding 1000, ask for real user id...
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "docker run -v ${CMAKE_SOURCE_DIR}:${CMAKE_SOURCE_DIR} -w / --user 1000 gcc:9")


In the first attempt, I defined CMAKE_C_COMPILER_LAUNCHER, CMAKE_CXX_COMPILER_LAUNCHER, and CMAKE_ASM_COMPILER_LAUNCHER, while RULE_LAUNCH_COMPILE applies to all compilers automatically.

The issue (I believe it is an internal CMake error, but I have not received a response from the mailing list yet) is that CMake does not honor those variables when querying the compiler, so it determines, for example, the version from my local compiler (or thinks that there is no compiler available). On the other hand, when the internal test suite that checks whether the compiler works gets executed, CMake does use those variables: I could clearly see that the compiler in the container was invoked, and not the one installed on my host.

As this mismatch caused some issues, I opted for another approach: one wrapper script per compiler that forwards every invocation to docker, referenced from the toolchain file. The wrapper for gcc looks like


#!/bin/sh
# FIXME: instead of hard coding 1000, ask for real user id...
# SOURCE_DIR is expected to contain the project source directory
exec docker run --volume "$SOURCE_DIR":"$SOURCE_DIR" -w "$PWD" --user 1000 gcc:9 gcc "$@"

This is a maintenance burden: instead of changing the compiler version in one location, I need to maintain a different script for every compiler. But now CMake seems to detect everything correctly. I also discovered later that the previous approach could not have worked anyway, as CMake changes the working directory while compiling, so hard-coding -w in a toolchain file would lead to compile errors later on.
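One way to soften the per-compiler duplication is to generate the wrappers from a single template. This is only a sketch under my own assumptions: the image list, the wrapper directory, and the naming scheme are mine, not from the original setup.

```shell
#!/bin/sh
# Sketch: generate one docker-backed wrapper script per compiler, so
# only this file needs touching when a compiler version is added.
# Image list, wrapper directory, and naming scheme are assumptions.
set -eu
WRAP_DIR="${WRAP_DIR:-$PWD/wrappers}"
mkdir -p "$WRAP_DIR"
# each entry is image/tool; the wrapper is named tool-version
for spec in gcc:9/gcc gcc:9/g++ gcc:8/gcc gcc:8/g++; do
    image=${spec%/*}                      # e.g. gcc:9
    tool=${spec#*/}                       # e.g. g++
    name="$WRAP_DIR/$tool-${image#*:}"    # e.g. wrappers/g++-9
    cat > "$name" <<EOF
#!/bin/sh
exec docker run --rm -v "\$PWD:\$PWD" -w "\$PWD" --user "\$(id -u)" $image $tool "\$@"
EOF
    chmod +x "$name"
done
echo "generated wrappers in $WRAP_DIR"
```

A toolchain file then only has to point CMAKE_C_COMPILER and CMAKE_CXX_COMPILER at, say, wrappers/gcc-9 and wrappers/g++-9, and switching compiler version means switching the wrapper path.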

For testing purposes, I tried to build libressl with the given toolchain file.

rm -rf /path/to/libressl/build.docker;
cmake -S /path/to/libressl -B /path/to/libressl/build.docker -DCMAKE_TOOLCHAIN_FILE=toolchain-docker-shell.cmake --debug-trycompile
time cmake --build /path/to/libressl/build.docker

With docker run
real    9:13.49
user    45.968
sys     35.874

Using my native compiler

rm -rf /path/to/libressl/build
cmake -S /path/to/libressl -B /path/to/libressl/build --debug-trycompile;
time cmake --build /path/to/libressl/build
real    46.508
user    2:04.40
sys     30.804

It is 12 times slower! (from 46 seconds to 9 minutes and 13 seconds)

Also, notice that I left --rm out, as I thought it might be better to do the cleanup once after building, instead of wasting time at every invocation.

I guess we can rule compiler differences out, as my host version is gcc (Debian 9.2.1-21) 9.2.1 20191130, while the docker version is gcc (GCC) 9.2.0 (not exactly the same, but nearly).

As docker does many things apart from starting a process, I verified that

time gcc --version>/dev/null
real    0.003
user    0.001
sys     0.000

time docker run gcc:9 gcc --version>/dev/null
real    4.519
user    0.046
sys     0.019

there is indeed a big performance penalty.

After researching a little whether it was possible to reduce the startup cost, I found docker exec.

docker run --rm --name gcc-9 --detach --tty gcc:9 sh;
time docker exec gcc-9 gcc --version>/dev/null
real    0.391
user    0.028
sys     0.028

It looks better, but there is still a noticeable performance penalty compared to invoking a tool on the host.
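A single time run is noisy; averaging over many invocations gives a steadier number for the per-call overhead. A sketch (the script, its loop count, and its defaults are my own, not part of the original setup; pass the command to measure as arguments, e.g. docker exec gcc-9 true):

```shell
#!/bin/sh
# Sketch: average the wall-clock cost of N invocations of a command.
# With no arguments it times the no-op `true`, i.e. pure process
# creation on the host. Uses GNU date's nanosecond precision (%N).
set -u
N=20
if [ "$#" -eq 0 ]; then set -- true; fi
start=$(date +%s%N)
i=0
while [ "$i" -lt "$N" ]; do
    "$@" >/dev/null 2>&1
    i=$((i + 1))
done
end=$(date +%s%N)
avg_ms=$(( (end - start) / N / 1000000 ))
echo "average per call: ${avg_ms} ms"
```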

Nevertheless, I modified the wrapper script


#!/bin/sh
# FIXME: instead of hard coding 1000, ask for real user id...
exec docker exec --workdir "$PWD" --user 1000 gcc-9 gcc "$@"

and recompiled libressl

docker run --volume /path/to/libressl:/path/to/libressl --rm --name gcc-9 --detach --tty gcc:9 sh; # start the container and keep it running in order to be able to use exec
rm -rf /path/to/libressl/build.docker;
cmake -S /path/to/libressl -B /path/to/libressl/build.docker -DCMAKE_TOOLCHAIN_FILE=toolchain-docker-shell.cmake --debug-trycompile
time cmake --build /path/to/libressl/build.docker
real    5:57.59
user    43.368
sys     32.701

It is still 8 times slower!

And no, it is not something that disappears or can be ignored because "we can scale thanks to docker", as someone tried to convince me. Even if scaling helped (and docker adds nothing to it), the same argument could be turned around: ditch docker and the expensive server farm, and replace them with something that costs one-tenth.

Even if I had a server farm with infinite resources, the build would still be 8 times slower (supposing that the difference does not inherently depend on my machine) than building locally on the server. Also, there is a limit to how much of a build can run in parallel (see Amdahl’s law for more information), as there are always some sequential parts. I bet that having more CPUs than files to compile would bring no advantage in the best case, and make the build slower in the worst case, as the system needs to manage more resources. So throwing more CPUs and cores at the problem will not solve it either; on the contrary, it will probably make the system slower.
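To make that ceiling concrete, Amdahl’s law bounds the speedup S on n cores by the sequential fraction of the work; with an illustrative parallel fraction p = 0.95 (the value is made up for the example):

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 20 \quad \text{for } p = 0.95
```

So even infinitely many cores cannot buy more than a 20x speedup in this example, and they do nothing about the constant per-process docker overhead, which is paid once per compiled file.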

Unfortunately, this performance hit is unbearable for continuously working, testing, and debugging on a big C or C++ project. And libressl compiles quickly; I have worked on projects where (clean) build times are above 10 or even 20 minutes, and stretching those to over one or nearly three hours would kill any productivity.

There are of course alternatives to docker for my use cases, like packaging all the different versions of the tools I want, but at least now I know why docker images tend to be "fat". There are no good integration possibilities, and the startup cost, especially of docker run but even of docker exec, is not negligible, particularly for applications whose runtime is shorter than a couple of seconds.

Maybe if the GCC images had been built on a more minimal base (alpine, for example, takes a more minimal approach), the startup times would have been smaller. On the other hand, it seems that some design decisions of Alpine can lead to slower execution times, so maybe using Debian as a base image is the right choice after all. It might be worth trying the official Debian images directly, including the slim variants, to see if that makes a difference. But it would mean creating all those images myself. I would rather package the compilers into different directories under /opt and reuse them in every environment I like, and eventually in docker images too, if I ever need to work in an environment that requires them.

So docker does not seem to fit my needs:

  • It is not easier to maintain than packages on the host

  • It does not scale as well as native (maintenance and runtime)

  • All isolation features, except maybe for the filesystem, are not really relevant for my use case, so they are not an advantage

So this is not a blog post with a happy ending, but it was still an interesting and noteworthy experiment.

Do you want to share your opinion? Or is there an error, some parts that are not clear enough?

You can contact me anytime.