When Apple announced their custom CPU machines, part of the hype was how fast they are. I saw a bit of buzz like, “Oh man, my compile times are so amazingly fast,” from people who went ahead and got the Mac Mini with the M1 chip. I waited for the 16″ MacBook Pro, and I was expecting…well, you know, speed. That’s what everyone said, right?
In 1995, I worked at a startup in San Jose and we used PowerPC-based Macs to build our software. We used CodeWarrior, because its compiler was way faster than MPW (and its UI was less user-abusive…some things don’t change; Apple’s developer tools still hate developers). Even so, when we kicked off a build, we had time for two (sometimes three) programmers to play a couple rounds of darts.
Today, I’m working on a project that builds in a Docker container. Because of this, every single time I rebuild, *everything* gets recompiled. When it’s two files, who cares? But when it’s hundreds…well, I don’t care what CPU you’re using because the process is not CPU bound. Compile times are I/O bound, once you’re not allowed to cache intermediate artifacts.
During this rebuild, I wrote a blog post. Next one, I’m gonna go to the bathroom. After that…practice the bagpipes?
4 thoughts on “Speed Is Relative”
You should totally be caching your intermediate artifacts for development work.
The way we do this is to have 2 stages in our build (https://docs.docker.com/develop/develop-images/multistage-build/). The first stage is development – it installs all the stuff we need to do builds (libraries, compilers, etc) – but it does not actually do builds. The second stage does the actual build.
While you are doing development work, use the first stage and mount your code into the container – along with a place to put artifacts:
docker build --target development -t my_container .
docker run --rm -it -v "$(pwd)":/development -w /development my_container bash
Then you do all your dev work in that bash container. Because you’ve mounted your working directory into the container, you get to edit your source code on the host and build in the container – iteratively – just like you’re used to. It happens to be a ‘remote system’, but you’re old enough that you’re used to that. And VS Code (like most good editors) has excellent tools for debugging into containers.
Once you’re happy with it, THEN you do the final build (which probably does not have your intermediate stuff, depending on how you go about it):
docker build --target production -t my_container .
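For concreteness, a two-stage Dockerfile along those lines might look something like this (the base image, toolchain, and build command are illustrative assumptions, not the commenter’s actual file):

```dockerfile
# Stage 1: development — install the toolchain, but don't build anything.
# You run this stage interactively with your source mounted in.
FROM debian:bookworm-slim AS development
RUN apt-get update && apt-get install -y build-essential cmake \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /development

# Stage 2: production — copy the source in and do the real build
# from a clean slate, so the final image is reproducible.
FROM development AS production
COPY . /development
RUN make
```

The `--target` flag in the commands above selects which stage to stop at, which is what lets the same Dockerfile serve both purposes.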
Right…okay, I see. Sure, that’s good. Thanks!
Follow-up report: so, I sort of did that, but not quite, since a) I’m not using VSCode, and I’m learning enough other junk at the moment that I didn’t want to get involved, and b) reworking the docker-compose and Dockerfile wasn’t working out cleanly. So, what I’ve done is write a small shell script that types all that verbose docker junk for me, so I don’t fat-finger it or get it wrong. Then another one, which I source when the container spins up, that exports all the environment variables my server expects in order to run properly (normally specified in docker-compose.yml). Then *another* one that runs the build, copies all the artifacts into a working directory, and runs the server. And it works! Sweet!
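A sketch of what that first wrapper script might look like, written as sourceable functions (the image name, mount path, and function names are all illustrative assumptions, not the actual scripts):

```shell
#!/bin/sh
# docker-dev.sh — hypothetical helpers so the verbose docker invocations
# don't get fat-fingered. Source it, then call the functions by name.
IMAGE=my_container

# Build the development stage of the multi-stage Dockerfile.
build_dev() {
    docker build --target development -t "$IMAGE" .
}

# Drop into a shell with the current directory mounted into the container.
dev_shell() {
    docker run --rm -it -v "$(pwd)":/development -w /development "$IMAGE" bash
}

# Final build: the production stage, from a clean copy of the source.
build_prod() {
    docker build --target production -t "$IMAGE" .
}
```

Usage would be `. ./docker-dev.sh && build_dev && dev_shell` — one short command instead of two long ones.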
Now, of course, I’m discovering that there’s a segfault somewhere in the server framework at shutdown, and it happens sometime after my code is done, so I guess I don’t care too much…except that now I wonder where all the blasted core files are going.
First thing I do for any project is write a Dockerfile and a Makefile to do all the Docker things with the right targets, mounts, ports, etc.
* make image – do the build
* make shell – fire up a shell with the source mounted into the container so I can do work
* make server – fire it up
etc. I copy the last Makefile I used and change the PROJECT_NAME (which ends up driving the image name and container names, etc). Then it’s just a matter of setting the port mappings and other little stuff.
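A minimal sketch of that kind of Makefile (the PROJECT_NAME, port, and mount path are illustrative assumptions; everything except PROJECT_NAME would carry over between projects):

```makefile
PROJECT_NAME := myproject
IMAGE        := $(PROJECT_NAME):dev
CONTAINER    := $(PROJECT_NAME)-dev
PORT         := 8080

.PHONY: image shell server

# Build the image.
image:
	docker build -t $(IMAGE) .

# Interactive shell with the source mounted into the container.
shell: image
	docker run --rm -it --name $(CONTAINER) \
		-v $(PWD):/src -w /src $(IMAGE) bash

# Run the server with its port mapped to the host.
server: image
	docker run --rm --name $(CONTAINER) \
		-p $(PORT):$(PORT) $(IMAGE)
```

Since the image and container names are all derived from PROJECT_NAME, changing that one variable is most of the per-project setup.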