|
|
|
|
|
|
|
|
|
# Dev containers
|
|
|
|
|
|
|
|
|
|
Containers make great development environments. However, they tend to underperform because:
|
|
|
|
|
1. it's easy to fall into the Docker Compose trap
|
|
|
|
|
1. people don't use Alpine
|
|
|
|
|
|
|
|
|
|
Using a dev container has multiple benefits:
|
|
|
|
|
|
|
|
|
|
- "infrastructure as code", a.k.a. . Your codebase itself (in the `ops/devcontainer` dir) defines explicitly defines all the tools and dependencies that you use, with .
|
|
|
|
|
- reproducibility: a few simple commands to create a clean working setup anywhere Docker is supported (i.e., anywhere)
|
|
|
|
|
- isolation: you have clean setup and teardown. You can install a bunch of crap to try it out, and you don't have to remember what it was so you can purge it afterward. Just delete the container.
|
|
|
|
|
|
|
|
|
|
## Quick start
|
|
|
|
|
|
|
|
|
|
Build the container:
|
|
|
|
|
```bash
|
|
|
|
|
ops/devcontainer/build.sh
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Run the container:
|
|
|
|
|
```bash
|
|
|
|
|
ops/devcontainer/start.sh
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
## Concepts
|
|
|
|
|
|
|
|
|
|
A dev container is meant to be short-lived, constantly thrown away and recreated as needed. This explicitly divides the filesystem into "keep" (the source tree and any useful artifacts / caches) and "throw away" (everything else). Frequently regenerating the container ensures that "your environment" never deviates too far from the Infrastructure As Code in your repo; it forces you to add any new tools to the Docker image build process. In service of this, using `docker run --rm [...]` is always recommended.
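Concretely, a throwaway session looks like this (assuming the build step tags the image `devcontainer`):

```bash
# --rm deletes the container's filesystem on exit; only mounted volumes
# (and the image itself) survive.
docker run --rm -it devcontainer sh
```
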
|
|
|
|
|
|
|
|
|
|
Contrary to conventional container ideology, it is not necessary to keep your images tiny and minimize container layers at all costs. For example, conventional wisdom frequently suggests constructs like `RUN cmd1 && cmd2 && cmd3` rather than giving each `cmd` its own `RUN` layer, in order to reduce the number of layers generated from 3 to 1. These practices are optimized for massive horizontal deployments, where you have a gazillion containers and images and resource usage is a big problem. The GAS stack is the complete opposite: you want very few containers, ideally just one. So having *more* layers is actually better, because it speeds up rebuilds: a change only invalidates the layers after it, instead of forcing one huge, frequently rebuilt layer to be redone from scratch. It also makes the Dockerfile much easier to read.
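As a sketch (package choices here are just illustrative):

```dockerfile
FROM alpine:3.19

# One tool per RUN layer: editing the last line only rebuilds that layer;
# everything above it stays in the build cache.
RUN apk add --no-cache git
RUN apk add --no-cache build-base
RUN apk add --no-cache openrc
```
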
|
|
|
|
|
|
|
|
|
|
## Methodologies
|
|
|
|
|
|
|
|
|
|
There are a few techniques and strategies that are useful when working with dev containers:
|
|
|
|
|
|
|
|
|
|
- user management
|
|
|
|
|
- volumes for code
|
|
|
|
|
- volumes for caching
|
|
|
|
|
- openrc services
|
|
|
|
|
- `--net host`
|
|
|
|
|
|
|
|
|
|
### User management
|
|
|
|
|
|
|
|
|
|
To make working in a dev container seamless, create a user inside it whose UID and GID match your host machine user.
|
|
|
|
|
|
|
|
|
|
If you don't do this, Git will complain about conflicting ownership, and any tools or tests that create files will create them as `root`, which you then have to constantly `chown` on the host.
|
|
|
|
|
|
|
|
|
|
To make this work, you need a `build.sh` script that passes the current user's UID and GID as build args to the Docker build step.
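A minimal sketch of such a `build.sh` (the image tag and Dockerfile location are assumptions; adjust for your repo):

```bash
#!/bin/sh
# Hypothetical build.sh: forward the host user's UID/GID into the image
# build so files created in the container match your host user.
set -eu

docker build \
    --build-arg "UID=$(id -u)" \
    --build-arg "GID=$(id -g)" \
    -t devcontainer \
    "$(dirname "$0")"
```

The Dockerfile then consumes these with `ARG UID` / `ARG GID` and creates the matching user, e.g. `adduser -D -u "$UID" dev` on Alpine.
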
|
|
|
|
|
|
|
|
|
|
### Volumes
|
|
|
|
|
|
|
|
|
|
Anything not in a volume (or built into the image) will be lost on container restart. I like to mount the codebase on `/code`.
|
|
|
|
|
|
|
|
|
|
For compiled languages (or anything that needs to "build" the project, e.g., linters), mounting build cache directories can also be useful.
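Putting both together, a run command might look like this sketch (the cache path and named volume are assumptions; substitute your toolchain's cache dir):

```bash
#!/bin/sh
# Hypothetical start.sh: bind-mount the source tree, plus a named volume
# for build caches, so both survive container restarts.
set -eu

docker run --rm -it \
    -v "$PWD":/code \
    -v devcontainer-cache:/home/dev/.cache \
    -w /code \
    devcontainer
```
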
|
|
|
|
|
|
|
|
|
|
### OpenRC
|
|
|
|
|
|
|
|
|
|
OpenRC is much simpler than systemd. If you want to run background processes or network services, making the container's root process OpenRC and writing an OpenRC service script gives the best effort-to-value ratio. ChatGPT can help you write OpenRC service scripts.
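For illustration, a service script might look like this sketch (the `devserver` binary and port are made up; install the file to `/etc/init.d/devserver` and start it with `rc-service devserver start`):

```bash
#!/sbin/openrc-run
# Hypothetical OpenRC service script for a background dev server.

name="devserver"
command="/usr/local/bin/devserver"
command_args="--port 8080"
command_background="yes"
pidfile="/run/${RC_SVCNAME}.pid"

depend() {
    need net
}
```
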
|
|
|
|
|
|
|
|
|
|
### `--net host`
|
|
|
|
|
|
|
|
|
|
Because this is a dev container, it's meant to make your life easier, not tangle you up in security best practices. One of the biggest annoyances of using containers is port mapping, which leads to an explosion of config.
|
|
|
|
|
|
|
|
|
|
Using `docker run --net host [...]` makes the container use the host's networking, instead of creating a virtual network that you have to explicitly map ports back and forth between.
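For example (image name `devcontainer` is an assumption), a server bound to port 8080 inside the container is immediately reachable at `localhost:8080` on the host, with no `-p` flags:

```bash
docker run --rm -it --net host devcontainer
```
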
|