Put your uv project inside a Docker container
There are quite a few opinions out there on how to dockerize Python projects. I've had many a discussion on how to put Poetry inside Docker, and now that I'm replacing Poetry with uv everywhere I can, I feel it's best to also post my opinion on how to do that.
So let's jump into building a Dockerfile for a Python project maintained with uv.
What we want to achieve:
- Stable layers, makes re-deployments faster
- No development tools, makes everything safer
- Based on the lock of requirements, keeps dev and prod more aligned
- No virtual env inside the docker, docker is already a controlled environment
- No root in prod
- Minimize attack surface by using a distroless Python image.
For this, we will use Docker BuildKit and inject our own PyPI URL. So we start with:
#!/bin/bash
set -e
rm -rf dist
uv build
export PIP_INDEX_URL="https://pypi.org/simple"
DOCKER_BUILDKIT=1 docker build \
    --secret id=pip_index,env=PIP_INDEX_URL \
    .
This script cleans the dist folder and makes sure our package is ready to be put into a Docker image from there. We also enable BuildKit and pass our PIP_INDEX_URL to the build as a secret.
Now for the Dockerfile. As you can imagine, this is going to be a multi-stage build. The first stage sets up a basic build image:
# Base has Python with pip
FROM debian:12-slim AS dev-base
WORKDIR /app
RUN apt-get update && \
apt-get install --no-install-suggests --no-install-recommends --yes python3-pip gcc libpython3-dev
Now we have to start getting ready to install our Python packages. For this we want to compile a requirements.txt file with all our dependencies as they should be. But there is a small theoretical catch here: if you export your lock on one system, it may not be what you expect to require on another system. For example, if you target an arm deployment but feel comfortable developing on x86_64.
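To make that concrete: an exported lock can carry environment markers, so a pinned line may only apply on one platform. A sketch of what such an exported line could look like (the package name is made up):

```
fastnum==1.2.3 ; platform_machine == "x86_64"
```

On an arm build, resolving against the same project would have to pick a different artifact than the one an x86_64 lock pinned.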
Here is the plan to not miss out on any corner cases: export the lock into constraints.txt inside Docker, then use those constraints to compile a requirements.txt, also inside Docker. This should make sure we have all versions exactly as we need, plus any extra dependencies our platform pulls in.
# Compile is meant to generate requirements for the last installable step
FROM dev-base AS compile
RUN --mount=type=cache,target=/root/.cache \
    --mount=type=secret,id=pip_index,env=PIP_INDEX_URL \
    --mount=type=secret,id=pip_extra_index,env=PIP_EXTRA_INDEX_URL \
    pip install uv==0.5.9 \
    --break-system-packages \
    --disable-pip-version-check
COPY pyproject.toml uv.lock /app/
# We want the same version of packages, but we might need more
# because the lock file might be meant for another architecture or version of Python
RUN uv export --format requirements-txt --output-file /constraints.txt \
--no-editable --no-dev --no-emit-workspace --frozen \
--no-index --no-hashes
# We then compile using the constraints and the Python we expect to run in production
RUN uv pip compile --constraints /constraints.txt --output-file /requirements.txt pyproject.toml
That will give us a compile stage with /requirements.txt that is an exact list of all the dependencies we need inside our Docker image, regardless of the Python version and architecture our uv.lock is maintained on. The downside is that if your development Python version and the Docker Python version drift apart, it will not automatically break the build.
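If you do want that drift to break the build, a hedged sketch of a guard step you could add to the compile stage (the pinned 3.11 is an assumption; match it to the requires-python of your pyproject.toml):

```dockerfile
# Hypothetical guard: fail the build if the image's Python minor version
# is not the one the lock was resolved for (3.11 is an assumption here).
RUN python3 -c 'import sys; assert sys.version_info[:2] == (3, 11), sys.version'
```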
Then for the final step. Instead of using a virtual environment, we just tell pip to install into /env and /app directly:
# Install builds the /env and /app folders we ship to production
FROM dev-base AS install
COPY --from=compile /requirements.txt /requirements.txt
# Install requirements into /env
RUN --mount=type=cache,target=/root/.cache \
    --mount=type=secret,id=pip_index,env=PIP_INDEX_URL \
    --mount=type=secret,id=pip_extra_index,env=PIP_EXTRA_INDEX_URL \
pip install \
--no-deps --disable-pip-version-check \
--target /env \
--requirement /requirements.txt
ADD dist/*.whl /tmp
# Install the application into /app
RUN --mount=type=cache,target=/root/.cache \
    --mount=type=secret,id=pip_index,env=PIP_INDEX_URL \
    --mount=type=secret,id=pip_extra_index,env=PIP_EXTRA_INDEX_URL \
pip install \
--no-deps --disable-pip-version-check \
--target /app \
/tmp/*.whl
As you can see, this is the step where our wheel file comes into play. This may seem late in the game, but up till now only pyproject.toml and uv.lock changes would trigger a rebuild, so it's good that we keep this extremely mutable artifact as late as possible.
At this point we have our own code inside /app and all requirements in the /env folder. These are the folders we finally take into our distroless image. We leave pip, uv, the cache, the requirements files, and the wheel file behind.
FROM gcr.io/distroless/python3-debian12:nonroot
COPY --from=install /env /env
COPY --from=install /app /app
WORKDIR /app
ENV PYTHONPATH=/app:/env
ENTRYPOINT [ "/usr/bin/python", "-m", "polario.main" ]
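The PYTHONPATH line is doing the work a virtual environment would otherwise do. You can sanity-check the idea outside Docker with a minimal sketch (the directories and module names here are made up):

```shell
# Mimic the image's /app:/env layout with two throwaway dirs and confirm
# Python resolves one module from each directory via PYTHONPATH.
mkdir -p /tmp/demo_app /tmp/demo_env
printf 'MSG = "from app"\n' > /tmp/demo_app/appmod.py
printf 'MSG = "from env"\n' > /tmp/demo_env/envmod.py
PYTHONPATH=/tmp/demo_app:/tmp/demo_env python3 -c \
    "import appmod, envmod; print(appmod.MSG, envmod.MSG)"
# → from app from env
```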
There you are. That will put everything into a distroless container. To try this out, I wrote it out for Polario, which is a library to more easily handle Hive-partitioned datasets with Polars.
Let's finally evaluate how we did against what we set out to achieve:
- Stable layers, makes re-deployments faster? Yes, by keeping our /env and /app folders split, we make sure that if we re-deploy with only /app changes, the env layer is already cached on the hosting server.
- No development tools, makes everything safer? Yes, we left behind pip and uv. If we ever get arbitrary command execution issues, those won't be commands you can run.
- Based on the lock of requirements, keeps dev and prod more aligned? Yes, we used pyproject.toml and uv.lock to make sure we keep the development versions and prod versions in check. We had to accept that there is no strict coupling at this stage; maybe a check to break the build would be nice.
- No virtual env inside the docker, docker is already a controlled environment? Yes, by just using pip to install into a target directory instead of creating a virtual environment with Python path redirection.
- No root in prod? Yes, distroless with nonroot tag for the win!
- Minimize attack surface by using a distroless Python image? Yes, who needs all those extra blobs in prod? We don't!
Happy hacking!