    # Ensure we always have an up-to-date lock file.
    if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then
        uv lock
    fi
Doesn't this defeat the purpose of having a lock file? If it doesn't exist or if it's invalid, something catastrophic happened to the lock file and it should be handled by someone familiar with the project. Otherwise, why have a lock file at all? The CI will silently replace the lock file and cause potential confusion.

For applications, it's recommended (but still optional) to commit lock files so that very specific and consistent dependencies are maintained, preventing arbitrary, unsupervised package upgrades from leading to breakage.
There are many projects that use pip-compile to lock things down. You couldn’t use python in a regulated environment if you didn’t. I’ve written many Makefiles that explicitly forbid CI from ever creating or updating the actual requirements.txt. It has to be reviewed by a human, or more.
To me, one of the big advantages of uv (and similar tools) is that they make locked dependencies the default, rather than something you need to learn about and opt into. These sorts of better defaults are sorely needed in the Python ecosystem.
Should I be committing the lock file?
I also feel like this handles rare edge cases, but it seems like a pretty straightforward way to do so.
There is never a reason for an automated system to create a lockfile.
Where the lockfile doesn't exist, it creates it from whatever the current dependencies resolve to, and the lockfile then gets thrown away later. So it's equivalent to what you're saying; it just avoids having two completely separate install paths. I think it's the correct approach.
If you do `uv sync --locked` it will not succeed if the lock file does not exist or is out of date.
Edit: I slightly misread your comment. I strongly agree that having no lock file or a lockfile that does not match your specified dependencies is a case where a human should intervene. That's why I suggest you should always use the --locked option in your build.
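A minimal sketch of what that looks like as a CI step, using the flag mentioned above:

    # fail the build if uv.lock is missing or stale, instead of regenerating it
    uv sync --locked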
Speed is okay, but security of a package manager is far more important.
https://chaitalks.tech/uv-a-modern-python-package-manager-in...
And while I'm here... how does uv go about mitigating typosquatting risks? I could imagine it issuing warnings if, say, it notices you requesting "dlango", which would work OK for the top 10%, but are you suggesting there's some more general solution built into uv?
I did a quick search but 'typosquatting' is not an easy string to cut through.
Given how often the python community already deals with breaking changes, it shouldn't be much different for pip to adopt saner defaults in a new major version.
b) pip now has an option _not_ to run arbitrary code by disallowing source distributions, by passing --only-binary :all:
"By default, pip does not perform any checks to protect against remote tampering and involves running arbitrary code from distributions. It is, however, possible to use pip in a manner that changes these behaviours, to provide a more secure installation mechanism." https://pip.pypa.io/en/stable/topics/secure-installs/
Python packages are often just a zip file full of py files, with one of them called 'setup.py'. Running this file installs the package (originally using [distutils](https://docs.python.org/3.9/install/index.html#install-index)). This installation may fail if dependencies are not present, but there’s no method provided for installing those dependencies. You’re supposed to read the error message, go download the source for the missing dependencies, then run their setup.py scripts to install them.
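For illustration, a minimal legacy setup.py of the kind described above (package name hypothetical):

    # minimal legacy setup.py; installed by running `python setup.py install`
    from distutils.core import setup

    setup(
        name="example",            # hypothetical package name
        version="0.1",
        py_modules=["example"],    # the .py files shipped in the zip
    )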
In the end, every package manager (so far at least) downloads and runs untrusted (unless you've verified it manually) 3rd-party code. Whatever the implementation-level security difference between uv and pip, it's dwarfed by the question of whether you've found a way of handling untrusted 3rd-party code at all.
- Removing requirements.txt makes it harder to track the high-level deps your code requires (and their install options/flags). Typically requirements.txt should be the high level requirements, and you should pass them to another process that produces pinned versions. You regenerate the pinned versions/deps from the requirements.txt, so you have a way to reset all dependencies as your core ones gain or lose nested dependencies.
- +COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/ seems useful, but the upstream Docker tag could be repinned to a different hash, causing conflicts. Use the hash (see the sketch after this list), or use a different way to stage your dependencies and copy them into the image. Whenever possible, confirm your artifacts match known hashes.
- Installing into the container's /home/project/.local may preserve the uv pattern, but it's going to make a container that's harder to debug. Production containers (if not all containers) should install files into normal global paths so that it's easy to find them, reason about them, and use standard tools to troubleshoot. This allows non-uv users to diagnose the running application, and removes extra abstraction layers which create unneeded complexity.
- +RUN chmod 0755 bin/ && bin/uv-install* - using scripts makes things easier to edit, but it makes it harder to understand what's going on in a container, because you have to run around the file tree reading files and building a mental map of execution. Whenever possible, just shove all the commands into RUN lines in the Dockerfile. This allows a user to just view the Dockerfile and know the entire execution without extra effort. It also removes some complexity in terms of checking out files, building Docker context, etc.
- Try to avoid docker compose and other platform-constrained tools for running your tests, freezing versions, etc. Your SDLC should first be composed of your build tools/steps using just native tools/environments. Then on top of that should go the CI tools. This separation of "dev/test environment" from CI allows you to take your "dev/test environment" and run it on any CI platform - Docker Compose, GitHub Actions, CircleCI, GitLab CI, Jenkins, etc - without modifying the "dev/test environment" tools or workflow. Personally I have a dev.sh that sets up the dev environment, build.sh to run any build steps, test.sh to run all the test stuff, ci.sh to run CI/CD-specific stuff (it just calls the CI/CD system's API and waits for status), and release.sh to cut new releases.
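The digest-pinning fix for the COPY line above would look something like this (the digest value is a placeholder, not a real hash):

    # pin by immutable digest so a moved tag can't silently change the binary
    COPY --from=ghcr.io/astral-sh/uv:0.7.13@sha256:<digest> /uv /uvx /usr/local/bin/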
Only thing I’m not sure about: why does it matter whether your list of requirements lives in requirements.txt vs pyproject.toml? Isn’t it just one file vs another?
The very first section of the article talks about replacing requirements.txt with pyproject.toml which contains a similar high-level list of deps
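For reference, that pyproject.toml equivalent is a PEP 621 table along these lines (project name and dependency are hypothetical):

    [project]
    name = "example"
    version = "0.1.0"
    dependencies = [
        "requests>=2.31",    # high-level, unpinned; the lock file holds the pins
    ]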
This is what pushed me to use Poetry.
I thought it only locks down hashes?
A simple "requirements.in" I did over this weekend was a single dependency:
miniboss >=0.4, <0.5
And used pip-compile to pin all transitive dependencies: pip-compile -o requirements.txt requirements.in
This generated a "requirements.txt" with 14 dependencies with pinned versions: attrs==25.3.0
...13 more dependencies
It's then only a matter of running "pip install -r requirements.txt" in the venv for my "application" (wrapper scripts for Docker).I've largely settled on this scheme for work and person projects because it's simple (only dev dependency is pip-tools or uv), and it doesn't tie me to a particular Python project management tool (pipenv, pdm, poetry, etc.).
>In docker you can just raw COPY pyproject.toml uv.lock* . then run uv sync --frozen --no-install-project. this skips your own app so your install layer stays cacheable. real ones know how painful it is to rebuild entire layers just cuz one package changed.
>UV_PROJECT_ENVIRONMENT=/home/python/.local bypasses venv. which means base images can be pre-warmed or shared across builds. saves infra cost silently. just flip UV_COMPILE_BYTECODE=1 and get .pyc at build.
> It kills off mutable environments. forces you to respect reproducibility. if your build is broken, it's your lockfile's fault now. accountability becomes visible
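A minimal sketch of the layering described in the first quote, assuming the flags behave as the uv Docker guide documents:

    # dependency layer: only invalidated when pyproject.toml/uv.lock change
    COPY pyproject.toml uv.lock ./
    RUN uv sync --frozen --no-install-project
    # app layer: changes to your own code don't rebuild the dependency layer
    COPY . .
    RUN uv sync --frozen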
Some of these are uv following the standards while pip is still migrating away from legacy behavior, some of these are design choices that uv has made, because the standard is underdefined, it's a tool specific choice, or uv decided not to follow the standards for whatever reason.
I think 2 languages are enough, we don't need a 3rd one that nobody asked for.
I have nothing against Rust. If you want a new tool, go for it. If you want a re-write of an existing tool, go for it. I'm against it creeping into an existing eco-system for no reason.
A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.
https://github.com/python-pendulum/pendulum/issues/844
In my ideal world if someone wanted fast datetimes written in Rust (or any other language other than C) they'd write a proper library suitable for any language to consume over FFI.
So far this Rust stuff has left a bad taste in my mouth and I don't blame the Linux community for being resistant.
I will be out enjoying the sunshine while you are waiting for your Pylint execution to finish
I can't help but think uv is fast not because it's written in Rust but because it's a fast reimplementation. Dependency solving in the average Python project is hardly computationally expensive, it's just downloading and unpacking packages with a "global" package cache. I don't see why uv couldn't have been implemented in Python and be 95% as fast.
Edit: Except that implementing uv in Python would require shipping a Python interpreter, kinda defeating part of its purpose as a package manager that can also install Python.
However, Rust is a thousand times faster than Python.
At the end, if you don't like it don't use it.
"I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself."
Is there anything being done in uv that couldn't be done in Python?
I detailed this in another comment but pip (via requirements.txt): 8.1s, poetry: 3.7s, uv: 2.1s.
Not even 10x against pip and certainly not against poetry.
> Is there anything being done in uv that couldn't be done in Python?
Speed, at the very least.
You could just ignore uv and use whatever you want...
They've been teaching C in universities for like 40 years to every Computer Science and Engineering student. The number of professionally trained developers who know C compared to Rust is not even close. (And a lot of us are writing Python because it's easy and productive, not because we don't know other languages.)
> If Python developers were the inventors of uv - they'd have invented uv
I updated a Rust-implemented wheel to 3.13 compat myself, and literally all that required was bumping pyo3 (which added support back in June) and adding the classifier. Afaik cryptography had no trouble either; iirc what they had to wait on was a 3.13-compatible cffi.
> I'm sure some of the changes are going too far. We are open to revert them if there's an interest from maintainers to merge this PR :)
Notably they bumped the bindings (pyo3) for better architecture coverage, and that required some renaming as 0.23 completed an API migration.
I watched the video and he does mention it going from 30s to 3s when switching from a requirements.txt approach to a uv based approach. No comparison was done against poetry.
I am unable to reproduce these results.
I just copied his dependencies from the pyproject.toml file into a new Poetry project. I ran `poetry install` from within Docker (to avoid using my local cache), via:

    docker run --rm -it -v "$(pwd)":/work python:3.13 /bin/bash

and it took 3.7s.
I did the same with an empty repo and a requirements.txt file and it took 8.1s.
I also did through `uv` and it took 2.1s.
Better performance? Sure. A lot better performance? I can't say that with the numbers I got. 10x performance? Absolutely not.
Also, this isn't a major part of anybody's workflow. Docker builds typically happen on release, or maybe when running tests during CI/CD after the majority of the work has been done locally.
I do get the sentiment that a user of these tools, being a Python developer, could in theory contribute to them.
But if a tool does its job, I don't care that it's not "in Python". Moreover, I imagine there is a class of problems with Python environment setup that would break the very tool meant to help you fix them, if that tool itself were written in Python.
If there are two versions of X, it becomes possible to use the wrong one.
If a tool to manage X depends on X, some of the changes that we would like the tool to perform are more difficult, imperfect or practically impossible.
From its homepage: https://rye.astral.sh/
> If you're getting started with Rye, consider uv, the successor project from the same maintainers.
> While Rye is actively maintained, uv offers a more stable and feature-complete experience, and is the recommended choice for new projects.
> Having trouble migrating? Let us know what's missing.
Need modern Python on an ancient server running with EOL’d distro that no one will touch for fear of breaking everything? uv.
Need a dependency or two for a small script, and don’t want to hassle with packaging to share it? uv.
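For the script case, uv supports PEP 723 inline metadata, so a single file carries its own dependency list (requests here is an arbitrary example):

    # share.py - run with `uv run share.py`; uv builds a throwaway env for it
    # /// script
    # dependencies = ["requests"]
    # ///
    import requests

    print(requests.get("https://example.com").status_code)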
That said, I do somewhat agree with your take on extensions. I have a side project I’ve been working on for some years, which started as pure Python. I used it as a way to teach myself Python’s slow spots, and how to work around them. Then I started writing the more intensive parts in C, and used ctypes to interface. Then I rewrote them using the Python API. I eventually wrote so much of it in C that I asked myself why I didn’t just write all of it in C, to which my answer was “because I’m not good enough at C to trust myself to not blow it up,” so now I’m slowly rewriting it in Rust, mostly to learn Rust. That was a long-winded way to say that I think if your external library functions start eclipsing the core Python code, that’s probably a sign you should write the entire thing in the other language.
Cool story bro.
I'm totally against Python tooling being in the dismal disarray it's been in for the 30 years I've been using the language, and if it takes some Rust projects to improve upon it, I'm all for it.
I'd also rather not have the chicken-and-egg dependency issue of Python tooling written in Python.
>A popular Python package called Pendulum went over 7 months without support for 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C I would have fixed it myself.
Somehow the availability and wide knowledge of C didn't make anyone bother writing a datetime management lib in C and making it as popular. It took those Pendulum Rust coders.
And you could of course use pytz or dateutil or some other, but, no, you wanted to use the Rust-Python lib.
Well, when you start the project yourself, you get to decide what language it would be in.
>I think 2 languages are enough, we don't need a 3rd one that nobody asked for.
Enough for what? The uv users don't have to deal with that. Most ecosystems use a mix of languages for tooling. It's not a detail the user of the tool has to worry about.
>I'm against it creeping into an existing eco-system for no reason.
It's much faster, because it's not written in Python.
The tooling is for the user. The language of the tooling is for the developer of the tooling. These dont need to be the same people.
The important thing is if the tool solves a real problem in the ecosystem (it does). Do people like it?
Having your python management tools also be written in python creates a chicken-and-egg situation. Now you have to have a working python install before you can start your python management tool, which you are presumably using because it's superior to managing python stuff any other way. Then you get a bunch of extra complex questions like, what python version and specific executable is this management tool using? Is the actual code you're running using the same or a different one? How about the dependency tree? What's managing the required python packages for the installation that the management tool is running in? How do you know that the code you're running is using its own completely independent package environment? What happens if it isn't, and there's a conflict between a package or version your app needs and what the management tool needs? How do you debug and fix it if any of this stuff isn't actually working quite how you expected?
Having the management tool be a compiled binary you can just download and use, regardless of what language it was written in, blows up all of those tricky questions. Now the tool actually does manage everything about python usage on your system and you don't have to worry about using some separate toolchain to manage the tool itself and whether that tool potentially has any conflicts with the tool you actually wanted to use.
Look at the number of stars ruff and uv got on GitHub. That's a meteoric rise. The approach was validated with ruff and continued with uv; that we can fairly call "was asked for".
> I'm against it creeping into an existing eco-system for no reason.
It's not for no reason; a lot of other things have been tried. It's for big reasons: good performance, and secondly, independence from Python is a feature in itself. When your Python-managing tool does not depend on Python, it simplifies some things.
My current Dockerfile pip setup is as simple as:

    COPY --chown=python:python requirements.txt .
    RUN pip install --no-cache-dir --upgrade pip && \
        pip install --no-cache-dir --compile -r requirements.txt
    COPY --chown=python:python . .
    RUN python -m compileall -f .
    COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

https://docs.astral.sh/uv/guides/integration/docker/#using-u... (We'd recommend pinning the version or SHA in production.)
By default `uv` won't generate `.pyc` files, which might make your service much slower to start.
See https://docs.astral.sh/uv/reference/settings/#pip_compile-by...
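In a Dockerfile that's one env var (per the uv docs; the sync flags are the usual ones):

    # compile .pyc at build time so startup doesn't pay for it
    ENV UV_COMPILE_BYTECODE=1
    RUN uv sync --frozen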
Just install it and try running something using the --with flag. That's where I became intrigued.
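For example (requests is just an arbitrary package to demonstrate the flag):

    # run a one-off command with an extra dependency, no venv setup needed
    uv run --with requests python -c "import requests; print(requests.__version__)"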
As someone who usually used platform Pythons, despite advice against that, uv is what finally got me to stop doing so.
uv python pin <version> will create a .python-version file in the current directory.
uv virtualenv will download the version of Python specified in your .python-version file (like pyenv install) and create a virtualenv in the current directory called .venv using that version of Python (like pyenv exec python -m venv .venv)
uv pip install -r requirements.txt will behave the same as .venv/bin/pip install -r requirements.txt.
uv run <command> will run the command in the virtualenv and will also expose any env vars specified in a .env file (although be careful of precedence issues: https://github.com/astral-sh/uv/issues/9465)
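Putting those together, a typical session might look like this (version and module name hypothetical):

    uv python pin 3.12                    # writes .python-version
    uv venv                               # fetches 3.12 if needed, creates .venv
    uv pip install -r requirements.txt    # installs into .venv
    uv run python -m myapp                # hypothetical entry point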
Uv works more or less the same as I’m used to with other tooling in Ruby, JS, Rust, etc.
> uv will respect Python requirements defined in requires-python in the pyproject.toml file during project command invocations. The first Python version that is compatible with the requirement will be used, unless a version is otherwise requested, e.g., via a .python-version file or the --python flag.
— https://docs.astral.sh/uv/concepts/python-versions/#project-...
For some reason uv pip has been very slow, however. Unsure why, might be my org doing weird network stuff.