Also love Ruff from the Astral team. We just cut our linting + formatting over from pylint + Black to Ruff.
Saw lint times drop from 90 seconds to < 1.5 seconds. Crazy stuff.
uv add <mydependencies> --script mycoolscript.py
And then shoving #!/usr/bin/env -S uv run
on top so I can run Python scripts easily. It's great! - https://everything.intellectronica.net/p/the-little-scripter
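For anyone who hasn't seen it: `uv add --script` writes a PEP 723 inline-metadata block at the top of the file, something like this (dependency name illustrative):

# /// script
# dependencies = [
#     "httpx",
# ]
# ///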
~~That mutates the project/env in your cwd. They have a lot in their docs, but I think you'd like `run --with` or uv's PEP 723 support a lot more~~
Instant reactive reproducible app that can be sent to others with minimal prerequisites (only uv needs to be installed).
Such a hot combo.
Claude 4's training cutoff date is March 2025, though. I just checked, and it turns out Claude Sonnet 4 can do this without needing any extra instructions:
Python script using uv and inline script dependencies
where I can give it a URL and it scrapes it with httpx
and beautifulsoup and returns a CSV of all links on
the page - their URLs and their link text
Here's the output; it did the right thing with regards to those dependencies: https://claude.ai/share/57d5c886-d5d3-4a9b-901f-27a3667a8581

I add this note to my instructions:

> If you need to run these scripts, use "uv run script-name.py". It will automatically install the dependencies. Stdlibs don't need to be specified in the dependencies array.

since e.g. Cursor often gets confused because the dependencies are not installed and it doesn't know how to start the script. The last sentence is for when LLMs get confused and want to add "json", for example, to the dependency array.

1: https://old.reddit.com/r/Python/comments/12rk41t/astral_next...
It seems easy to imagine Astral following a similar path and making a significant amount of money in the process.
One day they're going to tell me I have to pay $10/month per user and add a bunch of features I really don't need just because nobody wants to prioritize the speed of pip.
And most of that fee isn't going to go towards engineers maintaining "pip but faster", it's going to fund a bunch of engineers building new things I probably don't want to use, but once you have a company and paying subscribers, you have to have developers actively doing things to justify the cost.
I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).
What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.
An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]
But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS.
Let's be honest, all attempts to bring up a CPython alternative have failed (niche boosters like PyPy are a separate story, but it's not up to date, and not entirely compatible). For some reason, people think that 1:1 compatibility is either not critical or too costly to pursue (hello, all LLVM-based compilers). I think it's doable, and there's a solid way to solve it. What if Astral thinks so too?
Rust's speed advantages typically come from one of a few places:
1. Fast start-up times, thanks to pre-compiled native binaries.
2. Large amounts of CPU-level concurrency with many fewer bugs. I'm willing to do ridiculous threading tricks in Rust I wouldn't dare try in C++.
3. Much lower levels of malloc/free in Rust compared to some high-level languages, especially if you're willing to work a little for it. Calling malloc in a multithreaded system is basically like watching the Millennium Falcon's hyperdrive fail. Also, Rust encourages abusing the stack to a ridiculous degree, which further reduces allocation. It's hard to "invisibly" call malloc in Rust, even compared to a language like C++.
4. For better or worse, Rust exposes a lot of the machinery behind memory layout and passing references. This means there's a permanent "Rust tax" where you ask yourself "Do I pass this by value or reference? Who owns this, and who just borrows it?" But the payoff for that work is good memory locality.
So if you put in a modest amount of effort, it's fairly easy to make Rust run surprisingly fast. It's not an absolute guarantee, and there are a couple of traps for the unwary (like accidentally forgetting to buffer I/O, or benchmarking debug binaries).
Conda rewrote their package resolver for similar reasons
tl;dw: Rust, a fast SAT solver, micro-optimisation of key components, caching, and hardlinks/CoW.
Even on a single core, this turns out to be simply false. It isn't that hard to either A: be doing enough actual computation that faster languages are in fact perceptibly faster, even, yes, in a web page handler or other such supposedly-blocked computation or B: without realizing it, have stacked up so many expensive abstractions on top of each other in your scripting language that you're multiplying the off-the-top 40x-ish slower with another set of multiplicative penalties that can take you into effectively arbitrarily-slower computations.
If you've never profiled a mature scripting language program, it's worth your time. Especially if nobody on your team has ever profiled it before. It can be an eye-opener.
Then it turns out that for historical path reasons, dynamic scripting languages are also really bad at multithreading and using multiple cores, and if you can write a program that can leverage that you can just blow away the dynamic scripting languages. It's not even hard... it pretty much just happens.
(I say historical path reasons because I don't think an inability to multithread is intrinsic to the dynamic scripting languages. It's just they all came out in an era when they could assume single core, it got ingrained into them for a couple of decades, and the reality is, it's never going to come fully out. I think someone could build a new dynamic language that threaded properly from the beginning, though.)
You really can see big gains just by taking a dynamic scripting language program and porting it to a compiled language with no major changes to the algorithms. The 40x-ish penalty off the top is often in practice an underestimate, because that number is generally from highly optimized benchmarks in which the dynamic language implementation is highly tuned to avoid expensive operations; real code that takes advantage of all the conveniences and indirection and such can have even larger gaps.
This is not to say that dynamic scripting languages are bad. Performance is not the only thing that matters. They are quite obviously fast enough for a wide variety of tasks, by the strongest possible proof of that statement. That said, I think it is the case that there are a lot of programmers who have no idea how much performance they are losing in dynamic scripting languages, which can result in suboptimal engineering decisions. It is completely possible to replace a dynamic scripting language program with a compiled one and possibly see 100x+ performance improvements on very realistic code, before adding in multithreading. It is hard for that not to manifest in some sort of user experience improvement. My pitch here is not to give up dynamic scripting languages, but to have a more realistic view of the programming language landscape as a whole.
I don't know Python, but in JavaScript, triggering 1000 downloads in parallel is trivial. Decompressing them, like in Python, is calling out to some native function. Decompressing them in parallel in JS would also be trivial (no idea about Python). Writing them in parallel is also trivial.
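For what it's worth, the download side is about as trivial in Python with asyncio; a minimal sketch using httpx (URLs illustrative):

import asyncio
import httpx

async def fetch_all(urls):
    # Kick off every request at once; gather preserves input order.
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(client.get(u) for u in urls))

responses = asyncio.run(fetch_all([f"https://example.com/{i}" for i in range(1000)]))

And since zlib releases the GIL while it works, the decompression can be fanned out to a thread pool the same way.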
What would a dynamic scripting language look like that wasn't subject to this limitation? Any examples? I don't know of contenders in this design space but I am not up on it.
The improvements came from lots of work from the entire python build system ecosystem and consensus building.
Sure, other tools could handle the situation, but being baked into the tooling makes it much easier to bootstrap different configurations.
uv does the Python ecosystem better than any other tool, but it's still the standard Python ecosystem as defined in the relevant PEPs.
It creates a venv. Note we're talking about the concept of a virtual environment here, PEP 405, not the Python module "venv".
Off topic, but I wonder why that phrase gets used rather than 10x, which is much shorter.
- 10x is a meme
- what if it's 12x better
Order of magnitude faces less of that baggage, until it does :)
In common conversation, the multiplier can vary from 2x to 10x. In the context of some algorithms, order of magnitude can refer to the delta rather than the absolute. E.g., an algorithm sees a 1.1x improvement over the previous 10 years; a change that shows a 1.1x improvement by itself overshadows an order of magnitude more effort.
For salaries, I've used order-of-magnitude to mean 2x. Good way to show a step change in a person's perceived value in the market.
Long answer: Because if you put a number, people expect it to be accurate. If it was 6x faster, and you said 10x, people may call you out on it.
A metal wheel is still just a wheel. A faster package manager is still just a package manager.
My primary vehicle has off-road capable tires that offer as much grip as a road-only tire would have 20-25 years ago, thanks to technology allowing Michelin to reinvent what a dual-purpose tire can be!
Can you share more about this? What has changed between tires of 2005 and 2025?
https://www.caranddriver.com/features/a15078050/we-drive-the...
> In the last decade, the spiciest street-legal tires have nearly surpassed the performance of a decade-old racing tire, and computer modeling is a big part of the reason
(written about 8 years ago)
The good thing about reinventing the wheel is that you can get a round one.
https://scripting.wordpress.com/2006/12/20/scripting-news-fo...
Personally the only thing I miss from it is support for binary data - you end up having to base64-encode binary content, which is a little messy.
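The workaround is simple enough, to be fair; a minimal sketch, assuming the format in question is JSON ("logo.png" is just an illustrative payload):

import base64
import json

with open("logo.png", "rb") as f:
    blob = f.read()

# Wrap the bytes as base64 text so they survive a text-only format...
doc = json.dumps({"image": base64.b64encode(blob).decode("ascii")})

# ...and decode them back on the other side.
assert base64.b64decode(json.loads(doc)["image"]) == blob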
Just `git clone someproject`, `uv run somescript.py`, then mic drop and walk away.
Maybe that functionality isn't implemented the same way for uvx.
You could try this equivalent command that is under "uv run" to see if it behaves differently: https://docs.astral.sh/uv/concepts/tools/#relationship-to-uv...
e.g.
$ uv tool install asciinema
$ asciinema play example.cast
You don't have that problem with Poetry. You go make a cup of coffee for a couple minutes, and it's usually done when you come back.
Other than that, it's invaluable to me, with the best features being uvx and PEP 723.
What I want is, if my project depends on `package1==0.4.0` and there are new versions of package1, for uv to try to install the newer version. And to do that for a) all the deps simultaneously, b) without me explicitly stating the dependencies on the command line, since they're already written in the pyproject.toml. An `uv refresh` of sorts.
I think you're just specifying your dependency constraints wrong. What you're asking for is not what the `==` operator is for; you probably want `~=`.
pyproject.toml is meant to encode the actual constraints for when your app will function correctly, not hardcode exact versions, which is what the lockfile is for.
Though I do think with Python in particular it's probably better to manually upgrade when needed, rather than opportunistically require the latest, because Python can't handle two versions of the same package in one venv.
[1]: I do sometimes write the title or the description. But never the deps themselves
uv add "example>=0.4.0"
Then it will update as you are thinking.
pyproject.toml’s dependency list specifies compatibility: we expect the program to run with versions that satisfy constraints.
If you want to specify an exact version as a validated configuration for a reproducible build with guaranteed functionality, well, that’s what the lock file is for.
In serious projects, I usually write that dependency section by hand so that I can specify the constraints that match my needs (e.g., what is the earliest version receiving security patches or the earliest version with the functionality I need?). In unserious projects, I’ll leave the constraints off entirely until a breakage is discovered in practice.
If `uv` is adding things with `==` constraints, that’s why upgrades are not occurring, but the solution is to relax the constraints to indicate where you are okay with upgrades happening.
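A hedged sketch of what that looks like in pyproject.toml (package1 from the example above, the rest illustrative):

[project]
dependencies = [
    "package1~=0.4.0",  # compatible release: picks up 0.4.x, never 0.5
    "package2>=1.2",    # anything from 1.2 onwards
]

With constraints like these, `uv lock --upgrade` followed by `uv sync` should (if I have the flags right) bump everything to the newest versions the constraints allow - pretty close to the `uv refresh` being wished for.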
Yeah, that's pretty much what I've been doing with my workaround script. And btw most of my projects are deeply unserious, and I do understand why one should not do that in any other scenario.
Still, I dream of `uv refresh` :D
Much prefer not thinking about venvs.
I've written a lightweight replacement script to manage named central virtual envs using the same command syntax as virtualenvwrapper. Supports tab completion for zsh and bash: https://github.com/sitic/uv-virtualenvwrapper
A problem I have now, though, is when I jump to def in my editor it no longer knows which venv to load, because it's outside of the project. This somehow used to work with virtualenvwrapper but I'm not sure how.
Perhaps uv will continue its ascendancy and get there naturally. But I’d like to see uv be a little more aggressive with “uv native” workflows. If that makes sense.
It's nice software.
I don't see a way to change current and global versions of python/venvs to run scripts, so that when I type "python" it uses that, without making an alias.
https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
I specifically want to run "python", rather than subcommands of some other command, since I often want to pass arguments to the Python interpreter itself along with my script.
* Redis -> Redict, Valkey
* Elasticsearch -> OpenSearch
* Terraform -> OpenTofu
(Probably a few more, but those are the ones that come to mind when they "go rogue")
Or would it be possible to go this fast in Python if you cared enough about speed?
Is it a specific thing that Rust has an amazing library for? Like networking or serde or something?
pip could be made faster based on this, but maybe not quite as fast.
Using Rust is responsible for a lot of speed gains too, but I believe it's the hard linking trick (which could be implemented in any language) that's the biggest win.
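A minimal sketch of that trick, for the curious (illustrative, not uv's actual code): keep one extracted copy of each wheel in a global cache, then "install" by hard-linking every file into the venv instead of copying it:

import os

def link_tree(cache_dir: str, site_packages: str) -> None:
    """Populate site-packages from an extracted wheel using hard links."""
    for root, _dirs, files in os.walk(cache_dir):
        rel = os.path.relpath(root, cache_dir)
        dest = site_packages if rel == "." else os.path.join(site_packages, rel)
        os.makedirs(dest, exist_ok=True)
        for name in files:
            src, dst = os.path.join(root, name), os.path.join(dest, name)
            if not os.path.exists(dst):
                os.link(src, dst)  # same inode, zero bytes copied

Since every link points at the same inode, a hundred environments needing the same package cost almost no extra disk or time.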
I now use uv for everything Python. The reason for the switch was a shared server where I did not have root and there were all sorts of broken packages/drivers and I needed pytorch. Nothing was working and pip was taking ages. Each user had 10GB of storage allocated and pip's cache was taking up a ton of space & not letting me change the location properly. Switched to uv and everything just worked.
If you're still holding out, really just spend 5 minutes trying it out, you won't regret it.
Really? :)
requirements.txt is just hell and torture. If you've ever used modern project/dependency management tools like uv, Poetry, PDM, you'll never go back to pip+requirements.txt. It's crazy and a mess.
uv is super fast and a great tool, but it still has rough edges and bugs.
# Makefile
compile-deps:
	uv pip compile pyproject.toml -o requirements.txt

compile-deps-dev:
	uv pip compile --extra=dev pyproject.toml -o requirements.dev.txt
There's also some additional integration which I haven't tried yet: https://mise.jdx.dev/mise-cookbook/python.html#mise-uv
Is it better about storage use? (And if so, how? Is it just good at sharing what can be shared?)
pip + config file + venv requires you to remember ~2 steps to get the right venv - create one and install stuff into it - and then, for each test run, script execution and such, you need to remember a weird shebang format, or to activate the venv. And the error messages don't help. I don't think they could help, as this setup is not standardized or blessed. You just have to beat the connection between ImportErrors and venvs into your brain.
It's workable, but teaching this to people unfamiliar with it has reminded me how... squirrely the whole tooling can be, for lack of a better word.
Now, team members need to remember "uv run", "uv add" and "uv sync". It makes the whole thing so much easier and less intimidating to them.
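For comparison, the two workflows side by side (file and package names illustrative):

# before: pip + venv
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python main.py

# after: uv
uv add requests   # recorded in pyproject.toml and the lockfile
uv run main.py    # creates and syncs the venv automatically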
There are times when you do NOT want the wheel version to be installed (which is what --no-binary implements in pip), but so many package managers including uv don't provide that core, basic functionality. At least for those that do use pip behind the scenes, like pipenv, one can still use the PIP_NO_BINARY environment variable to ensure this.
So I'll not be migrating any time soon.
See https://docs.astral.sh/uv/reference/environment/#uv_no_binar...
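e.g. something like this (the exact value syntax is an assumption on my part - check that reference page):

UV_NO_BINARY=1 uv pip install somepackage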
uv is still quite new though. Perhaps you can open an issue and ask for that?
When, why? Should I be doing this?
I can see how if you've had issues with dependencies you would rave about systems that let you control down to the commit what an import statement actually means, but I like the system that requires the least amount of typing/thinking and I imagine I'm part of a silent majority.
uv pip install --system requests
but it's more typing. If I type 5 characters per second, making me also type "uv --system" is the same as adding 2 seconds of runtime to the actual command, except even worse, because the chance of a typo goes up and typing takes energy and concentration and is annoying.

Also, it seems like a sign that even Python tooling needs to not be written in Python now to get reasonable performance.
I also appreciate that it handles most package conflicts and constantly maintains the list of packages as you go. I have gotten myself into a hole or two with packages and dependencies; I can usually solve it by deleting the venv and just using uv to reinstall.
#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "python-dateutil",
# ]
# ///
#
# Example body - any script that needs dateutil would do:
from dateutil.parser import parse
print(parse("Fri, 6 Jun 2025 12:00:00"))
So there isn't really much to do to make it simpler.
Or maybe create a second binary or symlink called `uvs` (short for uv script) that does the same thing.
Rather, pip was broken intentionally two years ago and they are still not interested in fixing it:
https://github.com/pypa/packaging/issues/774
I tried uv and it just worked.
Fast is a massive factor.
I haven't used it much, but being so fast, I didn't even stop to think "is it perfect at dependency management?" "does it lack any features?".
Just today I set it up on 20 PCs in a computer lab that doesn't have internet, along with VS Code and some main packages. Just downloaded the files, made a PowerShell script, and it's all working great with Jupyter etc. Now to get kids to be interested in it...
After that many years of optimization, pure-Python performance still seems to be wishful thinking. Its AI/ML success also comes only from serving as a shim language around library calls.
You can use the env variable UV_CONCURRENT_DOWNLOADS to limit this. In my case it needed to be 1 or 2. Anything else would cause timeouts.
An extreme case, I know, but I think that uv is too aggressive here (a download thread for every module), and should use aggregate speeds from each source server to auto-tune per-server threading.
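e.g., with the values that worked in the case above:

UV_CONCURRENT_DOWNLOADS=2 uv sync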
One possible alternative is Pants. It's also written in Rust for performance, but has more flexibility baked into the design.
uv is basically souped-up pip.
Pants is an entire build/tooling system, analogous to something like Bazel. It can handle multiple dependency trees, multiple types of source code, building and packaging, even running tests.
Many languages have many package management tools, but in most languages there are one or two really popular ones.
For python you just have to memorize this basically:
- Does the project have a setup.py? if so, first run several other commands before you can run it. python -m venv .venv && source .venv/bin/activate && pip install -e .
- else does it have a requirements.txt? if so python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
- else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run
- else does it have a pipfile? pipenv install and then prefix all commands with pipenv run
- else does it have an environment.yml? if so conda env create -f environment.yml and then look inside the file and conda activate <environment_name>
- else I have not had to learn the rules for uv yet
Thank goodness these days I just open up a cursor tab and say "get this project running"
> - else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run
That's not even correct. Not all projects with pyproject.toml use poetry (but poetry will handle everything with a pyproject.toml)
Just try uv first. `uv pip install .` should work in a large majority of cases.
pipenv is on the way out. bare `setup.py` is on the way out. `pyproject.toml` is the present and future, and the nice thing about it is it is self-describing in the tooling used to package.
I switched everything over and haven’t looked back.
It’s everything I hoped poetry would be, but 10x less flakey.
I have one complaint though, I want ./src to be the root of my python packages such that
> from models.attention import Attention
Works if I have a directory called models with a file called attention.py in it (and __init__.py) etc. The only way this seems to work correctly is if I set PYTHONPATH=./src
Surely the environment manager could set this up for me? Am I just doing it wrong?
I have read a few tickets saying uv won’t support this so everyone running my project will have to read the README first to get anything to run. Terrible UX.
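For what it's worth, the usual fix is to make the project an installable package, so that `uv sync` does an editable install and everything under src/ becomes importable without PYTHONPATH. A hedged sketch with hatchling (package name taken from the example above):

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/models"]

After that, `from models.attention import Attention` should resolve under `uv run` without any environment variables.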
Same with uv. They are doing very nice tricks, like sending Range requests to download only the metadata part of the ZIP file from PyPI, resolving everything in memory, and only after that downloading the packages. No other package manager does this kind of crazy optimization.
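To make the trick concrete, here's a toy sketch of the idea in Python (definitely not uv's implementation, which does this natively in async Rust): hand zipfile a file-like object whose reads are HTTP Range requests, and it will pull down only the central directory and the METADATA member:

import io
import zipfile
import httpx

class HTTPRangeFile(io.RawIOBase):
    """File-like object whose reads are HTTP Range requests against a remote file."""

    def __init__(self, url: str):
        self.url, self.pos = url, 0
        # Assumes the server reports Content-Length and honors Range requests.
        head = httpx.head(url, follow_redirects=True)
        self.size = int(head.headers["Content-Length"])

    def seekable(self): return True
    def readable(self): return True
    def tell(self): return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        base = {io.SEEK_SET: 0, io.SEEK_CUR: self.pos, io.SEEK_END: self.size}[whence]
        self.pos = base + offset
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        if n == 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + n, self.size) - 1
        resp = httpx.get(self.url, follow_redirects=True,
                         headers={"Range": f"bytes={self.pos}-{end}"})
        self.pos += len(resp.content)
        return resp.content

# Hypothetical wheel URL; zipfile seeks to the central directory at the end of
# the archive and then fetches only the METADATA member - a few KB, not the wheel.
wheel = zipfile.ZipFile(HTTPRangeFile("https://files.pythonhosted.org/.../some_package.whl"))
meta_name = next(n for n in wheel.namelist() if n.endswith("METADATA"))
print(wheel.read(meta_name).decode())

(These days PyPI can also serve metadata as a separate file per PEP 658, which makes the whole dance even cheaper.)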
What was super unclear was how I develop locally with uv. Figuring out I needed `uv sync --extra` then `uv run --project /opt/aider aider` to run it was a lot of bumbling in the dark. I still struggle to find good references for everyday project use with uv.
It was amazing though. There were so many pyproject and other concerns that it just knew how to do. I kept assuming I was going to have to do a lot more steps.
After the switch, the same dependency resolution was done in seconds. This tool single-handedly made iteration possible again.
However, I really like installing uv globally on my Windows systems and then using uvx to run stuff without caring about venvs or putting stuff on the PATH.