# /// script
# dependencies = [
# "requests<3",
# "rich",
# ]
# ///
import requests, rich
# ... script goes here
Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies, magically installed into a temporary virtual environment without you having to think about them at all. It's an implementation of Python PEP 723: https://peps.python.org/pep-0723/
Claude 4 actually knows about this trick, which means you can ask it to write you a Python script "with inline script dependencies" and it will do the right thing, e.g. https://claude.ai/share/1217b467-d273-40d0-9699-f6a38113f045 - the prompt there was:
Write a Python script with inline script
dependencies that uses httpx and click to
download a large file and show a progress bar
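For reference, the kind of script that prompt produces looks roughly like this (a sketch, not the actual output from that share; the argument handling is my own):

    # /// script
    # dependencies = [
    #     "httpx",
    #     "click",
    # ]
    # ///
    import click
    import httpx


    @click.command()
    @click.argument("url")
    @click.argument("output", type=click.Path())
    def download(url, output):
        """Download URL to OUTPUT while showing a progress bar."""
        with httpx.stream("GET", url, follow_redirects=True) as response:
            response.raise_for_status()
            total = int(response.headers.get("Content-Length", 0))
            with open(output, "wb") as f, click.progressbar(
                length=total, label="Downloading"
            ) as bar:
                for chunk in response.iter_bytes():
                    f.write(chunk)
                    bar.update(len(chunk))


    if __name__ == "__main__":
        download()

Save it as download.py and `uv run download.py <url> <output>` resolves httpx and click on the fly.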
Prior to Claude 4 I had a custom Claude project that included special instructions on how to do this, but that's not necessary any more: https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
https://vorpus.org/blog/why-im-not-collaborating-with-kennet...
Kenneth Reitz has probably done more to enrich my life than most anyone else who builds things. I wouldn't begrudge him the idea of a nice workstation for his years of labour. Yeah, he's very imperfect, but the author has absolutely lost me
It would be like saying, "Don't use Laplace transforms because he did some unsavory thing at some point in time."
Maybe it's more like: Laplace created awesome things, but let's be fair and also put in his wikipedia page a bit about his political shenanigans.
A lot of so-called geniuses, especially the self-styled ones with some narcissistic traits, get away with being assholes. Their admirers hold them to different norms than regular, boring people. I don't think that is fair or healthy for a community.
I'm not defending his assholery here, but it's not uncommon in tech.
Take an asshole techie and notice they tend to have devoted fans. It's just possible that Kenneth Reitz didn't get his fan base up before he exposed his personality for who he truly was. Steve Jobs, Zuck, Bill Gates, Linus Torvalds, ... were all called assholes at some point or another. Geez and those people aren't even the worst these days.
I liked Requests way back when but prefer httpx or aiohttp. I liked Pipenv for about a month when it first came out, but jumped ship pretty quickly. I'm not familiar with his other works.
I also wouldn't begrudge the guy a laptop, but I do get what the author was saying. His original fundraiser felt off, like, if you want a nice laptop, just say so, but don't create specious justifications for it.
Those two tools are modeled after `requests`, so Reitz still has an influence in your life even if you don't use his implementation directly.
LLMs are funny.
It was good when it was new but it’s dangerously unmaintained today and nobody should be using it any more. Use niquests, httpx, or aiohttp. Niquests has a compatible API if you need a drop-in replacement.
If you need it, sure, it’s great, but let’s not encourage pulling in 3rd party libs unnecessarily.
(Of the non-Requests imports, about two thirds of them occur before Pip even considers what's on the command line — which means that they will be repeated when you use the `--python` option. Of course, Requests isn't to blame for that, but it drives home the point about keeping dependencies under control.)
Others, like pip-tools, have support on the roadmap (https://github.com/jazzband/pip-tools/issues/2027).
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = [
# "requests<3",
# "rich",
# ]
# ///
import requests, rich
# ... script goes here
Usually, when I use uv along with a pyproject.toml, I'll activate the venv before starting neovim, and then my LSP (basedpyright) is aware of the dependencies and it all just works. But with inline dependencies, I'm not sure what the best way to do this is.
I usually end up just manually creating a venv with the dependencies so I can edit inside of it, but then I run the script using the shebang/inline dependencies when I'm done developing it.
# make a venv somehow, probably via the editor so it knows about the venv, saving a 3rd step to tell it which venv to use
$ env VIRTUAL_ENV=.venv/ uv sync --script foo.py
but it's still janky, not sure it saves much mental tax
By default, an exact sync is performed: uv removes packages that are not declared as dependencies of the project. Use the `--inexact` flag to keep extraneous packages.
Edit: further reading <https://unix.stackexchange.com/a/605761/472781> and <https://unix.stackexchange.com/a/774145/472781>, and also note that on older BSDs it also used to be like that.
Indeed, OpenBSD’s and NetBSD’s `env` do not support `-S`. DragonflyBSD’s does (as expected).
Solaris, as pointed out by the first link, doesn’t even support more than one argument in the shebang, so it’s no surprise that its `env` does not support it either. Neither does illumos (though I’m not sure about its shebang handling).
This gave me the questionable idea of doing the same sort of thing for Go: https://github.com/imjasonh/gos
(Not necessarily endorsing, I was just curious to see how it would go, and it worked out okay!)
https://gist.github.com/JetSetIlly/97846331a8666e950fc33af9c...
(I realise there are some architectural issues with making it built-in syntax: magic comments are easier for external tools to parse, whereas the Python core has very limited knowledge of packaging and dependencies… still, one of these days…)
"Any Python script may have top-level comment blocks that MUST start with the line # /// TYPE where TYPE determines how to process the content. That is: a single #, followed by a single space, followed by three forward slashes, followed by a single space, followed by the type of metadata. Block MUST end with the line # ///. That is: a single #, followed by a single space, followed by three forward slashes. The TYPE MUST only consist of ASCII letters, numbers and hyphens."
That's the syntax.
Built-in language syntax.
It might be “built-in syntax” from a specification viewpoint, but does CPython itself know anything about it? Does CPython’s parser reject scripts which violate this specification?
And even if CPython does know about it (or comes to do so in the future), the fact that it looks like a comment makes its status as “built-in syntax” non-obvious to the uninitiated
I'm mostly joking, but normally when people say language syntax they mean something outside a comment block.
uv and other tools would be forced to implement a full Python parser. And since the language changes they would need to update their parser when the language changes.
This approach doesn't have that problem.
Making it a "language feature" has no upside and lots of downside. As the PEP explains.
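To see how little this approach asks of a tool, here is a minimal line-based sketch of extracting the block (the PEP's reference implementation uses a regex and handles more edge cases, but the idea is the same):

    import tomllib  # Python 3.11+; older versions can use the tomli backport


    def read_script_block(source: str, block_type: str = "script") -> dict | None:
        """Return the TOML metadata from a '# /// script' block, or None."""
        lines = iter(source.splitlines())
        for line in lines:
            if line.strip() == f"# /// {block_type}":
                content = []
                for inner in lines:
                    if inner.strip() == "# ///":
                        return tomllib.loads("\n".join(content))
                    # Drop the leading "# " (or bare "#") comment prefix.
                    content.append(inner[2:] if inner.startswith("# ") else inner[1:])
                break
        return None

No knowledge of Python syntax is needed at all, which is exactly the point.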
I think this is a design issue with PyPI though. It really should have some kind of index which goes from module names to packages which provide that module. (Maybe it already does but I don't know about it?)
Of course, that doesn't help if multiple packages provide the same module; but then if there was a process to reserve a module name – either one which no package is currently using, or if it is currently used only by a single package, give the owner of that package ownership of the module – and then the module name owner can bless a single package as the primary package for that module name.
Once that were done, it would be possible to implement the feature where "import X", if X can't be found locally, finds the primary package on PyPI which provides module X, installs it into the current virtualenv, and then loads it.
Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it. And CPython might say the virtualenv configuration needs to nominate an external package installer (pip, pipx, poetry, uv, whatever) so CPython knows what to do in this case.
You could even build this feature initially as an extension installed from PyPI, and then later move it into the CPython core via a PEP. Just in the case of an extension, you couldn't use the "from __future__" syntax.
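As a very rough sketch of what such an extension could look like (entirely hypothetical: it naively assumes the distribution name matches the module name, which the index proposed above would fix, and it hard-codes pip rather than a nominated installer):

    import importlib.abc
    import importlib.util
    import subprocess
    import sys


    class AutoInstallFinder(importlib.abc.MetaPathFinder):
        """Last-resort finder: if a top-level import fails, try installing a
        distribution with the same name and retry the lookup."""

        def __init__(self):
            self._attempted = set()

        def find_spec(self, fullname, path=None, target=None):
            if path is not None or fullname in self._attempted:
                return None  # only handle top-level modules, and only once each
            self._attempted.add(fullname)
            try:
                subprocess.check_call([sys.executable, "-m", "pip", "install", fullname])
            except subprocess.CalledProcessError:
                return None
            importlib.invalidate_caches()
            return importlib.util.find_spec(fullname)


    # Appended rather than prepended, so it only runs after normal lookup fails.
    sys.meta_path.append(AutoInstallFinder())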
> it may legally be a meta-package that runs one-shot configuration code when "built from source"
True, but if Python were to provide this auto-install via "import X" feature, packages of that nature could be supported by including in them a dummy main module. All it would need would be an empty __init__.py. You could include some metadata in the __init__.py if you wished.
Once "import X" auto-install is supported, you could potentially extend the "import" syntax with metadata to specify you want to install a specific package (not the primary package for the module), and with specific versions. Maybe some syntax like:
import foobarbaz ("foo-bar-baz>=3.0")
I doubt all this is going to happen any time soon, but maybe Python will eventually get there.
PyPI never really saw much "design" (although there is a GitHub project for the site: https://github.com/pypi/warehouse/ as well as for a mirror client: https://github.com/pypa/bandersnatch). But an established principle now is that anyone can upload a distribution with whatever name they want — first come, first serve by default. Further, nobody has to worry about what anyone else's existing software is in order to do this. Although there are restrictions to avoid typo-squatting or other social engineering attempts (and in the modern era, names of standard library modules are automatically blacklisted).
> Of course, that doesn't help if multiple packages provide the same module; but then if there was a process to reserve a module name – either one which no package is currently using, or if it is currently used only by a single package, give the owner of that package ownership of the module – and then the module name owner can bless a single package as the primary package for that module name.
These kinds of conflicts are actually by design. You're supposed to be able to have competing implementations of the same API.
> Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it.
The language is not realistically going to change purely to support packaging. The time to propose this was in 2006. (Did you know pip was first released before Python 3.0?)
> but maybe Python will eventually get there.
That would require the relevant people to agree with heading in that direction. IMX, they have many reasons they don't want to.
Anyway, this isn't the place to pitch such ideas. It would be better to try the Ideas and/or Packaging forums on https://discuss.python.org — but be prepared for them to tell you the same things.
Something like `pip install -r <(head myscript.py)`. (Not exactly, but you get the idea).
See also https://peps.python.org/pep-0723/#why-not-infer-the-requirem...
Another point is the ambiguous naming when several packages do roughly the same thing... which is crucial here. Thank you! :)
My own take:
PEP 723 had a deliberate goal of not making extra work for core devs, and not requiring tools to parse Python code. Note that if you put inline comments on the import lines, that wouldn't affect core devs, but would still complicate matters for tools. Python's import statement is just another statement that occurs at runtime and can appear anywhere in the code.
Besides that, trying to associate version numbers with the imported module is a complete non-starter. Version numbers belong to the distributions[1], which may define zero or more top-level packages. The distribution name may be completely independent from the names of what is imported, and need not even be a valid identifier (there is a standard normalization process, but that still allows your distribution's name to start with a digit, for example).
[1]: You say "packages", but I'm trying to avoid the unfortunate overloading the term. PyPA recommends (https://packaging.python.org/en/latest/discussions/distribut...) "distribution package" for what you download from PyPI, and "import package" for what you import in the code; I think this is a bit unwieldy.
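Some well-known examples of that split between distribution name and import name:

    # /// script
    # dependencies = [
    #     "Pillow",
    #     "beautifulsoup4",
    #     "scikit-learn",
    # ]
    # ///
    # ...distribution names above, versus the import packages they provide:
    from PIL import Image
    from bs4 import BeautifulSoup
    from sklearn import datasets

Nothing in the import statements alone tells a tool which distributions (let alone which versions) to install.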
Be aware that uv will create a full copy of that environment for each script by default. Depending on your number of scripts, this could become wasteful really fast. There is a flag "--link-mode symlink" which will link the dependencies from the cache. I'm not sure why this isn't the default, or which disadvantages this has, but so far it's working fine for me, and saved me several gigabytes of storage.
Ironically, if my backup scheme were using hard links, then I could simply exclude the cache directory from backup, so I’d have no reason to do that mountpoint spiel, and uv’s hard links would work normally.
You’re correct, ZFS snapshots don’t produce copies, at least not at the time they’re being created. They work a little like copy-on-write.
I plan on offering this kind of cached precompiled bytecode in PAPER, but I know that compiled bytecode includes absolute paths (used for displaying stack traces) that will presumably then refer to the cached copies. I'll want to test as best I can that this doesn't break anything more subtle.
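A quick way to see those baked-in paths (Python 3.7+, where the .pyc header is 16 bytes; the paths here are made up):

    import marshal
    import py_compile

    src = "/tmp/paper_demo.py"  # hypothetical module
    with open(src, "w") as fh:
        fh.write("def f():\n    raise ValueError('boom')\n")

    pyc = py_compile.compile(src, cfile="/tmp/paper_demo.pyc")

    with open(pyc, "rb") as fh:
        fh.read(16)              # skip the magic/flags/mtime/size header
        code = marshal.load(fh)

    print(code.co_filename)      # -> /tmp/paper_demo.py, embedded in the bytecode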
The hard-link strategy still saves disk space — although not as much (see other replies and discussion).
Possible reasons not to symlink that I can think of:
* stack traces in uncaught exceptions might be wonky (I haven't tried it)
* it presumably adds a tiny hit to imports from Python, but I find it hard to imagine anyone caring
* with the hard-linking strategy, you can purge the cache (perhaps accidentally?) without affecting existing environments
Hahahahaha.
Oh. I'm rolling on the floor. Hahahahaha.
How do you never learn? No, honestly, how do you never learn this simple thing: it will break! I will bet my pension that it will break, and perhaps not you, but some hundreds of developers will have to try to debug and figure out where the dependencies went, why they weren't installed correctly, why something was missing, and so on.
There will never be a situation that you don't have to think about something as important as dependencies at all.
I've found it extremely useful because it makes it trivial for me to try out new dependency versions without thinking about which environment should install them in first.
When I'm teaching people Python the earliest sticking point is always "now activate your virtual environment" - being able to avoid that is huge.
uv, being both, is a more natural fit for an implementation of that PEP.
Here's a relevant discussion: https://discuss.python.org/t/idea-introduce-standardize-proj...
Pipx is a wrapper that does more or less what you're looking for (including PEP 723 support), but it arbitrarily refuses to process top-level packages unless they specify an entry point (which makes them "applications" even with abstract dependencies).
I'm planning to support it in PAPER, which can roughly be described as my vision of what pip and pipx, taken together, should have been.
One of the complaints about python is that a script stops working over time (because the python installation changes with os updates), and this kinda sorta doesn't make it go away entirely, but what it does do is eliminate a bunch of the hassle of getting things to work.
it absolutely can, `uv` can also pin the python interpreter it uses with `requires-python = "==3.11"` or whatever.
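In the inline-metadata form that pin sits in the same block, e.g. (specifier chosen for illustration):

    # /// script
    # requires-python = ">=3.11,<3.12"
    # dependencies = [
    #     "requests<3",
    # ]
    # ///
    import sys

    import requests

    print(sys.version)

uv will find a matching interpreter for the script, downloading one if necessary.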
Jar files are closer maybe...
From my perspective, people seem to make it difficult to use python, more from not understanding the difference between interactive shell and non-interactive shell configurations (if you think the above breaks system tools that use python, then you don't understand the difference, nor that system tools use /bin/python rather than /usr/bin/env python), with a bit of cargo-cult mixed in, more than anything.
Implicit solutions like yours have lower cost of entrance, but larger cost of support. uv python scripts just work if you set them up once
> Also, what are your plans for migration when you'll need to move from one os version to another?
None of my one off python code is OS dependent. But, none of my professional production code is either, because it's rare to have OS specific python code (unless you're building your own libraries), so this concern is very confusing to me. But, to make sure I can reproduce my one off environment, I periodically pip freeze a requirements.txt.
One thing that’s useful to my organization is that we can then proceed to scan the lockfile’s declared dependencies with, e.g., `trivy fs uv.lock` to make sure we’re not running code with known CVEs.
If you have a project with modules, and you'd like a module to declare its dependencies, this won't work. uv will only get those dependencies declared in the invoked file.
For a multi-file project, you must have a `pyproject.toml`, see https://docs.astral.sh/uv/guides/projects/#managing-dependen...
In both cases, the script/project writer can use `uv add <dependency>`, just in the single-file case they must add `--script`.
https://github.com/tobinjones/uvkernel
It’s a pretty minimal wrapper around “uv” and “iPython” to provide the functionality from the article, but for Jupyter notebooks. It’s similar to other projects, but I think my implementation is the least intrusive and a good “citizen” of the Jupyter ecosystem.
There’s also this work-in-progress:
https://github.com/tobinjones/pep723widget
Which provides a companion Jupyter plugin to manage the embedded script dependencies of notebooks with a UI. Warning — this one is partially vibe-coded and very early days.
Marimo notebooks are easy to diff when committing to git. Plus you get AI coding assistants in the notebook, a reactive UI framework, the ability to run in the browser with Pyodide on WASM, and more. It will also translate your old Jupyter notebooks for you.
For me, what uv is to package managers, marimo is to notebooks.
This is the one.
Poetry was a big difference, it was also very slow for me, and so I never picked it up.
Who are "they"? I am not waiting for anyone to adopt uv. I've adopted it myself and forgotten about pip as uv is pip-compatible for all practical purposes.
Compare to npm which has been the default installer/manager in NodeJS since forever, so it's totally supported, and any git repo you happen to download has a package.json
No, it isn’t. This misconception is common and I don’t know where it comes from.
Python is a very complex language hiding behind friendly syntax.
It really isn't, and that is why the install and setup tools are so problematic. They have to hide and overcome a lot of complexity.
> Constraints can be added to the requested dependency if specific versions are needed:
> uv run --with 'rich>12,<13' example.py
Why not mention that this will make uv download specified versions of specified packages somewhere on the disk. Where? Are those packages going to get cached somewhere? Or will it re-download those same packages again and again every time you run this command?
Maybe there should instead be a link to the Concepts section for people who want more details, but I feel it's fine as it is.
There's not much point recycling contents from a package cache unless you currently don't have a venv using that package and also don't reasonably expect to have one in the near future.
And I'm saying this as the guy complaining all the time about things like the size of Numpy wheels.
(I can imagine languages having official LLMs which would more or less "compress/know" enough of the language to be an import of last resort, of sorts, by virtue of which an approximation to the missing code would be provided.)
uv run --with jupyter jupyter notebook
Everything is put into a temporary virtual environment that's cleaned up afterwards. Best thing is that if you run it from a project it will pick up those dependencies as well.

    uv cache clean

Or, if you want to specifically clean just jupyter:

    uv cache clean jupyter

    % cd $(uv cache dir); find . -type f | wc -l; du -hs .
    234495
    16G    .

uvx marimo edit
A one liner with marimo that also respects (and records) inline script metadata using uv:
uvx marimo edit --sandbox my_notebook.py
this is neat af. my throw-away scripts folder will be cleaner now.
https://gist.github.com/pythoninthegrass/e5b0e23041fe3352666...
tl;dr
Installs 3 deps into the uv cache, then does the following:
1. httpx to call get request from github api
2. sh (library) to run ls command for current directory
3. python-decouple to read an .env file or env var
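The gist link is truncated, but a sketch of a script doing those three things might look like this (the endpoint and variable names are my own):

    # /// script
    # dependencies = [
    #     "httpx",
    #     "sh",
    #     "python-decouple",
    # ]
    # ///
    import httpx
    from decouple import config
    from sh import ls

    # 1. httpx: call the GitHub API
    resp = httpx.get("https://api.github.com/repos/astral-sh/uv")
    print(resp.json().get("description"))

    # 2. sh: run `ls` on the current directory
    print(ls("-la"))

    # 3. python-decouple: read a value from .env or the environment
    token = config("GITHUB_TOKEN", default="")
    print("token is set" if token else "no token configured")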
Out of the box, the Python extension redlines all the third-party imports.
As a workaround, I have to plunge into the guts of uv's Cache directory to tell VS Code the cached venv path manually, and cross fingers that it won't recreate that venv too often.
I've been working on a VSCode extension to automatically detect and activate the right interpreter: https://marketplace.visualstudio.com/items?itemName=nsarrazi...
I invoke the "Select Interpreter" action. A file selector opens, then I go to the user cache directory (e.g. ~/.cache on Linux, something like %LOCALAPPDATA%\Cache on Windows). It has a `uv` subdirectory, then I drill further down until I find the directory where uv keeps its venvs. Find the venv that corresponds to your script, then go to its `bin` subdirectory and select the Python executable.
The upside is that you only have to do this once per script.
The downside is that you have to do this once per script.
You can work around this surprising behavior by always using inline dependencies, or by using the `--project` argument, but the latter requires that you type the script path twice, which is pretty inconvenient.
Other than that uv is awesome, but this small quirk is quite annoying.
> Python doesn't require a requirements.txt file, but if you don't maintain this file, or neglect to provide one, it could result in broken functionalities - an unfortunate circumstance for a scripting language.
alias pip='echo "do not use pip, use uv instead" && false'
You can put that in your bashrc or zshrc. There's a way to detect if it's a cursor shell to only apply the alias in this case, but I can't remember how off the top of my head!
Interesting times, eh?
Who (who!) would have told us all we'd be aliasing terminal commands to *natural language* instructions - for machine consumption, I mean. Not for the dumb intern ...
(I am assuming that must have happened at some point - ie. having a verboten command echo "here be dragons" to the PFY ...)
#!/usr/bin/env -S guix shell python python-requests python-pandas -- python3
in scripts for including per-script dependencies. This is language agnostic as long as the interpreter and its dependencies are available as guix packages. I think there may be a similiar approach for utilizing nix shells that way as well.
[0]: https://guix.gnu.org/manual/en/html_node/Invoking-guix-shell...
For a long time I assumed it’s a Python wrapper for libuv which is a much older and well-known project. But apparently it’s Python package manager #137 “this one finally works, really, believe me”
#!/bin/bash
SCRIPT="$1"; shift
uv sync --quiet --script "$SCRIPT" && exec uv run --python "$(uv python find --script "$SCRIPT")" nvim "$SCRIPT" "$@"
https://github.com/pmarreck/yt-transcriber
A commandline youtube transcription tool I built that runs on Mac (nix-darwin) which automatically pulls in all the dependencies it needs
He does all the legwork and boils it down in an easily understandable manner.
He’s also a user here in this thread, as simonw. Just an immensely useful guy.
Works well, but the compilation of the dependencies (first run of the script) can be a bit long. A problem which uv (Python) probably won’t have…
gopalv•6mo ago
I have a bunch of scripts in my git-hooks which have dependencies which I don't want in my main venv.
#!/usr/bin/env -S uv run --script --python 3.13
This single feature meant that I could use the dependencies without making its own venv, but just include "brew install uv" as instructions to the devs.
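For example, a hook along these lines (hypothetical; the YAML check and the pyyaml dependency are just stand-ins for whatever your hooks actually need):

    #!/usr/bin/env -S uv run --script --python 3.13
    # /// script
    # dependencies = [
    #     "pyyaml",
    # ]
    # ///
    """.git/hooks/pre-commit: reject the commit if any YAML file fails to parse."""
    import pathlib
    import sys

    import yaml

    errors = []
    for pattern in ("*.yml", "*.yaml"):
        for path in pathlib.Path(".").rglob(pattern):
            if ".git" in path.parts:
                continue
            try:
                yaml.safe_load(path.read_text())
            except yaml.YAMLError as exc:
                errors.append(f"{path}: {exc}")

    if errors:
        print("\n".join(errors), file=sys.stderr)
        sys.exit(1)

Make it executable and drop it in .git/hooks; the only prerequisite on a dev machine is uv itself.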
theshrike79•6mo ago
They're the ones that just keep running in cron with zero modifications.
Python is when I need to iterate fast and just edit crap on the go, most of those will also be migrated to Go after they stabilise - unless there's a library dependency that prevents it.
`uv` is nice, but it's not "static binary that runs literally anywhere with no prerequisites" -nice
fc417fc802•6mo ago
https://www.gnu.org/software/coreutils/manual/html_node/env-...
alkh•6mo ago
"To test env -S on the command line, use single quotes for the -S string to emulate a single parameter. Single quotes are not needed when using env -S in a shebang line on the first line of a script (the operating system already treats it as one argument)"(from your second link).
This is different for shebang on Mac though:
GNU env works with or without '-S':
#!/opt/homebrew/bin/genv -S bash -v
echo "hello world!"
BSD env works with or without '-S' too:
#!/usr/bin/env -S bash -v
echo "hello world!"
To conclude, looks like adding `-S` is the safest option for compatibility's sake :).
tame3902•6mo ago
"IIRC, the catalyst for it was that early FreeBSD (1990's?) did split up the words on the '#!' line because that seemed convenient. Years later, someone else noticed that this behavior did not match '#!' processing on any other unix, so they changed the behavior so it would match. Someone else then thought that was a bug, because they had scripts which depended on the earlier behavior. I forget how many times the behavior of '#!' processing bounced back and forth, but I had some scripts which ran on multiple versions of Unix and one of these commits broke one of those scripts.
I read up on all the commit-log history, and fixed '#!' processing one more time so that it matched how other unixes do it, and I think I also left comments in the code for that processing to document that "Yes, '#!'-parsing is really supposed to work this way".
And then in an effort to help those people who depended on the earlier behavior, I implemented '-S' to the 'env' command.
I have no idea how much '-S' is used, but it's been in FreeBSD since June 2005, and somewhere along the line those changes were picked up by MacOS 10. The only linux I work on is RHEL, and it looks like Redhat added '-S' between RHEL7 and RHEL8." [https://marc.info/?l=openbsd-tech&m=175307781323403&w=2]