But I'm feeling the lock-in accumulate. Each project adds uv-specific configs, CI assumes uv behavior, team gets used to uv workflows. The GitHub Action is convenient, so we use it. The resolver is better, so we depend on it.
We've watched this movie before. Great developer tool becomes indispensable, then business realities hit. Even Google dropped "don't be evil." The enshittification pattern is well-documented: be good to users until they're locked in, then squeeze. Not saying Astral will; they seem genuinely focused on developer experience.
But that's what everyone says in the first few years!
What's your approach here? Are you building abstraction layers? Keeping alternative workflows tested? Just accepting that you'll deal with migration if/when needed?
I keep adopting uv because it's the right technical choice, but I'm uneasy about having no real fallback plan if things change direction in 2-3 years. The better the tool, the deeper the eventual lock-in.
zahlman•3h ago
A lot of people are calling for Python to just bless the tool officially, distribute it with Python, etc. (which is a little strange to me given that it's also promoted as a way to get Python!). But the way people talk about uv makes it seem hard to get people to care even if that did happen.
Regardless, I feel strongly that everyone who cares about distributing their code and participating in the ecosystem should take the time to understand the underlying infrastructure of venvs, wheels/sdists, etc.
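There's less magic under the hood than people assume: a venv is just a directory built around a `pyvenv.cfg`, and a wheel is just a zip archive whose `.dist-info` directory holds the metadata. A quick sketch (the wheel filename is hypothetical, any locally downloaded wheel works):

```python
import venv
import zipfile

# A venv is a plain directory tree created around a pyvenv.cfg file.
venv.EnvBuilder(with_pip=False).create("demo-env")
print(open("demo-env/pyvenv.cfg").read())

# A wheel is a plain zip: the package's files plus a .dist-info
# directory holding METADATA, RECORD and WHEEL. (Hypothetical file.)
with zipfile.ZipFile("requests-2.32.3-py3-none-any.whl") as whl:
    print([name for name in whl.namelist() if ".dist-info/" in name])
```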
A big part of what uv has accomplished for Python is rapidly implementing new standards like PEP 723 and PEP 751 — or rather, AFAICT the implementations were well underway while the standards were still being finalized, and the Astral team have also been important figures in the discussions. Those standards will persist no matter what Astral decides to do as a company.
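For concreteness, PEP 723 is the inline-script-metadata standard: a single-file script declares its own requirements in a comment block that any compliant runner can read. A minimal example:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///
import requests

print(requests.get("https://pypi.org/simple/").status_code)
```

Run it with `uv run script.py` (or `pipx run script.py`) and the runner builds a throwaway environment with `requests` installed; nothing about the block is uv-specific.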
And for what it's worth, pip actually is slowly, inexorably improving. It just isn't currently in a position to make a clean break from many of the things holding it back.
> What's your approach here? Are you building abstraction layers?
The opposite: I'm sticking with more focused tools (including the ones I'm making myself) and the UNIX philosophy.

PAPER is scoped to implement a lot of useful tools for managing packages and environments, but it's still low-level (and unlike pip, I'm starting with an explicit API). It won't manage a project for you, won't install Python, will have no [tool] config in pyproject.toml... it's really intended as a user tool, with overlapping uses for developers.

On the other side, bbbb is meant to hook everything up so that you can run a build step (unlike Flit, commonly suggested as the simplest option), choose which files are in the distribution, have all the wheel book-keeping taken care of... but things like locating or invoking a compiler are out of scope.

A full dev toolchain would include both, plus some kind of Make-like system (I have a vague design for one), a build front-end (`build` works fine except that it's hard-coded to use either pip or uv to make environments, so I might make a separate PAPER plugin...), an uploader (`twine` is fine, really!) and probably your own additional scripts according to personal preference.
PaulHoule•3h ago
Circa 2017 I was in charge of figuring out build problems at a startup that was using Python for machine learning. By the time I left I had a complete checklist of what the problems were but I was out of time.
I had notes for something that would have been equivalent to "uv" in many ways; in particular it would have had a correct resolving algorithm. I had spike-prototyped many interesting things, such as a system that could use HTTP range requests to download just the metadata from a wheel, so it could rapidly scan a bunch of differently versioned wheels to work out what would be compatible with what.
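The trick still works today. A minimal sketch, assuming the file server honors Range requests (PyPI's CDN does): give `zipfile` a seekable file-like object that fetches byte ranges lazily, so only the central directory and the METADATA member are actually downloaded.

```python
import io
import urllib.request
import zipfile


class HttpRangeFile(io.RawIOBase):
    """Read-only, seekable file backed by HTTP range requests."""

    def __init__(self, url):
        self.url = url
        self.pos = 0
        # A HEAD request tells us the total size up front.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            self.size = int(resp.headers["Content-Length"])

    def seekable(self):
        return True

    def readable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        base = {io.SEEK_SET: 0, io.SEEK_CUR: self.pos, io.SEEK_END: self.size}
        self.pos = base[whence] + offset
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        if n == 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + n, self.size) - 1
        req = urllib.request.Request(
            self.url, headers={"Range": f"bytes={self.pos}-{end}"}
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.pos += len(data)
        return data


def remote_wheel_metadata(wheel_url):
    # zipfile seeks straight to the central directory at the end of the
    # archive, then to the one member we ask for -- a few small fetches
    # instead of the whole wheel.
    with zipfile.ZipFile(HttpRangeFile(wheel_url)) as whl:
        name = next(n for n in whl.namelist() if n.endswith(".dist-info/METADATA"))
        return whl.read(name).decode()
```

(These days PEP 658 makes the trick less necessary: PyPI now serves each wheel's METADATA as a separate file alongside the wheel, which is part of how modern resolvers scan candidates so quickly.)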
One problem I was worried about was that, being written in Python, it would be dependent on having a working Python environment to stand on -- my experience was that Python devs, particularly data scientists, could screw up their Pythons worse than I could imagine. uv's answer of not being written in Python was genius, not least because it contributes to speed (caching matters a lot, though a pure-Python system could do that too) but because a statically linked binary completely eliminates the dangers of corrupted environments. (poetry gets fubared if you use it long enough)
I didn't go forward with it because, when I asked around, I found Pythoners just didn't give a damn that the pip resolver didn't really work -- it works for them, if their project is simple. If your project is a little more complex it still works sometimes; works enough that you can whack it on the side a few times or start fresh occasionally. Only if you had the super-complex projects we were working on was it war all the time. I didn't think I could sell anyone else on using it, so I gave up.
zahlman•1h ago
I'm designing PAPER to be small and efficient (to the extent Python allows), with an explicit API — largely with the intent of making it easy for others to extend, wrap, fork, integrate, and interoperate with it.
> I had notes for something that would have been equivalent to "uv" in many ways, in particular it would have a correct resolving algorithm.
Pip gained a correct (relative to the standards and metadata availability of the time) resolving algorithm in 2020 or soon thereafter. uv adds performance largely via heuristics and shortcuts (i.e., given multiple options for trying to fix a problem, it tries them in an order that's more likely to find a solution earlier). This stuff seems like the hard part, and I'm probably going to just drop in pip's logic (which has already been isolated as `resolvelib`, though it requires some interfacing), at least to start.
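The "interfacing" is a provider object that teaches resolvelib about your package universe. A toy sketch against resolvelib 1.x, with a hypothetical in-memory index where requirements are (name, allowed-versions) pairs and candidates are (name, version) pairs:

```python
from resolvelib import AbstractProvider, BaseReporter, Resolver

# Hypothetical index: {name: {version: [dependency requirements]}}.
INDEX = {
    "app": {1: [("lib", {1, 2})]},
    "lib": {1: [], 2: [("base", {1})]},
    "base": {1: []},
}


class ToyProvider(AbstractProvider):
    def identify(self, requirement_or_candidate):
        return requirement_or_candidate[0]  # both start with the name

    def get_preferences(self, identifier, resolutions, candidates,
                        information, backtrack_causes):
        return 0  # no ordering heuristic; uv's speed wins largely live here

    def find_matches(self, identifier, requirements, incompatibilities):
        allowed = set(INDEX[identifier])
        for _, versions in requirements[identifier]:
            allowed &= versions
        banned = {version for _, version in incompatibilities[identifier]}
        # Prefer newer versions, like pip does.
        return [(identifier, v) for v in sorted(allowed - banned, reverse=True)]

    def is_satisfied_by(self, requirement, candidate):
        return candidate[1] in requirement[1]

    def get_dependencies(self, candidate):
        return INDEX[candidate[0]][candidate[1]]


result = Resolver(ToyProvider(), BaseReporter()).resolve([("app", {1})])
print({name: candidate[1] for name, candidate in result.mapping.items()})
# {'app': 1, 'lib': 2, 'base': 1}
```

The real work (fetching metadata, parsing version specifiers, ordering candidates) lives in the provider; resolvelib itself only does the backtracking.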
> uv's answer of not being written in Python was genius, not least because it contributes to speed (caching matters a lot, though a pure-Python system could do that too) but because a statically linked binary completely eliminates the dangers of corrupted environments. (poetry gets fubared if you use it long enough)
All you needed to do was make your tool have its own isolated environment (in an out-of-the-way place), have it function cross-environment (pip now accomplishes this with a "re-run the code" hack; PAPER has logic to just inspect the target environment explicitly), and refuse to install anything in the tool's environment that isn't vetted as necessary for a tool plugin etc. Yes, I'm sure data scientists can screw up environments something fierce. That's why you let them have their own playground.

My plan is also to create a default "sandbox" environment (like what pip originally tried to do with `--user`, except actually a virtual environment and not given special path-hack treatment) for "library" installs when no target is explicitly specified, and to do all "application" installs in new environments by default.
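For the cross-environment part, one workable approach (not necessarily PAPER's actual logic) is to probe the target interpreter once for its install paths and operate on them from outside, instead of re-executing your whole tool inside the target environment:

```python
import json
import subprocess

# Ask a target interpreter where its packages and scripts live, without
# importing anything from that (possibly broken) environment.
PROBE = (
    "import json, sys, sysconfig;"
    "print(json.dumps({'paths': sysconfig.get_paths(),"
    " 'version': list(sys.version_info[:2])}))"
)


def inspect_environment(python_exe):
    out = subprocess.run(
        [python_exe, "-c", PROBE],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)


# Hypothetical path; point it at any environment's interpreter.
info = inspect_environment("demo-env/bin/python")
print(info["paths"]["purelib"])  # where pure-Python packages get installed
```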
I don't know what issues you encountered with Poetry. If they're still a thing in 2.x (apparently they're at 2.1.3 now), I'm sure they'd appreciate a reproducer.
> I didn't go forward with it because, when I asked around, I found Pythoners just didn't give a damn that the pip resolver didn't really work -- it works for them, if their project is simple.
Pip does work for a lot of people (although it's slow even when it does literally nothing), which is a big part of why it didn't face a lot of pressure to improve. Circa 2017, the complaints were more nebulous, and probably a bigger share of them could be attributed to poor ecosystem standards (in particular, metadata availability) and to Setuptools. Things in the pyproject.toml era are the same in many ways, but very different in others.