Mercedes-AMG GT Track Sport Teased as Porsche 911 GT3 RS Fighter

https://www.thedrive.com/news/mercedes-amg-gt-track-sport-teased-as-porsche-911-gt3-rs-fighter
1•PaulHoule•1m ago•0 comments

Sentry Returning Intermittent 500s

https://status.sentry.io/incidents/8zd66t4svq4k
1•tjwds•2m ago•0 comments

Govt. Website 'Glitch' Removes Trump's Least Favorite Part of Constitution

https://www.rollingstone.com/politics/politics-features/trump-least-favorite-part-constitution-deleted-1235401874/
1•LopRabbit•4m ago•0 comments

Ultraprocessed vs. Minimally Processed Diets

https://www.nature.com/articles/s41591-025-03842-0
2•bookofjoe•5m ago•0 comments

Ask HN: Should schools have a subject where students can think about whatever?

1•amichail•6m ago•0 comments

Apple to invest $100B after pressure from Trump

https://www.bbc.com/news/articles/cdx0n7y29kdo
1•tartoran•6m ago•0 comments

Ask HN: Is Kundalini Dangerous?

2•praxipro•6m ago•1 comments

What's Trending on Open Library?

https://blog.openlibrary.org/2025/08/06/whats-trending-on-open-library/
2•raybb•8m ago•1 comments

How to interactively debug GitHub Actions with netcat

https://jacobtomlinson.dev/posts/2021/how-to-interactively-debug-github-actions-with-netcat/
2•mihau•8m ago•0 comments

Citizen Lab director warns cyber industry about US authoritarian descent

https://techcrunch.com/2025/08/06/citizen-lab-director-warns-cyber-industry-about-us-authoritarian-descent/
2•mdhb•8m ago•0 comments

Could lithium stave off Alzheimer's disease?

https://www.science.org/content/article/could-lithium-stave-alzheimer-s-disease
3•bikenaga•9m ago•0 comments

New hope for Alzheimer's: lithium supplement reverses memory loss in mice

https://www.nature.com/articles/d41586-025-02471-4
2•nullhole•12m ago•0 comments

Norway's Hedged Bet on Europe's Energy Future: A Garbage Disposal for Emissions

https://www.nytimes.com/2025/08/05/business/norway-cabon-capture-northern-lights.html
2•mitchbob•12m ago•1 comments

Trump, Apple to Announce New $100B Commitment to Manufacturing in US

https://www.cbsnews.com/news/trump-apple-committing-100-billion-manufacturing-us/
8•m463•13m ago•0 comments

Gleam v1.12.0 Released

https://github.com/gleam-lang/gleam/blob/main/changelog/v1.12.md
5•Alupis•13m ago•0 comments

AstraZeneca signs AI research deal with China's CSPC

https://www.reuters.com/business/healthcare-pharmaceuticals/astrazeneca-agrees-research-deal-worth-up-522-billion-with-cspc-2025-06-13/
2•colinprince•13m ago•0 comments

LLMs have loss aversion too

https://substack.com/inbox/post/170287169
2•mathattack•14m ago•1 comments

GPT-5 Livestream starts Aug 7 10am Pacific

https://x.com/i/trending/1953149560030949822
2•bretpiatt•15m ago•0 comments

The possibility of a giant impact on Venus

https://arxiv.org/abs/2508.03239
3•bikenaga•17m ago•0 comments

Fraudsters access KLM customer details in data breach

https://www.amlintelligence.com/2025/08/news-fraudsters-access-klm-customer-details-in-data-breach/
3•vinni2•19m ago•0 comments

Trump to Announce Additional $100B Apple Investment in U.S.

https://www.nytimes.com/2025/08/06/us/politics/trump-apple-investment.html
7•2OEH8eoCRo0•19m ago•0 comments

Minimize AI hallucinations and deliver up to 99% verification accuracy

https://aws.amazon.com/blogs/aws/minimize-ai-hallucinations-and-deliver-up-to-99-verification-accuracy-with-automated-reasoning-checks-now-available/
3•kurhan•20m ago•0 comments

80 Years Ago, Nuclear Annihilation Came to Japan

https://www.nytimes.com/2025/08/05/world/asia/hiroshima-nagasaki-japan-nuclear-photos.html
9•thm•21m ago•1 comments

Good context leads to good code: How we built an AI-Native Eng Culture

https://blog.stockapp.com/good-context-good-code/
4•waleedk•22m ago•1 comments

New Gemini app tools to help students learn, understand and study better

https://blog.google/products/gemini/new-gemini-tools-students-august-2025/
3•srameshc•24m ago•0 comments

An Open-Source Asynchronous Coding Agent

https://github.com/langchain-ai/open-swe
3•saikatsg•25m ago•0 comments

Car Reinforcement Learning Training

https://github.com/leesweqq/car_chase_robot_RL
4•kyleliiii•27m ago•1 comments

Intel struggles with key manufacturing process for next PC chip

https://www.reuters.com/world/asia-pacific/intel-struggles-with-key-manufacturing-process-next-pc-chip-sources-say-2025-08-05/
3•selimthegrim•28m ago•0 comments

Quick Read: What Happens If Every Light in the World Is Switched on at Once?

https://www.sciencealert.com/what-happens-if-every-light-in-the-world-is-switched-on-at-once
3•gautamsomani•28m ago•1 comments

Show HN: Text Symbols

https://symbol.so/
3•liquid99•28m ago•0 comments

We shouldn't have needed lockfiles

https://tonsky.me/blog/lockfiles/
38•tobr•2h ago

Comments

ratelimitsteve•2h ago
anyone find a way to get rid of the constantly shifting icons at the bottom of the screen? I'm trying to read and the motion keeps pulling my attention away from the words toward the dancing critters.
foobarbecue•2h ago
$("#presence").remove()

And yeah, I did that right away. Fun for a moment but extremely distracting.

vvillena•2h ago
Reader mode.
karmakurtisaani•2h ago
Agreed. It's an absolutely useless feature for me to see as well.
trinix912•2h ago
Block ###presence with UBlock.
bencevans•2h ago
https://times.hntrends.net/story/44813397
zahlman•1h ago
I use NoScript, which catches all of these sorts of things by default. I only enable first-party JS when there's a clear good reason why the site should need it, and third-party JS basically never beyond NoScript's default whitelist.
Joker_vD•2h ago
NPM has, starting with version 0.5.1, an absolutely lovely feature where it simply ignores the package-lock.json file altogether. Or to be more precise, "npm install" regenerates package-lock.json based on package.json. What's the point of "npm upgrade" then? Eh.
thunderfork•2h ago
You can use `npm ci` for "don't update the lockfile; fail if the lockfile and package.json can't be satisfied exactly"
0cf8612b2e1e•1h ago
I hate this reality.
omnicognate•2h ago
What if your project also uses librupa, which also depends on liblupa? Follow the chain of reasoning from that thought, or maybe spend a couple of decades dealing with the horror created by people who didn't, and you'll get to lockfiles.
hyperpape•2h ago
> But if you want an existence proof: Maven. The Java library ecosystem has been going strong for 20 years, and during that time not once have we needed a lockfile. And we are pulling hundreds of libraries just to log two lines of text, so it is actively used at scale.

Maven, by default, does not check your transitive dependencies for version conflicts. To do that, you need a frustrating plugin that produces much worse error messages than NPM does: https://ourcraft.wordpress.com/2016/08/22/how-to-read-maven-....

How does Maven resolve dependencies when two libraries pull in different versions? It does something insane. https://maven.apache.org/guides/introduction/introduction-to....

Do not pretend, for even half a second, that dependency resolution is not hell in maven (though I do like that packages are namespaced by creators, npm shoulda stolen that).

potetm•2h ago
The point isn't, "There are zero problems with maven. It solves all problems perfectly."

The point is, "You don't need lockfiles."

And that much is true.

(Miss you on twitter btw. Come back!)

jeltz•2h ago
You don't need package management by the same token. C is proof of that.

Having worked professionally in C, Java, Rust, Ruby, Perl, PHP I strongly prefer lock files. They make it so much nicer to manage dependencies.

potetm•1h ago
"There is another tool that does exactly the job of a lockfile, but better."

vs

"You can use make to ape the job of dependency managers"

wat?

jeltz•1h ago
I have worked with Maven and dependency management is a pain. Not much nicer than vendoring dependencies like you do for C. When I first was introduced to lock files that was amazing. It solved so many problems I had with vendored dependencies, CPAN and Maven.

Just because thousands of programmers manage to suffer through your bad system every day does not make it good.

aidenn0•1h ago
Now you're moving the goalposts; I think lockfiles that are checked-in to version control are superior to Maven's "Let's YOLO it if your transitive dependencies conflict." Version ranges are more expressive than single-versions, and when you add lockfiles you get deterministic builds.
deepsun•10m ago
I don't understand how Maven's YOLO is different from NPM's range.

If you force a transitive dependency in Maven, then yes, some other library may become incompatible with it. But in NPM, when people declare a dependency as, say, ~1.2.3, they also don't know if they will be compatible with a future 1.2.4 version. They just _assume_ the next patch release won't break anything. Yes, npm will try to find a version that satisfies all declarations, but library devs couldn't know the new version would be compatible because it wasn't published at that time.

And my point is that it's _exactly_ the same probability that the next patch version is incompatible in both Maven and NPM. That's why NPM users are not afraid to depend on ~x.x or even ^x.x; they're basically YOLOing.
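A quick sketch of what those range operators actually promise (hand-rolled parsing, not npm's real semver library):

```python
# Sketch of npm-style range operators (hypothetical, simplified check;
# npm's real matcher lives in the semver package):
#   ~1.2.3 allows patch updates    (>=1.2.3 <1.3.0)
#   ^1.2.3 allows minor and patch  (>=1.2.3 <2.0.0)

def parse(v):
    return tuple(int(x) for x in v.split("."))

def satisfies(version, spec):
    """Check a version string against a tilde or caret range."""
    op, base = spec[0], parse(spec[1:])
    v = parse(version)
    if v < base:
        return False
    if op == "~":                # patch-level changes only
        return v[:2] == base[:2]
    if op == "^":                # no major bump
        return v[0] == base[0]
    raise ValueError(f"unsupported range: {spec}")
```

So ~1.2.3 happily picks up a future 1.2.4, which is exactly the bet described above: the range author assumes an unpublished patch release won't break anything.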

cogman10•7m ago
Maven builds are deterministic (so long as you don't have SNAPSHOT dependencies). The version resolution is insane but deterministic. You'll only break that determinism if you change the dependencies.

That's precisely because maven doesn't support version ranges. Maven artifacts are also immutable.

Maven also supports manual overrides for when the insane resolution strategy fails: that's the "dependencyManagement" section.

hyperpape•1h ago
I think Maven's approach is functionally lock-files with worse ergonomics. You get your transitive dependency versions from the libraries you use, but you're waiting on those libraries to update.

As an escape hatch, you end up doing a lot of exclusions and overrides, basically creating a lockfile smeared over your pom.

P.S. Sadly, I think enough people have left Twitter that it's never going to be what it was again.

potetm•1h ago
Of course it's functionally lock files. They do the same thing!

There's a very strong argument that manually managing deps > auto updating, regardless of the ergonomics.

P.S. You're right, but also it's where the greatest remnant remains. :(

shadowgovt•11m ago
I fear it says something unfortunate about our entire subculture if the greatest remnant remains at the Nazi bar. :(

(To be generous: it might be that we didn't build our own bar the moment someone who is at least Nazi-tolerant started sniffing around for the opportunity to purchase the deed to the bar. The big criticism might be "we, as a subculture, aren't punk-rock enough.")

KerrAvon•6m ago
JFC, get off Twitter. It's a Nazi propaganda site and you are going to be affected by that even if you think you're somehow immune.
Karrot_Kream•14m ago
When I used to lead a Maven project I'd take dependency-upgrade tickets that would just be me bumping up a package version then whack-a-moling overrides and editing callsites to make dependency resolution not pull up conflicting packages until it worked. Probably lost a few days a quarter that way. I even remember the playlists I used to listen to when I was doing that work (:

Lockfiles are great.

simonw•2h ago
I see lockfiles as something you use for applications you are deploying - if you run something like a web app it's very useful to know exactly what is being deployed to production, make sure it exactly matches staging and development environments, make sure you can audit new upgrades to your dependencies etc.

This article appears to be talking about lockfiles for libraries - and I agree, for libraries you shouldn't be locking exact versions because it will inevitably play havoc with other dependencies.

Or maybe I'm missing something about the JavaScript ecosystem here? I mainly understand Python.

kaelwd•2h ago
The lockfile only applies when you run `npm install` in the project directory, other projects using your package will have their own lockfile and resolve your dependencies using only your package.json.
aidenn0•1h ago
I think you missed the point of the article. Consider Application A, that depends on Library L1. Library L1 in turn depends on Library L2:

A -> L1 -> L2

They are saying that A should not need a lockfile because it should specify a single version of L1 in its dependencies (i.e. using an == version check in Python), which in turn should specify a single version of L2 (again with an == version check).

Obviously if everybody did this, then we wouldn't need lockfiles (which is what TFA says). The main downsides (which many comments here point out) are:

1. Transitive dependency conflicts would abound

2. Security updates are no longer in the hands of the app developers (in my above example, the developer of A1 is dependent on the developer of L1 whenever a security bug happens in L2).

3. When you update a direct dependency, your transitive dependencies may all change, making what you thought was a small change into a big change.

(FWIW, I put these in order of importance to me; I find #3 to be a nothingburger, since I've hardly ever updated a direct dependency without it increasing the minimum version of at least one of its dependencies).
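The A -> L1 -> L2 scheme above can be sketched in a few lines (hypothetical registry data): with exact pins everywhere, resolution is a plain tree walk with no solver and nothing left to lock.

```python
# Sketch of the fully pinned scheme (hypothetical registry): every
# package names exactly one version of each dependency.

REGISTRY = {
    ("A", "1.0.0"): {"L1": "1.2.3"},
    ("L1", "1.2.3"): {"L2": "0.7.8"},
    ("L2", "0.7.8"): {},
}

def resolve(pkg, version, out=None):
    """Collect the exact transitive closure of (pkg, version)."""
    if out is None:
        out = {}
    out[pkg] = version
    for dep, dep_version in REGISTRY[(pkg, version)].items():
        resolve(dep, dep_version, out)
    return out

deps = resolve("A", "1.0.0")
```

Downside #1 shows up immediately: if a second path pinned a different L2, this walk would silently overwrite the entry instead of surfacing the conflict.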

hosh•1h ago
Is the article also suggesting that if there are version conflicts, it goes with the top level library? For example, if we want to use a secure version of L2, it would be specified at A, ignoring the version specified by L1?

Or maybe I misread the article and it did not say that.

aidenn0•1h ago
It's maybe implied since Maven lets you do that (actually it uses the shallowest dependency, with the one listed first winning ties), but the thrust of the article seems to be roughly: "OMGWTFBBQ we can't use L2 0.7.9 if L1 was only tested with 0.7.8!" so I don't know how the author feels about that.

[edit]

The author confirmed that they are assuming Maven's rules and added it to the bottom of their post.

lalaithion•2h ago
What if your program depends on library a1.0 and library b1.0, and library a1.0 depends on c2.1 and library b1.0 depends on c2.3? Which one do you install in your executable? Choosing one randomly might break the other library. Installing both _might_ work, unless you need to pass a struct defined in library c from a1.0 to b1.0, in which case a1.0 and b1.0 may expect different memory layouts (even if the public interface for the struct is the exact same between versions).

The reason we have dependency ranges and lockfiles is so that library a1.0 can declare "I need >2.1" and b1.0 can declare "I need >2.3" and when you depend on a1.0 and b1.0, we can do dependency resolution and lock in c2.3 as the dependency for the binary.
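That resolution step can be sketched like this (toy ">=" ranges and hypothetical version lists): find one version of c that satisfies both declarations, then record the choice, which is all a lockfile entry really is.

```python
# Sketch of range-based resolution (hypothetical data): a1.0 declares
# ">=2.1" and b1.0 declares ">=2.3" on the same library c; the resolver
# picks the highest published version meeting every floor.

def parse(v):
    return tuple(int(x) for x in v.split("."))

def resolve(available, constraints):
    """Highest available version meeting every '>=x.y' floor."""
    floors = [parse(c.removeprefix(">=")) for c in constraints]
    ok = [v for v in available if all(parse(v) >= f for f in floors)]
    if not ok:
        raise RuntimeError("no version satisfies all constraints")
    return max(ok, key=parse)

locked = resolve(["2.1", "2.2", "2.3"], [">=2.1", ">=2.3"])  # a's and b's ranges
```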

tonsky•1h ago
One of the versions will be picked up. If that version doesn’t work, you can try another one. The process is exactly the same
Joker_vD•1h ago
> If that version doesn’t work, you can try another one.

And what will this look like, if your app doesn't have library C mentioned in its dependencies, only libraries A and B? You are prohibited from answering "well, just specify all the transitive dependencies manually" because that is precisely what a lockfile is/does.

tonsky•1h ago
Maven's version resolution mechanism determines which version of a dependency to use when multiple versions are specified in a project's dependency tree. Here's how it works:

- Nearest Definition Wins: When multiple versions of the same dependency appear in the dependency tree, the version closest to your project in the tree will be used.

- First Declaration Wins: If two versions of the same dependency are at the same depth in the tree, the first one declared in the POM will be used.
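Those two rules amount to a breadth-first walk that keeps the first version of each artifact it encounters: the shallowest definition wins, and declaration order breaks ties. A sketch with a hypothetical tree:

```python
# Sketch of Maven-style "nearest wins" mediation (hypothetical tree;
# not Maven's actual implementation).

from collections import deque

def maven_resolve(tree, root):
    """tree maps (name, version) -> ordered list of (name, version) deps."""
    chosen = {}
    queue = deque([root])
    while queue:
        name, version = queue.popleft()
        if name in chosen:      # a nearer or earlier definition already won
            continue
        chosen[name] = version
        queue.extend(tree.get((name, version), []))
    return chosen

tree = {
    ("app", "1.0"): [("a", "1.0"), ("b", "1.0")],
    ("a", "1.0"): [("c", "2.1")],
    ("b", "1.0"): [("c", "2.3")],
}
resolved = maven_resolve(tree, ("app", "1.0"))  # c 2.1 wins: same depth, declared first
```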

Joker_vD•50m ago
Well, I guess this works if newly-added dependencies are appended at the end of the section in the pom.xml, instead of the file being regenerated alphabetically sorted just in time for the build.
deredede•1h ago
It's not "all the transitive dependencies". It's only the transitive dependencies you need to explicitly specify a version for because the one that was specified by your direct dependency is not appropriate for X reason.
deredede•1h ago
Alternative answer: both versions will be picked up.

It's not always the correct solution, but sometimes it is. If I have a dependency that uses libUtil 2.0 and another that uses libUtil 3.0 but neither exposes types from libUtil externally, or I don't use functions that expose libUtil types, I shouldn't have to care about the conflict.

shadowgovt•3m ago
[delayed]
egh•2h ago
We've all learned about things, not understood them, and thought "wow, these people must be idiots. why would they have made this complicated thing? makes no sense whatsoever. I can't believe these people, idiots, never thought this through like I have."

Most of us, fortunately, don't post these thoughts to the internet for anybody to read.

zahlman•1h ago
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

While I share the view that TFA is misguided in some ways, this isn't a productive or insightful way to make the point.

tonsky•1h ago
I worked for 20 years in an ecosystem that didn’t have lockfiles and had reproducible builds before the term was invented, and now you come and tell me that it couldn’t be?
spooky_deep•1h ago
Where did the packages come from? How did you know they hadn’t been changed at the bit-level since last time?
boscillator•2h ago
Ok, but what happens when lib-a depends on lib-x:0.1.4 and lib-b depends on lib-x:0.1.5, even though it could have worked with any lib-x:0.1.*? Are these libraries just incompatible now? Lockfiles don't guarantee that new versions are compatible, but they guarantee that if your code works in development, it will work in production (at least in terms of dependencies).

I assume java gets around this by bundling libraries into the deployed .jar file. This is better than a lock file, but doesn't make sense for scripting languages that don't have a build stage. (You won't have trouble convincing me that every language should have a proper build stage, but you might have trouble convincing the millions of lines of code already written in languages that don't.)

aidenn0•1h ago
> I assume java gets around this by bundling libraries into the deployed .jar file. This is better than a lock file, but doesn't make sense for scripting languages that don't have a build stage. (You won't have trouble convincing me that every language should have a proper build stage, but you might have trouble convincing the millions of lines of code already written in languages that don't.)

You are wrong; Maven just picks one of lib-x:0.1.4 or lib-x:0.1.5 depending on the ordering of the dependency tree.

epage•2h ago
Let's play this out in a compiled language like Cargo.

If every dependency was a `=` and cargo allowed multiple versions of SemVer compatible packages.

The first impact will be that your build will fail. Say you are using `regex` and you are interacting with two libraries that take a `regex::Regex`. All of the versions need to align to pass `Regex` between yourself and your dependencies.

The second impact will be that your builds will be slow. People are already annoyed when there are multiple SemVer incompatible versions of their dependencies in their dependency tree, now it can happen to any of your dependencies and you are working across your dependency tree to get everything aligned.

The third impact is if you, as the application developer, need a security fix in a transitive dependency. You now need to work through the entire bubble up process before it becomes available to you.

Ultimately, lockfiles are about giving the top-level application control over its dependency tree, balanced against build times and cross-package interoperability. Similarly, SemVer is a tool for any library with transitive dependencies [0].

[0] https://matklad.github.io/2024/11/23/semver-is-not-about-you...

hosh•1h ago
Wasn’t the article suggesting that the top level dependencies override transitive dependencies, and that could be done in the main package file instead of the lock file?
junon•41m ago
You should not be editing your Cargo.lock file manually. Cargo gives you a first-class way of overriding transitive dependencies.
oblio•36m ago
Java is compiled, FYI.
matklad•36m ago
This scheme _can_ be made to work in the context of Cargo. You can have all of:

* Absence of lockfiles

* Absence of the central registry

* Cryptographically checksummed dependency trees

* Semver-style unification of compatible dependencies

* Ability for the root package to override transitive dependencies

At the cost of

* minver-ish resolution semantics

* deeper critical path in terms of HTTP requests for resolving dependencies

The trick is that, rather than using crates.io as the universe of package versions to resolve against, you look only at the subset of package versions reachable from the root package. See https://matklad.github.io/2024/12/24/minimal-version-selecti...
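The minimal-version-selection idea reduces to "take the largest of the stated minimums" rather than the registry's latest; a sketch with hypothetical packages (real MVS first walks the graph to find which requirements are reachable from the root):

```python
# Sketch of minimal-version-selection semantics (hypothetical edges):
# every requirement states a minimum, and the build takes, per package,
# the largest stated minimum. Nothing ever reaches for "latest", so the
# answer only changes when a manifest changes: deterministic, no lockfile.

def parse(v):
    return tuple(int(x) for x in v.split("."))

def mvs(requirements):
    """requirements: (package, minimum_version) edges reachable from the root."""
    chosen = {}
    for pkg, floor in requirements:
        if pkg not in chosen or parse(floor) > parse(chosen[pkg]):
            chosen[pkg] = floor
    return chosen

chosen = mvs([("libpupa", "1.2.0"), ("liblupa", "0.7.8"), ("liblupa", "0.7.5")])
```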

trjordan•2h ago
There is absolutely a good reason for version ranges: security updates.

When I, the owner of an application, choose a library (libuseful 2.1.1), I think it's fine that the library author uses other libraries (libinsecure 0.2.0).

But in 3 months, libinsecure is discovered (surprise!) to be insecure. So they release libinsecure 0.2.1, because they're good at semver. The libuseful library authors, meanwhile, are on vacation because it's August.

I would like to update. Turns out libinsecure's vulnerability is kind of a big deal. And with fully hardcoded dependencies, I cannot, without some horrible annoying work like forking/building/repackaging libuseful. I'd much rather libuseful depend on libinsecure 0.2.*, even if libinsecure isn't terribly good at semver.

I would love software to be deterministically built. But as long as we have security bugs, the current state is a reasonable compromise.

tonsky•1h ago
It’s totally fine in Maven, no need to rebuild or repackage anything. You just override the version of libinsecure in your pom.xml and it uses the version you told it to.
zahlman•1h ago
So you... manually re-lock the parts you need to?
aidenn0•1h ago
Don't forget the part where Maven silently picks one version for you when there are transitive dependency conflicts (and no, it's not always the newest one).
deredede•1h ago
Sure, I'm happy with locking the parts I need to lock. Why would I lock the parts I don't need to lock?
deredede•1h ago
What if libinsecure 0.2.1 is the version that introduces the vulnerability, do you still want your application to pick up the update?

I think the better model is that your package manager let you do exactly what you want -- override libuseful's dependency on libinsecure when building your app.

trjordan•12m ago
Of course there's no 0-risk version of any of this. But in my experience, bugs tend to get introduced with features, then slowly ironed out over patches and minor versions.

I want no security bugs, but as a heuristic, I'd strongly prefer the latest patch version of all libraries, even without perfect guarantees. Code rots, and most versioning schemes are designed with that in mind.

alexandrehtrb•2h ago
I completely agree.

.NET doesn't have lock files either, and its dependency tree runs great.

Using fixed versions for dependencies is a best practice, in my opinion.

horsawlarway•1h ago
This is wrong. .NET uses packages.lock.json explicitly to support the case where you want to lock transitive dependencies that are specified with a range value, or several other edge cases that might warrant explicitly declaring versions absent from csproj or sln files.

https://devblogs.microsoft.com/dotnet/enable-repeatable-pack...

https://learn.microsoft.com/en-us/nuget/consume-packages/pac...

Again - there's no free lunch here.

andix•2h ago
Lockfiles are essential for somewhat reproducible builds.

If a transitive dependency (one not directly referenced) updates, this might introduce different behavior. If you test a piece of software and fix some bugs, the next build shouldn't contain completely different versions of dependencies. That might introduce new bugs.

tonsky•1h ago
> Lockfiles are essential for somewhat reproducible builds.

No they are not. Fully reproducible builds have existed without lockfiles for decades

its-summertime•1h ago
of distros, they usually refer to an upstream by hash

https://src.fedoraproject.org/rpms/conky/blob/rawhide/f/sour...

also of flathub

https://github.com/flathub/com.belmoussaoui.ashpd.demo/blob/...

"they are not lockfiles!" is a debatable separate topic, but for a wider disconnected ecosystem of sources, you can't really rely on versions being useful for reproducibility

andix•1h ago
> they usually refer to an upstream by hash

exactly the same thing as a lockfile

andix•1h ago
Sure, without package managers.

It's also not about fully reproducible builds; it's about a tradeoff to get a modern package manager (npm, cargo, ...) experience and also somewhat reproducible builds.

pluto_modadic•32m ago
...source?

show me one "decades old build" of a major project that isn't based on 1) git hashes 2) fixed semver URLs or 3) exact semver in general.

jedberg•1h ago
The entire article is about why this isn't the case.
andix•1h ago
It suggests a way more ridiculous fix, as mentioned by other comments in detail (security patches for transitive dependencies, multiple references to the same transitive dependency).
zokier•2h ago
umm wat

> “But Niki, you can regenerate the lockfile and pull in all the new dependencies!”

> Sure. In exactly the same way you can update your top-level dependencies.

How does updating top-level deps help with updating leaf packages? Is the author assuming that whenever a leaf package is updated, every other package up the dep chain immediately gets a new release? That is fundamentally impossible, considering the releases would need to happen serially.

tonsky•1h ago
I updated the post, see near the bottom
nine_k•2h ago
The author seems to miss the point of version ranges. Yes, specific versions of dependencies get frozen in the lock file at the moment of building. But the only way to determine these specific versions is to run version resolution across the whole tree. The process finds out which specific versions within the ranges can be chosen to satisfy all the version constraints.

This works with minimal coordination between authors of the dependencies. It becomes a big deal when you have several unrelated dependencies, each transitively requiring that libpupa. The chance they converge on the same exact version is slim. The chance a satisfying version can be found within specified ranges is much higher.

Physical things that are built from many parts have the very same limitation: they need to specify tolerances to account for the differences in production, and would be unable to be assembled otherwise.

tonsky•1h ago
Yeah, but version ranges are fiction. Someone says: we require libpupa 0.2.0+. Sure, you can find a version in that range. But what if it doesn’t work? How can you know in advance that your library will work with all future libpupa releases?
freetonik•1h ago
In the world of Python-based end-user libraries the pinned (non-ranged) versions result in users being unable to use your library in an environment with other libraries. I’d love to lock my library to numpy 2.3.4, but if the developers of another library pin theirs to 2.3.5 then game over.

For server-side or other completely controlled environments the only good reason to have lock files is if they are actually hashed and thus allow to confirm security audits. Lock files without hashes do not guarantee security (depending on the package registry, of course, but at least in Python world (damn it) the maintainer can re-publish a package with an existing version but different content).
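The hashed-lock idea can be sketched as follows (hypothetical entries; this is roughly the check behind pip's --require-hashes mode): a version number alone can't detect a re-published artifact, but a content hash can.

```python
# Sketch: a lock entry that records a sha256 of the artifact bytes,
# so a silent re-publish under the same version number fails to verify.

import hashlib

def lock_entry(name, version, content: bytes):
    """Build a lock record that pins both version and content."""
    return {"name": name, "version": version,
            "sha256": hashlib.sha256(content).hexdigest()}

def verify(entry, content: bytes):
    """True only if the fetched bytes match the locked hash."""
    return hashlib.sha256(content).hexdigest() == entry["sha256"]

entry = lock_entry("libfoo", "1.0.0", b"original wheel bytes")
```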

tonsky•1h ago
> I’d love to lock my library to numpy 2.3.4, but if the developers of another library pin theirs to 2.3.5 then game over.

Why? Can’t you specify which version to use?

spooky_deep•1h ago
> The important point of this algorithm is that it’s fully deterministic.

The algorithm can be deterministic, but fetching the dependencies of a package is not.

It is usually an HTTP call to some endpoint that might flake out or change its mind.

Lock files were invented to make it either deterministic or fail.

Even with Maven, deterministic builds (such as with Bazel) lock the hashes down.

This article is mistaken.

horsawlarway•1h ago
This is a great example of chesterton's fence.

The author of this piece doesn't understand why a top-level project might want control of its dependencies' dependencies.

That's the flaw in this whole article, if you can't articulate why it's important to be able to control those... don't write an article. You don't understand the problem space.

Semantic versioning isn't perfect, but it's more than a "hint", and it sure as hell beats having to manually patch (or fork) an entire dependency chain to fix a security problem.

aidenn0•1h ago
Author puts up Maven as an example of no lockfiles. Maven does allow a top-level project to control its transitive dependencies (when there is a version conflict, the shallowest dependency wins; the trivial version of this is if you specify it as a top-level dependency).

I think rather that the author doesn't realize that many people in the lockfile world put their lockfiles under version control. Which makes builds reproducible again.

horsawlarway•58m ago
Yes, but Maven doesn't support reproducibility (outside of plugins that basically haul in a lockfile). So his whole point is moot (Gradle now does, as an aside: https://docs.gradle.org/current/userguide/dependency_locking...)

Again - I don't think the author is aware enough of the problem space to be making the sort of claim that he is. He doesn't understand the problem lockfiles are solving, so he doesn't know why they exist and wants them gone... chesterton's fence in action.

---

Directly declaring deps is great. It's so great that we'd like to do it for every dependency in many (arguably most) cases. But doing that really sort of sucks when you start getting into even low 10s of deps. Enter... lockfiles and the tooling to auto-resolve them.

junon•36m ago
I think people forget NPM added package-lock.json for the npm@5 release that was rushed out the door to match the next node.js major and was primarily to cut down on server traffic costs as they weren't making money from the FOSS community to sustain themselves.
palotasb•1h ago
The author is perhaps presenting a good argument for languages/runtimes like JavaScript/Node where dependencies may be isolated and conflicting dependencies may coexist in the dependency tree (e.g., "app -> { libpupa 1.2.3 -> liblupa 0.7.8 }, { libxyz 2.0 -> liblupa 2.4.5 }" would be fine), but the proposed dependency resolution algorithm...

> Our dependency resolution algorithm thus is like this:

> 1. Get the top-level dependency versions

> 2. Look up versions of libraries they depend on

> 3. Look up versions of libraries they depend on

...would fail in languages like Python where dependencies are shared, and the steps 2, 3, etc. would result in conflicting versions.

In these languages, there is good reason to define dependencies in a relaxed way (with constraints that exclude known-bad versions, but without pins to any specific known-to-work version and without constraining only to existing known-good versions) at first. This way dependency resolution always involves some sort of constraint solving (with indeterminate results due to the constraints being open-ended), but then, for the sake of reproducibility, the result of the constraint solving process may be used as a lockfile. In the Python world this is only done in the final application (the final environment running the code; this may be the test suite for a pure library), and the pins in the lock aren't published for anyone to reuse.

To reiterate, the originally proposed algorithm doesn't work for languages with shared dependencies. Using version constraints and then lockfiles as a two-layer solution is a common and reasonable way of resolving the dependency topic in these languages.
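The two-layer idea can be sketched in a few lines, assuming a toy index and the thread's made-up package names (this is not any real resolver): open-ended ranges from the whole tree are solved down to exactly one version per package, and that result is what a lockfile would freeze. The article's top-down algorithm has no step for reconciling the conflicting case.

```python
# Toy solver for a shared-dependency world (one version per package).
# Package names, versions, and ranges are hypothetical, from the thread.

# Hypothetical index: package -> available versions, oldest first.
INDEX = {
    "liblupa": ["0.7.8", "0.7.9", "2.4.5"],
}

def parse(v):
    """Turn '0.7.9' into (0, 7, 9) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def solve(package, constraints):
    """Pick the newest version satisfying every (low_incl, high_excl) range,
    or raise if the ranges are mutually exclusive."""
    for v in reversed(INDEX[package]):
        if all(parse(lo) <= parse(v) < parse(hi) for lo, hi in constraints):
            return v
    raise RuntimeError(f"no version of {package} satisfies all constraints")

# Compatible ranges collapse to a single shared version -> lockable result:
lock = {"liblupa": solve("liblupa", [("0.7.8", "1.0.0"), ("0.7.0", "1.0.0")])}
print(lock)  # {'liblupa': '0.7.9'}

# ...but the conflicting case from the comment (<1.0 vs >=2.0) cannot:
try:
    solve("liblupa", [("0.7.8", "1.0.0"), ("2.0.0", "3.0.0")])
except RuntimeError as e:
    print(e)
```

Real solvers (pip, Poetry, Bundler) do backtracking over the whole graph rather than this greedy pick, but the shape of the problem is the same: one environment, one version per package.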

hosh•1h ago
What if the top level can override the transitive dependencies?

I have had to do that with Ruby apps, where libraries are also shared.

tonsky•1h ago
> would fail in languages like Python where dependencies are shared

And yet Java and Maven exist...

zahlman•1h ago
> Imagine you voluntarily made your build non-reproducible by making them depend on time. If I build my app now, I get libpupa 1.2.3 and liblupa 0.7.8. If I repeat the same build in 10 minutes, I’ll get liblupa 0.7.9. Crazy, right? That would be chaos.

No; in fact it's perfectly reasonable, and at the core of what the author doesn't seem to get. Developers have motivations other than reproducibility. The entire reason we have version number schemes like this is so that we can improve our code while also advertising reasonable expectations about compatibility. If we have dependents, then hopefully this also improves their UX indirectly — whether by taking advantage of optimizations we made, not encountering bugs that were actually our fault, etc. Similarly, if we have dependencies, we can seek to take advantage of that.

Upgrading environments is an opportunity to test new configurations, and see if they're any better than what's in existing lockfiles.

> But this is what version ranges essentially are. Instead of saying “libpupa 1.2.3 depends on liblupa 0.7.8”, they are saying “libpupa 1.2.3 depends on whatever the latest liblupa version is at the time of the build.”

But also, developers aren't necessarily using the latest versions of their dependencies locally anyway. If I did pin a version in my requirements, it'd be the one that I tested the build with, not necessarily the one that was most recently released at the time of the build. Not everyone runs an industrial-strength CI system, and for the size of lots of useful packages out there, they really shouldn't have to, either. (And in the pathological case, someone else could re-release while I'm building and testing!)

> But... why would libpupa’s author write a version range that includes versions that don’t exist yet? How could they know that liblupa 0.7.9, whenever it will be released, will continue to work with libpupa? Surely they can’t see the future? Semantic versioning is a hint, but it has never been a guarantee.

The thing about this is that "work with [a dependency]" is not really a binary. New versions also fix things — again, that's the main reason that new versions get released in the first place. Why would I keep writing the software after it's "done" if I don't think there's anything about it that could be fixed?

For that matter, software packages break for external reasons. If I pin my dependency, and that dependency is, say, a wrapper for a third-party web API, and the company operating that website makes a breaking change to the API, then I just locked myself out of new versions of the dependency that cope with that change.

In practice, there are good reasons to not need a guarantee and accept the kind of risk described. Lockfiles exist for those who do need a guarantee that their local environment will be set in concrete (which has other, implicit risks).

I see it as much like personal finance. Yes, investments beyond a HISA may carry some kind of risk. This is worthwhile for most people. And on the flip side, you also can't predict the future inflation rate, and definitely can't predict what will happen to the price of the individual goods and services you care about most.

> The funny thing is, these version ranges end up not being used anyway. You lock your dependencies once in a lockfile and they stay there, unchanged. You don’t even get the good part!

??? What ecosystem is this author talking about? Generating a lockfile doesn't cause the underlying dependency metadata to disappear. You "get the good part" as a developer by periodically regenerating a lockfile, testing the resulting environment and shipping the new lock. Or as a user by grabbing a new lockfile, or by just choosing not to use provided lockfiles.

> “But Niki, you can regenerate the lockfile and pull in all the new dependencies!” Sure. In exactly the same way you can update your top-level dependencies.

Has the author tried both approaches, I wonder?

Not to mention: the lockfile-less world the author describes would require everyone to pin dependency versions. In practice, this would require dropping support for anything else in the metadata format. And (I did have to look it up) this appears to be the world of Maven that gets cited at the end (cf. https://stackoverflow.com/questions/44521542).

I like choice and freedom in my software, thank you.

> “But Niki, lockfiles help resolve version conflicts!” In what way? Version conflicts don’t happen because of what’s written in dependency files.

Perhaps the author hasn't worked in an ecosystem where people routinely attempt to install new packages into existing environments? Or one where users don't want to have multiple point versions of the same dependency downloaded and installed locally if one of them would satisfy the requirements of other software? Or where dependency graphs never end up having "diamonds"? (Yes, there are package managers that work around this, but not all programming languages can sanely support multiple versions of the same dependency in the same environment.)

aidenn0•1h ago
> No; in fact it's perfectly reasonable, and at the core of what the author doesn't seem to get. Developers have motivations other than reproducibility. The entire reason we have version number schemes like this is so that we can improve our code while also advertising reasonable expectations about compatibility. If we have dependents, then hopefully this also improves their UX indirectly — whether by taking advantage of optimizations we made, not encountering bugs that were actually our fault, etc. Similarly, if we have dependencies, we can seek to take advantage of that.

I'm actually with the author on this one, but checking-in your lockfile to version-control gets you this.

Joker_vD•41m ago
> No; in fact it's perfectly reasonable,

And this is how I once ended up spending a Friday evening in a frantic hurry, because a dependency decided to drop support for "old" language versions (that is, all except the two newest ones) in a patch-level update. And by "drop support" I mean "explicitly forbid building with language versions older than this one".

> The entire reason we have version number schemes like this is so that we can improve our code while also advertising reasonable expectations about compatibility.

Except, of course, some library authors deliberately break semver because they just hate it; see e.g. the quote in [0], slightly down the page.

[0] https://dmytro.sh/blog/on-breaking-changes-in-transitive-dep...

jedberg•1h ago
Tangential, but what is up with all those flashing icons at the bottom of the page? It made it nearly unreadable.
imtringued•1h ago
I disagree with this blogpost in its entirety. Lockfiles are neither unnecessary, nor are they complicated. The argument presented against lockfiles boils down to a misrepresentation. I also dislike the presentation using the godawful yellow color and the stupid websocket gadget in the footer.

The entire point of lockfiles is to let the user decide when the version resolution algorithm should execute and when it shouldn't. That's all they do and they do it exactly as promised.

hosh•1h ago
Many comments talk about how top-level and transitive dependencies can conflict. I think the article is suggesting you can resolve those by specifying them in the top-level package and overriding any transitive package versions. If we are doing that anyway, it circles back to whether lock files are necessary.

Given that, I still see some consequences:

The burden for testing if a library can use its dependency falls back on the application developer instead of the library developer. A case could be made that, while library developers should test what their libraries are compatible with, the application developer has the ultimate responsibility for making sure everything can work together.

I also see that there would need to be tooling to automate resolutions. If ranges are retained, the resolver needs to report every conflict and force the developer to explicitly specify the version they want at the top-level. Many package managers automatically pick one and write it into the lock file.

If we don’t have lock files, and we want it to be automatic, then we can have it write to the top level package manager and not the lock file. That creates its own problems.

One of those problems comes from humans and tooling writing to the same configuration file. I have seen that idea cause problems before, most recently with letting Let's Encrypt modify nginx configs: now I have to edit those manually, and Let's Encrypt can no longer manage them. Arguably, LLMs could cope with that, but I am a pessimist when it comes to LLM capabilities.

So in conclusion, I think the article writer's reasoning is sound but incomplete. Humans don't need lockfiles, but our tooling needs lockfiles until it is capable of working with the chaos of human-managed package files.

hosh•1h ago
If the dependencies are specified as data, such as in package.json or a YAML or XML file, it may be structured enough that tools can still manage it. npm install has a --save flag that does exactly that. Python dep files may be structured enough for this as well.

If the package specification file is code and not data, then this becomes more problematic. Elixir specifies deps as data within code. Arguably, we could add code to read and write from a separate file… but at that point, those might as well be lock files.
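The data-shaped case is easy to picture: when the manifest is plain JSON, a tool can rewrite a range into the pin it actually resolved, with no separate lockfile involved. A minimal sketch, with hypothetical file contents and the thread's made-up package name:

```python
# Sketch: a resolver writing its chosen pin back into a data-shaped manifest.
# The manifest contents and package name are hypothetical.
import json

manifest = json.loads('{"dependencies": {"liblupa": "^0.7.8"}}')

# Suppose the resolver picked 0.7.9 out of the range; it can simply
# overwrite the range in place, because JSON round-trips cleanly:
manifest["dependencies"]["liblupa"] = "0.7.9"

print(json.dumps(manifest, indent=2))
```

This round-trip is what breaks down when the manifest is arbitrary code (a mix.exs or setup.py): a tool can no longer safely rewrite it without parsing and regenerating the program itself.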

10000truths•1h ago
When discoverability and versioning of libraries is more-or-less standardized (a la Cargo/PyPI/NPM), automated tooling for dependency resolution/freezing follows naturally. The build tools for Java and C historically did not have this luxury, which is why their ecosystems have a reputation for caring a lot about backwards compatibility.
andy99•52m ago
In case the author is reading, I can't read your article because of that animation at the bottom. I get it, it's cute, but it makes it too distracting to concentrate on the article, so I ended up just closing it.
somehnguy•48m ago
I read the article but that animation was incredibly distracting. I don't even understand what it's for; clicking it does nothing. My best guess is that it represents how many people are active on the page.
fennecbutt•34m ago
It also covers a whole 1/4 of the screen on mobile...
Aaargh20318•28m ago
It covers 90% of the screen on iPad
fellowniusmonk•17m ago
I've never seen something so egregious before; it made it impossible to read without covering it with my hand.

But I realized something by attempting to read this article several times first.

If I ever want to write an article and reduce people's ability to critically engage with its argument, I should add a focus-pulling animation that thwarts concerted focus.

It's like the blog equivalent of public speakers who ramble their audience into a coma.

mvieira38•13m ago
Give in to the noJS movement, there's no animation and it's a beautiful minimalistic site if you disable javascript
modernerd•11m ago
I did document.querySelector('#presence').remove();
appease7727•2m ago
Wow, that's one of the most abhorrent web designs I've ever seen
kaptainscarlet•47m ago
I somewhat agree, because the main package file (e.g. package.json) can act as a lock file if you pin packages to specific versions.
whilenot-dev•23m ago
No tag other than latest has any special significance to npm itself. Tags can be republished and that's why integrity checks should be in place. Supply chain attacks are happening in open source communities, sadly.
bunjeejmpr•44m ago
This metadata should be at the top of your source as documentation.

We need the metadata. Not a new container.

wedn3sday•39m ago
I absolutely abhor the design of this site. I cannot engage with the content as Im filled with a deep burning hatred of the delivery. Anyone making a personal site: do not do this.
shadowgovt•15m ago
This author's approach would probably work "fine" (1) for something like npm, where individual dependencies also have a subtree of their dependencies (and, by extension, "any situation where dependencies are statically linked").

It doesn't work at all for something like Python. In Python, libpupa 1.2.3 depends on liblupa 0.7.8. But libsupa 4.5.6 depends on liblupa 0.7.9. Since the Python environment can only have one version of each module at a time, I need to decide on a universe in which libpupa and libsupa can both have their dependencies satisfied simultaneously. Version ranges give me multiple possible universes, and then for reproducibility (2) I use a lockfile to define one.

(1) npm's dependencies-of-dependencies design introduces its own risks and sharp edges. liblupa has a LupaStuff data structure in it. It changed very subtly between v0.7.8 and v0.7.9, so subtly that the author didn't think to bump the minor version. And that's okay, because both libpupa and libsupa should be wrapping their dependent data structures in an opaque interface anyway; they shouldn't be just barfing liblupa-generated structs into their client code. Oh, you think people actually encapsulate like that? You're hilarious. So eventually, a LupaStuff generated by libpupa is going to get passed to libsupa, which is actually expecting a subtly different struct. Will it work? Hahah, who knows! Python actually avoids this failure mode by forcing one coherent environment; since 'pupa and 'supa have to be depending on the same 'lupa (without very fancy module shenanigans), you can have some expectation that their LupaStuff objects will be compatible.

(2) I think the author is hitting on something real though, which is that semantic versioning is a convention, not a guarantee; nobody really knows if your code working with 0.7.8 implies it will work with 0.7.9. It should. Will it? "Cut yourself and find out." In an ideal world, every dependency-of-a-dependency pairing has been hand-tested by someone before it gets to you; in practice, individual software authors are responsible for one web of dependencies, and the Lockfile is a candle in the darkness: "Well, it worked on my machine in this configuration."
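The failure mode in footnote (1) can be modeled in a few lines, using plain Python classes to stand in for two vendored copies of the same library (all names are the thread's hypothetical ones): the two structs look identical, but a value built by one copy fails the other copy's type check.

```python
# Sketch of the nested-dependency hazard: two copies of "the same" type.
# In an npm-style tree, libpupa and libsupa each bundle their own liblupa,
# so they see distinct classes even when the fields match. All names are
# hypothetical, taken from the article's example.

class LupaStuff_v078:          # liblupa 0.7.8, as vendored by libpupa
    def __init__(self, data):
        self.data = data

class LupaStuff_v079:          # liblupa 0.7.9, as vendored by libsupa
    def __init__(self, data):
        self.data = data

def libsupa_consume(stuff):
    # libsupa type-checks against *its* copy of the class...
    if not isinstance(stuff, LupaStuff_v079):
        raise TypeError("not a LupaStuff, from libsupa's point of view")
    return stuff.data

obj = LupaStuff_v078("payload")    # ...but this one came via libpupa's copy
try:
    libsupa_consume(obj)
except TypeError as e:
    print(e)   # identical-looking structs, incompatible identities
```

A single shared environment (the Python model) sidesteps this by construction: there is only one liblupa, so there is only one LupaStuff.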

broken_broken_•2m ago
I agree with the premise, just use a specific version of your dependencies, that’s generally fine.

However: You absolutely do need a lock file to store a cryptographic hash of each dependency to ensure that what is fetched has not been tampered with. And users are definitely not typing a hash when adding a new dependency to package.json or Cargo.toml.
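That integrity role is separate from version pinning, and it can be sketched with a made-up lock entry: the lock records a cryptographic hash of the artifact, and a re-fetch is rejected if the bytes differ, even when the version string matches.

```python
# Sketch: hash verification against a lockfile entry. The package name,
# version, and artifact bytes below are made up for illustration.
import hashlib

lock_entry = {
    "name": "liblupa",
    "version": "0.7.8",
    "sha256": hashlib.sha256(b"pretend tarball bytes").hexdigest(),
}

def verify(fetched_bytes, entry):
    """Reject an artifact whose hash doesn't match the lock, regardless
    of what version the registry claims it is."""
    digest = hashlib.sha256(fetched_bytes).hexdigest()
    if digest != entry["sha256"]:
        raise RuntimeError(f"{entry['name']}: hash mismatch, possible tampering")
    return True

print(verify(b"pretend tarball bytes", lock_entry))   # True
try:
    verify(b"tampered tarball bytes", lock_entry)
except RuntimeError as e:
    print(e)
```

This is the same idea behind npm's `integrity` field and pip's `--require-hashes` mode: a pin alone trusts the registry to keep serving the same bytes for a version; a hash does not.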