frontpage.

Open-source Zig book

https://www.zigbook.net
159•rudedogg•2h ago•51 comments

The fate of "small" open source

https://nolanlawson.com/2025/11/16/the-fate-of-small-open-source/
91•todsacerdoti•2h ago•53 comments

Tracking users with favicons, even in incognito mode

https://github.com/jonasstrehle/supercookie
79•vxvrs•2h ago•19 comments

Heretic: Automatic censorship removal for language models

https://github.com/p-e-w/heretic
326•melded•6h ago•113 comments

The Pragmatic Programmer: 20th Anniversary Edition

https://www.ahalbert.com/technology/2023/12/19/the_pragmatic_programmer.html
20•ahalbert2•1h ago•1 comment

Dark Pattern Games

https://www.darkpattern.games
36•robotnikman•2h ago•14 comments

Z3 API in Python: From Sudoku to N-Queens in Under 20 Lines

https://ericpony.github.io/z3py-tutorial/guide-examples.htm
60•amit-bansil•3h ago•0 comments

What if you don't need MCP at all?

https://mariozechner.at/posts/2025-11-02-what-if-you-dont-need-mcp/
45•jdkee•3h ago•20 comments

I have recordings proving Coinbase knew about breach months before disclosure

https://jonathanclark.com/posts/coinbase-breach-timeline.html
173•jclarkcom•1h ago•57 comments

I finally understand Cloudflare Zero Trust tunnels

https://david.coffee/cloudflare-zero-trust-tunnels
61•eustoria•4h ago•17 comments

FPGA Based IBM-PC-XT

https://bit-hack.net/2025/11/10/fpga-based-ibm-pc-xt/
116•andsoitis•6h ago•22 comments

Linux mode setting, from the comfort of OCaml

https://roscidus.com/blog/blog/2025/11/16/libdrm-ocaml/
27•ibobev•2h ago•3 comments

Decoding Leibniz Notation (2024)

https://www.spakhm.com/leibniz
20•coffeemug•3h ago•0 comments

Fourier Transforms

https://www.continuummechanics.org/fourierxforms.html
80•o4c•1w ago•10 comments

Your Land, My Land (Offrange) – Lithium vs. Lettuce in the Imperial Valley, CA

https://ambrook.com/offrange/photo-essay/lithium-v-lettuce
13•mfburnett•1d ago•0 comments

Shell Grotto, Margate

https://en.wikipedia.org/wiki/Shell_Grotto,_Margate
10•Michelangelo11•1w ago•1 comment

Brimstone: ES2025 JavaScript engine written in Rust

https://github.com/Hans-Halverson/brimstone
178•ivankra•10h ago•84 comments

Peter Thiel sells off all Nvidia stock, stirring bubble fears

https://www.thestreet.com/investing/peter-thiel-dumps-top-ai-stock-stirring-bubble-fears
20•hypeatei•39m ago•8 comments

Anthropic’s paper smells like bullshit

https://djnn.sh/posts/anthropic-s-paper-smells-like-bullshit/
757•vxvxvx•10h ago•236 comments

Garbage collection is useful

https://dubroy.com/blog/garbage-collection-is-useful/
104•surprisetalk•8h ago•26 comments

The Man Who Keeps Predicting the Web's Death

https://tedium.co/2025/10/25/web-dead-predictions-george-colony/
28•thm•4h ago•6 comments

Waiting for SQL:202y: Group by All

http://peter.eisentraut.org/blog/2025/11/11/waiting-for-sql-202y-group-by-all
29•ingve•5d ago•9 comments

De Bruijn Numerals

https://text.marvinborner.de/2023-08-22-22.html
56•marvinborner•6h ago•7 comments

Measuring the doppler shift of WWVB during a flight

https://greatscottgadgets.com/2025/10-31-receiving-wwvb-with-hackrf-pro/
109•Jyaif•1w ago•0 comments

Holes (1970) [pdf]

https://rintintin.colorado.edu/~vancecd/phil375/Lewis1.pdf
25•miobrien•2d ago•6 comments

Vintage Large Language Models

https://owainevans.github.io/talk-transcript.html
56•pr337h4m•8h ago•14 comments

Adding an imaginary unit to a finite field

https://www.johndcook.com/blog/2025/11/16/finite-field-i/
7•ibobev•2h ago•1 comment

Running the "Reflections on Trusting Trust" Compiler (2023)

https://research.swtch.com/nih
102•naves•7h ago•5 comments

AirPods liberated from Apple's ecosystem

https://github.com/kavishdevar/librepods
1223•moonleay•21h ago•357 comments

Dissecting Flock Safety: The Cameras Tracking You Are a Security Nightmare [video]

https://www.youtube.com/watch?v=uB0gr7Fh6lY
127•emsign•6h ago•47 comments

The fate of "small" open source

https://nolanlawson.com/2025/11/16/the-fate-of-small-open-source/
89•todsacerdoti•2h ago

Comments

RyanHamilton•2h ago
Less incentive to write small libraries. Less incentive to write small tutorials on your own website. Unless you are a hacker or a spammer, in which case your incentives have probably increased. We are entering the era of cheap spam of everything, with little incentive for quality. All this for a best-case outcome of most people being made unemployed and rolling the dice on society reorganising around that reality.
zwnow•2h ago
But some webdev said they are 10x faster now, so it can't be bad for humanity /s
phoronixrly•1h ago
> We are entering the era of cheap spam of everything with little incentive for quality

Correction -- sadly, we're already well within this era

NitpickLawyer•1h ago
> or a spammer where your incentives have probably increased.

Slight pushback on this. The web has been spammed with subpar tutorials for ages now. The kind of medium "articles" that are nothing more than "getting started" steps + slop that got popular circa 2017-2019 is imo worse than the listy-boldy-emojy-filled articles that the LLMs come up with. So nothing gained, nothing lost imo. You still have to learn how to skim and get signals quickly.

I'd actually argue that now it's easier to winnow the slop. I can point my cc running in a devcontainer to a "tutorial" or lib / git repo and say something like "implement this as an example covering x and y, success condition is this and that, I want it to work like this, etc.", and come back and see if it works. It's like a litmus test of a tutorial/approach/repo. Can my cc understand it? Then it'll be worth my time looking into it. If it can't, well, find a different one.

I think we're seeing the "low hanging fruit" of slop right now, and there's an overcorrection of attitude against "AI". But I also see that I get more and more workflows working for me, more or less tailored, more or less adapted for me and my uses. That's cool. And it's powered by the same underlying tech.

NegativeK•1h ago
The problem isn't that AI slop is doing something new. Phishing, blogspam, time wasting PRs, website scraping, etc have all existed before.

The problem is that AI makes all of that far, far easier.

Even using tooling to filter articles doesn't scale as slop grows to be a larger and larger percentage of content, and it means I'm going to have to consider prompt injections and running arbitrary code. All of this is a race to the bottom of suck.

AstroBen•1h ago
The difference is that the cost of slop has decreased by orders of magnitude. What happens when only 1 in 10,000 of those tutorials you can find is any good, from someone actually qualified to write it?
skydhash•1h ago
The thing is, what is the actual point of this approach? Is it for learning? I strongly believe there’s no learning without immersion and practice. Is it for automation? The whole idea of automation is to not think about the thing again unless there’s a catastrophic error; it’s not about babysitting a machine. Is it about judgment? Judgment is something you hone by experiencing stuff and then deciding whether it’s bad or not. It’s not something you delegate lightly.
cynicalsecurity•2h ago
He almost got it right. It's not just the fate of small open source. It's the fate of all programmers now. Why hire a programmer when an LLM costs less, works faster and makes fewer mistakes? (The OP compliments its better error handling; read the article.)

Unless you are a product owner with paying clients who love you and your product and won't simply ditch it in favour of a new clone, you are really screwed.

RhythmFox•2h ago
He also points out a pointless type check in a type checked language...

Your name is very accurate I must say.

jazzypants•1h ago
That type check is honestly not pointless at all. You can never be certain of your inputs in a web app. The likelihood of that parameter being something other than an ArrayBuffer is non-zero, and you generally want to have code coverage for that kind of undefined behavior. TypeScript doesn't complain without a reason.
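
For what it's worth, a minimal sketch of the kind of guard being discussed (the function name and body are illustrative, not the actual blob-util source): even with a static ArrayBuffer annotation, a runtime instanceof check protects against untyped callers.

    // Illustrative TypeScript sketch, not blob-util's actual code.
    // The static type says ArrayBuffer, but plain-JS callers or data
    // deserialized at runtime can still pass something else.
    function binaryStringFromBuffer(buffer: ArrayBuffer): string {
      if (!(buffer instanceof ArrayBuffer)) {
        throw new TypeError("expected an ArrayBuffer");
      }
      const bytes = new Uint8Array(buffer);
      let result = "";
      for (let i = 0; i < bytes.length; i++) {
        result += String.fromCharCode(bytes[i]);
      }
      return result;
    }
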
vanschelven•1h ago
"when an LLM costs less, works faster and makes less mistakes"... indeed, but it doesn't follow at all that it's the fate of all programmers _now_... at least in my experience none of these things are true ATM.
PunchyHamster•1h ago
Well, at the very least it costs less than asking intern to look for a lib doing something particular and give some examples... still about as accurate as the intern tho.
skydhash•55m ago
How many times has it actually happened that a company asked an intern to go look for a library?
dakiol•1h ago
So far I've never seen a non-programmer release production-grade code using only LLMs. There's just so much to care about (security, deployments, secret management, event-driven architectures, and so on) that "just" providing a prompt to create an "app" doesn't cut it. You need infra, and non-engineers just don't know shit about infra (even if it's 99% managed); you need to deploy your LLM-generated code on that infra, and that should probably happen in a CI/CD pipeline. And what about migrations? Git? Who's setting up the API gateway? I don't mean to say that LLMs don't know how to do that, but you need to instruct them to do so, and even then they will make silly mistakes and you need to re-instruct them or fix it.

Prompting is just 50% of the work (and the easy part, actually). Ask the Head of Product or whoever is there to deploy something valuable to production and maintain it for 6 months while not losing money. It's just not going to happen, not even with true AGI.

59nadir•1h ago
An LLM might be able to replace the majority of the code Sindre Sorhus has put out there, but it's probably a stretch to think that it could replace someone like John Carmack.

Trivial NPM libraries were never needed, but LLMs really are the nail in the coffin for them even when it comes to the most incompetent programmers because now they can literally just ask an LLM to spit out the exact same thing.

CuriouslyC•2h ago
Small open source is still valuable, but the bar is higher. If your project is something trivial that nobody happened to think of before you and nobody would bother to redo after you, it's probably not going to survive; but if your project is a small, focused tool that handles something difficult really well, it's 100% got a future.
mccoyb•1h ago
I don’t think open source is going anywhere. It’s poised to get significantly stronger — as the devs who care about it learn how to leverage AI tools to make things that corporate greasemonkeys never had the inspiration to. Low-quality code spammers are just marketing themselves for jobs where they can be themselves: soulless and devoid of creative impulse.

That’s the thing: open source is the only place where the true value (or lack of value) of these tools can be established — the only place where one can test mettle against metal in a completely unconstrained way.

Did you ever want to build a compiler (or an equally complex artifact) but got stuck on various details? Try now. It’s going to stand up something half-baked, and as you refine it, you will learn those details — but you’ll also learn that you can productively use AI to reach past the limits of your knowledge, to make what’s beyond a little more palatable.

All the things people say about AI are true to some degree: my take is that some people are rolling the slots to win a CRUD app, and others are trying to use it to do things that they could only imagine before — and open source tends to be the home of the latter group.

nowittyusername•1h ago
True innovation will come from open source for sure, as the developers don't have the same economic incentives to be "safe", "ethical", "profitable", or whatever. Large corporations know this and fear this development. That's why I expect significant lobbying to take hold in the USA that will try to make local AI systems illegal. And I think they will be very convincing to the government, because the government also fears the "peasants" and fears giving them any true semblance of real AGI-like systems. I bet very soon we will start seeing various classifications that define what is legal and what is not for a citizen to possess or use.
exasperaited•1h ago
> It’s poised to get significantly stronger

It's really not. Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue about what is involved in working on long-running, difficult projects or how offensive it is to just slather some slop on a bug report and demand it is given scrutiny.

Even at the 10,000 feet view it has wasted people's time because they have to sit down and have a policy discussion about whether to accept AI submissions, which involves people reheating a lot of anecdotal claims about productivity.

Having learned a bit about how to write compilers, I know enough to guarantee you that an AI cannot help you solve the difficult problems that compiler-building tools and existing libraries cannot solve.

It's the same as it is with any topic: the tools exist and they could be improved, but instead we have people shoehorning AI bollocks into everything.

micromacrofoot•1h ago
yeah we are getting lots of "I don't know how to do this and AI gave me this code that doesn't work, can you fix it" or "AI said it can do this" and the feature doesn't exist... some people will even argue and say "but AI said it doesn't take long, why won't you add it"
exasperaited•7m ago
It weaponises incompetence, carelessness and arrogance at every turn.

AI is a character test: I'm regularly fascinated by finding out who fails it.

In my personal life I have been treated to AI-generated comms from someone that I would never have expected it from.

They don't know I know, and they don't know that I think less of them, and I always will.

mccoyb•57m ago
Sounds like a lot of FUD to me — if major projects balk at the emergence of new classes of tools, perhaps the management strategy wasn’t resilient in the first place?

Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.

In such a setting, you’re working within a trusted party — and for a major project, that likely means extremely competent maintainers and contributors.

I don’t think these people will have any difficulty adapting to the usage of these tools …

exasperaited•12m ago
> Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.

It is a waste of time for large-scale volunteer-led projects who now have to deal with tons of shit.

doug_durham•47m ago
This isn't an AI issue. It is a care issue. People shouldn't submit PRs to a project when they don't care enough to understand the project they are submitting to or the code they are submitting. This has always been a problem; there is nothing new here. What is new is that more people can get to the point of submitting regardless of their care or understanding. A lot of people are trying to gild their resume by saying they contributed to a project. Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.
exasperaited•13m ago
This is an AI issue because people, including the developers of AI tools, don't care enough.

The Tragedy Of The Commons is always about this: people want what they want, and they do not care to prevent the tragedy, if they even recognise it.

> Project owners should instead be working on ways to filter out careless code more efficiently.

Great. So the industry creates a burden and then forces people to deal with it — I guess it's an opportunity to sell some AI detection tools.

zkmon•1h ago
Open source exists because coding was a significant effort and code was a thing of high value. Unsurprisingly companies hesitated to make the code public and free. All of this is changing now as coding has suddenly become trivial. So, yes, the mission of open source, in general, will be challenged.
levkk•1h ago
Several issues:

1. Reducing dependencies is a wrong success metric. You just end up doing more work yourself, except you can't be an expert in everything, so your code is often strictly worse.

2. Regenerating the same solutions with a probabilistic machine will produce bugs a certain percentage of the time. Dependencies are always the same code (when versioned).

3. Cognitive overhead for human review is higher with LLM-generated libs, for no additional benefit.

jcelerier•1h ago
> Reducing dependencies is a wrong success metric. You just end up doing more work yourself

Except it's just not true in many cases, because of the social systems we've built. If I want to ship software to Debian, I have to make sure that every single one of my third-party dependencies is registered and packaged as a proper Debian package - a lot of the time it will take much less work to rewrite some code than to get 25 hundred-line micro-libraries accepted into Debian.

cactusfrog•1h ago
This author assumes that open sourcing a package only delivers value if it is added as a dependency. Publicly sharing code with a permissive license is still useful and a radical idea.
lanstin•1h ago
Yeah if I find some (small) unmaintained code I need, I will just copy it (then add in my metrics and logging standards :)

It shouldn't be a radical idea; it is how science overall works.

Also, on the educational side, I find that in the modern software ecosystem I don't want to learn everything. Excellent new things or dominantly popular new things, sure, but there are a lot of branching paths of what to learn next, and having Claude Code whip up a good-enough solution is fine and lets me focus on less, more deeply.

(Note: I tried leaving this comment on the blog but my phone keyboard never opened despite a lot of clicking, and on mastodon but hit the length limit).

positron26•1h ago
Yep. Even just sharing code with any license is valuable. Much I have learned from reading an implementation of code I have never run even once. Solutions to tough problems are an under-recognized form of generosity.

This is a point where the lack of alignment between the free beer crowd and those they depend on is all too clear. The free beer enthusiast cannot imagine benefiting from anything other than a finished work. They are concerned about the efficient use of scarce development bandwidth without consciousness of why it is scarce or that it is not theirs to direct. They view solutions without a hot package cache as a form of waste, oblivious to how such solutions expedite the development of all other tools they depend on, commercial or free.

shevy-java•40m ago
I do agree with this, but there are some caveats. At the end of the day it is time people invest into a project. And that is often unpaid time.

Now, that does not mean it has no value, but it is a trade-off. After about 14 years, for instance, I retired permanently from rubygems.org in 2024 due to the 100k download limit (and now I wouldn't use it after the shameful moves RubyCentral made, as well as the shiny new corporate rules which I couldn't operate within anyway; it is now a private structure owned by Shopify. Good luck finding people who want to invest their own unpaid spare time into anything tainted by corporations here).

Aperocky•1h ago
> the era of small, low-value libraries like blob-util is over.

Thankfully so (nothing against blob-util specifically, because I've never intentionally used it). I wouldn't completely blame LLMs either, since languages like Go never had this dependency hell.

npm is a security nightmare not just because of npm the package manager, but because the culture of the language rewards behavior such as "left-pad".

Instead of writing endless utilities for other projects to re-use, write actual working things - that's where the value/fun is.

ncruces•1h ago
But as Go puts it:

“A little copying is better than a little dependency.”

https://go-proverbs.github.io/

skydhash•38m ago
Yep, something like blob-util could be a blog post or a gist (or several Stack Overflow answers), and a lot of npm libraries fall under that. People always bring up the anemic standard library of JavaScript, forgetting that the C standard library is even smaller.
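
A gist-sized helper of the kind being discussed might look like this (illustrative only, not blob-util's actual source): wrap FileReader in a Promise to read a Blob into an ArrayBuffer.

    // Hypothetical gist-style utility in TypeScript.
    function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result as ArrayBuffer);
        reader.onerror = () => reject(reader.error);
        reader.readAsArrayBuffer(blob);
      });
    }

In modern runtimes the built-in blob.arrayBuffer() covers the same ground, which is exactly the point about the platform absorbing these utilities.
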
threatofrain•33m ago
Copying is just as much a dependency; you just have to do maintenance through manual find-and-replace now.
jamietanna•20m ago
Yeah, it's the main thing I really dislike about this - how do you make sure you know where it's from (i.e. licensing)? What if there are updates you need? Are you going to maintain it forever?

For some definition of "small piece of code" that may be OK, but it's also sometimes more than folks consider.

skydhash•6m ago
Do you know that you can just add a small text file or a comment explaining that a module is vendored code? And updates are handled the same way as the rest of the code. And you will be “maintaining” it as long as you need to. Libraries are not “here be dragons” territory best left to the adventurous ones.
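
As a concrete (hypothetical) illustration of that kind of marker, a vendored file can carry its provenance in a short header; the file name, URL, and version below are placeholders, not a real project.

    // vendor/clamp.ts - vendored (copied) code.
    // Source:  https://example.com/tiny-clamp (hypothetical)
    // Copied:  v1.0.0 on 2025-11-16; original MIT license kept in vendor/LICENSE
    // Local changes: none
    export function clamp(value: number, min: number, max: number): number {
      return Math.min(Math.max(value, min), max);
    }
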
msla•16m ago
If I vendor a dependency that currently works for what my program does, I only have to care about it again if a security hole is discovered in it or if my program changes and the dependency is insufficient in some way. I don't have to worry about the person I'm importing code from going weird or introducing a bug that affects me.
sodapopcan•5m ago
Usually these types of things never change. I understand that all code is a liability, but npm takes this way too far. Many utility functions can be left untouched for many years, if not forever.
ninkendo•5m ago
[delayed]
kermatt•28m ago
> since languages like Go never had this dependency hell

What is the feature of Go that this is referring to?

hnlmorg•1h ago
More likely, what we will see is the decline of low-effort projects. The JavaScript/TypeScript ecosystem has been plagued with such packages. But that’s more a quirk of the JS community than a systemic problem with open source in general.

So if fewer people are including silly dependencies like isEven or leftPad, then I see that as a positive outcome.

BrenBarn•56m ago
> Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.

Use of an AI to write your code is also a form of dependency. When the LLM spits out code and you just dump it in your project with limited vetting, that's not really that different from vendoring a dependency. It has a different set of risks, but it still has risks.

ronbenton•36m ago
> and you just dump it in your project with limited vetting

Well, yes, there’s your problem. But people have been doing this with random snippets found on the internet for a while now. The truth is that irresponsible developers will produce irresponsible code, with or without LLMs.

nolanl•27m ago
Right, but you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?" or "will the author be compromised in a supply-chain attack, or do a deliberate protestware attack?" etc. As for performance, a lot of npm packages don't have proper tree-shaking, so you might be taking on extra bloat (or installation cost). Your point is well-taken, though.
cortesoft•11m ago
Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.

Not including the unused features both makes the code you are adding easier to read and understand and may make it more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
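
A small sketch of what "narrowly tailored" can look like in practice (the helper is made up for illustration): a project that only ever needs dates formatted as YYYY-MM-DD can inline exactly that, instead of pulling in a general-purpose date library with locales, time zones, and token parsing.

    // Hypothetical tailored helper in TypeScript.
    export function toIsoDateString(d: Date): string {
      return d.toISOString().slice(0, 10); // e.g. "2025-11-16"
    }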

smcameron•47m ago
In the U.S., anything machine generated is uncopyrightable.

Why would you put uncopyrightable code into your codebase?

gpm•43m ago
Why wouldn't you? Your codebase (if you're a business) exists to make you money, people being able to copy some unknown portions of it without further license if they somehow legally get their hands on a copy of it seems entirely irrelevant.

PS. I think this is much less clear and much less settled law than you are suggesting.

siliconpotato•41m ago
Even worse... unmaintained code. Only the human-written one has a maintainer. The other one plagiarised by AI is instant legacy code.
exasperaited•3m ago
> The other one plagiarised by AI is instant legacy code

I have used this "instant legacy code" concept before. It's absolutely true, IMO. But people really, really, really hate hearing it.

ebiester•38m ago
It's more nuanced. If I have even a few lines I can prove are mine, those parts are copyrightable, in the same way Pride and Prejudice is public domain but Pride and Prejudice and Zombies is copyrighted.
hdgvhicv•17m ago
Autocomplete has been around for decades
shevy-java•42m ago
> Claude’s version is pretty close to the blob-util version (unsurprising, since it was probably trained on it!).

AI are thieves!

> I don’t know which direction we’re going in with AI (well, ~80% of us; to the remaining holdouts, I salute you and wish you godspeed!), but I do think it’s a future where we prize instant answers over teaching and understanding.

Google ruined its search engine years ago before AI already.

The big problem I see is that we have become WAY too dependent on these mega-corporations. Which browser are people using? Typically Chrome. An evil company writes the code. And soon it will fire the remaining devs and replace them with AI. Which is kind of fitting.

> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)

Documentation in general sucks. But documentation is also a hard problem.

I love examples. Small snippets. FAQs. Well, many projects barely have these.

Look at Ruby WebAssembly/Wasm or Ruby Opal. Their documentation is about 99% useless. Or, even worse - Rack in Ruby. And I did not notice this in the past, in part because e.g. StackOverflow still worked and there were many blogs which helped fill in missing information too. Well, all of that is largely gone now or has been slurped up by AI spam.

> the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see node:glob, structuredClone, etc.), but LLMs are the final nail in the coffin.

I still think they have value, but watching organisations such as rubygems.org disrupt the ecosystem and bleed it dry by kicking out small hobbyists, I think there is indeed a trend towards eliminating the silly solo devs who think their unpaid spare time is not worthy of anything at all, while the big organisations eagerly throw down more and more restrictions onto them (my favourite example is the arbitrary 100k download limit for gems hosted at rubygems.org, but look at the shiny new corporate rules on rubygems.org - this is what happens when corporations take over the infrastructure and control it. Ironically this also happened to PyPI, and they admit it indirectly: https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f... - of course they deny that corporations control PyPI now, but the denial itself is telling, because this is how hobbyists get eliminated: just throw more and more restrictions at them without paying them, and sooner or later they decide to do something better with their time).