frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
1•mooreds•55s ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
2•mindracer•1m ago•0 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•1m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•2m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•2m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
1•captainnemo729•3m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•3m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•5m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•5m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•6m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•6m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•6m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•6m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•7m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•7m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•8m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•11m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•11m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•12m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•12m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•14m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•14m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•15m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•15m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•16m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•17m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•18m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•22m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•22m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•23m ago•0 comments

I'm Archiving Picocrypt

https://github.com/Picocrypt/Picocrypt/issues/134
233•jaden•6mo ago

Comments

MarkusQ•6mo ago
Wow.

That was strangely...something. Simultaneously not what I expected and yet just nailing the vibe of vibe slop frustration.

AndyNemmity•6mo ago
yep, it's a very old internet response, something you rarely see now. creative and interesting, in a meta sense.
reasonableklout•6mo ago
It's a nice message and I sympathize with the frustration, but the critique falls flat with the author's decision to pivot into AI research.
DaSHacka•6mo ago
Also the AI ghibli pfp...
subscribed•6mo ago
What's wrong with it?
latexr•6mo ago
It was clearly AI generated. So the author is clearly OK with using AI to generate slop in an area they don’t work in, while simultaneously decrying its use in an area they do work in. If they believe so strongly that AI use is destroying their industry, they should reflect on its effect on other industries too (it is well-documented how artists are being negatively impacted).

I agree with the commenters above that it makes the critique fall flat. The author is saying “This thing is so frustrating and harmful it makes me want to stop working in a field because of it. Oh, by the way, I use this tool myself for other things, and will indeed pivot to contribute directly to them”.

Lutger•6mo ago
I didn't interpret it as decrying the use of AI. Especially because he plans to dedicate his time and energy into researching the very same AI he rants about, basically promoting its use!

Instead, I see it as a deeply personal rant about the state of affairs which he considers inevitable himself. That is why he leaves the ship.

Before AI slop, there has always been the agile slop of the bare(ly) minimum product: good enough to woo the ones making a buying decision, or at least until the career sharks have moved on to the next thing. That kind of slop has always been there, and everywhere, actually. It's called capitalism, or consumerism. The trick is to work for a place that isn't squeezed too hard, because it's still in the investment phase or because it earns money on its own merit.

AI will certainly transform things, just like higher level languages and frameworks have done so. Maybe programming without AI will be the 'micro optimization' of the future: something that is still there and valuable, but only sometimes and only in a certain niche. Slop is eternal, it just has a new face and a new name.

This blog to me is a nice personal rant about a smart young developer coming of age, trying to find his way and guard his ideals or standards against the onslaught of consumerism, just as ambitious young developers always have tried to do.

ryandv•6mo ago
Exactly. The self-contradiction exposes this farce of an arrangement for what it is: talented engineers are presented with immense monetary incentives to automate themselves out of the workforce, their carefully honed craftsmanship to be replaced by hordes of monkeys at typewriters producing voluminous slop.

Anybody can plainly see that the emperor is without clothes, but so long as the C-level rhetoric is sung to the tune of "Either you have to embrace the AI, or you get out of your career," [0] you may as well put on your own clown nose and wig and start dancing while the music is still playing and there are still seats out.

Farcical circumstances prompt paradoxical responses.

[0] https://news.ycombinator.com/item?id=44808645

subscribed•5mo ago
Oh, I understand you now. Thank you.

It didn't make this impression on me: I saw it was generated, or at least edited with an LLM, but I assumed it was satire.

Thanks for not fobbing me off!

neomantra•6mo ago
The entire thing was crafted, including the profile pic.

His low-quality “petty comments” are aimed at the low-effort haters.

whilenot-dev•6mo ago
> Advancements in intelligent AI and LLMs get me excited.

Large and centered on the author's website: https://evansu.com/

chromehearts•6mo ago
What a hypocrite lmao
qualeed•6mo ago
You can dislike what AI/LLMs are doing in the software development field, but be excited about their medical applications, as an example.
Shorel•6mo ago
The difference I perceive is the split between being the one designing the software, which is what he likes to do, and letting an LLM design the software without the developer actually understanding what's going on, which is what he dislikes.

(This short comment exchange between us is also a meta-commentary on that: our comments are much shorter than all the AI summaries, and at the same time we add nuance and clarification to the ideas expressed, something the AI summaries don't do.)

KaiserPro•6mo ago
> falls flat with the author's decision to pivot into AI research.

yeah but _what_ part of AI research? There are loads of areas that have nothing to do with slop, and might even have practical and useful benefits...

schmookeeg•6mo ago
I am thinking of that quotation that said [paraphrasing] "90% of my skills went to $0, but the other 10% are now worth 1000x"

This LLM-fuelled rant/departure is a thought-provoking expression of frustration from someone who focused on the 90%, not the 10% -- namely someone willing to handcraft software like an artisan.

I think we're in the mass-production era for code and nobody wants a hand-crafted anything anymore. Automated and mass-produced please. "Quantity has a quality all its own"

AndyNemmity•6mo ago
I think if you believe that 90% of skills went to 0% but the other 10% are worth 1000x, that makes sense.

But even if that's true, the 1000x is going to go to far fewer humans. Maybe you're in the lucky % saved, but a lot of people won't be.

It's interesting to consider. I don't have any takes one way or the other, I'm just observing. I have no idea how all of this works out.

bryanrasmussen•6mo ago
what exactly is the 90% that he focused on, and the 10% he didn't?

Is the 90% just hand crafting? I don't believe there can be no place for hand crafting things at all, because there is not any other business or human endeavor in history which has been 100% mass produced without any place for artisans in the market.

Even Automobile production has its place in the market for craftsmanship.

croes•6mo ago
>I think we're in the mass-production era for code and nobody wants a hand-crafted anything anymore.

Not all people like IKEA furniture.

>Quantity has a quality all its own

Do you understand the meaning of this quote attributed to Stalin?

I wouldn’t want that kind of quality in planes, cars, surgical robots, power plants etc.

I think it’s a good time to introduce fines for severe damages by software bugs.

its-summertime•6mo ago
(income × 0 × 0.9) + (income × 1000 × 0.1) = income × 100

Really looking forward to the 10000% pay raise

nurettin•6mo ago
LLMs are glorified "LMGTFY" tools. AI assistance doesn't make people experts at anything. Some Gen Z vibe coder isn't getting your job, guys. Calm the heck down.
denkmoon•6mo ago
The fear is like telling on yourself.
kryptiskt•6mo ago
The fear may just be a result of thinking about who is making the decisions. I know I'm good, and my peers know I'm good. But how far up the management chain does that knowledge go?
forty•6mo ago
I think people who are afraid that AI coding is going to replace them should try using it a bit seriously. They should be quickly reassured.

What worries me more (regarding the coding-related impact of AI; the impact on fake news and democracies is of course much more worrying IMO) is having to deal with code written by others using AI (yes, people can write shitty code on their own, but at a manageable pace).

ben_w•6mo ago
I'm not worried that LLMs, as they currently are and foreseeably (i.e. the next 18 months) will be, make a good substitute for high-quality developers like me.

But oh boy have I seen a lot of mediocre coders get away with mediocrity for a long time — there's a big risk that employers won't care about the quality, just the price, for long enough that the developments in AI are no longer foreseeable.

xinayder•6mo ago
Tell that to C-level executives. They don't understand this, and until then, we, developers, can only be afraid of losing our jobs to a mediocre AI.
KaiserPro•6mo ago
As someone who's a bit older and remembers the last wave of offshoring, I can tell you that quality doesn't matter.

Yes, fake news driven by AI slop is a big problem, but that is only enabled by social media personality fiefdoms.

The shit is going to hit the fan if 10% of the highest-paid US working population is laid off for AI outsourcing. That kind of social change brings revolution, and that's before the fracture of the US social fabric.

Obscurity4340•6mo ago
What does that stand for?
echoangle•6mo ago
LMGTFY is "let me google that for you"
throwaway0665•6mo ago
bing it
latexr•6mo ago
https://googlethatforyou.com/what-is-lmgtfy-meaning.html
penguin_booze•6mo ago
That answer is designed for exactly this question.
devjam•6mo ago
I see what you did there
nurettin•6mo ago
It is an RTFMism from the early 2000s.
anal_reactor•6mo ago
> Some genz vibe coder isn't getting your job guys calm the heck down.

Then why isn't the software job market recovering

4ndrewl•6mo ago
Because a lot of devs were getting a free ride off the back of ten years of ZIRP money, and firing people is a surefire way to pump your share options.
tracker1•6mo ago
After the .com crashes of 2000 and 2001, it wasn't until 2005-2008 that pay started to come back up, and still without the crazy signing bonuses. We're still in the downtrend, and the industry is bigger today, so the impact will be longer and larger.

Not only that, but it's pushing market rates down significantly. I'm making about 60% of what I made the past few years... I could only handle not having income for so long. I was juggling two jobs for a while, but just couldn't manage it. Hoping to pick up some side work to fill the gaps. Have to face it, a lot of the high pay contract software jobs have just dried up for now.

sitzkrieg•6mo ago
doesn't change anyone's mind when it comes to layoffs
sakjur•6mo ago
I’m concerned that AI slop will affect open source projects by tilting the already unfavorable maintainer-contributor balance even more towards low-quality contributions.

Daniel Stenberg (from the curl project) has blogged a bunch about AI slop seeping into vulnerability reports[1], and if the same happens to code contributions, documentation and so forth, it can turn a fun hobby project into something you dread maintaining.

[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

Shorel•6mo ago
Your last sentence should have been: CEOs calm the heck down.
penguin_booze•6mo ago
They can't. Shilling is practically their whole job.
franga2000•6mo ago
You don't need an equal replacement to lose your job, just a good enough and more economical one.

Lots of graphic designers lost their jobs or at least a lot of their work now that image generation models have gotten decent at rendering text. Now any idiot can whip up some advertising graphics at half the quality of a designer, but in 1/10th the time and 1/100th the cost (or even for free!). It doesn't matter that it looks like ass and makes no sense in context, they produced an acceptable result for a fraction of the cost.

Quality does not matter in the market, it never has. Whoever can produce the most slop at the lowest price nearly always wins. Yes, there are exceptions, many of them even. But not enough to employ nearly as many of us as there are now.

sky2224•6mo ago
I think this is just a bit doomerish honestly.

Yes, the AI hype is real, and yes there's a desire to cut costs by using AI within companies. However, I think the maintainer (Evan Su) has a bit of a narrow view on this matter. Evan is still a student in university.

This doesn't mean his perspective or opinion should be disregarded; it's more that I think he's declaring quite a career-defining absolute for himself before really having a solid foot in the industry. Frankly, this rant seems fueled more by intense doom-scrolling on LinkedIn than by first-hand experience.

hkt•6mo ago
To be fair, the job market is terrible. Not nearly as attributable to AI as people think, I suspect - I'd pin it on interest rates. Still, I'd consider other things if I had anywhere to go and he's not made a bad choice.
sky2224•6mo ago
If he's deciding to go elsewhere due to the current market, I think that's fair. However, probably his biggest talking point in the whole post is how no one wants to craft artisan code anymore, and how that's due to AI taking over and taking jobs. That's where I'm saying the jump in conclusion is quite drastic, given the absence (yet) of hard evidence that AI is taking jobs.
movetheworld•6mo ago
Seems like a nice project. I'm still a little bit concerned with the VirusTotal result: https://www.virustotal.com/gui/file/81bbdffb92181a11692ec665...
MrGilbert•6mo ago
That is to be expected for a project that contains encryption code, I'd say. Maybe their userbase isn't big enough to report all the false positives and gain the reputation needed.
whizzter•6mo ago
It could also be that some ransomware crap has used their codebase as a foundation for their system and then it gets flagged.

It's a very, very common problem in the demoscene. Authors striving to impress in the 64k category created better generic EXE compressors to squeeze more content into their 64k binaries, but as soon as any such tool is publicly released it is almost immediately flagged by AV software, since virus makers scour our forums knowing that once in a while a useful tool comes along that can hide their malicious code.

PostOnce•6mo ago
Just as a rule of thumb, doesn't the fact that only a few unknown vendors flag it—and all of the major vendors do not—indicate something? It would suggest a false positive, wouldn't it?
sakjur•6mo ago
When I checked a few years back, even a "hello world" Go application compiled for Windows was flagged as malware by a scanner I investigated.

I'm not at a computer where I could test that hypothesis right now, but back then my conclusion, after testing various executables, was that any unsigned Go binary would be flagged as malware.

Shorel•6mo ago
I'm concerned about your comment.

I envision a future in which people can't deduce anything by themselves and will rely on automated, flawed systems.

The problem is not knowing why and how those systems are flawed, and therefore being unable to make nuanced and truly accurate decisions.

MrGilbert•6mo ago
I like the creativity behind this. And I feel sorry for them that the current wave of AI has led to them abandoning their pet project. Maybe they will pick the project up again once the dust has settled. In the end, at least for me, they are pet projects for exactly that reason: an escape from the day-to-day business. One should still be proud of what they achieve in their spare time. I don't care if my job requires me to use K8s, Claude or Docker, or if that's considered "industry standard".

My projects, my rules.

raincole•6mo ago
I don't get it.

(I'm not trying to throw shade at the author. I know they have no obligation to maintain an open source project. I'm simply having a hard time grasping what's happening.)

epolanski•6mo ago
Seems like the author is abandoning software because, in his opinion, the AI explosion means employers no longer care about code quality, only quantity.

I don't get it either, because that has always been the case, which makes most of his post borderline nonsense.

IMO, what happened is that he took the opportunity of entering academia to vent some frustration and quit maintaining his project.

worldsavior•6mo ago
He doesn't have interest in the project anymore. He hasn't for a long time, and now that he has stopped doing software development and gone into academia, he certainly doesn't. Is that hard to get?

He explained the reasons he went into academia, which come down to AI; but AI is not the reason he stopped development.

shiomiru•6mo ago
> I don't get it either, because that has always been the case, thus most of his post is borderline non sense.

Yes, making software development cheaper has been the main priority of the industry for a long time. The new development is that there's now a magic "do what I want" button that obviously won't quite do what you want but it's so cheap (for corporations, not humanity...) that you might as well pretend it does. (Compared to paying professionals who might even care about doing a good job, that is.)

tracker1•6mo ago
I've been a web application developer for nearly 30 years now. I care immensely about the craft and discipline. Then you pull up something like the Jack in the Box menu site and fully realize that managers/executives don't give a damn whether the stuff works well... 48 MB of built JS?! My daughter mentioned how badly the site was working on her phone, and I got curious.

What's funny is that some will say "use the app" instead for things like this... why should I trust someone to build a safe/secure app who cannot build a reasonably functional website?

ranger_danger•6mo ago
To be fair, the app may be developed by a completely different team.

If your argument is still that you don't want to trust a company that can't make both functional, well... maybe you shouldn't be going to Jack-In-The-Box in the first place.

tracker1•6mo ago
My point is that I'm not going to give app-level access to my phone to a company that doesn't care enough to have a functional website. That said, I'm unlikely to install an app for anything on my device.

I don't actually install that many apps, and generally not retail apps anyway.

GoblinSlayer•6mo ago
I looked inside an average proprietary authenticator mobile app with a two-button interface and can confirm it's a dumpster fire with 26 MB of compiled code.
int_19h•6mo ago
The irony is that it is precisely because we all, collectively as software engineers, allowed the bar to be that low for so long, that AI coding is viable at all right now.
insane_dreamer•6mo ago
It hasn't always been the case to the degree that it now is, or with the "tools" to enable it.
karteum•6mo ago
I was thinking exactly the same: I also don't get it (even though I totally get that someone may lose motivation to work on a project, and certainly has no obligation to continue, this justification sounds a bit weird to me).

Could this mean that he has been approached by some "men in black" asking him to insert a backdoor in the code or to stop working on it, together with a gag order? (I actually wondered the same a long time ago about TrueCrypt, even though to my knowledge no backdoor has ever been found in subsequent audits...)

sunshine-o•6mo ago
I believe you need to separate two things:

- The author enjoyed writing quality open source code

- The author needs to make strategic decisions for his own career and livelihood and he doesn't have enough bandwidth for both

I don't feel he is happy about the decision he needs to make and he is pointing to something dark happening to software development and open source.

Now, this is not new, and it didn't start with LLMs. I am sure that if we asked the OpenBSD devs what they think about the modern mainstream open source ecosystem (Docker, full-stack development, etc.), they would see it the way we might look at LLM-generated code. This is just a question of how much of a purist you are.

000ooo000•6mo ago
Did you not read the comment he wrote? It's straightforward
rurban•6mo ago
The new era Socrates dialogue. With a machine
chromehearts•6mo ago
Nice post, really. I like the meta conversation.

But in my opinion, it's a bit hypocritical to blame / be mad at LLMs etc. for ruining the fun of coding and then use AI-generated profile pictures.

Why not draw your ghibli styled profile picture yourself? Why use an AI generated image? Doesn't using image generators ruin the fun of drawing? Vibe-drawing?

> He criticizes "Large tech companies and investors" for prioritizing "vibe coding," but not a specific company or individual.

You could rewrite this generated response to match the artists' point of view as well

lol

ramon156•6mo ago
I second this, plus their responses are petty.

As an example:

> you're not helping anything or making some resonant statement with that thing above or your avatar

> Sorry for breathing and producing CO2.

That's not what the commenter argued, and that response is incredibly petty. It's a way to defuse the argument entirely (is that a straw-man? no idea).

chromehearts•6mo ago
Is it a strawman? I think so
zveyaeyv3sfye•6mo ago
> What the fuck bro

QFT

neom•6mo ago
https://github.com/Picocrypt/Picocrypt/issues/134#issuecomme...

This to me is the crux of the whole thing.

Almost like a knitter throwing away their needles because they saw a loom.

latexr•6mo ago
Considering the author is explicitly going into AI research, has an AI-generated profile picture, and claims front-and-centre on their website they are excited about LLMs, I don’t think that analogy works. Or rather, it is like a knitter throwing away their needles to eagerly go work in the loom manufacturing industry.
rikafurude21•6mo ago
I don't think many people would be excited at the thought of going from handcrafted artisan knitting to babying machines in a knitting factory. You need a certain type of autism to be into the latter.
roenxi•6mo ago
Fortunately, this is the software industry. We've got a lot of those autists, and that automating urge is the best part about software. If someone doesn't like the idea of sitting around babysitting factories of machines, they certainly shouldn't go into DevOps. It would be safest to just avoid programming in general, given how much of the industry centres on figuring out how to deploy huge amounts of computing power in an autonomous fashion.
andai•6mo ago
Yeah, the whole industry is just speedrunning Factorio at this point.
dewey•6mo ago
The author is a student at a university. There's many paths to take that early in the career, I don't think people have to read too much into it.
stoneyhrm1•6mo ago
I'd think it would be more autistic to continue to use and have interest in something that's been superseded by something far easier and more efficient.

Who would you think is weirder, the person still obsessed with horse & buggies, or the person obsessed with cars?

fxtentacle•6mo ago
So basically, he's leaving software development because the job market is bad. Instead, he's joining AI research, which (currently) has a healthier job market. That seems pretty reasonable to me, given that even widely used open source projects are only barely financially viable. Many open source projects end when the author finally gets a girlfriend; this one ends for a new job. Seems like a good outcome to me. Plus, a truly fascinating presentation.
KaiserPro•6mo ago
I mean, your analogy is almost there. The loom is pretty old.

What you're grasping for is https://en.wikipedia.org/wiki/Power-loom_riots

where a precipitous drop in earning power, combined with longer working hours, high inflation, and large companies making people unemployed, caused large-scale social unrest.

And yeah, I can see why they rioted.

FergusArgyll•6mo ago
Yeah, life has just been on a steady decline since 1826. Who wants all this food and medicine anyway
KaiserPro•6mo ago
I mean, if you want that argument then sure, but those riots were one step on a long path to workers' rights. The lesson here is that if we avoid exploiting workers and/or throwing them out on their arses, we can sidestep a load of social upheaval.

Or we can not, and just end up having a bloodbath.

mproud•6mo ago
As a complete outsider looking at this, without additional context, I just have a hard time believing there aren’t more reasons, they’re just not willing to share them:

* I’m not passionate about it anymore

* I’m tired

* I want to repurpose my free time

* I’m not adding enough value compared to other options now available

In the end, it’s pointless to argue about why someone feels the way they do. If they are firm on their stance, don’t waste anybody’s time, no matter how irrational their argument is. Give up trying to be right.

Probably this guy should have just stopped engaging directly with some of the dialogue, but the fact that he is exploring the idea of trying to hand it off in some manner tells me he really does care about the project.

sublinear•6mo ago
I don't think this moment will age well. Is this an attempt to create a personal brand story?
uludag•6mo ago
I've felt similar to the author, a sort of despair that the only point of writing software now is to prop up the valuation of AI companies, that quality no longer matters, etc.

Then I realized that nothing's stopping me from writing software how I want and feel is best. I've stopped using LLMs completely and couldn't be happier. I'm not even struggling at work or feeling like I'm behind. I work on a number of personal projects too, all without LLMs, and I couldn't feel better.

immibis•6mo ago
This is also a good opportunity to remember that MIT is not a strong enough open source license; if you want to prevent corporations from making money off your work, make it AGPL or even SSPL, plus a statement that AI training creates a derivative work (the latter may or may not have any legal effect).

MIT is a donation of your labour to corporations. With a stronger license, at least they're more likely to contribute back or to pay you for a looser license.

fouronnes3•6mo ago
When are we getting a GPLv4 that's AGPL + no LLM training? This is overdue.
pjerem•6mo ago
As if LLM training cared about respecting licenses. :(
Octoth0rpe•6mo ago
Given Meta's history of torrenting every book it could get its hands on for training, I'm not convinced that the majority of AI companies would respect that license. Maybe if we also had a better way to prove that such code was part of the training set and see a couple of solid legal victories with compensation awarded.
immibis•6mo ago
They're also getting sued for it, and the judge ruled they had no right to torrent those books so now it's just a matter of calculating how many trillions Meta has to pay, then extracting it from them.
Octoth0rpe•6mo ago
Because Meta got caught. I'm not convinced that every random OSS lib will have the resources to audit every model out there for a hypothetical GPL+no training violation.
bayindirh•6mo ago
I'm pretty astounded that "The Stack" at least made an effort, and continues to do so, by weeding out GPL and similarly strong copyleft source code from their trove, and even implemented an opt-out mechanism [0].

They look like saints compared to today's companies.

[0]: https://huggingface.co/spaces/bigcode/in-the-stack

ramses0•6mo ago
"Adversarial Internet" => if it touches the internet it's no longer yours. See a previous comment chain: https://news.ycombinator.com/item?id=44616163
ToucanLoucan•6mo ago
> if it touches the internet it's no longer yours

*Unless you're a member of the capital class, in terms of being a corporation or a wealthy individual, who can then make our two-tiered justice system work for you. As Disney is seemingly looking to do. Then it will absolutely work for you.

This is why I and people like me so often say "there is no war but the class war." Arguing about copyright misses the entire point: The law serves the large stakeholders in the system, not the people. The only thing that's changed is there is now a large stakeholder of whom a core pillar of their ongoing business is the theft of data at industrial scale which happens to include data of other large stakeholders which is why we're now seeing the slap fight.

By all means enjoy it, it's very entertaining watching these people twist themselves into knots to explain why it's okay for Nintendo to sue people into the ground for distributing copies of games they no longer sell in any capacity but simultaneously it's okay for OpenAI to steal absolutely goddamned everything on the grounds that nothing has been "really" taken due to being infinitely replicable, or because it's a public research org, or whatever flimsy excuse is being employed at the time.

As it has been from the beginning, my position is: whatever the rule we decide on, it should apply to everyone. A very simple statement on very basic ethics that seems to make a lot of people very angry for some reason.

ramses0•6mo ago
"The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread." Anatole France
ranger_danger•6mo ago
Be the change you wish to see.

Or just literally call your program's license "AGPL + no LLM training" and that may suffice.

immibis•6mo ago
The AGPL says you can ignore any additional restrictions the author tried to impose, which is why you'd frame it the other way around: LLM training already violates the AGPL by producing a derivative work.
kannanvijayan•6mo ago
Tangentially, I wonder if logins and click-throughs can help address this on the legal front.

Suppose you set up a login flow with a click-through that explicitly sets the terms of access: no cost for access by a person, and some large cost for access by an AI.

Stepping past this prompt into the content would require an AI to either lie, committing both fraud and unauthorized access of content, or behave truthfully, opting the AI's proprietor into the associated costs.

In either case, the site operator can then go after the company doing the scraping to collect the fees as specified in the copyright contract (and perhaps some additional delta of punitive fines if content was accessed fraudulently).
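A minimal sketch of that gate's fee logic (all names, prices, and fields here are illustrative assumptions, not a real API):

```python
# Hypothetical click-through gate: terms are accepted at login, and the
# declared agent type determines the per-access fee the client agrees to.
from dataclasses import dataclass

# Illustrative fee schedule in USD: free for people, costly for AI scrapers.
FEES = {"human": 0.0, "ai": 100.0}

@dataclass
class AccessRequest:
    accepted_terms: bool   # did the client click through the terms?
    declared_agent: str    # self-declared: "human" or "ai"

def evaluate_access(req: AccessRequest) -> tuple[bool, float]:
    """Return (allowed, fee). Refusing the terms denies access; accepting
    them binds the client to the fee for its declared agent type."""
    if not req.accepted_terms:
        return (False, 0.0)
    # An unknown or undeclared agent type is billed as AI by default.
    return (True, FEES.get(req.declared_agent, FEES["ai"]))
```

Under this framing, an AI that declares itself human to reach the content commits fraud; one that declares truthfully opts its operator into the fee.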

pythonaut_16•6mo ago
Alternatively MIT does exactly what it says it does. It's up to you as an author whether you like those terms or if you'd prefer GPL, AGPL, or SSPL.

If you want a permissive license MIT is perfectly reasonable. If you want more restrictions or stronger copy-left then don't pick MIT.

rollcat•6mo ago
As far as I was able to tell, every single coding LLM out there still violates the terms of the MIT license, because the license requires attribution - and LLMs rarely (if ever?) provide any.
zetanor•6mo ago
I've not used AI to program and have very little interest in using AI to program, but I fail to see how laundering code through massive probabilistic lossy compression (silicon) should be treated any differently than laundering code through massive probabilistic lossy compression (biological). Should humans have to keep track of which software codebases they learn each pattern from, too?
ranger_danger•6mo ago
My understanding is this was part of the reasoning for a certain US court to rule that AI art is (at least in a default sense) fair use. You're right, both humans and AI "create" things by using things we have seen before... some say art itself can only ever be the sum of our past influences.
rollcat•6mo ago
Calling humans massive probabilistic lossy compressors is an insult to curiosity, creativity, compassion, and any number of other traits that push us to advance technology. We've invented everything from Babbage and Ada's engines, vacuum tubes, and punch cards, to GPUs.

Code regurgitators can't even design a coherent API.

immibis•6mo ago
The point is that people who think they want permissive licenses usually don't, and eventually regret choosing them when a corporation treats their work as donated labour (because it is), assuming their software is important enough to be picked up by them (if not then license choice doesn't matter anyway).
bigstrat2003•6mo ago
> MIT is a donation of your labour to corporations.

No, MIT is a donation of your labor to the public. That includes corporations, yes, but it is not only corporations.

ranger_danger•6mo ago
I always found this stance puzzling. If the point of open source is to give your code to the public, why do people get upset when corporations do exactly what you told them they could do?

If you didn't want to give it to everyone, you shouldn't have chosen that license.

And if you choose a non-commercial license, people get upset that it's "not technically open source because the OSI says so" as if they are somehow the arbiter of this (or even should be). It's not like anyone owns the trademark to the term "open source" for software either.

Ironically, I've seen a lot of people in the last several years quit open source entirely and/or switch to closed source.

Alupis•6mo ago
> why do people get upset when corporations do exactly what you told them they could do?

A lot of people have been taught `corporations == bad`, part of the anti-capitalism efforts taught to our youth for a couple generations.

ranger_danger•6mo ago
Yes I understand... but they already knew that the license explicitly allows this, and they already knew companies regularly take advantage of FOSS without giving back, so I'm not sure why they were expecting to get lucky or something.

To me this is just like getting upset when someone forks your open source project. Which ironically I've seen happen a LOT. Sometimes the original developer/team even quits when that happens.

It's like... they don't actually want it to be open source, they want to be the ONLY source.

immibis•6mo ago
Because they don't think about it deeply - that's why reminders are necessary. They think they're only donating to people with similar attitudes to themselves. xGPL licenses (SSPL included) are the license family most similar to that...

... but MIT is what corporations told them they want. There has been a low-level but persistent campaign against xGPL in the past several years and the complaints always trace back to "the corporation I work for doesn't like xGPL." No individual free software developer has a problem with xGPL (SSPL not included).

ranger_danger•6mo ago
> No individual free software developer has a problem with xGPL

I do... I consider it the opposite of freedom. It places severe restrictions on your project that make it hard or impossible for some people (like companies) to use. And if your project contains lots of code from other people, it becomes nearly impossible to re-license if one day you decide you like or need money (assuming you have no CLA; I don't like those either).

But I also realize there's different kinds of freedom... freedom TO vs freedom FROM.

Some want the freedom TO do whatever they want... and others want freedom FROM the crazy people doing whatever they want.

I wish there was a happy medium but centrism doesn't seem to be very popular these days.

immibis•6mo ago
Which part of the GPL do you consider to be a "severe restriction" that "makes your project impossible to use"?

I agree that you can't legally take a bunch of GPL code and relicense it as proprietary. That's the point.

Freedom to/from is a false dichotomy; most rights can be expressed equivalently in either "to" or "from" form.

nurettin•6mo ago
It is not conspiracy, it is human nature.

Bernard Shaw put it best:

    If at age 20 you are not a Communist then you have no heart. If at age 30 you are not a Capitalist then you have no brains.
Alupis•6mo ago
> MIT is a donation of your labour to corporations.

Unless you are willing to spend yourself into financial ruin pursuing legal action against some faceless megacorp - it literally doesn't matter what license you use.

I've lived enough to know there is "what should be" and then there is what actually happens in reality. We don't live in a reality where everyone just does things out of the goodness of their heart...

Adding some text to your project, hosted on a public website for all to see means some people will take your code regardless of the license or your intent - and, realistically, what are you going to do about it? Nothing...

So... please, let's get off this GPL high-horse. It's not some end-all-be-all holy text that solves all of the world's problems.

jvanderbot•6mo ago
My boss has taken this approach, and it took a load off the "LLM pressure".
iaiuse•6mo ago
MIT isn’t “weak” because it allows LLM training; it’s weak because it puts zero obligations on the recipient.

Blocking “LLM training” in a license feels satisfying, but I’ve run into three practical issues while benchmarking models for clients:

1. Auditability — You can grep for GPL strings; you can’t grep a trillion-token corpus to prove your repo wasn’t in it. Enforcement ends up resting on whistle-blowers, not license text.

2. Community hard-forks — "No-AI" clauses split the ecosystem. Half the modern Python stack depends on MIT/BSD; if even 5% flips to an LLM-ban variant, reproducible builds become a nightmare.

3. Misaligned incentives — Training is no longer the expensive part. At today's prices a single 70B checkpoint costs about $60k to fine-tune, but running inference at scale can exceed that each day. A license that focuses on training ignores the bigger capture point.

A model company that actually wants to give back can do so via attribution, upstream fixes, and funding small maintainers (things AGPL/SSPL rarely compel). Until we can fingerprint data provenance, social pressure—or carrot contracts like RAIL terms—may move the needle more than another GPL fork.

Happy to be proven wrong; I’d love to see a case where a “no-LLM” clause was enforced and led to meaningful contributions rather than a silent ignore.

GoblinSlayer•6mo ago
Now that I think about it, it's a funny idea to poison LLMs into writing suckless programs, so the next Electron chat app will be more lightweight.
Danborg•6mo ago
This is dumb
tromp•6mo ago
How does "a very small (hence Pico), very simple, yet very secure encryption tool" come to depend on OpenGL, threatening its future on MacOS?
Timshel•6mo ago
> It's not easy to fix in the code either because it'll require major changes to the GUI library which can get messy, especially since GUIs were never a strength of Go.
Cthulhu_•6mo ago
There just doesn't seem to be a lot of viable competition to web based UIs these days.
kevingadd•6mo ago
If you're using a portable library that needs to render graphics on mac, it's probably using OpenGL to do it unless it has a platform-specific backend.
nicoburns•6mo ago
Historically, yes. These days it might well be using wgpu.
rollcat•6mo ago
Immediate-mode UI toolkits are designed for pluggable backends; some can even discover the appropriate backend at runtime. If you're writing a game, you're expected to (and in fact must) build your own integration.

ImGUI and Nuklear each have 20+ backends in their repos: <https://github.com/ocornut/imgui/tree/master/backends> <https://github.com/Immediate-Mode-UI/Nuklear/tree/master/dem...>

nullbyte808•6mo ago
I forked it and named it NanoCrypt. Time to rip out the GUI code. muha ha ha!
skylurk•6mo ago
Wouldn't that make it femtocrypt?
zikduruqe•6mo ago
There's a cli version anyways... https://github.com/Picocrypt/CLI
tptacek•6mo ago
Why else would you use this code other than for its UI?
jvanderbot•6mo ago
Honestly I feel the same.

I'm holding out hope that there will always be boutique/edge software to be written, which requires enough design and care to be mentally challenging and engaging - the craftsmanship kind.

When AlphaGo was announced, I had to keep telling people that "it's not that computers win at Go, it's that we now have a tool that makes us way better at Go". If an alien race showed up and challenged us to Go to save the species, would we send a human Go player or AlphaGo, if we had to choose?

The problem is LLMs aren't like that, because software isn't like Go. And, they really are annoying to use, frustrating to redirect all the time, and generally cannot do what you want precisely, without putting in more mental energy to provide context and decompose further than you'd need to do the damn thing yourself. And then you lose an hour/two of flow, which is the reward for the whole process.

But at certain times they are a godsend and they have completely replaced some of the more boring parts of my job. I wouldn't want to lose them, not at all.

Like the author, I don't think we're heading to a healthy balance where LLMs help us be better at our job. I do think the hype is going in the wrong direction, and I do worry for the state of our field (Or at least the _ideals_ of our field). Call me naive, but I also thought it mattered what the code was.

spapas82•6mo ago
This was really huge. I actually had to pass it to an LLM to get an abstract ....

I didn't know about picocrypt but I already have two options for safely encrypting files: 7zip with its AES-256 (simple) and veracrypt with various algorithms (more involved, but it lets you mount the encrypted volumes). Actually these are already mentioned in the tool's README; great work: https://github.com/Picocrypt/Picocrypt?tab=readme-ov-file#co...
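For the 7-Zip option, a hedged sketch of the invocation (the `-p` and `-mhe=on` flags are standard 7z options; `-mhe=on` additionally encrypts the archive headers so even file names are hidden; here the command is only built, not executed):

```python
# Build (but don't run) a 7z command for an AES-256, password-protected
# archive. The 7z format encrypts with AES-256 when a password is given.
import subprocess  # used only if you uncomment the run() call below

def encrypt_with_7z(archive: str, files: list[str], password: str) -> list[str]:
    """Return the 7z command line for an encrypted archive of `files`."""
    cmd = ["7z", "a", "-t7z", f"-p{password}", "-mhe=on", archive, *files]
    # subprocess.run(cmd, check=True)  # uncomment if 7z is on your PATH
    return cmd
```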

jongjong•6mo ago
Damn. The part about quality over quantity really hit home. I also got into software engineering as a way to exert craftsmanship and am disappointed with the state of things. It's potentially a great field for people who are interested in the pursuit of perfection. There are few areas which are as complex and take so much time and effort to master.

Software engineering provides a window into reality in a way which exposes you to its full complexity. It changes your brain, you start seeing complexity everywhere around you. You start seeing problems that nobody else sees. This is why I got into coding... But now the industry often feels like it's leading you astray and preventing you from truly flourishing.

This is sad because being skilled at coding feels great and it shapes your reality in a very positive way. Being able to think clearly is a great gift and a worthy goal to strive for. Having a logical mind feels inherently good. Being able to approach any topic, with anyone and maintaining full logical consistency feels good.

uyzstvqs•6mo ago
TLDR: Author of an open-source project has a crashout over other people using LLMs for coding, believes that AI will replace all developers, and decides to preemptively give up on software engineering entirely because of that.

IMO anyone who understands AI at a technical level will understand that this won't happen. No matter how many parameters, training and compute you throw at it, putting AI in direct charge of anything that's critical and not entirely predictable is going to backfire. Though, based on this author's response, it should be apparent that it comes from a place of emotion, misunderstanding, and likely conformism to dogmatic anti-AI rhetoric of the same nature, rather than actual reason and logic.

progx•6mo ago
It is a matter of time. In 5 years, no problem. In 10 years, some devs will be replaced. In 15 years, I don't think "pure" developer jobs will exist in most companies.
stoneyhrm1•6mo ago
I understand the author's sentiment but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work with them.

Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore it and go preach to a dying field (specifically, writing code, not CS), or embrace it. We should be happy we no longer need to type away if and for loops 20 times and can instead focus on high-level architecture.

dingnuts•6mo ago
it's not LLMs vs typing for loops by hand. It's LLMs vs snippets and traditional cheap, pattern based code generation, find and replace, and traditional refactoring tools

those are still faster and cheaper and more predictable than an LLM in many cases

StopVibeCoding•6mo ago
too few developers understand this
Tepix•6mo ago
The (current) last comment by hakavlad (same as hakavlad on HN perhaps?):

    @HACKERALERT Your decision may be somewhat irresponsible towards those who donated to the audit.
That audit was one year ago. The money didn't go to the author. The source continues to be available. The author doesn't owe you a thing.
rowanG077•6mo ago
Yes, I found this a profoundly weird comment. The audited code will be forever available and audited.
axus•6mo ago
Human beings are weird, and donations aren't always based on reason. I say it's better to discuss the feelings than worry about disapproval.

Surely a recent audit only increases the odds of someone assuming responsibility for a fork, knowing there is a solid baseline to proceed from.

hakavlad•6mo ago
>The money didn't go towards the author.

Perhaps many would have refused to donate if they knew that the project would be archived within a year. Collecting for an audit and then archiving the project is, in a way, a violation of expectations.

IncreasePosts•6mo ago
Did they perform the audit? That is what is important.

The more and more you start modifying code after the audit, the more and more useless the audit becomes.

hakavlad•6mo ago
Yes, they performed it.
ranger_danger•6mo ago
> That is what is important

Depends on your perspective... If I'd known the project was going to stop soon after I donated, I probably wouldn't have donated, even if the purpose of the money was strictly for an audit.

pavel_lishin•6mo ago
Would they have refused to donate if they knew the author would be hit by a bus in a year? Or hired by someone who refused to allow them to continue working on it?

I don't think the author had explicit plans to do this a year ago.

insane_dreamer•6mo ago
Did the author do the audit? Is the audit available? If so, then they did what people donated for. End of story.
GoblinSlayer•6mo ago
What are the expectations? An audit is invalidated by the first change after it, so archiving is basically necessary. VeraCrypt was audited too, lol.
zamadatix•6mo ago
In regards to "As long as you can run the code, archiving this project means nothing, really," I think this section misses the main concern of archived software: what happens when one of those bugs is hit (either something not yet noticed, or something caused by external changes down the road) and there is no actively maintained version (which could include one you're willing to hack at yourself) to just update to?

The simpler the software the less urgent the concern but "I haven't had a problem in the last 2 years" is something I could say of most software which I end up needing to update, and it makes sense to make myself able to do so at a convenient time rather than the moment the problem occurs.

This project seems popular enough that waiting a bit and seeing who the successor project is would be a safe bet as well, though.

blenderob•6mo ago
I'm starting to feel kinda old and out of the loop. Could someone please explain the conversational style of this post?

It begins with a prompt directed at Gemini, followed by what appears to be an AI-generated response. Are these actual AI responses or is the developer writing their parting message in a whimsical way? I'm genuinely confused. Help much appreciated!

yifanl•6mo ago
This is a post framed in medias res: the developer, as portrayed by themselves, asks an LLM to construct the post that they then publish immediately afterwards.

I'm unsure if the post is actually created by Gemini or the developer's imitation. I suspect the latter.

morkalork•6mo ago
It is also a demonstration of what he is frustrated with: what software development is becoming.
devheart•6mo ago
He posted a conversation with Gemini, including his real posts and then Gemini's responses.
phyzome•6mo ago
It's something of a satire, although presumably actually comes from a "conversation" with Gemini (because who would bother writing a bot's verbose and uninsightful responses like that?)
r00t-•6mo ago
Lol what a drama queen
tootie•6mo ago
> I originally intended to work in the software engineering industry, but seeing the complete disregard for high quality code, overpowering greed and hype, and the layoffs that follow from it

Replace "software engineering" with literally anything and this is a true statement since the dawn of civilization.

nektro•6mo ago
> quits dev due to frustration at LLM adoption

> new career path is LLM research

what?? how do you square that circle. deciding to go contribute to the very thing that steered your path off course