
Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•2m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•3m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•5m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•5m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•8m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•8m ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•13m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
2•throwaw12•14m ago•1 comment

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•14m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•15m ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•17m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•20m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•23m ago•1 comment

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•29m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•31m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•36m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•38m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•38m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•41m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•42m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•44m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•45m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•48m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•49m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•52m ago•1 comment

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•53m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•53m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•55m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•58m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comment

AI Tribalism

https://nolanlawson.com/2026/01/24/ai-tribalism/
60•zurvanist•1w ago

Comments

pier25•1w ago
> heck, they could even double and it’d still be worth it

What about 10x more?

hackyhacky•1w ago
Shhh, don't give them any ideas.
pier25•1w ago
they might not have a choice when all that VC money runs out
njhnjhnjhnjh•1w ago
I'd pay $5,000-$10,000 per year for a full-time AI engineer powered by Claude or a similar backend.

Edit: If I get a raise, I'd consider paying up to $25,000 per year for the aforementioned Claude automaton.

justkys•1w ago
I agree with the thrust of this but:

> The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it), and we don’t need another breakthrough.

The costs should come down. I don't know which costs this post refers to, but the price of using Claude today is almost certainly hiding the actual cost.

That said, I'm still hoping the public models out there work well enough with opencode or other options that my cost becomes transparent to me: something added to my electric bill rather than a subscription to Claude.

mromnia•1w ago
Considering what's happening with PC component prices, it's likely we won't have anything to run those public models on anyway. Everything might become permanently cloud-only at some point.
rudedogg•1w ago
This is kind of where I'm at.

I don't think anything is certain though. I think it's 50/50 on whether Anthropic/whoever figures out how to turn them into more than a boilerplate generator.

The imprecision of LLMs is real, and a serious problem. And I think a lot of the engineering improvements (little s-curve gains or whatever) have introduced more and more of it. Every step or improvement has some randomness/lossiness attached to it.

Context too small?:

- No worries, we'll compact (information loss)

- No problem, we'll fire off a bunch of agents each with their own little context window and small task to combat this. (You're trusting the coordinator to do this perfectly, and cutting the sub-agent off from the whole picture)

All of this is causing bugs/issues?:

- No worries, we'll have a review agent scan over the changes (They have the same issues though, not the full context, etc.)
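
Sketched as purely illustrative code - llm() is a stub standing in for any model call, and every name here is made up - the pattern above looks roughly like:

    def llm(prompt: str) -> str:
        # Stub standing in for any chat-completion call.
        return "<model output for: " + prompt[:40] + "...>"

    def token_count(history: list[str]) -> int:
        return sum(len(turn.split()) for turn in history)  # crude token proxy

    MAX_TOKENS = 100_000

    def compact(history: list[str]) -> list[str]:
        # Mitigation 1: summarize old turns to free context. The summary is
        # lossy -- details of the dropped turns are gone for good.
        summary = llm("Summarize this conversation:\n" + "\n".join(history[:-5]))
        return [summary] + history[-5:]

    def run_task(task: str) -> str:
        history = [task]
        if token_count(history) > MAX_TOKENS:
            history = compact(history)
        # Mitigation 2: a coordinator fans the work out to sub-agents, each
        # seeing only its own slice, never the whole picture.
        subtasks = llm("Split into independent subtasks:\n" + task).splitlines()
        results = [llm("Do this subtask only:\n" + sub) for sub in subtasks]
        # Mitigation 3: a review agent checks the merge -- with the same
        # limited context, and therefore the same blind spots.
        return llm("Review and merge these results:\n" + "\n".join(results))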

Right now I think it's a fair opinion to say LLMs are poison and I don't want them to touch my codebase, because they produce more output than I can handle, and the mistakes they make are so subtle that I can't reliably catch them.

It's also fair to say that you don't care, and your work allows enough bugs/imprecision that you accept the risks. I do think there's a bit of an experience divide here, where people more experienced have been down the path of a codebase degrading until it's just too much to salvage – so I think that's part of why you see so much pushback. Others have worked in different environments, or projects of smaller scales where they haven't been bit by that before. But it's very easy to get to that place with SOTA LLMs today.

There's also the whole cost component to this. I think I disagree with the author about the value provided today. If costs were 5x what they are now, it would be hard for me to decide whether they're worth it. For prototypes, yes. But for serious work, where I need things to work right and be reasonably bug-free, I don't know if the value works out.

I think everyone is right that we don't have the right architecture, and we're trying to fix layers of slop/imprecision by slapping on more layers of slop. Some of these issues/limitations seem fundamental and I don't know if little gains are going to change things much, but I'm really not sure and don't think I trust anyone working on the problem enough to tell me what the answer is. I guess we'll see in the next 6-12 months.

simonw•1w ago
> I do think there's a bit of an experience divide here, where people more experienced have been down the path of a codebase degrading until it's just too much to salvage – so I think that's part of why you see so much pushback.

When I look back over my career to date there are so many examples of nightmare degraded codebases that I would love to have hit with a bunch of coding agents.

I remember the pain of upgrading a poorly-tested codebase from Python 2 to Python 3 - months of work that only happened because one brave engineer pulled a skunkworks project on it.

One of my favorite things about working with coding agents is that my tolerance for poorly tested, badly structured code has gone way down. I used to have to take on technical debt because I couldn't schedule the time to pay it down. Now I can use agents to eliminate that almost as soon as I spot it.

mirsadm•1w ago
I've used Claude Code to do the same (large refactor). It has worked fairly well, but it tends to introduce really subtle changes in behaviour (almost always negative) which are very difficult to identify. Even worse, if you use it to fix those issues it can get stuck in a loop, constantly reintroducing slightly different issues, so you end up fixing the same things over and over again.

Overall I like using it still but I can also see my mental model of the codebase has significantly degraded which means I am no longer as effective in stopping it from doing silly things. That in itself is a serious problem I think.

soulofmischief•1w ago
Yes, if you don't stay on top of things and rule with an iron fist, you will take on tons of hidden tech debt using even Opus 4.5. But if you manage to review carefully and intercede often, it absolutely is an insane multiplier, especially in unfamiliar domains.
akomtu•1w ago
LLM is like a chef that cooks amazing meals in no time, but his meals often contain small pieces of broken glass.
AstroBen•1w ago
> The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it)

What worries me about this is that it might end up putting up a barrier for those that can't afford it. What do things look like if models cost $1000 or more a month and genuinely provide 3x productivity improvements?

alansaber•1w ago
They want you to have to pay for an advantage. If a single AI provider gets enough advantage, they'll be able to charge whatever they want.
ls612•1w ago
Given that models seem to be converging on similar capabilities, and that there are plenty of open-weights models out there, market competition should drive prices towards the marginal cost of inference.
skybrian•1w ago
If they're paying you, they can afford it. Also, even if running large teams of coding agents becomes practical, you don't necessarily need more than one or two to learn.
mkozlows•1w ago
I mean, your employer will pay it. $1K/month is cheap for your employer.

But there is an interesting point about what it does to hobby dev. If it takes real money just to screw around for fun on your own, it's kinda like going back to the old days when you needed to have an account on a big university system to do anything with Unix.

AstroBen•1w ago
Open source software

Small bootstrapped startups

Are more what I had in mind. Of course an established company can pay it. I don't like the idea of a world where all software is backed by big companies

mkozlows•1w ago
I'm not too worried about startups: We used to have startups when they had to buy expensive physical servers and pay for business-class T1 connections and rent offices and all that. The idea that you can start a company with $20 and a dream is relatively new, and honestly a little bit of friction might be good.

But yeah, I share your concern about open source and hobby projects. My hope would be that you get free tiers that are aimed at hobby/non-profit/etc stuff, but who knows.

onion2k•1w ago
$1000 a month to make someone who's being paid $10,000 a month even 1.5x more productive is well worth the price.
AstroBen•1w ago
Today we have open source projects that can compete with proprietary ones just because people without initial funding had the ability to make it competitive

You can bootstrap something with yourself and a friend with some hard work and intelligence

This is available to people all over the world, even those in countries where $1000 is a month's salary

Microsoft and their employees will be fine, yeah. That's not who I'm thinking about

Sharlin•1w ago
> I’m mostly […] doing routine tasks that it’s slow at, like refactoring or renaming.

So… humans are now doing the stuff that computers are supposed to do and be good at?

XenophileJKO•1w ago
No, I think he means using a refactor tool in the IDE. Though really, all we need to do is expose an API for the agent, which we should do.
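
For example, a hypothetical sketch using Python's rope library as the refactoring engine (the tool wiring around it would be whatever your agent framework provides):

    from rope.base.project import Project
    from rope.refactor.rename import Rename

    def rename_symbol(repo: str, file: str, offset: int, new_name: str) -> None:
        # Project-wide, syntax-aware rename: deterministic, unlike asking
        # the model to rewrite every reference token by token.
        project = Project(repo)
        resource = project.get_resource(file)
        changes = Rename(project, resource, offset).get_changes(new_name)
        project.do(changes)  # applies the edit across the whole project
        project.close()

The point being that the agent asks for the refactor rather than performing it.
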
Sharlin•1w ago
Yeah. Lots of talk about agents and whatever, but then you lack minimal integration with the most basic of tools?
crazygringo•1w ago
They're building all this stuff. Be patient. There's a large backlog. And it takes time and experimentation to figure out.
sph•1w ago
> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it.

Imagine if we had to suffer these posts, day in and day out, when React or Kubernetes or any other piece of technology got released. This kind of proselytizing is the very reason there is tribalism around AI.

I don't want to use it, just like I don't want to use many technologies that got released, while I have adopted others. Can we please move on, or do we have to suffer this kind of moaning until everybody has converted to the new religion?

Never in my 20 years in this career have I seen such maniacal obsession as over the past few years: the never-ending hype that has transformed this forum into a place I do not recognise, into a career I don't recognise, where people you used to respect [1] have gone into a psychosis and dream of ferrets, and where, if you dare to be skeptical about any of it, you are bombarded with "I used to dislike AI, now I have seen the light and if you haven't I'm sorry for you. Please reconsider." stories like this one.

Jesus, live and let live. Stop trying to make AI a religion. It's posts like this one that create the sort of tribalism they rail against, turning the debate into a battle between the "enlightened few" and the silly Luddites.

1: https://news.ycombinator.com/item?id=46744397

WA•1w ago
The author of that post Nolan is a pretty interesting guy and deep in the web tech stack. He’s really one of the last people I’d call "tribal", especially since you mention React. This guy hand-writes his web components and files bug reports to browsers and writes his own memory leak detection lib and so on.

If such a guy is slowly dipping his toes into AI and comes to the conclusion he just posted, you should take a step back and consider your position.

JoshTriplett•1w ago
I really don't care what authority he's arguing from. The "just try it" pitch here is fundamentally a tribalist argument: tribes don't want another tribe to exist that's viewed as threatening to them.
rpdillon•1w ago
Trying a new technology seems like what engineers do (since they have to leverage technology to solve real problems, having more tools to choose from can be good). I'm surprised it rings as tribalist.
JoshTriplett•1w ago
The impression I get from this post is that anyone who doesn't like it needs to try it more. It doesn't really feel like it leaves space for "yeah, I tried it, and I still don't want to use it".

I know what its capabilities are. If I wanted to manage a set of enthusiastic junior engineers, I'd work with interns, which I love doing because they learn and get better. (And I still wouldn't want to be the manager.) AIs don't, not from your feedback anyway; they sporadically get better from a new billion dollar training run, where "better" has no particular correlation with your feedback.

rpdillon•1w ago
I think it's going to be important to track. It's going to change things.

I agree on your specific points about what you prefer, and that's fine. But as I said 15 years ago to some recent Berkeley grads I was working with: "You have no right to your current job. Roles change."

AI will get better and be useful for some things. I think it is today. What I'm saying is that you want to be in the group that knows how to use it, and you can't get there if you have no experience.

munksbeer•1w ago
There is of course an option here, you can just completely ignore the suggestion and all of these posts.
jaredcwhite•1w ago
Honestly that's what makes this all the more dangerous. He's trying to have his cake and eat it too: accept all of the hype and all of the propaganda, but then couch it in the rhetoric of "oh I'm so concerned I can remain in a sort of moderate & empathetic position and not fall prey to tribalism and flame wars."

There's no both-sides-ing of genAI. This is an issue akin to street narcotics, mass weapons of war, or forever chemicals. You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity. The OP is not a thoughtful moderate because that's not how any of this works.

munksbeer•1w ago
> You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity.

I don't think this has yet been established. We'll have to wait and see how it turns out. My inclination is it'll turn out like most other technological advancements - short term pain for some industries, long term efficiency and comfort gain for humans.

Despite the anti-capitalist zeitgeist, more humans today live like kings compared to a few hundred years ago, or even 100 years ago.

But you seem to have jumped to the conclusion that everyone agrees AI is harmful.

soulofmischief•1w ago
I don't think you understand how much things are about to change in a relatively short time. A lot of people are rightfully confused and concerned.

Many people are seeing this as an existential moment requiring careful navigation and planning, not just another language or browser or text editor war.

rpdillon•1w ago
This is exactly my position. Landscape-changing technology is impossible to get away from, because it follows you. It's like a local business owner in 1998 telling me they didn't care about the stupid "internet" thing, and then the internet blew away their business within 10 years. Similar story with the PC: folks didn't get the option to just "opt out" of a digital office because they liked typewriters and paper. Cell phones were this way also, and while many people post about how they hate their phones and need to quit using them so much, pretty much everyone admits you can't live in society without one because they have pervaded so many interactions.

So that's how I think AI will be seen in 20 years: like the PC, the internet, and mobile phones. Tech that shapes society, for better or worse.

soulofmischief•1w ago
100%, even if models stopped advancing today, there's already enough utility that just needs to be constrained by traditional software. It's not going away; it's going to change our interfaces completely, and change how services interface with each other, how they're designed, and change the pace at which software evolves.

This is a tipping point, and most anti-AI advocates don't understand that the other software developers who keep telling them to reevaluate their position are often just trying to make sure no one is left behind.

rpdillon•1w ago
You of course don't have to use AI. Your core point is correct: the world around you is changing quickly, and in unpredictable ways. And that's why it's dangerous to ignore: if you've developed a way of working that worked in the world of 10 years ago, there's a risk it won't play the same way in the world of 2030. So this is the time-frame to prepare for whatever that change will be.

For some people, that's picking up the tool and trying to figure out what it's good for (if anything) and how it works.

mwkaufma•1w ago
"I'm not being tribal, it's everyone _else_."
ls612•1w ago
I saw a similar inflection point to this guy personally, in 2024 the models weren’t good enough for me to use them for coding much, but around the time of o1/o3/Gemini 2.5 was when things changed and I haven’t looked back since.
lins1909•1w ago
What if I just enjoy how I work at the moment and don't really care about this stuff? Why do I _have_ to give it a go? Why don't LLM evangelists accept this as an option?

Choosing not to use AI agents is maybe the only tool position I feel I've had to defend or justify in over a decade of doing this, and it's so bizarre to me. It almost reeks of insecurity from the Agent Evangelists and I wonder if all the "fear" and "uncertainty" they talk about is just projecting.

mkozlows•1w ago
Nobody pushed you to use git when you were comfortable with svn? Nobody pushed you to use Docker when you were comfortable running bare metal? Nobody pushed you to write unit tests when you were comfortable without them? Nobody pushed you to use CSS for layout when you were happy using tables?

Some of those are before your time, but: The only time you don't get pushed to use new technologies is when a) nothing is changing and the industry is stagnant, or b) you're ahead of the curve and already exploring the new technology on your own.

Otherwise, everyone always gets pushed into using the new thing when it's good.

mycall•1w ago
> Otherwise, everyone always gets pushed into using the new thing when it's good.

and then there is AS/400 and all the COBOL still in use which AI doesn't want to touch.

danaris•1w ago
Git had obvious benefits over svn.

Docker has obvious benefits over bare metal.

Etc.

My own experiences with LLMs have shown them to be entertaining, and often entertainingly wrong. I haven't been working on a project I've felt comfortable handing over to Microsoft for them to train Copilot on, and the testimonials I've seen from people who've used it are mixed enough that I don't feel like it's worth the drawbacks to take that risk.

And...sure, some people have to be pushed into using new things. Some people still like using vim to write C code, and that's fine for them. But I never saw this level of resistance to git, Docker, unit tests, or CSS.

mkozlows•1w ago
The resistance to those things was less angry, but it was there. giveupandusetables dot com no longer exists, but you can find plenty of chin-stroking blog posts about it. It was a big argument in the late aughts!
eichin•1w ago
The engineers using svn were the ones who were pushing for git - I was the one saying "we can't, because none of the conversion tools competently preserve branch history, and it's even worse on repos that started in CVS". No one responsible for repos was pushing for git; it was end-users pulling for it (and shutting up when they learned how much work it would cause :-) That looked nothing like the drug-dealer-esque LLM push I've been seeing for the last 3 years.

(Likewise with CVS to svn: "you can rename files now? and branches aren't horrible? Great, how fast can we switch?" - no "pushing" because literally everyone could see how much better it was in very concrete cases, it was mostly just a matter of resource allocation.)

In the context of this discussion, it feels more like IPv6 :-)

sixtyj•1w ago
Some people don’t like to be pushed. They want their own rhythm.

But when you stop trying new stuff ("because you don't want to"), it is a sign that your inner child got lost. (Or you have depression or burnout.)

lins1909•1w ago
Lol. Surely this depends on what the new stuff is? Looks like all nuance goes out of the window when agents are involved.
rsynnott•1w ago
> Nobody pushed you to use git when you were comfortable with svn?

Generally, sensible companies held off on this sort of transition until the tool was mature and stable.

Way back in the day, I was involved in an abortive move from CVS to SVN. It went great for a week, then the SVN db corrupted itself irretrievably, taking a week's work with it... I think we finally moved for real about a year later, when SVN had abandoned its extremely unreliable BDB backend that early versions used.

Forcing adoption of [AI tool of the month] now feels a bit more like, say, adopting Darcs back during the DVCS wars than adopting git after it had won.

johnfn•1w ago
I think the standing assumption is that most of us take pride and enjoyment in being good at our craft - and some of us even want to be great at it. That means understanding all the tools at our disposal - to see if they are useful or not.

If that is not interesting to you I think that’s a totally fine choice, but you’re getting a lot of pushback from people who have made a different choice.

hypeatei•1w ago
I'm the same as you: I don't really care about this stuff. Given what we know about the start of these AI endeavors (torrenting/scraping the whole internet and training on it), running their software seems like a liability. Is it pasting code verbatim that falls under an incompatible license? Is it training on my codebase? Why would I want to depend on this very compute-intensive, cloud-hosted tool?

At least other advancements in our field, like git, Docker, etc., were made with a local-first mindset (e.g. your git repos can live anywhere, and the same goes for your docker images)

vidar•1w ago
Your work will eventually be driven by the same economics as the industry as a whole: project estimates 12 months from now will be based on how long things take a dev with full LLM backing, not on your current speed. Then you need to be prepared to work at that speed.
kranner•1w ago
This confidence may be somewhat premature at this stage. The industry is extremely fashion-driven and may decide that agentically-based development results in mediocre and unmaintainable code. It depends too on how the flagship models develop.

Also not all of us need to sell ourselves as high-speed AI-boosted developers, especially those with decades of experience. Investors might well choose to invest in artisanal coding, and many of us can act as our own investors as well. So the inevitability of agentism is still undecided IMHO.

retrac98•1w ago
You don’t have to, of course, but you probably will if you want to be competitive in a professional capacity in the future.

Not doing so seems a bit like a farmer ploughing fields and harvesting crops by hand while seeking to remain competitive with modern machinery, surely?

kranner•1w ago
It remains to be seen whether these analogies will still hold in the longer term, or whether agentic code will come to be seen as cheap but buggy and mediocre.
nlawalker•1w ago
You don’t have to! Enjoy it! Just don’t bank on getting paid for it indefinitely. That’s the aspect of it that’s causing so much consternation.
lins1909•1w ago
And don't bank on getting economic benefits out of being able to use Claude Code either!
onion2k•1w ago
> It almost reeks of insecurity from the Agent Evangelists and I wonder if all the "fear" and "uncertainty" they talk about is just projecting.

That's probably true on some level for some evangelists, but it's probably just as true that some people who are a bit scared of AI read every positive post about it as some sort of propaganda trying to change their mind.

Sometimes it's fine to just let people talk about things they like. You don't know what camp someone is in so it's good to read their post as charitably as possible.

Aeolun•1w ago
> Why do I _have_ to give it a go?

Because your boss is going to want you capable of using these things effectively even as shortly as 1-2 years from now? If not them, then their boss.

shantara•1w ago
I've seen a lot of developer tooling change and evolve over the course of my career, but with AI it was the first time I've seen people in non-technical managerial positions trying to force the engineers to make a switch. It was extremely bizarre.
halJordan•1w ago
What? Was anyone forcing people off vim when VS Code dropped? No, go use vim. Same here. Why are you acting persecuted when you can literally just do nothing?
themafia•1w ago
> I can already hear the cries of protest from other engineers who (like me) are clutching onto their hard-won knowledge.

You mean the knowledge that Claude has stolen from all of us and regurgitated into your projects without any copyright attributions?

> But I see a lot of my fellow developers burying their heads in the sand

That feeling is mutual.

yibers•1w ago
We did the same as developers before Claude. We would copy-paste from Stack Overflow. Now this process is heavily automated.
sodapopcan•1w ago
...from answers that were publicly shared without license. It's not the same thing, even though everyone LOVES to make this argument.

Also: over the past 20 years, I could count on one hand the number of times I have been able to get away with outright copy/paste from SO.

eichin•1w ago
Stack Overflow code has a license (not per post, but a blanket one depending on the year - https://stackoverflow.com/help/licensing - it's mostly CC BY-SA). I've written corporate policies that emphasize that you can learn from SO answers, but (as you point out) they basically never fit exactly - and you should include a link to the original so when the next Ubuntu LTS breaks your clever hack, we can see if someone has already posted a fix :-)
rpdillon•1w ago
In a prior job, I had to scan a 2M+ line codebase for software license violations to support the sale of a unit to another corporation. One class of violation was using SO snippets, because they are licensed under CC and not compatible with the distribution model the new company was planning. Many weeks of work to track them all down.
sodapopcan•6d ago
Fair enough! Though my wholesale copy/paste point still stands.
themafia•1w ago
> Now this process is heavily automated.

And comes with a price tag paid to people who neither own nor generated that content. You don't think that shifts the ethical boundaries _significantly_?

rpdillon•1w ago
I don't. The general trend is that, in US rulings, courts have found that if the material was obtained legally, then training can be fair use. My understanding is that getting LLMs to regurgitate anything significant requires very specific prompting, in general.

I would very much like someone to give me the magic reproduction triple: a model trained on your code, a prompt you gave it to produce a program, and its output showing copyright infringement on the training material used. Specific examples are useful; my hypothesis is that this won't be possible using a "normal" prompt that's in general use, but will require a prompt containing a lot of directly quoted content from the training material that then asks for more of the same. This was a problem for the NYT when they claimed OpenAI reproduced the content of their articles... they achieved this by prompting with large, unmodified sections of the article, and then the LLM would spit out a handful of sentences. In their briefing to the court, they neglected to include their prompts for this reason. I think this is significant because it relates to what is really happening, rather than what people imagine is happening.
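
To be concrete about the measurement, something like this crude sketch would do - the n-gram overlap here is just a stand-in for a real memorization analysis:

    def ngram_overlap(generated: str, original: str, n: int = 8) -> float:
        # Fraction of n-word windows in the model output that appear
        # verbatim in the original work.
        orig_words = original.split()
        grams = {tuple(orig_words[i:i + n])
                 for i in range(len(orig_words) - n + 1)}
        gen_words = generated.split()
        windows = [tuple(gen_words[i:i + n])
                   for i in range(len(gen_words) - n + 1)]
        if not windows:
            return 0.0
        return sum(w in grams for w in windows) / len(windows)

    # The "triple" would then be (model, prompt, output) such that
    # ngram_overlap(output, training_text) is high for a *normal* prompt,
    # not one that already quotes the work at length.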

But I guess we'll get to see from the NYT trial, since OpenAI is retaining all user prompts and outputs and providing them to the NYT to sift through. So the ground truth exists; I'm sure they'll be excited to cite all the cases where people were circumventing their paywall with OpenAI.

themafia•1w ago
> My understanding is that getting LLMs to regurgitate anything significant requires very specific prompting, in general.

Then you have been misled:

https://arstechnica.com/features/2025/06/study-metas-llama-3...

> I would very much like someone to give me the magic reproduction triple

Here's how I saw it directly: I searched for "node http server example." Google's AI spit out an "answer." The first link was a Digital Ocean article with an example. Google's AI had completely reproduced the DO example, down to the content of the comments themselves.

So... I don't know what to tell you. How hard have you been looking yourself? Or are you just trying to maintain distance with the "show me" rubric? If you rely on these tools for commercial purposes, then the onus was always on you.

> So the ground-truth exists

And you expect a civil trial to be the most reliable oracle of it? I think you know what I know but would rather _not_ know it.

rpdillon•1w ago
To your last statement: not at all. I think releasing all the chats publicly would show that basically no one is using ChatGPT to circumvent paywalls because the model was trained on that material.

As to your Ars article, I'm familiar because I read Ars.

> The chart shows how easy it is to get a model to generate 50-token excerpts from various parts of Harry Potter and the Sorcerer’s Stone. The darker a line is, the easier it is to reproduce that portion of the book.

50-token excerpts are not my concern; that's 40 words. The argument the plaintiffs need to make is that people are not paying for the NYT because of ChatGPT (part of the four fair-use pillars; I could expand, but won't). That's gonna be tough. Let's revisit this after the ruling and/or settlement.

wvenable•1w ago
> You mean the knowledge that Claude has stolen from all of us and regurgitated into your projects without any copyright attributions?

You can't, and shouldn't be able to, copyright and hoard "knowledge".

themafia•1w ago
I did not suggest that; however, the law is clear. If I use my knowledge to produce code under a specific license, and you then take that code and reproduce it without the license, you have broken the law.

You can twist this around as much as you like, but there are several studies showing that LLMs can and will happily reproduce content from their training data.

wvenable•1w ago
> If I use my knowledge to produce code, under a specific license, then you take that code, and reproduce it without the license, you have broken the law.

Correct. But if I read your code, produce a detailed specification of it, and then give that specification to another team (that has never seen your code), and they create a similar product, then they haven't broken the law.

LLMs reproducing exact content from their training data is a symptom of overfitting, and is an error that needs correcting. Memorizing specific training data means the model is not generalizing enough.

themafia•1w ago
> and they create a similar product then they haven't broken the law.

That costs significantly more and involves the creation of jobs. I see this as a great outcome. There seems to be a group of people who share the opposite of my views on this matter.

> and is an error that needs correcting

It's been known for years. They don't seem interested in doing that or they simply aren't capable. I presume because most of the value in their service _is_ the copyright whitewashing.

> Memorizing specific training data means that it is not generalizing enough.

Is that like a knob they can turn or is it something much more fundamental to the technology they've staked trillions on?

wvenable•1w ago
> That costs significantly more and involves the creation of jobs. I see this as a great outcome.

I don't see it that way. If whatever you're doing can now be automated, then it has become a bullshit job. It is no longer a benefit to humanity to have a human sit on their ass, stand on their feet, or break their back to do a job that can be automated. As a software developer, it's my job to take the dumb repetitive stuff that humans do and make it so that humans never have to do that job again.

If that's a problem for society, it's because society is messed up.

> It's been known for years. They don't seem interested in doing that or they simply aren't capable.

I don't find that to be a particularly big problem. Fundamentally, an AI isn't just compressing all human knowledge and decompressing it on demand; it's tweaking parameters in a giant matrix. I can reproduce the lyrics of songs that I've heard, but that doesn't mean there is a literal copy of each song in my brain that you could extract with a well-placed scalpel. It just means I've heard it a bunch of times and the giant matrix in my brain is tuned to be able to spit it out.

> Is that like a knob they can turn or is it something much more fundamental to the technology they've staked trillions on?

In a sense, it is a knob. It's not fundamental to the technology; if a model is reproducing something exactly, that likely means it's over-trained on that data. That is actually bad for the models (it makes them more incorrect, more rigid, and more repetitive), so it is a knob they will turn.

nateburke•1w ago
where are the productivity gains in GDP?

where are the websites that are lightning fast, where speed and features and ads have been magically optimized by ai, and things feel fast like 2001 google.com fast

why does customer service still SUCK?

Aeolun•1w ago
Because companies now develop shitty websites faster, they don’t magically get better.
wvenable•1w ago
We are still massively lacking in software. We're not at the stage of making websites faster, we're at the stage of making more of them. We haven't come close to hitting the point where we have enough software and now the job is to refine it.
cdata•1w ago
AI has pushed me to arrive at an epiphany: new technology is good if it helps me spend more time doing things that I enjoy doing; it's bad if it doesn't; it's worse if I end up spending more time doing things that I don't enjoy.

AI has increased the sheer volume of code we are producing per hour (and probably also the amount of energy spent per unit of code). But, it hasn't spared me or anyone I know the cost of testing, reviewing or refining that code.

Speaking for myself, writing code was always the most fun part of the job. I get a dopamine hit when CI is green, sure, but my heart sinks a bit every time I'm assigned to review a 5K+ loc mountain of AI slop (and it has been happening a lot lately).

Yodel0914•1w ago
I agree. I’m using copilot more and more as it gets better and better, but it is getting better at the fun stuff and leaves me to do the less fun stuff. I’m in a role where I need to review code across multiple teams, and as their output is increasing, so is my review load. The biggest issue is that the people who lean on copilot the most are the least skilled at writing/reviewing code in the first place, so not only do I have more to review, it’s worse(1).

My medium term concern is that the tasks where we want a human in the loop (esp review) are predicated on skills that come from actually writing code. If LLMs stagnate, in a generation we’re not going to have anyone who grew up writing code.

1: not that LLMs write objectively bad code, but it doesn’t follow our standards and patterns. Like, we have an internal library of common UI components and CSS, but the LLM will pump out custom stuff.

There is some stuff that we can pick up with analysers and fail the build, but a lot of things just come down to taste and corporate knowledge.

wvenable•1w ago
I've been using it to do big refactors and large changes that I would previously simply avoid, because the benefits didn't outweigh the costs of doing them. I think half the problem people have is just using AI for the wrong stuff.

I don't see why it wouldn't help with reviewing, testing, or refining code either. One of the advantages I find is that an LLM "thinks" differently from me, so it'll find issues that I don't notice or maybe don't even know about. I've certainly had it develop entire test harnesses to ensure pre/post refactoring results are the same.
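
Those harnesses boil down to golden-master checks; a minimal sketch (names hypothetical, assuming JSON-serializable inputs and outputs):

    import json

    def record_golden(fn, cases, path="golden.json"):
        # Run the pre-refactor implementation once and pin its outputs.
        with open(path, "w") as f:
            json.dump([{"args": c, "out": fn(*c)} for c in cases], f)

    def check_golden(fn, path="golden.json"):
        # Assert the post-refactor implementation matches, case by case.
        with open(path) as f:
            for case in json.load(f):
                assert fn(*case["args"]) == case["out"], case["args"]

    # Usage: record_golden(old_impl, cases) before the refactor,
    # then check_golden(new_impl) after it.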

That said, I have "held it wrong" and had it do the fun stuff instead, and that felt bad. So I just changed how I used it.

cdata•1w ago
I read a lot of AI generated code these days. It makes really bad mistakes (even when the nature of the change is a refactor). I've tried out a few different tools and methodologies, but I haven't escaped the need to babysit the "agent." If I stepped aside, it would create more work for me and others on the backend of our workflow.

I read anecdotes of teams that push through AI-driven changes as fast as possible with awe. Surely their AIs are no more capable than the ones I'm familiar with.

wvenable•1w ago
I read all the code, and it sometimes makes mistakes -- but I wouldn't call them really bad. And often merely pointing one out will get a correction. Sometimes it is funny. It's not perfect, but nothing is perfect. I have noticed that the quality seems to be improving.

I still think whether you see sustained value or not depends a lot on your workflow -- in what you choose to do or decide and what you let it choose to do or decide.

I agree with you that this idea of just pushing out AI code -- especially code written from scratch by an AI -- sounds like a disaster waiting to happen. But honestly, a lot of organizations let a lot of crappy code into their codebases long before AI came along. Those organizations are just doing the same now at scale. AI didn't change the quality; it just changed the quantity.

ta12345678910•1w ago
The whole "it's turned political so it's bad" brush-off that this article anchors itself on is crazy. I understand many Americans can't understand what it's like to be under threat, but I'm not pumping money into massive organizations that pay federal American taxes. And seriously, f*ck you for insinuating I should.
mediaman•1w ago
It is, in fact, not crazy, because none of this is predicated on using a specific vendor.

Many of these techniques can also work with Chinese LLMs like Qwen served by your inference provider of choice. It's about the harness that they work in, gated by a certain quality bar of LLM.

Taking a discussion about harnesses and stochastic token generators and forcing it into a discussion of American imperialism is making a topic political that is not inherently political, and is exactly the sort of aggressive, cussing, tribalistic attitude the article is about.

ronsor•1w ago
Ironically the massive organizations are the ones that try to pay the least amount of federal taxes.
badsectoracula•1w ago
> “What about security?” [..] “What about performance?” [..] “What about accessibility?”

TBH i'm fine with AI, but my main concern isn't any of these issues (even if they suck now - though supposedly Claude Code doesn't - they can get better in the future).

My main concern, by far, is control and availability. I do not mind using some AI, but i do mind using AI that runs on someone else's computer and isn't under my control - control meaning i can, or at least have a chance at, understanding/tweaking/fixing it (so all my AI use is done via inference engines written in C++ that i compiled myself, running on my PC).
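
(For illustration, this is roughly what that setup looks like via the llama-cpp-python bindings over a locally built llama.cpp - the model path is hypothetical, any local GGUF file behaves the same:)

    from llama_cpp import Llama

    # Load a local model file; nothing here ever leaves the machine.
    llm = Llama(model_path="./models/some-coder-7b-q4.gguf", n_ctx=8192)
    out = llm("Write a C function that reverses a string in place.",
              max_tokens=256)
    print(out["choices"][0]["text"])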

Of course the same logic applies to anything where that makes sense (i.e. all my software runs locally, the only things i use online/cloud versions for are things which are inherently about networking - e.g. chat, forums, etc, but even then i use -say- a desktop-based email client instead of webmail).

oytis•1w ago
Absolutely. I'm gonna go full agentic coding the day I can do it with open-weight models on my machine. Until then, feeding someone else's models with more data on how to replace me in particular sounds insane to me.
crazygringo•1w ago
If you think it's going to replace you, then it's going to replace you regardless of whether you personally are feeding it data or not.

If it produces value for you, you should use it. If not, don't.

oytis•1w ago
So far I have been able to trade some efficiency for more control in my professional life. All of my tooling is open-source and local. I hope I can get away with it this time as well, though surely some adjustment will be needed.
crazygringo•1w ago
But why? Why not simply do/use whatever is most cost-effective? In the places where greater control leads to less efficiency, what is the benefit of control?

This is a genuine question -- I really don't understand. I appreciate local tooling when it helps my long-term efficiency, even if there's a learning curve. But not if cloud seems like it will always be more efficient. And while there are LLMs you can run locally, it doesn't seem like the ones useful for coding, with their vast memory and GPU requirements, will be realistic or cost-effective to run locally in the foreseeable future.

badsectoracula•1w ago
> But why? Why not simply do/use whatever is most cost-effective?

Because cost-effectiveness is a short term concern compared to...

> what is the benefit of control?

...the independence that being in control provides you in the long term. As for why to be independent, i hope it is self-evident that being able to do what you want, and work on what you want, without having to rely on 3rd parties for a core component of that work, is a good thing.

And TBH i'm not sure why being fast at the cost of everything else (especially of independence and control) is even considered a good thing in the first place.

crazygringo•1w ago
> As for why to be independent, i hope it should be self-evident that being able to do what you want

To be honest, not really.

I have a million limitations in my life. Trying to achieve some kind of "independence" is not something I understand. I prefer to accept a kind of interdependence, to be part of an ecosystem. To work together, in sync, for mutual benefit.

I rely on third parties for my food, my housing, my health, my education, my technology, all of it. Using an LLM hosted elsewhere feels no different from using electricity generated elsewhere, or food grown elsewhere, or a computer manufactured elsewhere. So why the difference for you?

badsectoracula•1w ago
> I have a million limitations in my life.

Same, but trying to add more limitations (in my view, a reliance on a 3rd party to do what i want is a limitation) is not something i do without an incredibly good reason and no alternative options.

> So why the difference for you?

In general because each dependence comes with requirements and expectations (many of which i may not even know ahead of time) on my side. The simplest and most straightforward one when it comes to cloud LLMs would be the requirement to have an internet connection (which i may or may not have, for a variety of reasons) and of course money to pay for it - and, at least with the way LLMs are currently monetized, that money would depend on how much i use it - and i may not have that money, or may not want or even be able to spend it (again, for whatever reasons). Even if someone else would pay (e.g. a workplace), this can have indirect effects, like my employer tracking my LLM use (either via how much i cost them or by counting my tokens - the latter of which, many people have mentioned, is already being done, though for now it is to maximize LLM use while CEOs are still in their FOMO phase), which in turn can have negative consequences for me.

Just like Microsoft nowadays has almost zero incentive to provide a good quality OS despite Linux existing, since they've captured an overwhelming majority of the desktop space, there is no guarantee that once some LLM provider captures the overwhelming majority of a market it won't jack up prices and let quality languish, even if there are theoretically alternatives - especially if said provider has built a dependency moat around it with various tools that only work with their LLMs (some LLM providers make their own tools, and this isn't out of the goodness of their hearts).

But there is more to it than just the obvious stuff above. Being in control means nobody will force you to do or not do something you dislike - even if you end up doing the same thing down the road, it'd be your decision, not someone else's forced on you.

One example i'm certain many people would have encountered is software updates making the experience of existing users worse. With something cloud-based there isn't much you can do - what if i liked the original GMail, YouTube or even Facebook interfaces more than their current incarnations? There is nothing i can do about it, i just have to accept that i have no control over them. The best i can do is hope that the developers, like in Reddit's case for example, would leave the old UI around and not mess with it much - but even then, i'm at the mercy of those developers, not in control myself. And while with something like GMail i could at least use a desktop application (and hope GMail doesn't remove the feature that make that possible), the core features of YouTube, Facebook and Reddit are mainly their userbases, not their UIs - i do not visit Facebook because i like how it works or behaves, i visit it because it is a point of contact with some family members and acquaintances. Similarly, i do not visit Reddit because i like its UX, i visit it because of the stuff people post and comment there.

Another example, more relevant to LLMs, would be when OpenAI upgraded ChatGPT from 3.5 to 4 or something like that (i do not use ChatGPT so i do not know) and people really disliked the change of tone their chatbots had. Say whatever you want about whether that was good or not (though it'd be beside the point i'm trying to make), but ultimately it was a clear example of someone in power (OpenAI) making changes that some of their users greatly disliked but had zero control or power to do anything about. A similar (though less publicized) issue was when Anthropic moved from Claude 3 to Claude 4, but AFAIK Claude 3 still remains available - though that is, like in Reddit's case, down to Anthropic's "benevolence" (for as long as it is financially viable for them, of course).

Willingly exposing myself to more dependencies, when my experience so far has shown that they come with long term consequences that are often not aligned with my desires isn't something i like doing. As you implied, there are already aspects of life where we do not have much control, but to me the existence of those acts more of an incentive to avoid losing further control where i can than to give up on it entirely.

On the topic of LLMs, from a personal perspective at least, if local LLMs end up being completely inadequate and making software becomes a matter of developers becoming little more than "remote LLM operators" then i'll just treat being a "remote LLM operator" the same way as being a secretary or accountant: something that i'm not interested in, even if their work often involves using computers.

crazygringo•1w ago
Very interesting, thanks - that helps me understand.

It seems like you have what might be called an extreme sense of loss aversion, and so the more control and independence you have, the more you can prevent loss.

In contrast, I don't really have that. Sure, I get annoyed when a software interface changes, but at the same time I see that the updates overall have also given me 10 other features I really appreciate, and so I see it as a net win. On the whole, I find that being embedded in a web of up-to-date dependencies has always been a large net positive. There are losses, but they are far outweighed by the wins, so whenever a loss bugs me I just remind myself of all the new helpful stuff. Like, Spotify's changes to UX drive me nuts sometimes. But they recently launched prompted playlists that have been a game changer for me. They added transitions between songs, which is awesome. I'm using them to listen to audiobooks my library doesn't have. So I can put up with the UX.

But if you experience losses psychologically as 10x the size of wins of the same "objective" size, then your calculus could be different. Pretty much everybody has loss aversion to some extent, it's considered a standard human trait -- I have to remind myself to put things into perspective sometimes -- but it sounds like you have a much stronger sense of it, so the control that greater independence gives you is much more valuable to you than it is to someone like me.

So that's why, when you say, "i hope it should be self-evident" -- it's not self-evident to someone like me at all, but I can see why it seems self-evident to you.

tom_•1w ago
I am happy to play my small part in helping fuel the supply of essays about what I can only describe as: This Stuff.
satisfice•1w ago
The post isn’t about tribalism. He barely mentions it.

It is another post that advocates for AI assisted coding without addressing the question of responsibility and trust. It makes claims without offering test data or even talking about testing.

munksbeer•1w ago
There is a very ironic meta being demonstrated here. The author's blog predicts exactly how the comment section here would unfold. It's like we're little puppets and we don't have control of our strings. I'm here to say "I agree with the author, I was very underwhelmed initially, but in the last few months I now use claude code a huge amount".

Others will be here to say "Just another evangelist telling us we're going to miss out".

To add a bit more of an interesting take (because all the arguments at this point are soooooo boring), my main issue at this point isn't whether I find it useful or not (I 100% do), it is that I'm now relying on claude a bit too much and I find it frustrating when I work offline. I am wary of that, a lot actually.