
There's a ridiculous amount of tech in a disposable vape

https://blog.jgc.org/2026/01/theres-ridiculous-amount-of-tech-in.html
422•abnercoimbre•1d ago•346 comments

1000 Blank White Cards

https://en.wikipedia.org/wiki/1000_Blank_White_Cards
189•eieio•8h ago•32 comments

ASCII Clouds

https://caidan.dev/portfolio/ascii_clouds/
209•majkinetor•9h ago•37 comments

I Love You, Redis, but I'm Leaving You for SolidQueue

https://www.simplethread.com/redis-solidqueue/
50•amalinovic•2h ago•21 comments

Show HN: Tiny FOSS Compass and Navigation App (<2MB)

https://github.com/CompassMB/MBCompass
3•nativeforks•28m ago•0 comments

Every GitHub object has two IDs

https://www.greptile.com/blog/github-ids
250•dakshgupta•19h ago•63 comments

Systematically generating tests that would have caught Anthropic's top‑K bug

https://theorem.dev/blog/anthropic-bug-test/
9•jasongross•2d ago•0 comments

Putting the "You" in CPU (2023)

https://cpu.land/
45•vinhnx•4d ago•3 comments

A 40-line fix eliminated a 400x performance gap

https://questdb.com/blog/jvm-current-thread-user-time/
277•bluestreak•12h ago•57 comments

Show HN: OSS AI agent that indexes and searches the Epstein files

https://epstein.trynia.ai/
106•jellyotsiro•9h ago•37 comments

No management needed: anti-patterns in early-stage engineering teams

https://www.ablg.io/blog/no-management-needed
206•tonioab•16h ago•212 comments

The truth behind the 2026 J.P. Morgan Healthcare Conference

https://www.owlposting.com/p/the-truth-behind-the-2026-jp-morgan
237•abhishaike•17h ago•49 comments

vLLM large scale serving: DeepSeek 2.2k tok/s/h200 with wide-ep

https://blog.vllm.ai/2025/12/17/large-scale-serving.html
110•robertnishihara•19h ago•29 comments

The $LANG Programming Language

208•dang•11h ago•40 comments

Show HN: 1D-Pong Game at 39C3

https://github.com/ogermer/1d-pong
35•oger•2d ago•6 comments

The Gleam Programming Language

https://gleam.run/
144•Alupis•8h ago•82 comments

Stop using natural language interfaces

https://tidepool.leaflet.pub/3mcbegnuf2k2i
85•steveklabnik•9h ago•38 comments

Are two heads better than one?

https://eieio.games/blog/two-heads-arent-better-than-one/
171•evakhoury•19h ago•52 comments

Show HN: The Tsonic Programming Language

https://tsonic.org
36•jeswin•18h ago•8 comments

The Emacs Widget Library: A Critique and Case Study

https://www.d12frosted.io/posts/2025-11-26-emacs-widget-library
80•whacked_new•2d ago•28 comments

April 9, 1940 a Dish Best Served Cold

https://todayinhistory.blog/2021/04/09/april-9-1940-a-dish-best-served-cold/
42•vinnyglennon•4d ago•4 comments

Show HN: Cachekit – High performance caching policies library in Rust

https://github.com/OxidizeLabs/cachekit
39•failsafe•9h ago•6 comments

UK secures record supply of offshore wind projects

https://www.bbc.co.uk/news/articles/cn9zyx150xdo
5•ljf•16m ago•3 comments

Sei (YC W22) Is Hiring a DevOps Engineer (India/In-Office/Chennai/Gurgaon)

https://www.ycombinator.com/companies/sei/jobs/Rn0KPXR-devops-platform-ai-infrastructure-engineer
1•ramkumarvenkat•10h ago

Handling secrets (somewhat) securely in shells

https://linus.schreibt.jetzt/posts/shell-secrets.html
66•todsacerdoti•4d ago•34 comments

AI generated music barred from Bandcamp

https://old.reddit.com/r/BandCamp/comments/1qbw8ba/ai_generated_music_on_bandcamp/
804•cdrnsf•17h ago•584 comments

The Tulip Creative Computer

https://github.com/shorepine/tulipcc
218•apitman•18h ago•51 comments

How to make a damn website (2024)

https://lmnt.me/blog/how-to-make-a-damn-website.html
208•birdculture•18h ago•60 comments

The Stick in the Stream

https://randsinrepose.com/archives/the-stick-in-the-stream/
8•zdw•4d ago•1 comment

Scott Adams has died

https://www.youtube.com/watch?v=Rs_JrOIo3SE
973•ekianjo•20h ago•1497 comments

LLMs are a 400-year-long confidence trick

https://tomrenner.com/posts/400-year-confidence-trick/
73•Growtika•2h ago

Comments

mossTechnician•1h ago
"AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus things like data center pollution, the generation of sexual abuse material, causing mental harm, or many other things we can already observe.

But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.

ltbarcly3•1h ago
You are the masses. Are you afraid?
Xss3•1h ago
HN commenters are not representative
das_keyboard•1h ago
They don't need to instill fear in everyone, but only a critical mass and most importantly _regulators_.

So there will be laws because not everyone can be trusted to host and use this "dangerous", new tech.

And then you have a few "trusted" big tech firms forming an oligopoly of ai, with all of the drawbacks.

iNic•1h ago
We should do both and it makes sense that different orgs have different focuses. It makes no sense to berate one set of orgs for not working on the exact type of thing that you want. PauseAI and ControlAI have each received less than $1 million in funding. They are both very small organizations as far as these types of advocacy non-profits go.
mossTechnician•1h ago
If it makes sense to handle all of these issues, then couldn't these organizations just acknowledge all of them? If reducing harm is the goal, I don't see a reason to totally segregate different issues, especially not by drawing a dividing line between the ones OpenAI already acknowledges and the ones it doesn't. I've never seen a self-described "AI safety" organization that tackles any of the present-day issues AI companies cause.
iNic•21m ago
If you've never seen it then you haven't been paying attention. For example, Anthropic (the biggest AI org which is "safety" aligned) released a big report last year on mental well-being [1]. Also, here is their page on societal impacts [2]. Here is PauseAI's list of risks [3]; it has deepfakes as its second issue!

The problem is not that no one is trying to solve the issues that you mentioned, but that it is really hard to solve them. You will probably have to bring large class-action lawsuits, which is expensive and risky (if it fails it will be harder to sue again). Anthropic can make their own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.

[1] https://www.anthropic.com/news/protecting-well-being-of-user...

[2] https://www.anthropic.com/research/team/societal-impacts

[3] https://pauseai.info/risks

rl3•1h ago
It's almost like there's enough people in the world that we can focus on and tackle multiple problems at once.
ACCount37•57m ago
I'd rather the "AI safety" of the kind you want didn't exist.

The catastrophic AI risk isn't "oh no, people can now generate pictures of women naked".

mossTechnician•42m ago
Why would you rather it not exist?

In a vacuum, I agree with you that there's probably no harm in AI-generated nudes of fictional women per se; it's the rampant use to sexually harass real women and children[0], while "causing poor air quality and decreasing life expectancy" in Tennessee[1], that bothers me.

[0]: https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

[1]: https://arstechnica.com/tech-policy/2025/04/elon-musks-xai-a...

ACCount37•6m ago
Because it's just a vessel for the puritans and the usual "cares more about feeling righteous than about being right" political activists. I have no love for either.

The whole thing with "AI polluting the neighborhoods" falls apart on a closer examination. Because, as it turns out, xAI put its cluster in an industrial area that already has: a defunct coal power plant, an operational steel plant, and an operational 1 GW grid-scale natural gas power plant that powers the steel plant - that one being across the road from xAI's cluster.

It's quite hard for me to imagine a world where it's the AI cluster that moves the needle on local pollution.

ltbarcly3•1h ago
I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, they might make mistakes a person wouldn't make, but that's not the question.

Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.

I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze what my current settings are, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe hardware for its capabilities, then offer a menu of solutions, give the commands to implement the chosen one, and finally test that the solution works perfectly. Can you do that?
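For concreteness, the probing step described there is mostly standard diagnostics. A minimal sketch of what gathering that information can look like, assuming a Linux laptop with ALSA tooling installed; the command set is illustrative, not what the agent actually runs:

    # Gather the sound-card diagnostics an agent would reason over.
    # Assumes Linux with ALSA utilities (aplay, amixer) and pciutils.
    import subprocess

    PROBES = [
        ["aplay", "-l"],              # list playback devices ALSA can see
        ["lspci", "-nnk"],            # audio controller and bound kernel driver
        ["amixer", "get", "Master"],  # current volume / mute state
    ]

    for cmd in PROBES:
        print(f"$ {' '.join(cmd)}")
        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(result.stdout or result.stderr)
        except FileNotFoundError:
            print("(tool not installed)")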

exceptione•1h ago
In a way LLMs are intelligence tests indeed.
dependency_2x•1h ago
I'm vibe coding now, after work. I am able to much more quickly explore the landscape of a problem, getting into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It is like an electric screwdriver vs. a normal one. Sometimes the normal one can do things the electric one can't, but hell, if you get an IKEA delivery you want the electric one.
SwoopsFromAbove•1h ago
And is the electric one intelligent? :p
dependency_2x•1h ago
Who cares!
hexbin010•1h ago
Got any recent specific examples of it saving you an entire evening?
Traubenfuchs•29m ago
0. Claude, have a look at frontend project A and backend project B.

1. create a skeleton clone of frontend A, named frontend B, which is meant to be the frontend for backend project B, including the oAuth configuration

2. create the kubernetes yaml and deployment.sh, it should be available under b.mydomain.com for frontend B and run it, make sure the deployment worked by checking the page on b.mydomain.com

3. in frontend B, implement the UI for controller B1 from backend B, create the necessary routing to this component and add a link to it to the main menu, there should be a page /b1 that lists the entries, /b1/xxx to display details, /b1/xxx/edit to edit an entry and /b1/new to create one

4. in frontend B, implement the UI for controller B2 from backend B, create the necessary routing to this component and add a link to it to the main menu, etc.

etc.

All of this is done in 10 minutes. Yeah I could do all of this myself, but it would take longer.

falloutx•12m ago
Did you need it though? Most projects I see being done by people with Claude Code are just their personal projects, which they wouldn't have wasted their time on in the past, but now they get pulled into the terminal thinking it's only gonna take 20 mins and end up burning 100s of subscription dollars on it. If there is no other maintainer & the project is all yours, I don't see any harm in doing it.
HWR_14•45m ago
Bad example. IKEA assembles better with a manual screwdriver.
Traubenfuchs•43m ago
You wouldn't say that if you had ever assembled PAX doors.
HWR_14•37m ago
Maybe? I'm not familiar with every IKEA product. But it looks like it takes a dozen small screws into soft wood.
SwoopsFromAbove•1h ago
I also cannot calculate the square root of 472629462.

My pocket calculator is not intelligent. Nor are LLMs.

HWR_14•42m ago
You'd be surprised. You could probably get three digits of the square root in under a minute if you tried.
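For a sense of the arithmetic: Newton's method for square roots roughly doubles the number of correct digits per step, so two steps from a crude guess already clear three digits. A minimal sketch; the starting guess is an assumption:

    # Newton's method for sqrt(n): repeat x <- (x + n/x) / 2.
    def newton_sqrt(n: float, x: float, steps: int = 4) -> float:
        for _ in range(steps):
            x = (x + n / x) / 2
        return x

    # Crude starting guess 20000 (since 20000^2 = 4.0e8):
    # step 1 gives ~21815, step 2 gives ~21740.2 (five digits),
    # step 4 is fully converged at ~21740.043.
    print(newton_sqrt(472629462, 20000))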
kusokurae•1h ago
It's incredible that on Hacker News we still encounter posts by people who will not or cannot differentiate mathematics from magic.
obsoleetorr•1h ago
it's also incredible we find people who can't differentiate physics/mathematics from the magic of the human brain
adrianN•1h ago
Intelligence is not magic though. The difference between intelligence and mathematics can plausibly be the same kind of difference as that between chemistry and intelligence.
qsera•15m ago
There is Intelligence and there is Imitation of Intelligence. LLMs do the latter.

Talk to any model about deep subjects. You'll understand what I am saying. After a while it will start going around in circles.

FFS ask it to make an original joke, and be amused..

obsoleetorr•10m ago
> After a while it will start going around in circles.

so like your average human

> FFS ask it to make an original joke, and be amused..

let's try this one on you - say an original joke

oh, right, you don't respond to strangers' prompts, thus you have agency, unlike an LLM

energy123•1h ago
Human intelligence is chemistry and biology, not magic. OK, now what?
ACCount37•53m ago
Your brain is just math implemented in wet meat.
TeriyakiBomb•1h ago
Everything is magic when you don't understand how things work.
jaccola•1h ago
There are dozens of definitions of "intelligence", we can't even agree what intelligence means in humans, never mind elsewhere. So yes, by some subset of definitions it is intelligent.

But by some subset of definitions my calculator is intelligent. By some subset of definitions a mouse is intelligent. And, more interestingly, by some subset of definitions a mouse is far more intelligent than an LLM.

techpression•1h ago
I did that when I was 14 because I had no other choice, damn you SoundBlaster! I didn't get any menu but I got sound in the end.

I don't think conflating intelligence with "what a computer can do" makes much sense though. I can't calculate the Xth digit of pi in less than Z time; I'm still intelligent (or I pretend to be).

But the question is not about intelligence, that's a red herring; it's just about utility, and they (LLMs) are useful.

self_awareness•37m ago
set BLASTER=A220 I5 D1
slg•58m ago
>I can run claude code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze what my current settings are, determine what might be wrong, devise tests to have me gather information it can't gather itself, run commands to probe hardware for its capabilities, then offer a menu of solutions, give the commands to implement the chosen one, and finally test that the solution works perfectly. Can you do that?

Yes, I have worked in small enough companies in which the developers just end up becoming the default IT help desk. I never had any formal training in IT, but most of that kind of IT work can be accomplished with decent enough Google skills. In a way, it worked the same as you and the LLM. I would go poking through settings, run tests to gather info, run commands, and overall just keep trying different solutions until either one worked or it became reasonable to give up. I'm sure many people here have had similar experiences doing the same thing in their own families. I'm not too impressed with an LLM doing that. In this example, it's functionally just improving people's Googling skills.

qsera•6m ago
It is the imitation of intelligence.

It works because people have answered similar questions a million times on the internet and the LLMs are trained on it.

So it will work for a while. When the human-generated stuff stops appearing online, LLMs will quickly fall in usefulness.

But that is enough time for the people who think it's going to last forever to make huge investments in it, and for the AI companies to get away with the loot.

Actually it is the best kind of scam...

schnitzelstoat•1h ago
I agree that all the AI doomerism is silly (by which I mean concern about some Terminator-style machine uprising; the economic issues are quite real).

But it's clear the LLMs have some real value; even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.

NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.

latexr•1h ago
Those aren’t mutually exclusive; something can be both useful and a con.

When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

LLMs are useful for many things, but they're also not nearly as beneficial and powerful as they're being sold as. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other kinds of diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, thus the con description applies.

https://youtu.be/l0K4XPu3Qhg?t=60

BoxOfRain•1h ago
I think the following things can both be true at the same time:

* LLMs are a useful tool in a variety of circumstances.

* Sam Altman is personally incentivised to spout a great deal of hyped-up rubbish about both what LLMs are capable of and what they could be capable of.

latexr•58m ago
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
pousada•37m ago
It would make every marketing department and basically every startup founder a con man too. While I don't completely disagree with that framing, it's not really helpful.
latexr•9m ago
No, that is not true. Coca-Cola doesn’t advertise itself as a cure for cancer. Dropbox doesn’t advertise itself as a tax-filing application.

Theranos on the other hand… That was a con and the founder was prosecuted.

And again, Sam Altman has a history of deceit.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

runarberg•40m ago
These are not independent hypotheses. If the second is true it decreases the probability that the first is true, and vice versa.

The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people, who then report an overinflated metric of the usefulness of the stuff they just bought (people don't like to believe they were conned; cognitive dissonance).

In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of the usefulness of his product are heavily biased.

ACCount37•1h ago
LLMs of today advance in incremental improvements.

There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.

This alone should give you second thoughts on "AI doomerism".

latexr•53m ago
That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.

That could also apply to LLMs, that there would be a hard wall that the current approach can’t breach.

ACCount37•37m ago
If that's the case, then, what's the wall?

The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.

The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall. Because LLMs are good at in-context learning, which does many of the same things, and can do things like set up fine-tuning runs on themselves using CLI.

latexr•22m ago
> If that's the case, then, what's the wall?

I didn’t say that is the case, I said it could be. Do you understand the difference?

And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.

ACCount37•5m ago
And there could be a teapot in orbit around the Sun. Do we have any evidence for that being the case, though?

So far, there's a distinct lack of "wall" to be seen - and a lot of the proposed "fundamental" limitations of LLMs were discovered to be bogus with interpretability techniques, or surpassed with better scaffolding and better training.

myrmidon•17m ago
Agree completely with your position.

I do think though that lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden/obfuscated by training for the benchmarks, basically.

This becomes very visible when you compare performance on more specialized tasks that LLMs were not trained for specifically, e.g. playing games like Pokemon or Factorio: General purpose LLMs are lagging behind a lot in those compared to humans.

But it's only a matter of time until we solve this IMO.

falloutx•27m ago
AI doomerism was sold by the AI companies as some sort of "learn it or you'll fall behind". But they didn't think it through: AI is now widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who would buy a $200/month sub when they get laid off? I am not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it burns more money at this rate.
runarberg•48m ago
> it can still massively reduce the amount of human labour required for many tasks.

I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. And it seems to be automating tasks away from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I need to see the numbers before I believe even that.

I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn’t be increased by much much much cheaper and easier means.

bodge5000•41m ago
> The LLM's are clearly useful for many things

I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?

NFTs were cheap enough to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.

This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.

falloutx•35m ago
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.

At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.

And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with a spaghetti mess of everyone merging their crappy LLM-enhanced code.

ACCount37•16m ago
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.

The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.

The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.

ACCount37•26m ago
Yeah. Obviously. Duh. That's why we keep doing it.

Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.

If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?

For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a rate of 1:10 or more. And API costs at all major providers represent selling the model with a good profit margin.
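Taking the bait on that estimate, a back-of-envelope sketch: the $3 and 10 hours are the commenter's figures, while the wage is an assumed number, so substitute your own:

    token_cost = 3.00    # API-priced cost of the session (per the comment)
    hours_saved = 10     # commenter's estimate of debugging time saved
    hourly_wage = 75.00  # assumption for illustration only
    labor_value = hours_saved * hourly_wage
    print(f"${labor_value:.0f} of labor vs ${token_cost:.0f} of tokens "
          f"({labor_value / token_cost:.0f}x)")  # $750 vs $3 (250x)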

baq•1h ago
"People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.

The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic; the answer doesn't matter for businesses and consumers, it matters for philosophers (which everyone is, at least a little bit). 'Are LLMs useful for a great variety of tasks?' gets a resounding 'yes'.

leogao•1h ago
> The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.

you're lumping together two very different groups of people and pointing out that their beliefs are incompatible. of course they are! the people who think there is a real threat are generally different people from the ones who want to push AI progress as fast as possible! the people who say both do so generally out of a need to compromise rather than there existing many people who simultaneously hold both views.

BoxOfRain•58m ago
> nuclear-weapon-level-dangerous chatbots

I feel this framing in general says more about our attitudes to nuclear weapons than it does about chatbots. The 'Peace Dividend' era which is rapidly drawing to a close has made people careless when they talk about the magnitude of effects a nuclear war would have.

AI can be misused, but it can't be misused to the point an enormously depopulated humanity is forced back into subsistence agriculture to survive, spending centuries if not millennia to get back to where we are now.

lyu07282•1h ago
I think it's interesting how gamers have developed a pretty healthy aversion to generative AI in video games. Steam and Itch both now make it mandatory that games disclose generative AI use, and recently even beloved Larian Studios came under fire for using AI for concept art. Gamers hate that shit.

I think that's good, but the whole "AI is literally not doing anything" idea, that it's just some mass hallucination, has to die. Gamers argue it takes jobs away from artists; programmers, for some reason, seem to have to argue it doesn't actually do anything. Isn't that telling?

Chance-Device•1h ago
I think this is probably a trend that will erode with time, even now it’s probably just moved underground. How many human artists are using AI for concepts then laundering the results? Even if it’s just idea generation, that’s a part of the process. If it speeds up throughput, then maybe that’s fewer jobs in the long run.

And if AI assisted products are cheaper, and are actually good, then people will have to vote with their wallets. I think we’ve learned that people aren’t very good at doing that with causes they claim to care about once they have to actually part with their money.

lyu07282•1h ago
Because voting with your wallet is nonsense. We can decide what society we want to live in; we don't have to accept one in which human artists can't make a living. Capitalism isn't a force of nature we discovered, like gravity; it's a set of deliberate choices we made.
Chance-Device•15m ago
Which I assume is why you pay someone to hand-paint scenes from your holidays instead of taking photographs? And why you employ someone to wash your clothes on a scrubbing board instead of using a machine?

Or would you prefer these things be outlawed to increase employment?

HWR_14•48m ago
A huge issue with voting with your wallet is fraud. It's easy to lie about having no AI in your process. Especially if the final product is laundered by a real artist.
timschmidt•1h ago
> programmers seem to have to argue it doesn't actually do anything for some reason.

It's not really hard to see... spend your whole life defining yourself around what you do that others can't or won't, then an algorithm comes along which can do a lot of the same. It directly threatens the ego and one's self-image and self-worth, as well as (perceived) future financial prospects. Add to that a heavy dose of "change scary, change bad."

Personally, I think the solution is to avoid building your self-image around material things, and to welcome and embrace new tools which always bring new opportunities, but I can see why the polar opposite is a natural reaction for many.

bandrami•1h ago
IDK, I think it's at least reasonable to look at the fact that there isn't a ton of new software available out there and conclude "AI isn't actually making software creation any faster". I understand the counterarguments to that but it's hardly an unreasonable conclusion.
Al-Khwarizmi•1h ago
I haven't gamed much in the last few years due to severe lack of time so I'm out of touch, but I used to play a lot of CRPGs and I always dreamed of having NPCs who could talk and react beyond predefined scripted lines. This seems to finally be possible thanks to LLMs and I think it was desired by many (not only me). So why are gamers not excited about generative AI?
danielbln•1h ago
> Gamers hate that shit.

When AI is used for code (which it is, surely, almost everywhere), gamers don't give a damn. Also, Larian didn't use it for concept art; they used it to generate the first mood board to give to the concept artist as a guideline. And then there is Arc Raiders, which uses AI for all its VO, and that game is a massive hit.

This is just a breathless bubble; the wider gaming audience couldn't give two shits whether studios use AI or not.

lpcvoid•1h ago
I think the costs of LLMs (huge energy hunger, people being fired because of them, hostile takeover of human creativity, and computer hardware rising in cost exponentially) are by far larger than the uses (generating videos of fish with arms, programming slightly faster, writing slop emails to talented people).

I know LLMs won't vanish again magically, but I wish they would every time I have to deal with their output.

falloutx•19m ago
That is consumer choice: a consumer has the right to know whether something is made using a tech that could make them unemployed. I wouldn't pay $70, or even $10, for a game that I know someone didn't put effort into.
krystofee•1h ago
I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible, it really feels like an inflection point.

I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines Django). The reality is my input is still critically valuable. My insights guide the LLM, my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the intent.

Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

ManuelKiessling•57m ago
This. By now I don’t understand how anyone can still argue in the abstract while it’s trivial to simply give it a try and collect cold, hard facts.

It’s like arguing that the piano in the room is out of tune and not bothering to walk over to the piano and hit its keys.

satisfice•45m ago
I am hitting the keys, and I call bullshit.

Yes, the technology is interesting and useful. No, it is not a “10x” miracle.

ozim•25m ago
I call "AGI" or a "100x miracle" bullshit, but the existing stuff is definitely a "10x miracle".
ozim•27m ago
The downside is that a lot of those who argue try out some stuff in ChatGPT or another chat interface without digging any further, expecting "general AI" and asking general questions where LLMs are most prone to hallucinations. The other part is cheaped-out setups sharing the same subscription between multiple people, whose history gets polluted.

They don't have time to check more stuff as they are busy with their life.

People who did check the stuff don't have time in life to prove it to the ones who argue, in exactly whatever way the person arguing would find useful.

Personally, like a year ago I was the person who tried out some ChatGPT and didn't have time to dabble, because all the hype was off-putting and of course I was finding more important and interesting things to do in my life besides chatting with some silly bot that I could easily trick with trick questions, or dismiss as not useful because it hallucinated something I wanted in a script.

I did take the plunge for a really deep dive into AI around April last year, and I saw with my own eyes ... and only that convinced me. Using the API, I built my own agent loop: getting details from images and PDF files, iterating on the code, turning unstructured "human" input into structured output I can handle in my programs.

*Data classification is easy for an LLM. Data transformation is a bit harder but still great. Creating new data is hard, so for questions where it has to generate stuff from thin air it will hallucinate like a madman.*

With data classification like "is it a cat, answer with yes or no", it is hard to get even the latest models to start hallucinating.
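As a concrete illustration of that closed-label pattern: constrain the model to a fixed answer set, so there is nothing to invent from thin air. A minimal sketch using the OpenAI Python SDK; the model name and prompt wording are placeholders, not recommendations:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_cat(description: str) -> bool:
        # Closed label set: the model may only answer "yes" or "no".
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer with exactly one word: yes or no."},
                {"role": "user",
                 "content": f"Is this a cat? {description}"},
            ],
        )
        return resp.choices[0].message.content.strip().lower().startswith("yes")

    print(is_cat("a small whiskered animal purring on a windowsill"))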

demorro•18m ago
It's like the piano going out of tune randomly: even if you get through 1, 2, or even 10 songs without that happening, I'm not interested in playing that piano on stage.
112233•3m ago
So I tried it, and it is worse than having a random dude from Fiverr write your code: it is actively malicious and goes out of its way to deceive and to subtly sabotage existing working code.

Do I now get the right to talk badly about all LLM coding, or is there another exercise I need to take?

vanderZwan•55m ago
Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology, the article is about the marketing of the technology.

The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown and have been shown. Whether or not the actual LLM based tools genuinely help you has nothing to do with that.

amelius•49m ago
Yeah, but that should have been in the title; otherwise the article itself uses a centuries-old trick.
remus•48m ago
The point of the article is to paint LLMs as a confidence trick, the key word being trick. If LLMs actually deliver very real, tangible benefits, can you really say there is a trick? If a street performer was doing the cup-and-ball scam but I actually won and left with more money than I started with, then I'd say that's a pretty bad trick!

Of course it is a little more nuanced than this and I would agree that some of the marketing hype around AI is overblown, but I think it is inarguable that AI can provide concrete benefits for many people.

latexr•40m ago
> If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick?

Yes, yes you can. As I’ve mentioned elsewhere on this thread:

> When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

LLMs are being sold as miracle technology that does way more than it actually can.

latexr•45m ago
Exactly. It’s like if someone claimed to be selling magical fruit that cures cancer, and they’re just regular apples. Then people like your parent commenter say “that’s not a con, I eat apples and they’re both healthy and tasty”. Yes, apples do have great things about them, but not the exaggerations they were being sold as. Being conned doesn’t mean you get nothing, it means you don’t get what was advertised.
JacoboJacobi•27m ago
The claims being made that are cited are not really in that camp though.

It may be extremely dangerous to release. True. Even search engines had the potential to be deemed too dangerous in the nuclear Pandora's-box arguments of modern times. Then there are high-speed phishing opportunities, etc.

It may be an essential failure to miss the boat. True. If calculators had been upgraded, produced, and disseminated at modern Internet speeds, someone who did accounting by hand and refused to learn for a few years would have been fired.

Its communication builds an unhealthy relationship that is parasitic. True. But the Internet and the way content is critiqued are a source of this, even if it is not intentionally added.

I don't like many of the people involved, and I don't think they will be financially successful on merit alone, given that anyone can create an LLM. But LLM technology is being sold by the organic "con" through which all technology, such as calculators, ends up spreading for individuals to evaluate and adopt. A technology everyone is primarily brutally honest about is a technology that has died, because no one bothers to check whether the brutal honesty has anything to do with their own possible uses.

latexr•17m ago
> The claims being made that are cited are not really in that camp though..

They literally are. Sam Altman has literally said multiple times this tech will cure cancer.

carpo•41m ago
But saying it's a confidence trick is saying it's a con, that they're trying to sell someone something that doesn't work. The OP is saying it makes them 10x more productive, so how is that a con?
trimethylpurine•23m ago
The marketing says it does more than that. This isn't just a problem unique to LLMs either. We have laws about false advertising for a reason. It's going on all the time. In this case the tech is new so the lines are blurry. But to the technically inclined, it's very obvious where they are. LLMs are artificial, but they are not literally intelligent. Calling them "AI" is a scam. I hope that it's only a matter of time until that definition is clarified and we can stop the bullshit. The longer it goes, the worse it will be when the bubble bursts. Not to be overly dramatic, but economic downturns have real physical consequences. People somewhere will literally starve to death. That number of deaths depends on how well the marketers lied. Better lies lead to bigger bubbles, which when burst lead to more deaths. These are facts. (Just ask ChatGPT, it will surely agree with me, if it's intelligent. ;p)
satisfice•47m ago
You are speculating. You don’t know. You are not testing this technology— you are trusting it.

How do I know? Because I am testing it, and I see a lot of problems that you are not mentioning.

I don’t know if you’ve been conned or you are doing the conning. It’s at least one of those.

consp•45m ago
> It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

That's not how book printing works, and I'd argue the monk can far more easily create new text and devise new interpretations. And they did, in the margins of books. It takes a long time to prepare one print, but barely longer to print 100, which is where the good of the printing press comes from. It's not the ease of changing or producing large amounts of text, it's the ease of reproducing it, and since copy/paste exists it is a very poor analogue in my opinion.

I'd also argue the 10x is subject to observer bias, since the subject and the observer are the same person. My experience at this point is that boilerplate is fine with LLMs; if that's all you do, good for you, but otherwise it will hardly speed up anything, as the code is the easy part.

energy123•44m ago
> I'm maintaining a well-structured enterprise codebase (100k+ lines Django)

How do you avoid this turning into spaghetti? Do you understand/read all the output?

falloutx•38m ago
Are you actually reading the code? I have noticed most of the gains go away when you are reading the code outputted by the machine. And sometimes I do have to fix it by hand and then the agent is like "Oh you changed that file, let me fix it"
keyle•34m ago
It's fine for a Django app that doesn't innovate and just follows the same patterns for the 100 solved problems that it solves.

The line becomes a lot blurrier when you work on non-trivial issues.

A Django app is not particularly hard software; it's hardly software at all, but a conduit from database to screens and vice versa, which has been basic software since the days of terminals. I'm not judging your job; if you get paid well for doing that, all power to you.

What I'm raising though is the fact that AI is not that useful for applications that aren't solving what has been solved 100 times before. Maybe it will be, some day, reasoning so well that it anticipates and solves problems that don't exist yet. But it will always be an inference on current problems solved.

Glad to hear you're enjoying it, personally, I enjoy solving problems, not the end result as much.

danielbln•25m ago
I think the 'novelty' goalpost is being moved here. This notion that agentic LLMs can't handle novel or non-trivial problems needs to die. They don't merely retrieve solutions from the training data; they synthesize a solution path based on the context that is built up in the agentic loop. You could make up some obscure DSL out of whole cloth, one that has therefore never been in the training data, feed it the docs, and it will happily use it to create output in said DSL.

Also, almost all problems are composite problems where each part is either prior art or in itself somewhat trivial. If you can onboard the LLM onto the problem domain and help it decompose then it can tackle a whole lot more than what it has seen during pre- and post-training.

abricq•30m ago
> My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily

Then why are half of the big tech companies using Microsoft Teams and sending emails with .docx files embedded in them?

Of course marketing matters.

And of course the hard facts also matter, and I don't think anybody is saying that AI agents are purely marketing hype. But regardless, it is still interesting to take a step back and observe what marketing pressures we are subject to.

megamix•11m ago
Are you also getting dumber? https://tech.co/news/another-study-ai-making-us-dumb
lxgr•59m ago
Considerations around current events aside, what exactly is the supposed "confidence trick" of mechanical or electronic calculators? They're labor-saving devices, not arbiters of truth, and as far as I can tell, they're pretty good at saving a lot of labor.
mono442•59m ago
I don't think it's true. It is probably overhyped but it is legitimately useful. Current agents can do around 70% of coding stuff I do at work with light supervision.
latexr•34m ago
> It is probably overhyped

That’s exactly what a con is: selling you something as being more than what it actually is. If you agree it’s overhyped by its sellers, you agree it’s a con.

> Current agents can do around 70% of coding stuff I do

LLMs are being sold as capable of significantly more than coding. Focusing on that singular aspect misses the point of the article.

Traubenfuchs•44m ago
Yeah, there is overhyped marketing, but at this point AI has revolutionized software engineering and is writing the majority of code worldwide, whether you like it or not, and it is still improving.
self_awareness•41m ago
> If your answer doesn’t match the calculator’s, you need to redo your work.

Hm... is it wrong to think like this?

falcor84•34m ago
> We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.

> This has, of course, not happened.

This is so incredibly shallow. I can't think of even a single doomer who ever claimed that AI would destroy us by now. P(doom) is about the likelihood of it destroying us "eventually". And I haven't seen anything in this post or in any recent developments to make me reduce my own p(doom), which is not close to zero.

Here are some representative values: https://pauseai.info/pdoom

Meneth•10m ago
> This has, of course, not happened.

And that's the anthropic fallacy. In the worlds where it has happened, the author is dead.

falcor84•7m ago
A very good point too.

Though I personally hope that we'll have enough of a warning to convince people that there is a problem and give us a fighting chance. I grew up on Terminator and would be really disappointed if the AI kills me in an impersonal way.

vegabook•32m ago
The other urgency trick that is not mentioned is "oooh, China!!", which is used to short-circuit all types of regulations and ethics, especially concerning fair access to energy for actual humans, and to justify plundering the public balance sheet with requests for government guarantees for their wild spending plans.
grumbel•32m ago
> GPT-3 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”. [...] This has, of course, not happened.

What parallel world are they living in? Every single online platform has been flooded with AI-generated content and has had to enact countermeasures, or went the other way, embraced it, and replaced humans with AI. AI use in scams has also become commonplace.

Everything they warned about with the release of GPT‑2 did in fact happen.

petesergeant•26m ago
Reading AI-denier articles in 2026 is almost as boring as reading crypto-booster articles was 10 years ago. You may not like LLMs, you may not want LLMs, but pretending they're not doing anything clever or useful is bizarre, however flowery you make your language.