LLMs are a 400-year-long confidence trick

https://tomrenner.com/posts/400-year-confidence-trick/
62•Growtika•1h ago

Comments

mossTechnician•1h ago
"AI safety" groups are part of what's described here: you might assume from the general "safety" label that organizations like PauseAI or ControlAI would focus things like data center pollution, the generation of sexual abuse material, causing mental harm, or many other things we can already observe.

But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.

ltbarcly3•1h ago
You are the masses. Are you afraid?
Xss3•55m ago
HN commenters are not representative
das_keyboard•51m ago
They don't need to instill fear in everyone, but only a critical mass and most importantly _regulators_.

So there will be laws because not everyone can be trusted to host and use this "dangerous", new tech.

And then you have a few "trusted" big tech firms forming an AI oligopoly, with all the drawbacks that entails.

noosphr•33m ago
The more I look at what's happening today the more I wonder what was in the water in the 80s and 90s that allowed a free internet to happen in the first place.
iNic•55m ago
We should do both and it makes sense that different orgs have different focuses. It makes no sense to berate one set of orgs for not working on the exact type of thing that you want. PauseAI and ControlAI have each received less than $1 million in funding. They are both very small organizations as far as these types of advocacy non-profits go.
mossTechnician•41m ago
If it makes sense to handle all of these issues, then couldn't these organizations just acknowledge all of them? If reducing harm is the goal, I don't see a reason to totally segregate different issues, especially not by drawing a dividing line between the ones OpenAI already acknowledges and the ones it doesn't. I've never seen a self-described "AI safety" organization that tackles any of the present-day issues AI companies cause.
rl3•53m ago
It's almost like there are enough people in the world that we can focus on and tackle multiple problems at once.
ACCount37•26m ago
I'd rather the "AI safety" of the kind you want didn't exist.

The catastrophic AI risk isn't "oh no, people can now generate pictures of women naked".

mossTechnician•12m ago
Why would you rather it not exist?

In a vacuum, I agree with you that there's probably no harm in AI-generated nudes of fictional women per se; it's the rampant use to sexually harass real women and children[0], while "causing poor air quality and decreasing life expectancy" in Tennessee[1], that bothers me.

[0]: https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

[1]: https://arstechnica.com/tech-policy/2025/04/elon-musks-xai-a...

ltbarcly3•1h ago
I think anyone who thinks that LLMs are not intelligent in any sense is simply living in denial. They might not be intelligent in the same way a human is intelligent, they might make mistakes a person wouldn't make, but that's not the question.

Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.

I can run Claude Code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests so I can gather information it can't gather itself, run commands to probe the hardware for its capabilities, and finally offer a menu of solutions, give the commands to implement the chosen one, and test that the solution works. Can you do that?
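The diagnose-and-fix loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real repair tool: the ALSA commands (`aplay`, `amixer`) and the string checks are assumptions, chosen only to show the gather-evidence, form-hypotheses, propose-fixes structure:

```python
import subprocess

def run(cmd):
    """Run a shell command and return its stdout (empty string on failure)."""
    try:
        return subprocess.run(cmd, shell=True, capture_output=True,
                              text=True, timeout=10).stdout
    except subprocess.TimeoutExpired:
        return ""

def diagnose():
    # Step 1: probe hardware and current settings (hypothetical probes).
    findings = {
        "devices": run("aplay -l"),          # list ALSA playback devices
        "volume": run("amixer get Master"),  # check mixer levels
    }
    # Step 2: form candidate explanations from the evidence.
    hypotheses = []
    if "no soundcards found" in findings["devices"]:
        hypotheses.append("driver not loaded (try: modprobe snd_hda_intel)")
    if "[off]" in findings["volume"]:
        hypotheses.append("output muted (try: amixer set Master unmute)")
    # Step 3: offer a menu of possible fixes rather than applying them blindly.
    return hypotheses or ["no obvious problem found; gather more data"]

if __name__ == "__main__":
    for h in diagnose():
        print("-", h)
```

The point of the structure is that the agent proposes fixes for a human to approve, rather than mutating system state on its own.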

exceptione•1h ago
In a way LLMs are intelligence tests indeed.
dependency_2x•1h ago
I'm vibe coding now, after work. I am able to explore the landscape of a problem much more quickly, getting into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It's like an electric screwdriver vs. a manual one. Sometimes the manual one can do things the electric can't, but hell, if you get an IKEA delivery you want the electric one.
SwoopsFromAbove•1h ago
And is the electric one intelligent? :p
dependency_2x•59m ago
Who cares!
hexbin010•32m ago
Got any recent specific examples of it saving you an entire evening?
HWR_14•15m ago
Bad example. IKEA assembles better with a manual screwdriver.
Traubenfuchs•12m ago
You wouldn't say that if you had ever assembled PAX doors.
HWR_14•6m ago
Maybe? I'm not familiar with every IKEA product. But it looks like it takes a dozen small screws into soft wood.
SwoopsFromAbove•1h ago
I also cannot calculate the square root of 472629462.

My pocket calculator is not intelligent. Nor are LLMs.

HWR_14•11m ago
You'd be surprised. You could probably get three digits of the square root in under a minute if you tried.
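For a rough sense of why a few digits are cheap to get: Newton's method roughly doubles the number of correct digits per iteration, so even a hand-friendly version of it converges very quickly. A minimal sketch (the starting guess of 20000 and the step count are arbitrary assumptions):

```python
def newton_sqrt(n, x=1.0, steps=8):
    """Approximate sqrt(n) by Newton's method: repeatedly average
    the current guess with n divided by the guess."""
    for _ in range(steps):
        x = (x + n / x) / 2
    return x

n = 472629462
approx = newton_sqrt(n, x=20000.0, steps=5)
print(approx)  # close to 21740.04
```

After one step the first three digits (217...) are already nearly in place, which is the kind of precision the comment above is talking about.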
kusokurae•59m ago
It's incredible that on Hacker News we still encounter posts by people who will not, or cannot, differentiate mathematics from magic.
obsoleetorr•54m ago
it's also incredible we find people who can't differentiate physics/mathematics from the magic of the human brain
adrianN•48m ago
Intelligence is not magic though. The difference between intelligence and mathematics can plausibly be the same kind of difference as that between chemistry and intelligence.
energy123•45m ago
Human intelligence is chemistry and biology, not magic. OK, now what?
ACCount37•22m ago
Your brain is just math implemented in wet meat.
TeriyakiBomb•45m ago
Everything is magic when you don't understand how things work.
jaccola•35m ago
There are dozens of definitions of "intelligence", we can't even agree what intelligence means in humans, never mind elsewhere. So yes, by some subset of definitions it is intelligent.

But by some subset of definitions my calculator is intelligent. By some subset of definitions a mouse is intelligent. And, more interestingly, by some subset of definitions a mouse is far more intelligent than an LLM.

techpression•30m ago
I did that when I was 14 because I had no other choice, damn you SoundBlaster! I didn't get any menu but I got sound in the end.

I don't think conflating intelligence with "what a computer can do" makes much sense though. I can't calculate the Xth digit of pi in less than Z; I'm still intelligent (or I pretend to be).

But the question is not really about intelligence, that's a red herring; it's about utility, and LLMs are useful.

self_awareness•6m ago
set BLASTER=A220 I5 D1
slg•28m ago
>I can run Claude Code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests so I can gather information it can't gather itself, run commands to probe the hardware for its capabilities, and finally offer a menu of solutions, give the commands to implement the chosen one, and test that the solution works. Can you do that?

Yes. I have worked in companies small enough that the developers just end up becoming the default IT help desk. I never had any formal training in IT, but most of that kind of IT work can be accomplished with decent enough Google skills. In a way, it worked the same as you and the LLM. I would go poking through settings, run tests to gather info, run commands, and overall just keep trying different solutions until either one worked or it became reasonable to give up. I'm sure many people here have had similar experiences doing the same thing in their own families. I'm not too impressed with an LLM doing that. In this example, it's functionally just improving people's Googling skills.

schnitzelstoat•59m ago
I agree that all the AI doomerism is silly (by which I mean concern about some Terminator-style machine uprising; the economic issues are quite real).

But it's clear that LLMs have some real value. Even if we always need a human in the loop to catch hallucinations, they can still massively reduce the amount of human labour required for many tasks.

NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.

latexr•44m ago
Those aren’t mutually exclusive; something can be both useful and a con.

When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

LLMs are useful for many things, but they’re also not nearly as beneficial and powerful as they’re being sold as. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other kinds of diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, thus the con description applies.

https://youtu.be/l0K4XPu3Qhg?t=60

BoxOfRain•31m ago
I think the following things can both be true at the same time:

* LLMs are a useful tool in a variety of circumstances.

* Sam Altman is personally incentivised to spout a great deal of hyped-up rubbish about both what LLMs are capable of, and can be capable of.

latexr•27m ago
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
pousada•6m ago
It would make every marketing department and basically every startup founder con men too. While I don’t completely disagree with that framing, it’s not really helpful.
runarberg•9m ago
These are not independent hypotheses. If the second is true, it decreases the probability that the first is true, and vice versa.

The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people who then report an over inflated metric on the usefulness of the stuff they just bought (people don’t like to believe they were conned; cognitive dissonance).

In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of his product’s usefulness are heavily biased.

ACCount37•31m ago
LLMs of today advance in incremental improvements.

There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.

This alone should give you second thoughts on "AI doomerism".

latexr•22m ago
That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.

That could also apply to LLMs, that there would be a hard wall that the current approach can’t breach.

ACCount37•6m ago
If that's the case, then, what's the wall?

The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.

The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall. Because LLMs are good at in-context learning, which does many of the same things, and can do things like set up fine-tuning runs on themselves using CLI.

runarberg•17m ago
> it can still massively reduce the amount of human labour required for many tasks.

I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it is automating tasks away from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Though I need to see the numbers before I believe even that.

I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn’t be increased by much much much cheaper and easier means.

bodge5000•10m ago
> The LLM's are clearly useful for many things

I don't think there's any doubt about that. Even beyond programming (imo especially beyond programming), there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?

NFTs were cheap to produce, and the cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.

This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.

falloutx•4m ago
All of the professions it's trying to replace are at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.

At this point the "trick" is to scare white-collar knowledge workers into accepting low pay and high workloads, on the assumption that AI can do some of the work.

baq•54m ago
"People are falling in love with LLMs" and "P(Doom) is fearmongering" so close to each other is some cognitive dissonance.

The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic, the answer doesn't matter for businesses and consumers; it matters for philosophers (which everyone is even a little bit). 'Are LLMs useful for a great variety of tasks?' is a resounding 'yes'.

leogao•50m ago
> The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.

you're lumping together two very different groups of people and pointing out that their beliefs are incompatible. of course they are! the people who think there is a real threat are generally different people from the ones who want to push AI progress as fast as possible! the people who say both do so generally out of a need to compromise rather than there existing many people who simultaneously hold both views.

BoxOfRain•27m ago
> nuclear-weapon-level-dangerous chatbots

I feel this framing in general says more about our attitudes to nuclear weapons than it does about chatbots. The 'Peace Dividend' era, which is rapidly drawing to a close, has made people careless when they talk about the magnitude of the effects a nuclear war would have.

AI can be misused, but it can't be misused to the point an enormously depopulated humanity is forced back into subsistence agriculture to survive, spending centuries if not millennia to get back to where we are now.

lyu07282•46m ago
I think it's interesting how gamers have developed a pretty healthy aversion to generative AI in video games. Steam and Itch both now make it mandatory that games disclose generative AI use, and recently even beloved Larian Studios came under fire for using AI for concept art. Gamers hate that shit.

I think that's good, but the whole "AI is literally not doing anything" idea, that it's just some mass hallucination, has to die. Gamers argue it takes jobs away from artists; programmers, for some reason, seem to have to argue it doesn't actually do anything. Isn't that telling?

Chance-Device•39m ago
I think this is probably a trend that will erode with time, even now it’s probably just moved underground. How many human artists are using AI for concepts then laundering the results? Even if it’s just idea generation, that’s a part of the process. If it speeds up throughput, then maybe that’s fewer jobs in the long run.

And if AI assisted products are cheaper, and are actually good, then people will have to vote with their wallets. I think we’ve learned that people aren’t very good at doing that with causes they claim to care about once they have to actually part with their money.

lyu07282•34m ago
Because voting with your wallet is nonsense. We can decide what society we want to live in; we don't have to accept one in which human artists can't make a living. Capitalism isn't a force of nature we discovered like gravity, it's a set of deliberate choices we made.
HWR_14•17m ago
A huge issue with voting with your wallet is fraud. It's easy to lie about having no AI in your process. Especially if the final product is laundered by a real artist.
timschmidt•38m ago
> programmers seem to have to argue it doesn't actually do anything for some reason.

It's not really hard to see... spend your whole life defining yourself around what you do that others can't or won't, then an algorithm comes along which can do a lot of the same. Directly threatens the ego, understandings around self-image and self-worth, as well as future financial prospects (perceived). Along with a heavy dose of change scary, change bad.

Personally, I think the solution is to avoid building your self-image around material things, and to welcome and embrace new tools which always bring new opportunities, but I can see why the polar opposite is a natural reaction for many.

bandrami•36m ago
IDK, I think it's at least reasonable to look at the fact that there isn't a ton of new software available out there and conclude "AI isn't actually making software creation any faster". I understand the counterarguments to that but it's hardly an unreasonable conclusion.
Al-Khwarizmi•33m ago
I haven't gamed much in the last few years due to severe lack of time so I'm out of touch, but I used to play a lot of CRPGs and I always dreamed of having NPCs who could talk and react beyond predefined scripted lines. This seems to finally be possible thanks to LLMs and I think it was desired by many (not only me). So why are gamers not excited about generative AI?
danielbln•29m ago
> Gamers hate that shit.

Unless AI is used for code (which it is, surely, almost everywhere), gamers don't give a damn. Also, Larian didn't use it for concept art; they used it to generate an initial mood board to give to the concept artist as a guideline. And then there is Ark Raiders, which uses AI for all its VO, and that game is a massive hit.

This is just a breathless bubble, the wider gaming audience couldn't give two shits if studios use AI or not.

lpcvoid•29m ago
I think the costs of LLMs (huge energy hunger, people being fired because of them, a hostile takeover of human creativity, and computer hardware rising exponentially in cost) are by far larger than the benefits (generating videos of fish with arms, programming slightly faster, writing slop emails to talented people).

I know LLMs won't vanish again magically, but I wish they would every time I have to deal with their output.

krystofee•36m ago
I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible, it really feels like an inflection point.

I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines Django). The reality is my input is still critically valuable. My insights guide the LLM, my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the intent.

Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

ManuelKiessling•26m ago
This. By now I don’t understand how anyone can still argue in the abstract while it’s trivial to simply give it a try and collect cold, hard facts.

It’s like arguing that the piano in the room is out of tune and not bothering to walk over to the piano and hit its keys.

satisfice•14m ago
I am hitting the keys, and I call bullshit.

Yes, the technology is interesting and useful. No, it is not a “10x” miracle.

vanderZwan•25m ago
Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology, the article is about the marketing of the technology.

The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown and have been shown. Whether or not the actual LLM based tools genuinely help you has nothing to do with that.

amelius•18m ago
Yeah, but that should have been in the title; otherwise the article itself uses a centuries-old trick.
remus•17m ago
The point of the article is to paint LLMs as a confidence trick, the keyword being trick. If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick? If a street performer was doing the cup and ball scam, but I actually won and left with more money than I started with then I'd say that's a pretty bad trick!

Of course it is a little more nuanced than this and I would agree that some of the marketing hype around AI is overblown, but I think it is inarguable that AI can provide concrete benefits for many people.

latexr•9m ago
> If LLMs do actually deliver very real, tangible benefits then can you say there is really a trick?

Yes, yes you can. As I’ve mentioned elsewhere on this thread:

> When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

LLMs are being sold as miracle technology that does way more than it actually can.

latexr•14m ago
Exactly. It’s like if someone claimed to be selling magical fruit that cures cancer, and they’re just regular apples. Then people like your parent commenter say “that’s not a con, I eat apples and they’re both healthy and tasty”. Yes, apples do have great things about them, but not the exaggerations they were being sold as. Being conned doesn’t mean you get nothing, it means you don’t get what was advertised.
carpo•10m ago
But saying it's a confidence trick is saying it's a con, that they're trying to sell someone something that doesn't work. The OP is saying it makes them 10x more productive, so how is that a con?
satisfice•16m ago
You are speculating. You don’t know. You are not testing this technology— you are trusting it.

How do I know? Because I am testing it, and I see a lot of problems that you are not mentioning.

I don’t know if you’ve been conned or you are doing the conning. It’s at least one of those.

consp•14m ago
> It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.

That's not how book printing works, and I'd argue the monk can far more easily create new text and devise new interpretations. And they did, in the margins of books. It takes a long time to prepare one print, but printing 100 copies takes barely longer than printing one, which is where the value of the printing press comes from. It's not the ease of changing or producing large amounts of text, it's the ease of reproducing it, and since copy/paste exists it is a very poor analogy in my opinion.

I'd also argue the 10x is subject to observer bias, since subject and observer are the same person. My experience at this point is that boilerplate is fine with LLMs, and if that's all you do, good for you; otherwise it will hardly speed up anything, as the code is the easy part.

energy123•13m ago
> I'm maintaining a well-structured enterprise codebase (100k+ lines Django)

How do you avoid this turning into spaghetti? Do you understand/read all the output?

falloutx•7m ago
Are you actually reading the code? I have noticed most of the gains go away when you are reading the code outputted by the machine. And sometimes I do have to fix it by hand and then the agent is like "Oh you changed that file, let me fix it"
keyle•3m ago
It's fine for a Django app that doesn't innovate and just follows the same patterns for the 100 solved problems that it solves.

The line becomes a lot blurrier when you work on non-trivial issues.

A Django app is not particularly hard software; it's hardly software at all, more a conduit from a database to screens and vice versa, which has been basic software since the days of terminals. I'm not judging your job; if you get paid well for doing that, all power to you.

What I'm raising though is the fact that AI is not that useful for applications that aren't solving what has been solved 100 times before. Maybe it will be, some day, reasoning that well that it will anticipate problems that don't exist yet.

Glad to hear you're enjoying it, personally, I enjoy solving problems, not the end result as much.

lxgr•28m ago
Considerations around current events aside, what exactly is the supposed "confidence trick" of mechanical or electronic calculators? They're labor-saving devices, not arbiters of truth, and as far as I can tell, they're pretty good at saving a lot of labor.
mono442•28m ago
I don't think it's true. It is probably overhyped but it is legitimately useful. Current agents can do around 70% of coding stuff I do at work with light supervision.
latexr•3m ago
> It is probably overhyped

That’s exactly what a con is: selling you something as being more than what it actually is. If you agree it’s overhyped by its sellers, you agree it’s a con.

> Current agents can do around 70% of coding stuff I do

LLMs are being sold as capable of significantly more than coding. Focusing on that singular aspect misses the point of the article.

Traubenfuchs•14m ago
Yeah, there is overhyped marketing, but at this point AI has revolutionized software engineering and is writing the majority of code worldwide, whether you like it or not, and it is still improving.
self_awareness•11m ago
> If your answer doesn’t match the calculator’s, you need to redo your work.

Hm... is it wrong to think like this?

falcor84•3m ago
> We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.

> This has, of course, not happened.

This is so incredibly shallow. I can't think of even a single doomer who ever claimed that AI would have destroyed us by now. P(doom) is about the likelihood of it destroying us "eventually". And I haven't seen anything in this post or in any recent developments to make me reduce my own p(doom), which is not close to zero.

Here are some representative values: https://pauseai.info/pdoom

vegabook•2m ago
the other urgency trick that is not mentioned is "oooh China!!" used to try to bypass all types of regulations around energy usage, not to mention energy access for actual humans, and also used to try to plunder the public balance sheet.
grumbel•1m ago
> GPT-3 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”. [...] This has, of course, not happened.

What parallel world are they living in? Every single online platform has been flooded with AI-generated content and has had to enact countermeasures, or has gone the other way, embraced it, and replaced humans with AI. AI use in scams has also become commonplace.

Everything they warned about with the release of GPT‑2 did in fact happen.
