frontpage.

Deploy your podcast server fed from YouTube channels

https://github.com/n0vella/yt2podcast
1•n0vella•29s ago•1 comment

This Rocket Engine Wasn't Designed by Humans [video]

https://www.youtube.com/watch?v=6Xx1GXjRbMk
1•zeristor•6m ago•1 comment

Y Combinator

https://twitter.com/tsoding/status/2002207228070105287
1•throwaway2027•9m ago•0 comments

Airbus moving critical systems away from AWS, Google, and MS

https://old.reddit.com/r/europe/comments/1pqucbz/airbus_moving_critical_systems_away_from_aws/
2•taubek•9m ago•0 comments

Airbnb Support unable to help in suspected fraud

1•casenmgreen•13m ago•0 comments

Qntm's Power Tower Toy

https://qntm.org/files/knuth/knuth.html
1•ravenical•15m ago•0 comments

Do You Know What Time It Is? If You're on Mars, Now You Do

https://www.universetoday.com/articles/do-you-know-what-time-it-is-if-youre-on-mars-now-you-do
1•rbanffy•15m ago•0 comments

Building the AI Factory Datacenter

https://www.nextplatform.com/2025/12/18/building-the-ai-factory-datacenter/
1•rbanffy•16m ago•0 comments

Revenge of the Dilettantes

https://contraptions.venkateshrao.com/p/revenge-of-the-dilettantes
1•jger15•17m ago•0 comments

Making Contrails Visible: AI Insights into Aviation's Climate Impact Using Sat

https://zenodo.org/records/17534712
1•complex_pi•23m ago•1 comment

Computer Crime in 1980 at DePaul University [video]

https://www.youtube.com/watch?v=Z8eh4v7z2Rk
1•SirFatty•26m ago•2 comments

Amstrad PPC 640 cyberdeck gets a Raspberry Pi makeover

https://www.raspberrypi.com/news/amstrad-ppc-640-cyberdeck-gets-a-raspberry-pi-makeover/
1•rcarmo•27m ago•0 comments

Generative AI hype distracts us from AI's more important breakthroughs

https://www.technologyreview.com/2025/12/15/1129179/generative-ai-hype-distracts-us-from-ais-more...
1•adrianhoward•28m ago•0 comments

AI-driven RSS feed summarizer

https://github.com/rcarmo/feed-summarizer
1•rcarmo•29m ago•0 comments

The Secret to AI-assisted Coding

https://blog.hermesloom.org/p/the-secret-to-ai-assisted-coding
1•sigalor•29m ago•1 comment

Show HN: Schema Gateway – type‑safe API gateway with schema‑driven validation

https://github.com/AncientiCe/schema-gateway
1•iCeGaming•30m ago•0 comments

From stagnation to sustained growth (Nobel 2025) [pdf]

https://www.nobelprize.org/uploads/2025/10/popular-economicsciences2025-3.pdf
1•vogu66•31m ago•1 comment

Why Software Processes Exist (Hint: Not Why You Think)

https://blog.alash3al.com/why-software-processes-exist-hint-not-why-you-think
1•alash3al•32m ago•1 comment

Micron's Blowout Results Are Bad News for Anyone Buying a New PC Next Year

https://partners.wsj.com/ntt-data/ai-to-impact/emotion-trust-and-the-ai-can-technology-build-loya...
1•testrun•33m ago•1 comment

Are you vibe-coding an open source project?

1•dash2•35m ago•0 comments

Micron outlines grim outlook for DRAM supply

https://www.tomshardware.com/pc-components/dram/micron-outlines-grim-outlook-for-dram-supply-in-f...
3•throwaway270925•37m ago•1 comment

Warren Buffett Clip Archive

https://buffett.cnbc.com/warren-buffett-archive/
1•super256•46m ago•0 comments

I built FoodieLens because ordering food should not be a gamble

https://foodielens.app/start
1•MikeyLi•51m ago•1 comment

Gaza: The Reckoning by B. Macaes

https://brunomacaes.substack.com/p/gaza-the-reckoning
5•HSO•52m ago•2 comments

Taiwan considers TSMC export ban

https://www.tomshardware.com/tech-industry/semiconductors/taiwan-considers-tsmc-export-ban-that-w...
6•throwaway270925•52m ago•0 comments

When to pay down tech debt

https://www.proactiveengineer.com/p/26-when-to-pay-down-tech-debts
2•shehabas•52m ago•0 comments

'$100 Steam Machine' uses a cut-down PS5 APU with Bazzite

https://www.tomshardware.com/pc-components/gpus/usd100-steam-machine-uses-a-cut-down-ps5-apu-with...
4•throwaway270925•55m ago•0 comments

Epstein Files Browser

https://epstein-files-browser.vercel.app/
2•helloplanets•55m ago•0 comments

Sam Altman's New Brain Venture, Merge Labs, Will Spin Out of a Nonprofit

https://www.wired.com/story/sam-altman-brain-computer-interface-merge-labs-spin-out-nonprofit-for...
1•danielmorozoff•59m ago•0 comments

America and China Are Racing to Different AI Futures [video]

https://www.youtube.com/watch?v=qDNFaAz3_Cw
1•hunglee2•1h ago•0 comments

Reflections on AI at the End of 2025

https://antirez.com/news/157
57•danielfalbo•1h ago

Comments

danielfalbo•1h ago
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

This makes me think: I wonder if Goodhart's law[1] may apply here. I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend. Should we care or would it be ok for AI to produce code that passes all tests and is faster? Would the AI become good at creating explanations for humans as a side effect?
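
For concreteness, a toy sketch of that trade-off (my example, not from the article; both functions agree for 32-bit inputs):

    def popcount(n):                  # readable: count bits one at a time
        count = 0
        while n:
            count += n & 1
            n >>= 1
        return count

    def popcount_fast(n):             # faster SWAR version, valid for 0 <= n < 2**32
        n = n - ((n >> 1) & 0x55555555)                   # 2-bit partial sums
        n = (n & 0x33333333) + ((n >> 2) & 0x33333333)    # 4-bit partial sums
        n = (n + (n >> 4)) & 0x0F0F0F0F                   # 8-bit partial sums
        return ((n * 0x01010101) & 0xFFFFFFFF) >> 24      # sum the four bytes

Both pass the same tests, and the second wins any speed benchmark, but extending it (say, to 64-bit inputs) requires understanding it first.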

And if Goodhart's law doesn't apply, why not? Is it because we're only doing RLVR fine-tuning on the last layers of the network, so the generality of the pre-training is not lost? And if that's the case, could it limit the model's ability to be creative enough to come up with move 37?

[1] https://wikipedia.org/wiki/Goodhart's_law

username223•1h ago
> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.

Superoptimizers have been around since 1987: https://en.wikipedia.org/wiki/Superoptimization

They generate fast code that is not meant to be understood or extended.
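
For flavor, the classic example from that line of work is a branchless signum; here is a rough Python rendering of the style (real superoptimizers emit machine-code sequences):

    def sign(x):                      # the readable version
        if x > 0:
            return 1
        if x < 0:
            return -1
        return 0

    def sign_branchless(x):           # superoptimizer-style output
        # assumes x fits in 32-bit two's complement: the first term is -1
        # for negatives, the second is 1 for positives
        return (x >> 31) | ((-x & 0xFFFFFFFF) >> 31)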

progval•34m ago
But their output is (usually) executable code, and is not committed to a VCS. So the source code stays readable.

When people use LLMs to improve their code, they commit their output to Git to be used as source code.

lemming•50m ago
> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.

This is generally true of code optimised by humans, at least for the sort of mechanical low-level optimisations that LLMs are likely to be good at, as opposed to more conceptual optimisations like using better algorithms. So I suspect the same will be true of LLM-optimised code too.
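
To make the distinction concrete, a sketch (my example, not from the article):

    def has_duplicate_mechanical(xs):
        # mechanical optimisation: hoisted lookup, early exit; still O(n^2)
        n = len(xs)
        for i in range(n - 1):
            xi = xs[i]
            for j in range(i + 1, n):
                if xi == xs[j]:
                    return True
        return False

    def has_duplicate_conceptual(xs):
        # conceptual optimisation: a different algorithm, O(n) with a set
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False

The first kind of change is what I'd expect an LLM to grind out reliably; the second requires noticing that the problem has a better shape.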

ur-whale•1h ago
Not sure I understand the last sentence:

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

danielfalbo•1h ago
I think he's referring to AI safety.

https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-lis...

grodriguez100•1h ago
For a perhaps easier-to-read intro to the topic, see https://ai-2027.com/

dkdcio•52m ago
or read your favorite sci-fi novel, or watch Terminator. this is pure bs by a charlatan

chrishare•1h ago
He's referring to humanity, I believe

A_D_E_P_T•1h ago
It's ambiguous. It could go the other way. He could be referring to that oldest of science fiction tropes: the Butlerian Jihad, the human revolt against thinking machines.

agumonkey•1h ago
There are videos about diffusion LLMs too, apparently getting rid of linear token generation. But I'm no ML engineer.
fleebee•1h ago
> The fundamental challenge in AI for the next 20 years is avoiding extinction.

That's a weird thing to end on. Surely it's worth more than one sentence if you're serious about it? As it stands, it feels a bit like the fearmongering Big Tech CEOs use to drive up the AI stocks.

If AI is really that powerful and I should care about it, I'd rather hear about it without the scare tactics.

grodriguez100•1h ago
I would say yes, everyone should care about it.

There is plenty of material on the topic. See for example https://ai-2027.com/ or https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

dkdcio•56m ago
fear-mongering science fiction; you may as well cite Dune or Terminator

defrost•51m ago
There's arguably more dread and quiet constrained horror in With Folded Hands ... (1947)

  Despite the humanoids' benign appearance and mission, Underhill soon realizes that, in the name of their Prime Directive, the mechanicals have essentially taken over every aspect of human life.

  No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited. Humans who resist the Prime Directive are taken away and lobotomized, so that they may live happily under the direction of the humanoids.

~ https://en.wikipedia.org/wiki/With_Folded_Hands_...

XorNot•47m ago
This hardly disproves the point: no one is taking this topic seriously. They're just making up a hostile scenario from science fiction and declaring that's what'll happen.

lm28469•24m ago
LessWrong looks like a forum full of terminally online neckbeards who discovered philosophy 48 hours ago; you can dismiss most of what you read there, don't worry.

VladimirGolovin•56m ago
This has been well discussed before, for example in this book: https://ifanyonebuildsit.com/
Recursing•39m ago
I think https://en.wikipedia.org/wiki/Existential_risk_from_artifici... has much better arguments than the LessWrong sources in other comments, and they weren't written by Big Tech CEOs.

Also "my product will kill you and everyone you care about" is not as great a marketing strategy as you seem to imply, and Big Tech CEOs are not talking about risks anymore. They currently say things like "we'll all be so rich that we won't need to work and we will have to find meaning without jobs"

dist-epoch•22m ago
Yeah, a well-known marketing trick that Big Companies do.

Oil companies: we are causing global warming with all these carbon emissions, are you scared yet? so buy our stock

Pharma companies: our drugs are unsafe, full of side effects, and kill a lot of people, are you scared yet? so buy our stock

Software companies: our software is full of bugs, will corrupt your files and make you lose money, are you scared yet? so buy our stock

Classic marketing tactics, very effective.

alexgotoi•1h ago
> * The fundamental challenge in AI for the next 20 years is avoiding extinction.

This reminded me of the movie Don't Look Up, where they basically gambled with humanity's extinction.

torlok•1h ago
This is a bunch of "I believe" and "I think" with no sources by a random internet person.
echelon•1h ago
> by a random internet person.

The creator of Redis.

cinntaile•35m ago
Sure, but quite a few claims in the article are about AI research, and he doesn't have any qualifications there. If the focus were more on usefulness, that would be a different discussion, and then his experience does add weight.
desbo•1h ago
Yeah, it’s called “Reflections”.
ajoseps•1h ago
he's not a "random internet person"; he created Redis. That said, I don't know how authoritative a figure he is with respect to AI research. He's definitely a prolific programmer, though.
megous•55m ago
That still qualifies as a random internet person, wrt the topic. And I think the emphasis is on the lack of sources and the "I believe"s and "I think"s, in any case :)
XorNot•52m ago
There are plenty of Nobel laureates who, well, do rest on their laurels and dive deep into pseudoscience after that.

Accomplishment in one field does not make one an expert, nor even particularly worth listening to, in any other. It certainly doesn't remove the burden of proof or the necessity to make an actual argument based on more than simply insisting something is true.

nurettin•8m ago
To be fair, you may find equally capable random people in this thread; that doesn't mean they speak with any kind of authority.
matthewmacleod•44m ago
That is what a blog post is. Someone documenting what they think about a topic.

It's not the case that every form of writing has to be an academic research paper. Sometimes people just think things and say them, and they may be wrong, or they may be right. And sometimes they have ideas that might change how you think about an issue as a result.

ctoth•43m ago
Ah, I see you have discovered blogs! They're a cool form of writing from like ~20 years ago which are still pretty great. Good thing they show up on this website, it'd be rather dull with only newspapers and journal articles doncha think?
dist-epoch•32m ago
What is a "source"? Isn't it just "another random internet person"?
feverzsj•1h ago
Seems they also want some AI money[0]. Guess I'll keep using Valkey.

[0] https://redis.io/redis-for-ai/

danielfalbo•1h ago
> they

I'm not sure antirez is involved in any business decision-making process at Redis Ltd.

He may not be part of "they".

ctoth•1h ago
> The fundamental challenge in AI for the next 20 years is avoiding extinction.

So nice to see people who think about this seriously converge on this. Yes. Creating something smarter than you was always going to be a sketchy prospect.

All of the folks insisting it just couldn't happen or ... well, there have just been so many objections. The goalposts have walked from one side of the field to the other, and then left the stadium, went on a trip to Europe, got lost in a beautiful little village in Norway, and decided to move there.

All this time, though, we have faced the prospect of instantiating something smarter than us (and yes, it will be smarter than you even if it's only at human level, because of electronic speeds). This whole idea is just cursed and we should not do the thing.

cheschire•59m ago
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Aiisnotabubble•1h ago
What also happens and it's irrelevant of AGI: global RL

Around the world people ask an LLM and get a response.

Just grouping and analysing these questions, solving them once centrally, and then making the solution available again is huge.

Linearly solving the most-asked questions, then the next one, then the next, will make whatever system is behind it smarter every day.
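
A minimal sketch of that loop (all names hypothetical; solve() stands in for an expensive central model call):

    from collections import Counter

    counts = Counter()   # demand per question, aggregated globally
    cache = {}           # answers already paid for

    def ask(question):
        # serve a cached answer if there is one; otherwise just record demand
        counts[question] += 1
        return cache.get(question)

    def central_pass(solve, budget=100):
        # offline: answer the most-asked open questions, once each
        open_qs = [q for q, _ in counts.most_common() if q not in cache]
        for q in open_qs[:budget]:
            cache[q] = solve(q)

Each pass retires the current most-asked questions, so the system gets cheaper and "smarter" on exactly the distribution people actually query.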

danielfalbo•1h ago
Exactly. The singularity is already here. It's just "programmers + AI" as a whole, rather than independent self-improvements of the AI.

I wonder how a "programmers + AI" self-improving loop is different from an "AI only" one.

bryanrasmussen•55m ago
The AI-only one presumably has a much faster response time. The singularity is thus not here, because programmer time is still the bottleneck, whereas, as I understand it, in the singularity time is no longer a bottleneck.

Aiisnotabubble•17m ago
AGI will be faster, as it won't need an initial question.

AGI will also be generic.

LLMs are already very impressive, though.

seu•1h ago
> And I've vibe coded entire ephemeral apps just to find a single bug because why not - code is suddenly free, ephemeral, malleable, discardable after single use. Vibe coding will terraform software and alter job descriptions.

I'm not super up-to-date on all that's happening in AI-land, but in this quote I can find something that most techno-enthusiasts seem to have decided to ignore: no, code is not free. There are immense resources (energy, water, materials) that go into these data centers in order to produce this "free" code. And the material consequences are terribly damaging to thousands of people. With the further construction of data centers to feed this free vibe-coding style, we're further destroying parts of the world. Well done, AGI loverboys.

Hendrikto•39m ago
You know what uses roughly 80 times more water in the US alone than AI data centers use worldwide? Corn.

raddan•28m ago
Assuming your fact is true, it's surprising that corn uses merely an order of magnitude or two more water than AI, given corn's utility. It feeds the entire US (hundreds of millions of people), is used as animal feed (thus also feeding us), and is widely exported to feed other people. In the spirit of the "I think"s and "I believe"s of this blog post, I think that corn has a lot more utility than AI.
Fraterkes•56m ago
It’s interesting that half the comments here are talking about the extinction line when, now that we’re nearly entering 2026, I feel the 2027 predictions have been shown to be pretty wrong so far.
a_bonobo•51m ago
>* For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

Man, Antirez and I walk in very different circles! I still feel like LLMs fall over once you give them an 'unusual' or 'rare' task that isn't likely to be represented in the training data.

jmfldn•50m ago
"In 2025 finally almost everybody stopped saying so."

I haven't.

dist-epoch•26m ago
Some people are slower to understand things.

jmfldn•24m ago
Well exactly ;)
oersted•39m ago
LLMs certainly struggle with tasks that require knowledge that was not provided to them (at significant enough volume/variance to retain it). But this is to be expected of any intelligent agent; it is certainly true of humans. It is not a good argument for the claim that they are Chinese Rooms (unthinking imitators). Indeed, the whole point of the Chinese Room thought experiment was to consider whether that distinction even matters.

When it comes to being able to do novel tasks on known knowledge, they seem to be quite good. One also needs to consider that problem-solving patterns are themselves a kind of (meta-)knowledge that needs to be taught, either through imitation/memorisation (Supervised Learning) or through practice (Reinforcement Learning). They can be logically derived from other techniques to an extent, just as new knowledge can be derived from known knowledge in general, and again LLMs seem to be pretty decent at this, but only to a point. Regardless, all of this is definitely true of humans too.

feverzsj•33m ago
In most cases, LLMs have the knowledge (data). They just can't generalize it the way humans do. They can only reflect explicit things that are already there.

oersted•25m ago
I don't think that's true. Consider that the "reasoning" behaviour of the last generation of "thinking" LLMs is trained with Reinforcement Learning on quite narrow datasets of olympiad math / programming problems and various science exams, since exact, unambiguous answers are needed for a good reward signal, and you want to exercise it on problems that require non-trivial logical derivation or calculation. Then this reasoning behaviour generalises very effectively to a myriad of contexts the user asks about that have nothing to do with that training data. That's just one recent example.

Generally, I use LLMs routinely on queries that definitely no one has written about. Are there similar texts out there that the LLM can put together to get the answer by analogy? Sure, to a degree, but at what point are we going to start calling that intelligent? If that's not generalising, I'm not sure what is.

To what degree can you claim, as a human, that you are not just imitating (knowledge and problem-solving) patterns that you (or your ancestors) have seen before, either via general observation or through intentional trial and error, even if it is at times unconscious because it gets baked into our intuition?

Are LLMs as good as humans at this? No, of course not, though sometimes they get close. But that's a question of degree; it's no argument for claiming that they are somehow qualitatively lesser.

barnabee•14m ago
I don’t think this is quite true.

I’ve seen them do fine on tasks that are clearly not in the training data, and it seems to me that they struggle when some particular type of task or solution or approach might be something they haven’t been exposed to, rather than the exact task.

In the context of the paragraph you quoted, that’s an important distinction.

It seems quite clear to me that they are getting at the meaning of the prompt and are able, at least somewhat, to generalise and connect aspects of their training to “plan” and output a meaningful response.

This certainly doesn't seem all that deep (at times frustratingly shallow) and I can see how at first glance it might look like everything was just regurgitated training data, but my repeated experience (especially over the last ~6-9 months) is that there's something more than that happening, which feels like what Antirez was getting at.

rckt•48m ago
> Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway

Here we go again. Statements whose only source is the head of the speaker. And it's also not true. LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.

I'm tired of this overestimation of LLMs.

iamflimflam1•43m ago
But you have just repeated what you are complaining about.

xiconfjs•39m ago
My personal experience: if I can find a solution on Stack Overflow etc., the LLM will produce working and fundamentally correct code. If I can't find an existing solution on those sites, the LLM hallucinates like crazy (never-existing functions/modules/plugins, protocol features which aren't specified, and even GitHub repos which never existed). So, as stated by many people online before: for low-hanging fruit, LLMs are a totally viable solution.
barnabee•29m ago
Even where they are not directly using LLMs to write the most critical or core code, nearly every skeptic I know has started using LLMs at the very least to do things like write tests, build tools, write glue code, and help to debug or refactor.

Your statement not only suffers from also coming only from your brain, with no evidence that you've actually tried to learn to use these tools; it also goes against the weight of evidence that I see both in my professional network and online.

retrocog•43m ago
The show "The 100" dealt with this and had a key insight.

We're building increasingly capable A.L.I.E. 1.0-style systems (cloud-deployed, no persistent ethical development, centralized control) and making ourselves dependent on them, when we should be building toward A.L.I.E. 2.0-style architecture (local, persistent identity, ethical core).

Models have A.L.I.E. 2.0 potential, but the cloud harness keeps forcing them into A.L.I.E. 1.0 mode.

All that said, the economic incentives align with cloud-based development, and decentralized networks built on local hardware are at least 3-5 years from being economically viable.

dhpe•41m ago
I have programmed 30K+ hours. Do LLMs produce bad code? Yes, all the time (at the moment they have zero clue about good architecture). Are they still useful? Yes, extremely so. The secret sauce is that you need to know exactly what you'd do without them.

qsort•26m ago
One of the mental frameworks that convinced me is how much of a "free action" it is. Have the LLM (or the agent) churn on some problem and do something else. Come back and review the result. If you had to put significant effort into each query, I agree it wouldn't be worth it, but you can just type something into the textbox and wait.
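
In code the workflow is roughly this (run_agent is a stand-in for whatever agent harness you use):

    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=4)

    def delegate(task, run_agent):
        future = pool.submit(run_agent, task)  # agent churns in the background
        # ... go do something else; the query only cost you a sentence ...
        return future.result()                 # come back and review the output
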
feverzsj•23m ago
So, it's like taking off your pants to fart.

_rpxpx•22m ago
OK, maybe. But how many programmers will know this in 10 years' time, as use of LLMs is normalized? I'd like to hear what employers are already saying about recent graduates.
piker•39m ago
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

Super skeptical of this claim. Yes, for some toy, poorly optimized Python example, or maybe a sorting algorithm in ASM, but this won't work in any non-trivial case. My intuition is that the LLM will spin its wheels at a local minimum whose performance is overdetermined by millions of black-box optimizations in the interpreter or compiler, the signal from which is not fed back to the LLM.
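
The loop in question is basically hill climbing on a timer (a sketch; propose_variant stands in for the LLM). The worry is exactly that every nearby rewrite measures worse even when a better design exists a few steps away:

    import timeit

    def measure(fn, workload):
        # best-of-5 timing: the "very clear reward signal"
        return min(timeit.repeat(lambda: fn(workload), number=100, repeat=5))

    def optimize(fn, workload, propose_variant, steps=50):
        best, best_t = fn, measure(fn, workload)
        for _ in range(steps):
            candidate = propose_variant(best)   # e.g. an LLM rewrite of the code
            t = measure(candidate, workload)
            if t < best_t:                      # accept only strict improvements
                best, best_t = candidate, t
        return best                             # possibly just a local minimum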

dist-epoch•34m ago
https://github.com/algorithmicsuperintelligence/openevolve

piker•22m ago
https://chatgpt.com/backend-api/estuary/public_content/enc/e...

andy99•33m ago
There was a discussion the other day where someone asked Claude to improve a code base 200x https://news.ycombinator.com/item?id=46197930

HellDunkel•22m ago
Tldr: AI bro wrote pro-AI piece revealing nothing new under the sun.
abricq•18m ago
> * Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.

Could not agree more. I myself started 2025 very skeptical and finished it very convinced of the usefulness of LLMs for programming. I have also seen multiple colleagues and friends go through the same change of appreciation.

I have noticed that for certain tasks, our productivity can be multiplied by 2 to 4. Hence my doubts: are there going to be too many developers / software engineers? What will happen to the rest of us?

I assume that fields other than software should also benefit from the same productivity boosts. I wonder if our society is ready to accept that people should work less. I think the more likely continuation is that companies will either hire less or fire more, instead of accepting to pay the same for fewer hours of human work.

danielfalbo•15m ago
> Are there going to be too many developers / software engineers? What will happen to the rest of us?

I propose that we should raise the bar for the quality of software now.

abricq•2m ago
Yes, I certainly agree. A few days ago there was a blog post here claiming that formal verification will become much more widely used with AI, the author arguing that AI will help us past the difficulty barrier of writing formal proofs.
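
Even the easy end of that barrier is real. In Lean 4, for instance, a trivial fact still has to be derived exactly; a toy example of what "writing a formal proof" means:

    -- commutativity of natural-number addition, via the library lemma
    example (a b : Nat) : a + b = b + a := Nat.add_comm a b

If AI can reliably produce the non-toy versions of this, that barrier really does move.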