
Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•6m ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•11m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•15m ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•16m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•17m ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•22m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•25m ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•28m ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•35m ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•36m ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•39m ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
2•geox•40m ago•0 comments

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•41m ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
2•bookmtn•46m ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
1•tjr•47m ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
3•alephnerd•48m ago•1 comment

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•53m ago•2 comments

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•57m ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comment

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
6•miohtama•1h ago•5 comments

Benchmarking how well LLMs can play FizzBuzz

https://huggingface.co/spaces/venkatasg/fizzbuzz-bench
1•_venkatasg•1h ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
19•SerCe•1h ago•12 comments

Octave GTM MCP Server

https://docs.octavehq.com/mcp/overview
1•connor11528•1h ago•0 comments

Show HN: Portview what's on your ports (diagnostic-first, single binary, Linux)

https://github.com/Mapika/portview
3•Mapika•1h ago•0 comments

Voyager CEO says space data center cooling problem still needs to be solved

https://www.cnbc.com/2026/02/05/amazon-amzn-q4-earnings-report-2025.html
1•belter•1h ago•0 comments

Boilerplate Tax – Ranking popular programming languages by density

https://boyter.org/posts/boilerplate-tax-ranking-popular-languages-by-density/
1•nnx•1h ago•0 comments

Zen: A Browser You Can Love

https://joeblu.com/blog/2026_02_zen-a-browser-you-can-love/
1•joeblubaugh•1h ago•0 comments

My GPT-5.3-Codex Review: Full Autonomy Has Arrived

https://shumer.dev/gpt53-codex-review
2•gfortaine•1h ago•0 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
2•AGDNoob•1h ago•1 comment

God said it (song lyrics) [pdf]

https://www.lpmbc.org/UserFiles/Ministries/AVoices/Docs/Lyrics/God_Said_It.pdf
1•marysminefnuf•1h ago•0 comments

The AI bubble is all over now, baby blue

https://garymarcus.substack.com/p/the-ai-bubble-is-all-over-now-baby
37•ArmageddonIt•1mo ago

Comments

chvid•1mo ago
It is a bit silly calling a top at this point.
ta9000•1mo ago
“Have you met Gary Marcus?”
tim333•1mo ago
You can check https://hn.algolia.com/?query=garrymarcus for ~286 previous Gary Marcus stories with similar content.

Also check out a view from ten years in the future: https://sw.vtom.net/hn35/pages/90099333.html https://sw.vtom.net/hn35/item.html?id=90099333 (via https://news.ycombinator.com/item?id=46205632)

MasterScrat•1mo ago
> Without world models, you cannot achieve reliability. And without reliability, profits are limited.

Surprising to simultaneously announce the end of the road yet point to the road ahead

incrudible•1mo ago
It is not a road, it is a runway.
idontwantthis•1mo ago
> Whether it all falls apart suddenly, or gradually, I do not know. And LLMs will continue to exist.

This is one thing I don't get. Why will LLMs still exist if AI companies go bust? Will we have stagnant models that can't be improved anymore as a service? Isn't each query still a monumental computing task that they lose money on?

incrudible•1mo ago
The task is not so monumental that it could not be provided at a reasonable price or financed through advertising, but as long as major players are willing to operate at a loss, you face little choice but to operate at a loss yourself.
grim_io•1mo ago
Inference is probably priced okay, at least at API prices.

The salaries, the training, and especially the data center build-out might be a little crazy right now.

nasmorn•1mo ago
If I use an LLM for programming, why would it need to update constantly? As soon as you can run a SOTA-class model on, say, the surely upcoming 1 TB RAM Mac Studio, it is out there and can never be taken back. If that were my only avenue for access, I would shell out those 10k in a heartbeat.
AnimalMuppet•1mo ago
Take railroads, for instance. Back in the 1800s, too many were built. Many of them (almost all, I think) went bankrupt at one time or another. At that point, the creditors made a rational evaluation: Is this worth keeping, or not? If yes, then let's try to reorganize a business that can actually survive. If not, tear it up and sell the scrap. Some were kept, some were torn up.

But the post-bankruptcy railroads that were kept were able to operate without the burden of the construction costs, because that had been destroyed in the bankruptcy (along with the original owners).

So, AI: I suspect that the training costs (plus hardware costs) dominate the operating costs. If that is so, then a post-bankruptcy AI company could still be a profitable business. It wouldn't be able to grow its hardware very fast, or be able to re-train new models very often, but it could still be an ongoing business. The current owners would still get nothing, though.
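
To make that arithmetic concrete, here is a toy sketch with invented placeholder figures (none of these numbers describe any real AI company), showing how writing off sunk training and hardware costs can flip the sign of the margin:

```python
# Toy sketch of the railroad analogy: invented placeholder figures,
# not estimates for any real AI company.

capex = 10_000_000_000              # sunk training + hardware cost
annual_revenue = 1_000_000_000
annual_serving_cost = 600_000_000   # inference/operating cost only

# Before bankruptcy: capex amortized over 5 years swamps the margin.
pre = annual_revenue - annual_serving_cost - capex // 5
# After bankruptcy: creditors wrote the capex off; only operating costs remain.
post = annual_revenue - annual_serving_cost

print(pre)    # -1600000000  (loss-making under the construction debt)
print(post)   # 400000000    (viable as a going concern)
```

Same servers, same customers; only the balance sheet changed, which is exactly the railroad story.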

Havoc•1mo ago
Still the same flaw in his analysis. Humans are unreliable too and the entire damn economy runs on them. You just need good enough, and AI is getting better daily.
grim_io•1mo ago
It only gets better daily at stuff it already kinda could do.

It can write my code a bit less shitty tomorrow, but that doesn't sound like the fulfillment of the promises given.

Tomorrow, another company will yet again fail to integrate agentic workflows.

Havoc•1mo ago
That seems true.

Even with current abilities, if they're just rolled out, it's still trillions of dollars of economic impact.

A bit like OK Waymo isn't perfect but it works in SF...we don't need a giant breakthrough to bring it to another 1000 cities.

Everyone is focused on how to make the models better (rightly), but impact and economic viability are in implementation, and there is a lot of low-hanging fruit there.

>another company will yet again fail to integrate agentic workflows.

'tis true

camgunz•1mo ago
> A bit like OK Waymo isn't perfect but it works in SF...we don't need a giant breakthrough to bring it to another 1000 cities

Well, a lot of cities have snow, or different flora and fauna, or different road rules (Karachi, Mexico City). Maybe the same approach works (spend hellacious amounts of money to train) but again, for what economic benefit?

thegrim000•1mo ago
What analysis? There is no analysis. "The economics don't make sense to me and therefore it'll crash and burn", and "here's two articles from mainstream media that are worried about AI spending". That's the extent of the article's content. That's the extent of the analysis. He himself offers absolutely zero argument, data, or facts.
pants2•1mo ago
I have been asking myself how useful an LLM with perfect intelligence would be. Meaning that for any question you give it that can possibly be answered with the given information, you will get the right answer.

Obviously, it would be very useful, but still limited by its context and prompt. For many tasks, coding models are getting close. It does everything I ask, generally correctly, the first time. Around half the re-dos are because I under-specified the prompt. Soon that will be 100% of re-dos, and the programming aspect of my job will be mostly focused on writing good prompts, yet I will still be here identifying and translating real-world requirements into prompts.

We are quickly approaching a situation where LLMs can ace all benchmarks and yet still not deliver the insane ROI that the frontier labs are predicting, because humans are a bottleneck, and so is experimentation.

For example, that perfect LLM may be able to find the cure for cancer, but all the research in the world isn't enough information to answer that question, so we need to conduct experiments and learn more. Maybe that can speed up humanity's cancer discoveries by 10X, but not 1000X, purely because of the experiment bottleneck.
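
The experiment bottleneck is essentially Amdahl's law: if some fraction of the discovery pipeline is serial wet-lab work the model cannot accelerate, overall speedup is capped no matter how smart the model gets. A minimal sketch (the fractions and speedups are made-up illustrations):

```python
# Amdahl's-law sketch of the "experiment bottleneck" point above.
# The fractions and speedups are invented illustrations, not estimates.

def overall_speedup(serial_fraction: float, thinking_speedup: float) -> float:
    """Whole-pipeline speedup when only the non-serial part
    (analysis, hypothesis generation) is accelerated."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / thinking_speedup)

# If half of discovery wall-clock time is wet-lab experiments the model
# cannot accelerate, a 1000x smarter model buys only ~2x overall.
print(round(overall_speedup(0.5, 1000), 2))   # 2.0
print(round(overall_speedup(0.1, 1000), 2))   # 9.91
```

The cap is 1/serial_fraction, which is why shrinking the experimental fraction matters more than further speeding up the thinking.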

Ianjit•1mo ago
Why assume the breakdown between benchmarks and ROI is due to humans? The map is not the territory, the benchmark is not reality, and the world is more complex than computer scientists understand.
pants2•1mo ago
Benchmarks are moving closer to reality though with things like FrontierScience and SWE-Bench Pro
Ianjit•1mo ago
Maybe you are right, but maybe it’s radiology all over again.
tim333•1mo ago
You could do a lot with perfect intelligence. Print out the formulas for cures for cancer and ALS. Print out the code for self-improving AGI, the meaning of life, why there's something rather than nothing, and so on.
tim333•1mo ago
There was an interesting take on it on YouTube today: Bill Gurley (VC) talking to Tim Ferriss (interviewer) about whether it's a bubble, based on some research:

> ...every time there's been a technology wave that leads to wealth creation, especially fast wealth creation, that will inherently invite speculators, carpetbaggers, interlopers that want to come take advantage of it. Think of the gold rush, you know, and so people want to make it a debate. Do you believe in AI or is it a bubble? And if you say you think it's a bubble, they say, "Oh, you don't believe in AI." Like this gotcha kind of thing. And if you study Perez, and I think this is absolutely correct: if the wave is real, then you're going to have bubble-like behavior. Like, they come together as a pair precisely because anytime there's very quick wealth creation, you're going to get a lot of people that want to come try and take advantage of that.

Seems about right to me. https://youtu.be/D0230eZsRFw

camgunz•1mo ago
Yeah but the thing that distinguishes the gold rush from mesmerism or whatever is actual gold. Most LLM promises are NFTs with extra steps.

The arguments here are totally bonkers. People didn't wonder what airplanes were for, or cars, or computers, or vaccines. They had immediate, obvious benefits and uses, but still none of them experienced this speed of investment. This is something else entirely.

tim333•1mo ago
I'm pretty sure AI is a real thing. Sure, some arguments are bonkers, but there's a lot of real stuff happening, like Waymos, Claude Code, AlphaFold, MuZero and the like. Of those, only Claude is really a language model. Skeptics get overly hung up on the limits of language models; they are not the only AI.

There was some puzzlement as to what computers were for. See:

> Thomas J. Watson, the chairman of IBM in 1943, purportedly said, "I think there is a world market for maybe five computers."

Also, the speed of investment isn't unprecedented; the railway boom was much larger as a percent of GDP.

camgunz•1mo ago
Well, the other models are even less useful, so I try to stick with the steelman version of these things. That IBM quote isn't ambiguity about what computers are for, but about who can afford them in their current, highly bespoke state. Finally, the railway boom wasn't $1.5 trillion in a few years. Also, again, we knew what railroads were for.

I'm not saying the tech isn't impressive. I'm impressed! Cursor bugbot has found some pretty gnarly bugs in my code, blessedly. But it's neither reliable nor economically viable, even if you don't think they owe anyone anything for training on their data (I do think they owe us).

tim333•1mo ago
>During the 19th-century "Railway Mania," railroad investment in the U.S. reached a peak of 6.0% of GDP, a level significantly higher than current AI infrastructure spending, which is estimated to be around 1.6% of U.S. GDP.

says Google. There was a big crash after, wiping out investors. Time will tell with this one.

camgunz•1mo ago
Again though, it's about time
tripletao•1mo ago
I spent some time looking for sources for the various "railroad investment as % of GDP" numbers floating around, and I don't think they're very good. The modern concept of GDP didn't even exist back then, so the denominator is calculated in retrospect from limited contemporary data. The numerator is on firmer ground, but the papers I found mostly showed closer to 3%. A pretty wide range is at least defensible, though, and I guess VCs are comparing against the high end for obvious reasons.

https://news.ycombinator.com/item?id=44805979

This AI investment is interesting because it's mostly not in durable goods, unlike the railroad's rails and (most importantly) land. The buildings and power infrastructure for the datacenters could retain value for decades, but the servers won't unless something goes badly wrong. I believe this is the largest investment in human history justified primarily by the anticipated value of intellectual property.