
Restrictions on house sharing by unrelated roommates

https://marginalrevolution.com/marginalrevolution/2025/08/the-war-on-roommates-why-is-sharing-a-h...
130•surprisetalk•2h ago•168 comments

"If you are reading this obituary, it looks like I'm dead. It happened"

https://framinghamsource.com/index.php/2025/09/22/linda-m-brossi-murphy/
60•markhall•24m ago•5 comments

Launch HN: Strata (YC X25) – One MCP server for AI to handle thousands of tools

36•wirehack•1h ago•8 comments

Are Elites Meritocratic and Efficiency-Seeking? Evidence from MBA Students

https://arxiv.org/abs/2503.15443
32•bikenaga•43m ago•0 comments

Go has added Valgrind support

https://go-review.googlesource.com/c/go/+/674077
303•cirelli94•6h ago•85 comments

x402 — An open protocol for internet-native payments

https://www.x402.org/
67•thm•1h ago•21 comments

Zip Code Map of the United States

https://engaging-data.com/us-zip-code-map/
38•helle253•1h ago•26 comments

2025 DORA Report

https://blog.google/technology/developers/dora-report-2025/
55•meetpateltech•2h ago•23 comments

Getting More Strategic

https://cate.blog/2025/09/23/getting-more-strategic/
78•gpi•3h ago•8 comments

Structured Outputs in LLMs

https://parthsareen.com/blog.html#sampling.md
126•SamLeBarbare•5h ago•58 comments

Nine Things I Learned in Ninety Years

http://edwardpackard.com/wp-content/uploads/2025/09/Nine-Things-I-Learned-in-Ninety-Years.pdf
688•coderintherye•13h ago•266 comments

Why Zig Feels More Practical Than Rust

https://dayvster.com/blog/why-zig-feels-more-practical-than-rust-for-real-world-cli-tools/
81•dayvster•3h ago•107 comments

Shopify, pulling strings at Ruby Central, forces Bundler and RubyGems takeover

https://joel.drapper.me/p/rubygems-takeover/
31•bradgessler•46m ago•5 comments

Zinc (YC W14) Is Hiring a Senior Back End Engineer (NYC)

https://app.dover.com/apply/Zinc/4d32fdb9-c3e6-4f84-a4a2-12c80018fe8f/?rs=76643084
1•FriedPickles•4h ago

Show HN: Kekkai – a simple, fast file integrity monitoring tool in Go

https://github.com/catatsuy/kekkai
20•catatsuy•1h ago•3 comments

Agents turn simple keyword search into compelling search experiences

https://softwaredoug.com/blog/2025/09/22/reasoning-agents-need-bad-search
30•softwaredoug•1h ago•12 comments

Zoxide: A Better CD Command

https://github.com/ajeetdsouza/zoxide
244•gasull•11h ago•151 comments

Show HN: Run Qwen3-Next-80B on 8GB GPU at 1tok/2s throughput

https://github.com/Mega4alik/ollm
62•anuarsh•3d ago•5 comments

Processing Strings 109x Faster Than Nvidia on H100

https://ashvardanian.com/posts/stringwars-on-gpus/
122•ashvardanian•3d ago•21 comments

OpenDataLoader-PDF: An open source tool for structured PDF parsing

https://github.com/opendataloader-project/opendataloader-pdf
24•phobos44•2h ago•5 comments

Row-level transformations in Postgres CDC using Lua

https://blog.peerdb.io/row-level-transformations-in-postgres-cdc-using-lua
14•saisrirampur•2d ago•0 comments

Altoids by the Fistful

https://www.scottsmitelli.com/articles/altoids-by-the-fistful/
181•todsacerdoti•9h ago•80 comments

Linux Compose Key Sequences (2007)

https://math.dartmouth.edu/~sarunas/Linux_Compose_Key_Sequences.html
15•dcminter•3d ago•1 comments

Show HN: Open-source AI data generator (now hosted)

https://www.metabase.com/ai-data-generator
20•margotli•1h ago•0 comments

Fall Foliage Map 2025

https://www.explorefall.com/fall-foliage-map
224•rappatic•15h ago•32 comments

OrangePi 5 Ultra Review: An ARM64 SBC Powerhouse

https://boilingsteam.com/orange-pi-5-ultra-review/
47•ekianjo•2h ago•21 comments

Compiling a Functional Language to LLVM (2023)

https://danieljharvey.github.io/posts/2023-02-08-llvm-compiler-part-1.html
50•PaulHoule•3d ago•0 comments

I built a dual RTX 3090 rig for local AI in 2025 (and lessons learned)

https://www.llamabuilds.ai/build/portable-25l-nvlinked-dual-3090-llm-rig
115•tensorlibb•4d ago•99 comments

Delete FROM users WHERE location = 'Iran';

https://gist.github.com/avestura/ce2aa6e55dad783b1aba946161d5fef4
781•avestura•10h ago•612 comments

Obscure feature + obscure feature + obscure feature = compiler bug

https://antithesis.com/blog/2025/compiler_bug/
19•jonstewart•2d ago•2 comments

Abundant Intelligence

https://blog.samaltman.com/abundant-intelligence
50•j4mie•2h ago

Comments

wiz21c•1h ago
> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chips fabs and new energy production much faster than we are, and we want to help turn that tide.

Did Donald call him?

tao_oat•1h ago
they’re certainly cozy with the administration. this was in the openai RSS feed yesterday: https://openai.com/global-affairs/american-made-innovation/

interestingly, it doesn’t seem to be linked from the “news” section of their website.

jennyholzer•1h ago
"As AI gets smarter, access to AI will be a fundamental driver of the economy, and maybe eventually something we consider a fundamental human right. Almost everyone will want more AI working on their behalf."

I don't buy it at all.

This sounds like complete and total bullshit to me.

bl0rg•1h ago
Why do you feel that way?
jennyholzer•1h ago
years of consistent disappointment with the user experience, along with years of misleading internet propaganda dramatically overselling the quality and power of the underlying technology.

it's a fucking dud.

seany62•1h ago
It surprises me that people still believe this! I've seen AI deliver incredible value over the past year. I believe the application level is utilizing <.5% (probably less) of the total value that can be derived from current foundation models.
jennyholzer•1h ago
Based on your gung-ho attitude, I suspect that you are personally invested in "AI products" or otherwise work for a firm that creates "AI products".
dsr_•1h ago
What evidence supports your conclusions?

What evidence are you aware of that counters it?

keiferski•54m ago
It's only a niche weird opinion you'll find on forums like HN.

In the real world, it's immensely useful to millions of people. It's possible for a thing to both be incredibly useful and overhyped at the same time.

dsr_•1h ago
ELIZA is that way ->.

Try asking "what evidence supports your conclusions?".

Avshalom•1h ago
The fundamental driver of the economy is people eating and clothing themselves, not writing memos that are never read.
tlb•1h ago
Food is 13% of US consumer spending, and clothing is 2.7%. Both have declined steadily since the industrial revolution.
Avshalom•31m ago
I assure you that people being alive is going to be the fundamental driver of the economy no matter what percent of consumer spending it is.
nhod•1h ago
Are you Jenny Holzer, the "you are trapped on the earth so you will explode" conceptual artist?
blamestross•1h ago
> If AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.

At least the statement starts with a conditional, even if it is a silly one.

If you know your growth curve is ultimately going to be a sigmoid, fitting a model using only data points from before the inflection point is underdetermined.

> If AI stays on the trajectory that we think it will

Is a statement that no amount of prior evidence can support.

jennyholzer•1h ago
I think if you're critiquing AI, you should use harsher words.

AI boosters are going to spam the replies to your comment in attempts to muddy the waters.

CuriouslyC•1h ago
I'm an AI booster, but he's right: these models are in the sigmoid elbow, and we're being hard-headed in trying to push frontier models; it's not sustainable. We need to take a step back and work on the engineering of the systems around the frontier models while trying to find a new architecture that scales better.

That being said, the current models are transformative on their own; once the systems catch up to the models, that will be glaringly obvious to everyone.

jjk166•54m ago
Assuming AI stays on the trajectory they think it will doesn't mean they assume infinite exponential growth. If you know for a fact it's a sigmoid curve, presumably the path you think it will continue on is that sigmoid curve. The trillion-dollar question is whether performance plateaus before or after AI can do the really exciting stuff, and while I may not agree with it myself, the more optimistic position is not an unreasonable belief.

Also, you can most certainly fit a sigmoid function from past data points alone. Any projection will obviously have error, but your error at any given point should be smaller than for an exponential function fit to the same samples.
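
(To make that concrete, here is a minimal illustrative sketch, assuming numpy and scipy are available and using entirely made-up data: fit both a logistic and an exponential curve to noisy samples taken only from before the inflection point. Both track the early data about equally well, yet their extrapolations diverge, which is the sense in which the fit is underdetermined.)

    # Illustrative only: synthetic data, not a claim about any real AI metric.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        return L / (1 + np.exp(-k * (t - t0)))

    def exponential(t, a, b):
        return a * np.exp(b * t)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4, 40)                  # samples from before the inflection point
    y = logistic(t, 100, 1.0, 6.0) + rng.normal(0, 0.3, t.size)  # true inflection at t=6

    p_log, _ = curve_fit(logistic, t, y, p0=[50, 1, 5], maxfev=10000)
    p_exp, _ = curve_fit(exponential, t, y, p0=[1, 0.5], maxfev=10000)

    # Both curves fit the observed range about equally well...
    print("mean fit error, logistic:   ", np.abs(logistic(t, *p_log) - y).mean())
    print("mean fit error, exponential:", np.abs(exponential(t, *p_exp) - y).mean())

    # ...but their extrapolations to t=10 can differ dramatically.
    print("forecast at t=10, logistic:   ", logistic(10.0, *p_log))
    print("forecast at t=10, exponential:", exponential(10.0, *p_exp))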

Guthur•1h ago
My intelligence dropped a few points by reading anything from this charlatan.
wslh•1h ago
It's intelligent to separate intelligence from power. At the macro level, intelligence is often overvalued.
mathverse•1h ago
I think AI compute is one of the biggest grifts of the century. Capital being redistributed from talented people to this compute, when we can clearly see it is not making a huge difference (OpenAI vs. DeepSeek), feels like a grift.
emsign•1h ago
fix'd: Abundant Money
frabonacci•1h ago
> Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week.

The moat will be how efficiently you convert electricity into useful behavior. Whoever industrializes evaluation and feedback loops wins the next decade.

HexDecOctBin•1h ago
Y2K errors in old COBOL code kick-started the Indian IT sector, which then led to immense economic progress and a mass-scale reduction in poverty. I hope LLMs pepper everything they touch with many such errors, so that the nations of Africa and the poorer parts of Latin America (which can't do cheap manufacturing due to a lack of infrastructure and capital) can also begin their upward economic journey by providing services to fix these mistakes.

In order to help reduce global poverty (much of which was caused by colonialism), it is the moral and ethical duty of the Global North to adopt LLMs on a mass scale and use them in every field imaginable, and then give jobs to the global poor to fix the resulting mess.

I am only 10% joking.

TeMPOraL•1h ago
That's funny, but unless LLM bugs break foundational ML codebases beyond human repair (and somehow also delete all existing code, research, and researchers), the models will likely just get better than people at this in a couple years. I mean, the trajectory so far is obvious.
floren•1h ago
disco-stu-pointing-at-chart.gif
paulglx•1h ago
AI will become "something we consider a fundamental human right", according to the guy who wants to sell you access to it.
esafak•1h ago
So it's going to be regulated like a utility?
ActionHank•1h ago
Yeah, just like privatised utilities that operate solely for the profits of execs and investors, with a complete disregard for regulations or best practices, only to hide behind the government not regulating enough when things eventually go wrong.
bilbo0s•46m ago
He'll have no problem with that.

You can get your drinking water from a utility, or you can get bottled water. Guess which one he's gonna be selling?

And if you think for a second that the "utility" models will be trained on data as pristine as the data that the "bottled" models will be trained on, I've got a bridge in Brooklyn to sell you. (The "utility" models will not even have any access to all of the "secret sauce" currently being hoarded inside these labs.)

Essentially we can all expect to go back to the Lexis-Google type dichotomy. You can go into court on Google searches, nothing's stopping you. But nearly everyone will pay for LexisNexis because they're not idiots and they actually want to compete.

mietek•16m ago
Great analogy! Look up Dasani some time.
zoobab•1h ago
OpenAI is like privatizing water. It's a "fundamental right", but I am one of the few to provide it.
causal•1h ago
He might be right about intelligence becoming the new currency in a world where intelligence becomes fungible.

Lots of assumptions about the path to get there, though.

And interesting that he's measuring intelligence in energy terms.

zelias•1h ago
"The factory must grow"
r_lee•1h ago
Altman fatigue anyone?
mcpar-land•1h ago
> As AI gets smarter, access to AI will be a fundamental driver of the economy, and maybe eventually something we consider a fundamental human right.

My product is going to be the fundamental driver of the economy. Even a human right!

> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.

How?

> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chips fabs and new energy production much faster than we are, and we want to help turn that tide.

There's the appeal to the current administration.

> Over the next couple of months, we’ll be talking about some of our plans and the partners we are working with to make this a reality. Later this year, we’ll talk about how we are financing it

Beyond parody.

thrance•1h ago
Stop thinking and give them money.

But for real, the leap from GPT-4 to GPT-5 was nowhere near as impressive as the one from GPT-3 to GPT-4. They'll have to do a lot more to give any weight to their usual marketing ultra-hype.

MattDamonSpace•1h ago
The jump from GPT-4 through o3 to GPT-5 was very impressive.
bronco21016•1h ago
Agreed. Their naming conventions in a way really broke the perception of progress. GPT-4 to o3 or GPT-5 is truly impressive. The leap from GPT-4o to GPT-5 is less impressive but GPT-4o is generally recognized as GPT-4.

All that being said, it does seem like OpenAI and Anthropic are on a quest for more dollars by promoting fantasy futures where there is not a clear path from A to B, at least to those of us on the outside.

davidw•1h ago
Maybe the other countries aren't rounding up engineers working on opening factories and detaining them in inhumane conditions.
drooby•1h ago
No offense, but your comment is basically HN parody. OpenAI created AI tech decades ahead of estimates, and they just signed a $100B deal with Nvidia. They are actually doing things that are astonishing.

Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI. AI nearly solved protein folding. It is beginning to unlock personalized medicine. AI absolutely will be a fundamental driver of the economy in the future.

Being skeptical is still reasonable, but flippant dismissal of legitimately amazing accomplishments is peak HN top comment.

remus•48m ago
I don't think there's any criticism of the (remarkable) things which have been achieved so far; it's more the breathless hype about how AI is going to solve all our current and future problems if we just keep shovelling money and energy in. Predicting the future is hard, and I don't think Sam is particularly better at knowing what's going to happen in ten years' time than anyone else.
toddmorey•20m ago
Not a word or whisper about environmental impact, either. I mean at least do some hand waving or something. I feel like a habitable planet is a fundamental right.
listic•8m ago
Not to defend what Altman is saying, but is OpenAI actually using, or going to use, that much power? This Reuters source says US power consumption will reach 4,187 TWh in 2025: https://www.reuters.com/business/energy/us-power-use-reach-r...
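
(Rough arithmetic on that question, taking the post's 10 GW figure and the 4,187 TWh estimate from that Reuters piece at face value, and assuming round-the-clock operation at full load:)

    # Back-of-the-envelope: 10 GW of compute vs. total US electricity use.
    gw = 10                                      # capacity figure from the post
    hours_per_year = 24 * 365
    twh_per_year = gw * hours_per_year / 1000    # GW * h = GWh; /1000 -> TWh
    us_total_twh = 4187                          # 2025 US estimate cited above
    print(f"{twh_per_year:.0f} TWh/year, "
          f"{100 * twh_per_year / us_total_twh:.1f}% of US electricity use")
    # -> roughly 88 TWh/year, about 2% of US electricity use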
mainecoder•3m ago
No worries, once we have AGI we will ask it how to make up for all the emissions and solve climate change ...
0x00cl•1h ago
So...

* Nvidia invests $5 billion in Intel
* Nvidia and OpenAI announce a partnership to deploy 10 gigawatts of NVIDIA systems (an investment of up to $100 billion)
* This indirectly benefits TSMC (which implies they'll be investing more in the US)

Looks like the US is cooking something...

daxfohl•1h ago
> provide customized tutoring to every student on earth

It could start by figuring out how to keep kids from using AI to write all their essays.

allemagne•1h ago
> Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week.

If a tenth of this happens, and we don't build a new power plant every ten weeks... then what?

anovikov•1h ago
It will create demand for electricity. Demand always creates supply. Perhaps that was the missing link to the mass deployment of solar (because there's just no other way similar amounts of energy can be produced).
sobiolite•1h ago
Perhaps, but Hank Green published a pretty convincing argument recently that electricity supply has nowhere near the necessary elasticity, and the politicised nature of power generation in the US means that isn't going to change:

https://www.youtube.com/watch?v=39YO-0HBKtA

5cott0•1h ago
using abundance discourse to market ai slop is the most innovative thing openai has done yet
conciliatory•1h ago
As a technical user of AI, I think there is certainly value in the capabilities of the current IDE/agentic systems, and as a builder of AI systems professionally I think there is enterprise value as well, although realizing that value in a meaningful way is an ongoing challenge/work in progress. There is also clearly a problem with AI slop, both in codebases and in other professional deliverables.

Having said that, what's more interesting to me is whether we have seen AI produce novel and valuable outputs independently. Altman asserts that 10 GW could possibly "cure cancer", but frankly I'd like to see any discrete examples of AI advancing frontier knowledge areas and moving the needle in small but measurable ways that stand up to peer review. Before we can cure cancer or have world peace through massive consumption of power and spend, I'd like to see a meaningful counterpoint to the argument that AI, as a technology derived from all human knowledge, is incapable of extending beyond the current limits of human knowledge. Intuitively I think AI should be capable of novel scientific advancement, but I feel like we're short on examples.
seydor•1h ago
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.

The growth in energy use is driven by the increase in output tokens, which in turn comes from increased demand for them.

Models do not get smarter the more they are used.

So why does he expect them to solve cancer if they haven't already?

And why do we need to solve cancer more than once?

keiferski•1h ago
What's the serious counter-argument to the idea that a) AI will become more ubiquitous and inexpensive and b) economic/geopolitical success will be tied in some way to AI ability?

Because I do agree with him on that front. The question is whether the AI industry will end up like airplanes: massively useful technology that somehow isn't a great business to be in. If indeed that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.

beeflet•1h ago
Maybe AI will become more ubiquitous. But I predict LLMs will be capped by the amount of training data present in the wild.
bilbo0s•27m ago
I'm more worried that publicly available LLMs "will be capped by the amount of training data present in the wild". But private LLMs, available only to the wealthy and powerful, will have additional, more pristine and accurate, data sources made available to them for training.

Think about the legal field. The masses tend to use Google, whereas the wealthy and powerful all use LexisNexis. Who do you think has been winning in court?

mayankgoel28•1h ago
A company is building technology more powerful than nuclear weaponry, and this comment section is thinking they're "overselling" it. Fun.
paulglx•1h ago
What makes you think LLMs are "more powerful than nuclear weaponry" ?
sadhorse•1h ago
Nobody will be afraid to use AI.
emp17344•22m ago
There is no world in which LLMs are more powerful or impactful than nuclear weapons.
ksec•1h ago
Google: Do no Evil.

Apple: Privacy is a fundamental Human right. That is why we must control everything. And stop our users from sharing any form of data with anyone other than Apple.

OpenAI: AI is a fundamental Human right.....

There has been something philosophically very odd about Silicon Valley for the past 15 to 20 years.

theaniketmaurya•1h ago
let me put some more in nvidia now
Sivart13•1h ago
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.

Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than to "create unstoppable super-cancer"?

Jun8•1h ago
Putting aside the questions of (as some comments here have) whether

  * AI is a “fucking dud” (you have to be either highly ignorant or trolling to say this)
  * Altman is a “charlatan” (definitely no but it does look like he has some unsavory personal traits, quite common BTW for people at that level) 
  * the ridiculousness of touting a cancer cure (I guess the post is targeted to the technical hoi polloi, with whom such terminology resonates, but also see protein 3D structure discovery advances)
I found the following to be interesting in this post:

1. Altman is clearly signaling affinity for the Abundance bandwagon, with a reference right in the title. The post is shorter but has the flavor of Marc Andreessen's "It's Time to Build" post from 2020: https://a16z.com/its-time-to-build/

2. He advances the vision of "creat[ing] a factory that can produce a gigawatt of new AI infrastructure every week". At a minimum this may be called frighteningly ambitious: recent U.S. solar additions have been ~10-20 GW/year (https://www.climatecentral.org/report/solar-and-wind-power-2...)
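
(For scale, a quick comparison of "a gigawatt every week" against that ~10-20 GW/year solar figure; illustrative arithmetic only:)

    # "One gigawatt of new AI infrastructure every week" vs. recent US solar additions.
    gw_per_week = 1
    gw_per_year = gw_per_week * 52
    solar_low, solar_high = 10, 20               # recent US solar additions, GW/year
    print(f"{gw_per_year} GW/year, i.e. {gw_per_year / solar_high:.1f}x to "
          f"{gw_per_year / solar_low:.1f}x the recent US annual solar build-out")
    # -> 52 GW/year, i.e. roughly 2.6x to 5.2x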

physarum_salad•55m ago
Dorks with forks
lunias•29m ago
Sam Altman gives me dark triad vibes.
vrighter•12m ago
"Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on earth."

what's missing in between this line and the next:

"or it might not. Now give me moar money!!!!!"