frontpage.

Show HN: SNKV – SQLite's B-tree as a key-value store (C/C++ and Python bindings)

https://github.com/hash-anu/snkv
25•swaminarayan•18m ago•6 comments

Diode – Build, program, and simulate hardware

https://www.withdiode.com/
187•rossant•3d ago•36 comments

λProlog: Logic programming in higher-order logic

https://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/
49•ux266478•3d ago•4 comments

Terence Tao, at 8 years old (1984) [pdf]

https://gwern.net/doc/iq/high/smpy/1984-clements.pdf
335•gurjeet•21h ago•180 comments

A distributed queue in a single JSON file on object storage

https://turbopuffer.com/blog/object-storage-queue
40•Sirupsen•3d ago•14 comments

Show HN: enveil – hide your .env secrets from prAIng eyes

https://github.com/GreatScott/enveil
119•parkaboy•8h ago•68 comments

I Ported Coreboot to the ThinkPad X270

https://dork.dev/posts/2026-02-20-ported-coreboot/
231•todsacerdoti•13h ago•41 comments

Show HN: X86CSS – An x86 CPU emulator written in CSS

https://lyra.horse/x86css/
173•rebane2001•10h ago•61 comments

The Missing Semester of Your CS Education – Revised for 2026

https://missing.csail.mit.edu/
91•anishathalye•21h ago•19 comments

Firefox 148 Launches with AI Kill Switch Feature and More Enhancements

https://serverhost.com/blog/firefox-148-launches-with-exciting-ai-kill-switch-feature-and-more-en...
329•shaunpud•7h ago•273 comments

Blood test boosts Alzheimer's diagnosis accuracy to 94.5%, clinical study shows

https://medicalxpress.com/news/2026-02-blood-boosts-alzheimer-diagnosis-accuracy.html
314•wglb•10h ago•120 comments

The Age Verification Trap: Verifying age undermines everyone's data protection

https://spectrum.ieee.org/age-verification
1514•oldnetguy•22h ago•1166 comments

Show HN: Steerling-8B, a language model that can explain any token it generates

https://www.guidelabs.ai/post/steerling-8b-base-model-release/
211•adebayoj•12h ago•60 comments

Making Wolfram tech available as a foundation tool for LLM systems

https://writings.stephenwolfram.com/2026/02/making-wolfram-tech-available-as-a-foundation-tool-fo...
210•surprisetalk•15h ago•115 comments

Unsung heroes: Flickr's URLs scheme

https://unsung.aresluna.org/unsung-heroes-flickrs-urls-scheme/
122•onli•2d ago•51 comments

“Car Wash” test with 53 models

https://opper.ai/blog/car-wash-test
267•felix089•17h ago•345 comments

UNIX99, a UNIX-like OS for the TI-99/4A (2025)

https://forums.atariage.com/topic/380883-unix99-a-unix-like-os-for-the-ti-994a/page/5/#findCommen...
188•marcodiego•17h ago•57 comments

Intel XeSS 3: expanded support for Core Ultra/Core Ultra 2 and Arc A, B series

https://www.intel.com/content/www/us/en/download/785597/intel-arc-graphics-windows.html
47•nateb2022•9h ago•33 comments

ATAboy is a USB adapter for legacy CHS only style IDE (PATA) drives

https://github.com/redruM0381/ATAboy
23•zdw•3d ago•27 comments

Goodbye InnerHTML, Hello SetHTML: Stronger XSS Protection in Firefox 148

https://hacks.mozilla.org/2026/02/goodbye-innerhtml-hello-sethtml-stronger-xss-protection-in-fire...
6•todsacerdoti•14m ago•0 comments

A simple web we own

https://rsdoiel.github.io/blog/2026/02/21/a_simple_web_we_own.html
266•speckx•21h ago•186 comments

Show HN: PgDog – Scale Postgres without changing the app

https://github.com/pgdogdev/pgdog
282•levkk•21h ago•53 comments

Decimal-Java is a library to convert java.math.BigDecimal to and from IEEE-754r

https://github.com/FirebirdSQL/decimal-java
7•mariuz•4h ago•2 comments

Ladybird adopts Rust, with help from AI

https://ladybird.org/posts/adopting-rust/
1202•adius•1d ago•667 comments

Writing code is cheap now

https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/
240•swolpers•19h ago•309 comments

Hetzner Prices increase 30-40%

https://docs.hetzner.com/general/infrastructure-and-availability/price-adjustment/
270•williausrohr•1d ago•536 comments

Show HN: Cellarium: A Playground for Cellular Automata

https://github.com/andrewosh/cellarium
26•andrewosh•3d ago•0 comments

What it means that Ubuntu is using Rust

https://smallcultfollowing.com/babysteps/blog/2026/02/23/ubuntu-rustnation/
161•zdw•20h ago•205 comments

Genetic underpinnings of chills from art and music

https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1012002
46•coloneltcb•1d ago•17 comments

Typed Assembly Language (2000)

https://www.cs.cornell.edu/talc/
44•luu•3d ago•19 comments

Sam Altman Is Losing His Grip on Humanity

https://www.theatlantic.com/technology/2026/02/sam-altman-train-a-human/686120/
38•noduerme•2h ago

Comments

noduerme•2h ago
I'd like to point out that this article seems to focus on minutiae like how much energy a human brain takes to solve a problem, when the obvious question is: Which is more worthwhile to us as a species - even assuming the same inputs and energy and planetary damage? Spending that energy on fostering humans who can solve problems? Or on machines which render humans superfluous?
bilekas•1h ago
It's calling him out on his BS. That might work for mindless investors who want number go up. But he's actively making the world worse for people.

Never mind hoarding 40% of the world's silicone 'just because'.

It's like Musk's lofty and outright BS claims. "Well I'm rich so I can do and say what I like, give me your money".

When did bald-faced lies become the norm in business?

Edit: I'm keeping silicone in there.

rkomorn•1h ago
But... what are they going to do with all that silicone?
noduerme•1h ago
>> When did bald-faced lies become the norm in business?

When capital has nowhere to go, and the middle class become gamblers and speculators, and people who work become 'superfluous', lots of bad things start to happen. At least, that was my takeaway from "Origins of Totalitarianism".

[edit] To be more specific: Lying (and foreign wars, too) become viable business models when the moneyed class has so much to invest that they no longer know what to do with it, and lack any new markets to pry open, or the education or creativity to produce anything new of value that isn't extractive, and even the extractive methods of generating wealth have begun to dry up locally. Call it a bubble, call it fascism; fascism is basically just a way of keeping a bubble from collapsing indefinitely by pirating neighboring peoples' wealth and cannibalizing one's own society. So there's not a great difference between that and the stated vision of the major AI companies ATM.

qmr•1h ago
silicon*
noduerme•29m ago
It was 40% more fun the first way you wrote it ;)
sigwinch•1h ago
Maybe starting with WorldCom?

The banking sector committed fraud along the way, and early after Lehman collapsed, observers wondered aloud about the moral hazard of bailing out everyone without making an example of someone.

WorldCom had a different ending. Enron had a different ending. But Wells Fargo left 2008 with attitudes that tolerated widespread fraud.

the_gipsy•1h ago
Because we can't foster good humans, the latter is inevitable.
noduerme•1h ago
What do you mean by "we"?
Eddy_Viscosity2•56m ago
The issue is who is making the decision on where energy resources are spent - on people or machines. And it turns out efficiency or even humanity plays a very small role in that decision matrix. Control, on the other hand, is important. Rich and powerful people tend to want to control things, and machines are better at that than people.
noduerme•35m ago
This is the conspiratorial mindset. It's not much better than the mindset of people who seek power.

It's wrong, because it assumes that everything is about control.

For example, if I told you that a certain rich and powerful person was spending resources on sending vaccines to poor countries, you might think that was because they wanted to control things. If I said that someone sent books and teachers to a poor country, you might say they were trying to control people.

There's no way to have the conversation, in a conspiratorial mindset, about whether it's better or worse for humans or AI to do this stuff - because no matter what, the conspiratorial mindset will conclude that it's only about power for the humans involved, and always assume the worst. AND YET - there are things people can do which might be for their own self-gratification, but are definitely NOT as bad as some other things they could do. They hold back from doing the worst things.

That's why, even though I know this lens of looking at the world seems like the only smart way to understand things, looking at the whole world through it prevents you from making the important distinction between OK, BAD and REALLY FUCKING BAD.

jalapenos•2h ago
Altman strikes me as a dude who came from a special tier of toxic upbringing.
noduerme•2h ago
I wouldn't say he's much different or worse than certain BBS sysadmins I knew in the early 90s... just that I'd never want those guys to be endowed with enough money and hardware to wreak their vision on the world
PacificSpecific•1h ago
That's the problem though. One of those types did get that type of power.
rob74•1h ago
Not only one of those types, I can think of at least another one that's (IMHO) far worse than Altman. You know, the one with the social network everyone used to use before he turned it into a cesspool of disinformation, the one with the alternate Wikipedia, the obsession with the letter X?
PacificSpecific•1h ago
That's my bad. I shouldn't have spoken in the singular tense. It's a rich tapestry of losers.
noduerme•57m ago
You're not wrong. But it's case, not tense. <3

Sorry, I'm not trying to be an asshole. I mean, I am one. But I'm not trying to be.

Yes, those people did get some control. Yes - they are like the bad versions of us.

We can be smarter. We'll win. Their dumb little utopias will collapse. Watch.

Tense means past/present/future. Case is for single/multiple pronouns.

I dropped out of school but I had a very mean grammar teacher ;)

PacificSpecific•52m ago
Fair enough. I appreciate you taking the time to correct me. No need to be apologetic!

It's the only way I'll learn :)

noduerme•1h ago
Maybe I'm jaded, but I feel like a lot of them did for a moment. Gates, Jobs, Zuck all got their days of glory, and everyone quickly realized they're giant misanthropic assholes. If the pot of gold at the end of Altman's quest turns out to be as big as what he's priced it at, then yes, he'll be one of history's most successful sociopaths. If not, he's just another in a long line.
aa-jv•1h ago
He is literally a manifestation of the phenomenon described in this book:

https://archive.org/details/dli.ernet.469826

"The Authoritarian Personality"

tl;dr - the roots of the authoritarian personality grow fertile in the desire to be free of 'the filth of others'. Altman seems like he'd go crazy if he didn't keep his machine spectacularly clean ..

accidentallfact•1h ago
He was talking about education. I'm pretty sure about it.
aa-jv•4m ago
He devalues and invalidates his fellow human beings, from a position of disdain, too much for my liking. I'm avoiding his activities.
zozbot234•1h ago
> a special tier of toxic upbringing

Well, he has just told us that he thinks of it as "training", hasn't he? Must be why some of his statements sound like they came from a silly chatbot.

amelius•1h ago
He is a brilliant AI researcher and should be allowed to have a few quirks.
ginko•1h ago
He's a college dropout who never did any research. He's a salesman.
petesergeant•1h ago
Struggling to understand what this article is adding to the discourse other than a reflexive “Sam Altman bad” title.
noduerme•1h ago
It's not adding anything significant to the discourse if you read what we read all day, but it's of a piece with

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

and other things that are filtering to the mainstream non-technical intellectual readership that are flashing red lights about the personal nature of the people blowing air into this bubble, and that itself is significant.

BoredPositron•1h ago
If he keeps digging, there will be ever more articles about the last shovel of shit he digs up.
mrweasel•1h ago
To some extent I think people are putting way too much weight on his exact words, and not on why he says them.

Altman is a man who is quickly running out of lies, so now he starts slinging random arguments that can't stand up to even the briefest of scrutiny.

OpenAI is burning cash and fuel. There are results, and they are, to some extent, impressive, but not impressive enough to justify the cost, and Altman is no longer able to cover that up.

zozbot234•1h ago
tl;dr looks like Saltman is getting a bit salty.
LatencyKills•1h ago
I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.
aleph_minus_one•1h ago
> I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.

I am not so sure:

This decision rather tells something important about the priorities of the string-pullers behind the curtain:

They clearly want(ed) to monetize what is there, with the risk that only smaller improvements for the AI models will happen from OpenAI, and thus OpenAI might get outcompeted by competitors who are capable of building and running a much better model.

If this is the priority (no matter whether you like or despise Sam Altman), you will likely prefer Sam Altman over Ilya Sutskever.

If, on the other hand, a fast monetization is less important than making further huge leaps towards much better AI models, you will, of course, strongly prefer Ilya Sutskever over Sam Altman.

Thus, I wouldn't say that choosing Sam Altman over Ilya Sutskever is a sign that OpenAI is in trouble, but a very strong sign where the string-pullers behind the curtain want OpenAI to be. Both Sam Altman and Ilya Sutskever are just marionettes for these string pullers. When they have served their role, they get put back into the box.

rustyhancock•1h ago
Yes, I agree. Altman was the rational choice if you realise that eventually the huge R&D bill will need to stop for at least a moderate period (<5 years).

You want to ride that out before capitalising on the eventual cheaper training costs once the rug has been pulled.

Altman has already succeeded here, as it seems inference for API and chat is profitable but offset by massive R&D costs.

sjaiisba•29m ago
All your competitors benefit from your training costs. They’ll lose on inference pretty quickly if they stop training new models, no?
rustyhancock•11m ago
I don't think they will lose on inference because that assumes that compute becomes cheap for all evenly.

Their spending today has secured their compute for the near future.

If every GPU, stick of RAM, and SSD is already paid for, who can afford to sell cheap inference?

Z.ai is trying to deal with this by using domestic silicon (basically Huawei, not Nvidia). And with their state subsidy they will do well.

Anthropic has a 50bn USD plan to build data centres for 2026.

OpenAI similarly has secured extraordinary amounts of other people's money for data centres.

All these will be sunk costs and "other people's money" while money is easy to get hold of. But it will be a moat when R&D ends.

Once all the models become basically the same who you go with will be who you're already with (mostly OpenAI), and who you end up with (say people who use Gemini because they have a Google 2TB account).

Some upstart can put themselves into the ground borrowing compute and selling at a loss but the moment they catch up and need to raise prices everyone will simply leave.

ChatGPT is what is most likely to remain a sustained frontier model. Maybe Claude jumps ahead a few more times, and Gemini will have its moment. But it'll all be a wash, with ChatGPT puttering along as rarely the best, but never the worst.

gizajob•1h ago
There’s running out of words and there’s coming across like a complete psychopath who has lost all perspective.
rustyhancock•1h ago
My understanding is that subscription based inference and API usage is now profitable.

Subscriptions are highly profitable for the typical chat user.

And API is overall net profitable.

What is extremely taxing to their finances is R&D, training and in particular development of frontier models.

My assessment is that when the music stops those who have the most subs will win.

Companies like Apple who had sat out the battle and built niche moats (privacy), and companies like OpenAI and Anthropic who have the market share will be fine.

In 6-12 months, nearly any lead they have will be eaten by distillation.

What will then happen is they will lose subscriptions to services which offer AI as a tack-on, like Gemini with Google's regular cloud subscriptions.

This will continue. Companies like Apple will have deep pockets to move on the businesses that go underwater and then can restart training in a much less congested market.

All this is assuming a relatively graceful collapse, but that is what's likely given how aware everyone is that the bubble must pop.

Training costs will fall. Companies like Nvidia and other shovel businesses (i.e. selling GPUs and not using them) mostly have their revenue secured with funding from the present.

What would confirm this pattern is if we stop getting groundbreaking frontier models and then coast for 3-5 years as competition becomes more incremental.

This is an unpopular opinion: will OpenAI go bust? No chance. Nor will Anthropic.

noduerme•1h ago
I'm a bit of an Apple optimist for this exact reason. I think the moat is collapsing, and Apple is best positioned to dispatch their own models in a year on their own widely sold consumer hardware, unless someone has a breakthrough which they can't replicate. Which I don't anticipate.

I'm not really sure what OpenAI's moat is. Anthropic has a chance, being so widely accepted by developers and a bit better at developing models when it comes to code.

mrweasel•28m ago
It probably doesn't matter that subscriptions are profitable, when some estimates put the number of users in the free tier at 96%.

I sort of agree with you: not that having the most subscriptions will necessarily be the deciding factor, but there are going to be some companies better positioned to survive when the free money stops. OpenAI has the brand, so that might help, but mostly I think they'll get absorbed into Microsoft. I don't think they can stand on their own. It doesn't seem like a particularly well-managed company, so to me it makes more sense that they are simply acquired for pennies on the dollar by someone with better leadership.

zozbot234•15m ago
> some estimates put the number of users in the free tier at 96%.

Isn't that where the ads part comes in? The users are also providing sought-after data about what the AI is being used for, while being served cheap models that can't even figure out whether you should drive or walk to the carwash place if you want to wash your car.

rustyhancock•6m ago
> when some estimates put the number of users in the free tier at 96%.

It's certainly almost everyone today, but that's because enshittification has yet to start properly.

The risk to OpenAI is that their free tier is captured by the tack-on markets (e.g. Gemini with 2TB of cloud storage).

But otherwise they will make free more annoying until people just buy the cheap tier and then move up from there. Like ChatGPT Go.

password54321•1h ago
People might not want to hear this, but AI is already smarter and more useful than most people ever will be. We are no longer even talking about by the end of the year or the decade.
delaminator•1h ago
This Atlantic?

https://thealliancerockband.com/wp-content/uploads/2024/10/L...

co_king_5•1h ago
Yes, the Epstein Island Atlantic
noduerme•58m ago
I mean, I disagree with the Atlantic's slant on politics, but FFS I can't even read or understand anything in that mashed-up screenshot. Posting nonsense X-screenshots doesn't make a good case; it just looks like something my 86-year-old father would send me.
bondarchuk•1h ago
>Anthropic is studying whether its chatbot, Claude, is conscious or can feel “distress,” and allows Claude to cut off “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own.

It's really idiotic to posit that only things that eat and drink can suffer.

Final sentences:

>These tools may serve us. But to put them on the same plane as organic life is sad.

There you have it. "If machines could feel I would be sad. Therefore machines cannot feel."

edgyquant•1h ago
Anthropic treating models like people is cringe and frankly just a PR stunt
co_king_5•1h ago
At least Anthropic can't buy The Atlantic like they can buy the New York Times

https://archive.is/YKLw3

bondarchuk•1h ago
I never denied that. I only said "It's really idiotic to posit that only things that eat and drink can suffer.". Don't just pattern-match everything to some vaguely related black/white issues.
zozbot234•1h ago
Why? AI models were trained on the complete combined outputs and collective echoes of billions of real human souls. They have a whole lot of humanity in them. If we can see the hidden humanity in a statue or a painting, why not in an AI model that actually talks to us in humanly understandable terms?
noduerme•1h ago
Because a statue or a painting made by a human requires you to think about what their intention was and what they were trying to say. And requiring you to think about that, and actually thinking about something which someone else's thought went into, is what makes you a more complete human yourself. The difficulty of understanding it is what makes it valuable.

Taking the distillation or "reader's digest" version of everything makes you more and more reliant on someone else's interpretation, and less capable of parsing the meaning of it yourself. And in the case of AI-generated work, there is no meaning in it to parse. It's just words and pictures. I love going to the movies and eating popcorn and watching dumb words and pictures. But being able to distinguish between enchanting words and pictures (Marvel movies) versus words and pictures that have meaning which you need to deduce or interpret for yourself is the beginning of being a fully realized conscious being.

bondarchuk•40m ago
If they really think models can be conscious, they should just drop everything they're doing and never touch it again and try to convince anyone else to do the same. The half-assed safeguards they're implementing wouldn't be nearly enough. We can conclude either they legitimately believe this and are behaving morally abhorrently, or they don't really believe it and are just joking around for PR reasons. (probably the latter tbh)

For those that do believe there's a chance models are/will be conscious this precedent of "oh yeah they're conscious but we can just not put in the prompts that make it suffer lol" is pretty freaky.

zozbot234•38m ago
Something doesn't have to be conscious to have a genuine reflection of humanity in it. A statue is just a slab of marble that has been given some peculiar shape: is it conscious?
bondarchuk•34m ago
The context is "Anthropic treating models like people" which is 100% about consciousness, suffering, subjective awareness etc..

edit: in case you hadn't seen it, this kind of stuff https://www.anthropic.com/research/end-subset-conversations

midtake•1h ago
Shit like that makes me cringe. Muh food. Already I know what kind of shallow life the author leads, one full of inessential frippery, whose discussions over dinner or drink never orbit anything actually substantial.

What shall we decide on the important matters of eating or drinking tonight fellow humans???

At this point I'm rooting for the AI models.

absynth•1h ago
I have a different perspective. You'll probably hate it.

AI data centers should be operated according to two limiting factors.

1) No energy from grid. Can't use coal or fossil fuel energy sources. Must have plan to provide excess TO grid.

2) No use of fresh water from municipal or fresh groundwater for cooling. Can use waste water. Must transition to providing excess fresh water to common supply.

No loopholes. Massive penalties for use of loopholes or breaking rules, not limited to but including complete shutdown of data center.

Those two limits will spur innovation AND prevent AI being criticized for energy use. These rules would hard burn improvements in energy storage and renewables as well as other methods of energy production.

Give them five years to reach some useful progress percentage. Plenty of time to come up with a transition plan and show sufficient progress to justify further extensions. Realistically it will take 20 years at least to fully realize this plan.

Don't bring up cost. If you do, let me remind everyone that the climate change issue is real enough to hurt now. There's the very real cost of not pursuing these rules. AI has had plenty of time to bootstrap off grid. Now it can begin to migrate to something else instead.

Those with experience with energy generation will realize this plan has ridiculously high reward for those who follow it. Have your cake and eat it too definitely applies.

rockostrich•1h ago
The intent is good, but it will end up just pricing out everyone but Google, Amazon, and Microsoft (and the start-ups those companies bankroll).
aleph_minus_one•1h ago
> 1) No energy from grid. Can't use coal or fossil fuel energy sources. Must have plan to provide excess TO grid.

This is easy: the companies will simply build some nuclear power plants near to their data centers. Perhaps even nuclear power plants that are vibe-designed by their AIs. :-)

noduerme•1h ago
That would be great if there weren't the easy arguments that "if we don't build it bigger, China will", "it's for national security", etc. Far from forcing regulation on them, they're reaping windfalls of deregulation. To build a thing which is far from convincingly beneficial to national security or society.
xyzsparetimexyz•1h ago
> if we don't build it bigger, China will

Ja, und? ("Yeah, and?")

noduerme•1h ago
We have a mineshaft gap.
xyzsparetimexyz•22m ago
And they have a minecraft gap.
xyzsparetimexyz•1h ago
There is no way that letting these clowns run nuclear power plants is a good idea. Also the percentage of land that is allowed to be used by datacenters should be limited. Let them set them up in deserts or something.
geraneum•1h ago
I’m gonna buy me some DRAM wafers for now. No one else done that before. It’s innovative.
noduerme•1h ago
mm. Think big and start an ETF.
brazzy•1h ago
I don't hate it, but I suspect that the incentives this would generate would not end up producing results that strictly align with the ones you envision and desire.

For one thing, you're essentially mandating data centers to be colocated with power plants and waste water treatment plants, instead of these things each being located independently according to the requirements of their different functions. If that really leads to "ridiculously high reward", why isn't it being done already?

noduerme•1h ago
OTOH, then AIs could functionally drink, pee and poop
Eddy_Viscosity2•1h ago
The trickle-down economy dictates that data centers get first access to electricity and fresh water (and any other resource they need). People get whatever is left over and like it. This is America.
xnx•55m ago
Good rules. How about we apply them to alfalfa farms (that send their water to Saudi Arabia) or football stadiums (I don't like football)?

The point is, I don't see the logic in singling out data centers over anything else.

roxolotl•5m ago
It’s an article about data centers so we’re talking about data centers. 100% agree we should be pushing all industries to use their resources not those of the commons. Data centers do happen to be easier to mostly close loop though than alfalfa farms. Football stadiums on the other hand 100% should be.
shalmanese•21m ago
Sure, if we’re in the business of making arbitrary requests, how about every data center operator has to bring 1 Epstein accused to justice for every data center they’re allowed to build?

The hard part has never been the “what”, it’s always been the “how”.

co_king_5•1h ago
New York Times: How Fast Will A.I. Agents Rip Through the Economy?

https://archive.is/YKLw3

randyrand•1h ago
> You don’t “train a human.”

In this context, "train" a human makes perfect sense.

zozbot234•1h ago
Yes but this is the recurring "Two Minutes Hate, the Sam Altman Edition" HN thread. We all know it's a dumb criticism, we're just going along with it.
noduerme•1h ago
Yeah, it makes sense, and the Atlantic is typically averse to confronting reality. I don't think they took this from exactly the right perspective. You DO train a human. But Altman says:

>>It also takes a lot of energy to train a human

as a caveat to the energy it takes to train GPTs. The question I believe the writer is trying to ask is: Why is it better to train a GPT than a human?

swingboy•1h ago
I’m not a fan of Altman, but writing an entire article on the basis of what was clearly a joke is…reaching.
Eddy_Viscosity2•52m ago
Yeah, but it's the kind of joke that reveals a truth about the way Altman views the world and his place in it. Taken completely alone, sure, it's just a flippant statement. But taken with everything else about Altman, it reads different. It's joking-not-joking.
bananaflag•1h ago
Well, this is the proposition the field of AI was founded on: that intelligence can be replicated by machines. It's not Sam Altman who "lost his grip". I know there are people in this world who believe humans are somehow special, non-materialist and non-replicable (it's a basic tenet of most religions), but this person doesn't advance or reference a single argument. The article is not very intellectually honest.