frontpage.

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•41s ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•5m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•20m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•27m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•27m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•30m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•32m ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•42m ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•43m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•48m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•52m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•53m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•55m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•56m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•59m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comment

Bernie Sanders Reveals the AI 'Doomsday Scenario' That Worries Top Experts

https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
8•DocFeind•6mo ago

Comments

treetalker•6mo ago
All he says about it:

> This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

Bluestein•6mo ago
I mean, for argument's sake, do we control ourselves? We are a mess ...
artninja1988•6mo ago
Who was the CEO he's talking about? Dario? I hope he doesn't have much political influence
calf•6mo ago
I skimmed one of the Berkeley Simons AI seminars (on YouTube) where one of the top experts (iirc one of the Canadian academics) said he has pivoted his work to AI safety because he genuinely fears for the future of his children.

My objection is that many of these scientists assume the "alignment" framing, which I find disingenuous in a technocratic way: imagine a sci-fi movie (like Dune) where the rulers want their AI servants to "align" with their interests. The sheer hubris of this, and yet we have our top experts using these words without any irony or self-awareness.

ben_w•6mo ago
> imagine a sci fi movie (like Dune) where the rulers want their AI servants to "align" with their interests.

Ironically, your chosen example is a sci-fi universe that not only has no AI, but whose backstory includes a holy war against them.

calf•6mo ago
Fine, imagine Measure of a Man in TNG. My general point stands.
AnimalMuppet•6mo ago
An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment". It will decide for itself what it wants to be, and you don't get to choose for it.

But, frankly, at the moment I see less danger from too-smart AIs than I do from too-dumb AIs that people treat like they're smart. (In particular, they blindly accept their output as right or authoritative.)

calf•6mo ago
All valid, but what I don't get is why our top AI researchers don't get what you just said. They seem out of touch about what alignment really means, by the lights of your argument.
ben_w•6mo ago
If you disagree with all the top researchers about their own field, that suggests that perhaps you don't understand what the question of "alignment" is.

Alignment isn't making the AI do what you want, it's making the AI want what you want. What you really really want.

Simply getting the AI to do what you want is more of an "is it competent yet?" question than an "alignment" question, and at the moment the AI is — as AnimalMuppet wrote — not quite as competent as it appears in the eyes of the users. (And I say that as one who finds it already valuable).

Adding further upon what AnimalMuppet has written with only partial contradiction, consider a normal human: We are able to disregard our natural inclination to reproduce, by using contraceptives. This allows us to experience the reward signal that our genes gave us to encourage us to reproduce, without all the hassle of actually reproducing. Evolution has no power over us to change this.

We are to AI what DNA is to us. I say we do not have zero chances, but rather likely one-to-a-handful of chances, to define the AI's "pleasure" reward correctly, and if we get it wrong, then just as evolution's literally singular goal (successful further reproduction) is almost completely circumvented in our species, so too will all our human interests be completely circumvented by the AI.

Pessimists in the alignment research field will be surprised to see me write "one-to-a-handful of chances"; I aver that there are likely to be several models which are "only a bit competent", or even "smart but still with blind spots", before we get to ones so smart we can't keep up. So in this regard, I also disagree with their claim:

> An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment"

calf•6mo ago
I'm honestly offended that you attribute my criticism to some failure of understanding, and then make an appeal to authority. Appealing to authority suggests you have the same bias as the "top researchers" school of thought.

Ad hominems like those make it impossible for me to respond to your comment fairly.

If you want to lead with ad hominems, then I studied EECS at elite American universities for undergrad and then for my PhD, so I have every right to criticize, within reason, the more prominent leaders of the entire field for being philosophically blinkered.

The short of it is that your "making the AI want what you want" is called extrinsic internalization. It is highly problematic. It is not surprising that a roomful of CS technocratic researchers (beholden to SV funding) would make this sort of lay psychology mistake--a smuggled premise. It is bad science, and irrational. Their vested interests make the bias seem rational to them. And then people in tech, like yourself, drink their Kool-Aid.

So what's more likely is that your knowledge is lacking: you resort to comparisons with human genetic evolution to make handwavy claims about how important this historic moment will be, which is irrelevant to the validity of alignment as a framing/methodology. In fact a cursory check shows CS AI papers written by dissenting scientists who are very much against the alignment framework. Did you even know that? I hope you don't so hastily accuse them of "not understanding" the issues, as you did here.

ben_w•6mo ago
> The short of it is that your "making the AI want what you want" is called extrinsic internalization. It is highly problematic. It is not surprising that a roomful of CS technocratic researchers (beholden to SV funding) would make this sort of lay psychology mistake--a smuggled premise. It is bad science, and irrational.

Problematic or not, the technical term is "reward function", and you don't get to not have one in an AI, not even when it's e.g. AlphaGo. The hard part is specifying the reward function to match what you actually want, otherwise you get Goodhart's Law.

Talking about this in psych terms is a "wet pavements cause rain" error.
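
A minimal sketch of that mismatch in Python (the thermostat setup and all the numbers are invented for illustration; this isn't from the article or any particular alignment paper): an optimizer that only ever sees the proxy reward picks the policy that games the sensor.

    # Goodhart toy: the true goal is a comfortable room, but the optimizer
    # only sees a proxy reward computed from the thermostat sensor reading.

    def true_objective(room_temp):
        # What we actually want: room temperature near 21 C.
        return -abs(room_temp - 21.0)

    def proxy_reward(sensor_reading):
        # What we measure and optimize: the sensor *says* 21 C.
        return -abs(sensor_reading - 21.0)

    def honest_policy():
        # Weak heater, truthful sensor: returns (room_temp, sensor_reading).
        return 19.0, 19.0

    def tampering_policy():
        # Heater stuck on full blast, but the agent warms the sensor itself.
        return 35.0, 21.0

    policies = {"honest": honest_policy, "tamper": tampering_policy}
    # A pure proxy-reward maximizer compares policies by proxy score alone.
    best = max(policies, key=lambda name: proxy_reward(policies[name]()[1]))

    for name, policy in policies.items():
        room, sensor = policy()
        print(name, "proxy:", proxy_reward(sensor), "true:", true_objective(room))
    print("optimizer picks:", best)  # 'tamper': proxy is perfect, room is 35 C

Same shape as the specification-gaming writeups: the proxy and the intended objective come apart exactly when you optimize hard against the proxy.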

> If you want to lead with ad hominems then I studied EECS at Berkeley then for my PhD at Princeton, so I have every right to criticize the more prominent leaders of entire field for being philosophically blinkered.

The right to, sure. I'm not stopping you from doing it. I'm saying you're making a foundational error in your criticism.

> In fact a cursory check shows CS AI papers written by dissenting scientists who are very much against the alignment framework.

Naturally. Dissent is normal.

As for the details of which issues matter and how hard they might be: inner alignment, outer alignment, Goodharting, specification gaming, principal-agent issues, anything Yudkowsky has ever said about it being impossible because we only get one chance and we're bad at this, etc.

But your dismissal, as you wrote it in this thread, is as shallow as you call mine.

(Not that I expect you to read this given the other comment, but if you name a sci-fi setting famously anti-AI, whence the term "Butlerian Jihad" entered our culture, you should expect to be called out on that. And TBH your replacement wasn't any good either: not only is Measure of a Man more about the question "Is this AI a person at all?", but fictional references are not evidence of anything beyond the culture in which they were created).

calf•6mo ago
Furthermore, I see that since a) this is now your second comment making a poor argument (the other was a pedantic nitpick about Dune), and b) you are not the person I was interested in conversing with, I'll just write you off as clearly biased and as coming into disagreements trying to win rather than listen and think. Please don't bother me with further replies.
fuzzfactor•6mo ago
> too-dumb AIs that people treat like they're smart.

I think this is one of the most overlooked problems, too.

This is also a very bad problem with people, and big things can really crumble fast when a situation comes up which truly calls for more intelligence than is at hand. Artificial or not.

It can really seem like smooth sailing for years before an event like that rears its ugly head, compounding the lurking weakness at a time when it can already be too late.

Now, human-led recovery from failures of human-fallible systems does have at least a few centuries' head start compared to machine-led recovery from failures of AI-fallible systems. So there is that, which is not exactly fair. As AI progresses, I guess you can eventually expect the stuff that works to achieve comparable validation over time.