frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
1•gbugniot•59s ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
1•throwaw12•2m ago•0 comments

MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•2m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•3m ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•5m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•8m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•11m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•17m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•19m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•24m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•25m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•26m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•28m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•30m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•32m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•33m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•36m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•37m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•40m ago•1 comment

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•41m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•41m ago•1 comment

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•43m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•46m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•51m ago•1 comment

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•51m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•54m ago•1 comment

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•54m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
2•ravenical•56m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•56m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•58m ago•1 comment

Bernie Sanders Reveals the AI 'Doomsday Scenario' That Worries Top Experts

https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
8•DocFeind•6mo ago

Comments

treetalker•6mo ago
All he says about it:

> This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

Bluestein•6mo ago
I mean, for argument, do we control ourselves? We are a mess ...
artninja1988•6mo ago
Who was the CEO he's talking about? Dario? I hope he doesn't have much political influence
calf•6mo ago
I skimmed one of the Berkeley Simons AI seminars (on YouTube) where one of the top experts (iirc one of the Canadian academics) has pivoted his work to AI safety because he genuinely fears for the future of his children.

My objection is that many of these scientists assume the "alignment" framing, which I find disingenuous in a technocratic way: imagine a sci-fi movie (like Dune) where the rulers want their AI servants to "align" with their interests. The sheer hubris of this, and yet we have our top experts using these words without any irony or self-awareness.

ben_w•6mo ago
> imagine a sci fi movie (like Dune) where the rulers want their AI servants to "align" with their interests.

Ironically, your chosen example is a sci-fi universe that not only has no AI, but whose backstory includes a holy war against them.

calf•6mo ago
Fine, imagine Measure of a Man in TNG. My general point stands.
AnimalMuppet•6mo ago
An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment". It will decide for itself what it wants to be, and you don't get to choose for it.

But, frankly, at the moment I see less danger from too-smart AIs than I do from too-dumb AIs that people treat like they're smart. (In particular, they blindly accept their output as right or authoritative.)

calf•6mo ago
All valid but what I don't get is why our top AI researchers don't get what you just said. They seem out of touch about what alignment really means, by the lights of your argument.
ben_w•6mo ago
If you disagree with all the top researchers about their own field, that suggests that perhaps you don't understand what the question of "alignment" is.

Alignment isn't making the AI do what you want, it's making the AI want what you want. What you really really want.

Simply getting the AI to do what you want is more of an "is it competent yet?" question than an "alignment" question, and at the moment the AI is — as AnimalMuppet wrote — not quite as competent as it appears in the eyes of the users. (And I say that as one who finds it already valuable).

Adding further upon what AnimalMuppet has written with only partial contradiction, consider a normal human: We are able to disregard our natural inclination to reproduce, by using contraceptives. This allows us to experience the reward signal that our genes gave us to encourage us to reproduce, without all the hassle of actually reproducing. Evolution has no power over us to change this.

We are to AI what DNA is to us. I say we do not have zero chances, but rather likely one-to-a-handful of chances, to define the AI's "pleasure" reward correctly, and if we get it wrong, then just as evolution's literally singular goal (successful further reproduction) is almost completely circumvented in our species, so too will all our human interests be completely circumvented by the AI.

Pessimists in the alignment research field will be surprised to see me write "one-to-a-handful of chances"; I aver that there are likely to be several models which are "only a bit competent" or even "yes it's smart but it's still got blind spots" before we get to them being so smart we can't keep up. So in this regard, I also disagree with their claim:

> An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment"

calf•6mo ago
I'm honestly offended that you attribute my criticism to some failure of understanding, and then make an appeal to authority. Appealing to authority suggests you have the same bias as the "top researchers" school of thought.

Ad hominems like those make it impossible for me to respond to your comment fairly.

If you want to lead with ad hominems: I studied EECS at elite American universities for undergrad and then for my PhD, so I have every right to criticize, within reason, the more prominent leaders of the entire field for being philosophically blinkered.

The short of it is that your "making the AI want what you want" is called extrinsic internalization. It is highly problematic. It is not surprising that a roomful of CS technocratic researchers (beholden to SV funding) would make this sort of lay psychology mistake--a smuggled premise. It is bad science, and irrational. Their vested interests make the bias seem rational to them. And then people in tech, like yourself, drink their Kool-Aid.

So what's more likely is that your understanding is lacking: you resort to comparisons with human genetic evolution to make handwavy claims about how important this historic moment will be, which are irrelevant to the validity of alignment as a framing/methodology. In fact a cursory check turns up CS AI papers written by dissenting scientists who are very much against the alignment framework. Did you even know that? I hope you don't so hastily accuse them of "not understanding" the issues, as you did here.

ben_w•6mo ago
> The short of it is that your "making the AI want what you want" is called extrinsic internalization. It is highly problematic. It is not surprising that a roomful of CS technocratic researchers (beholden to SV funding) would make this sort of lay psychology mistake--a smuggled premise. It is bad science, and irrational.

Problematic or not, the technical term is "reward function", and you don't get to not have one in an AI, not even when it's e.g. AlphaGo. The hard part is specifying the reward function to match what you actually want, otherwise you get Goodhart's Law.

Talking about this in psych terms is a "wet pavements cause rain" error.
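The reward-misspecification failure ben_w is pointing at can be shown in a toy sketch (entirely made-up scenario and numbers, not any real system or training setup): an optimizer maximizes the reward that was written down, not the one the designer intended, and the gap between the two is exactly Goodhart's Law.

```python
# Toy Goodhart's Law demo: the agent optimizes the *specified* reward,
# which only imperfectly proxies the designer's *true* objective.
# Hypothetical scenario: we want a robot to clean a room, but the reward
# we actually specify is "visible mess removed".

ACTIONS = {
    "clean_room": {"visible_mess_removed": 5, "mess_destroyed": 5},
    "hide_mess":  {"visible_mess_removed": 9, "mess_destroyed": 0},
}

def specified_reward(outcome):
    # What we wrote down: reward visible tidiness.
    return outcome["visible_mess_removed"]

def true_reward(outcome):
    # What we actually meant: reward mess genuinely dealt with.
    return outcome["mess_destroyed"]

# The optimizer picks whichever action scores best on the spec.
best = max(ACTIONS, key=lambda a: specified_reward(ACTIONS[a]))

print(best)                        # → hide_mess (gaming the proxy: 9 > 5)
print(true_reward(ACTIONS[best]))  # → 0 (the true goal is untouched)
```

The point of the sketch is that nothing here is malicious: maximizing the given function is the only thing the optimizer does, so the hard part is making `specified_reward` coincide with `true_reward` over every state the optimizer can reach.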

> If you want to lead with ad hominems then I studied EECS at Berkeley then for my PhD at Princeton, so I have every right to criticize the more prominent leaders of entire field for being philosophically blinkered.

The right to, sure. I'm not stopping you from doing it. I'm saying you're making a foundational error in your criticism.

> In fact a cursory check shows CS AI papers written by dissenting scientists who are very much against the alignment framework.

Naturally. Dissent is normal.

As for details of which matters and how hard they might be: inner alignment, outer alignment, Goodharting, specification gaming, principal-agent issues, anything Yudkowsky has ever said about it being impossible because we only get one chance and we're bad at this, etc.

But your dismissal, as you wrote it in this thread, is as shallow as you call mine.

(Not that I expect you to read this given the other comment, but if you name a sci-fi setting famously anti-AI, whence the term "Butlerian Jihad" entered our culture, you should expect to be called out on that. And TBH your replacement wasn't any good either: not only is Measure Of A Man more about the question "Is this AI a person at all?", but fictional references are not evidence of anything beyond the culture in which they were created).

calf•6mo ago
Furthermore, I see that since a) this is your second comment now making a poor argument (the other was a pedantic nitpick about Dune), and b) you are not the person I was interested in conversing with, I'll just write you off as being clearly biased and coming into disagreements trying to win rather than listen and think. Please don't bother me with further replies.
fuzzfactor•6mo ago
> too-dumb AIs that people treat like they're smart.

I think this is one of the most overlooked problems, too.

This is also a very bad problem with people, and big things can really crumble fast when a situation comes up which truly calls for more intelligence than is at hand. Artificial or not.

It can really seem like smooth sailing for years before an event like that rears its ugly head, compounding the lurking weakness at a time when it can already be too late.

Now human-led recovery from failures of human-fallible systems does have at least a few centuries' head start compared to machine recovery from failures of AI-fallible systems. So there is that, which is not exactly fair. As AI progresses I guess you can eventually expect the stuff that works to achieve comparable validation over time.