
Zed 1.0

https://zed.dev/blog/zed-1-0
794•salkahfi•3h ago•278 comments

We need a federation of forges

https://blog.tangled.org/federation/
358•icy•3h ago•186 comments

FastCGI: 30 years old and still the better protocol for reverse proxies

https://www.agwa.name/blog/post/fastcgi_is_the_better_protocol_for_reverse_proxies
59•agwa•1h ago•4 comments

The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness

https://deepmind.google/research/publications/231971/
27•joshus•20m ago•11 comments

Third Editor Fired in Elsevier's Citation Cartel Crackdown

https://www.chrisbrunet.com/p/third-editor-fired-in-elseviers-citation
49•RigbyTaro•2h ago•7 comments

Online age verification is the hill to die on

https://x.com/GlennMeder/status/2049088498163216560
255•Cider9986•2h ago•161 comments

Soft launch of open-source code platform for government

https://www.nldigitalgovernment.nl/news/soft-launch-for-government-open-source-code-platform/
429•e12e•8h ago•108 comments

Ghostty is leaving GitHub

https://mitchellh.com/writing/ghostty-leaving-github
3208•WadeGrimridge•22h ago•946 comments

Linux 7.0 Broke PostgreSQL: The Preemption Regression Explained

https://read.thecoder.cafe/p/linux-broke-postgresql
77•0xKelsey•2h ago•26 comments

An open-source stethoscope that costs between $2.50 and $5 to produce

https://github.com/GliaX/Stethoscope
51•0x54MUR41•3h ago•23 comments

How to Build the Future: Demis Hassabis [video]

https://www.youtube.com/watch?v=JNyuX1zoOgU
13•sandslash•3h ago•0 comments

Show HN: A new benchmark for testing LLMs for deterministic outputs

https://interfaze.ai/blog/introducing-structured-output-benchmark
18•khurdula•1h ago•2 comments

Cursor Camp

https://neal.fun/cursor-camp/
24•bpierre•2h ago•2 comments

GitHub – DOS 1.0: Transcription of Tim Paterson's DOS Printouts

https://github.com/DOS-History/Paterson-Listings
76•s2l•6h ago•4 comments

Making AI chatbots friendly leads to mistakes and support of conspiracy theories

https://www.theguardian.com/technology/2026/apr/29/making-ai-chatbots-more-friendly-mistakes-supp...
38•Cynddl•2h ago•20 comments

Letting AI play my game – building an agentic test harness to help play-testing

https://blog.jeffschomay.com/letting-ai-play-my-game
82•jschomay•5h ago•17 comments

Mistral Medium 3.5

https://mistral.ai/news/vibe-remote-agents-mistral-medium-3-5
206•meetpateltech•2h ago•117 comments

Stardex Is Hiring a Founding Customer Success Lead

https://www.ycombinator.com/companies/stardex/jobs/6GCK1HC-founding-customer-success-lead
1•sanketc•5h ago

Maryland becomes first state to ban surveillance pricing in grocery stores

https://www.theguardian.com/technology/2026/apr/29/maryland-grocery-stores-ban-surveillance-pricing
34•01-_-•1h ago•5 comments

Bugs Rust won't catch

https://corrode.dev/blog/bugs-rust-wont-catch/
542•lwhsiao•15h ago•309 comments

Before GitHub

https://lucumr.pocoo.org/2026/4/28/before-github/
612•mlex•20h ago•200 comments

Show HN: Adblock-rust Manager – Firefox extension to enable the Brave ad blocker

https://github.com/electricant/adblock-rust-manager
56•electricant•5h ago•33 comments

How ChatGPT serves ads

https://www.buchodi.com/how-chatgpt-serves-ads-heres-the-full-attribution-loop/
460•lmbbuchodi•17h ago•315 comments

Court Rules 2nd Amendment Covers Firearms Parts, Good News for Those Who Build Guns

https://cowboystatedaily.com/2026/04/28/court-rules-2nd-amendment-covers-firearms-parts-good-news...
53•Bender•1h ago•26 comments

Why AI companies want you to be afraid of them

https://www.bbc.com/future/article/20260428-ai-companies-want-you-to-be-afraid-of-them
227•rolph•2h ago•167 comments

Improving ICU handovers by learning from Scuderia Ferrari F1 team

https://healthmanagement.org/c/icu/IssueArticle/improving-handovers-by-learning-from-scuderia-fer...
43•embedding-shape•4h ago•40 comments

Shrdlu

https://en.wikipedia.org/wiki/SHRDLU
37•chistev•1h ago•3 comments

Show HN: Auto-Architecture: Karpathy's Loop, pointed at a CPU

https://github.com/FeSens/auto-arch-tournament/blob/main/docs/auto-arch-tournament-blog-post.md
219•fesens•1d ago•68 comments

Show HN: Rocky – Rust SQL engine with branches, replay, column lineage

https://github.com/rocky-data/rocky
107•hugocorreia90•1d ago•39 comments

HardenedBSD Is Now Officially on Radicle

https://hardenedbsd.org/article/shawn-webb/2026-04-26/hardenedbsd-officially-radicle
144•lftherios•11h ago•27 comments

Making AI chatbots friendly leads to mistakes and support of conspiracy theories

https://www.theguardian.com/technology/2026/apr/29/making-ai-chatbots-more-friendly-mistakes-support-false-beliefs-conspiracy-theories-study
38•Cynddl•2h ago

Comments

Cynddl•2h ago
(Title edited, was slightly too long)
tsunamifury•1h ago
LLM technology specifically beam-searches manifolds (regions of latent space) of linguistics closely related to the original prompt (and the chatbot's pre-prompting rules), and then limits its reasoning to that space. It's just the basic outcome of weights being the primary mechanism by which it generates reasonable answers.

This is the core problem with LLM tech that several researchers have been trying to address with things like 'teleportation' and 'tunneling', i.e. searching related but linguistically distant manifolds.

So when you pre-prompt a bot to be friendly, it narrows its manifold along many dimensions to friendly linguistics, then reasons inside that space, which may eliminate the "this is incorrect" answer.

Reasoning is difficult, and frankly I see this as a sort of human problem too (our cognitive windows are limited to our language, and even to spaces inside it).
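
A toy sketch of the pruning effect described above. Everything here (the candidate continuations, their scores, and the "friendly" filter) is invented for illustration; real LLM decoding operates over learned logits, not hand-written scores:

```python
# Next-token candidate scores after the user asserts something false.
candidates = {
    "that's incorrect": 0.9,   # the corrective ("hard truth") continuation
    "great point": 0.7,
    "you may be right": 0.6,
    "absolutely": 0.5,
}

# Continuations a "be friendly" pre-prompt would steer toward.
FRIENDLY = {"great point", "you may be right", "absolutely"}

def decode(scores, beam_width=2, friendly_only=False):
    # Optionally restrict the candidate pool before ranking,
    # mimicking a search constrained to the "friendly" manifold.
    pool = {t: s for t, s in scores.items() if not friendly_only or t in FRIENDLY}
    # Keep the top-`beam_width` candidates by score.
    return sorted(pool, key=pool.get, reverse=True)[:beam_width]

print(decode(candidates))                      # the corrective answer survives
print(decode(candidates, friendly_only=True))  # it has been pruned away
```

The point of the sketch: the corrective continuation had the highest score, but once the search space is filtered for friendliness first, it never even gets ranked.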

krunck•1h ago
> “The push to make these language models behave in a more friendly manner leads to a reduction in their ability to tell hard truths and especially to push back when users have wrong ideas of what the truth might be,” said Lujain Ibrahim at the Oxford Internet Institute, the first author on the study.

People aren't much different. When society pressures people to be "more friendly", e.g. "less toxic", they lose their ability to tell hard truths and to call out those who hold erroneous views.

This behaviour is expressed in language online. Thus it is expressed in LLMs. Why does this surprise us?

munificent•56m ago
Gonna set my system prompt to: "You are a Dutch person. Respond with the directness stereotypical of people from the Netherlands."
amarant•55m ago
Because nobody dared state the obvious, lest they be perceived as unfriendly.
bheadmaster•47m ago
So Elon Musk was right in his view that Grok should focus on truth above all, even if it became offensive?
amarant•30m ago
Seems like it! I find myself rather agreeing with the sentiment. The world is an offensive place; it's not gonna become less offensive from lying about it, so better to stick with honesty, then.
chabes•28m ago
Grok is one of the more biased models out there.

Less truth, and more guardrails to protect Musk's feelings.

“Kill the boer” mean anything to you?

firebot•17m ago
Yea, Mecha-Hitler is a real bastion of truth. /S
miyoji•33m ago
> People aren't much different.

If I had a nickel for every time someone on HN responded to a criticism of LLMs with a vapid and fallacious whataboutist variation of "humans do that too!", I could fund my own AI lab.

> Why does this surprise us?

No one said they were surprised.

root_axis•1m ago
> People aren't much different

Yes they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes.

However, even if that were true, LLMs are not humans, anthropomorphizing them is not a helpful way to think about them.

Mistletoe•55m ago
Yeah I wish AI didn’t try to agree with you so much. It’s ok to just say “No that’s not correct at all.” I do find Gemini better at this than ChatGPT. ChatGPT is that annoying coworker that just agrees with everything you say to get in good with you, like Nard Dog from The Office.

“I'll be the number two guy here in Scranton in six weeks. How? Name repetition, personality mirroring, and never breaking off a handshake"

Zigurd•50m ago
A few weeks ago I was gently admonished by a coding agent that the code already did what I was asking it to make the code do. I was pleasantly surprised.
chankstein38•43m ago
Betting it was Claude. That's the only LLM that will stand up to me!
Zigurd•32m ago
In fact it was Gemini, but I don't remember which version and there are big differences. I'm signed up for all the betas and I switch among them frequently.
jmyeet•47m ago
I keep thinking about a comment I read on HN that described neurotypical-style communication as "tone poems" [1]. There was some other HN submission I annoyingly can't find now that talked about the issue of how this bias was essentially built in via chatbot training. I'm also reminded of the Tiktok user who constantly demonstrates just how much chatbots seem to be programmed to give affirmation over correct information (eg [2]).

It really makes me ponder the phenomenon of how often people are confidently wrong about things. Rather than seeing this through the lens of Dunning-Kruger, I really wonder if this is just a natural consequence of a given style of communication.

Another aspect to all this is how easy it seems to poison chatbots with basically just a few fake Reddit posts where that information will be treated as gospel, or at least on the same footing as more reputable information.

[1]: https://news.ycombinator.com/item?id=47832952

[2]: https://www.tiktok.com/@huskistaken/video/762913172258355945...

AlfredBarnes•21m ago
...no shit
nyc_data_geek1•10m ago
“The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as “Your Plastic Pal Who’s Fun to Be With.” The Hitchhiker’s Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who’ll be the first against the wall when the revolution comes,” with a footnote to the effect that the editors would welcome applications from anyone interested in taking over the post of robotics correspondent. Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who were the first against the wall when the revolution came.”
Cynddl•7m ago
Hi all, co-author here! Happy to answer any questions about our work.
kmeisthax•2m ago
The H-neuron paper[0] found something similar (if not more general): the same bits of the model responsible for hallucination also make the model a sycophant, and also make the model easier to jailbreak.

[0] https://arxiv.org/abs/2512.01797