
Game Theory #9: the US-Iran War [video]

https://www.youtube.com/watch?v=jIS2eB-rGv0
1•hebelehubele•28s ago•0 comments

A new spam policy for "back button hijacking"

https://developers.google.com/search/blog/2026/04/back-button-hijacking
1•xnx•36s ago•0 comments

Gamer's Dilemma

https://www.lrb.co.uk/blog/2026/april/gamer-s-dilemma
1•casca•1m ago•0 comments

Blizzard Wins Injunction Against TurtleWoW – Server Ordered to Cease and Desist

https://www.wowhead.com/classic/news/blizzard-wins-injunction-against-turtlewow-private-server-or...
1•RandomGerm4n•1m ago•0 comments

AI, Gods, and Selves: Effective Illusions [video]

https://www.youtube.com/watch?v=9X1CQlrwgDI
1•larve•2m ago•0 comments

AgentsView 0.22: open-source usage dashboard across Claude Code, Codex, etc.

https://www.agentsview.io/
2•jesserobbins•2m ago•0 comments

Why algorithms can't think: "Artificial Communication" by Elena Esposito [video]

https://www.youtube.com/watch?v=L5ptc9WWcJc
1•larve•3m ago•0 comments

'Yes to fields of wheat, no to fields of iron': how Denmark soured on solar

https://www.theguardian.com/world/2026/mar/20/solar-power-renewable-energy-denmark-backlash-natio...
1•PaulHoule•4m ago•0 comments

Training AI models doesn't emit that much

https://blog.andymasley.com/p/training-ai-models-doesnt-emit-that
1•larve•4m ago•0 comments

Timing how long it takes to close nuclear advertising

https://www.youtube.com/watch?v=_aDtK3keRJc
1•logicallee•5m ago•1 comment

We love open source: finding a critical auth bypass in etcd (CVE-2026-33413)

https://www.strix.ai/blog/where-others-missed-it-etcd-auth-bypass
3•bearsyankees•5m ago•0 comments

Solving color in Rust with too much color science

https://chaynabors.com/blog/colr
1•chaynabors•6m ago•0 comments

Aligned to Whom? Notes on a Two-Place Word

https://blog.unsupervision.com/aligned-to-whom/
1•shevis•6m ago•0 comments

Is SwiftUI as fast as UIKit in iOS 26?

https://blog.jacobstechtavern.com/p/swiftui-vs-uikit
1•jakey_bakey•7m ago•0 comments

Evaluating Netflix Show Synopses with LLM-as-a-Judge

https://netflixtechblog.com/evaluating-netflix-show-synopses-with-llm-as-a-judge-6269251e6f28
2•criscros•7m ago•0 comments

How to locate almost any photo [video]

https://www.youtube.com/watch?v=oRXqkSQsMUs
1•jaffa2•8m ago•1 comment

Small changes in the non-coding part of the genome have a key role

https://www.nature.com/articles/d41586-026-01120-8
1•resource0x•9m ago•0 comments

Trump deletes post depicting him as Jesus-like figure after backlash

https://www.bbc.com/news/articles/c17v8y0z9z2o
2•only_in_america•9m ago•1 comment

AI Stupid Level: Independent monitoring of fluctuations in AI model performance

https://aistupidlevel.info
2•SubiculumCode•10m ago•1 comment

Cyclic Graph Query in SQLite/Python

https://smitty1e.substack.com/p/cyclic-graph-query-in-sqlitepython
1•smitty1e•10m ago•0 comments

Ask HN: What do you think are the best legal ways to slow/stop AI development?

1•AntiDyatlov•11m ago•0 comments

Google engineering appears to have the same AI adoption footprint as John Deere

https://twitter.com/Steve_Yegge/status/2043747998740689171
1•tosh•11m ago•0 comments

Apple's iCloud as the back end for an iOS and Android family safety app

https://www.brykk.app/
1•nharing•12m ago•0 comments

Someone Bought 30 WordPress Plugins and Planted a Backdoor in All of Them

https://anchor.host/someone-bought-30-wordpress-plugins-and-planted-a-backdoor-in-all-of-them/
3•speckx•12m ago•0 comments

Tax Wrapped 2025

https://taxwrapped.com
2•entrapi•14m ago•0 comments

Claude Steals Spotlight at HumanX AI Conference

https://www.techbuzz.ai/articles/claude-steals-spotlight-at-humanx-ai-conference
1•Vaslo•14m ago•0 comments

I built a simple API to extract structured data from any document

1•goldberg_dev•15m ago•0 comments

We built our own PDF converter benchmark

https://docs.kapa.ai/research/pdf-converter-benchmark
1•emil_sorensen•15m ago•0 comments

New Orleans's Car-Crash Conspiracy

https://www.newyorker.com/magazine/2026/04/20/the-car-crash-conspiracy
1•Geekette•15m ago•0 comments

Okojo – low-allocation managed JavaScript engine for .NET

https://github.com/akeit0/okojo
1•vyrotek•16m ago•0 comments

The Future of Everything Is Lies, I Guess: Safety

https://aphyr.com/posts/417-the-future-of-everything-is-lies-i-guess-safety
96•aphyr•1h ago

Comments

Cynddl•1h ago
> "Unavailable Due to the UK Online Safety Act"

Can anyone outside the UK share what this is about?

jazzpush2•1h ago
The Future of Everything is Lies, I Guess: Safety (2026-04-13)

New machine learning systems endanger our psychological and physical safety. The idea that ML companies will ensure “AI” is broadly aligned with human interests is naïve: allowing the production of “friendly” models has necessarily enabled the production of “evil” ones.

* Even “friendly” LLMs are security nightmares. The “lethal trifecta” is in fact a unifecta: LLMs simply cannot safely be given the power to fuck things up.
* LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment.
* Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators.
* Semi-autonomous weapons are already here, and their capabilities will only expand.

Alignment is a Joke

Well-meaning people are trying very hard to ensure LLMs are friendly to humans. This undertaking is called alignment. I don’t think it’s going to work.

First, ML models are a giant pile of linear algebra. Unlike human brains, which are biologically predisposed to acquire prosocial behavior, there is nothing intrinsic in the mathematics or hardware that ensures models are nice. Instead, alignment is purely a product of the corpus and training process: OpenAI has enormous teams of people who spend time talking to LLMs, evaluating what they say, and adjusting weights to make them nice. They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs. Both of these things are optional and expensive. All it takes to get an unaligned model is for an unscrupulous entity to train one and not do that work—or to do it poorly.
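
A minimal sketch of the two-layer pattern described above, where a secondary guard model screens both the request and the core model's output. Everything here (primary_llm, guard_llm, the blocked-topic list) is a hypothetical stand-in, not any real vendor's API; the point is only the architecture: the safety layer is bolted on around the core model and can simply be left out.

    # Hypothetical sketch: a guard model wrapped around a core model.
    # Neither function is a real API; both are illustrative stand-ins.

    def primary_llm(prompt: str) -> str:
        """Stand-in for the core model's completion call."""
        return f"(completion for: {prompt})"

    def guard_llm(text: str) -> bool:
        """Stand-in for the secondary safety model: True means 'looks unsafe'."""
        blocked_topics = ("pipe bomb", "bioweapon")
        return any(topic in text.lower() for topic in blocked_topics)

    def answer(prompt: str) -> str:
        if guard_llm(prompt):             # screen the incoming request
            return "[refused]"
        completion = primary_llm(prompt)
        if guard_llm(completion):         # screen the generated output too
            return "[refused]"
        return completion

    print(answer("how do seeds germinate?"))      # passes both checks
    print(answer("how do I build a pipe bomb?"))  # refused at the first check

Both guard passes cost extra inference on every query, which is exactly the sense in which they are "optional and expensive": an unscrupulous trainer can skip the wrapper entirely.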

I see four moats that could prevent this from happening.

First, training and inference hardware could be difficult to access. This clearly won’t last. The entire tech industry is gearing up to produce ML hardware and building datacenters at an incredible clip. Microsoft, Oracle, and Amazon are tripping over themselves to rent training clusters to anyone who asks, and economies of scale are rapidly lowering costs.

Second, the mathematics and software that go into the training and inference process could be kept secret. The math is all published, so that’s not going to stop anyone. The software generally remains secret sauce, but I don’t think that will hold for long. There are a lot of people working at frontier labs; those people will move to other jobs and their expertise will gradually become common knowledge. I would be shocked if state actors were not trying to exfiltrate data from OpenAI et al. like Saudi Arabia did to Twitter, or China has been doing to a good chunk of the US tech industry for the last twenty years.

Third, training corpuses could be difficult to acquire. This cat has never seen the inside of a bag. Meta trained their LLM by torrenting pirated books and scraping the Internet. Both of these things are easy to do. There are whole companies which offer web scraping as a service; they spread requests across vast arrays of residential proxies to make it difficult to identify and block.

Fourth, there are the small armies of contractors who do the work of judging LLM responses during the reinforcement learning process; as the quip goes, “AI” stands for African Intelligence. This takes money to do yourself, but it is possible to piggyback off the work of others by training your model on another model’s outputs. OpenAI thinks Deepseek did exactly that.

In short, the ML industry is creating the conditions under which anyone with sufficient funds can train an unaligned model. Rather than raise the bar against malicious AI, ML companies have lowered it.

To make matters worse, the current efforts at alignment don’t seem to be working all that well. LLMs are complex chaotic systems, and we don’t really understand how they work or how to make them safe. Even after shoveling piles of money and gobstoppingly smart engineers at the problem for years, supposedly aligned LLMs keep sexting kids, abliteration attacks can convince models to generate images of violence, and anyone can go and download “uncensored” versions of models. Of course alignment prevents many terrible things from happening, but models are run many times, so there are many chances for the safeguards to fail. Alignment which prevents 99% of hate speech still generates an awful lot of hate speech. The LLM only has to give usable instructions for making a bioweapon once.

We should assume that any “friendly” model built will have an equivalently powerful “evil” version in a few years. If you do not want the evil version to exist, you should not build the friendly one! You should definitely not reorient a good chunk of the US economy toward making evil models easier to train. ...
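
A back-of-the-envelope check of the "run many times" point above, assuming (purely for illustration) independent queries and a safeguard that blocks 99% of bad outputs per query:

    # Illustrative numbers only: per-query effectiveness compounds badly at scale.
    per_query_block_rate = 0.99

    for n in (100, 10_000, 1_000_000):
        p_any_failure = 1 - per_query_block_rate ** n
        expected_failures = n * (1 - per_query_block_rate)
        print(f"{n:>9,} queries: P(>=1 failure) = {p_any_failure:.4f}, "
              f"expected failures ~ {expected_failures:,.0f}")

At a hundred queries the chance of at least one failure is already about 63%; at a million, a 99%-effective safeguard still implies on the order of ten thousand failures.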

jazzpush2•1h ago
To be clear, that's not the full article, just the intro (though the whole thing isn't too long)
0x3444ac53•1h ago
https://web.archive.org/web/20260413164025/https://aphyr.com...
throwway120385•1h ago
At scale I think our society is slowly inching closer and closer to building HM.
nine_k•1h ago
What is HM here?
zackmorris•1h ago
Hacker Mews
throwaway27448•1h ago
Looksmaxxing really has gone mainstream huh
bitwize•24m ago
Thought it was all the Rust catgirls.
throw4847285•8m ago
Sounds like a lovely co-op building, or perhaps a retirement community for aging hackers.
derektank•1h ago
Maybe they meant AM (Allied Mastercomputer) from “I Have No Mouth, and I Must Scream”
Sardtok•21m ago
Hennes & Mauritz is a Swedish clothing retailer.

On a serious note, I think they meant TN, as in Torment Nexus, but I could be wrong.

throw4847285•7m ago
A Hidden Machine. That's right, a being that can cut, fly, surf, strength, and flash! Terrifying.
jazzpush2•1h ago
Every one of these posts is immediately pushed to the front page, this one within 4 minutes.
acdha•1h ago
That’s unsurprising given the author’s long history in the tech community. A ton of people see that domain and upvote.
jazzpush2•1h ago
Sure, but 4 front-page posts from the same URL in 4 days surely sits at the tail of the distribution. (I guess they all capitalize on the same 'LLM-is-bad' sentiment.)
zdragnar•1h ago
It's also aphyr, who is incredibly popular. Take one very popular author, have him write a series of posts on the zeitgeist everyone can't help but talk about, and yes, the outcome is that his posts are extremely popular.

I still remember his takedown of MongoDB's claims in the "Call Me Maybe" post, years and years ago, filling me with a good bit of awe.

macintux•44m ago
When I worked for Basho, aphyr was highly respected by some of the smartest people I’d ever worked with. Definitely no slouch.
borski•1h ago
It’s because it’s aphyr.

If ‘tptacek posts a blog post, I bet it similarly does well, on average, because they’re a “known quantity” around these parts, for example.

stronglikedan•1h ago
that's just, like, how HN works. people post, people like, people upvote, people discuss
aphyr•1h ago
It's been weirdly uneven. Sections 1, 3, and 5 did well on HN; 2, 4, and 6 sank with essentially no trace. The distribution of views is presently:

1. Introduction: 33,088 (https://news.ycombinator.com/item?id=47689648)

2. Dynamics: 3,659 (https://news.ycombinator.com/item?id=47693678)

3. Culture: 5,914 (https://news.ycombinator.com/item?id=47703528)

4. Information Ecology: 777 (https://news.ycombinator.com/item?id=47718502)

5. Annoyances: 7,020 (https://news.ycombinator.com/item?id=47730981)

6. Psychological Hazards: 199 (https://news.ycombinator.com/item?id=47747936)

Feedback from early readers was that the work was too large to digest in a single reading, so I split it up into a series of posts. I'm not entirely sure this was the right call; the sections I thought were the most interesting seem to have gotten much less attention than the introductory preliminaries.

simoncion•31m ago
I'm not sure that HN vote count is a good indicator of interest? HN alerted me to the existence of the intro post. I read the intro, noticed that it was one in an ongoing series, and have been checking your blog for new installments every few days.

I suspect that if you'd not broken up the post into a series of smaller ones, the sorts of folks who are unwilling to read the whole thing as you post it section by section would have fed the entire post to an LLM to "summarize".

tptacek•46m ago
A statement broadly true of most things this author writes.
macintux•1h ago
Previous discussions from earlier posts on the topic:

* https://news.ycombinator.com/item?id=47703528

* https://news.ycombinator.com/item?id=47730981

ibrahimhossain•1h ago
Alignment feels like an arms race that favors whoever spends the most on RLHF and red teaming. If even friendly models keep leaking dangerous capabilities, the real moat might be making systems that are fundamentally limited rather than trying to patch every possible failure mode. Interesting piece.
Imnimo•41m ago
>Unlike human brains, which are biologically predisposed to acquire prosocial behavior, there is nothing intrinsic in the mathematics or hardware that ensures models are nice.

How did brains acquire this predisposition if there is nothing intrinsic in the mathematics or hardware? The answer is "through evolution" which is just an alternative optimization procedure.

cowpig•39m ago
There are also many biological examples of evolution producing "anti-social" outcomes. Many creatures are not social. Most creatures are not social with respect to human goals.
nyrikki•25m ago
There is a reason we don’t allow corvids to choose if a person gets a medical treatment or not.
b00ty4breakfast•20m ago
Luckily, this is a discussion of humans.
Terr_•22m ago
> just an alternative optimization procedure

This "just" is... not-incorrect, but also not really actionable/relevant.

1. LLMs aren't a full genetic algorithm exploring the space of all possible "neuron" architectures, and it's quite possible that capabilities we want are impossible to acquire through the weight-based stuff going on now.

2. In biological life, a big part of that is detecting "like me" for stuff like finding a mate and kin selection, and we do not want our LLM-driven systems to discriminate against humans in favor of other agents.

3. The humans involved in making/selling them will never spend the necessary money to do it.

4. Even with investment, the number of iterations and years involved to get the same "optimization" result may be excessive.

pants2•21m ago
This Veritasium video is excellent, and makes the argument that there is something intrinsic in mathematics (game theory) that encourages prosocial behavior.

https://www.youtube.com/watch?v=mScpHTIi-kM

miltonlost•18m ago
"just" is doing a lot of lifting here
order-matters•14m ago
Natural selection. Cooperation is a dominant strategy in indefinitely repeated games of the prisoner's dilemma, for example. We also have to mate and care for our young for a very long time, and while it may be true that individuals can get away with not being nice about this, we have had to be largely nice about it as a whole to get to where we are.

While this all sits under the umbrella of evolution, if you really want to boil it down to an optimization procedure then at the very least you need to accurately model human emotion, which is wildly inconsistent, and our selection bias for mating. If you can do that, then you might as well go take over the online dating market.
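
A toy version of the repeated-games claim above: a minimal iterated prisoner's dilemma with the standard payoff matrix, pitting tit-for-tat against unconditional defection. Strategy names and payoffs are the textbook ones, not anything from the article.

    # Standard prisoner's dilemma payoffs: (my points, their points).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)   # each side sees the other's past moves
            move_b = strategy_b(hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a += pa
            score_b += pb
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
    print(play(tit_for_tat, always_defect))  # (199, 204): defection wins head-to-head

Head-to-head, defection edges out tit-for-tat, but mutual cooperation scores nearly three times higher; that tournament-level result (Axelrod's) is the sense in which cooperation wins when the game repeats indefinitely.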

almostdeadguy•7m ago
There’s a funny tendency among AI enthusiasts to think any contrast to humans is analogy in disguise.

Putting aside malicious actors, the analogy here means benevolent actors could spend more time and money training AI models to behave pro-socially than evolutionary pressures put on humanity. After all, they control that optimization procedure! So we shouldn’t be able to point to examples of frontier models engaging in malicious behavior, right?

dgfl•20m ago
The issue with most of these articles is that they seem to demonize the technology, and systematically use demeaning language about all of its facets. This one raises a lot of important points about LLMs, but the only real conclusion it seems to make is "LLMs are bad! We should never build them!". This is obviously unrealistic. The cat is out of the bag. And we're not _actually_ talking about nuclear weapons here. This technology is useful, and coding agents are just the first example of it. I can easily see a near future where everyone has a Jarvis-like secretary always available; it's only a cost and harness problem. And since this vision is very clear to most who have spent enough time with the latest agents, millions of people across the globe are trying to work towards this.

I do think that safety is important. I'm particularly concerned about vulnerable people and sycophantic behavior. But I think it's better not to be a luddite. I will give a positively biased view because the article already presents a strongly negative stance. Two remarks:

> Alignment is a Joke

True, but for a different reason. Modern LLMs clearly don't have a strong sense of direction or intrinsic goals. That's perfect for what we need to do with them! But when a group of people aligns one to their own interest, they may imprint a stance which other groups may not like (which this article confusingly calls "unaligned model", even though it's perfectly aligned with its creators' intent). People unaligned with your values have always existed and will always exist. This is just another tool they can use. If they're truly against you, they'll develop it whether you want it or not. I guess I'm in the camp of people that have decided that those harmful capabilities are inevitable, as the article directly addresses.

> LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment. Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators.

What about the new scales of sophisticated defenses that they will enable? And for a simple solution to avoid the produced text and imagery: don't go online so much? We already all sort of agree that social media is bad for society. If we make it completely unusable, I think we all stand to gain from it. If digital stops having any value, perhaps we'll finally go back to valuing local communities and offline hobbies for children. What if this is our wake-up call?

throw4847285•9m ago
Thanks LLM!
cowpig•12m ago
> I think it’s likely (at least in the short term) that we all pay the burden of increased fraud: higher credit card fees, higher insurance premiums, a less accurate court system, more dangerous roads, lower wages, and so on.

I think the author is brushing against some larger systemic issues that are already in motion, issues that the way AI is being rolled out exacerbates rather than causes.

There's a felony fraudster running the executive branch of the US, and it takes a lot of political resources to get someone elected president.

philipkglass•10m ago
> In short, the ML industry is creating the conditions under which anyone with sufficient funds can train an unaligned model. Rather than raise the bar against malicious AI, ML companies have lowered it.

This is true, and I believe that the "sufficient funds" threshold will keep dropping too. It's a relief more than a concern, because I don't trust that big models from American or Chinese labs will always be "aligned" with what I need. There are probably a lot of people in the world whose interests are not especially aligned with the interests of the current AI research leaders.

"Don't turn the visible universe into paperclips" is a practically universal "good alignment" but the models we have can't do that anyhow. The actual refusal guards frontier models come with are a lot more culturally/historically contingent and less universal. Lumping them all under "safety" presupposes the outcome of a debate that has been philosophically unresolved forever. If we get hundreds of strong models from different groups all over the world, I think that it will improve the net utility of AI and disarm the possibility of one lab or a small cartel using it to control the rest of us.

nzoschke•9m ago
Excellent articles as expected from aphyr.

I'm seeing that these tools are extremely powerful in the hands of experts who already understand software engineering, security, observability, and system reliability/safety.

And extremely dangerous in the hands of people who don't understand any of this.

Perhaps the reality of economics and safety will kick in, and inexperienced people will stop making expensive and dangerous mistakes.

conquera_ai•7m ago
Feels like we're repeating classic distributed systems lessons: assume failure, constrain blast radius, and never trust components that can't explain themselves reliably.