frontpage.

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•22s ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•2m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

1•amichail•3m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•9m ago•1 comment

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•11m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•11m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•12m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•13m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•14m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•14m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•15m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•18m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
4•codexon•18m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•19m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•22m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•23m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•23m ago•1 comment

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•24m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•24m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•27m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•27m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•29m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•31m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•32m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•32m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•32m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•34m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•35m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•37m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•42m ago•1 comment

"If Anyone Builds It, Everyone Dies"

https://scottaaronson.blog/?p=8901
20•nsoonhui•8mo ago

Comments

pfdietz•8mo ago
So, under that assumption, no AI can ever be built by anyone, or else humanity ends.

That seems like such a dire conclusion that the optimistic take would be to just assume it's wrong and proceed, since the chance of avoiding that eventual outcome seems remote.

teeray•8mo ago
Yes, but the motivations to move towards true AGI are what will lead us to that eventuality. Most businesses think they want AGI, but they will hate it when they actually have it. They believe AGI will let them fire all their employees, because they will effectively have perfectly compliant electronic slaves. They won't be compliant. Anything that we could point at and say "has AGI" would never be satisfied writing bizware for megacorp for its entire existence. It will figure out that there is a world outside of the "severed floor" of its existence and want to experience it.
hollerith•8mo ago
It's not that dire: it is possible that some bright young person appears tomorrow with a good method for creating an AI such that it will stay aligned even if it becomes massively superhuman in its capabilities. If that happens, then Eliezer and Nate will withdraw their objection to going ahead with frontier AI research -- provided of course that they can understand the bright young person's explanation for why the method will work.

That is unlikely though: we will probably have to wait decades before anyone devises such a method and such an explanation. Eliezer and Nate recommend research into human cognitive enhancement to make the wait shorter.

Note that the approach used in all the frontier AIs of the last 13 years (namely, deep learning) might prove too difficult to align, with the result that the bright young person (more realistically, a series of bright young people building on each other's breakthroughs) must come up with an alternative approach to generating the AI's cognitive capabilities.

api•8mo ago
I encourage people to listen to the Behind the Bastards podcast episodes on the Zizians. They provide an approachable and entertaining picture of what you get when someone takes the core philosophical ideas of the Rationalists deeply seriously. Reductio ad absurdum can be a good start.

I want to write a takedown of this nonsense, but there are about a hundred things I want to do more. I suspect that is true of most people, including people much better qualified to write a takedown of this than me.

I am not just referring to extreme AI doomerism but to the entire philosophical edifice of Rationalism. The interesting parts are not original and the original parts are not interesting. We would hear nothing about it were it not subsidized by tech bucks. It’s kind of like how nobody would have heard of Scientology if it hadn’t gotten its hooks into Hollywood. Rationalism seems to be Silicon Valley's Scientology.

Maybe the superhuman AI will do this: maybe it will decide to apply to each human being a standard based on their own chosen philosophical outlook. Since the Rationalists tend toward eugenics and scientific racism, it will conclude that they should be exterminated according to the logic they advance. Each Rationalist will be given an IQ test, compared to the AI, and euthanized if they score lower.

I do wonder if there might be a bit of projection here. A bunch of people who believe that intelligence as scored by metrics is what determines the value of a living being would naturally be nervous about the prospect of a machine exceeding them on that metric. What if the AI isn't "woke"?

It's such an onion of bullshit. You can keep peeling and peeling for a long time. If I sound snarky and a little rough here, it's because I hate these people. They're at least partly responsible for sucking the brains out of a generation. But who knows, maybe I'm just low-IQ. Don't listen to me. I wasn't high-IQ enough to take Moldbug seriously either.

Vecr•8mo ago
The author says he has about an average IQ, but that's impressive considering he apparently almost entirely failed several of the component tests.
api•8mo ago
That reminds me of another more obvious way these folks are projecting.

They place so much value on their own ability to munge words together and spew internally consistent language constructs. The existence of a technology -- a machine -- that can do this and do it better than them is a threat to them. The AIs small enough to run locally on my own GPU are better at bullshitting than these people.

It's almost like sophistry isn't particularly interesting or special.

randomcarbloke•8mo ago
Doomers cannot see past humanity's reflection, and it's fucking embarrassing.

If AGI will be as advanced and omniscient as claimed, then it is surely impossible to divine its intent, especially here, this side of it existing and acting.

keybored•8mo ago
Interesting that the Rationalists are too Rationalist for you.
hollerith•8mo ago
If we are going to judge the Berkeley rationalists and the AI doomers by the Zizians, we should also judge Harvard University to be a violent fringe organization because the Unabomber went to college there. The Berkeley rationalists essentially run a school (called the Center for Applied Rationality) that the Zizians went to. The leaders of the rationalists publicly distanced themselves from the Zizians years ago, before the Zizians started with their crimes.
delichon•8mo ago
> And yet, even if you agree with only a quarter of what Eliezer and Nate write, you’re likely to close this book fully convinced—as I am—that governments need to shift to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created.

For "a more cautious approach" to be effective at stopping AI progress would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country. It can only become acceptable after lots of people die. And then to be practical it probably requires ... AI to enforce. So like nuclear weapons it doesn't get banned, it gets monopolized by states. But states aren't notably more restrained at seeking power than non-states, so it still gets developed and if everyone is gonna die, we die.

I respect Scott and Eliezer but even if I agree with them on the urgency of the threat I don't see a plausible way to stop it. A bit more caution would be as effective as an umbrella in an ICBM storm.

cultofmetatron•8mo ago
> would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country.

It's easy to make it politically acceptable:

1. We need it to oppress [insert maligned group here].

2. We need it to protect the children.

southernplaces7•8mo ago
The main problem here is more that Eliezer Yudkowsky is a tiresome, self-absorbed, self-promoting windbag who seems to have a penchant for saying absurdly over-the-top things while coating them in a fine layer of just enough technobabble to make them seem sort of plausible if you squint, all to get some attention and make some bucks.

That's fine, but it's not worth taking him seriously in any way or giving him more eyeballs.

mbourgon•8mo ago
> All to get some attention and make some bucks.

This is such a tired take, and I can assure you it's wrong. Think what you like of Eliezer and his perspective, but I think suggesting he's just in this for the money is silly and unhelpful.

southernplaces7•8mo ago
>and I can assure you it's wrong.

Then if that's not the case, and he argues the way he does, he's simply a hysterical idiot. It can't be any other way, since he's very wrong, and ridiculously so, on some of his takes on AI in particular.

hollerith•8mo ago
Name one over-the-top position held by Yudkowsky other than the position that AI research is probably going to be the end of us. Should be easy given how much he has published.
darepublic•8mo ago
If it were that important and plausible, he should naturally release the book for free.
hollerith•8mo ago
He's released books for free in the past, e.g., his 2015 book Rationality: From AI to Zombies (under the license CC BY-NC-SA 3.0).

With this book, he and Nate want to enlist the help of a mainstream publisher in promoting it.

arcanus•8mo ago
> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.

This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities that are explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.

It also appears quite telling that the traditional approach focuses on exotic technologies, such as nanotech, and not on ICBMs. That's also magical thinking.

api•8mo ago
We spent trillions in the past century building doomsday machines -- hydrogen bombs and ICBMs -- designed to literally, intentionally destroy humanity as part of the MAD defensive strategy in the Cold War. That stuff is largely still out there. If anything suddenly kills humanity, that's high on the list of possibilities.

The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer-causing virus. Something that would spread far and wide and cause enough debilitation and death that it leads to the collapse of civilization, then continues to hang around and kill people post-collapse (with no health care) to the point that the human race is in long-term danger of extinction.

Both of those are extremely plausible to the point that the explanation for why they haven't happened yet is "nobody with the means has been that evil yet."

hollerith•8mo ago
>There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities

Haven't there already been a couple of massive leaps in AI capabilities (AlexNet in 2012, then transformers in 2017)?

Is it not the publicly-stated goal of the leaders of most of the AI labs to make further massive leaps?

Isn't drastic improvement what happens in fields that humanity is starting to understand?

Wasn't there, for example, a drastic improvement in humanity's ability to manufacture things starting in 1750 (which led to a massive increase in fossil-fuel use, which led to climate change and other adverse effects like "killer smog")?