frontpage.

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•3m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•12m ago•1 comment

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•16m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•20m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•22m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•31m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•35m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•36m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•42m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•42m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•43m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•44m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•49m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•1h ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•2h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comment

"If Anyone Builds It, Everyone Dies"

https://scottaaronson.blog/?p=8901
20•nsoonhui•8mo ago

Comments

pfdietz•8mo ago
So, under that assumption, no AI can ever be built by anyone, ever, or else humanity ends.

That seems like such a dire conclusion that the optimistic take would be to just assume it's wrong and proceed, since the chance of avoiding that eventual outcome seems remote.

teeray•8mo ago
Yes, but the motivations driving us toward true AGI are what will lead us to that eventuality. Most businesses think they want AGI, but they will hate it when they actually have it. They believe AGI will let them fire all their employees, because they will effectively have perfectly compliant electronic slaves. They won't be compliant. Anything we could point at and say "has AGI" would never be satisfied writing bizware for a megacorp for its entire existence. Such AIs will figure out that there is a world outside of the "severed floor" of their existence and want to experience it.
hollerith•8mo ago
It's not that dire: it is possible that some bright young person appears tomorrow with a good method for creating an AI such that it will stay aligned even if it becomes massively superhuman in its capabilities. If that happens, then Eliezer and Nate will withdraw their objection to going ahead with frontier AI research -- provided of course that they can understand the bright young person's explanation for why the method will work.

That is unlikely though: we will probably have to wait decades before anyone devises such a method and such an explanation. Eliezer and Nate recommend research into human cognitive enhancement to make the wait shorter.

Note that the approach used in all the frontier AIs of the last 13 years (namely, deep learning) might prove too difficult to align, with the result that the bright young person (more realistically, a series of bright young people building on each other's breakthroughs) must come up with an alternative approach to generating the AI's cognitive capabilities.

api•8mo ago
I encourage people to listen to Behind the Bastards podcast on the Zizians. It provides an approachable and entertaining picture of what you get when someone takes the core philosophical ideas of the Rationalists deeply seriously. Reductio ad absurdum can be a good start.

I want to write a takedown of this nonsense, but there are about a hundred things I want to do more. I suspect that is true of most people, including people much better qualified to write a takedown of this than me.

I am not just referring to extreme AI doomerism but to the entire philosophical edifice of Rationalism. The interesting parts are not original and the original parts are not interesting. We would hear nothing about it were it not subsidized by tech bucks. It’s kind of like how nobody would have heard of Scientology if it hadn’t gotten its hooks into Hollywood. Rationalism seems to be Silicon Valley's Scientology.

Maybe the superhuman AI will decide to apply to each human being a standard based on their own chosen philosophical outlook. Since the Rationalists tend toward eugenics and scientific racism, it will conclude that they should be exterminated according to the logic they advance: each Rationalist will be subjected to an IQ test, compared to the AI, and euthanized if they score lower.

I do wonder if there might be a bit of projection here. A bunch of people who believe that raw, scored intelligence according to metrics is what determines the value of a living being would be nervous about the prospect of that metric being exceeded by a machine. What if the AI isn't "woke?"

It's such an onion of bullshit. You can keep peeling and peeling for a long time. If I sound snarky and a little rough here it's because I hate these people. They're at least partly responsible for sucking the brains out of a generation. But who knows maybe I'm just low IQ. Don't listen to me. I wasn't high-IQ enough to take Moldbug seriously either.

Vecr•8mo ago
The author says he has about an average IQ, but that's impressive considering he apparently almost entirely failed several of the component tests.
api•8mo ago
That reminds me of another more obvious way these folks are projecting.

They place so much value on their own ability to munge words together and spew internally consistent language constructs. The existence of a technology -- a machine -- that can do this and do it better than them is a threat to them. The AIs small enough to run locally on my own GPU are better at bullshitting than these people.

It's almost like sophistry isn't particularly interesting or special.

randomcarbloke•8mo ago
Doomers cannot see past humanity's reflection and it's fucking embarrassing.

If AGI will be as advanced and omniscient as claimed, then it is surely impossible to divine its intent, especially here, on this side of it existing and acting.

keybored•8mo ago
Interesting that the Rationalists are too Rationalist for you.
hollerith•8mo ago
If we are going to judge the Berkeley rationalists and the AI doomers by the Zizians, we should also judge Harvard University to be a violent fringe organization because the Unabomber went to college there. The Berkeley rationalists essentially run a school (called the Center for Applied Rationality) that the Zizians went to. The leaders of the rationalists publicly distanced themselves from the Zizians years ago, before the Zizians started committing their crimes.
delichon•8mo ago
> And yet, even if you agree with only a quarter of what Eliezer and Nate write, you’re likely to close this book fully convinced—as I am—that governments need to shift to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created.

For "a more cautious approach" to be effective at stopping AI progress would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country. It can only become acceptable after lots of people die. And then to be practical it probably requires ... AI to enforce. So like nuclear weapons it doesn't get banned, it gets monopolized by states. But states aren't notably more restrained at seeking power than non-states, so it still gets developed and if everyone is gonna die, we die.

I respect Scott and Eliezer but even if I agree with them on the urgency of the threat I don't see a plausible way to stop it. A bit more caution would be as effective as an umbrella in an ICBM storm.

cultofmetatron•8mo ago
> would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country.

It's easy to make it politically acceptable:

1. We need it to oppress [insert maligned group here].

2. We need it to protect the children.

southernplaces7•8mo ago
The main problem here is more that Eliezer Yudkowsky is a tiresome, self-absorbed, self-promoting windbag who seems to have a penchant for saying absurdly over the top things while coating them in a fine layer of just enough technobabble to make them seem sort of plausible if you squint, all to get some attention and make some bucks.

That's fine, but it's not worth in any way taking him seriously or giving him more eyeballs.

mbourgon•8mo ago
> All to get some attention and make some bucks.

This is such a tired take, and I can assure you it's wrong. Think what you like of Eliezer and his perspective, but I think suggesting he's just in this for the money is silly and unhelpful.

southernplaces7•8mo ago
>and I can assure you it's wrong.

Then if it's not the case, and he argues the way he does, he's simply a hysterical idiot. It can't be any other way, since he's very wrong and ridiculously so on some of his takes on AI in particular.

hollerith•8mo ago
Name one over-the-top position held by Yudkowsky other than the position that AI research is probably going to be the end of us. Should be easy given how much he has published.
darepublic•8mo ago
If it were that important and plausible, he should naturally release the book for free.
hollerith•8mo ago
He's released books for free in the past, e.g., his 2015 book Rationality: From AI to Zombies (under the license CC BY-NC-SA 3.0).

With this book, he and Nate want to enlist the help of a mainstream publisher in promoting the book.

arcanus•8mo ago
> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.

This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities that are explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.

It also appears quite telling that the traditional approach focuses on exotic technologies, such as nanotech, and not ICBMs. That's also magical thinking.

api•8mo ago
We literally spent trillions in the past century building doomsday machines -- hydrogen bombs and ICBMs -- to literally, intentionally destroy humanity as part of the MAD defensive strategy in the Cold War. That stuff is largely still out there. If anything suddenly kills humanity, that's high on the list of possibilities.

The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer-causing virus. Something that would spread far and wide and cause enough debilitation and death that it leads to the collapse of civilization, then continues to hang around and kill people post-collapse (with no health care) to the point that the human race is in long-term danger of extinction.

Both of those are extremely plausible to the point that the explanation for why they haven't happened yet is "nobody with the means has been that evil yet."

hollerith•8mo ago
>There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities

Haven't there already been a couple of massive leaps in AI capabilities (AlexNet in 2012, then transformers in 2017)?

Is it not the publicly-stated goal of the leaders of most of the AI labs to make further massive leaps?

Isn't drastic improvement what happens in fields that humanity is starting to understand?

Wasn't there, for example, a drastic improvement in humanity's ability to manufacture things starting in 1750 (which led to a massive increase in fossil-fuel use, which led to climate change and other adverse effects like "killer smog")?