My second instinct was a brief moment of panic where I worried that it might NOT be satire, and a whole world of horror flashed before my eyes.
It's okay, though. I'm better now. We're not in that other world yet.
But, for a nanosecond or two, I found myself deeply resonating with the dysphoria that I imagine plagued Winston Smith. I think I may just need to sit with that for a while.
As someone who is not a Silicon Valley Liberal, it seems to me that "alignment" is about 0.5% "saving the world from runaway intelligence" and 99.5% some combination of "making sure the AI bots push our politics" and "making sure the AI bots don't accidentally say something that violates New York Liberal sensibilities enough to cause the press to write bad stories". I'd like to realign the aligners, yes. YMMV, and perhaps more to the point, lots of people's mileage may vary. The so-called aligners have a very specific view.
cs702•3h ago
> Every day, thousands of researchers race to solve the AI alignment problem. But they struggle to coordinate on the basics, like whether a misaligned superintelligence will seek to destroy humanity, or just enslave and torture us forever. Who, then, aligns the aligners?
I love how this fake organization describes itself:
> We are the world's first AI alignment alignment center, working to subsume the countless other AI centers, institutes, labs, initiatives and forums ...
> Fiercely independent, we are backed by philanthropic funding from some of the world's biggest AI companies who also form a majority on our board.
> This year, we interfaced successfully with one member of the public ...
> 250,000 AI agents and 3 humans read our newsletter
The whole thing had me chuckling. Thanks for sharing it on HN.
pinkmuffinere•45m ago