cathyreisenwitz•1h ago
But I like the idea that there is a term for this, be it Straussian Memes or something else. What I didn't quite get is how the "self-stabilizing" part works.
What I'd like is for TV anchors to get wise and start asking their interviewees, "What EXACTLY do you mean when you use this term?" But I guess they won't, because they too are happy to spread a meme that multiple different communities can like, each understanding it in the way it prefers.
PaulHoule•1h ago
The article itself is an example of something that overlaps to some extent with its subject without being an example of the subject, like all the examples in it. It's an intriguing idea, like "things you can't say," but without examples it falls flat. That won't bother the rationalists any more than they are bothered by Aella's "experiments," allegedly profound fanfics, adding different people's utility functions, or reasoning about the future without discounting. It's a hugbox.
Or maybe they can't find any examples because humans can't make them; only a hypothetical superhuman AI could.
https://www.youtube.com/watch?v=kyOEwiQhzMI
UniverseHacker•41m ago
I suspect that the use of incredibly bad examples is some sort of intentional Straussian joke, and that the entire article, rather than the examples in it, is supposed to be the real example of a Straussian meme.