But I like the idea that there is a term for this, be it Straussian Memes or something else. What I didn't quite get is how the "self-stabilizing" part works.
What I'd like is for TV anchors to get wise and start asking their interviewees, "What EXACTLY do you mean when you use this term ...?" But I guess they won't, because they too are happy to spread a meme that multiple communities can each embrace, since each understands it the way they prefer.
This is the core rhetorical tactic of the progressive left in a nutshell. Linguistic superposition, equivocation, Schrodinger's definition - whatever you want to call it, it's the ability to have your cake and eat it too by simply changing your definitions, or even someone else's, post hoc.
Let us take a moment to recall Orwell's English Socialism and its doublespeak.
When the left tries this today it produces an equal and opposite backlash and has no effect in terms of policy, winning elections, and that sort of thing, but it certainly can be a motor that keeps online bubbles bubbling.
I would hazard that you are underestimating the impact of these rhetorical tactics, but I've not the energy to aggressively litigate and cite this point further.
Depends of course on which definition of "socialism" you use. Didn't Hitler call his movement socialism as well? But I always associated "socialism" with "being social", which means taking into account other people's benefit as well, instead of trying to overpower them with propaganda and double-speak (and of course, violence).
If the goal is unlimited power to your party, to your leader, it would only make sense to lie to people as much as you can, to mislead them. To double-speak to them. If your goal is peaceful co-existence, then not so much.
And where there's smoke there is fire. Where there's Double-Speak, fascism is not far away.
Ironically Double-Speak succeeds because people are social beings, they really WANT to agree with others.
I live in Wyoming and have MAGA and ultra-progressive friends.
Multiple messaging is a hallmark of all elites. Sometimes it’s functional: being able to say something sharp that, if repeated, is ambiguous is a skill. Anyone who has any power or authority wields it. It is so common as to suggest it is a requirement. (Other times, multiple messaging lets one apologise in a public setting without making things awkward.)
In many respects, it’s an essential feature of commanding language. Compressing multiple meanings into fewer words is the essence of poetry and literature.
Aye, perhaps prompting is the be-all and end-all skill after all: the ability to distill an idea into its most concentrated, compressed essence, so it can be diluted, expanded, and reworded ad infinitum by the LLMs.
brb while I search for the word prompt that generated the universe...
Nobody said people haven’t rendered themselves unable to understand poetry or literature through the ages. Nor that these skills haven’t carried a distinct class marker.
Same here. Someone who relies on LLMs to speak and read will not be able to compete in a live environment. (Someone who uses them as a tool may gain an advantage. But that’s predicated on having the base skill.)
Furthermore, pretty much anybody could do the same: make AI responsible for all their speech, and even their actions. But the less we use our own brains, the less we learn, and so we cannot gain a competitive advantage over other AI users. The most rewarded original thoughts and ideas probably need to come from outside of AI, since AI is trained on people's original text output.
"You can't change the people around you -
But you can change the people around you."
Whereas in the example here, acting on that advice is costly (it means losing friends) but believing it is free. And there aren't different layers of meaning accessible to different parties. It's straightforwardly a play on words.
"Prep hop" videos, like https://www.youtube.com/watch?v=XPPpjU1UeAo or https://www.youtube.com/watch?v=L1N3WXZ_1LM , may get forwarded by members of different subcultures for two different reasons: the first because they appreciate the comic satirisation of others, and the second because they appreciate how the comics have sweated the details of their own subculture — "we are like that only".
Lagniappe: https://www.youtube.com/watch?v=McMSHqWM3G8
- Cost to the high (those who live on what they have): no good would come if we said it was real to folks who don't know (TMTC) and it could be bad, if they did not like that we have what we do.
We all stroll down the road of life, but if folks do not care to look at lanes not like theirs, it might make strife to both say the lanes are there, and not to laugh.
How do these costs sound?
>we have what we do
They may be a converse of the Scissor Statement, which has a dual meaning that is irreconcilable between the separate interpreters. (https://news.ycombinator.com/item?id=21190508)
In my head I think of it as just really high linguistic compression. Minus intent, it is just superimposing multiple true statements onto a small set of glyphs/phonemes.
It's always really context-sensitive. Context is the shared dictionary of linguistic compression, and you need to hijack it to get more meanings out of words.
Places to get more compression in:
- Ambiguity of subject/object with vague pronouns (and membership in plural pronouns)
- Ambiguity of English word-meaning collisions
- Lack of specificity in word choice.
- Ambiguity of emphasis in written language or in delivery; such statements can come out a bit flat when spoken.
Take a group of people in a situation:
- A is ill
- B poisoned A
- C is horrified about the situation but too afraid to say anything
- D thinks A is faking it.
- E is just really cool
"They really are sick" is uttered by an observer and we don't know how much of the above they have insight into.
I just get a kick out of finding statements like this for fun in my life. Doing it with intent is more complicated.
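To make the "shared dictionary" framing concrete, here is a minimal sketch (the listener labels and word mappings are made up for illustration, not taken from anywhere authoritative): the same compressed utterance decodes into different meanings depending on which context dictionary the listener brings to it.

```python
# Context as a shared dictionary: a crude word-level "decoder" where each
# listener expands the same ambiguous words using their own context.
compressed = ["they", "really", "are", "sick"]

# Hypothetical listener contexts, loosely based on the scenario above.
contexts = {
    "C (knows about the poisoning)": {"they": "A", "sick": "poisoned"},
    "D (thinks A is faking)":        {"they": "A", "sick": "pretending to be ill"},
    "E's admirers":                  {"they": "E", "sick": "really cool"},
}

for listener, dictionary in contexts.items():
    # Words missing from a listener's dictionary pass through unchanged.
    decoded = " ".join(dictionary.get(word, word) for word in compressed)
    print(f"{listener}: {decoded}")
```

One utterance, three decodings; the "hijack" is just arranging for each audience to hold the dictionary you want them to use.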
What the author describes seems more like strategic ambiguity, only slightly more specific. I don't think the label they try to coin here is a useful one.
Edit: Not sure why I was being coy. I'm talking about the Claremont Institute.
Who are these dudes?
Top right in this picture: https://pbs.twimg.com/media/GgTm194WIAEqak3?format=jpg&name=...
There are in-person meetups (primarily as a social group) in most large cities. At the meetups, there is no expectation that people have read the website, and these days you're more likely to encounter discussion of the Astral Codex Ten blog than of LessWrong itself. The website is run by a non-profit called LightCone Infrastructure that also operates a campus in Berkeley [2] that is the closest thing to a physical hub of the community.
The community is called "rationalists", and they all hate that name but it's too late to change it. The joke definition of a rationalist is by induction: Eliezer Yudkowsky is the base case for a rationalist, and then anyone who disagrees online with a rationalist is a rationalist.
There are two parallel communities. The first is called "sneer club", and they've bonded into a community over hating and mocking rationalists online. It's not a use of time or emotional energy that makes sense to me, but I guess it's harmless. The second is called "post-rationalism", and they've bonded over being interested in the same topics that rationalists are interested in, but without a desire to be rational about those topics. They're the most normie of the bunch, but at the same time they've also been a fertile source of weird small cults.
[1] https://en.wikipedia.org/wiki/LessWrong [2] https://www.lighthaven.space/
In the Dawkins sense, if the Dad’s use of the Santa myth makes the child feel happy, and preserves in some sense their innocence (ignorance of the world the way it really is), then the mother can recreate the same myth pattern elsewhere, most likely through family traditions.
Or, in Eco's semiotics, the parents are overcoding Santa and the child is undercoding Santa: same expressions but different interpretations between the two groups. Maybe childhood lives in that gap.
PaulHoule•1mo ago
The article itself is an example of something that overlaps to some extent with its subject without being an example of the subject, like all the examples in it. It's an intriguing idea, like "things you can't say", but without examples it falls flat. That won't bother the rationalists any more than they are bothered by Aella's "experiments", allegedly profound fanfics, adding different people's utility functions, or reasoning about the future without discounting. It's a hugbox.
Or maybe they can't find any examples because humans can't make them; only a hypothetical superhuman AI could.
https://www.youtube.com/watch?v=kyOEwiQhzMI
gsf_emergency_6•1mo ago
You absolutely need to turn the "insider-outsider" idea into a paragraph that will attract downvotes (EDIT: without reading like colloquial schizo ;)
PS: to what extent is your lack of engagement coming from a fear of counter-transference (/mirroring)?
lcuff•1mo ago
That said, I'm not impressed with the notion of Straussian memes and agree that way better examples are needed to give the idea some validity.
UniverseHacker•1mo ago
I suspect that the use of incredibly bad examples is some sort of intentional Straussian joke, and that the entire article itself, and not the examples in it, is supposed to be the real example of a Straussian meme.
dmichulke•1mo ago
And maybe that's the higher reading.