"Rationalism is such an insane name for a school of thought. Like calling your ideology correctism or winsargumentism"
I fully accept that the rationalist community may have morphed into something far beyond that original tenet, but I think rationalism just describes the approach, not that it's the "one true philosophy".
I guess it's possible you might be doing some deep ironic thing by providing a seemingly sincere example of what I'm complaining about. If so it was over my head but in that case I withdraw "derail"!
You can be rational and objective about a given topic without it meaning that the conversation is closed, or that all knowledge has been found. So I'm certainly not a fan of cult dynamics, but I think it's easy to throw an unfair charge at these groups: that their interest in the topic necessitates an absolutist disposition.
What the actual f. This is such an insane thing to read and take in that I might need to go and sit in silence for the rest of the day.
How did we get to this place with people going completely nuts like this?
Use your mind to control reality, reality fights back with paradox; it's cool for a teenager, but read a bit more fantasy and you'll definitely find cooler stuff. But I guess for you to join a cult your mind must stay a teen mind forever.
All of the World Of Darkness and Chronicles Of Darkness games are basically about coming of age/puberty. Like X-Men but for Goth-Nerds instead of Geek-Nerds.
In Vampire, your body is going through weird changes and you are starting to develop, physically and/or mentally, while realising that the world is run by a bunch of old, evil fools who still expect you to toe the line and stay in your place, but you are starting to wonder if the world wouldn't be better if your generation overthrew them and took over running the world, doing it the right way. And there are all these bad elements trying to convince you that you should do just that, but for the sake of mindless violence and raucous partying. Teenager - the RPG.
In Werewolf, your body is going through weird changes and you are starting to develop, physically and mentally, while realising that you are not a part of the "normal" crowd that the rest of Humanity belongs to. You are different and they just can't handle that whenever it gets revealed. Luckily, there are small communities of people like you out there who take you in and show you how to use the power of your "true" self. Of course, even among this community, there are different types of other. LGBT Teenager - the RPG
In Mage, you have begun to take an interest in the real world, and you think you know what the world is really like. The people all around you are just sleep-walking through life, because they don't really get it. This understanding sets you against the people who run the world: the governments and the corporations, trying to stop these sleepers from waking up to the truth and rejecting their comforting lies. You have found some other people who saw through them, and you think they've got a lot of things wrong, but at least they're awake to the lies! Rebellious Teenager - the RPG
I had friends who were into Vampire growing up. I hadn’t heard of Werewolf until after the aforementioned book came out and people started going nuts for it. I mentioned to my wife at the time that there was this game called “Vampire” and told her about it and she just laughed, pointed to the book, and said “this is so much better”. :shrug:
Rewind back and there were the Star Wars kids. Fast forward and there are the Harry Potter kids/adults. Each generation has their own “thing”. During that time, it was Quake MSDOS and Vampire. Oh and we started Senior Assassinations. 90s super soakers were the real deal.
Twist: we’re sleepwalking through life because we really DO get it.
(Source: I’m 56)
Such a setting would seem like the perfect backdrop for a cult that claims "we have the power to subtly influence reality and make improbable things (ie. "magic") occur".
But, fwiw, that particular role-playing game was very much based on occult beliefs that were trendy at the time, like chaos magic, so it's not completely off the wall.
Like how Christians have always been very flexible in following the Ten Commandments.
I’ve never played, but now I’m kind of interested.
It's nuts.
There are at least a dozen I can think of, including the ‘drink the Kool-Aid’ Jonestown massacre.
People be crazy, yo.
Which actually kinda existed/exists too? [https://en.m.wikipedia.org/wiki/Nichirenism], right down to an attempted coup and a bunch of assassinations [https://en.m.wikipedia.org/wiki/League_of_Blood_Incident].
Now you know. People be whack.
> Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom second coming second coming second coming second coming second coming second coming second coming second coming second coming second coming angels angels angels angels angels angels angels angels angels angels end end times times end times end times end times end times end times end times end times end times end times Key Words: (for search engines) 144,000, Abductees, Agnostic, Alien, Allah, Alternative, Angels, Antichrist, Apocalypse, Armageddon, Ascension, Atheist, Awakening, Away Team, Beyond Human, Blasphemy, Boddhisattva, Book of Revelation, Buddha, Channeling, Children of God, Christ, Christ's Teachings, Consciousness, Contactees, Corruption, Creation, Death, Discarnate, Discarnates, Disciple, Disciples, Disinformation, Dying, Ecumenical, End of the Age, End of the World, Eternal Life, Eunuch, Evolution, Evolutionary, Extraterrestrial, Freedom, Fulfilling Prophecy, Genderless, Glorified Body, God, God's Children, God's Chosen, God's Heaven, God's Laws, God's Son, Guru, Harvest Time, He's Back, Heaven, Heaven's Gate, Heavenly Kingdom, Higher Consciousness, His Church, Human Metamorphosis, Human Spirit, Implant, Incarnation, Interfaith, Jesus, Jesus' Return, Jesus' Teaching, Kingdom of God, Kingdom of Heaven, Krishna Consciousness, Lamb of God, Last Days, Level Above Human, Life After Death, Luciferian, Luciferians, Meditation, Members of the Next Level, Messiah, Metamorphosis, Metaphysical, Millennium, Misinformation, Mothership, Mystic, Next Level, Non Perishable, Non Temporal, Older Member, Our Lords Return, Out of Body Experience, Overcomers, Overcoming, Past Lives, Prophecy, Prophecy Fulfillment, Rapture, Reactive Mind, Recycling the Planet, Reincarnation, Religion, Resurrection, Revelations, Saved, Second Coming, Soul, Space Alien, Spacecraft, Spirit, Spirit Filled, Spirit Guide, Spiritual, Spiritual Awakening, Star People, Super Natural, Telepathy, The Remnant, The Two, Theosophy, Ti and Do, Truth, Two Witnesses, UFO, Virginity, Walk-ins, Yahweh, Yeshua, Yoda, Yoga,
https://www.vice.com/en/article/a-suicide-cults-surviving-me...
I know the church of Scientology wants you to crit that roll of tithing.
I shouldn't LOL at this but I must. We're all gonna die in these terrible times but at least we'll LOL at the madness and stupidity of it all.
Some percentage of the population has a lesser need for a belief system (supernatural, ad hoc, or anything else) but in general, most humans appear to be hardcoded for this need and the overlap doesn't align strictly with atheism. For the atheist with a deep need for something to believe in, the results can be ugly. Though far from perfect, organized religions tend to weed out their most destructive beliefs or end up getting squashed by adherents of other belief systems that are less internally destructive.
Atheists tend to not have those consistently and must build their own.
For those not familiar enough with the bible to know what to look for to find the wild stuff, look up the story of Elisha summoning bears out of the forest to maul children for calling him bald, or the last two chapters of Daniel (which I think are only in the Catholic bible) where he literally blows up a dragon by feeding it a cake.
[1]: https://en.wikipedia.org/wiki/Real_presence_of_Christ_in_the...
Take Barbara Hepworth's "Two Figures", a sculpture which is just sat there on the campus where I studied for many years (and where I also happen to work today). What's going on there? I'm not sure.
Sculpture of ideals I get. Liberty, stood on her island, Justice (with or without her blindfold, but always carrying not only the scales but also a sword†). I used to spend a lot of time in the hall where "The Meeting Place" is. They're not specific people, they're an idea, they're the spirit of the purpose of this place (a railway station, in fact a major international terminus). That's immediately understood, yeah.
But I did not receive an immediate understanding of "Two Figures". It's an interesting piece. I still occasionally stop and look at it as I walk across the campus, but I couldn't summarise it in a sentence even now.
† when you look at that cartoon of the GOP operatives with their hands over Justice's mouth, keep in mind that out of shot she has a sword. Nobody gets out of here alive.
I think the true meaning has been lost to time. The Hebrew text has been translated and rewritten so many times that it's practically a children's book now. The original texts of the Dead Sea scrolls are bits and pieces of that long-lost story. All we have left are the transliterations of transliterations.
As for my choice of the word "thugs" ("mob" would be another good word), that is necessary to preserve the connotation. Remember, there were 42 of them punished, possibly more escaped - this is a threatening crowd size (remember the duck/horse meme?). Their claimed youth does imply "not an established veteran of the major annual wars", but that's not the same as "not acquainted with violence".
"Then Daniel took pitch, and fat, and hair, and did seethe them together, and made lumps thereof: this he put in the dragon's mouth, and so the dragon burst in sunder: and Daniel said, Lo, these are the gods ye worship."
See also: https://www.episcopalchurch.org/glossary/real-presence/
"Belief in the real presence does not imply a claim to know how Christ is present in the eucharistic elements. Belief in the real presence does not imply belief that the consecrated eucharistic elements cease to be bread and wine."
There’s a fine line between suspension of disbelief and righteousness. All it takes is for one to believe their own delusion.
https://www.goodreads.com/quotes/366635-there-are-two-novels...
Very similar to my childhood religion. "We have figured everything out and everyone else is wrong for not figuring things out".
Rationalism seems like a giant castle built on sand. They just keep accruing premises without ever going backwards to see if those premises make sense. A good example of this is their notions of "information hazards".
It’s also a hard book to read so it may be smart kids trying to signal being smart.
I can't help but think it's probably the "favourite book" of a lot of people who haven't finished it though, possibly to a greater extent than any other secular tome (at least LOTR's lightweight fans watched the movies!).
I mean, if you've only read the blurb on the back it's the perfect book to signal your belief in free markets, conservative values and the American Dream: what could be a more strident defence of your views than a book about capitalists going on strike to prove how much the world really needs them?! If you read the first few pages, it's satisfyingly pro-industry and contemptuous of liberal archetypes. If you trudge through the whole thing, it's not only tedious and odd but contains whole subplots devoted to dumping on core conservative values (religion bad, military bad, marriage vows not that important really, and a rather jaded take on actually extant capitalism) in between the philosopher pirates and the jarring absence of private transport, and the resolution is an odd combination of a handful of geniuses running away to form a commune and the world being saved by a multi-hour speech about philosophy which has surprisingly little to say on market economics...
Oh, there’s movies for lazy Rand fans, too.
https://www.imdb.com/title/tt0480239/
More of a Fountainhead fan, are you? Do ya like Gary Cooper and Patricia Neal?
tbf that comment was about 50% a joke about their poor performance at the box office :D
Though a Gary Cooper The Fountainhead does tempt me on occasion. (Unlike Atlas Shrugged, The Fountainhead wasn’t horrible, but still some pretty poor writing.)
Ira Levin did a much better job of it and showed what it would lead to but his 'This Perfect Day' did not - predictably - get the same kind of reception as Atlas Shrugged did.
Crazy people have always existed (especially cults), but I'd argue recruitment numbers are through the roof thanks to technology and a failing economic environment (instability makes people rationalize crazy behavior).
It's not that those groups didn't have visibility before, it's just easier for the people who share the same...interests...to cloister together on an international scale.
I do not think this cult dogma is any more out there than other cult dogma I have heard, but the above quote makes me think it is easier to found cults in the modern day in some ways, since you can steal complex world-building from numerous sources rather than building it yourself and keeping everything straight.
Narcissists tend to believe that they are always right, no matter what the topic is, or how knowledgeable they are. This makes them speak with confidence and conviction.
Some people are very drawn to confident people.
If the cult leader has other mental health issues, it can/will seep into their rhetoric. Combine that with unwavering support from loyal followers that will take everything they say as gospel...
That's about it.
Outside of those, the cult dynamics are cut-paste, and always involve an entitled narcissistic cult leader acquiring as much attention/praise, sex, money, and power as possible from the abuse and exploitation of followers.
Most religion works like this. Most alternative spirituality works like this. Most finance works like this. Most corporate culture works like this. Most politics works like this.
Most science works like this. (It shouldn't, but the number of abused and exploited PhD students and post-docs is very much not zero.)
The only variables are the differing proportions of attention/praise, sex, money, and power available to leaders, and the amount of abuse that can be delivered to those lower down and/or outside the hierarchy.
The hierarchy and the realities of exploitation and abuse are a constant.
If you removed this dynamic from contemporary culture there wouldn't be a lot left.
Fortunately quite a lot of good things happen in spite of it. But a lot more would happen if it wasn't foundational.
On the other hand, there's a whole other side of it: a few nutjobs who really behave like cult leaders, who believe their own bullshit and over time manage to find a lot of "followers" in this community. Since one of the foundational aspects is radical acceptance, it becomes very easy to be nutty and not be questioned (unless you do something egregious).
Human brains are lazy Bayesian engines. In uncertainty, we grasp for simple, all-explaining models (heuristics). Mage provides this: a complete ontology where magic equals psychology/quantum woo, reality is malleable, and the camp leaders are the enlightened "tradition." This offers relief from the exhausting ambiguity of real life. Dill didn't invent this; he plugged into the ancient human craving for a map that makes the world feel navigable and controllable. The "rationalist" veneer is pure camouflage. It feels like critical thinking but is actually pseudo-intellectual cargo culting. This isn't Burning Man's fault. It's the latest step of a 2,500-year-old playbook. The Gnostics and the Hermeticists provided ancient frameworks where secret knowledge ("gnosis") granted power over reality, accessible only through a guru. Mage directly borrows from this lineage (The Technocracy, The Traditions). Dill positioned himself as the modern "Ascended Master" dispensing this gnosis.
The 20th century cults Synanon, EST, Moonies, NXIVM all followed similar patterns, starting with isolation. Burning Man's temporary city is the perfect isolation chamber. It's physically remote, temporally bounded (a "liminal space"), fostering dependence on the camp. Initial overwhelming acceptance and belonging (the "Burning Man hug"), then slowly increasing demands (time, money, emotional disclosure, sexual access), framed as "spiritual growth" or "breaking through barriers" (directly lifted from Mage's "Paradigm Shifts" and "Quintessence"). Control language ("sleeper," "muggle," "Awakened"), redefining reality ("that rape wasn't really rape, it was a necessary 'Paradox' to break your illusions"), demanding confession of "sins" (past traumas, doubts), creating dependency on the leader for "truth."
Burning Man attracts people seeking transformation, often carrying unresolved pain. Cults prey on this vulnerability. Dill allegedly targeted individuals with trauma histories. Trauma creates cognitive dissonance and a desperate need for resolution. The cult's narrative (Mage's framework + Dill's interpretation) offers a simple explanation for their pain ("you're unAwakened," "you have Paradox blocking you") and a path out ("submit to me, undergo these rituals"). This isn't therapy; it's trauma bonding weaponized. The alleged rape wasn't an aberration; it was likely part of the control mechanism. It's a "shock" to induce dependency and reframe the victim's reality ("this pain is necessary enlightenment"). People are adrift in ontological insecurity (fear about the fundamental nature of reality and self). Mage offers a new grand narrative with clear heroes (Awakened), villains (sleepers, Technocracy), and a path (Ascension).
I'm a staunch atheist and I feel the pull all the time.
I may have actually been less anxious about the state of the world back then, and may have remained so, if I'd just continued to ignore all those little contradictions that I just couldn't ignore anymore... But I feel SO MUCH less personal guilt about being "human".
But looking into the underlying Western Esoteric Spirit Science, 'Anthroposophy' (because Theosophy wouldn't let him get weird enough) by Rudolf Steiner, has been quite a ride. The point being that... humans have a pretty endless capacity to go ALL IN on REALLY WEIRD shit, as long as it promises to fix their lives if they do everything they're told. Naturally if their lives aren't fixed, then they did it wrong or have karmic debt to pay down, so YMMV.
In any case, I'm considering the latent woo-cult atmosphere as a test of the skeptical inoculation that I've tried to raise my child with.
If you’re talking about grade school, interview whoever is gonna be your kid’s teacher for the next X years and make sure they seem sane. If you’re talking about high school, give a really critical look at the class schedule.
Waldorf schools can vary a lot in this regard so you may not encounter the same problems I did, but it’s good to be cautious.
Ayahuasca?
I’m inclined to believe your upbringing plays a much larger role.
I think people are going nuts because we've drifted from the dock of a stable civilization. Institutions are a mess. Economy is a mess. Combine all of that together with the advent of social media making the creation of echo chambers (and the inevitable narcissism of "leaders" in those echo chambers) effortless and ~15 years later, we have this.
The only scary thing is that they have ever more power to change the world and influence others without being forced to grapple with that responsibility...
Why do you imagine that? Have you tested it?
https://www.vice.com/en/article/the-tale-of-the-final-fantas...
See for example "Reality Distortion Field": https://en.wikipedia.org/wiki/Reality_distortion_field
God died and it's been rough going since then.
https://www.thenewatlantis.com/publications/rational-magic
and its discussion on HN: https://news.ycombinator.com/item?id=35961817
Quite possibly, places like Reddit and Hacker News are training for the required level of intellectual smugness, and the certitude that you can dismiss every annoying argument with a logical fallacy.
That sounds smug of me, but I’m actually serious. One of their defects is that once you memorize all the fallacies (“Appeal to authority,” “Ad hominem,”) you can easily reach the point where you more easily recognize the fallacies in everyone else’s arguments than in your own. You more easily doubt other people’s cited authorities than your own. You slap “appeal to authority” against a disliked opinion, while citing an authority next week for your own. It’s a fast path from there to perceived intellectual superiority, and an even faster path from there into delusion. Rational delusion.
A contradiction creates a structural fallacy; if you find one, it's a fair belief that at least one of the supporting claims is false. In contrast, appeal to authority is probabilistic: we don't know, given the current context, if the authority is right, so they might be wrong... But we don't have time to read the universe into this situation so an appeal to authority is better than nothing.
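A minimal sketch of that probabilistic reading (Python, with made-up reliability numbers for a hypothetical expert): the authority's word shifts the odds, it doesn't settle them.

    # Treating "an expert asserts X" as probabilistic evidence about X via
    # Bayes' rule. The reliability numbers below are invented for illustration.

    def posterior(prior, p_assert_if_true, p_assert_if_false):
        # P(X is true | expert asserts X) for a binary claim X.
        num = p_assert_if_true * prior
        return num / (num + p_assert_if_false * (1 - prior))

    # An expert who asserts true claims 90% of the time and false ones 20%
    # of the time moves a coin-flip prior to ~0.82: real evidence, not proof.
    print(posterior(0.5, 0.9, 0.2))  # 0.818...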
... and this observation should be coupled with the observation that the school of rhetoric wasn't teaching a method for finding truth; it was teaching a method for beating an opponent in a legal argument. "Appeal to authority is a logical fallacy" is a great sword to bring to bear if your goal is to turn off the audience's ability to ask whether we should give the word of the environmental scientist and the washed-up TV actor equal weight on the topic of environmental science...
These "maybes" are on the table. They are probably not the case.
(You end up with a spread of likelihoods and have to decide what to do with them. And law hates a spread of likelihoods and hates decision-by-coinflips, so one can see how rhetorical traditions grounded in legal persuasion tend towards encouraging Boolean outcomes; you can't find someone "a little guilty," at least not in the Western tradition of justice).
I guess I'm a radical skeptic, secular humanist, utilitarianish sort of guy, but I'm not dumb enough to think throwing around the words "bayesian prior" and "posterior distribution" makes actually figuring out how something works or predicting the outcome of an intervention easy or certain. I've had a lot of life at this point and gotten to some level of mastery at a few things and my main conclusion is that most of the time it's just hard to know stuff and that the single most common cognitive mistake people make is too much certainty.
There's a point where more passive thinking stops adding value and starts subtracting sanity. It's pretty easy to get to that point. We've all done it.
This is a common sentiment but is probably not entirely true. A great example is cosmology. Yes, more data would make some work easier, but astrophysicists and cosmologists have shown that you can gather and combine existing data and look at it in novel ways to produce unexpected results, like placing bounds that can include/exclude various theories.
I think a philosophy that encourages more analysis rather than sitting back on our laurels with an excuse that we need more data is good, as long as it's done transparently and honestly.
The qualifier "normally" already covers "not entirely true". Of course it's not entirely true. It's mostly true for us now. (In fact twenty years ago we used more numerical models than we do now, because we were facing more unsolved problems where the solution was pretty well knowable just by doing more complicated calculations, but without taking more data. Back then, when people started taking lots of data, it was often a total waste of time. But right now, most of those problems seem to be solved. We're facing different problems that seem much harder to model, so we rely more on data. This stage won't be permanent either.)
It's not a sentiment, it's a reality that we have to deal with.
And I think you missed the main point of my reply: that people often think we need more data, but cleverness and ingenuity can often find a way to make meaningful progress with existing data. Obviously I can't make any definitive judgment about your specific case, but I'm skeptical of any claim that even a genius like Einstein, analyzing your problem, could get no further than you have.
I read your point and answered it twice. Your latest response seems to indicate that you're ignoring those responses. For example you seem to suggest that I'm "claim[ing] that it's out of the realm of possibility" for "Einstein" to make progress on our work without taking more data. But anyone can hit "parent" a few times and see what I actually claimed. I claimed "mostly" and "for us where I work". I took the time to repeat that for you. That time seems wasted now.
Perhaps you view "getting more data" as an extremely unpleasant activity, to be avoided at all costs? You may be an astronomer, for example. Or maybe you see taking more data before thinking as some kind of admission of defeat? We don't use that kind of metric. For us it's a question of the cheapest and fastest way to solve each problem.
If modeling is slower and more expensive than measuring, we measure. If not, we model. You do you.
If you are talking about cosmology? Yea, you can look at existing data in new ways, cause you probably have enough data to do that safely.
If you are looking at human psychology? Looking at existing data in new ways is essentially p-hacking. And you probably won’t ever have enough data to define a “universal theory of the human mind”.
The first is diffusion of power. Social media is powered by charisma, and while it is certainly true that personality-based cults are nothing new, the internet makes it way easier to form one. Contrast that with academic philosophy. People can have their own little fiefdoms, and there is certainly abuse of power, but rarely concentrated in such a way that you see within rationalist communities.
The second (and more idealistic) is that the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing." People in academic philosophy are on the whole happy to provide a gloss on a gloss on some important thinker, or some kind of incremental improvement over somebody else's theory. This makes it extremely boring, and yet, not nearly as susceptible to delusions of grandeur. True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy. They mostly seem to dedicate their time to providing post-hoc justifications for the most banal unquestioned assumptions of their subset of contemporary society.
Taking academic philosophy seriously, at least as an historical phenomenon, would require being educated in the humanities, which is unpopular and low-status among Rationalists.
Nuh-uh! Eliezer Yudkowsky wrote that his mother made this mistake, so he's made sure to say things in the right order for the reader not to make this mistake. Therefore, true Rationalists™ are immune to this mistake. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu...
(I call it neorationalism because it is philosophically unrelated to the more traditional rationalism of Spinoza and Descartes.)
That is the bullshit part.
People are trying to make sense of this. For example:
The Canadian government heavily subsidizes junk food, then spends heavily on healthcare because of the resulting illnesses. It restricts and limits healthy food through supply management and promotes a “food pyramid” favoring domestic unhealthy food. Meanwhile, it spends billions marketing healthy living, yet fines people up to $25,000 for hiking in forests and zones cities so driving is nearly mandatory.
Government is an easy target for irrational behaviours.
Your rant about government or not being allowed to hike in some places in Canada is unrelated to the issue.
But from a societal cohesion or perhaps even an ethical point of view it's just pure irrationality.
When typing the post, I was thinking of different levels of government and the changing ideologies of politicians, leaving inconsistent governance.
The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.
I agree that the term "rationalist" would appeal to many people, and the obvious need to belong to a group plays a huge role.
And I say this as a Christian. I often think that becoming a state religion was the worst thing that ever happened to Christianity, or any religion, because then it unavoidably becomes a tool for power and authority.
And doing the same with other ideas or ideologies is no different. Look at what happened to communism, capitalism, or almost any other secular idea you can think of: the moment it becomes established, accepted, and official, the corruption sets in.
The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?
Obviously you and I have very different definitions of "obvious"
It seems to not be true, but I still maintain that it was obvious. Sometimes people don't pick the low-hanging fruit.
This is the Internet, you're allowed to say "they are obsessed with unlimited drugs and weird sex things, far beyond what even the generally liberal society tolerates".
I'm increasingly convinced that every other part of "Rationalism" is just distraction or justification for those; certainly there's a conscious decision to minimize talking about this part on the Internet.
This article describes “rationalism” as described in LessWrong and the sequences by Eliezer Yudkowsky. A good amount of it is based on empirical findings from psychology and behavioral science. It’s called “rationalism” because it seeks to correct common reasoning heuristics that are purported to lead to incorrect reasoning, not in contrast to empiricism.
Many of their "beliefs" - super-duper intelligence, doom - are clearly not believed by the market; observing the market is a kind of empiricism, and it's completely discounted by the lw-ers.
I am pretty sure many of the LessWrong posts are about how to understand the meaning of different types of data and are very much about examining, developing, criticizing a rich variety of empirical attitudes.
To quote one of the core foundational articles: "Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly." (https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can...)
One can argue how well the community absorbs the lesson, but this certainly seems to be a much higher standard than average.
The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally
I actually don't mind Yudkowsky as an individual - I think he is almost always wrong and undeservedly arrogant, but mostly sincere. Yet treating him as an AI researcher and serious philosopher (as opposed to a sci-fi essayist and self-help writer) is the kind of slippery foundation that less scrupulous people can build cults from. (See also Maharishi Mahesh Yogi and related trends - often it is just a bit of spiritual goofiness as with David Lynch, sometimes you get a Charles Manson.) L. Ron Hubbard is more like the Zizians.
[1] https://www.lesswrong.com/posts/8viKzSrYhb6EFk6wg/why-yudkow...
I'm reminded of a silly social science article I read, quite a long time ago. It suggested that physicists only like to study condensed matter crystals because physics is a male-dominated field, and crystals are hard rocks, and, um ... men like to think about their rock-hard penises, I guess. Now, this hypothesis obviously does not survive cursory inspection - if we're gendering natural phenomena studied by physicists, are waves male? Are fluid dynamics male?
However, Mr. Yudkowsky's weird hangups here around rigidity and hardness have me adjusting my priors.
Church, cult, cult, church. So we'll get bored someplace else every Sunday. Does this really change our everyday lives?
https://en.wikipedia.org/wiki/Nancy_Cartwright#Personal_life
I think it was a relative of his claiming this.
Outside of sexuality and the proclivities of their leaders, emphasis on physical domination of the self is lacking. The brain runs wild, the spirit remains aimless.
In the Bay, the difference between the somewhat well-adjusted "rationalists" and those very much "in the mush" is whether or not someone tells you they're in SF or "on the Berkeley side of things"
many such cases
(One of my favorite TED talks was about a failed experiment in introducing traditional Western agriculture to a people in Zambia. It turns out when you concentrate too much food in one place, the hippos come and eat it all and people can't actually out-fight hippos in large numbers. In hindsight, the people running the program should have asked how likely it was that folks in a region that had exposure to other people's agriculture for thousands of years, hadn't ever, you know... tried it. https://www.ted.com/talks/ernesto_sirolli_want_to_help_someo...)
see: bitcoin
Because empathy is hard.
All of it has the appearance of sounding so smart, and a few sites were genuine. But it got taken over.
A significant part of this is the intersection of the cult with money and status - this stuff really took off once prominent SV personalities became associated with it, and got turbocharged when it started intersecting with the angel/incubator/VC scene, when there was implicit money involved.
It's unusually successful because -- for a time at least -- there was status (and maybe money) in carrying water for it.
Sometimes history really does rhyme.
> Enfantin and Amand Bazard were proclaimed Pères Suprêmes ("Supreme Fathers") – a union which was, however, only nominal, as a divergence was already manifest. Bazard, who concentrated on organizing the group, had devoted himself to political reform, while Enfantin, who favoured teaching and preaching, dedicated his time to social and moral change. The antagonism was widened by Enfantin's announcement of his theory of the relation of man and woman, which would substitute for the "tyranny of marriage" a system of "free love".[1]
>The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.
Anyone who had just read a lot about Scientology would read that and have alarm bells ringing.
But a movement that demonstrates as remarkably elevated a rate of generating harmful beliefs in action as this one does warrants exactly the sort of increased scrutiny this article vainly strives to deflect. That effort is in itself interesting, as such efforts always are.
This isn't the defense of rationalism you seem to imagine it to be.
I don't think the modal rationalist is sinister. I think he's ignorant, misguided, nearly wholly lacking in experience, deeply insecure about it, and overall just excessively resistant to the idea that it is really possible, on any matter of serious import, for his perspective radically to lack merit. Unfortunately, this latter condition proves very reliably also the mode.
What perspective would that be?
Clearly all of these groups that believe in demons or realities dictated by tabletop games are not what third parties would call Rationalist. They might call themselves that.
There are some pretty simple tests that can out these groups as not rational. None of these people have ever seen a demon, so world models including demons have never predicted any of their sense data. I doubt these people would be willing to make any bets about when or if a demon will show up. Many of us would be glad to make a market concerning predictions made by tabletop games about physical phenomena.
Which, to me, raises the fascinating question: what does a "good" version look like, for groups and group dynamics centered around a shared interest in best practices associated with critical thinking?
At a first impression, I think maybe these virtues (which are real!) disappear into the background of other, more applied specializations, whether professions, hobbies, backyard family barbecues.
There's a reason they call themselves "rationalists" instead of empiricists or positivists. They perfectly inverted Hume ("reason is, and ought only to be the slave of the passions")
These kinds of harebrained views aren't an accident but a product of rationalism. The idea that intellect is quasi-infinite and that the world can be mirrored in the mind doesn't run contrary to rationalism; it is the most extreme form of rationalism taken to its conclusion, and of course deeply religious, hence the constant fantasies about AI divinities and singularities.
Once you see them listed (social pressure, sleep deprivation, control of drinking/bathroom, control of language/terminology, long exhausting activities, financial buy in, etc) and see where they've been used in cults and other cult adjacent things it's a little bit of a warning signal when you run across them IRL.
https://freedomofmind.com/cult-mind-control/bite-model-pdf-d...
A much simpler theory is that rationalists are mostly normal people, and normal people tend to form cults.
They do note at the beginning of the article that many, if not most such groups have reasonably normal dynamics, for what it's worth. But I think there's a legitimate question of whether we ought to expect groups centered on rational thinking to be better able to escape group dynamics we associate with irrationality.
The group was weird and involved quite a lot of creepy oversharing. I didn't return.
Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissist would-be cult leaders.
Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live with them in a polycule, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.
And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.
I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.
The game as it is _actually_ played is that you use rationalist arguments to justify your pre-existing gut intuitions and personal biases.
I guess Pareto wasn't on the reading list for these intellectual frauds.
Those are actually the priors being updated lol.
Specifically, rationalism spends a lot of time on priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this, and sometimes it's easy and sometimes it's difficult to do.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They often think of this as one update, but it is really two. The first update, relaxing priors, isn't associated with evidence. It's part of the community norms. There is an implicit belief that by relaxing one's priors you're more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Relaxing priors to correct for bias is itself an update, and it should be driven by evidence, not done just because everyone else is doing it.
I'm not following this example at all. If you've zero'd out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?
I played around with various metaphors but most of them felt various degrees of worse. The idea of relaxing priors and then doing an evidence-based update while thinking it's genuinely a single update is a difficult thing to capture metaphorically.
Happy to hear better suggestions.
EDIT: Maybe something more like this:
Picture your belief as a shotgun aimed at the truth:
Aim direction = your best current guess.
Spread = your precision.
Evidence = the pull that says "turn this much" and "widen/narrow this much."
The correct move is one clean Bayesian shot. Hold your aim where it is. Evidence arrives. Rotate and resize the spread in one simultaneous posterior jump determined by the actual likelihood ratio in front of you.
The stupid move? The move that Rationalists love to disguise as humility? It's to first relax your spread "to be open-minded," and then apply the update. You've just secretly told the math, "Give this evidence more weight than it deserves." And then you wonder why you keep overshooting, drifting into confident nonsense.
If you think your prior is overconfident, that is itself evidence. Evidence about your meta-level epistemic reliability. Feed it into the update properly. Do not amputate it ahead of time because "priors are bias." Bias is bad, yes, but closing your eyes and spinning around with shotgun in hand, i.e. double updating, is not an effective method of removing bias.
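For concreteness, here's a minimal numerical sketch of the double update (Python, with made-up numbers and a conjugate normal model): inflating the prior variance "to be open-minded" before updating quietly hands the evidence more weight than it earned.

    # One observation, two strategies. The posterior mean is a
    # precision-weighted average of prior mean and observation.

    def update(prior_mean, prior_var, obs, obs_var):
        w = prior_var / (prior_var + obs_var)  # weight the data receives
        return prior_mean + w * (obs - prior_mean), (1 - w) * prior_var

    prior_mean, prior_var = 0.0, 1.0   # your genuine prior
    obs, obs_var = 10.0, 1.0           # one noisy observation

    # One clean Bayesian shot: lands halfway, as the evidence warrants.
    print(update(prior_mean, prior_var, obs, obs_var))       # (5.0, 0.5)

    # Double update: first relax the prior 9x with no evidence behind it,
    # then update on the data. You land almost on top of the observation.
    print(update(prior_mean, prior_var * 9, obs, obs_var))   # (9.0, 0.9)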
> The ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
[1] https://maxread.substack.com/p/the-zizians-and-the-rationali...
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
https://archive.org/details/goblinsoflabyrin0000frou/page/10...
I do dimly perceive
that while everything around me is ever-changing,
ever-dying there is,
underlying all that change,
a living power
that is changeless,
that holds all together,
that creates,
dissolves,
and recreates
Are you certain about this?
Freud of course discovered a certain world of the unconscious but untrained [2] you would certainly struggle to explain how you know sentence S is grammatical and S' is not, or what it is you do when you walk.
If you did meditation or psychoanalysis or some other practice to understand yourself better it would take years.
[1] whether or not it is true.
[2] the "scientific" explanation you'd have if you're trained may or may not be true since it can't be used to program a computer to do it
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
https://plato.stanford.edu/entries/rationalism-empiricism/
(Note also how the context is French vs British, and the French basically lost with Napoleon, so the current "rationalists" seem to be more likely to be heirs to empiricism instead.)
That does compute with what I thought the "Rationalist" movement as covered by the article was about. I didn't peg them as pure a priori thinkers as you put it. I suppose my comment still holds, assuming the rationalist in this context refers to the version of "Rationalism" being discussed in the article as opposed to the traditional one.
That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)
From my perspective (and I have only glancing contact,) these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other code smell these big-R rationalist groups have for me, and which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc., I wonder if they necessarily furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
Well put, thanks!
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
That said, if they really thought hard about this problem, they would have come to a different conclusion:
https://theconversation.com/solve-suffering-by-blowing-up-th...
However, people are bad at that.
I'll give an interesting example.
Hybrid Cars. Modern proper HEVs[0] usually benefit their owners, both by virtue of better fuel economy as well as in most cases being overall more reliable than a normal car.
And, they are better on CO2 emissions and lower our oil consumption.
And yet most carmakers as well as consumers have been very slow to adopt. On the consumer side we are finally to where we can have hybrid trucks that get 36-40 MPG, capable of towing 4000 pounds or hauling over 1000 pounds in the bed [1]; we have hybrid minivans capable of 35 MPG for transporting groups of people; we have hybrid sedans getting 50+ MPG and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it has to get here.
The main 'misery' you experience at that point, is that you're driving the same car as a lot of other people and it's not as exciting [2] as something with more power than most people know what to do with.
And hell, as they say in investing, sometimes the market can be irrational longer than you can stay solvent. E.g., was it truly worth it to Hydro-Quebec to sit on LiFePO4 patents the way they did vs just figuring out licensing terms that got them a little bit of money to then properly accelerate adoption of Hybrids/EVs/etc?
[0] - By this I mean Something like Toyota's HSD style setup used by Ford and Subaru, or Honda or Hyundai/Kia's setup where there's still a more normal transmission involved.
[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.
[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them?
It's not like society is particularly good about it either, immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative?
(Or are they just sad about all the failures? But it's questionable that the "process" can work (with all its vivacity) without the "failures"...)
Has always really bothered me because it assumes that there are no negative impacts of the work you did to get the money. If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
You kinda summed up a lot of the post-Industrial Revolution world there, at least as far as stuff like toxic waste (Superfund, anyone?) and climate change goes. I mean, for goodness sake, let's just think about TEL and how they knew ethanol could work but it just wasn't 'patentable'. [0] Or the "we don't even know the dollar amount because we don't have a workable solution" problem of PFAS.
[0] - I still find it shameful that a university is named after the man who enabled this to happen.
The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.
If you reject reason, you are only left with force.
One can very clearly be a rational individual or an individual who practices reason and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading lesswrong posts" is not "religious extremist striking the world trade center and committing an atrocious act of terrorism", it's "random person on the street."
And to preempt a specific response some may make to this, yes, the thread here is talking about rationalism as discussed in the blog post above, organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of, like, Spinoza and company. Very different things philosophically.
Why Are So Many Terrorists Engineers?
Self-described rationalists can and often do rationalize acts and beliefs that seem baldly irrational to others.
People confuse "rational" with "moral". Those aren't the same thing. You can perfectly rationally do something that is immoral with a bad goal.
For example, if you value your life above all others, then it would be perfectly rational to slaughter an orphanage if a more powerful entity made that your only choice for survival. Morally bad, rationally correct.
Skepticism, in which no premise or truth claim is regarded as above dispute (or, that it is always permissible and even praiseworthy to suspend one’s judgment on a matter), is the better comparison with rationalism-fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham’s razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism as well, for example, as well as much of the humanist movement that returned to the original Greek sources for the Bible, from the Latin Vulgate translation by Jerome.
The history of ideas is fun to read about. I am hardly an expert, but you may be interested by the history of Aristotelian rationalism, which gained prominence in the medieval west largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.
> I don’t think it’s just (or even particularly) bad axioms
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles" and it is a pretty good indication that they will not be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter-of-factly and follow from there. If you want to really do this you start simple, attack your own assumptions, reform, build, attack, and repeat. This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
>> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
This is why the OP is seeing this behavior. Because the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into that can of worms, but they will be aware of it and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that the computation required grows exponentially as you converge on accuracy. These are strong indications, since they suggest whether someone cares more about the right answer or about being right. You also don't have to be very smart to detect this.

It seems you implying that some people are good building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there simply is no set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".
> you implying that some people are good building good axiom systems
How do you go from "most people aren't very good" to "this implies some people are really good"? First, that is just a really weird interpretation of how people speak (btw, "you're" not "you" ;) because this is nicer and going to be received better than "making axioms is hard and people are shit at it." Second, you've assumed a binary condition. Here's an example. "Most people aren't very good at programming." This is an objectively true statement, right?[0] I'll also make the claim that no one is a good programmer, but some programmers are better than others. There's no contradiction in those two claims, even if you don't believe the latter is true.

Now, there are some pretty good axiom systems. ZF and ZFC seem to be working pretty well. There are others too, and they are used for pretty complex stuff. They all work at least for "simple logic."
But then again, you probably weren't thinking of things like ZFC. But hey, that was kinda my entire point.
> there simply is no set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".
I agree. I'd hope I agree, considering my username... But you've jumped to a much stronger statement. I hope we both agree that just because there are things we can't prove, that doesn't mean there aren't things we can prove. Similarly, I hope we agree that even if we can't prove anything to absolute certainty, that doesn't mean we can't establish things to an incredibly high level of certainty, or show that something is more right than something else.

[0] Most people don't even know how to write a program. Well... maybe everyone can write a Perl program, but let's not get into semantics.
Saying most people aren't good at it DOES imply that some are good at it.
I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.
I suppose where that becomes relevant to here, is that you can have very fancy parts on various ends but if there's a piece in the middle that's wrong you're still gonna get shit results.
Your SCSI devices are only as fast as the slowest device in the chain.
I don't need to be faster than the bear, I only have to be faster than you.
There are not many forums where you would see this analogy.
Edit: Couldn't find the article, but AI referenced the Bayesian "chain of reasoning fallacy".
It is all about what is being modeled and how the inferences string together. If these are being multiplied, then yes, this is going to decrease, since xy < x and xy < y for every x, y < 1.
But a good counterexample is the classic Bayesian inference example[0]. Suppose you have a test that detects vampirism with 95% accuracy (Pr(+|vampire) = 0.95) and has a false positive rate of 1% (Pr(+|mortal) = 0.01). But vampirism is rare, affecting only 0.1% of the population. This ends up meaning a positive test only gives us an 8.7% probability that the subject is a vampire (Pr(vampire|+)). The solution here is to repeat the testing. On our second test, Pr(vampire) changes from 0.001 to 0.087, Pr(vampire|+) goes to about 90%, and a third test gets us to about 99%.
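A minimal sketch of that repeated-update arithmetic (Python; the helper name and parameters are mine, and the numbers are the made-up ones from the example above):

    # Repeated Bayesian updates for the vampirism-test example.
    def posterior(prior, sens=0.95, fpr=0.01):
        # Pr(vampire | +) given a prior Pr(vampire)
        p_pos = sens * prior + fpr * (1 - prior)  # Pr(+), by total probability
        return sens * prior / p_pos

    p = 0.001  # base rate: 0.1% of the population
    for test in range(1, 4):
        p = posterior(p)
        print(f"after positive test {test}: Pr(vampire|+) = {p:.3f}")
    # prints roughly 0.087, then 0.900, then 0.999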
[0] Our equation is

    Pr(vampire|+) = Pr(+|vampire) Pr(vampire) / Pr(+)

And the crux is Pr(+) = Pr(+|vampire) Pr(vampire) + Pr(+|mortal) (1 - Pr(vampire)).

You're talking about multiple pieces of evidence for the same statement. Your tests don't depend on any of the previous tests also being right.
I'm not a virologist or whoever designs these kinds of medical tests. I don't even know the right word to describe the profession lol. But the question is orthogonal to what's being discussed here. I'm only guessing "probably" because usually having a good example helps in experimental design. But then again, why wouldn't the original test that we're using have done that already? Wouldn't that be how you get that 95% accurate test?
I can't tell you the biology stuff, I can just answer math and ML stuff and even then only so much.
> Everything is nice on paper
I think the reason this is true is mostly because of how people do things "on paper". We can get much more accurate with "on paper" modeling, but the amount of work increases very fast. So it tends to be much easier to calculate things as if they were spherical chickens in a vacuum and account for error than it is to calculate including things like geometry, drag, resistance, and all that other fun jazz (where you still need to account for error/uncertainty, though it can now be smaller).

Which I think is the important lesson at the end of the day: simple explanations can be good approximations that get us most of the way there, but the details and nuances shouldn't be so easily dismissed. With this framing we can choose how we pick our battles. Is it cheaper/easier/faster to run a very accurate sim, or cheaper/easier/faster to iterate in physical space?
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.
And that's an entire new angle that the cultists are ignoring... because superintelligence may just not be very valuable.
And we don't need superintelligence for smart machines to be a problem anyway. We don't even need AGI. IMO, there's no reason to focus on that.
Yep; from the perspective of evolution (and more specifically, those animal species that only gain capability generationally by evolutionary adaptation of instinct), humans are the recursively self-(fitness-)improving accident.
Our species-aggregate capacity to compete for resources within the biosphere went superlinear in the middle of the previous century; and we've had to actively hit the brakes since then on how much of everything we take, handicapping ourselves. (With things like epidemic obesity and global climate change being the result of us not hitting those brakes quite hard enough.)
Insofar as a "singularity" can be defined on a per-agent basis, as the moment when something begins to change too rapidly for the given agent to ever hope to catch up with / react to new conditions — and so the agent goes from being a "player at the table" to a passive observer of what's now unfolding around them... then, from the rest of our biosphere's perspective, they've 100% already witnessed the "human singularity."
No living thing on Earth besides humans now has any comprehension of how the world has been or will be reshaped by human activity; nor can ever hope to do anything to push back against such reshaping. Every living thing on Earth other than humans, will only survive into the human future, if we humans either decide that it should survive, and act to preserve it; or if we humans just ignore the thing, and then just-so-happen to never accidentally do anything to wipe it from existence without even noticing.
There are some pretty obvious ways we could improve human cognition if we had the ability to reliably edit or augment it. Better storage & recall. Lower distractibility. More working memory capacity. Hell, even extra hands for writing on more blackboards or putting up more conspiracy theory strings at a time!
I suppose it might be possible that, given the fundamental design and structure of the human brain, none of these things can be improved any further without catastrophic side effects—but since the only "designer" of its structure is evolution, I think that's extremely unlikely.
To your point though, an electronic machine is a different host altogether with different strengths and weaknesses.
Because deep abstract thoughts about the nature of the universe and elaborate deep thinking were maybe not as useful while we were chasing lions and buffaloes with a spear?
We just had to be smarter than them. Which included finding out that tools were great, learning about the habits of the prey, and optimizing hunting success. Those who were smarter in that capacity had a greater chance of reproducing. Those who merely excelled at thinking likely did not live that long.
The brains we did end up with are really bad at creating that sort of knowledge. Almost none of us can. But we’re good at communicating, coming up with simplified models of things, and seeing how ideas interact.
We’re not universe-understanders, we’re behavior modelers and concept explainers.
The premise is ignorant of time. It is also ignorant of the fact that we know there are a lot of things we don't know. And that's all before we consider other factors, like whether there are limits and physical barriers, or many other things.
As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.
We have an existence proof for intelligence that can improve AI: humans.
If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.
Are people really that sceptical that AI will get to human-level intelligence?
Is that an insane belief worthy of being a primary example of a community not thinking clearly?
Come on! There is a good chance AI will recursively self-improve! Those poo pooing this idea are the ones not thinking clearly.
If you look at the "legitimate concerns" none are really deal breakers:
>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
I'm willing to believe it will be slow, though maybe it won't be
>LLMs already seem to have hit a wall of diminishing returns
Who cares - there will be other algorithms
>What if there are several paths to different kinds of intelligence with their own local maxima
well maybe, maybe not
>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
well - you can make another one if the first does that
Those are all potential difficulties with self improvement, not reasons it will never happen. I'm happy to say it's not happening right now but do you have any solid arguments that it won't happen in the next century?
To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.
I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not like it means you are stupid but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you'd think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email riddled with typos: meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static property of a person (someone who is high-IQ in their minds must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions). There is potentially belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
> The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement.
This is also the weirdest thing, and I don't think they even know the assumption they are making. It assumes that there is infinite knowledge to be had. It also ignores that we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) comes with exponentially growing complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap is very, very different. One way they could be right is if this is an S-curve and we humans are at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding relative to the AI will be like an ant trying to understand us.

The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them not to come back and to tell the others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring risk and concern about existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?
I'm also not exactly sure what you mean because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to prove something wrong than prove it right (or in our case, evidence, since no one is proving anything).
So the first part I'm saying we have to consider two scenarios. Either intelligence is bounded or unbounded. I think this is a fair assumption, do you disagree?
In the unbounded case, their scenario can happen. So I don't address that. But if you want me to, sure: it's because I have no reason to believe information is unbounded when everything around me suggests that it is bounded. Maybe start with the Bekenstein bound. Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.
In the bounded case it can happen, but we need to understand what conditions that requires. There are a lot of functions, but I went with the S-curve for simplicity and familiarity. It'll serve fine (we're on HN man...) for any monotonically increasing case (or even a non-monotonic one, it just needs to trend that way).
So think about it. Change the function if you want, I don't care. But if intelligence is bounded, then if we're x times more intelligent than ants, where on the graph do we need to be for another thing to be x times more intelligent than us? There's not a lot of room for that to even happen. It requires our intelligence (on that hypothetical scale) to be pretty similar to an ant's. What cannot happen is for the ant to be in the tail of that function and us to be past the inflection point (halfway). There just isn't enough space on that y-axis for anything to be x times more intelligent. This doesn't completely rule out that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human-to-ant difference] more intelligent than us" argument to follow, it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
Yeah, I'll admit that this is a very naïve model, but again, we're not trying to say what's right, just that there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.
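To make that concrete, here's a toy numeric sketch (the 0-to-1 scale, the function name, and the numbers are all made up purely to illustrate the shape of the argument):

    # Toy model: "intelligence" bounded at 1.0 on an arbitrary scale.
    # If humans are `ratio` times smarter than ants, something `ratio`
    # times smarter than humans fits under the bound only when
    # human intelligence <= BOUND / ratio.
    BOUND = 1.0

    def gap_can_repeat(ant, ratio):
        human = ant * ratio
        ai = human * ratio
        return ai <= BOUND

    print(gap_can_repeat(ant=1e-4, ratio=100))  # True:  human = 0.01, ai = 1.0
    print(gap_can_repeat(ant=1e-3, ratio=100))  # False: ai would need 10x the bound

In other words, the human-to-ant gap can only repeat above us if we already sit near the very bottom of the scale, which is the constraint described above.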
The second part I can make much easier to understand.
Yes, there are bad smart people, but look at the smartest people in history. Did they seek power or wish to harm? Most of the great scientists did not. A lot of them were actually quite poor, and many even died fighting persecution.
So we can't conclude that greater intelligence results in greater malice. This isn't hearsay; I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...
> starting from hearsay
I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them, and I gave you enough information to do so. You now have more. Ask GPT for links, I don't care, but people should stop worshiping Yud.

I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence with strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
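For what it's worth, the Amdahl's Law arithmetic itself is easy to sketch; mapping it onto AI self-improvement is, of course, the speculative part:

    # Amdahl's Law: speed up the fraction p of the work by factor s;
    # the untouched (1 - p) caps the overall speedup at 1 / (1 - p).
    def overall_speedup(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    print(overall_speedup(p=0.5, s=10))   # ~1.82x
    print(overall_speedup(p=0.5, s=1e9))  # ~2.0x: the ceiling, however magic the wand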
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on earth. In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.
> advocating for focusing on one... is primarily following a religion
Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many-step thinker, the long-term fecundity of our species might feel more important than any level of short-term financial motivation.
Not everyone believes that the purpose of life is to make more life, or that having been born onto team human automatically qualifies team human as the best team. It's not necessarily unfortunate.
I am not a rationalist, but rationally speaking, that whole "the meaning of life is human fecundity" shtick is after-school-special tautological nonsense, and that seems to be the assumption buried in your statement. Try defining what you mean without causing yourself some sort of recursion headache.
> their child might wind up..
They might also grow up to be a normal human being, which is far more likely.
> if Norman Borlaug's parents had decided to never have kids
Again, this would only have mattered if you consider the well being of human beings to be the greatest possible good. Some people have other definitions, or are operating on much longer timescales.
All else equal, it would be better to spread those chances across a longer period of time at a lower population with lower carbon use.
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
But the world is way more complex than the models we used to derive those "first principles".
But I constantly battle-tested them against other smart people's views, and only after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence, I have also found a lot of times the people who disagreed with what I thought had to be the case, later came to regret it because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted to at least a critical mass. Usually, they fly in the face of what happens normally in society. People don’t see how their strategies and lives are shaped by the technology and social norms around them.
Here, I will share three examples:
Public Health: https://www.laweekly.com/restoring-healthy-communities/
Economic and Governmental: https://magarshak.com/blog/?p=362
Wars & Destruction: https://magarshak.com/blog/?p=424
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
This phrase is nonsense, because HFCS is made by a chemical process applied to normal corn after the harvest. The corn may be a GMO, but it certainly doesn't have to be.
Meanwhile, their nutritional quality has gone down tremendously, for vegetables too: https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect that if most people were in a room with, or spent an extended amount of time around, any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they cooked up, no matter how clever or persuasively argued those theories might be in written form.
[0]: https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-...
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
I do it. You do it. I think a fascinating litmus test is asking yourself this question: “When did I last change my mind about something significant?” For most people the answer is “never”. If we lived in the world you described, most people’s answers would be “relatively recently”.
Each change is arguably equivalent, and it seems logical that if x = y you can put y anywhere you have x, but after all of the changes are applied, the argument that emerges is definitely different from the one before the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism while enabling squishy, feely arguments.
I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.
How suspicious does that make me?
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
YMMV.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
Absolutely. Just in case your keyboard wasn't working to arrive at this link via Google.
https://www.merriam-webster.com/dictionary/axiom
First definition, just in case it still isn't obvious.
> I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress.
Someone was wrong on the Internet! Just don't want other people getting the wrong idea. Good fun regardless.
Another issue with "thinkers" is that many are cowards; whether they realize it or not, a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I said it's anti-intellectual I would only be partially correct, but it's worse than that, imo. You might be coming across "smart people" who claim to know nothing "for sure", which is in itself a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes, btw, avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who claim to know that they can't know much of anything, as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say "profoundly unsure", which reads to me like "can't really ever know", which is a sure truth claim, not a relative or comparative claim as many would say; calling it relative is a sad attempt to side-step the absolute nature of the statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth. What I do not see in nature is the existence, or even the notion, of the relative on its own, as at every relative comparison there is an absolute holding up the comparison. One simple example is heat. Hot is relative, yet it is also objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot", that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough"; the absolute still remains, which is heat. Relativistic thought is a game of comparisons and relations, not of making absolute claims; the only absolute claim of the relativist is that there is no absolute claim. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.
If you know nothing you are not wise, you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took many years of debate among smart people to arrive at. There was a time when many things we now accept were "not known", but they were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers and journalists. I encourage you, the next time you find a "smart" person who is unsure of their beliefs, to kindly encourage them to be less lazy and to challenge their absolutes; if they deny that the absolute can be found, then you aren't dealing with a "smart" person, you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute or it fails to function in any logical framework. With enough thought, good data, and enough time to let things steep, you can find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there; don't sequester yourself away in the comfortable, unsure hell that many live in till they die.
The beauty of absolute truth is that you can believe absolutes without understanding the entirety of the absolute. I know gravity exists but I don't fully know how it works. Yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study what they don't until they do, and not make sure claims beyond that until they have the prerequisite absolute claims to support the broader ones, with surety limited by the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes in when, once the user has learned to say hello to a stranger, their comfort zone expands into territory where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into confirmation of the crutch and of the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that comes into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries, and to be ready to tie off the garbage bag and move on, unless someone shows remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
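A minimal sketch of what a noise-aware discount might look like (hypothetical Python; the linearly growing noise term and all parameter names are assumptions for illustration):

    import random

    # Exponential discounting where the forecast error widens with horizon t.
    def discounted_forecast(value, t, rate=0.03, noise_per_year=0.05):
        noisy = value * (1 + random.gauss(0.0, noise_per_year * t))
        return noisy / (1 + rate) ** t

    # A far-future projection is worth less *and* is far less trustworthy.
    print(discounted_forecast(100.0, t=1))
    print(discounted_forecast(100.0, t=50))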
This is such an important skill we should all have. I learned this best from watching the documentary Behind the Curve, about flat earthers, and have applied it to my best friend diving into the Tartarian conspiracy theory.
But I have never been able to get into the Rationalist stuff, to me it’s all very meandering and peripheral and focused on… I don’t know what.
Is it just me?
If you don't get anything out of reading the list itself, then you're probably not going to get anything out of the rest of the community either.
If you poke around and find a few neat ideas there, you'll probably find a few other neat ideas.
For some people, though, this is "wait, holy shit, you can just DO that? And it WORKS?", in which case probably read all of this but then also go find a few other sources to counter-balance it.
(In particular, probably 90% of the useful insights already exist elsewhere in philosophy, and often more rigorously discussed - LessWrong will teach you the skeleton, the general sense of "what rationality can do", but you need to go elsewhere if you want to actually build up the muscles)
I see this arrogant attitude all the time on HN: reflexive distrust of the "mainstream media" and "scientific experts". Critical thinking is a very healthy idea, but it's dangerous when people use it as a license to categorically reject sources. It's even worse when extremely powerful people do this; they can reduce an enormous sub-network of thought into a single node for many, many people.
So, my answer for "Why Are There So Many Rationalist Cults?" is the same reason all cults exist: humans like to feel like they're in on the secret. We like to be in secret clubs.
In the rationalist cults, you typically have the fear of death and non-existence, coupled with the promise of AGI, the Singularity and immortality, weighed against the AI Apocalypse.
The average teenager who reads Nietzsche's proclamation of the death of God sees it as an accomplishment: finally we got rid of those thousands-of-years-old and thereby severely outdated ideas and rules. Somewhere along the march to maturity they may start to wonder whether what replaced those old rules and ideas was a good replacement, but most never come to the realisation that there were rebellious teenagers during all those centuries when the idea of a supreme being, to which or whom even the mightiest had to answer, still held sway. Nietzsche saw the peril in letting go of that cultural safety valve and warned of what might come next.
We are currently living in the world he warned us about and for that I, atheist as I am, am partly responsible. The question to be answered here is whether it is possible to regain the benefits of the old order without getting back the obvious excesses, the abuse, the sanctimoniousness and all the other abuses of power and privilege which were responsible for turning people away from that path.
Buddhist meditation exists only in the context of the Four Noble Truths and the rest of the Buddha's Dhamma. Throwing them away means it stops being Buddhist.
This isn’t just "must come from the Champagne region of France, otherwise it’s sparkling wine" bickering; there are actual widespread misconceptions of what counts as Buddhism. Many ideas floating around in Western discourse are basically German Romanticism wrapped in Orientalist packaging, matching neither Theravada nor Mahayana teachings (for example, see the Fake Buddha Quotes project).
So the semantics are extremely important when it comes to spiritual matters. Flip one or two words and the whole metaphysical model goes in a completely different direction. Even translations add distortions, so there’s no room to be careless.
It's mostly just people who aren't very experienced talking about and dealing honestly with their emotions, no?
I mean, suppose someone is busy achieving and, at the same time, proficient in balancing work with emotional life, dealing head-on with interpersonal conflicts, facing change, feeling and acknowledging hurt, knowing their emotional hangups, perhaps seeing a therapist, perhaps even occasionally putting personal needs ahead of career... :)
Tell that person they can get a marginal (or even substantial) improvement from some rationalist cult practice. Their first question is going to be, "What's the catch?" Because at the very least they'll suspect that adjusting their work/life balance will bring a sizeable amount of stress and consequent decrease in their emotional well-being. And if the pitch is that this rationalist practice works equally well at improving emotional well-being, that smells to them. They already know they didn't logic themselves into their current set of emotional issues, and they are highly unlikely to logic themselves out of them. So there's not much value here to offset the creepy vibes of the pitch. (And again-- being in touch with your emotions means quicker and deeper awareness of creepy vibes!)
Now, take a person whose unexplored emotional well-being tacitly depends on achievement. Even a marginal improvement in achievement could bring perceptible positive changes in their holistic selves! And you can step through a well-specified, logical process to achieve change? Sign HN up!
- Idiosyncratic language used to describe ordinary things ("lightcone" instead of "future", "prior" instead of "belief" or "prejudice", etc)
- Disdain for competing belief systems
- Insistence on a certain shared interpretation of things most people don't care about (the "many-worlds interpretation" of quantum uncertainty, self-improving artificial intelligence, veganism, etc)
- I'm pretty sure polyamory makes the list somehow, just because it isn't how the vast majority of people want to date. In principle it's a private lifestyle choice, but it's obviously a community value here.
So this creates an opportunity for cult-like dynamics to occur where people adjust themselves according to their interactions within the community but not interactions outside the community. And this could seem — to the members — like the beliefs themselves are the problem, but from a sociological perspective, it might really be the inflexible way they diverge from mainstream society.
EST-type training still exists today. You don't eat until the end of the whole weekend, or maybe you get rice and little else. Everyone is told to insult you day one until you cry. Then day two, still having not eaten, they build you up and tell you how great you are and have a group hug. Then they ask you how great you feel. Isn't this a good feeling? Don't you want your loved ones to have this feeling? Still having not eaten, you're then encouraged to pay for your family and friends to do the training, without their knowledge or consent.
A friend of mine did this training after his brother paid for his mom to do it, and she paid for him to do it. Let's just say that, though they felt it changed their lives at the time, their lives in no way shape or form changed. Two are in quite a bad place, in fact...
Anyway, point is, the people who invented everything we are using right now were also susceptible to cult-like groups with silly ideas and shady intentions.
It's called the "Landmark"[0] now.
Several of my family members got sucked into that back in the early 80s and quite a few folks I knew socially as well.
I was quite skeptical, especially because of the cult-like fanaticism of its adherents. They would go on for as long as you'd let them (often you needed to just walk away to get them to stop), trying to get you to join.
The goal appears to be to obtain as much legal tender as can be pried from those who are willing to part with it. Hard sell, abusive and deceptive tactics are encouraged -- because it's so important for those who haven't "gotten it" to do so, justifying just about anything. But if you don't pay -- you get bupkis.
It's a scam, and an abusive one at that.
1. tendency to produce - out of no necessity whatsoever, mind - walls of text. Walls of speech will happen too, but not everyone rambles.
2. Obnoxiously confident that they're fundamentally correct about whatever position they happen to be holding during a conversation with you. No matter how subjective or inconsequential. Even if they end up changing it an hour later. Challenging them on it gets you more of #1.
When you're uncertain about a topic, you can explore it by writing a lot about said topic. Ideally, when you've finished exploring and studying a topic, you should be able to write a much more condensed / synthesized version.
Odd to me. Not biological warfare? Global warming? All-out nuclear war?
I guess The Terminator was a formative experience for them. (For me perhaps it was The Andromeda Strain.)
John:We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers shooting the ozone when there are plenty of real existential threats loaded in missiles aimed at you right now.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
Anyone who's ever seen the sky knows it's blue. Anyone who's spent much time around rationalism knows the premise of this article is real. It would make zero sense to ban talking about a serious and obvious problem in their community until some double-blind, peer-reviewed data can be gathered.
It would be what they call an "isolated demand for rigor".
Note the common pattern in major religions: they tell you that thoughts and emotions obscure the light of intuition, like clouds obscure sunlight. Rationalism is the opposite: it denies the very idea of intuition, or anything above the sphere of thoughts, and tells you to create as many thoughts as possible.
Rationalists deny anything spiritual, good or evil, because they don't have evidence to think otherwise. They remain in this state of neutral nihilism until someone bigger than them sneaks into their ranks and casually introduces them to evil with some undeniable evidence. Their minds quickly pass through the denial-anger-acceptance stages, and, faithful to their rationalist doctrine, they update their beliefs with what they now know. From that point on they are a cult. That's the story of Scientology, which has too many parallels with Rationalism.
https://news.ycombinator.com/newsguidelines.html
Scroll to the bottom of the page.
The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight.
I once called rationalists infantile, impotent liberal escapism, perhaps that's the novel take you are looking for.
Essentially my view is that the fundamental problem with rationalists and the effective altruist movement is that they are talking about profound social and political issues, with any and all politics completely and totally removed from it. It is liberal depoliticisation[1] driven to its ultimate conclusion. That's just why they are ineffective and wrong about everything, but that's also why they are popular among the tech elites that are giving millions to associated groups like MIRI[2]. They aren't going away, they are politically useful and convenient to very powerful people.
It is still fascinating to trace back the divergent developments like american-flavoured christian sects or philosophical schools of "pragmatism", "rationalism" etc. which get super-charged by technological disruptions.
In my youth I was heavily influenced by the so-called Bildung which can be functionally thought of as a form of ersatz religion and is maybe better exemplified in the literary tradition of the Bildungsroman.
I've grappled with and wildly fantasized about all sorts of things, and experimented mindlessly with all kinds of modes of thinking and consciousness amidst my coming-of-age. In hindsight, without this particular frame of Bildung, left by myself I would have been utterly confused and maybe at some point would have acted out on it. By engaging with books like Der Zauberberg by Thomas Mann or Der Mann ohne Eigenschaften by Robert Musil, my apparent madness was calmed down, and instead of the vastness of the unconscious breaking the dam of my forming social front, over time I was guided to develop my own way of slowly operating it appropriately, without completely blowing myself up into a messiah or finding myself eternally trapped in the futility and hopelessness of existence.
Borrowing from my background, one effective vaccination which spontaneously came up in my mind against rationalists sects described here, is Schopenhauer's Die Welt als Wille und Vorstellung which can be read as a radical continuation of Kant's Critique of Pure Reason which was trying to stress test the ratio itself. [To demonstrate the breadth of Bildung in even something like the physical sciences e.g. Einstein was familiar with Kant's a priori framework of space and time, Heisenberg's autobiographical book Der Teil und das Ganze was motivated by: "I wanted to show that science is done by people, and the most wonderful ideas come from dialog".]
Schopenhauer arrives at the realization because of the groundwork done by Kant (which he heavily acknowledges): that there can't even exist a rational basis for rationality itself, that it is simply an exquisitely disguised tool in the service of the more fundamental will i.e. by its definition an irrational force.
Funny little thought experiment, but what consequences does this have? Well, if you are declaring the ratio your ultima ratio, you are just fooling yourself so you can rationalize anything you want. Once internalized, Schopenhauer's insight gets you overwhelmed by Mitleid (compassion) for every conscious being, inoculating you against the excesses of your own ratio. It instantly hit me with the same force as MDMA, but several years earlier.
I’m hyper rational when I don’t take my meds. I’m also insane. But all of my thoughts and actions follow a carefully thought out sequence.
As soon as those "sequences" were being developed, it was clearly turning into a cult around EY, which I never understood and still don't.
This article did a good job of covering the history since and was really well written.
Water finds its own level
Ummm, EY literally has a semi-permanent office in Lighthouse (at least until recently) and blocks people on Twitter as a matter of course.
This is an interesting idea (phenomenon?):
> A purity spiral is a theory which argues for the existence of a form of groupthink in which it becomes more beneficial to hold certain views than to not hold them, and more extreme views are rewarded while expressing doubt, nuance, or moderation is punished (a process sometimes called "moral outbidding").[1] It is argued that this feedback loop leads to members competing to demonstrate the zealotry or purity of their views.[2][3]
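For fun, the feedback loop that definition describes is easy to caricature in a few lines of code. This is purely a toy model of my own, not something from the quoted article, and every constant in it is made up:

```python
import random

random.seed(0)

# Toy purity-spiral model (my own construction; all numbers invented).
# Members hold a "purity" score. Each round one member outbids the
# current maximum to win status ("moral outbidding"), and everyone
# else, punished for doubt or moderation, drifts toward the new
# maximum.
members = [random.uniform(0.1, 0.5) for _ in range(50)]

for _ in range(20):
    leader = max(members) + 0.05  # outbid the purest member
    members[random.randrange(len(members))] = leader
    members = [v + 0.2 * (leader - v) for v in members]

print(f"mean purity after 20 rounds: {sum(members) / len(members):.2f}")
# The mean ends up well past the most extreme *initial* view: the loop,
# not anyone's starting position, manufactures the extremism.
```

The point of the toy is only that "reward the extreme, punish the moderate" is a ratchet: run it long enough and the whole distribution migrates, even though no individual ever set out to become an extremist.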
I glanced at it once or twice and shoved it into a bookshelf. I wish I kept it, because I never thought so much would happen around him.
Is he known publicly for some other reason?
His book If Anyone Builds It, Everyone Dies comes out in a month: https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuma...
You can find more info here: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
[citation needed]
Even for this weird cult that is trying to appropriate the word, would they really consider him the father of redefining the word?
So the community itself gives him a lot of credit.
So to the point of the article, rationalist cults are common because Rationalists are irrational people (like all people) who (unlike most people) are blinded to their own irrationality by their overinflated egos. They can "reason" themselves into all manner of convoluted pretzels and lack the humility to admit they went off the deep end.
It was a while ago, but take the infamous story of the 2006 rape case at Duke University. If you check out coverage of that case, you get the impression that every member of faculty who joined in the hysteria was from some humanities department, including philosophy. And quite a few of them refused to change their minds even as the prosecuting attorney was being charged with misconduct. Compare that to Socrates' behavior during the trial of the admirals in 406 BC.
Meanwhile, whatever meager resistance that group faced seems to have come from economists, natural scientists, or legal scholars.
I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
But rationalism is?
Imagine that you're living in a big scary world, and there's someone there telling you that being scared isn't particularly useful, that if you slow down and think about the things happening to you, most of your worries will become tractable and some will even disappear. It probably works at first. Then they sic Roko's Basilisk on you, and you're a gibbering lunatic 2 weeks later...
Those would, of course, be people with no formal training in history or philosophy (as the study of history where you aren't allowed to question Marxist doctrine would be self-evidently useless). Their training would be in the natural sciences or mathematics. And without knowing how to properly reason about history or philosophy, they may reach fairly kooky conclusions.
Hence Rationalism can be thought of as the same class of phenomenon as Fomenko's chronology (or, if you want to be slightly more generous, Shafarevich's philosophical tracts).
Mereological nihilism and weak emergence are interesting and help protect against many forms of obsessive type-level and functional cargo-culting.
But then in some areas philosophy is woefully behind, and you have philosophers poo-pooing intuitionism when any software engineer working on a sufficiently federated or real-world sensor/control system borrows constructivism into their classical language in order not to kill people (Agda is interesting, of course). Intermediate logic is clearly empirically true.
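To make that concrete, here is a minimal sketch of what "borrowing constructivism into a classical language" can look like in practice; the sensor and actuator functions are hypothetical stand-ins of my own invention:

```python
import random
from typing import Optional

def read_sensor() -> Optional[float]:
    # Toy stand-in for a real sensor read (hypothetical): the reading
    # can simply be absent: stale, dropped, or failed.
    return random.uniform(0.0, 100.0) if random.random() > 0.2 else None

def command_brakes(pressure: float) -> None:
    # Hypothetical actuator; here it just logs the command.
    print(f"brake pressure -> {pressure:.2f}")

def control_step() -> None:
    reading = read_sensor()
    # Classically, "a reading exists or it doesn't" is a tautology you
    # could reason from. The constructive discipline: only act once you
    # hold a concrete witness (an actual value), never on a bare
    # existence claim, so the None branch must be handled explicitly.
    if reading is None:
        command_brakes(1.0)  # no witness: fail safe
        return
    command_brakes(min(1.0, reading / 100.0))

control_step()
```

In classical logic the excluded middle lets you cash in "a value exists or it doesn't" as a premise; the Optional type refuses to hand you a value until you actually exhibit one, which is exactly the intuitionistic demand for a witness.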
It's interesting that people don't understand the non-physicality of the abstract, so you end up with people serving the abstract instead of the abstract serving people. Confusing the map for the territory is such a deeply insidious issue.
I mean all the lightcone stuff: you can't predict ex ante which agents will be keystones in beneficial causal chains, so it's such a waste of energy to spin your wheels on.
And did all the philosophers at all the other colleges convene and announce they were also using philosophy to determine if someone was raped?
I don't think that matters very much. If there's a strong enough correlation between being a reactive idiot and the department you're in, that makes a bad case for enrolling in that realm of study for educational motives. It's especially bad when the realm of study is directly focused on knowledge, ethics, and logic.
Note the "if" though, I haven't evaluated the parent's claims. I'm just saying it doesn't matter if they said they used philosophy. It reflects on philosophy as a study, at least the style they do there.
How much that affects other colleges is iffier, but it's not zero.
The faculty denounced the students without evidence, judged the case through their emotions and preconceived notions, and refused to change their minds as new evidence emerged. Imagine having an academic discussion on a difficult ethical issue with such a teacher...
And none of that would have changed even if there somehow had been a rape-focused conspiracy among the students of that university. (Though the problem would have been significantly less obvious.)
> I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Man, if you have to make stuff up to try to convince people... you might not be on the right side here.
I guess I should have limited my statement about resisting mob justice to the economists at that university, as the other departments merely didn't sign on to the public letter of denunciation?
It's weird that Wikipedia doesn't give a percentage of signatories of the letter of 88 from the philosophy department, but several of the notable signatories are philosophers.
[1] https://en.m.wikipedia.org/wiki/Reactions_to_the_Duke_lacros...
Edit: Just found some articles claiming that a chemistry professor by the name of Stephen Baldwin was the first to write to the university newspaper condemning the mob.
You realise that it's very hard to do well and it's intellectual quicksand.
Reading philosophers and great writers as you suggest is better than joining a cult.
It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
An AI can neither write about what you are thinking in your place nor substitute for a particularly smart critic, but might still be useful for rubber-ducking philosophical writing if used well.
I meant use the AI to critique what you have written in response to reading the suggested authors.
Yes, a particularly smart critic would be better. But an LLM is easily available.
Being Christian, it helped me understand what I believe and why. It made faith a deliberate, reasoned choice.
And, of course, there are many rational reasons for people to have very different opinions when it comes to religion and deities.
Being bipolar might give me an interesting perspective. Everything I’ve read about rationalists misses the grounding required to isolate emotion as a variable.
What else?
Rationalism isn't any more "correct" and "proper" thinking than Christianity and Buddhism claim to espouse.
It's the same wolf in another sheep's clothing.
And people who wouldn't join a religious cult -- e.g. because religious cults are too easy to recognize since we're all familiar with them, or because religions hate anything unusual about gender -- can join a rationalist cult instead.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
Replace AGI causing extinction with the Rapture and you get a lot of US Christian fundamentalists. They often reject addressing problems in the environment, economy, society, etc. because the Rapture will happen any moment now. Some people just end up stuck in a belief about something catastrophic (in the case of the Rapture, catastrophic for those left behind but not those raptured) and they can't get it out of their head. For individuals who've dealt with anxiety disorder, catastrophizing is something you learn to deal with (and hopefully stop doing), but these folks find a community that reinforces the belief about the pending catastrophe(s) and so they never get out of the doom loop.
Both communities, though, end up reinforcing the belief amongst their members and tend towards increasing isolation from the rest of the world (leading to cultish behavior, if not forming a cult in the conventional sense), and a disregard for the here and now in favor of focusing on this impending world changing (destroying or saving) event.
It’s not from a rational basis, but from being bombarded with fear from every rectangle in my house, and the houses of my entire community.
Which is to say that I don't think mere dooming is going on. The belief in AGI doom, especially, has a lot of plausible arguments in its favor. I happen not to believe it, but as a belief system it is more similar to a belief in global warming than to a belief in the Rapture.
My idea of these self-proclaimed rationalists was fifteen years out of date. I thought they were people who write wordy fan fiction, but it turns out they’ve reached the point of having subgroups that kill people and exorcise demons.
This must be how people who had read one Hubbard pulp novel in the 1950s felt decades later when they found out he was running a full-blown religion.
The article seems to try very hard to find something positive to say about these groups, and comes up with:
“Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.”
There’s nothing unique about agreeing with the WHO, or thinking that building Skynet might be bad... (The rationalist Moses/Hubbard was 12 when that movie came out, the most impressionable age.) In the wider picture painted by the article, these presumed successes sound more like a case of a stopped clock being right twice a day.
The "threat of AI" they're claiming validates rationalism doesn't exist. These loons were the reason Google sat on their LLMs and made their image models only draw pictures of robots, because of the supposed "threat" of AI. Now everyone can run models way better on their own laptops and the sky hasn't fallen, there hasn't even been mass unemployment or anything. Not even the weakest version of this belief has proven true. AI is very friendly, even.
And masks? How many graphs of cases/day with mask-mandate transitions overlaid are required before people realize masks did nothing? Whole countries went from nearly nobody wearing them to everyone wearing them, overnight, and COVID cases/day didn't even notice. You can't look at a case graph and see where the rules changed. Which makes sense, because SARS-CoV-2 is aerosolized and can enter through the masks, around the masks, when masks are removed, and even through the eyeballs.
Seems like rationalists in the end have managed to be correct about nothing. What a disappointment.
After reading a warning from a rationalist blog, I posted a lot about COVID news to another forum and others there gave me credit for giving the heads-up that it was a Big Deal and not just another thing in the news. (Not sure it made all that much difference, though?)
Some meta-commentary first... How would one go about testing if this is true? If true, then such "promises" are not written down -- they are implied. So one would need to ask at least two questions: 1. Did the author intend to make these implicit promises? 2. What portion of readers perceive them as such?
> ... There is an art of thinking better ...
First, this isn't _implicit_ in the Sequences; it is stated directly. In any case, the quote does not constitute a promise: so far, it is a claim. And yes, rationalists do think there are better and worse ways of thinking, in the sense of "what are more effective ways of thinking that will help me accomplish my goals?"
> ..., and we’ve figured it out.
Codswallop. This is not a message of the rationality movement -- quite the opposite. We share what we've learned and why we believe it to be true, but we don't claim we've figured it all out. It is better to remain curious.
> If you learn it, you can solve all your problems...
Bollocks. This is not claimed implicitly or explicitly. Besides, some problems are intractable.
> ... become brilliant and hardworking and successful and happy ...
Rubbish.
> ..., and be one of the small elite shaping not only society but the entire future of humanity.
Bunk.
For those who haven't read it, I'll offer a relevant extended quote from Yudkowsky's 2009 "Go Forth and Create the Art!" [1], the last post of the Sequences:
## Excerpt from Go Forth and Create the Art
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence. You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.
I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?" They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one. They will get a saving throw.
It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.
And so instead they... go off and invent Freudian psychoanalysis. Or a new religion. Or something. That's what happens by default, when people start thinking about thinking.
I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.
That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
[1]: https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-a...
I'd have thought that would be obvious since it's the history of many religions (which seem to just be cults that survived the bottleneck effect to grow until they reached a sustainable population).
In other words, humans are wired for tribalism, so don't be surprised when they start forming tribes...
I've been there myself.
> And without the steadying influence of some kind of external goal you either achieve or don’t achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you’re going to act on them.
I think this and the two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community hasn't already converged on that, after all its obsession with decision theory.
> The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
This helps me understand what people mean by "meaning". A sense that their life and actions actually matter. I've always struggled to understand this issue but this helps make it concrete, the kind of thing people must be looking for.
> One of my interviewees speculated that rationalists aren’t actually any more dysfunctional than anywhere else; we’re just more interestingly dysfunctional.
"We're"? The author is a rationalist too? That would definitely explain why this article is so damned long. Why are rationalists not able to write less? It sounds like a joke but this is seriously a thing. [EDIT: Various people further down in the comments are saying it's amphetamines and yes, I should have known that from my own experience. That's exactly what it is.]
> Consider talking about “ethical injunctions:” things you shouldn’t do even if you have a really good argument that you should do them. (Like murder.)
This kind of defeats the purpose, doesn't it? Also, this is nowhere justified in the article, just added on as the very last sentence.
So there are six questionable (but harmless) groups, and then later the article names three of them as more serious. That doesn't seem like "many" to me.
I wonder what percentage of all cults are the rationalist ones.
How many rationalists are there in the world? It depends on what you mean by rationalist, but I'd guess there are at the very least several tens of thousands of people in the world who either consider themselves rationalists or are involved with the rationalist community.
With such numbers, is it surprising that there would be half a dozen or so small cults?
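As a toy back-of-envelope (every number below is an assumption of mine, not data from the article):

```python
# Back-of-envelope check; every number here is my own assumption, not
# data from the article.
members = 30_000        # assumed size of the rationalist community
cults = 6               # groups the article treats as cult-like
avg_cult_size = 20      # assumed average headcount per group

involved = cults * avg_cult_size
print(f"{involved} people, or {involved / members:.2%} of the community")
# -> 120 people, or 0.40% of the community
```

Whether that fraction is alarming depends entirely on the base rate of cult formation in comparable subcultures, which, as far as I can tell, the article never establishes.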
There are certainly some cult-like aspects to certain parts of the rationalist community, and I think that those are interesting to explore, but come on, this article doesn't even bother to establish that its title is justified.
To the extent that rationalism does have some cult-like aspects, I think a lot of it is because it tends to attract smart people who are deficient in the ability to use avenues other than abstract thinking to comprehend reality and who enjoy making loosely justified imaginative leaps of thought while overestimating their own abilities to model reality. The fact that a huge fraction of rationalists are sci-fi fans is not a coincidence.
But again, one should first establish that there is anything actually unusual about the number of cults in the rationalist community. Otherwise this is rather silly.
1) If you have a criticism about them, their stupid name, or how smug it is of them to say "all I know is that I know nothing" as if that made them truly wise, rest assured they have been self-flagellating over these criticisms 100x longer than you've been aware of their group. That doesn't mean they succeeded at addressing the criticisms, of course, but I can tell you that they are self-aware. Especially about the stupid name.
2) They are actually well read. They are not sheltered and confused. They are out there doing weird shit together all the time. The kind of off-the-wall life experiences you find in this community will leave you wide eyed.
3) They are genuinely concerned with doing good. You might know about some of the weird, scary, or cringe rationalist groups. You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
In my experience, where they go astray is when they trick themselves into working beyond their means. The basic idea underlying most rationalist projects is something like: "Think about the way people suffer every day. How can we think about these problems in a new way? How can we find an answer that actually leaves everyone happy?" A cynic (or a realist, depending on your perspective) might say that many problems will fundamentally leave some group unhappy. The overconfident rationalist will challenge that cynical/realist perspective until they burn themselves out, and in many cases they will attract a whole group of people who burn out alongside them. To consider an extreme case, the Zizians squared this circle by deciding that the majority of human beings didn't have souls, so "leaving everyone happy" was as simple as ignoring the unsouled masses. In less extreme cases this presents itself as hopeless idealism, or a chain of logic so divorced from normal socialization that it appears opaque. "This thought experiment could hypothetically create 9 quintillion cubic units of Pain, so I need to devote my entire existence to preventing it, because even a 1% chance of that happening is horrible. If you aren't doing the same thing then you are now morally culpable for 9 quintillion cubic units of Pain. You are evil."
Most rationalists are weird but settle into a happy place far from those fringes where they have a diet of "plants and specifically animals without brains that cannot experience pain" and they make $300k annually and donate $200k of it to charitable causes. The super weird ones are annoying to talk to and nobody really likes them.
I listened to a podcast that covered some of these topics, so I'm not lost; but I think someone who's new to this topic will be very, very confused.
https://aiascendant.substack.com/p/extropias-children-chapte...
https://aiascendant.com/p/extropias-children-chapter-1-the-w...
https://davidgerard.co.uk/blockchain/2023/02/06/ineffective-...
https://www.bloomberg.com/news/features/2023-03-07/effective...
https://www.vox.com/future-perfect/23458282/effective-altrui...
Here's a counter-piece on David Gerard and his portrayal of LessWrong and Effective Altruism: https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
If so, got anything to share - anecdotes, learnings, cautions, etc.?
I am never planning to be part of one; just interested to know, partly because I have lived adjacent to what might be one, at times.
The main point is that it isn't so much the cult (leader) as the victims, who are in a vulnerable mental state and get exploited.
From kink to rockhounding, there are always people who base their identity on being a broker of status or power, because they themselves are powerless outsiders once removed from the community.
Who would ever maintain power when removed from their community? You mean to say they base their identity on the awareness of the power they possess within a certain group?
I don't know if it is really true, but it certainly felt true that folks looking for deeper answers about a better way to think about things end up finding what they believe is the "right" way and that tends to lead to branding other options as "wrong".
A search for certainty always seems to be defined or guided by people dealing with their own issues and experiences that they can't explain. It gets tribal and very personal and those kind of things become dark rabbit holes.
----
>Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
Reminds me of some members of our government and conspiracy theorists who "research" and encourage people to figure it out themselves ...
As for the AI doomerism, many in the community have more immediate and practical concerns about AI, however the most extreme voices are often the most prominent. I also know that there has been internal disagreement on the kind of messaging they should be using to raise concern.
I think rationalists get plenty of things wrong, but I suspect that many people would benefit from understanding their perspective and reasoning.
There's so much in these group dynamics that repeats the group dynamics of the communist extremists of the 1970s.
Compare this part from OP:
>Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.”
This reeks of Marxist-Leninist self-criticism, where everybody tried to one-up each other in how ideologically pure they were. The most extreme outgrowth of self-criticism was when the Japanese United Red Army beat its own members to death as part of self-criticism sessions.
>'These violent beatings ultimately saw the death of 12 members of the URA who had been deemed not sufficiently revolutionary.' https://en.wikipedia.org/wiki/United_Red_Army
History doesn't repeat, but it rhymes.
I've been saying this since at least 1200 BC!
The shared thread among these is (in ever widening circles) a story people tell themselves to justify precisely why, for example, the actions of someone you'll never meet in Tulsa, OK have any bearing whatsoever on the fate of you, a person in Lincoln, NE.
One can see how this leaves an individual in a tenuous place if one doesn't feel particularly connected to nationhood (one can also see how being too connected to nationhood, in an exclusionary way, can also have deleterious consequences, and how not unlike differing forms of Christianity, differing concepts on what the 'soul' of a nation is can foment internal strife).
(To be clear: those fates are intertwined to some extent; the world we live in grows ever smaller due to the power of up-scaled influence of action granted by technology. But "nation" is a sort of fiction we tell ourselves to fit all that complexity into the slippery meat between human ears).