Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.
[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...
1/3 of the users at ACX actually support flawed scientific theories that purport to explain IQ differences on a genetic basis. The Lynn study on IQ is also quite flawed: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, supported by people (mostly white men, if I had to guess) claiming to be rational.
Saying in 2025 that the study is still debated is not only racist but dishonest as well. It's not debated; it's junk.
This is a pathology that has not really been addressed at large, anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if it entered.
A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.
There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.
https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic differentiation between groups: a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found an Fst of 0.0050-0.0110 between Finns and Swedes, 0.110 between Chinese and Europeans, and 0.190 between Japanese and Yoruba.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans at a Fst of 0.4573.
https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
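For intuition about what that statistic measures, here's a minimal sketch (in Python, with made-up allele frequencies rather than values from the studies above) of Wright's Fst for a single biallelic SNP; genome-wide estimates like the ones quoted average this kind of quantity over many loci.

```python
# Minimal sketch: Wright's Fst for one biallelic SNP, given the
# allele frequency in two equally sized populations.
# The frequencies below are invented for illustration.

def fst(p1: float, p2: float) -> float:
    """Fst = (Ht - Hs) / Ht: the share of total expected
    heterozygosity attributable to between-population differences."""
    p_bar = (p1 + p2) / 2                   # pooled allele frequency
    ht = 2 * p_bar * (1 - p_bar)            # expected heterozygosity, pooled
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population
    return (ht - hs) / ht if ht > 0 else 0.0

print(round(fst(0.10, 0.12), 4))  # similar populations -> ~0.001
print(round(fst(0.10, 0.60), 4))  # diverged populations -> ~0.2747
```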
In genome-wide association studies, thousands of gene variants have been linked to phenotypes like spatial and verbal intelligence, memory, and processing speed, and polygenic scores have been developed to aggregate them. The distribution of these gene variants is not uniform across ethnic groups.
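Mechanically, a polygenic score is just a weighted sum of allele counts. A minimal sketch, with hypothetical SNP IDs, effect sizes, and genotypes (none of these numbers come from a real GWAS):

```python
# Minimal sketch of a polygenic score: sum over variants of
# (GWAS effect size x copies of the effect allele carried).
# All identifiers and values here are hypothetical.
effect_sizes = {"rs0001": 0.021, "rs0002": -0.008, "rs0003": 0.013}
dosages = {"rs0001": 2, "rs0002": 0, "rs0003": 1}  # 0, 1, or 2 copies

score = sum(beta * dosages[snp] for snp, beta in effect_sizes.items())
print(round(score, 3))  # 0.055
```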
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component to variation in intelligence between groups. It would be dogmatic to claim a priori that there is absolutely no genetic component, and such a claim is pretty obviously motivated by the fear that inequality is much more intractable than commonly believed.
I looked into this when Taleb made a splash denying it; I ran the numbers myself and sent them over to a quant friend to look over, and he agreed. The reality of our world is less than optimal.
I hope the stealth start-ups working on IQ-increasing drugs are successful, and that everyone who knows the truth stays real quiet about it in their public life, which you will too if you want a career in the West.
I heard you can talk more openly about it in China, of all places. Funny how that is.
I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles, and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is." They're the type of people who would be embarrassed not to have an opinion on a topic, or to say "I don't know."
In the pre-AI days this was sort of tolerable, but since then... the frothing-at-the-mouth conviction of the end of the world just shows a real lack of humility, a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
I get it, I enjoyed being told I'm a super-genius always-right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class; maybe consider there's more good to be done in rural Indiana getting some dirt under those nails.
but I don't know enough about it, I'm just trolling.
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but probably won't be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
Actually, neither do Rationalists, but instead they cosplay at being rational.
What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.
Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.
The largest of the sporadic finite simple groups (themselves objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and as a simple group cannot be reduced to smaller normal "factors". It has a whole bunch of very interesting properties which thus can only be understood by analyzing the whole object in itself.
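(Worth noting: the irreducibility is group-theoretic, not arithmetic -- the order itself factors perfectly well. A quick Python sanity check that the standard prime factorization multiplies out to the figure quoted above:)

```python
# Sanity check: reconstruct the Monster group's order from its
# standard prime factorization.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1, 19: 1,
                 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
order = 1
for prime, exponent in factorization.items():
    order *= prime ** exponent

assert order == 808017424794512875886459904961710757005754368000000000
print(f"{order:.3e}")  # ~8.080e+53
```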
Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).
Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.
I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site, I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.
Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, on which the PEM (principle of the excluded middle) assumption is based.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
Post-rationalism is where all the cool kids are and where the best ideas are right now. The post-rationalists consistently have better predictions, while the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or cause more suffering eating bugs outside.
They also let SF get run into the ground until their detractors decided to take over.
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that was taught to the generation that lived through the Cold War, so that's how they act. That was in regard to climate change, but I can easily see it applying to AI as well (even though I personally believe that the whole "AI eats world" arc is only so popular due to marketing efforts of the corresponding industry).
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.
I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good, though! Its conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27 million kids! How can CPS run so many investigations? That's 4,000 investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (which I got to from the bad study, FWIW) [2]; they're around 4.2%.
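(For transparency, the back-of-envelope arithmetic behind that, in Python. The ~73 million figure for the US child population is my assumption, not a number from either paper:)

```python
# Back-of-envelope check of the claim above.
children = 73_000_000                  # assumed US child population
investigated = 0.374 * children        # ~27.3 million kids by age 18
per_day = investigated / (18 * 365)    # spread over an 18-year window

print(f"{investigated / 1e6:.1f} million kids")  # 27.3 million kids
print(f"{per_day:.0f} per day")                  # ~4156 per day
```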
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids; it's how it treats the kids it fails. If you're not among the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz's article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
There are only ~3,300 counties in the USA.
I'll let you extrapolate how CPS can handle "4,000/day". Like, 800 people with my wife's qualifications and caseload would be equivalent to 4,000/day.
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein is the problem: a bunch of people confidently speaking outside their expertise and being taken seriously by others.
Kinda like Mensa?
I’m so glad I didn’t join, because being around the types of adults who make being smart their identity surely would have had some corrosive effects.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
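A minimal sketch of that update step, using a conjugate Beta-Binomial coin-flip example in Python (the prior pseudo-counts are invented for illustration):

```python
# Minimal sketch of a Bayesian update: a Beta prior over a coin's
# bias is combined with observed flips to give a Beta posterior.
prior_heads, prior_tails = 2, 2        # weak prior: coin is roughly fair
observed_heads, observed_tails = 7, 3  # the data

# Conjugacy makes the update trivial: add observed counts to the prior.
post_heads = prior_heads + observed_heads
post_tails = prior_tails + observed_tails

posterior_mean = post_heads / (post_heads + post_tails)
print(round(posterior_mean, 3))  # 0.643: the prior pulled toward the data
```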
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong?
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”
They call out the fallacy they are committing BY NAME and think that it somehow supports their conclusion?
It's not social media; it's a model the capitalists train and own. The best the rest of us will have access to are open-source ones. It's like the difference between trying to go into court backed by Google searches as opposed to Lexis/Nexis. You're gonna have a bad day with the judge.
Here's hoping the open source stuff gets trained on quality data rather than reddit and 4chan. Given how the courts are leaning on copyright, and lack of vetted data outside copyright holder remit, I'm not sanguine about the chances of parity long term.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if the technology keeps getting better.
I suspect you'll see a few people "win" or strike it rich with AI, the vast majority will simply be left with a big bill.
The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.
On the one hand, I guess that's just life here in reality.
On the other, man, reality sucks sometimes.
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like the historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
These folks have a bunch of money because we allowed them to privatize the commons of 20th-century R&D, mostly funded by the DoD and done at places like Bell Labs. Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day: the economic outlook is bleak, wars are breaking out or intensifying left, right, and center, and all of this traces a very clear lineage back to allowing a small group of people to privatize a bunch of public goods.
It was a disaster when it happened in Russia in the 90s, and it's a disaster now.
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
These people are just narcissists who use (often pseudo)intellectualism as the vehicle for their narcissism.
https://www.ohchr.org/en/press-releases/2024/11/un-special-c...
In other words, your question ignores so much nuance that it’s a red herring IMO.
Still, since he/she was so willing to make a claim of genocide (implicitly), I was wondering: were it a false claim, would it be equally "blatantly shitty social behaviour, narcissistic use of (often pseudo)intellectualism for his/her narcissistic behaviour" as the behaviour he/she was calling out?
I'm pretty certain I understand the reality of the situation (in fact I'd accept reasonably short odds that I understand it better than anyone participating in the discussion on this story).
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this breakdown happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
I skimmed a bit here and there after that but this comes off as plain grandiosity. Even the title is a line you can imagine a hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
*Guess I’m a rationalist now.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in lot of arguments that are purely "semantical" in nature. You seem to disagree about something but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising if Yudkowsky is breaking new ground.
And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.
But also, that isn't what Yudkowsky is (was?) trying to do with it. I think he's trying to distill useful tools which increase baseline rationality. Religions have this. It's what the original philosophers are missing. (At least as taught; happy to hear counterexamples.)
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this: the humanities are over-cooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is my philosophical inferior. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few pieces of empirical evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win in this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
Until?
One of the funniest and most accurate turns of phrase, in my mind, is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen-people myths, their version of the Book of Revelation, gnosticism, and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blog post, if not more so, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, bizarre trust in rationalism, or cult-like compounds full of socially isolated new-age prophets.
On the missing first principles, look at Aristotle: one of history's greatest logicians, who came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[1] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
They’ve already had a splinter rationalist group go full cult, right up to and including the consequent murders-and-shoot-out-with-the-cops flameout: https://en.wikipedia.org/wiki/Zizians
Yeah, this surprises absolutely nobody.