
Guess I'm a Rationalist Now

https://scottaaronson.blog/?p=8908
82•nsoonhui•5h ago

Comments

cue_the_strings•5h ago
I feel like I'm witnessing something that Adam Curtis would cover in the last part of The Century of Self, in real time.
greener_grass•4h ago
There was always an underlying Randian impulse to the EA crowd, as if we could solve any issue if we just got the right minds tackling the problem. The black-and-white thinking, groupthink, hero worship, and caricaturish literature are all there.
cue_the_strings•1h ago
I always wondered whether it's her direct influence, or whether those characteristics just naturally "go together".
roenxi•5h ago
The irony here is that the Rationalist community is made up of the ones who weren't observant enough to notice that "identifying as a Rationalist" is generally not a rational decision.
MichaelZuo•4h ago
From what I’ve seen it’s a mix of that, some who avoid the issue, and some who do it intentionally even though they don’t really believe it.
voidhorse•4h ago
These kinds of propositions are determined by history, not by declaration.

Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.

Sharlin•4h ago
Bleh, labels can be restrictive, but guess what labels can also be? Useful.
resource_waste•1h ago
>These kinds of propositions are determined by history, not by declaration.

A single failed prediction should revoke the label.

The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.

MeteorMarc•4h ago
This is what rationalism entails: https://plato.stanford.edu/entries/rationalism-empiricism/
greener_grass•4h ago
For any speed-runners out there: https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism
Sharlin•4h ago
That's a different definition of rationalism from what is used here.
AnimalMuppet•1h ago
It is. But the Rationalists, by taking that name as a label, are claiming that they are what the GP said. They want the prestige/respect/audience that the word gets, without actually being that.
amarcheschi•4h ago
They call themselves rationalists, yet they don't have very rational opinions if you ask them about scientific racism [1]

[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...

wffurr•4h ago
I am not sure precisely what is not very rational about that link. Did you have a specific point you were trying to make with it?
amarcheschi•4h ago
Yes, that they're not "rational".

If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...

A third of the users at ACX actually support flawed theories that would explain IQ differences on a purportedly scientific basis. The Lynn study on IQ is also quite flawed: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations

If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute

As I said, it's not very rational of them to support such theories. And of course, as you scratch the surface, it's the old 20th-century racist theories, supported, of course, by people (mostly white men, if I had to guess) claiming to be rational.

derangedHorse•4h ago
Nothing about the article you posted in your first comment seems racist. You could argue that believing in the conclusions of Richard Lynn’s work makes someone racist, but to support that claim, you’d need to show that those who believe it do so out of willful ignorance of evidence that his science is flawed.
amarcheschi•3h ago
Scott himself makes a point of the study being debated. It's not. It's not debated. It's pseudoscience, or "science" made with so many questionable choices that it's hard to call it "science". He links to a magazine article written by a researcher who was fired, not surprisingly, for his pseudoscientific stances on racism: https://en.m.wikipedia.org/wiki/Noah_Carl

Saying in 2025 that the study is still debated is not only racist but dishonest as well. It's not debated; it's junk.

wizzwizz4•33m ago
It is debated: just not by serious scholars or academics. (Which doesn't necessarily make it wrong; but "scientific racism is bunk, and its proponents are persuasive" is a model whose high predictive power has served me well, so I believe it's wrong regardless.)
mjburgess•4h ago
A lot of "rationalists" of this kind are very poorly informed about statistical methodology, a condition they inherit from reading papers in these pseudoscientific fields written by people likewise very poorly informed.

This is a pathology that has not really been addressed at large anywhere, really. Very few in the applied sciences who understand statistical methodology "leave their areas", and many areas that require it would disappear if it entered.

amarcheschi•4h ago
I agree. I had to read things for an IT ethics course at uni that read more like science fiction than actual science. Anyway, my point is that it feels pretentious (very pretentious, and I'm being kind with words) to support such pseudoscientific theories and call oneself a rationalist, especially when these theories can be debunked just by reading the related Wikipedia page.
saalweachter•1h ago
More charitably, it is really, really hard to tell the difference between a crank kicked out of a field for being a crank and an earnest researcher being persecuted for not toeing the political line, without being an expert in the field in question and familiar with the power structures involved.

A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.

There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.

exoverito•13m ago
Human ethnic groups are measurably different in genetic terms, based on single-nucleotide polymorphisms and allele frequencies. There are multiple PCA plots of the 1000 Genomes dataset which show clear cluster separation based on ancestry:

https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...

We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.

The fixation index (Fst) quantifies the level of genetic variation between groups: a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found that Finns and Swedes have an Fst value of 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese and Yoruba at 0.190.

https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/

A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans at a Fst of 0.4573.

https://en.wikipedia.org/wiki/File:Full_Fst_Average.png

In genome-wide association studies, polygenic scores have been built from thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.

Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component for variation in intelligence between groups. It would be dogmatic to a priori claim there is absolutely no genetic component, and pretty obviously motivated out of the fear that inequality is much more intractable than commonly believed.
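As a side note on the Fst figures quoted here: for a single biallelic locus and two equally sized populations, Fst has a simple closed form, (Ht − Hs)/Ht. A minimal sketch with made-up allele frequencies (not figures from the cited studies):

```python
# Wright's Fst at one biallelic locus for two equally sized populations,
# using the (Ht - Hs) / Ht definition. Frequencies are illustrative only.

def heterozygosity(p):
    """Expected heterozygosity 2p(1-p) at allele frequency p."""
    return 2 * p * (1 - p)

def fst(p1, p2):
    """Fst between two populations with allele frequencies p1, p2."""
    hs = (heterozygosity(p1) + heterozygosity(p2)) / 2  # mean within-group
    ht = heterozygosity((p1 + p2) / 2)                  # total (pooled)
    return (ht - hs) / ht

print(round(fst(0.50, 0.52), 4))  # near-identical groups -> ~0.0004
print(round(fst(0.10, 0.60), 4))  # divergent groups -> ~0.2747
```

Near-identical frequencies give an Fst near 0 (compare the Finn/Swede figure), while strongly divergent frequencies push it toward the larger values cited.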

ineedaj0b•4h ago
there's some things in this world that suck like cancer, nuclear weapons, etc but it's the way the world is.

i looked into this when taleb made a splash denying it, but i ran the numbers myself and sent them over to a quant friend to look over and he agreed. the reality of our world is less than optimal.

i hope the stealth start-ups working on iq increasing drugs are successful and everyone who knows the truth stays real quiet about it in their public life, which you will too if you want a career in the west.

i heard you can talk more openly about it in china of all places.. funny how that is.

contrarian1234•4h ago
The article made me think deeper about what rubs me the wrong way about the whole movement

I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles, on the one hand, and the general absolutist tone of the community on the other. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong, but here it is". They're the type of people who would be embarrassed not to have an opinion on a topic or to say "I don't know".

In the pre-AI days this was sort of tolerable, but since then, the frothing-at-the-mouth conviction that the world is ending shows a real lack of humility and no acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.

Avicebron•4h ago
Yeah, the "rational" part always seemed a smokescreen for the ability to produce and ingest their own and their associates' methane gases.

I get it, I enjoyed being told I'm a super-genius always-right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class; maybe consider there's more good to be done in rural Indiana getting some dirt under those nails...

Cthulhu_•57m ago
It feels like a shield of sorts, "I am a rationalist therefore my opinion has no emotional load, it's just facts bro how dare you get upset at me telling xyz is such-and-such you are being irrational do your own research"

but I don't know enough about it, I'm just trolling.

shermantanktop•54m ago
The meta with these people is “my brilliance comes with an ego that others must cater to.”

I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.

That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.

dkarl•43m ago
It's a conflict as old as time. What do you do when an argument leads to an unexpected conclusion? I think there are two good responses: "There's something going on here, so let's dig into it," or, "There's something going on here, but I'm not going to make time to dig into it." Both equally valid.

In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."

bilbo0s•1m ago
Might it kind of point to real-life people having too much of what is now called "rationality", and very little of what used to be called "wisdom"?
felipeerias•4h ago
The problem with trying to reason everything from first principles is that most things didn't actually come about that way.

Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.

One can learn a lot from unicellular organisms, but probably won't be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.

loose-cannon•4h ago
Isn't reducibility usually a goal of intellectual pursuits? I don't see that as a fault.
colordrops•4h ago
What the person you are replying to is saying is that some things are not reducible, i.e. the vast array of complexity and detail is all relevant.
loose-cannon•3h ago
That's a really hard belief to justify. And what implications would that position have? Should biologists give up?
the_af•3h ago
Biologists don't try to reason everything from first principles.

Actually, neither do Rationalists, but instead they cosplay at being rational.

falcor84•2h ago
> Biologists don't try to reason everything from first principles.

What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.

svnt•1h ago
Setting up reasoning from first principles vs magical thinking is a false dichotomy and an implicit swipe.
falcor84•40m ago
Ok, mea culpa. So what distinction did you have in mind?
Veen•1h ago
It would imply that when dealing with complex systems, models and conceptual frameworks are, at the very best, useful approximations. It would also imply that it is foolhardy to ignore phenomena simply because they are not comprehensible within your preferred framework. It does not imply biologists should give up.
pixl97•1h ago
How reducible is the question. If some particular events require a minimum amount of complexity, how do you reduce it below that?
__MatrixMan__•1h ago
I think that chemistry, physics, and mathematics, are engaged in a program of understanding their subject in terms of the sort of first principles that Descartes was after. Reduction of the subject to a set of simpler thoughts that are outside of it.

Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.

achierius•35m ago
Concretely we know that there exist irreducible structures, at least in mathematics: https://en.wikipedia.org/wiki/Classification_of_finite_simpl...

The largest of the sporadic finite simple groups (finite simple groups are themselves objects of study as a means of classifying other finite, non-simple groups, which can always be broken down into simple groups) is the Monster group. It has order 808017424794512875886459904961710757005754368000000000 and cannot be reduced to simpler "factors". It has a whole bunch of very interesting properties which can thus only be understood by analyzing the whole object in itself.

Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
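For what it's worth, the order quoted here can be sanity-checked against the Monster group's well-known prime factorization:

```python
# Recompute the Monster group's order from its known prime factorization:
# 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71
from math import prod

exponents = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1, 19: 1,
             23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
order = prod(p ** e for p, e in exponents.items())
print(order)  # 808017424794512875886459904961710757005754368000000000
```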

jltsiren•3h ago
"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.
js8•1h ago
I would not be so sure. There are many fields where reductionism was applied in practice and it yielded useful results, thanks to computers.

Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).

Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.

gilleain•1h ago
> protein folding (reduction to quantum chemistry),

I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure, usually the active site I suppose, and the rest is done using more standard molecular mechanics/molecular dynamics approaches.

Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.

nyrikki•2h ago
'Reducibility' is a property that, if present, makes problems tractable or practical to solve.

What you are mentioning is called western reductionism by some.

In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.

Under the assumption that all models are wrong, but some are useful, it helps you find useful models.

If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.

Russell's paradox is another lens into the limits of Plato, which the PEM assumption is based on.

Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.

ImaCake•1h ago
>The problem with trying to reason everything from first principles is that most things didn't actually come about that way.

This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.

We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.

baxtr•1h ago
Yes, and if you read Popper that’s exactly how he defined rationality / the scientific method: to solve problems of life.
lsp•28m ago
A lot of people really need to be reminded of this.

I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.

cjs_ac•4h ago
I think the absolutism is kind of the point.
ineedaj0b•4h ago
rationalism got pretty lame the last 2-3 years. imo the peak was trying to convince me to donate a kidney.

post-rationalism is where all the cool kids are and where the best ideas are at right now. the post rationalists consistently have better predictions and the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or chickens cause more suffering eating bugs outside.

they also let SF get run into the ground until their detractors decided to take over.

josephg•3h ago
Where do the post rats hang out these days? I got involved in the stoa during covid until the online community fragmented. Are there still events & hangouts?
jes5199•1h ago
postrats were never a coherent group but a lot of people who are at https://vibe.camp this weekend probably identify with the label. some of us are still on twitter/X
Trasmatta•1h ago
Not "post rat", but r/SneerClub is good for criticisms of rationalists (some from former rationalists)
hiAndrewQuinn•4h ago
>Maybe it's actually going to be rather benign and more boring than expected

Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.

Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.

That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.

Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.

s1mplicissimus•4h ago
> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.

I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that was taught to the generation that lived through the Cold War, so that's how they act. That bit was in regard to climate change, but I can easily see it applying to AI as well (even though I personally believe that the whole "AI eats the world" arc is only so popular due to marketing efforts of the corresponding industry).

hiAndrewQuinn•3h ago
It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.

I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.

lcnPylGDnU4H9OF•2h ago
> like a burger chain advertising itself on the brutality of its factory farms

It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.

I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)

hiAndrewQuinn•1h ago
That is how I normally hear the marketing theory described when people go into it in more detail.

I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.

Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.

Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.

ummonk•58m ago
It's also reasonable as a Pascal's wager type of thing. If you can't affect the outcome, just prepare for the eventuality that it will work out because if it doesn't you'll be dead anyway.
voidhorse•4h ago
To me they have always seemed like a breed of "intellectuals" who only want to use knowledge to inflate their own egos and maintain a fragile superiority complex. They aren't actually interested in the truth so much as in convincing you that they are right.
camgunz•2h ago
Yeah I don't know or really care about Rationalism or whatever. But I took Aaronson's advice and read Zvi Mowshowitz' Childhood and Education #9: School is Hell [0], and while I share many of the criticisms (and cards on the table I also had pretty bad school experiences), I would have a hard time jumping onto this bus.

One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2], they're around 4.2%.

But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats kids for whom it fails. If you're not the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!

But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.

[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/

[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf

ummonk•54m ago
> but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn"

The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.

By the way, this small fraction of the population is responsible for driving the bulk of R&D.

genewitch•28m ago
My wife is an LMSW and sees ~5 people a day, in a metro area with a population of 153,922. Mind you, these are adults, but they're all mandated to show up.

there's only ~3300 counties in the USA.

I'll let you extrapolate how CPS can handle "4000/day". Like, 800 people with my wife's qualifications and caseload are equivalent to 4000/day.

dv_dt•1h ago
The rationalist discussions rarely consider what should be the baseline assumption: what if one or more of the logical assumptions or associations are wrong? They also tend not to plan systematically for validation. And in many domains, what holds true at one moment can easily shift.
resource_waste•1h ago
100%

Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.

They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.

cogman10•44m ago
It's every bit a proto religion. And frankly quite reminiscent of my childhood faith.

It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.

In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".

Therein lies the problem: a bunch of people confidently speaking outside their expertise and being taken seriously by others.

js8•1h ago
> The people involved all seem very... Full of themselves ?

Kinda like Mensa?

parpfish•36m ago
When I was a kid I wanted to be in Mensa because being smart was a big part of my identity and I was constantly seeking external validation.

I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects

NoGravitas•2m ago
Personally, I subscribe to Densa, the journal of the Low-IQ Society.
baxtr•1h ago
My main problem with the movement is their emphasis on Bayesianism in conjunction with an almost total neglect of Popperian epistemology.

In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.

Popper injects an epistemic humility that Bayesianism, taken alone, can miss.

I think that aligns well with your observation.

agos•1h ago
is epidemiology a typo for epistemology or am I missing something?
baxtr•1h ago
Yes, thx, fixed it.
kurtis_reed•57m ago
So what's the difference between Bayesianism and Popperian epistemology?
uniqueuid•49m ago
Popper requires you to posit null hypotheses to falsify (although there are different schools of thought on what exactly you need to specify in advance [1]).

Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.

[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
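The prior-to-posterior update described here can be sketched with the textbook Beta-Binomial conjugate pair (an illustrative example of my own, not something from the thread): a Beta prior over a coin's bias combines with observed flips to give a Beta posterior.

```python
# Minimal Bayesian update: Beta(a, b) prior over a coin's bias,
# conjugate update after observing some heads and tails.
from fractions import Fraction

a, b = 1, 1                  # Beta(1, 1): uniform prior belief
heads, tails = 7, 3          # observed data

a_post, b_post = a + heads, b + tails    # posterior is Beta(8, 4)
posterior_mean = Fraction(a_post, a_post + b_post)

print(posterior_mean)        # 2/3: belief has shifted toward "biased to heads"
```

Unlike a frequentist test against a null hypothesis, the output is a full posterior belief, which is exactly the trade-off described above.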

kragen•50m ago
Hmm, what epistemological propositions of Popper's do you think they're missing? To the extent that I understand the issues, they're building on Popper's epistemology, but by virtue of having a more rigorous formulation of the issues, they resolve some of the apparent contradictions in his views.

Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong?

uniqueuid•47m ago
The counterpoint here is that in practice, humility is only found in the best of frequentists, whereas the rest succumb to hubris (i.e. the cult of irrelevant precisions).
the_af•44m ago
What really confuses me is that many in this so called "rationalist" clique discuss Bayesianism as an "ism", some sort of sacred, revered truth. They talk about it in mystical terms, which matches the rest of their cult-like behavior. What's the deal with that?
mitthrowaway2•33m ago
That's specific to Yudkowsky, and I think that's just supposed to be humor. A lot of people find mathematics very dry. He likes to dress it up as "what if we pretend math is some secret revered knowledge?".
jrm4•30m ago
Yeah but these feels like "more truth is said in jest etc etc"
ummonk•1h ago
Rationalists have always rubbed me the wrong way too but your argument against AI doomerism is weird. If you care about first principles, how about the precautionary principle? "Maybe it's actually benign" is not a good argument for moving ahead with potentially world ending technology.
xyzzy123•58m ago
I don't think "maybe it's benign" is where anti doomers are coming from, more like, "there are also costs to not doing things".

The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.

In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.

XorNot•50m ago
It's just the slippery-slope fallacy: if X then obviously Y will follow, and there will be no further decisions, debate or time before it does.
parpfish•38m ago
One of my many peeves has been the way that people misuse the term “slippery slope” as evidence for their stance.

“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”

They call out the fallacy they are committing BY NAME and think that it somehow supports their conclusion?

IshKebab•50m ago
He wasn't saying "maybe it's actually going to be benign" is an argument for moving ahead with potentially world ending technology. He was saying that it might end up being benign and rationalists who say it's definitely going to be the end of the world are wildly overconfident.
eviks•36m ago
But not accepting this technology could also be potentially world ending, especially if you want to start many new wars to achieve that, so caring about the first principles like peace and anti-ludditism brings us back to the original "real lack of humility..."
nradov•24m ago
The precautionary principle is stupid. If people had followed it then we'd still be living in caves.
NoGravitas•1h ago
I've always seen the breathless Singularitarian worrying about AI Alignment as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman, from algorithmic bias, to policy-washing, to energy costs and acceleration of wealth concentration. It also leads to so-called longtermism - discounting the benefits of solving current real problems and focusing entirely on solving a hypothetical one that you think will someday make them all irrelevant.
philipov•47m ago
yep, the biggest threat posed by AI comes from the capitalists who want to own it.
parpfish•45m ago
Or the propagandists that use it
bilbo0s•5m ago
They won't be allowed to use it unless they serve the capitalists who own it.

It's not social media. It's a model the capitalists train and own. The best the rest of us will have access to are open-source ones. It's like the difference between going into court backed by Google searches versus Lexis/Nexis: you're gonna have a bad day with the judge.

Here's hoping the open-source stuff gets trained on quality data rather than Reddit and 4chan. Given how the courts are leaning on copyright, and the lack of vetted data outside copyright holders' remit, I'm not sanguine about the chances of parity long term.

impossiblefork•41m ago
I actually think the people developing AI might well not get rich off it.

Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.

I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.

This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if it keeps getting better.

cogman10•33m ago
It smacks of a goldrush. The winners will be the people selling shovels (nVidia) and housing (AWS). It may also be the guides showing people the mountains (Cursor, OpenAI, etc).

I suspect you'll see a few people "win" or strike it rich with AI, but the vast majority will simply be left with a big bill.

nradov•26m ago
When railroads were first being built across the continental USA, those companies also had high valuations (for the time). Most of them ultimately went bankrupt or were purchased for a fraction of their peak valuation. But the tracks remained, and many of those routes are still in use today.
bilbo0s•12m ago
Just checked.

The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.

On the one hand, I guess that's just life here in reality.

On the other, man, reality sucks sometimes.

tuveson•35m ago
My feeling has been that it’s a lot of people that work on B2B SaaS that are sad they hadn’t gotten the chance to work on the Manhattan Project. Be around the smartest people in your field. Contribute something significant (but dangerous! And we need to talk about it!) to humanity. But yeah computer science in the 21st century has not turned out to be as interesting as that. Maybe just as important! But Jeff Bezos important, not Richard Feynman important.
James_K•56m ago
Implicit in calling yourself a rationalist is the idea that other people are not thinking rationally. There are a lot of "we see the world as it really is" ideologies, and you can only subscribe to one if you have a certain self-assuredness that doesn't lend itself to healthy debate.
resters•44m ago
Not meaning to be too direct, but you are misinterpreting a lot about rationalists.

In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).

Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.

To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.

One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.

I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.

benreesman•37m ago
Any time people engage in some elaborate exercise and it arrives at "me and people like me should be powerful and not pay taxes and stuff," the reason for making the argument is not a noble one, the argument probably has a bunch of tricks and falsehoods in it, and there's never really any way to extract anything useful. Greed and grandiosity are both fundamentally contaminating processes.

These folks have a bunch of money because we allowed them to privatize the commons of 20th century R&D mostly funded by the DoD and done at places like Bell Labs, Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.

This has quite predictably led to a real mess that's getting worse by the day: the economic outlook is bleak, wars are breaking out or intensifying left, right, and center, and all of this traces a very clear lineage back to allowing a small group of people to privatize a bunch of public good.

It was a disaster when it happened in Russia in the 90s and its a disaster now.

mitthrowaway2•18m ago
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".

Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?

Are you sure you're not painting this group with an overly-broad brush?

BurningFrog•10m ago
An unfortunate fact is that people who are very annoying can also be right...
aredox•8m ago
They are the perfectly rational people who await the arrival of a robot god...

Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.

Seattle3503•5m ago
I think the rationalists have failed to humanize themselves. They let their thinkpieces define them entirely, but a studiously considered think piece is a narrow view into a person. If rationalists were more publicly vulnerable, people might find them more publicly relatable.
Fraterkes•4h ago
[flagged]
Fraterkes•4h ago
(I've also been somewhat dogmatic and angry about this conflict, in the opposite direction. But I wouldn't call myself a rationalist.)
codehotter•4h ago
I view this as a political constraint, cf. https://www.astralcodexten.com/p/lifeboat-games-and-backscra.... One's identity as Academic, Democrat, Zionist and so on demands certain sacrifices of you, sometimes of rationality. The worse the failure of empathy and rationality, the better a test of loyalty it is. For epistemic rationality, it would be best to https://paulgraham.com/identity.html, but for instrumental rationality it is not. Consequently, many people are reasonable only until certain topics come up, and it's generally worked around by steering the discussion to other topics.
voidhorse•4h ago
And this is precisely the problem with any dogma of rationality. It starts off ostensibly trying to help guide people toward reason but inevitably ends up justifying blatantly shitty social behavior like defense of genocide as "political constraint".

These people are just narcissists who use (often pseudo)intellectualism as the vehicle for their narcissism.

tome•3h ago
I'm curious how you assess, relatively speaking, the shittiness of defence of genocide versus false claims of genocide.
voidhorse•3h ago
Ignoring the subtext, actual genocide is obviously shittier and if you disagree I doubt I could convince you otherwise in the first place.

https://www.ohchr.org/en/press-releases/2024/11/un-special-c...

tome•3h ago
But that's not my question. My question was between defence of genocide and false accusations of genocide. (Of course actual genocide is "shittier" -- in fact that's a breathtaking understatement!)
kombine•1h ago
We have concrete examples of defence of genocide, such as by Scott Aaronson. Can you provide the examples of "false accusations of genocide", otherwise this is a hypothetical conversation.
tome•1h ago
I can certainly agree we have a concrete example of defence of purported genocide and a concrete example of an accusation of purported genocide. Beyond that I'd be happy to discuss further (although it's probably off topic).
noworriesnate•1h ago
Wouldn’t it be better to spend the time understanding the reality of the situation in Gaza from multiple angles rather than philosophizing on abstract concepts? I.e. there are different degrees of genocide, but that doesn’t matter in this context because what’s happening in Gaza is not abstract or theoretical.

In other words, your question ignores so much nuance that it’s a red herring IMO.

tome•1h ago
Well, what it's better for me to do is my business and what it's better for voidhorse to do is his/her business. He/she certainly doesn't have to respond.

Still, since he/she was so willing to make a claim of genocide (implicitly) I was wondering that, were it a false claim, would it be equally "blatantly shitty social behaviour, narcissistic use of (often pseudo)intellectualism for his/her narcissistic behaviour" as the behaviour he/she was calling out?

I'm pretty certain I understand the reality of the situation (in fact I'd accept reasonably short odds that I understand it better than anyone participating in the discussion on this story).

Fraterkes•3h ago
I don’t really buy this at all: I am more emotionally invested in things that I know more about (and vice versa). If Rationalism breaks down at that point it is essentially never useful.
lcnPylGDnU4H9OF•2h ago
> I don’t really buy this at all

For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this break down happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.

skybrian•2h ago
Anything in particular you want to link to as unreasonable?
komali2•40m ago
What's incredible to me is the political blindness. Surely at this point, "liberal zionists" would at least see the writing on the wall? Apply some Bayesian statistical analysis to popular reactions to unprompted military strikes against Iran or something, they should realize at this point that in 25 years the zeitgeist will have completely turned against this chapter in Israel's history, and properly label the genocide for what it is.

I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?

radicalbyte•4h ago
* 20 somethings who are clearly on spectrum

* Group are "special"

* Centered around a charismatic leader

* Weird sex stuff

Guys we have a cult!

krapp•4h ago
These are the people who came up with Roko's Basilisk, Effective Altruism and spawned the Zizians. I think Robert Evans described them not as a cult but as a cult incubator, or something along those lines.
ausbah•6m ago
So many of the people I've read in these rationalist groups sound like they need a hug and therapy.
t_mann•4h ago
> “You’re [X]?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”

> “Yes,” I replied, not bothering to correct the “physicist” part.

Didn't read much beyond that part. He'll fit right in with the rationalist crowd...

simianparrot•4h ago
No actual person talks like that -- and if they really did, they've taken on the role of a fictional character. Which says a lot about the clientele either way.

I skimmed a bit here and there after that, but this comes off as plain grandiosity. Even the title is a line you can imagine a Hollywood character speaking aloud as they look into the camera, before giving a smug smirk.

FeteCommuniste•3h ago
I assumed that the stuff in quotes was a summary of the general gist of the conversations he had, not a word for word quote.
riffraff•3h ago
I don't think GP objects to the literalness, as much as to the "I am known for always being right and I acknowledge it", which comes off as.. not humble.
junon•4h ago
I got to that part, thought it was a joke, and then... it wasn't.

Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.

derangedHorse•4h ago
Is it not a joke? I’m pretty sure it was.
alphan0n•4h ago
If that was a joke, all of it is.

*Guess I’m a rationalist now.

lcnPylGDnU4H9OF•2h ago
It doesn’t really read like a joke, but maybe. Regardless, I guess I can at least be another voice saying it didn’t land. It reads like someone literally said that to him verbatim and he literally replied with a simple “Yes.” (That said, it seems charitable to assume it was a joke, but that doesn’t mean it’s wrong to assume so.)
myko•1h ago
I laughed, definitely read that way to me
IshKebab•52m ago
I think the fact that we aren't sure says a lot!
joenot443•1h ago
Scott's done a lot of really excellent blogging in the past. Truthfully, I think you risk depriving yourself of great writing if you're willing to write off an author because you didn't like one sentence.

GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.

dcminter•4h ago
Also...

> they gave off some (not all) of the vibes of a cult

...after describing his visit with an atmosphere that sounds extremely cult-like.

wizzwizz4•39m ago
No, Guru Eliezer Yudkowsky wrote an essay about how people asking "This isn’t a cult, is it?" bugs him, so it's fine actually. https://www.readthesequences.com/Cultish-Countercultishness
James_K•54m ago
I made it to “liberal zionist” before quitting.
johnfn•46m ago
To be honest, if I encountered Scott Aaronson in the wild I would probably react the same way. The guy is super smart and thoughtful, and can write more coherently about quantum computing than anyone else I'm aware of.
kragen•34m ago
Why would you comment on the post if you stopped reading near its beginning? How could your comments on it conceivably be of any value? It sounds like you're engaging in precisely the kind of shallow dismissal the site guidelines prohibit.
gooseus•4h ago
I've never thought ill of Scott Aaronson and have often admired him and his work when I stumble across it.

However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"

https://en.wikipedia.org/wiki/Rhinoceros_(play)

Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.

dcminter•4h ago
Upvote for the play link - that's interesting and I hadn't heard of it before. Worthy of a top-level post IMO.
NoGravitas•1h ago
I have always considered Scott Aaronson the least bad of the big-name rationalists. Which makes it slightly funny that he didn't realize he was one until Scott Siskind told him he was.
wizzwizz4•46m ago
Reminds me of Simone de Beauvoir and feminism. She wrote the book on (early) feminism, yet didn't consider herself a feminist until much later.
Joker_vD•3h ago
Ah, so it's like the Order of the October Star: certain people have simply realized that they are entitled to wear it. Or, rather, that they had always been entitled to wear it. Got it.
samuel•3h ago
I'm currently reading Yudkowsky's "Rationality: From AI to Zombies". Not my first try: since the book is just a collection of blog posts, I found it a bit hard to swallow due to its repetitiveness, and I gave up after the first 50 "chapters" the first time. Now I'm enjoying it way more, probably because I'm more interested in the topic now.

For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from Fiction", etc...).

For example, I recall being in lot of arguments that are purely "semantical" in nature. You seem to disagree about something but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.

I recommend giving it a try.

greener_grass•3h ago
I think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.

Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.

samuel•2h ago
I don't claim that his work is original (the AI-related work probably is, but it's only tangentially related to rationalism), but it's clearly presented and practical.

And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.

elt895•1h ago
Are there other philosophy- or history-grounded sources that are comparable? If so, I’d love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they’re addressing. From my experience, alternatives tend to fall into two categories: 1. genuine classical philosophy, which is usually incredibly hard to read, and after 50 pages I have no idea what the author is even talking about anymore; 2. basically self-help books that take one or very few ideas and repeat them ad nauseam for 200 pages.
bnjms•56m ago
I think you’re mostly right.

But also that it isn't what Yudkowsky is (was?) trying to do with it. I think he's trying to distill useful tools that increase baseline rationality. Religions have this. It's what the original philosophers are missing. (At least as taught; happy to hear counterexamples.)

sixo•55m ago
To the STEM-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision of a mathematically rooted idea.

They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this -- the humanities are overcooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.

Bjartr•1h ago
Yeah, the whole community side to rationality is, at best, questionable.

But the tools of thought that the literature describes are invaluable with one very important caveat.

The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.

It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.

If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.

the_af•40m ago
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.

It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".

And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".

wizzwizz4•38m ago
Chapter 67. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu... (And since it's in the book, and people know about it, obviously they're not doing it themselves.)
wannabebarista•21m ago
This reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than not).
hiAndrewQuinn•1h ago
If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
d--b•1h ago
Sorry, I haven't followed: what is it that these guys call Rationalism?
retRen87•1h ago
He already had a rationalist “coming out” like ages ago. Dude just make up your mind

https://scottaaronson.blog/?p=2537

kragen•35m ago
While this was an interesting and enjoyable read, it doesn't seem to be a “rationalist ‘coming out’”. On the contrary, he's just saying he would have liked going to a ‘rationalist’ meeting.
resource_waste•1h ago
"I'm a Rationalist"

"Here are some labels I identify as"

So they aren't rational enough to understand that first principles don't objectively exist.

They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based.

I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.

I'm going to be a bit blunt and not humble at all: this person is a philosophical inferior to me. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few empirical pieces of evidence made me realize I was wrong.

"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."

bargainbin•1h ago
Never ceases to amaze me that the people who are clever enough to always be right are never clever enough to see how they look like complete wankers when telling everyone how they’re always right.
falcor84•57m ago
I don't see how that's any more "wanker" than this famous saying by Socrates; Western thought is wankers all the way down.

> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.

cogman10•38m ago
> clever enough to always be right

Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.

The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.

dr_dshiv•1h ago
Since intuitive and non-rational thinking are demonstrably rational in the face of incomplete information, I guess we’re all rationalists. Or that’s how I’m rationalizing it, anyway.
aosaigh•1h ago
> “You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”

Give me strength. So much hubris with these guys (and they’re almost always guys).

I would have assumed that a rationalist would look for truth and not correctness.

Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.

api•1h ago
That's exactly what Rationalism(tm) is.

The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.

Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.

Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win this portfolio makes me a genius!"

Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.

ModernMech•1h ago
Rationalists are better called Rationalizationists, really.
NoGravitas•1h ago
Probably the most useful book ever written about topics adjacent to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the topic of the book is nominally the Alt-Right, a lot more of it is about the capital-R Rationalist communities and individuals that incubated the neoreactionary movement that is currently dominant in US politics. It's probably the best book to read for understanding how we got politically and intellectually from where we were in 2010, to where we are now.

https://www.goodreads.com/book/show/41198053-neoreaction-a-b...

kragen•41m ago
Thanks for the recommendation! I hadn't heard about the book.
apples_oranges•1h ago
Never heard of the man, but that was a fun read. And it looks like a fun club to be part of. Until it becomes unbearable, perhaps. Also raises the chances of getting invited to birthday orgies..? Perhaps I should have stayed in academia..
moolcool•46m ago
> Until it becomes unbearable perhaps

Until?

Barrin92•55m ago
>"frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future"

One of the funniest and most accurate turns of phrases in my mind is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen people myths, their version of the Book of Revelation, gnosticism and what have you.

I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blog post, if not more so, there's just no corollary to this. There's no millenarian obsession over machines taking over the world, bizarre trust in rationalism, or cult-like compounds full of socially isolated new-age prophets.

bee_rider•46m ago
The main things I don’t like about rationalism are aesthetic (the name sucks and misusing the language of Bayesian probability is annoying). Sounds like they are a thoughtful and nice bunch otherwise(?).
mathattack•28m ago
Logic is an awesome tool that took us from Greek philosophers to the gates on our computers. The challenge with pure rationalism is checking the first principles that the thinking comes from. Logic can lead you astray if the principles are wrong, or you miss the complexity along the way.

On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions. [0]

On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]

This doesn't discount logic, it just highlights that answers should always come with provisional humility.

And I'm still a superfan of Scott Aaronson.

[0] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...

[1] https://www.jstor.org/stable/2400494

jrm4•25m ago
Yup, can't stress the word "tool" enough.

It's a "tool," it's a not a "magic window into absolute truth."

Tools can be good for a job, or bad. Carry on.

kragen•21m ago
The ‘rationalist’ group being discussed here aren't Cartesian rationalists, who dismissed empiricism; rather, they're Bayesian empiricists. Bayesian probability turns out to be precisely the unique extension of Boolean logic to continuous real probability that Aristotle (nominally an empiricist!) was lacking. (I think they call themselves “rationalists” because of the ideal of a “rational Bayesian agent” in economics.)

However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
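(An illustrative aside, not from the thread: the claim that Bayesian probability extends Boolean logic can be seen in a few lines. When probabilities are pinned to 0 or 1, Bayes' rule collapses to ordinary deduction; with intermediate values, belief shifts by degrees. The function name and numbers below are just for illustration.)

```python
def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' rule."""
    # Total probability of the evidence E under both hypotheses.
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Degenerate (Boolean) case: E is certain under H and impossible otherwise,
# so observing E makes H certain -- Bayes' rule reproduces modus ponens.
print(bayes_update(0.5, 1.0, 0.0))   # 1.0

# Graded case: E is merely more likely under H, so belief rises but stays uncertain.
print(bayes_update(0.5, 0.8, 0.3))   # ~0.727
```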

jrm4•26m ago
My eyes started to glaze over after a bit; so what I'm getting here is that there's a group that calls themselves "Rationalists," but in just about every externally meaningful sense, they're smelling like -- perhaps not a cult, but certainly a lot of weird insider/outsider talk that feels far from rational?
pja•10m ago
Capital-R Rationalism definitely bleeds into cult-like behaviour, even if they haven’t necessarily realised that they’re radicalising themselves.

They’ve already had a splinter rationalist group go full cult, right up to and including a flameout of murders and a shoot-out with the cops: https://en.wikipedia.org/wiki/Zizians

pja•10m ago
Scott Aaronson, the man who turned scrupulosity into a weapon against his own psyche, is a capital-R Rationalist?

Yeah, this surprises absolutely nobody.