frontpage.

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•7m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•9m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•12m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
2•pabs3•14m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
1•pabs3•15m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•16m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•16m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•21m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•30m ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•34m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•38m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•40m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•49m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•53m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•54m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
2•ambitious_potat•1h ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•1h ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
4•akagusu•1h ago•1 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
9•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

Famous cognitive psychology experiments that failed to replicate

https://buttondown.com/aethermug/archive/aether-mug-famous-cognitive-psychology/
172•PaulHoule•4mo ago

Comments

delichon•4mo ago
Approximate replication rates in psychology:

  social      37%
  cognitive   42%
  personality 55%
  clinical    44%
So a list of famous psychology experiments that do replicate may be shorter.

https://www.nature.com/articles/nature.2015.18248
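(As a rough arithmetic check of that claim, in Python; an unweighted average, since the true figure depends on how many studies fall in each area:)

    rates = {"social": 0.37, "cognitive": 0.42, "personality": 0.55, "clinical": 0.44}
    print(sum(rates.values()) / len(rates))  # 0.445: worse than a coin flip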

NewJazz•4mo ago
I think one would hope the famous ones would replicate more often.
sunscream89•4mo ago
There may be minute details, like having a confident frame of reference for the confidence tests. Cultures, even individual psychologies, might swing certain ideas and their compulsions.
tomjakubowski•4mo ago
Nonreplicable publications are cited more than replicable ones (2021)

> We use publicly available data to show that published papers in top psychology, economics, and general interest journals that fail to replicate are cited more than those that replicate. This difference in citation does not change after the publication of the failure to replicate. Only 12% of postreplication citations of nonreplicable findings acknowledge the replication failure.

https://www.science.org/doi/10.1126/sciadv.abd1705

Press release: https://rady.ucsd.edu/why/news/2021/05-21-a-new-replication-...

nitwit005•4mo ago
This feels like some sort of truth telling paradox, where if you assume the study is true, then seeing a citation like this means it's likely not true.
esperent•4mo ago
This is at least partially a failure in publication. Once a paper is published, it's usually left up in the same state forever. If it fails to replicate, that data is published somewhere else. So when someone references the paper, and the diligent reader follows up and reads the reference, it looks convincing, just as it did when first published. It's not reasonable to expect the reader, or even the writer, to be so well versed in all the thousands and thousands of papers published that they know when something has failed to be replicated.

What we need is for every paper to be published alongside a stats card that is kept up to date: how many times it's been cited, how many times people tried to replicate it, and how many times they failed.
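(A minimal sketch of what such a stats card could hold, as a data structure; all field names and numbers here are invented for illustration:)

    from dataclasses import dataclass

    @dataclass
    class StatsCard:
        doi: str
        citations: int
        replication_attempts: int
        successful_replications: int

        @property
        def replication_rate(self) -> float:
            if self.replication_attempts == 0:
                return float("nan")  # no attempts yet: unknown, not zero
            return self.successful_replications / self.replication_attempts

    card = StatsCard("10.0000/example", citations=1200,
                     replication_attempts=5, successful_replications=1)
    print(f"{card.replication_rate:.0%}")  # 20%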

dlcarrier•4mo ago
Isn't the unexpected more famous than the expected?
t_mann•4mo ago
Thanks for providing the reference, that's useful context. Those are awful replication rates, worse than a coin flip. Sounds like the OP can add their own introduction to their list. From the introduction:

> Most results in the field do actually replicate and are robust[citation needed]

glial•4mo ago
The incentive of all psychology researchers is to do new work rather than replications. Because of this, publicly-funded psychology PhDs should be required to perform study replication as part of their training. Protocol + results should be put in a database.
gwd•4mo ago
How interesting would it be if every PhD thesis had to have a "replication" section, where they tried to replicate some famous paper's results.
analog31•4mo ago
Sure, dump it on the lowest level employee, who has the least training and the most to lose. Punish them for someone else's bad research. Grad school already takes too long, pays too little, and involves too much risk of not finishing. And it doesn't solve the problem of people having to generate copious quantities of research in order to sustain their careers.

Disclosure: Physics PhD.

SpaceManNabs•4mo ago
One thing that confuses me is that some of these papers were successfully replicated, so juxtaposing them with the ones that have not been replicated at all, given the title of the page, feels a bit off. Not sure if that's fair.

The ego depletion effect seems intuitively surprising to me. Science is often unintuitive. I do know that it is easier to make forward-thinking decisions when I am not tired, so I don't know.

taeric•4mo ago
The idea isn't that it is easier to do things when not tired. It is that you specifically get tired exercising self control.

I think that can be subtly confused by people thinking you can't get better at self control with practice? That is, I would think a deliberate practice of doing more and more self control every day should build up your ability to do more self control. And it would be easy to think that that means you have a stamina for self control that depletes in the same way that aerobic fitness can work. But, those don't necessarily follow each other.

ceckGrad•4mo ago
>some of these papers were successfully replicated, so juxtaposing them with the ones that have not been replicated at all, given the title of the page, feels a bit off. Not sure if that's fair.

I don't like Giancotti's claims. He wrote: >This post is a compact reference list of the most (in)famous cognitive science results that failed to replicate and should, for the time being, be considered false.

I don't agree with Giancotti's epistemological claims but today I will not bloviate at length about the epistemology of science. I will try to be brief.

If I understand Marco Giancotti correctly, one particular point is that Giancotti seems to be saying that Hagger et al. have impressively debunked Baumeister et al.

The ego depletion "debunking" is not really what I would call a refutation. It says, "Results from the current multilab registered replication of the ego-depletion effect provide evidence that, if there is any effect, it is close to zero. ... Although the current analysis provides robust evidence that questions the strength of the ego-depletion effect and its replicability, it may be premature to reject the ego-depletion effect altogether based on these data alone."

Maybe Baumeister's protocol was fundamentally flawed, but the counter-argument from Hagger et al. does not convince me. I wasn't thrilled with Baumeister's claims when they came out, but now I am somehow even less thrilled with the claims of Hagger et al., and I absolutely don't trust Giancotti's assessment. I could believe that Hagger executed Baumeister's protocol correctly, but I can't believe Giancotti has a grasp of what scientific claims "should" be "believed."

SpaceManNabs•4mo ago
You make some good points based on your deeper read. I am a bit saddened that the rest of the comment section (the top 6 comments as of right now) devolved into "look at how silly psychology is with all its p-hacking"

That might be true, but this article's comment section isn't a good place for it because it doesn't seem like the article is entirely fair. I would not call it dishonest, but there is a lack of certainty and finality in being able to conclude that these papers have been successfully proven to not be replicable.

Terr_•4mo ago
> Source: Hagger et (63!) al. 2016

I can't help chuckling at the idea that over 1.98 * 10^87 people were involved in the paper.
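(Checking the arithmetic of the joke is a one-liner in Python:)

    import math
    print(f"{math.factorial(63):.3e}")  # 1.983e+87, reading "63!" as a factorial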

dlcarrier•4mo ago
If you were to meet a "normal" person, would you interpret that as meaning "perpendicular" or as meaning "the kind of person that doesn't look at everything like it's a mathematical expression"?
Terr_•4mo ago
Normal: "The kind of person who doesn't go out of their way to put-down other people on HN for being nerdy when a linked article contains a weird editorial interjection that bears an unusual resemblance to a math expression."
recursive•4mo ago
Um, actually, the interpretation here is "factorial", not "perpendicular".
fsckboy•4mo ago
famous cognitive psychology experiments that do replicate: IQ tests

http://www.psychpage.com/learning/library/intell/mainstream....

in fact, the foundational statistical models considered the gold standard for statistics today were developed for this testing.

alphazard•4mo ago
> in fact, the foundational statistical models considered the gold standard for statistics today were developed for this testing.

The normal distribution predates the general factor model of IQ by hundreds of years.[0]

You can try other distributions yourself, it's going to be hard to find one that better fits the existing IQ data than the normal (bell curve) distribution.

[0] https://en.wikipedia.org/wiki/Normal_distribution#History
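(If you want to try that yourself, here's a minimal sketch with SciPy; the scores are simulated stand-ins, not real IQ data:)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores = rng.normal(100, 15, 10_000)  # placeholder sample, not real data

    # Maximum-likelihood fit for a few candidate distributions;
    # higher total log-likelihood means a better fit to the data.
    for dist in (stats.norm, stats.logistic, stats.gumbel_r):
        params = dist.fit(scores)
        print(dist.name, round(dist.logpdf(scores, *params).sum()))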

fsckboy•4mo ago
Darwin's cousin, Francis Galton, after whom the log-normal distribution is often called the Galton distribution, was among the first to investigate psychometrics.

not realizing he was hundreds of years late to the game, he still went ahead and coined the term "median"

more tidbits here https://en.wikipedia.org/wiki/Francis_Galton#Statistical_inn...

astrange•4mo ago
Survivorship bias. You can easily make someone's IQ test not replicate. (Hit them on the head really hard.)
dlcarrier•4mo ago
I took an IQ test as a high school student, and one of the subtests involved placing a stack of shuffled pictures in chronological order. I had one series in the incorrect order, because I had no understanding of the typical behavior of snowfall. The test proctor said almost everyone she tested mixed that one up, because it doesn't snow in the area where I live.

I have no doubt that IQ tests reproducibly measure the test taker's ability to pass tests, as well as to perform in a society that the tests are based on.

I think it's disingenuous to attribute IQ to intelligence as a whole though, and it is better understood as an indicator of cultural intelligence.

I would expect that, for cultures whose members score below average on IQ tests from the US, an equivalent IQ test created within that culture would show average members of that culture scoring higher than average members of US culture.

3cKU•4mo ago
Raven's Progressive Matrices is often administered. Is that test culturally biased? Does that test measure only ability to take that test and nothing else?
teamonkey•4mo ago
Yes, it’s almost certainly linked to quality of schooling and exposure to those types of problems, amongst other things; see the Flynn effect.

https://en.wikipedia.org/wiki/Flynn_effect

3cKU•4mo ago
Access to schooling etc can't be the whole story: "black students from prosperous families tend to score higher in IQ than blacks from poor families, but they score no higher, on average, than whites from poor families".
tptacek•4mo ago
So IQ is malleable and SES-dependent and GxE interactions are real.
3cKU•4mo ago
No. The first part of that quote is consistent with any hypothesis (G only, E only, G&E), i.e. cannot distinguish between them.
dlcarrier•4mo ago
Puzzle tests have their own problems. They're only effective at measuring puzzle-solving ability when they are novel, so retaking the test would lead to higher scores, and practicing even more so. They also only measure puzzle-solving abilities, which are necessary in some but not all applied intelligence tasks.
NalNezumi•4mo ago
I can't quite find the study, but there was one mentioned to me about showing the Raven's Progressive Matrices test to hunter-gatherer tribes, and they did horribly. But those tribes do geometric pattern recognition on a daily basis during hunting, so the tester tried to modify the base shapes to mimic more "realistic" shapes for hunter-gatherers (rather than unusual shapes such as perfect triangles, circles, and rectangles, hard to find in nature), and the scores normalized to the median.

I was told this in context of "cultural psychology" how many tests or psychological observations and metrics poorly translate over culture. (especially when you try to pin it on some success metric)

tptacek•4mo ago
A fun irony (every part of this scientific question is gnarly as fuck, which can make it interesting to follow) is that the more culturally biased an IQ test is, the more g-loaded it will turn out to be.

https://pubmed.ncbi.nlm.nih.gov/24104504/

dlcarrier•4mo ago
I think humanity majorly underplays how much success is based on culture. I have a long-held theory that offices don't exist to accomplish work, but to establish social relationships, and that work itself is a secondary product of the office community.

My belief was reinforced when companies switched to remote work, and management at many companies complained that it was difficult to tell who was and wasn't working, when the managers didn't get to watch the workers. Abstracting the social relationship from the results of work will make it easier to judge the work itself, but more difficult to enforce the social relationship. When the abstraction occurred, those who were basing the status of their employees on the social relationship, and not the work output, were especially disadvantaged.

growingkittens•4mo ago
> I would expect that, for cultures whose members score below average on IQ tests from the US, an equivalent IQ test created within that culture would show average members of that culture scoring higher than average members of US culture.

A moment from the show "Good Times" in 1974. https://m.youtube.com/watch?v=DhbsDdMoHC0 at 1:25

dlcarrier•4mo ago
Apparently it's referencing a real test, called the BITCH test: https://en.wikipedia.org/wiki/Black_Intelligence_Test_of_Cul...

Also, I forgot how annoying comic relief characters were in sitcoms. They are the opposite of relieving.

fsckboy•4mo ago
in my comment i gave a link to what a fairly large group of university professors, scientists who study, test, and measure intelligence, say they've learned about intelligence. you think you know more, but you don't even investigate or reference what they say; you just think it's the way you think it should be, based on ideas you have that you have not tested. not very convincing.

also, cultures don't have iq's; there is no known link to culture.

pessimizer•4mo ago
What exactly are they meant to replicate other than other IQ tests? They don't make a falsifiable statement about anything, other than that somebody who scores high on a test carefully designed and calibrated to match previously given IQ tests will tend to match the results that the same people get on other tests calibrated in the exact same way.

If you're trying to say they replicate over the lifetime of the same person, I've had a 15 point swing between tests, out of the few I've taken. What did stay constant for me from age 10 to age 40 was my Myers-Briggs test (my dad was a metrics obsessive), and that's obvious horseshit. Consistency doesn't mean you're measuring what you claim to be measuring.

edit: if it matters, scores were between 137 and 152, so exactly an entire standard deviation. That's like the difference between sub-Saharan and European that racists are always crowing about, but in the same person. IQ doesn't even personally replicate for me.

teamonkey•4mo ago
You can prepare for IQ tests, just as you can for any other test, and you can get better at some of the problems in these tests the more you practice them, just as you get better at Sudoku puzzles the more you do them.

Related: the brain is plastic and can adapt to challenges in different ways. https://www.scientificamerican.com/article/london-taxi-memor...

fsckboy•4mo ago
>What exactly are they meant to replicate other than other IQ tests?

if a variety of different IQ tests sort the same people the same way, even though every question on the tests is different from the other tests, you have shown that the test is showing something about the subjects, not something about the tests. and that is replicable, and falsifiable.

if you follow the same people over time and provide them with new tests, and they continue to sort in the same relative fashion, you have increased confidence that you are measuring something relatively fixed, not variable. For statistical significance (look it up) you don't draw conclusions on the basis of one person (or one Dad) but on population samples tested under standard conditions.
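(A concrete reading of "sort in the same relative fashion": test-retest reliability as a rank correlation, sketched here with made-up scores:)

    from scipy.stats import spearmanr

    session_1 = [95, 102, 110, 118, 126, 133]  # hypothetical scores, test A
    session_2 = [97, 100, 115, 113, 130, 131]  # same people later, test B
    rho, p = spearmanr(session_1, session_2)
    print(round(rho, 2))  # ~0.94: the two tests sort the group almost identically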

this is like all study results published here: a thousand nerds who've never studied intelligence come up with a hundred objections to what was tested, assuming with arrogance that the people who specialized and did the work aren't considering what comes off the top of this nerd's head. Better qualified nerds did this work.

>Myers-Briggs...'s obvious horseshit

Myers Briggs is not complete horseshit; it correlates closely to, but is not as good a fit as, the generally accepted Big Five Factor system, the gold standard of personality tests: you should educate yourself a bit more. Myers-Briggs essentially tries to phrase everything in a positive way, where the Big Five separates out and includes Neuroticism, which is a more negative (for the person) trait. All these traits should be considered adaptive till proven otherwise, so resist the urge to judge.

tptacek•4mo ago
The Big Five --- not all that great either!

https://www.stat.cmu.edu/~brian/Pmka-Attack-V71-N3/pmka-2006...

(1st section).

ausbah•4mo ago
i wonder what the replication rate is for ML papers
PaulHoule•4mo ago
From working in industry and rubbing shoulders with CS people who prioritize writing papers over writing working software I’m sure that in a high fraction of papers people didn’t implement the algorithm they thought they implemented.
avdelazeri•4mo ago
Don't get me started, I have seen repos that I'm fairly sure never ran in their presented form. A guy in our lab thinks authors purposefully mess up their code when publishing on GitHub to make it harder to replicate. I'm starting to come around on his theory.
KingMob•4mo ago
And most medical studies. It's just as bad as social psych, if not worse, because there's real money at stake in churning out new drugs.
intalentive•4mo ago
Nowadays everyone publishes their code. There’s typically a project page on github.io, a paper on arxiv.org, and a public repo.
aeve890•4mo ago
>Source: Stern, Gerlach, & Penke (2020)

Wow, what are the odds?

https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experime...

NooneAtAll3•4mo ago
I'm still amazed that wikipedia doesn't have a redirect away from its mobile site
dang•4mo ago
(It's on my list to rewrite those URLs in HN comments at least)
NoMoreNicksLeft•4mo ago
Please, please, please... can you rewrite reddit links to old.reddit.com too? Not that there's much reason to link there, but it makes my eyes bleed.
dang•4mo ago
It depends if the bulk of the HN community supports it. As you probably know, we already do that for submission URLs.
dlcarrier•4mo ago
I thought you were pointing out some bias by comparing the research to previous research from the same authors. It took me far too long to realize that the experiment was from 100 years ago, and you were pointing out that the names were coincidentally the same.
jbentley1•4mo ago
This is a great list for people who want to smugly say "Um, actually" a lot in conversation.

Based on my brief stint doing data work in psychology research, amongst many other problems they are AWFUL at stats. And it isn't a skill issue as much as a cultural one. They teach it wrong and have a "well, everybody else does it" attitude towards p-hacking and other statistical malpractice.

Waterluvian•4mo ago
Um, actually I’d say it is the responsibility of all scientists, both professional and amateur, to point out falsehoods when they’re uttered, and not an act of smugness.
rolph•4mo ago
[um] has contexts, but is usually a cue that something unexpected, off the average, is about to be said.

[actually] is a neutral declaration that some cognitive structure was presented but is at odds with physically observable fact, which will now be laid out for you.

sputr•4mo ago
As someone who's part of a startup (hrpotentials.com) trying to bring truly scientifically valid psychological testing into HR processes .... yeah. We've been at it for almost 7 years, and we're finally at a point where we can say we have something that actually makes scientific sense - and we're not inventing anything new, just commercializing the science! It only took an electrical engineer (not me) with a strong grasp of statistics working for years with a competent professor of psychology to separate the wheat from the chaff. There's some good science there it's just ... not used much.
PaulHoule•4mo ago
Yeah, this is an era which is notorious for pseudoscience.
obviouslynotme•4mo ago
How are you going to get around Griggs v. Duke Power Co.? AFAIK, personality tests have not (yet) been given the regulatory eye, but testing cognitive ability has.
odyssey7•4mo ago
There’s surely irony here
wduquette•4mo ago
"they are AWFUL at stats."

SF author Michael Flynn worked as a process control engineer at his day job; he wrote about how designing statistically valid experiments is incredibly difficult, and the potential for fooling yourself is high, even when you really do know what you are doing and you have nearly perfect control over the measurement setup.

And on top of it you're trying to measure the behavior of people not widgets; and people change their behavior based on the context and what they think you're measuring.

There was a lab set up to do "experimental economics" at Caltech back in the late 80's/early 90's. Trouble is, people make different economic decisions when they are working with play money rather than real money.

dgfitz•4mo ago
> Trouble is, people make different economic decisions when they are working with play money rather than real money.

Understated even. Ever play poker with just chips and no money behind them? Nobody cares, there is no value to the plastic coins.

Projectiboga•4mo ago
Experimental design is one of the big four academic subjects within statistics. The math is complex even before the issues of the effects of the experimental situation.
SkyMarshal•4mo ago
Oblig link to Norvig's "Warning Signs in Experimental Design": https://www.norvig.com/experiment-design.html
jci•4mo ago
Reminds me of Feynman’s Cargo Cult Science speech:

https://people.cs.uchicago.edu/~ravenben/cargocult.html

turnsout•4mo ago
I read in a study that it takes 10,000 hours to become proficient in statistics /s
iamthemonster•4mo ago
I was very surprised at how many statistical methods are taught in undergraduate psychology. Far more statistics than I ever touched in engineering for sure. Yet the undergrads really treated statistics as a cookbook, where they just wanted to be told the recipe and they'd follow it. Honestly they'd have been better off just eyeballing data and collaborating with statisticians for the analysis.
lmpdev•4mo ago
The problem with a lot of the “higher free variable” sciences like psychology, ecology, sociology, etc., is that they are the ones who need to be at the bleeding edge of statistics but often aren't.

They absolutely need Bayesian competitive hypothesis testing but are often the least likely to use it.

eviks•4mo ago
> And it isn't a skill issue as much as a cultural one. They teach it wrong

It's definitely a skill issue then

abandonliberty•4mo ago
https://en.wikipedia.org/wiki/Stereotype_threat shows up in this list as not replicated; however, it is one of the most studied phenomena in psychology.

>meta-analyses and systematic reviews have shown significant evidence for the effects of stereotype threat, though the phenomenon defies over-simplistic characterization.[22][23][24][25][26][27][28][9]

Failing to reproduce an effect doesn't prove it isn't real. Mythbusters would do this all the time.

On the other hand, some empires are built on publication malpractice.

One of the worst that I know is John Gottman. Marriage counselling based on 'thin slicing'/microexpressions/'Horsemen of the Apocalypse'. His studies had been exposed as fundamentally flawed, and training based on his principles performed worse than prior offerings, before he was further popularized by Malcolm Gladwell in Blink.

This type of intellectual dishonesty underlies both of their careers.

https://en.wikipedia.org/wiki/Cascade_Model_of_Relational_Di...

https://en.wikipedia.org/wiki/The_Seven_Principles_for_Makin...

https://www.gottman.com/blog/this-one-thing-is-the-biggest-p...

hn_throw_250915•4mo ago
I thought we knew that these were vehicles by wannabe self-help authors to puff up their status for money. See for example “Grit” and “Deep Work” and other bullshit entries in a breathlessly hyped up genre of pseudoscience.
systemstops•4mo ago
Is anyone tracking how much damage to society bad social science has done? I imagine it's quite a bit.
feoren•4mo ago
We rack up quite a lot of awfulness with eugenics, phrenology, the "science" that influenced Stalin's disastrous agriculture policies in the early USSR, overpopulation scares leading to China's one-child policy, etc. Although one could argue these were back-justifications for the awfulness that people wanted to do anyway.
systemstops•4mo ago
Those things were not done by awful people though - they all thought they were serving the public good. We only judge it as awful now because of the results. Nearly all of these ideas (Lysenkoism I think was always fringe) were embraced by the educated elites of the time.
feoren•4mo ago
Lysenkoism! That's the one. Thank you for reminding me of the name (and for knowing what I was grasping at).

I think some "bad people" used eugenics and phrenology to justify prior hate, but they were also effective tools at convincing otherwise "good people" to join them.

daoboy•4mo ago
You are absolutely right. Another interesting example: The man who invented the lobotomy won a Nobel Prize for it.
izabera•4mo ago
i'm struggling to imagine many negative effects on society caused by the specific papers in this list
systemstops•4mo ago
Public policies were made (or justified) based on some of this research. People used this "settled science" to make consequential decisions.

Stereotype threat for example was widely used to explain test score gaps as purely environmental, which contributed to the public seeing gaps as a moral emergency that needed to be fixed, leading to affirmative action policies.

seec•4mo ago
To be honest, whether they had a "study" proving it or not I think those things would have happened anyway.

It's just a question of power in the end. And even if you could question the legitimacy of "studies" the people in power use to justify their ruling, they would produce a dozen more flawed justifications before you could even produce one serious debunking. And they wouldn't even have to give much light to your production so you would need large cultural and political support.

Psychology exists mostly as a new religion; it serves as a tool for justification for people in power, it is used just in the same way as the bible.

It should not be surprising to anyone that much of it isn't replicable (nor falsifiable in the first place) and when it is, the effects are so close to randomness that you can't even be sure of what it means. This is all by design, you need to keep people confused to rule over them. If they start asking questions you can't answer, you lose authority and legitimacy. Psychology is the tool that serves the dominant ideology that is used to "answer" those questions.

roadside_picnic•4mo ago
The most obvious one is the breakdown of trust in scientific research. A frequent discussion I would have with another statistics friend of mine was that the anti-vax crowd really isn't as far off base as popularly portrayed, and, if anything, the "trust the science!" rhetoric is more clearly incorrect.

Science should never be taught as dogmatic, but the reproducibility crisis has ultimately fostered a culture where one should not question "established" results (Kahneman famously proclaimed that one "must" accept the unbelievable priming results in his famous book), especially if one is interested in a long academic career.

The trouble is that some trust is necessary in communicating scientific observations and hypotheses to the general public. It's easy to blame the public's failure to unify around Covid on cultural divides, but the truth is that skepticism around high-stakes, hastily done science is well warranted. The trouble is that even when you can step through the research and see the conclusions are sound, the skepticism remains.

However, as someone who has spent a long career using data to understand the world, I suspect the harm directly caused by wrong conclusions is smaller than one would think. This is largely because, despite lip service to "data driven decision making", science and statistics are very rarely the prime driver of any policy decision.

seec•4mo ago
I agree wholeheartedly with your conclusion. Science is relevant for those who care about finding the truth, just because they want to know for sure.

But for most people science doesn't really make much difference in how they choose and operate. Knowing the truth doesn't mean you are ready to adapt your behavior.

BeetleB•4mo ago
I imagine it's comparable to the damage done when policies are set that are not based on studies.

Let's be candid: Most policies have no backing in science whatsoever. The fact that some were backed by poor science is not an indictment of much.

rgblambda•4mo ago
From a political point of view, it may actually be beneficial for a policy to have no scientific basis. What happens when the science gets updated?

You either have to change the policy and admit you were "wrong" to an electorate who can't understand nuance, or continue with the policy and accept a few bad news days before the media cycle resets to something else.

rgblambda•4mo ago
I once did a corporate internal management course that was filled with pseudoscience bullshit. I imagine the impact of that course on the company's productivity was net negative. I'm sure lots of orgs have similar courses.

Learning styles have also been debunked for decades though they continue to be used in education. I saw an amusing line in an article that said 90% of teachers were happy to continue using them even after accepting they're nonsense.

And that's just theories that have been debunked (i.e. proven wrong).

juujian•4mo ago
Now I want to know which cognitive psychology experiments were successfully replicated though.
blindriver•4mo ago
Papers should not be accepted until an independent lab has replicated the results. It’s pretty simple, but people are incentivized not to care whether it's replicable, because they need the paper published to advance their career.
jay_kyburz•4mo ago
Agreed, and the independent lab should be chosen by the publisher, and be kept secret until results are in.
mcswell•4mo ago
In many cases--longitudinal studies are an example, but not the only one--that's not feasible. And it's often expensive--who would pay for it?
Animats•4mo ago
> Most results in the field do actually replicate and are robust [citation needed], so it would be a pity to lose confidence in the whole field just because of a few bad apples.

Is there a good list of results that do consistently replicate?

gwd•4mo ago
> Smile to Feel Better Effect

> Claimed result: Holding a pen in your teeth (forcing a smile-like expression) makes you rate cartoons as funnier compared to holding a pen with your lips (preventing smiling). More broadly, facial expressions can influence emotional experiences: "fake it till you make it."

I read this about a decade ago and started grimacing maniacally, like I had a pencil in my teeth, when going into situations where I wanted to have a natural smile. The thing is, it's just so silly that it always makes me laugh at myself, at which point I have a genuine smile. I always doubted whether the claimed connection was real, but it's been a useful tool anyway.

sunscream89•4mo ago
Yeah, the marshmallow one taught me to have patience and look for the long returns on investments of personal effort.

I think there may be something to a few of these, and more may need considering regarding how these are conducted.

Let’s leave open our credulities for the inquest of time.

bogtog•4mo ago
Little of this is considered cognitive psychology. The vast majority would be viewed as "social psychology".

Setting that aside, among any scientific field I'm aware of, psychology has taken the replication crisis most seriously. Rigor across all areas of psychology is steadily increasing: https://journals.sagepub.com/doi/full/10.1177/25152459251323...

WesolyKubeczek•4mo ago
> Claimed result: Listening to Mozart temporarily makes you smarter.

This belongs in a dungeon crawl game. You find an artifact that plays music to you. Depending on the music played (depends on the artifact's enchantment and blessed status), it can buff or debuff your intelligence by several points temporarily.

picardo•4mo ago
Well, at least the growth mindset study is not fully debunked yet. It's basically a modern interpretation of what we've known to be true about self-fulfilling prophecies. If you tell children they can be smart and competent if they work hard, then they will work hard and become smart and competent. This should be a given.
sunrunner•4mo ago
No mention of the Stanford Prison Experiment I notice.
dlcarrier•4mo ago
You'd think it's so far in the past that it isn't even considered, but Zimbardo was elected president of the American Psychological Association in 2002, which wasn't all that long ago.
runarberg•4mo ago
And during that time the APA was complicit in and participated in torturing prisoners at Guantanamo Bay.

https://www.apa.org/about/policy/chapter-4b

insane_dreamer•4mo ago
If the "failed replication" was a single study, as in many cases listed here, there is still an open question as to whether the 1) replication study was underpowered (the ones I looked at had pretty small n's), or 2) the re-implementation of the original study was flawed. So I'm not so sure we can quickly label the original studies as "debunked", no more than we can express a high level of confidence in the original studies.

(This isn't a comment on any of the individual studies listed.)

Aurornis•4mo ago
> Claimed result: Adopting expansive body postures for 2 minutes (like standing with hands on hips or arms raised) increases testosterone, decreases cortisol, and makes people feel more powerful and take more risks.

A heuristic I use that is unreasonably good at identifying grifters and charlatans: Unnecessarily invoking cortisol or other hormones when discussing behavioral topics. Influencers, podcasters, and pseudoscience practitioners love to invoke cortisol, testosterone, inflammation, and other generic concepts to make their ideas sound more scientific. Instead of saying "stress levels" they say "cortisol". They also try to suggest that cortisol is bad and you always want it lower, which isn't true.

Dopamine is another favorite of the grifters. Whenever someone starts talking about raising dopamine or doing something to increase dopamine, they're almost always being misleading or just outright lying. Health and fitness podcasters are the worst at this right now.

thecrims0nchin•4mo ago
I have a draft of a blog post on this. Originally I was going to write about how cortisol isn't always bad, or good, it's just a chemical in us. But then I started noticing the pattern you point out here where I'm not sure anyone uses the cortisol argument in good faith. Everyone who brings up cortisol is usually trying to sell you something
tryauuum•4mo ago

    claimed result: Women are more attracted to hot guys during high-fertility days of their cycles

wait why not? I hoped I'm attractive at least some days of the month :(
lutusp•4mo ago
A key factor behind psychology's low replication rate is the absence of theories that define the field. In most scientific fields, an initial finding can be compared to theory before publication, which may weed out unlikely results in advance. But psychology doesn't have this option -- no theories, so no litmus test.

It's important to say that a psychology study can be scientific in one sense -- say, rigorous and disciplined, but at the same time be unscientific, in the sense that it doesn't test a falsifiable, defining psychological theory -- because there aren't any of those.

Or, to put it more simply, scientific fields require falsifiable theories about some aspect of nature, and the mind is not part of nature.

Future neuroscience might fix this, but don't hold your breath for that outcome. I suspect we'll have AGI in artificial brains before we have testable, falsifiable neuroscience theories about our natural brains.

dlcarrier•4mo ago
Disturbing fact: The Stanford prison experiment, run by Philip Zimbardo, wasn't reproducible but that didn't stop Zimbardo from using it to promote his ideologies about the impossibility of rehabilitating criminals, or from becoming the president of the American Psychological Association.

The APA has a really good style guide, but I don't trust them for actual psychology.

runarberg•4mo ago
Yes, the APA certainly has a lot to answer for in their history. The Guantanamo Prison torture scandal is still fresh in my memory.

https://www.democracynow.org/2007/8/20/apa_members_hold_fier...

chatmasta•4mo ago
Has anyone tried to reproduce it? Good luck convincing an ethics review board to let you try that again.

Meanwhile, it’s been reproduced “in vivo” in numerous episodes of atrocity, e.g. Abu Ghraib…

dlcarrier•4mo ago
A reality TV show tried, because they have more practical ethics requirements than academic institutions, and were able to run the study as long as everyone agreed to be there and there was no sign of abuse: https://en.wikipedia.org/wiki/The_Experiment

Also, how do you italicize text in your comment?

chatmasta•4mo ago
> Also, how do you italicize text in your comment?

HN will italicize any string between a pair of asterisks. [0]

> practical ethics requirements

Practical ethics requirements :)

[0] https://news.ycombinator.com/formatdoc
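(Per that formatdoc, an example:)

    You can *emphasize* a word with asterisks.

renders "emphasize" in italics.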

runarberg•4mo ago
Note that nearly none of these studies are pure cognitive psychology. Most have intersections with social psychology (and I would deem them primarily social psychology) or developmental psychology. For example, the debunked study on social priming was published in the Journal of Personality and Social Psychology.

The title would be much more accurate if the author omitted “cognitive”.

somewholeother•4mo ago
The economics of psychology are the psychology of economics.

If you won't trust the process, you will gain no real outcome.

What we receive from the process is not necessarily tangible, but instead a fresh perspective on what may be possible. Thus, the inversion is complete, and we may then move forward.

seec•4mo ago
All the "hypothesis" or supposed "results" are so bonkers than it's an insult to intelligence itself that such things can be "proved" with psych "experiment".

Not that it matters, most of the psychology field is inherently bullshit, those are just the example of cases they went so far in the insult to intelligence, no amount of "studies" and rhetoric can save them.

eviks•4mo ago
Given how long the whole field has been maliciously/incompetently failing at basic statistics and sweeping it under the rug, I think that rather than discarding just the experiments that don't replicate, the better baseline is to discard everything and wait for a future generation of better cognitive psychologists to come up with good discoveries that are widely replicated?
eska•4mo ago
I recently read the life's-work book of a Nobel-prize-winning psychologist and it was full of these disproven experiments. As a non-psychologist, my trust in the experts is extremely low.
patrickhogan1•4mo ago
Before dunking on psychology for not replicating, remember this is a cross-discipline problem.

In biomedicine, Amgen could reproduce only 6/53 “landmark” preclinical cancer papers and Bayer reported widespread failures.

camgunz•4mo ago
Dear lazyweb: is there the opposite of this list anywhere?
HK-NC•4mo ago
IIRC the 2013 "racism predicted by telling leading questions" one and its predecessor, which is listed here but also says there is a slight trend toward replication, is just based on implicit association tasks. So you have a green and a red button for good and bad, and then a word pops up and you have less than a second to choose which button to press. Oversimplifying complex thought processes is, in my opinion, junk psychology.
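(For the curious, such tasks are commonly scored from the reaction-time gap between the two pairings; a rough sketch of that kind of computation, with invented timings:)

    import statistics

    congruent   = [0.48, 0.52, 0.45, 0.50]  # reaction times in seconds (made up)
    incongruent = [0.61, 0.66, 0.58, 0.63]  # same task, opposite pairing
    pooled_sd = statistics.stdev(congruent + incongruent)
    d = (statistics.mean(incongruent) - statistics.mean(congruent)) / pooled_sd
    print(round(d, 2))  # larger gap => stronger measured association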
chatmasta•4mo ago
It’s also self-referential because there is no objective measure of “racism,” so how can you even measure whether someone is “more racist” based on reaction time to stereotypical stimuli?

“No objective measure” pretty much sums up the whole field, to be honest. I started on a CS & Psych double major, did about eight psych courses, and then decided it was mostly a joke once I got to the quantitative portions. But those courses were very useful for general life skills. Developmental psychology in particular was packed with dense lessons about how we learn as children… social psych was a good overview of all the “well-known” experiments… etc.

djoldman•4mo ago
I'm surprised that Dunning-Kruger isn't listed.
Ferret7446•4mo ago
Devil's advocate, is it possible that humans have psychologically changed since the original experiments?