I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.
But which contributes more, they ask? Who gives a shit, really?
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends successful people are successful observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their labs and get their name on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I were allowed to speculate I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is shaped more by economics than by any particular group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world, so some kind of power law likely applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce far more output than the bottom 25%.
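To make the power-law intuition concrete, here's a toy sketch (the distribution and its shape parameter are assumptions for illustration, not measurements of real researcher output):

```python
import random

# Toy model: draw per-person "research output" from a heavy-tailed
# Pareto distribution, then compare the top 1% against the bottom 25%.
random.seed(0)
n = 100_000
alpha = 1.5  # shape parameter; smaller alpha = heavier tail (assumed)
outputs = sorted(random.paretovariate(alpha) for _ in range(n))

bottom_25 = sum(outputs[: n // 4])        # least productive quarter
top_1 = sum(outputs[-(n // 100):])        # most productive 1%
print(f"top 1% output vs bottom 25% output: {top_1 / bottom_25:.1f}x")
```

With a heavy enough tail, the top 1% outproduce the entire bottom quartile, which is the shape of the claim above; the exact ratio depends entirely on the assumed alpha.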
Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people who connect the dots as great scientists. But the dots must be there in the first place. The dots are the work of the myriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last hit and connect them. Sometimes multiple individuals at once.
Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held up Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.
Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.
See also astronomy right now, where a never-really-satisfying ΛCDM model is finally failing in the face of new data. And the tension turns out to come not only from Webb and new instruments: the older data never fit either, but no one cared.
Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to challenge established assumptions until it all finally clicked in the 1960s.
Nature is usually 80/20. In other words, 80% of researchers might as well not exist.
What does this even mean? Do you think in an ant colony only the queen is needed? Or in a wolf pack only the strongest wolf?
("Ortega most likely would have disagreed with the hypothesis that has been named after him...")
But you can't tell ahead of time which one is which. Maybe you can shift the distribution, but often the pathological cases you exclude are precisely the ones you didn't want to exclude (your Karikós get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
Like searching an unsorted list of n elements for a number: you have to keep testing elements until you find yours. The search cost is just the cost. You can't uncost it by "just picking the right index" - that's not a meaningful statement.
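The analogy can be sketched in a few lines: on an unsorted list there is no structure to exploit, so the expected cost is linear, and knowing "the right index" in advance is not an available move.

```python
import random

def linear_search(values, target):
    """Scan an unsorted list left to right; no shortcut exists."""
    for i, v in enumerate(values):
        if v == target:
            return i  # found after inspecting i + 1 elements
    return -1

values = random.sample(range(1_000), 100)  # unsorted, distinct values
target = values[42]
idx = linear_search(values, target)
assert values[idx] == target
# On average you inspect about n/2 elements. "Pick index 42 directly"
# presupposes the answer to the search, which is the point of the analogy.
```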
It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.
This is hilarious
pavel_lishin:
I wonder if this is because a paper with such a citation is likely to be taken more seriously than a citation that might actually be more relevant.
observationist:
There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now significant double-digit percentages of research across all fields meet all the technical criteria for publishing, yet up to half of it in some fields cannot be reproduced, and there's a whole lot of outright fraud used to swindle research dollars and grants.
Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.