A lot of that despair is thanks to how the architecture of the chimp brain handles unpredictability over different time horizons - what's the system going to do tomorrow/next month/next year/next decade? Confidence decreases, anxiety increases. If you want to break the architecture, keep feeding it the unpredictable.
So we get the Corporal Hudson in Aliens cycle - "I'm ready, man. Check it out. I am the ultimate badass. State of the badass art" > unpredictability > "What's happening, man? Now what are we supposed to do? That's it, man. Game over, man. Game over!"
Think about what science offers Corporal Hudson.
Where science can't make you better, non-science can make you feel better.
Where truth is painful, untruth can attempt to provide comfort.
(not sure how any of this relates to the comment or the article, maybe I should have just ignored this)
Also, survivorship bias.
Science is centered around the scientific method. A naive understanding of it can lead to an excessive focus on producing disconnected factoids called results. Wissenschaft has different failure modes, but because you are supposed to study your chosen topic systematically by any means necessary, you have to think more about what you are trying to achieve and how. For example, whether you want to produce results, explanations, case studies, models, predictions, or systems.
The literature tends to be better when people see results as intermediate steps they can build on, rather than as primary goals.
But I don’t consciously distinguish that from the English “science”. Although obviously the connotation of science leans on the scientific method whilst “Wetenschap” is more on the “gaining of knowledge”.
While there is no single English-word translation I can think of, I guess “knowledge building” or “the effort to expand knowledge” might be good approximations.
Interesting, never thought about this distinction too much.
Thomas Aquinas asks if theology is a science. Spoiler alert: The answer is Yes.
Not really European, but in Russian it's neither. The word for science, "наука", is literally closest to "teaching" or "education" (edit: and historically "punishment").
There is no stem for knowledge ("знать") or science (the word doesn't even exist in Russian) in that word :)
It's literally "na-oo-ka"? What in the hell is the etymology of that?
Pseudoscience like measuring the cost to fix a bug in a classroom setting is bad. Especially if it literally puts "cost" and "classroom" together. That's just a sad way to grab some more research funding to keep the machine going.
The "garbage pile" of papers is not a new problem. It's been plaguing the science world for quite a long time. And it's only going to get worse because the metric and the target are the same (Goodhart's Law).
From the article itself, each mentioned paper screams "the author never had to write actual functional code for a living" to me.
If we stop paying for that garbage, the problem might disappear. But what about the researchers or scientists who depend on that funding to live?
What needs to change is the very way academic work is organized, but nothing comes for free.
I don't think this observation is valid. Papers are not expected to be infallible truth-makers. They are literally a somewhat standardized way for anyone to address a community of their peers and say "hey guys, check out this neat thing I noticed".
Papers are subject to review because each publication has its own editorial bar (that is, you can't just write a few sentences with a crayon and expect it to be published), and a paper shouldn't just clone whatever people have already written before. Other than that, unless you are committing academic fraud, you can still publish something and later find your conclusions were off or you missed something.
On such a trajectory, science is meant to cross the information overload / false equivalence threshold, where the "hey, check this out" scenario won't scale and the cost of validating all the other people's papers outweighs the (theoretical) gains.
Not sure if you think that threshold has been crossed already or not.
Thus, for example, the answer to the question "Are late-stage bugs more expensive?" is "yes, generally": at a later stage in development we have more of the design/implementation done, so there is a larger number of interacting components and thus increased complexity. The probability that the bug lies in the interaction/intersection of various components is therefore higher, which may require us to rework (both design and implementation) a large part of the system. This is the reason we accept separation of concerns, modularization, and frequent feedback loops between design/implementation/testing as "standard" software engineering practices.
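The complexity argument above can be made concrete with a bit of arithmetic: with n components, the number of distinct component pairs that could interact grows roughly quadratically, so the search space for an interaction bug (and the blast radius of a rework) balloons as the system matures. A minimal sketch:

```python
# Sketch: why interaction bugs get pricier as a system grows.
# With n components there are n * (n - 1) / 2 distinct pairs,
# each a potential site of an interaction bug.

def pairwise_interactions(n_components: int) -> int:
    """Number of distinct component pairs that could interact."""
    return n_components * (n_components - 1) // 2

for n in (3, 10, 30, 100):
    print(f"{n:>3} components -> {pairwise_interactions(n):>5} possible interactions")
```

This is of course a simplification (real systems aren't fully connected), but it illustrates why modularization, which limits which pairs can interact, keeps the number down.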
Lee Smolin, in his essay There is No Scientific Method (https://bigthink.com/articles/there-is-no-scientific-method/), states the following, which I think is very applicable to "Software Engineering" since it is more of a human process than hard science:
> Science works because scientists form communities and traditions based not on a common set of methods, but a common set of ethical principles. And there are two ethical principles that I think underlie the success of science...
> The first one is that we agree to tell the truth and we agree to be governed by rational argument from public evidence. So when there is a disagreement it can be resolved by referring to a rational deduction from public evidence. We agree to be so swayed.
> Whether we originally came to that point of view or not to that point of view, whether that was our idea or somebody else's idea, whether it's our research program or a rival research program, we agree to let evidence decide. Now one sees this happening all the time in science. This is the strength of science.
> The second principle is that when the evidence does not decide, when the evidence is not sufficient to decide from rational argument, whether one point of view is right or another point of view is right, we agree to encourage competition and diversification amongst the professionals in the community.
> Here I have to emphasize I'm not saying that anything goes. I'm not saying that any quack, anybody without an education is equal in interest or is equal in importance to somebody with his Ph.D. and his scientific training at a university...
> I'm talking about the ethics within a community of people who have accreditation and are working within the community. Within the community it's necessary for science to progress as fast as possible, not to prematurely form paradigms, not to prematurely make up our mind that one research program is right to the exclusion of others. It's important to encourage competition, to encourage diversification, to encourage disagreement in the effort to get us to that consensus which is governed by the first principle.
It was like finding out Santa Claus wasn't real to me.
> Over time you slowly build out a list of good "node papers" (mostly literature reviews) and useful terms to speed up this process, but it's always gonna be super time consuming.
This maybe didn't exist when the blog post was written, but it's not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com, which will build out graphs of references and tell you at a glance which papers are cited more. That way you can find the more reliable stuff, i.e. the "node papers".
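The "node paper" idea boils down to ranking a citation graph by in-degree. A minimal sketch of that, with an invented toy graph (the paper names are placeholders, and this is not how connectedpapers.com actually works internally):

```python
# Sketch: rank papers by how often they are cited (in-degree).
# `citations` maps each paper to the papers it cites; all names
# here are invented placeholders for illustration.
from collections import Counter

citations = {
    "survey-2019": [],
    "method-A":    ["survey-2019"],
    "method-B":    ["survey-2019", "method-A"],
    "follow-up-C": ["survey-2019", "method-B"],
}

# Count incoming citations for each paper.
cited_count = Counter(ref for refs in citations.values() for ref in refs)

# Highly cited papers are candidate "node papers" to read first.
for paper, count in cited_count.most_common():
    print(f"{paper}: cited {count} times")
```

In this toy graph the survey comes out on top, which matches the observation that node papers are mostly literature reviews.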
The review paper is the traditional way. It's usually okay, but very biased towards the author's background.
If it's very "fresh off the press" stuff, then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...
True, but the aim isn't really in finding which ones are cited the most, although it does help with the ordeal. In a sense those tools help build a macro understanding, but they are very prone to an initial seed bias. It is difficult to get out of a closed sub-section of the field. This is especially the case in technical papers, which often fail to address surrounding issues. These issues might still be technical, just not within the grasp of that particular sub-group of authors.
In the end, like you said, it is very time consuming. You do need to go through each one individually and build an understanding and intuition for what to look for, and how to get out of those "cycles" for a deeper understanding. And you really are better off reading them yourself.
> The review paper is the traditional way. It's usually okay.. but very biased towards the author's background.
>
> If it's very "fresh of the press" stuff then you judge it based on the journal's reputation and hope the reviews did their jobs. You will have more garbage to wade through. To me recent is generally bad...
Some guidelines, like PRISMA, or the various assessments of self-bias, are generally good indicators that the author cared. Having sections like these will help you get the aforementioned intuition for what else to look through, given that you have an acknowledgement from the source itself of its bias (your own assessment may be biased, so some ground truth is good). Plus a really thorough description of their methods for gathering the information (databases, queries, and the themes they spent time on).
Agreed, recent is generally bad; you need to allow some time for things to have a chance to get looked at.
https://news.ycombinator.com/item?id=27892615 - 168 comments
https://news.ycombinator.com/item?id=27891102 - 16 comments