Null results are the foundation on which “glossy” results are built. Researchers would be wasting time, and giving away their competitive advantage, by publishing null results.
There is very little incentive to publicly criticise a paper; the incentive is to show why others are wrong and why your new, technically superior method solves the problem and brings new insights to the field.
I promised myself if I became ultra-wealthy I would start a "Journal of Null Science" to collect these publications. (this journal still doesn't exist)
If it is really pro-science, some non-profit should fund this sort of research.-
PS. Heck, if nothing else, it'd give synthetic intellection systems somewhere to not go with their research, and their agency and such ...
This alone would probably kill off a lot of fraudulent science in areas like nutrition and psychology. It's what the government should be doing with NIH and NSF funding, but is not.
If you manage to get a good RCT through execution & publication, that should make your career, regardless of outcome.
Indeed. That is the "baseline"-setting science; you are quite correct.-
Heck, "we tried, and did not get there but ..." should be a category unto itself.-
There are interesting null results that get published and become well known. For example, Card & Krueger (1994) was a null result paper showing that increasing the minimum wage has a null effect on employment rates. This went against the then-common assumption that increasing wages will decrease employment.
Other null results are either dirty (e.g., big standard errors) or the product of process problems (e.g., experimental failure). These are harder to publish because it's difficult to learn anything new from them.
The challenge is that researchers don't know in advance whether they will get a "good" null or a "bad" one. Most of the time you have to invest significant effort and time in a project, only to get a null result at the end. Such results are difficult to publish in most cases; they can end the career of someone pre-tenure and cause funding problems for anyone.
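To make the good/dirty distinction concrete, here's a toy sketch (simulated numbers, not from any real study). Both results below are "null" in the sense that the confidence interval contains zero, but only the first one actually rules anything out:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # "Good" null: a large, precise study; the effect is pinned tightly near zero.
    good = rng.normal(loc=0.0, scale=1.0, size=5000)
    # "Dirty" null: a small, noisy study; big standard errors rule out nothing.
    dirty = rng.normal(loc=0.0, scale=10.0, size=20)

    for name, x in (("good", good), ("dirty", dirty)):
        se = x.std(ddof=1) / np.sqrt(len(x))
        ci = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=se)
        print(f"{name}: mean={x.mean():+.3f}, 95% CI=({ci[0]:+.3f}, {ci[1]:+.3f})")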
The "psychological null hypothesis" is that which follows the common assumption, whether that assumption states that there is a relationship between the variables or that there is not.
A null result would have been: "We tried to apply Famous Theory to show that the minimum wage has no effect on the employment rate, but we failed because of this and that."
If the result is very surprising and contradicts established theory, I wouldn't consider it a null result, even if some parameter numerically measures as zero.
The above is a good point, but I would extend it further: philosophically, you can get a positive result from a negative (null) result merely by flipping your hypothesis (e.g., "X should not cause Y").
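One concrete version of that flip is equivalence testing (two one-sided tests, TOST): instead of failing to reject "the effect is zero", you positively reject "the effect is at least as big as some margin". A minimal sketch with made-up data and an assumed margin:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(loc=0.05, scale=1.0, size=400)  # hypothetical measurements

    margin = 0.3  # assumption: the smallest effect size anyone would care about
    n, mean = len(x), x.mean()
    se = x.std(ddof=1) / np.sqrt(n)

    # Two one-sided tests against "effect <= -margin" and "effect >= +margin";
    # rejecting both shows the effect lies inside (-margin, +margin),
    # which is a positive claim of "no meaningful effect".
    p_lower = stats.t.sf((mean + margin) / se, df=n - 1)
    p_upper = stats.t.cdf((mean - margin) / se, df=n - 1)
    print(f"mean={mean:+.3f}, TOST p={max(p_lower, p_upper):.2g}")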
I had to look that up, because more precisely it showed that a particular minimum wage increase in NJ, from $4.25 to $5.05 in 1992, didn't reduce employment at 410 particular fast food joints - https://davidcard.berkeley.edu/papers/njmin-aer.pdf - not that "increasing the minimum wage has a null effect on employment rates" at all, ever, no matter what. It's not as if raising the minimum wage to something impossible to afford, like $100 trillion, wouldn't force everyone to be laid off; but nobody generally cares about a limiting case like that, as it's relatively unlikely.
The interesting part is the non-linearity in the response: where and how much employment rates might change given the magnitude of a particular minimum wage increase, the profit margins of the affected industries, the elasticity of demand for labor, and other adaptations by businesses in response to increases - not whether it's a thing that can happen at all.
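For a rough sense of why magnitude matters, a back-of-envelope constant-elasticity calculation (the elasticity values here are pure assumptions for illustration, not estimates from the paper):

    # The NJ change Card & Krueger studied: $4.25 -> $5.05 (~ +18.8%).
    old_wage, new_wage = 4.25, 5.05
    pct_wage = (new_wage - old_wage) / old_wage

    # Hypothetical labor-demand elasticities; a constant-elasticity model
    # itself breaks down for absurd increases like the $100 trillion case.
    for elasticity in (0.0, -0.1, -0.3, -1.0):
        print(f"elasticity {elasticity:+.1f} -> "
              f"employment change {elasticity * pct_wage:+.1%}")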
And we're seeing a lot more such adaptations these days. There are McDonald's around here with computers in the lobby where you input your order yourself, and I've had drones deliver my food instead of DoorDash drivers. That kind of thing was not yet practical back in 1992, when I remember using an Apple ][ GS. It's not clear that findings like this should be relied on too heavily, given that some of the adaptations available to businesses now were not practical back then, and technology may change that even more in the future.
The problem is that null results are cheap and easy to "find" for things no one thinks sound plausible, and are therefore a trivial way to game the publish-or-perish system. I suspect this alone explains the bias against publishing null results.
In a perfect world, there's still a forcing function to get researchers to publish null results. Maybe the head of a department publishes the research the department tried that didn't work out. I wonder how much money has been lost repeatedly trying the same approaches that don't work.
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...
With really good keyword search functionality.
Much lighter formal requirements than regular papers - make it easy. But with some common-sense guidelines.
And public upvotes and commenting, so contributors get some feedback love, and failures can attract potentially helpful turnaround ideas.
And of course, annual awards. For humor & not-so-serious street cred (because, why so serious?), but with the serious mission of raising consciousness that negative results are not some narrow bit of information - attempts and their results, good or bad, are rich sources of new ideas.
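For what it's worth, the record such a journal would need to store isn't complicated; a hypothetical sketch (every field name here is invented):

    from dataclasses import dataclass, field

    @dataclass
    class NullResult:
        title: str
        hypothesis: str                # what was expected to work
        method: str                    # what was actually tried
        outcome: str                   # how and why it fell short
        keywords: list[str] = field(default_factory=list)   # for search
        upvotes: int = 0
        comments: list[str] = field(default_factory=list)   # turnaround ideas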
The problem isn't so much that null results don't get published; it's that they get published as a positive result in something the researchers weren't studying - they have to change their hypothesis retroactively to make it look like a positive result. Worse, this leads to studies designed to study as many things as possible, to hedge their bets. These studies suffer from quality problems because, of course, you can't really study something deeply while controlling a lot of variables.
Bluestein•15h ago
"When even truth itself needs an angle ...
... every lie looks like a viable alternative".-
pixl97•15h ago
Not that I'm saying all science has to serve economic purposes.
autoexec•10h ago
Obviously it takes money to do pretty much anything in our society, but money does seem to have way more influence than is necessary. Greed seems to corrupt everything, and even though we can identify areas where things could be improved, nobody seems willing or able to change course.
MITSardine•13h ago
There are some issues, though. Firstly, how do you enforce citing negative results? In the case of positive results, reviewers can ask that prior work be cited if it already introduced things present in the article; that's because a publication is a claim to originality.
But how do you define originality in not following a given approach? Anyone can fail to have the idea of doing something. And you can't very well cite all the paths not taken in your work, considering you might not even be aware of a negative-result publication covering the ideas you discarded or never had. Bibliography work is time-consuming enough as it is without also having to cite everything irrelevant.
Another issue is that the effort to write an article and get it published - and, on the other side, to review it - makes it hard to justify publishing negative results. I'd say the real issue is that many positive results are already not getting published. There's a lot of informal knowledge, because people don't have time to regularly write 100-page papers with all the tricks and details, nor do reviewers have time to read them.
Also, I could see wider acceptance of negative-result publications bringing perverse incentives. Currently, you have to get somewhere eventually. If negative results became legitimate publications, what would, e.g., PhD theses become? "We tried to reinvent everything but nothing worked; here are 200 pages of negative results no one would reasonably have tried anyway." While the current state of affairs favours incremental research, I think that is still better than no serious research at all.
michaelt•10h ago
The thing is, people mostly cite work they're building upon, and it's often difficult to build much on a null result.
If I'm an old-timey scientist trying to invent the first lightbulb, and I try a brass filament and it doesn't work, then I try a steel filament and it doesn't work, then I try an aluminium filament and it doesn't work - will anyone be interested in that?
On the other hand, if I tested platinum or carbonised paper or something else that actually works? Well, there's a lot more to build on there, and a lot more to be interested in.
kurthr•12h ago
The counterexample, to some extent, is medical/drug controlled trials, but those are pharma-driven and government-published, though an academic could be on the paper and it might find its way into a tenure review.
Second, in the beginning there is funding. If you don't have a grant for it, you don't do the research. Most grants are for "discoveries", and those only come from "positive" scientific results. So the first requirement is to pay people to run the experiments that nobody wants to see "fail". Then you have to trust that the people running them don't botch the actual experiment, because there are an almost infinite number of ways to do things wrong, and only experts can make difficult modern science work at all. Then you hope the statistics are done well and not skewed, and hope a major journal publishes.
Third, could a Journal of Negative Results that published only well-run experiments, by respected experts, with good statistics and minimal bias, be profitable? I don't know; a few exist, but I think it would take government or charity funding to get one off the ground, and a few big names to get people reading it for prestige. Otherwise we're just talking about something on par with arXiv.org. It can't just be a journal that publishes every negative result, and somehow the reviewers would have to be experts in everything, since properly reviewing negative results from many fields is a HUGE challenge.
My experience writing grant proposals and getting research funded is that there's a lot of bootstrapping: you use some initial funding to explore interesting topics and get initial results, then propose to "do" that research (which you now have high confidence will succeed) so you can get funding to finish the next phase (and confirm the original work), which sets up the next grant. It's a cycle, and you don't dare break it, because if you "fail" to get "good" results from your research and don't get published, your proposals for the next set of grants will be viewed very negatively!