Null results are the foundations on which “glossy” results are produced. Researchers would be wasting time and giving away their competitive advantage by publishing null results.
There is very little incentive to publicly criticise a paper; the incentive is to show why others are wrong and why your new, technically superior method solves the problem and brings new insights to the field.
I promised myself if I became ultra-wealthy I would start a "Journal of Null Science" to collect these publications. (this journal still doesn't exist)
If they're really pro-science, some non-profit should fund this sort of research.-
PS. Heck, if nothing else, it'd give synthetic intellection systems somewhere to not go with their research, and their agency and such ...
This alone would probably kill off a lot of fraudulent science in areas like nutrition and psychology. It's what the government should be doing with NIH and NSF funding, but is not.
If you manage to get a good RCT through execution & publication, that should make your career, regardless of outcome.
Indeed. That is the "baseline"-setting science, you are much correct.-
Heck, "we tried, and did not get there but ..." should be a category unto itself.-
So journals could have a section (the grey pages?) for "unsellable results" that they didn't put through peer review. They would of course need to assess them in some other way, to ensure a reasonable level of quality.
Some of the very high-profile journals are run by non-profits, including: Science (American Association for the Advancement of Science), PNAS (National Academy of Sciences), eLife (HHMI/Max Planck/Wellcome Trust). A slew of more specialized journals are run by societies too.
In theory, they should be willing to lead the charge. In practice, I think they are largely dependent on income from the journals for a lot of their operations and so are reluctant to rock the boat.
There are interesting null results that get published and are well known. For example, Card & Krueger (1994) was a null result paper showing that increasing the minimum wage has a null effect on employment rates. This result went against the then-common assumption that increasing wages would decrease employment.
Other null results are either dirty (e.g., big standard errors) or due to process problems (e.g., experimental failure). These are more difficult to publish because it's difficult to learn anything new from these results.
The challenge is that researchers do not know if they are going to get a "good" null or a "bad" one. Most of the time, you have to invest significant effort and time into a project, only to get a null result at the end. These results are difficult to publish in most cases, which can end a career for someone pre-tenure and cause funding problems for anyone.
The "psychological null hypothesis" is that which follows the common assumption, whether that assumption states that there is a relationship between the variables or that there is not.
A null result would have been: "We tried to apply Famous Theory to showing that minimum wage has no effect on employment rate but we failed because this and that".
If the result is very surprising and contradicts established theory, I wouldn't consider it a null result, even if some parameter numerically measures as zero.
Measurements of some physical quantity are a different kind of experiment; you cannot phrase it as a question about the correlation between two variables. Instead you take measurements and put error bars on them (unless what you're measuring is an indirect proxy for the actual quantity, in which case the null hypothesis and p-value testing does become relevant again).
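To make the contrast concrete, here is a minimal sketch (plain Python, made-up readings) of the "value with error bars" style of result, as opposed to a yes/no hypothesis test:

```python
import math

# Hypothetical repeated measurements of some physical quantity (made-up readings).
readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78]

n = len(readings)
mean = sum(readings) / n

# Sample standard deviation and the standard error of the mean.
variance = sum((x - mean) ** 2 for x in readings) / (n - 1)
std_err = math.sqrt(variance / n)

# The reported result is a value with an error bar, not a reject/accept decision.
print(f"measured value: {mean:.3f} +/- {std_err:.3f}")
```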
The above is a good point but I would extend it further. I mean, philosophically, you get a positive result from a negative (null) result merely by changing your hypothesis (e.g., something should not cause something else).
In this case we would expect some studies of the minimum wage to show it increases employment regardless of what the effect of wage rises is in the general case - e.g., some official raised the minimum wage while a sector went into a boom for unrelated and coincidental reasons.
In this case the difference between before and after raising the minimum wage.
Furthermore, the thing with a null result is that it's always dependent on how sensitive your study is. A null result is always of the form "we can't rule out that it's 0". If the study is powerful then a null result will rule out a large difference, but there is always the possibility that there is a difference too small to detect.
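One way to make that sensitivity explicit is to report the confidence interval rather than just "not significant". A rough sketch (plain Python, made-up group summaries, normal approximation): the same small observed difference gives a wide interval in a small study and a narrow one in a large study, and only the latter actually rules out large effects.

```python
import math

def diff_ci(mean_a, mean_b, sd, n_per_group, z=1.96):
    """Approximate 95% CI for the difference in means of two equal-sized groups
    (normal approximation, shared standard deviation)."""
    diff = mean_b - mean_a
    se = sd * math.sqrt(2.0 / n_per_group)  # standard error of the difference
    return diff - z * se, diff + z * se

# Same small observed difference, two very different sample sizes (made-up numbers).
print(diff_ci(10.0, 10.05, sd=2.0, n_per_group=20))
# -> roughly (-1.2, 1.3): a null result that can't rule out even large effects
print(diff_ci(10.0, 10.05, sd=2.0, n_per_group=2000))
# -> roughly (-0.07, 0.17): still a null result, but large effects are excluded
```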
It is absolutely shameful that negative results are almost never published. I'm sure that a lot of money and effort is wasted as many research groups repeat the same dead-end experiments just because there is no paper that said: “we tried it this way, it didn’t work”.
1. Papers are written to invite replication.
2. It's easy to call it shameful when it isn't you who has to do the work. If you are like most other normally functioning people, you no doubt perform little experiments every day that end up going nowhere. How many have you written papers for? I can confidently say zero. I've got better things to do.
> Papers are written to invite replication
They are written to share knowledge, new discoveries. We hope that they are replicated.
> It's easy to call it shameful when it isn't you who has to do the work.
We are not judging anyone, we are qualifying the situation. And we are speaking of publishing results for experiments that have already been conducted, not voluntarily making up null cases and doing the related experiments. That wouldn't make sense and would be harmful. But if you did an experiment that produced a null result, not publishing is a loss of knowledge. Again, we are not judging anybody, it's a failure of the publishing system. But this cannot change if nobody points it out.
I'd have trouble understanding a researcher today not acknowledging that the lack of null-result publication is an issue. It would show a lack of perspective IMHO. And for someone who acknowledges that there's an issue, pushing back is not the right stance.
If they aren't replicated, no new knowledge is gained. Not really.
> it's a failure of the publishing system.
What failure? Again, the point of the "publishing system" is to present the "best of the best" research to the world in order to invite replication. While nothing is perfect, we don't want people to have to sift through papers that have effectively no reason to be replicated in a quest to find something that is. That would make things infinitely worse.
The internet was created for the minor leagues. If you are willing to put in the effort to document your "less than the best" research, put it up on a website for all to see already. There is nothing stopping you. But who wants to put in the effort?
> We are not judging anyone
How do you define "shameful" if it isn't related to judgement of someone?
We fundamentally disagree here and I'm not willing to put in the effort needed to try to convince you otherwise; many other comments here are better than what I could write.
I do hope you take the time to take a step back (you seem to be in defensive mode; if so, you need to step out of it) and reconsider.
> How do you define "shameful" if it isn't related to judgement of someone?
Do you know the expression "it's a shame"?
> used when you wish a situation was different, and you feel sad or disappointed
https://www.ldoceonline.com/dictionary/it-s-a-shame-what-a-s...
No. I make no such assumption. That may often be true, but there is nothing to stop a useful null result from being published. We also get things wrong from time to time. It is very possible that the best baseball player in the world has been overlooked by Major League Baseball. We're pretty good at scoping out the best, but nothing in life is perfect.
If the best baseball player in the world ends up in the minor leagues instead, oh well? Does it really matter? You can still watch them there. Same goes for research. If something great doesn't make it into the formal publication system, you can still read it on the studier's website (if they put in the effort to publish it).
> You seem to be ignoring the issue where researchers collectively waste time redoing "failing" studies again and again because the null results are not published
I may be ignoring it, but if that's the case that's because it is irrelevant. There is no reason to not publish your "failing" studies. That's literally why the public internet was created (the original private internet was created for military, but next in line was university adoption — to be used for exactly that purpose!).
> We fundamentally disagree here and I'm not willing to put in the effort needed to try to convince you otherwise
Makes sense. There is nothing to convince me of. I was never not convinced. But it remains: Who wants to put in the effort? Unless you are going to start putting guns to people's backs, what are you expecting?
Ok.
> There is no reason to not publish your "failing" studies. That's literally why the public internet was created
You are suggesting researchers should blog about their null results? It seems to me the null results deserve the same route as any other paper, with peer reviews, etc.
It matters, because this route is what other researchers trust. They wouldn't base their work on some non-reviewed blog article that can barely be cited. You don't even base good science on some random article on arXiv that was not published in some recognized venue. If you are using some existing work to skip an experiment because it tells you "we've already tried this, it didn't show any effect", you want to be able to trust it like any other work. Hell, as a random citizen in a random discussion, especially as one with a PhD, I don't want to be citing a blog article as established scientific knowledge.
And yes, getting published in a proper, peer-reviewed venue is work, but we all need to deeply internalize that it's not lesser work if the result is null.
> Unless you are going to start putting guns to people's backs, what are you expecting?
If researchers collectively decide it's worth pursuing, it's all about creating the right incentives in the right places. Like any other research, you could be rewarded, recognized and all. High-impact journals and conferences could encourage researchers to publish / present their null results.
Of course, we are not speaking about things like "what two unrelated things could I try to measure to find some absence of correlation"; we are speaking about "I think those two things are linked, let's make an experiment. Oh no, they are not correlated in the end!" -> the experiment is done either way, just that the results also deserve to be published either way. And the experiment should only be published if it doesn't exhibit a fatal flaw or something; we are not talking about flawed experiments either.
If they want to. Especially if it doesn't meet the standard for the publication system, why not?
> It seems to me the null results deserve the same route as any other paper, with peer reviews, etc.
If it ranks with the best of them, it is deserving. There isn't room for everything, though, just as there isn't room for everyone who has ever played baseball to join the MLB. That would defeat the entire purpose of what these venues offer.
But that doesn't mean you can't play. Anyone who wants to play baseball can do so, just as anyone who wants to publish research can do so.
> If researchers collectively decide it's worth pursuing
It only takes an individual. Unlike baseball, you can actually play publishing research all by yourself!
1. Where do we read your failed research? Given your stance, it would look very foolish to find out that you haven't published it.
2. Do you draw a line? Like, if you add a pinch more salt to your dinner and found that it doesn't taste any better, do you publish that research?
I get your point, but this is not specific to null results.
> It only takes an individual
No no no. The desirability of null results needs to be recognized and reasonably consensual, and high-impact journals and conferences need to accept them. Otherwise, there's no reason researchers will work to publish them.
1. I don't publish anymore: I'm not a researcher anymore. I didn't encounter the case during the short time I was one (I could have, though. Now I know, years later. I suspect it would have been difficult to convince my advisors to do it). I hope this doesn't matter for my points to stand on their own. Note that I think null results ARE NOT failed research. This is key.
2. Ideally, null or positive result alike, the experiments and the studies need to be solid and convincing enough. Like, there needs to be enough salt and not too much, the dinner needs to be tasty in both cases. If the dinner doesn't taste good, of course you don't publish it. There is something wrong with what you've done (the protocol was not well followed, there's statistical bias, not enough data points, I don't know)
It feels like we are talking past each other: you think I'm talking about failed research, but I'm talking about a hypothesis you believed could be true, you built an experiment to test it, and found no correlation in the end. This result is interesting and should be published; it's not failed research.
As it happens, I attended a PhD defense less than a month ago where the thesis led to null results… The student was able to publish; these null results felt somewhat surprising and counterintuitive. So it's not like it's impossible, it just needs to be widely seen as not failed research.
If it is interesting you should also find it interesting when you read it 30 years in the future. You don't need other people. It's a nice feeling when other people want to look at what you are doing, sure, but don't put the cart before the horse here. Publish first and prove to others that there is something of value there. They are not going to magically see the value beforehand. That is not how the human typically functions.
It's not like you have to invent the printing press to do it. Putting your work up on a website for the entire world to see is easy peasy. Just do it!
> Ideally, null or positive result alike, the experiments and the studies need to be solid and convincing enough.
No need to let perfect become the enemy of good. Publishing your haphazard salting experiment isn't apt to be terribly convincing, but it gets you into the habit of publishing. Eventually you'll come around to something that actually is interesting and convincing. It's telling if someone isn't willing to do this.
> The student was able to publish, these null results felt somewhat surprising and counter intuitive, so it's not like it's impossible
Exactly. Anything worthy of the major leagues will have no trouble getting formally published. But not everything is. And that's okay. You can still publish it yourself. If you want to play baseball, there is no need to wait around for the MLB to call, so to speak... Just do it!
> you are thinking I'm talking about failed research [...] it just needs to be widely seen as not failed research.
Yes, I am talking about what is widely seen as failed research. It may not actually be failed research in a practical sense, but the moniker is still apt, especially given that you even call it that yourself. I guess I don't understand what you are trying to say here.
What work? The work of writing the paper?
It seems to me that it’s better for research group A to spend 10 days writing a paper about their dead-end experiment than for 20 other research groups to repeat the same series of experiments, wasting a lot of time, money, and energy just because there was no paper that said “it doesn’t work”. Perhaps it would be better if those 20 research groups instead tried 20 different ways to fix the faulty method in hopes of getting somewhere, rather than doing the same thing.
I do not understand how it’s even a question for debate.
I had to look that up, because more precisely, it showed that a particular minimum wage increase in NJ from $4.25 to $5.05 didn't increase unemployment in 410 particular fast food joints in 1992 - https://davidcard.berkeley.edu/papers/njmin-aer.pdf - not that "increasing the minimum wage has a null effect on employment rates" at all, ever, no matter what. It's not as if increasing the minimum wage to something impossible to afford like $100 trillion wouldn't force everyone to get laid off, but nobody generally cares about a limiting case like that, as it's relatively unlikely.
The interesting part is non-linearity in the response, seeing where and how much employment rates might change given the magnitude of a particular minimum wage increase, what sort of profit margins the affected industries have, elasticity of the demand for labor and other adaptations by businesses in response to increases, not whether it's a thing that can happen at all.
And we're seeing a lot more such adaptations these days. There are McDonalds around here that have computers in the lobby where you input your order yourself, and I've gotten drones to deliver my food instead of DoorDash drivers. That kind of thing was not yet practical back in 1992, when I remember using an Apple ][ GS, and it's not clear that findings like this should be relied upon too heavily given that some of the adaptations available to businesses now were not practical back then, especially when technology may change that even more in the future.
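For reference, the design behind that paper is a difference-in-differences comparison: the before/after change in the state that raised its minimum wage is set against the before/after change in a control state. A minimal sketch with entirely made-up numbers, not the paper's data:

```python
# Difference-in-differences with made-up employment figures (not the paper's data):
# compare the before/after change where the minimum wage rose to the change in a control state.
treated_before, treated_after = 20.0, 20.5   # avg employees per store, wage raised between surveys
control_before, control_after = 22.0, 21.0   # avg employees per store, no wage change

did = (treated_after - treated_before) - (control_after - control_before)
print(f"difference-in-differences estimate: {did:+.1f} employees per store")
```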
Like many things in statistics, this is solved by Bayesian analysis: instead of asking if we can reject the null hypothesis, the question should be which model is more likely, the null model or the alternate model.
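A toy version of that model comparison is a Bayes factor between a point null and a diffuse alternative. The sketch below (plain Python, hypothetical coin-flip data) computes the evidence ratio for "no effect" (p = 0.5) against a uniform alternative; unlike a p-value, it can come out in favour of the null:

```python
from math import comb

def bayes_factor_null_vs_alt(k, n):
    """Bayes factor for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    given k successes in n binary trials. Values > 1 favour the null model."""
    m0 = comb(n, k) * 0.5 ** n        # marginal likelihood under H0
    m1 = 1.0 / (n + 1)                # marginal likelihood under H1 (Beta integral)
    return m0 / m1

# Hypothetical data: 52 successes in 100 trials.
# The Bayes factor comes out around 7, i.e. modest evidence *for* the "no effect" model,
# which is something a plain significance test cannot express.
print(bayes_factor_null_vs_alt(52, 100))
```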
The problem arises that null results are cheap and easy to "find" for things no-one thinks sound plausible, and therefore a trivial way to game the publish or perish system. I suspect that this alone explains the bias against publishing null results.
In a perfect world, there would still be a forcing function to get researchers to publish null results. Maybe the head of a department publishes the research they tried that didn't work out. I wonder how much money has been lost on repeatedly trying the same approaches that don't work.
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...
With really good keyword search functionality.
Much lighter formal requirements than regular papers; make it easy. But with some common-sense guidelines.
And public upvotes, and commenting, so contributors get some feedback love, and failures can attract potential helpful turnaround ideas.
And of course, annual awards. For humor & not-so-serious (because, why so serious?) street cred, but with the serious mission of raising consciousness about how negative results are not some narrow bit of information, but that attempts and results, bad or not, are rich sources of new ideas.
The problem specifically isn't so much that null results don't get published, it's that they get published as a positive result in something the researchers weren't studying - they have to change their hypothesis retroactively to make it look like a positive result. Worse, this leads to studies that are designed to attempt to study as many things as possible, to hedge their bets. These studies suffer from quality problems because of course you can't really study something deeply if you're controlling a lot of variables.
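To see why that hedge-your-bets design is statistically risky: if every hypothesis tested is actually null, the chance of at least one spurious "significant" finding grows quickly with the number of hypotheses. A back-of-the-envelope sketch, assuming independent tests at the usual 0.05 threshold:

```python
# If a study tests m hypotheses that are all truly null, each at alpha = 0.05,
# the chance of at least one spurious "positive" finding grows quickly with m
# (assuming independent tests).
alpha = 0.05
for m in (1, 5, 20):
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(f"{m:>2} hypotheses tested -> {p_any_false_positive:.0%} chance of a false positive somewhere")
```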
PS. Good point about the "shotgun" studies.-
Bluestein•6mo ago
"When even truth itself needs an angle ...
... every lie looks like a viable alternative".-
pixl97•6mo ago
Not that I'm saying all science has to serve economic purposes.
autoexec•6mo ago
Obviously it takes money to do pretty much anything in our society, but it does seem like it has way more influence than is necessary. Greed seems to corrupt everything, and even though we can identify areas where things can be improved, nobody seems to be willing or able to change course.
ytpete•6mo ago
Perhaps this means it really does have to start with journal publications though. If journals value null results, peer reviewers will sharpen their ability to distinguish null but well-run experiments from ones that failed simply due to poor execution. Then employers can use published null results as a positive signal that a researcher is indeed doing good quality work.
MITSardine•6mo ago
There are some issues, though. Firstly, how do you enforce citing negative results? In the case of positive results, reviewers can ask that work be cited if it had already introduced things present in the article. This is because a publication is a claim to originality.
But how do you define originality in not following a given approach? Anyone can not have the idea of doing something. You can't well cite all the paths not followed in your work, considering you might not even be aware of a negative result publication regarding these ideas you discarded or didn't have. Bibliography is time consuming enough as it is, without having to also cite all things irrelevant.
Another issue is that the effort to write an article and get it published and, on the other side, to review it, makes it hard to justify publishing negative results. I'd say an issue is rather that many positive results are already not getting published... There's a lot of informal knowledge, as people don't have time to write 100 page papers with all the tricks and details regularly, nor reviewers to read them.
Also, I could see a larger acceptance of negative result publications bringing perverse incentives. Currently, you have to get somewhere eventually. If negative results become legitimate publications, what would e.g. PhD theses become? Oh, we tried to reinvent everything but nothing worked, here's 200 pages of negative results no-one would have reasonably tried anyways. While the current state of affairs favours incremental research, I think that is still better than no serious research at all.
michaelt•6mo ago
The thing is, people mostly cite work they're building upon, and it's often difficult to build much on a null result.
If I'm an old-timey scientist trying to invent the first lightbulb, and I try a brass filament and it doesn't work, then I try a steel filament and it doesn't work, then I try an aluminium filament and it doesn't work - will anyone be interested in that?
On the other hand, if I tested platinum or carbonised paper or something else that actually works? Well, there's a lot more to build on there, and a lot more to be interested in.
kurthr•6mo ago
The counterexample to some extent is medical/drug controlled trials, but those are pharma-driven and gov-published, though an academic could be on the paper, and it might find its way onto a tenure review.
Second, in the beginning there is funding. If you don't have a grant for it, you don't do the research. Most grants are for "discoveries" and those only come about from "positive" scientific results. So the first path to this is to pay people to run the experiments (that nobody wants to see "fail"). Then, you have to trust that the people running them don't screw up the actual experiment, because there are an almost infinite number of ways to do things wrong, and only experts can even make things work at all for difficult modern science. Then you hope that the statistics are done well and not skewed, and hope a major journal publishes.
Third, could a Journal of Negative Results that only published well-run experiments, by respected experts, with good statistics and minimal bias be profitable? I don't know; a few exist, but I think it would take government or charity to get it off the ground, and a few big names to get people reading it for prestige. Otherwise, we're just talking about something on par with arXiv.org. It can't just be a journal that publishes every negative result, or somehow reviewers would have to be experts in everything, since properly reviewing negative results from many fields is a HUGE challenge.
My experience writing, and getting grants/research funded, is that there's a lot of bootstrapping where you use some initial funding to do research on some interesting topics and get some initial results, before you then propose to "do" that research (which you have high confidence will succeed) so that you can get funding to finish the next phase of research (and confirm the original work) to get the next grant. It's a cycle, and you don't dare break it, because if you "fail" to get "good" results from your research, and you don't get published, then your proposals for the next set of grants will be viewed very negatively!