There is a half-joke in our lab that the more times a paper is rejected, the bigger or more praised it will be once it's accepted. This alludes to the fact that reviewers often can't be bothered to see value in certain ideas or topics in a field unless the work is "novel" or the paper is written in a way geared toward them; otherwise it gets relegated to "just engineering effort" (this is my biased experience). However, tailoring and submitting ideas/papers to venues that value the specific work is the best way I have found to work around this (though even then it takes some time to really understand which conferences value which style of work, even when they appear to value it).
I do think there is some saving grace in the section the author writes about "The Science Thing Was Improved," implying that these changes in the paper make the paper better and easier to read. I do agree very much with this; many times, people have bad figures, poor tables or charts, bad captions, etc., that make things harder to understand or outright misleading. But I only agree with the author to a certain extent. Rather, I think that there should also be changes made on the other side, the side of the reviewer or venue, to provide high-quality reviews and assessments of papers. But I think this is a bit outside the scope of what the author talks about in their post.
>I saw this firsthand with BERT in my field (NLP).
Sure it's framed in terms of "helping you get published" (which feels kind of gross) but I think ultimately it's really about tips for authors to get their points across in a clear and engaging way.
It's the difference between being a Cassandra or the Oracle at Delphi. Maybe the only difference between the two was presentation? (Classicists, feel free to roast my metaphor).
I interpreted the GP's statement above as reacting to the strategies in the article for "gaming" publications and grants, but perhaps I misunderstood the GP's comment.
The people who won the career game at top US universities in technical fields don't simply get there by making their plots fancier or using the right words in the abstract in otherwise trivial papers. The papers do make valuable contributions. Pursuing research for pure personal discovery is great, but if you don't tell others about it, why should they care? Most discoveries are not General Relativity or Evolution.
And there's also a component of "cope" in these lamentations. Oh, I'm a lone wolf genius, misunderstood by all, the contrarian who is rejected by the in-crowd yadda yadda because of career failure. It's a way to preserve ego. If only it wasn't for the social games, I'd be the next Einstein, my intentions are pure, while the establishment is rotten. It's a bit more nuanced than that. You have to do good work AND know how to present it and spread awareness about it. Both are needed.
I won't speak for anyone else but here are three things I think are all true:
* We live in a renaissance of academic research that is giving us profound scientific discovery
* Prioritizing a scientific career over scientific discovery can lead to a net positive of good scientific results, and so far it has
* Prioritizing a scientific career over scientific discovery produces low quality science
Saying that people who know how to maneuver the political academic landscape, to secure a position, also produce valuable contributions might be true (I believe it is) but the argument doesn't address the cost of prioritizing, or promoting, that behavior.
I'm reminded of "The Economics of Superstars" [0]. If someone is "better" by a measure of 2x, say, but gets (10+)x the amount of resources, this is not a good allocation of energy. Saying that the 2x person should get more resources is true. Saying that they're justified in getting orders of magnitude more resources, at the cost of everyone else who might use them to better effect, is not.
These conversations are subtle. I notice that one of the common crutches is to attack people as "just being bitter". This seems like a cheap attack and I wish you and others would try to be more thoughtful.
[0] https://home.uchicago.edu/~vlima/courses/econ201/Superstars....
In the old days, scientific careers were largely restricted to the independently wealthy or those who could secure patrons.
I also feel like there's a sort of tension with what Hacker News broadly wants out of science. There's often a lament that there aren't enough staff science positions, or positions where people can have a career beyond a postdoc that's just devoted to research.
Those things have to be paid for. Postdocs are expensive. Staff scientists are expensive - and terrifying, because they have careers and kids and mortgages.
That ends up eating a lot of a PI's time, because the success rate on proposals is low. Even worse now.
Would I love to be able to just sit in my office, think my thoughts, and occasionally write those thoughts up? Sure. But I'd also like to give people an opportunity to have careers in science where they can get paid.
None of that gets to the actual point of my comment, which is that it's all well and good to say people should do science for science's sake, but in the meantime, rent is due.
From the first article in the series [0]:
> Insiders ... understand that a research paper serves ... in increasing importance ... Currency, An advertisement, Brand marketing ... in contrast to what outsiders .. believe, which is ... to share a novel discovery with the world in a detailed report.
I can believe it's absolutely true. And yikes.
Other than the brutal contempt, TFA looks like pretty good advice.
The most disturbing thing about it is the way advice to forget about science and optimize for the process is mixed with standard tips for good communication. It shows that the community is so far gone that they don't see the difference.
If anyone needs a point of reference, just look at an algorithms and data structures journal to see what life is like with a typical rather than extreme level of problems.
The number of accepted papers is absolutely a currency and a measure of worth in academia.
Both of which are currency.
Another reason might be that clever titles stand out as bold claims, working counter to the common practice of academic humility. If a paper seems to be downplaying its own significance, then why should a casual reader (or reviewer, at first impression) give it the benefit of the doubt?
That's not to say that papers should over-claim, and I suspect that doing so might lead to a harsh counter-reaction from reviewers who feel like they've been set up to have their time wasted. Nonetheless, "project confidence" might be good practice in academia as well as one's social life.
Silly example: if I ever find a proof that "P=NP", that will also be the title of my paper. No cleverness required to grab attention.
If I have a more pedestrian result, I'll think up some clever title.
This tends to manifest not as "We need one of these" but as "If we have one of these, let's be sure to use it."
Evaluators are human.
This article is spot on. What are you talking about? Have you ever published a research paper and gone through peer review?
I think it's ultimately due to a lack of theory, which creates the expectation that the results from trying an idea will be a random draw. From that point, you get the behaviors of trying as much as possible and taking each attempt as a fixed object to then go try and get over the threshold.
> Could you imagine someone saying, "be sure that the graphic for the molecule in figure 1 is 3D and has bright colors?"
I doubt the reviewers asked for that, but yes, that kind of thing happens all the time prior to publishing, and there's nothing wrong with it. If it reduces the amount of time it takes to understand the paper, then do it.
Chemists are extremely brand-aware regarding their figures.
In synthetic chemistry many chemists could guess the author based just on the color scheme of the paper's figures.
For instance, look at the consistency here: https://macmillan.princeton.edu/publications/
And it comes with rewards! The above lab is synonymous with several popular techniques (one, organocatalysis, which garnered a Nobel prize) - the association would be much less strong if the lab hadn't kept a consistent brand over so many years.
Yes, I am aware of the irony of paywalled academic publications.
In the private sector you can choose your patrons and your dissemination mechanism. Many, many scientists publish papers, publish code, give talks, write blogs, and otherwise distribute technical details about their work product.
In academia the Federal Government is your only serious patron and you must disseminate in academic journals/conferences, which generally do a piss poor job of providing incentives for either doing good work or communicating well about that work.
Any time I hire a junior PhD I have to UNDO a ton of academic writing/problem-solving propaganda and reteach both common sense and normal writing style.
The harsh truth is that private sector scientists tend to do better science and disseminate it in more useful and lasting ways. They are paid better for it.
The academic scientists who are up to private sector standards tend to have diverse funding mechanisms and therefore rely far less heavily on prestige publication for their lab's revenue stream. But most professors must publish papers because they are unable to do good work and/or communicate the value of that work to anyone other than their inner circle of friends (who sit on the grant review panels or take stints at federal agencies).
I have only worked with one business that did not require NDAs, and that was because it was built around an open-source sharing philosophy. Every other client, even very small organic-growth businesses and pre-seed startups, required an NDA, and if they hadn't I would have advised them that they should.
Are you aware there is a world outside the USA?
It’s important to point out that US professors are sometimes able to go without public patronage, but that this is very much an anomaly.
The US private sector funds A LOT of R&D relative to other countries, and the US attracts an outsized amount of FDI targeted at R&D.
As a result, in the USA there are occasionally rare instances where professors can mostly fund labs without government patronage.
Scientists in other countries are even more desperate for public patronage (and its associated political games) than US scientists.
In 2021, academic R&D spending in the US was ~$90 billion (https://ncses.nsf.gov/pubs/nsb202326/funding-sources-of-acad...). Out of that, 55% came from the federal government, 25% from the institutions themselves, 6% from nonprofits, 6% from businesses, 5% from state and local governments, and 3% from other sources. The share of businesses looks normal, while the share of nonprofits seems low.
I’m pretty damn sure you’re wrong about Europe on a relative basis. The percentages in most of Europe are MUCH higher. Eg Germany is closer to 80% than 50% gov funded.
(Earmarked gifts to an endowment with some level of direction/advice vs a foundation is a real cultural and tax policy difference, but the end effect is what matters and that’s not as simple as you’re suggesting.)
And not to be too flippant, but the question about the world outside of America applies also to the world outside the West ;)
We could take the University of Helsinki (Finland) as a singular example. 56% of the external research funding comes from the national government and 14% from the EU. 16% is from private foundations, 9% from businesses, and the remaining 5% from other sources. The 16% figure from foundations is lower than it would be under the American model, as many private grants (particularly fellowships for PhD students) are awarded directly to the individual and therefore not included in the figures for the university. Overall, 70% of the external research funding is from the government, 25% from private institutions, and 5% from other sources.
I didn't include the share of research funding from the university itself, because I don't know what is included under it by American standards. If you adjust the American figures to exclude that, you get 80% from the government, 16% from private institutions, and 4% from other sources.
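For concreteness, the adjustment described above is just a renormalization: drop the institutions' own 25% share from the denominator and rescale the rest. A minimal sketch (the grouping of nonprofits and businesses into "private institutions" is my reading of the NSF breakdown quoted above):

```python
# 2021 US academic R&D funding shares (%), per the NSF figures above,
# with the institutions' own 25% share excluded from the denominator.
shares = {
    "federal": 55, "state_local": 5,   # government
    "nonprofits": 6, "businesses": 6,  # private institutions
    "other": 3,
}

total = sum(shares.values())  # 75, after excluding institutions' 25%
government = (shares["federal"] + shares["state_local"]) / total * 100
private = (shares["nonprofits"] + shares["businesses"]) / total * 100
other = shares["other"] / total * 100

print(round(government), round(private), round(other))  # 80 16 4
```

This reproduces the adjusted 80% / 16% / 4% split stated above.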
It's good to remember that Europe is not a continent of social democratic welfare states but a continent of warmongers and old money that happens to be quiet for the moment. A lot of that old money went into foundations that fund prestigious things such as arts and science. European private donors don't like funding education, as they consider it a government responsibility. American donors on the other hand often give money to universities, which then use it primarily for education, buildings, and infrastructure.
And even then usually only because it’s expected, not because it was actually useful. (And no, it’s not because academics are more ethical about acknowledging the shoulders they stand on. Academics rarely cite the chip designs, software libraries, lab instruments, instruction documents, training materials, etc. What “counts” as something that deserves a citation mostly boils down to “did you publish it in a venue controlled by other academics”, not “how important was this to enabling your contributions?”)
The fortunate thing about the private sector is that you don’t have to spend years of your life shaping opinion on citation ethics, because people are using your stuff instead of half-interestedly saying that they may’ve skimmed the intro to a pdf describing your stuff. And if people use your stuff and get value from it you can usually extract some of the value that creates. Which means you don’t need vanity metrics to convince some government agency to throw you some coin.
Either way my point stands. Since they are citing the research as foundational in their papers, then we should take them at their word. The idea you put forth that they're only doing so as a matter of show puts a terrible light on them if true. They shouldn't be citing research as foundational if it really isn't. So I will choose not to believe your characterization, because I think very highly of the industry researchers I know, and that doesn't seem like something they would do.
That said, even if we accept the general premise of your post, which I don’t, you’re still drawing the wrong conclusion.
To wit: citing something does not imply that the cited thing is “foundational” to the work from which it is cited. One can cite work for any number of reasons. (Admittedly, citation behavior did change with the rise of bibliomaniacs, but of course that further bolsters my overall point, so I’m not sure the daylight on this point does you any favors.)
You identified some counter-examples that miss the point because they’re unrepresentative, unresponsive, and irrelevant.
Unrepresentative because we are discussing literature in aggregate and this behavior is common.
Unresponsive because, in aggregate, inessential academic writing is systematically over-cited in academic writing and essential inputs of other types are systematically under-cited in academic writing. This is true of all academic writing; it’s a bias of the medium and of the medium’s standard bearers.
And irrelevant because there is nothing a priori or essentially nefarious about the above, on its own!
Academics beat ideas and lines of inquiry deep into the ground. Crucially, they do so by pumping out ridiculous quantities of PDFs. For every little variation there is a paper. Outside of academia this isn't done. E.g.: you cite Package X, great! But do you cite the 17 different PRs most relevant to your work, many of which are at least a paper's worth of work? No. That's culturally off. But for the corresponding thinly sliced papers, that's what you have to do.
Conclusion: academic work dominates the citation list because of publication and citation culture, not because academic work dominates the set of enabling contributions.
I do trust that you genuinely do experience the world as you describe here, but I think you’re a fish in water and that Upton Sinclair quote about paychecks comes to mind.
I neither wrote nor implied that. Sure, there are many reasons to cite papers, but in saying "citing the research as foundational," I meant that its being foundational is the reason for the citation. You were so eager to write all those words that you didn't stop to actually read mine. Therefore, I think that's all I have to say to you; I'll leave the rest unread.
Pedantic and profoundly wrong, but always through some ridiculous lens, always wiggling enough to never let truth get in the way of Being Smart And Right. Peak .edu, and the reason it's so damn hard to justify science spending to the actually hard-working taxpayers patronizing this stuff.
> "The primary objects of modern science are research papers. Research papers are acts of communication. Few people will actually download and use our dataset. Nobody will download and use our model—they can’t, it’s locked inside Google’s proprietary stack."
The author is confusing the concept of 'science as a pursuit that will earn me enough money and prestige to live a nice life' - in which, I'd say, we can replace 'science' with 'religion' and go back to the 1300s or so - with science as the practice of observation, experiment and mathematical theory with the goal of gaining some understanding of the marvelously wonderful universe we exist in.
Yes, the academic system has been grotesquely corrupted by Bayh-Dole, yes, the academic system is internal blood sport politics for a limited number of posts, yes, it's all collapsing under the weight of corporate corruption and a degenerate ruling class - but so what, science doesn't care. It can all go dormant for 100 years, it has before, hasn't it? 125 years ago you had to learn to read German to be up on modern scientific developments.
Wake up - nature doesn't care about the academic system, and science isn't reliant on some decrepit corrupt priesthood.
P.S. Practically speaking, new graduate students should all be required to read Machiavelli as an intro to their new life.
> And it’s not just a pace thing, there’s a threshold of clarity that divides learned nothing from got at least one new idea.
But these days, ideas are quite cheap: in my experience, most researchers have more ideas than students to work on them. Many papers can fit their "core idea" in a tweet or two, and in many cases someone has already tweeted the idea in one form or another. Some ideas are better than others, but there's a lot of "reasonable" ideas out there.
Any of these ideas can be a paper, but what makes it science can't just be the fact that it was communicated clearly. It wouldn't be science unless you perform experiments (that accurately implement the "idea") and faithfully report the results. (Reviewers may add an additional constraint: that the results must look "good".)
So what does science have to do with reviewers' fixation on clarity and presentation? I claim: absolutely nothing. You can pretty much say whatever you want as long as it sounds reasonable and is communicated clearly (and of course the results look good). Even if the over-worked PhD student screws up the evaluation script a bit and the results are in their favor (oops!), the reviewers are not going to notice so long as the ideas are presented clearly.
Clear communication is important, but science cannot just be communicating ideas.
As an academic I need to be up to date in my discipline, which means skimming hundreds of titles, dozens of abstracts and papers, and thoroughly reading several papers a week, in the context of a job that needs many other things done.
Papers that require 5x the time to read because they're unnecessarily unclear, so that I need to jump around deciphering what the authors mean, are wasting my and many others' time (as are those with misleading titles or abstracts), and probably won't be read unless absolutely needed. They are better caught at the peer review stage. And lack of clarity can also often cause lack of reproducibility, when some minor but necessary detail is left ambiguous.
In the end, getting a paper accepted is a purely social game, and has not much to do with how clearly your science is described, especially for truly novel research.
1) The whole opening segment is the literature review.
2) If you are coming up with a novel concept, then you would be explaining how it shows up in relation to known fact.
Then you would be providing evidence and experiment.
The entire structure is designed to give the author as many affordances as possible to make their case.
Being accepted as a social game is the cynical view that ignores that academia still works. It’s academia itself which recognizes these issues and is trying to rectify the situation.
And so on.
I think the social game view is at this point entirely justified, and there is nothing cynical about it. And no, academia does not still work.
A given "structure" is also ridiculous, and part of the problem. Once you care more about the form than the content, form is what prevails.
The truth is: To understand a paper properly, you need to deal with it properly, not just the 5 minutes it takes to skim the first pages and make up your opinion there already. Fifteen pages is short enough, and if you cannot commit to properly review this, for a week or so of dedicated study, just don't review it. We would all be better off for it.
Reviewing dynamics make this hard. There is little to no reward for reviewers, and it is much easier to write a long and bad paper than it is to review it carefully (and LLMs have upset this balance even further). To suggest that every submitted paper should occupy several weeks of expert attention is to fundamentally misunderstand how many crappy papers are getting submitted.
To suggest that peer review means anything when this is not the case is the true fundamental misunderstanding here.
In other words: Fuck peer review. You are no peer of mine.
By “idea” researchers usually imply “idea for a high-impact project that I’m capable of executing”. It’s not just about having ideas, but about having ideas that will actually make an impact on your field. Those again come in two flavors: “obvious ideas” that are the logical next step in a chain of incremental improvements, but that no one yet had time or capability to implement; and “surprising ideas” that can really turn a research field upside down if it works, but is inherently a high-risk/high-reward scenario.
Speaking as a physicist, I find the truly “surprising ideas” to be quite rare but important. I get them from time to time but it can take years between. But the “obvious” ideas, sure, the more students I have the more of them I’d work on.
> Any of these ideas can be a paper, but what makes it science can't just be the fact that it was communicated clearly. It wouldn't be science unless you perform experiments (that accurately implement the "idea") and faithfully report the results. (Reviewers may add an additional constraint: that the results must look "good".)
I kinda agree with this. With the caveat that I’d consider e.g. solving theoretical problems to also count under “experiment” in this specific sentence, since science is arguably not just about gathering data but also developing a coherent understanding of it. Which is why theoretical and numerical physics count as “science”.
On the other hand, I think textbooks and review papers are crucial for science as a social process. We often have to try to consolidate the knowledge gathered from different research directions before we can move forward. That part is about clear communication more than new research.
I think it's still the case that there's lots of ideas that (if they worked!) would be surprising. Anyone can state outlandish ideas in a paper -- imo the contribution is proving (e.g. with sound "experiments", interpreted broadly) that they actually work. Unfortunately, I think clarity of writing matters more to reviewers than the soundness of your experiments. I think in CS this could very well change if the reviewers willed it (i.e. require artifact submission with the paper, and allow papers to be rejected for faults in the artifact)
In particular, the line between science and some industries is blurring.
E.g. machine learning, where universities appear almost lazy compared to their industrial counterparts.
I would be interested to hear other perspectives.
The value lies in getting true ideas in front of your eyeballs. So communicating the idea clearly is crucial to making the value available.
I can write anything I want in the paper, but at the end of the day my experiments could do something slightly (or completely) different. Where are reviewers going to catch this?
Wrong things definitely still make it through, both mistakes and fraud. But it is a pretty strong filter.
Regardless of the strength of the filter, if the filter's inputs are just "the paper", but the claims depend on the details in another artifact (i.e. the code), how can we argue that peer review filters for the truth?
But I think your most significant change was changing the "what" to "why".
Reading the original, we can see that most sentences start with "we did..." "we did..." and my impression as a reader was, "Okay, but how is this important?" In the second one, the "what" is only in the first part of the sentence, to name things (which gives a sense of novelty), and then only "whys" come after it.
"Whys" > "Whats" also applies to good code comments (and why LLM's code sometimes sucks). I can easily know "what" the code does, but often, I want to know "why" it is there.
If you're submitting to a control theory journal, you better have some novel theorems with rigorous mathematical proofs in that "rest of the paper" part. That's a little nontrivial.
> The tweaks that get the paper accepted—unexpectedly, happily—also improve the actual science contribution.
> The main point is that your paper's value should be obvious, not that it must be enormous.
This is slightly oversimplified, but from the outside, science may look like researchers are constantly publishing papers sort of for the sake of it. However, the papers are the codified ways in which we attempt to influence the thinking of other researchers. All of us who engage in scientific research aim to be on the literal cutting edge of the research conversation. Therefore it's imperative to communicate how our work can be valuable to specific readers.
Let's take a look at the two abstracts:
(Version 1, Rejected): Given two distinct stimuli, humans can compare and contrast them using natural language. The comparative language that arises is grounded in structural commonalities of the subjects. We study the task of generating comparative language in a visual setting, where two images provide the context for the description. This setting offers a new approach for aiding humans in fine grained recognition, where a model explains the semantics of a visual space by describing the difference between two stimuli. We collect a dataset of paragraphs comparing pairs of bird photographs, proposing a sampling algorithm that leverages both taxonomic and visual metrics of similarity. We present a novel model architecture for generating comparative language given two images as input, and validate its performance both on automatic metrics and via human comprehension.
Here, the first two sentences make a really obvious claim and could equally be at home in a philosophy journal, a linguistics journal, a cognitive science journal, a psychology journal, a neuroscience journal, even something about optometry. Moreover, some readers may look at this abstract and think "well, that's nice, but I'm not sure I need to read this."

(Version 2, Accepted): We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds. The language collected is highly detailed, while remaining understandable to the everyday observer (e.g., "heart-shaped face," "squat body"). Paragraph-length descriptions naturally adapt to varying levels of taxonomic and visual distance—drawn from a novel stratified sampling approach—with the appropriate level of detail. We propose a new model called Neural Naturalist that uses a joint image encoding and comparative module to generate comparative language, and evaluate the results with humans who must use the descriptions to distinguish real images. Our results indicate promising potential for neural models to explain differences in visual embedding space using natural language, as well as a concrete path for machine learning to aid citizen scientists in their effort to preserve biodiversity.
Compared to V1, the V2 abstract does a much better job of communicating how this project might be valuable to people who want to understand and use neural-network models "to explain differences in visual embedding space using natural language." Or to put it another way: if you want to understand this, it's in your interest to read the paper!

When it comes to papers, I always reminded myself and others that people also _read_ with their eyes.
It is easy to be cynical about this (with some justification!), but if the findings are more clearly and quickly communicated by a pretty-looking paper, then the paper has objectively improved.
Thankfully the scientific process is incredibly resilient to nonsense, because a bad result will eventually screw up someone's future work when they come to rely on it. But it's not pretty.
Not if they isolate themselves enough from the outcome but I get what you're saying.
The world progresses despite these deeply flawed institutions (both corporations and academia have these perverse incentive problems, and all in all they do create some value on average).
Which is why it's so funny when you see non-skeptical appeals to "the god of science," which apparently exists in a vacuum of correctness and ethical purity.
The most fun in science can be had when done at home and shared with friends.
Academia != science. It is a social construct, dominated by people with power within a given field. That being said, the double-blind review process improved the author-engineering problem a lot.
As long as it has some capacity to self-correct, it's a stable function.
Science has a few aspects that are distinct from non-Science enterprises, but more aspects in common.
So I assume that it is not done to keep outsiders out of your garden…
Honestly, I don't find any other reason not to apply it.
Although there's plenty of critique to go around about the review system, machine learning typically uses double-blind peer review for the major conferences. That blinding is often imperfect (e.g. if a paper very obviously uses a dataset or cluster proprietary to a major company), but it's rarely precise enough for reviewers to reject a paper based on the author being an unknown.
1. Avoid overly general citations. The rejected paper leads with references to image captioning tasks in general and visual question-answering, neither of which is directly advanced by the described study. The accepted paper avoids these general citations in favour of more specific literature that works directly on the image-comparison task.
2. Don't lead with citations. The accepted paper has its citations at the end of the introduction, on page 2.
I think that each change is reasonably justified.
On avoiding overly general citations: the common practice in the machine learning literature is to publish short papers (10 pages or fewer for the main body), and column inches spent on an exhaustive literature review are inches not spent clearly describing the new study.
Placing citations towards the end of the introduction is consistent with the "inverted pyramid" school of writing, most commonly seen in journalism. Leaving the review process out of it for the moment, an ordinary researcher reading the article probably would rather know what the paper is claiming more than what the paper is citing. A page-one that can tell a reader whether they'll be interested in the rest of the article does readers a service.
My least favourite type of citation in introductions, which I often see from more junior researchers, looks like:
"In this paper we use a Machine Learning [1][2][3] technique known as Convolutional [4] Neural Networks [5][6][7][8] to..."
In academia the equivalent is prestige. Who gets it and how? Who are the players? There are college students, PhD students, professors, administrators, grant committees, corporation-university industrial collaborations and consortiums, individual managers at corporations and their shareholders, university boards, funding agency managers, politicians allocating taxpayer money to research funding, journal editors, reviewers, tenure committees, pop science magazine editors, pop science magazine readers, general public taxpayers.
You should be able to put yourself in the shoes of each of these and have a rough idea of how they can obtain prestige as input from some other actor and how they can pass it on to yet another actor. You must understand the flow of prestige, and then it will be much less mysterious. (Of course, understanding the flow of money also helps, but people tend to overlook prestige because one of the least prestigious things is to overtly care about prestige; it's supposed to seem effortless and unacknowledged.)
It goes 180 degrees against what a smart, starry-eyed junior grad student would believe. Surely it's all about actually making things work, right? We are in the hard sciences; we don't just craft narratives about our ideas, we make cold, hard, useful things that are objectively and measurably better and that others can build on, standing on our shoulders. What could be more satisfying than seeing the fruits of our research applied and used?
However, for an academic career you want to cultivate the profile of a guru, a thought leader, a visionary, a grand ideas person. Fiddling with the details to put a working system together is lowly and kinda dirty work, like fixing clogged toilets or something. Not like the glorious intellectual work of thinking up great noble thoughts about the big picture.
If you want to pivot to industry, a track record of having built working systems could help you, sure. But I've often seen grad students get stuck developing bespoke internal systems that aren't even visible to potential future employers: improving the internal compute cluster tooling, automating the generation of figures in LaTeX, building a course management system to track assignment submissions and exam grading, and so on. You are most prone to dive into these invisible, career-killing kinds of work precisely when your research project is getting rejections and you feel stuck.

In academia, what counts is your published research and the networking opportunities it creates: going to conferences where you have papers, getting cold-emailed because someone saw your paper, and so on. I've seen very smart PhD students get stuck in engineering rabbit holes, and it's sad. It happens less if your parents were already in academia and you absorbed how things work via osmosis. Outsiders don't really grok what actually makes a difference and what is totally invisible (and a waste from a career perspective). Another such trap is pouring insane amounts of hours into teaching assistance and improving the materials, slides, handouts, and so on.

The careerists know to spend exactly as much on this sort of thing as they absolutely have to. Satisficing, not optimizing: do enough to meet the bar, and not one minute more. It is absolutely invisible to the wider academic research community whether your Tuesday tutorial session for those 20 students was stellar or just OK. Winners of the metagame ruthlessly optimize for visible impact and offload everything else to someone else, or simply don't do it. A publication is visible. A research semester at a prestigious university is visible. Getting a grant is visible. Organizing a workshop is visible. Meticulously grading written exams is invisible. Giving a good tutorial session is invisible. Improving the compute infrastructure of the lab is invisible. Being the go-to person for Linux issues is invisible.
Packaging your research so it works well out of the box sits in the middle of this spectrum. It may be appreciated by some other stressed PhD student at some other university, and it may save them time setting things up. But that PhD student won't sit on your grant committee or promotion board, so it might as well be invisible, unless your work is so stellar, so above and beyond everything else, that it goes viral and you become known to the community through it. Even then it's a double-edged sword: being known for packaging your work in an easy-to-use manner gets you pigeonholed into the "software engineer technician" category rather than the "ideas person" category. Execution is useful but not prestigious, like the loser classmate whose homework gets copied but who isn't invited to parties.
The metagame winner recognizes that their work is transient. Any time spent on packaging up the research software for ease of use or ease of reproducibility once the publication is accepted is simply time stolen from the next project that could get you another publication. Since you'll likely improve the performance in the next slice of the salami anyway, there would be no use in releasing that outdated software so nicely. The primary research output is the paper itself, and the talks and posts you can make to market it to boost its citations, as well as the networking opportunities that happen around the poster and the conference. Extras beyond that are nice, but optional.
While you're working on making something "really" work, you're either delaying the publication and risking getting scooped (if done before publication), or dumping time into a dead project (dead in the sense that the paper is already published and won't become any more published by pouring more time into it).
This won’t get you a Stanford professorship. That’s something you can cry about from your mountain chalet or beachfront vacation home.
Part of the meta-game of academia is that feedback timelines are long enough that you can play the “wrong” meta-game and still come out ahead. If you don’t want a professorship — or are willing to settle for a super cushy “professor of practice” as an early retirement non-profit thing to keep ya out of the house — then a PhD can be a good place to do hard tech pre-seed work.
"Is the scientific paper a fraud?"
I found a PDF online here: https://www.weizmann.ac.il/mcb/alon/sites/mcb.alon/files/use...
Max should publish this as a book; it would probably sell by the truckload.
If I had to rank the topics by usefulness, topic no. 4, "Don't Make Things Actually Work", is probably the best part. Topic no. 3 is second, this topic no. 5 is third, topic no. 1 is fourth, and topic no. 2 is fifth, but it's all great advice nonetheless.
Perhaps a final topic would be when and how to wrap up the PhD research, since research is a never-ending endeavor.