

Scientific Papers: Innovation or Imitation?

https://www.johndcook.com/blog/2025/06/05/scientific-papers-innovation-or-imitation/
60•tapanjk•19h ago

Comments

birn559•17h ago
The process is far from perfect, but it works well enough in the medium term and pretty well in the long term.

It's also better than any alternative, as far as I know. I haven't heard people push for restructuring the process; the only exception is the idea that journals shouldn't cost (that much) money and that institutions should instead pay to publish a paper. That wouldn't change the foundation of the process, however.

agumonkey•15h ago
What about the publish-or-perish effect? Any ideas on how to rebalance things to avoid it?
Daub•15h ago
Yes. It's simple: establish peer review as the metric for tenure and make damn sure those peers know their stuff.
friendzis•14h ago
> Establish peer review as the metric for tenure and make damn sure those peers know their stuff.

And you are back at square one: peer reviews become the currency of academic politics. A relatively small group of tenured academics has every incentive to form a fiefdom. Anonymization doesn't help, since everyone knows everyone else's work and papers anyway.

ancillary•10h ago
The supply of knowledgeable and conscientious reviewers in, say, machine learning is far outmatched by the number of papers that less knowledgeable and conscientious people submit.
richarlidad•17h ago
Imitation precedes creation.
kevinventullo•15h ago
Follow-up papers by other authors which “only extend or expand on the specific finding in very minor ways” have a secondary benefit. In addition to expanding the original findings, they are also implicitly replicating the original result. This is perhaps a crucial contribution in light of the replication crisis!
Daub•15h ago
Maybe, but that is a generous reading. I used to attend many computational aesthetics conferences. The sheer volume of non-photorealistic-rendering cross-hatch algorithms was almost laughable.
tgv•10h ago
If only. I worked in cog/neuro sci, and the career builders there produce small variations on the original. Variations on the Stroop task, which dates back to 1935(!), are still being published, despite there being no explanation for the effect. And when you consider that null results are rarely published and that many aspects of the methodology are flawed, a new paper cannot be considered a replication: it's just wishful thinking piled on wishful thinking.
kevinventullo•4h ago
Are you claiming the Stroop effect hasn't been proven to exist, or just that there hasn't been an explanation for it?

Funnily enough, the first “professional” coding I ever did was writing up a Stroop test in Visual Basic for a neuro professor, and I recall the effect being undeniably clear. At a personal anecdotal level, I would time myself with matching colors versus non-matching, and even with practice I could not bring my non-matching times down to my matching times.
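The underlying task is easy to reproduce: show color words in congruent or incongruent ink and time the responses. A minimal sketch of the trial-generation half, in Python rather than the original Visual Basic (all names here are hypothetical, not from that program):

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_trials(n, p_congruent=0.5, seed=None):
    """Generate n Stroop trials as (word, ink_color, is_congruent).

    Congruent: the word names its own ink color (e.g. "RED" shown in red).
    Incongruent: word and ink differ -- the condition that slows color
    naming, which is the Stroop effect.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        word = rng.choice(COLORS)
        if rng.random() < p_congruent:
            ink = word  # congruent trial
        else:
            ink = rng.choice([c for c in COLORS if c != word])
        trials.append((word.upper(), ink, word == ink))
    return trials

def mean_rt(reaction_times):
    """Mean reaction time (seconds) for one condition."""
    return sum(reaction_times) / len(reaction_times)
```

Timing keypresses per trial and comparing mean_rt over the incongruent versus congruent subsets is what would show the matching/non-matching gap described above.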

Daub•15h ago
For a few years I worked closely with computer engineers at a Southeast Asian university and got to know quite well the sort of stuff they published. Some of the dodgy things I saw:

Recycling. Some papers were near duplicates of the same academic's prior work, with minor modifications.

Faddishness. Papers featuring the latest buzz technologies, regardless of whether they were appropriate.

Questionable authorship. Some senior academics would get their names included on publications regardless of whether they had been actively engaged with the project. A few academics did take on risky and potentially interesting subjects, but they risked their careers in doing so.

But most of all, there was a dearth of true innovation. The university noticed this and established an Innovation Centre. It quickly filled with second-hand projects, all frustratingly similar to US projects from a few years earlier.

Of course there were exceptions, and learning from them was a genuine growth experience for which I am grateful.

bonoboTP•13h ago
It's not just the academics; it's also the expectations from higher-ups and funding agencies that you must meet to keep your job and have a chance of continuing your career. Over the last few decades the expected number of papers at good and even mediocre institutions has exploded. Profs who want to be seen as productive and who want good funding publish 30-50 papers per year and sometimes "supervise" dozens of PhD students at a time (students who agree to the deal for the brand name of the big prof, not for any real supervision).

Funding agencies can't evaluate the research itself, so they look at numbers: metrics, impact factors, citations, h-index, publication counts, etc. They can't simply say "we pay this academic whether he publishes or not, because we trust he is still deep in important work even when he's not at a stage to publish": people would suspect fraud, nepotism, and bias, and the funding is often taxpayer money. Not that the metrics prevent any of that, of course, but it seems that way. So metrics it is, and gaming the metrics via Goodhart's law it is.

I don't think it's super bad, but it adds administrative work and busywork on top of the actual research. Progress per person slows somewhat, since the same work has to be salami-sliced and marketed in chunks, but there are also far more people in the field. Most of them produce very low-quality stuff, but that's not a big loss: some decades ago these people would not have published anything at all; they would just have held some teaching professorship and published every few years, perhaps only in their national language. It increases the noise, but there are ways to find the signal, and academics figure out how to cut through it. It's not great and not easy, and it pushes out a lot of people who dislike the grind, but plenty of others see it as a relatively good deal to move to a richer country and do this.
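Of the metrics listed, the h-index at least has a crisp definition: the largest h such that the researcher has h papers each cited at least h times. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h
```

For citation counts [10, 8, 5, 4, 3] this gives 4: four papers have at least 4 citations each, but there are not five with at least 5. A single number like this is exactly what Goodhart's law then goes to work on.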

tokinonagare•11h ago
I've seen faddishness and questionable authorship at a top-3 Japanese university too. The lab I was in was a paper mill; the professor even explicitly told students that quantity > quality. I'm glad that in France things are a bit slower but deeper (from my observations).
empiko•13h ago
In my experience, the publication pressure in today's science is to a large extent inhibiting innovation. How can you innovate when you need X papers every year, or you won't get that position or funding? To fill the quota, the only rational strategy is to focus on simple iterative papers very similar to what everybody else is doing. There is simply no time to innovate or be brave; you have to conform. There is also barely time to make sure that what you are doing is methodologically correct. If you spend too much time, you will get scooped and forgotten.

Case in point: everybody is doing AI research nowadays, and NIPS gets something like 15k submitted papers. But the rate of innovation in AI is not that much higher than 10 years ago; I would even argue it is lower. What are all these papers for? They help people build their careers as proofs of work.

jltsiren•12h ago
AI is a special case of a special case. First you have the weird CS publication culture with conference papers and a heavy focus on selecting a (small) subset of winners. And then you have a subfield with giant conferences, a lot of money, and a lot of people doing similar things.

A typical approach to science is finding your niche and becoming the person known for that thing. You pick something you are interested in, something you are good at, something underexplored, and something close enough to what other people are doing that they can appreciate your work. Then you work on that topic for a number of years and see where you end up. But you can't do that in AI, because the field is overcrowded.

mpascale00•11h ago
Certainly other fields are competitive, but the current AI boom has been ridiculous for a while now. As an outside observer, the competition seems to be for the money, prestige, or whatever the top papers win, rather than competition at the level of paper acceptance...
bonoboTP•11h ago
The competition racket and inflation keep turning. It used to be publications. Then it was top-conference publications. Now it's going viral on social media and being popularized by big AI aggregators like AK.

It's crazy: most Master's students applying for a PhD position already come with multiple top-conference papers, which a few years ago would have gotten you about 2/3 of the way to a PhD; now it just gets you a foot in the door when applying to start one. And Bachelor's students are already expected to publish to get a good spot in a lab for their Master's thesis or internship. NeurIPS even has a track for high school students to write papers, which - I assume - will boost their applications to start university. This type of hustle has been common in many East Asian countries and is getting globalized.

bonoboTP•11h ago
> finding your niche

Exactly. It used to be that way in AI a decade ago. Different subfields used bespoke methods you could specialize in, and you could take a fairly undisturbed 3-5 years to work on a problem without constantly worrying about being scooped and having to rush out something half-baked to plant flags. Nowadays methods are converging, and it's comparatively less useful to be an expert in some narrow application area, since standard ML methods work quite well for such a broad range of uses (see the bitter lesson). This also means that a broader range of publications is relevant to everyone: you're supposed to be aware of the NLP frontier even if you are a vision researcher, you should know about RL developments, etc. Thanks to more streamlined GitHub and Hugging Face releases, research results are also more available for others to build on, so publishing an incremental iteration on top of a popular method is much easier today than 15 years ago, when you first had to implement the paper yourself and needed expertise to avoid traps that aren't mentioned in any paper and are assumed to be common knowledge.

It may not be a big problem for overall progress, but it makes people much more anxious. I see it in PhD students: many are quite scared of opening arXiv and academic social media, fearing that someone was faster and scooped them.

Lots of labs are working on very similar things, and labs are less focused on narrow areas; everyone tries to claim broad ones. Meanwhile, people have less and less energy to peer review this flood of papers, and there's little incentive to do a good job of it instead of working on the next paper.

This definitely can't go on forever and there will be a massive reality check in academia (of AI/ML).

empiko•11h ago
I agree that AI is an extreme example, but similar pressures exist in other popular fields and subfields, especially in STEM. Peter Higgs famously said that he would probably not be able to do a PhD nowadays.
atrettel•9h ago
I completely agree that "publish or perish" harms innovation. Funding and research positions have become so predicated on rapid and consistent publication that it incentivizes researchers to focus on incremental and generally low-risk ideas that they can propose, develop, and publish quickly and predictably. Nobody has the time or energy anymore for bigger and braver (your word) ideas that are less incremental and cannot be developed on predictable time frames.

I agree that many fields essentially have papers as "proof of work", but not all fields are like that. When I worked as a mechanical engineer, publication was "the icing on the cake" and not "the cake itself". It was a nice capstone you did after you had completed a project, interacted with your customers, built a prototype, filed a patent application, etc. The "proof of work" was the product, basically, and you could build your career by making good products.

Now that I am working as a scientist, I see that many scientists have a different view of what their "product" is. I have always focused on the product being the science itself --- the theories I develop, the experiments and simulations I conduct, etc. But for many scientists, the product is the papers, because that is what people use to evaluate your career. It does not have to be this way, but we would have to shift towards a better definition of what it means to be a productive scientist.

agarttha•12h ago
Random thoughts from a physics researcher:
- Too much imitation delays innovation.
- For all the emphasis on high-risk research, the system doesn't reward it.
- Creativity isn't valued as much as it should be.
- Negative results and failed experiments hold back careers, yet they are signs of attempted innovation.
- The VC world may understand that only 1 in 100 projects will be novel and perhaps successful, but funding agencies don't.
mpascale00•11h ago
The thesis here is not well elaborated. Ref [1], for example, seems to me to miss more recent progress in our understanding of working memory; and in linguistics, while Chomsky's work is foundational, we now have a much better idea of how the compositionality of behavior required for language might arise in the human brain.
mpascale00•11h ago
To add to that: I agree with the basic sentiment of the article, but it just doesn't read like the reflections of an academic. Ending with optimism about AI makes me think the author believes AI will solve a problem they are not well acquainted with.

Perhaps one expects overgeneralization in consulting blogs though

kj4211cash•9h ago
So much of academic life revolves around bringing in grant money. This is particularly true in STEM fields and at the best research schools. There are ever increasing administrative hoops to jump through to bring in that grant money. And grants nowadays are often given out for research on very specific topics often chosen by bureaucrats. These topics are, almost by definition, not innovative. The NSF is an exception but there are very few NSF grants given out, relative to the number of researchers. My assessment is that the most famous, most published researchers can still afford to explore, if they have the time and inclination, but the rest cannot.