
Global hack on Microsoft Sharepoint hits U.S., state agencies, researchers say

https://www.washingtonpost.com/technology/2025/07/20/microsoft-sharepoint-hack/
175•spenvo•23h ago•78 comments

What went wrong inside recalled Anker PowerCore 10000 power banks?

https://www.lumafield.com/article/what-went-wrong-inside-these-recalled-power-banks
186•walterbell•3h ago•75 comments

AccountingBench: Evaluating LLMs on real long-horizon business tasks

https://accounting.penrose.com/
307•rickcarlino•4h ago•76 comments

Don't bother parsing: Just use images for RAG

https://www.morphik.ai/blog/stop-parsing-docs
99•Adityav369•4h ago•24 comments

TrackWeight: Turn your MacBook's trackpad into a digital weighing scale

https://github.com/KrishKrosh/TrackWeight
396•wtcactus•6h ago•102 comments

Spice Data (YC S19) Is Hiring

https://www.ycombinator.com/companies/spice-data/jobs/RJz1peY-product-associate-new-grad
1•richard_pepper•14m ago

In a major reversal, the World Bank is backing mega dams (2024)

https://e360.yale.edu/features/world-bank-hydro-dams
18•prmph•57m ago•1 comment

Scarcity, Inventory, and Inequity: A Deep Dive into Airline Fare Buckets

https://blog.getjetback.com/scarcity-inventory-and-inequity-a-deep-dive-into-airline-fare-buckets/
45•bdev12345•2h ago•6 comments

New records on Wendelstein 7-X

https://www.iter.org/node/20687/new-records-wendelstein-7-x
165•greesil•6h ago•77 comments

Show HN: Lotas – Cursor for RStudio

https://www.lotas.ai/
35•jorgeoguerra•3h ago•15 comments

Game Genie Retrospective: The Best NES Accessory Ever Was Unlicensed

https://tedium.co/2025/07/21/the-game-genie-generation/
72•coloneltcb•3h ago•26 comments

Jqfmt: like gofmt, but for jq

https://github.com/noperator/jqfmt
108•Bluestein•4h ago•29 comments

Yoni Appelbaum on the real villains behind our housing and mobility problems

https://www.riskgaming.com/p/how-jane-jacobs-got-americans-stuck
19•serviette•1h ago•7 comments

The Fundamentals of Asyncio

https://github.com/anordin95/a-conceptual-overview-of-asyncio/blob/main/readme.md
59•anordin95•3h ago•11 comments

Gemini with Deep Think officially achieves gold-medal standard at the IMO

https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
375•meetpateltech•5h ago•153 comments

Erlang 28 on GRiSP Nano using only 16 MB

https://www.grisp.org/blog/posts/2025-06-11-grisp-nano-codebeam-sto
42•plainOldText•2h ago•0 comments

Occasionally USPS sends me pictures of other people's mail

https://the418.substack.com/p/a-bug-in-the-mail
124•shayneo•6h ago•117 comments

MIPS – The hyperactive history and legacy of the pioneering RISC architecture

https://thechipletter.substack.com/p/mips
30•rbanffy•3h ago•12 comments

Modern Debian-based Window Maker distribution

https://wmlive.sourceforge.net/
66•Aldipower•5h ago•23 comments

SecretSpec: Declarative Secrets Management

https://devenv.sh/blog/2025/07/21/announcing-secretspec-declarative-secrets-management/
99•domenkozar•5h ago•25 comments

"Changing elves to wolves makes a difference"

https://www.sciencedaily.com/releases/2025/07/250716000855.htm
4•robinhouston•3d ago•0 comments

Amazon and the "Profitless Business Model" Fallacy

https://www.eugenewei.com/blog/2013/10/25/amazon-and-the-profitless-business-model-narrative
63•serviette•2d ago•15 comments

We made Postgres writes faster, but it broke replication

https://www.paradedb.com/blog/lsm_trees_in_postgres
175•philippemnoel•10h ago•40 comments

The Krull dimension of the semiring of natural numbers is equal to 2

https://freedommathdance.blogspot.com/2025/07/the-krull-dimension-of-natural-numbers.html
22•surprisetalk•3d ago•3 comments

Make Map Icons with Orthographic Projections

https://www.esri.com/arcgis-blog/products/arcgis-living-atlas/mapping/custom-orthographic-icons
39•bryanrasmussen•4h ago•0 comments

UK backing down on Apple encryption backdoor after pressure from US

https://arstechnica.com/tech-policy/2025/07/uk-backing-down-on-apple-encryption-backdoor-after-pressure-from-us/
359•azalemeth•6h ago•229 comments

Hiding messages in a deck of playing cards

https://asherfalcon.com/blog/posts/3
82•ashfn•3d ago•29 comments

12ft.io Taken Down

https://www.newsmediaalliance.org/takedown-of-12ftio/
92•afeuerstein•2h ago•76 comments

Memory Efficiency in iOS: Reducing footprint and beyond

https://antongubarenko.substack.com/p/memory-efficiency-in-ios-reducing
47•CharlesW•5h ago•18 comments

Show HN: Pogocache – Fast caching software

https://github.com/tidwall/pogocache
43•tidwall•4h ago•17 comments

NIH limits scientists to six applications per year

https://www.science.org/content/article/fearful-ai-generated-grant-proposals-nih-limits-scientists-six-applications-year
82•pseudolus•11h ago

Comments

throwpoaster•10h ago
The decision criteria for NIH grants should be qualitative, not quantitative.
stingraycharles•10h ago
Well, yes, but I can completely understand their fear of being overwhelmed by the volume of things to review.

Unless they start using AI on their end to review the quality as well, which I don't think is the direction we want things to go.

poulpy123•10h ago
Which is what they still do, or at least pretend to do. What they are limiting here is the number of submissions, which will actually let them spend more time evaluating quality if they so wish.
seydor•9h ago
(small) quantity is a quality of its own
gadders•10h ago
From TFA:

"Lauer notes that not long before he left NIH, he and his colleagues identified a principal investigator (PI) who had submitted more than 40 distinct applications in a single submission round, most of which appeared to be partially or entirely AI generated. The incident was “stunning” and “disappointing,” says Lauer, who was not involved in creating the new NIH policy but hopes the cap will discourage other researchers from abusing the system."

Always somebody who ruins things for everybody else.

prasadjoglekar•10h ago
Indeed, and failing to name and punish the individuals in question means the rest of the profession is now bearing a collective weight going forward.
jeltz•10h ago
While I personally do not mind people abusing the system being called out, in this case I am not sure that is an issue. Is there any purpose in allowing people to submit more than six?
mbreese•7h ago
The process itself can require multiple submissions for the same grant. From my reading of the article, initial submissions, revisions, or renewals will all count as a “submission”. Additionally, if you are part of a collaboration (program project), that will count too. The concern here is that with these caps in place, you’ll see a pull back from collaborative efforts (which are normally looked on favorably).

The main gripe isn’t the fact that there is a limit, but rather that the threshold is too low. If you doubled the threshold to 12, I don’t think you’d see much pushback and still be able to limit over submissions. This being NIH, I’m a bit surprised there wasn’t more data presented to show why 6 was a reasonable limit.

kelipso•4h ago
The real problem seems to me to be that revisions etc. count as submissions and they are not limiting the submissions count to where you are PI. Six PI submissions per year seems plenty enough, though there are some PIs with a crazy number of grad students, post docs, etc. but hey, spread the PI responsibilities to post docs or whatever.
BiteCode_dev•10h ago
Is there ground to sue those people?
amelius•10h ago
I suppose you could if you looked at it as a denial of service attack.
colechristensen•7h ago
Maybe someone could make a fraud case out of it? It really depends on details not given whether or not a decent civil or criminal case exists. Very likely a university-level ethics investigation would be warranted.
lazide•7h ago
If it wasn’t against the rules, and the submissions themselves were legit (albeit AI assisted), what basis is there for fraud?
amelius•9h ago
Sadly this kind of greed and indecency seems pervasive in society. We're transforming into a new species: homo economicus.
gadders•8h ago
There really does seem to be a "vibe" in the last few years where (paraphrasing) "Society is going to sh*t so although this action is morally dubious, it's not strictly speaking illegal, so I'll maximise my earnings so at least I am well resourced when everything falls apart."
conception•7h ago
When you see it working so well around you, you get the idea it may work for you too.
sorcerer-mar•6h ago
It's called being a loser and we should ostracize such losers at every turn. Making money won't make them less of a loser. Can't wash it off, actually.
amelius•4h ago
The problem: money is the new measure of success, so loser is exactly what they are not in this view.
sorcerer-mar•4h ago
Strongly disagree. I know a lot of losers with a lot of money, and most people around them think they're fundamentally losers too.
Nasrudith•5h ago
They follow their leaders, basically. Look at the pattern of avoiding claiming responsibility like the plague, since claiming it brings all of the downsides. When a borderline 'not illegal' legalist ethic prevails across all segments of upper society (with Congress sticking to that, blatantly and illegally insider trading or worse, instead of holding to a higher 'avoid even the appearance of impropriety'), the shit has rolled downhill.

Although it might also just be an effect of illusions of nobility from the past being shattered by increased transparency. George Washington abused his power to speculate in land, for one, and there were presidents who had their affair participants institutionalized for stating uncomfortable truths about whose child was whose.

gadders•5h ago
Agreed. And it also makes people think that behaviour like this is OK: https://michaelwest.com.au/macquarie-bank-privatisation-and-...
theobeers•7h ago
Indeed, many such cases. Our society is full of institutions that functioned only because "no one in a position to participate would be shameless enough to abuse it." Then that assumption breaks, and it's ruined for everyone.

It is relevant, though awkward to discuss, that a large share of NIH and NSF funding proposals (and indeed funded projects) are led by researchers who didn't grow up in the US. I wonder if it's in fact a majority.

gus_massa•7h ago
It's hard to know without reading them, and perhaps 40 is too much for an innocent explanation, but ...

It's common that the head of the laboratory submits the applications for each project, so 40 applications may mean 40 subteams with 2-6 minions each (where each minion has a Ph.D. or is a graduate student). Usually when the paper is published, the head of the laboratory is the "last author".

Now it's getting common to do an AI cleanup, like fixing orthography and grammar and perhaps reducing the text to 5000 characters. Without reading them it's hard to know if this is the case or if it's nonsensical AI slop.

BeetleB•6h ago
> It's common that the head of the laboratory submits the applications for each project, so 40 applications may mean 40 subteams with 2-6 minions each (where each minion has a Ph.D. or is a graduate student). Usually when the paper is published, the head of the laboratory is the "last author".

I believe this is the reason they are limiting it. A lot of grants require the PI to spend at least some percentage of their research time on the project. PIs tend to ignore that requirement. As a result, big names were getting a huge number of grants and early-career researchers were getting none. Requirements like these give other researchers a chance.

WillQuinn•10h ago
I think it's fair that the decision criteria should be qualitative; it's just a bummer that it's happening at a time with a complicated political environment and dwindling research funds, making it harder for researchers.
Al-Khwarizmi•10h ago
As a scientist myself, I'd say grant proposals are an ideal use case for LLMs:

- Massive time sink. Those of us at senior/PI levels devote a lot of time to grant writing, often more than to actual research.

- Not something that you really get much useful learning or enrichment from (apart from learning to write better grant proposals the next time). The part of brainstorming and structuring ideas is useful but you would mostly do it without grant writing anyway, all the actual writing and polishing (which is 95% of the time) isn't. Definitely not an efficient use of the amount of hours it takes.

- I don't know specifically for NIH, but in my (non-US) context, grant proposals are full of formulaic sections that aren't really useful (Gantt chart, data management plan, etc.) When I'm in an evaluator role, I tend to outright skip many of them, not out of neglect or laziness but because they're just useless ritual fluff.

- As a consequence of the above three points, most of us dislike or even hate this part of our work.

- The meta for most funding agencies I know has long been to overhype and to use exaggeratedly positive language and takes. Exactly what LLMs are naturals at.

- If you're a non-native English speaker and write grant requests in English (common in Europe), the LLM also helps you level the playing field with native speakers, which is quite a big deal. From a naive outside standpoint you might think that scientific grant evaluation is all about the actual ideas and CVs, but the truth is that in practice, ability to pitch your ideas better than other competitors in your call is key.

- Honestly in the last grant I wrote, Gemini came up with some paragraphs that I consider to be clearly better than what I would have written by myself. Clear, concise, attractive to read, etc. It's just very good at writing. I'm better than it at the actual research, but at writing, let alone in English where I'm not a native? I don't have a chance.

As a result of this... good luck convincing scientists not to use LLMs for this. I'm pretty sure that if you ask, you will find two types of scientists: those that tell you that they use LLMs for grant writing and those who are hypocrites and deny it. I wouldn't even trust a scientist who didn't use LLMs for this (unless it's out of some very deep quasireligious conviction): why waste your time? Don't you want to have more time to do actual science?

jhrmnn•10h ago
> useless ritual fluff

I believe that LLMs can be very useful for identifying this stuff in our processes. The solution shouldn't then be to fill these sections with LLM output, but to strip them away entirely. I tend to think the same about everyone freaking out about LLM misuse in education.

Al-Khwarizmi•10h ago
Of course, but identification has never been the problem. You don't need LLMs for that, you could just ask the scientists themselves and I'm sure over 90% of us would agree on the parts I mentioned being useless.

The problem is the bureaucracy. And if it asks for useless fluff, I'm happy to feed it with LLMs.

cturner•10h ago
I want to ask about the bureaucracy aspect. I have never written a science grant application, but expect that some of it comes about because the applications want to ensure good governance around the proposals. Do you agree? For the fluff that genuinely has no productive value, do you have any explanation for why it is there?

Could LLM participation be blowing holes in good-governance measures that were only weakly effective, and therefore a good thing in the long-term? Could the rise in the practice drive grants arrangements to better governance?

gotoeleven•10h ago
https://www.motherjones.com/politics/2025/03/nih-ending-dive...

Crap like this

Al-Khwarizmi•9h ago
These are very good questions, and I only have vague answers because it's not easy to understand how bureaucratic systems come to be, grow and work (and not my speciality), but I'll try to do my best.

Indeed, some of the fluff is due to the first reason - for example, the data management plan (where you need to specify how you're going to handle data) has good intentions: it's there so that you explain how you will make your data findable, interoperable, etc. which is a legitimately positive thing; as opposed to e.g. never releasing the research software you produce and making your results unreproducible. But the result is still fluff: I (well, Gemini and I) wrote one last week, it's 6 pages, and what it says can be said in 2-3 lines: that we use a few standard data formats, we will publish all papers on arXiv and/or our institutional repository, software on GitHub, data on GitHub or data repositories, and all the relevant URLs or handles will be linked from the papers. That's pretty much all, but of course you have to put it into a document with various sections and all sorts of unnecessary detail. Why? I suppose in part due to requirements of some disciplines "leaking" into others (I can imagine for people who work with medical data, it's important to specify in fine detail how they're going to handle the data. But not for my projects where I never touch sensitive data at all). And in part due to the trend of bureaucracies to grow - someone adds something, and then it's difficult to remove it because "hey, what if for some project it's relevant?", etc.

Then there are things that are outright useless, like the Gantt chart. At least in my area (CS), you can't really Gantt chart what you're going to do in a 5-year project, because it's research. Any nontrivial research should be unexpected, so beyond the first year you don't know what you'll exactly be doing.

Why is that there? I suppose it can be a mix of several factors:

- Maybe again, spill from other disciplines: I suppose in some particular disciplines, a Gantt chart might be useful. Perhaps if you're a historian and you're going to spend one year at a given archive, another year at a second archive, etc... but in CS it's useless.

- Scientists who end up at bureaucratic roles are those that don't actually like doing science that much, so they tend to focus on the fluff rather than on actual research.

- Research is unpredictable but funding agencies want to believe they're funding something predictable. So they make you plan the project, and then write a final report on how everything turned out just as planned (even if this requires contorting facts) to make them happy.

Majromax•10h ago
> The solution shouldn't be then to fill them with LLMs but strip them entirely away.

You don't need language models to identify useless processes. The problem, however, is that people tend to be more comfortable with a process that exists whose product is ignored rather than no process at all.

For example, in the case of the grants here it's easier to imagine giving money to someone with a Gantt chart – even if that chart will never really represent reality – rather than someone who says 'trust us to use the money effectively.'

For an alternative view, a lot of the information supplied in such processes isn't related to the happy path, but rather it creates a paper trail for blame when things go wrong.

> I tend to think the same about everyone freaking out about LLM misuse in education.

The difference for education is that students need to practice, so the repetition is the point. The AI might ultimately be better at writing the book report, particularly compared to a student in 6th grade, but there's few other ways to train skills of reading comprehension and analysis.

DonsDiscountGas•10h ago
These are good reasons to allow LLMs but the cap on 6 submissions doesn't seem so bad. If anything it ensures people focus on fewer quality submissions. And if that means less time spent writing proposals, so much the better.
Al-Khwarizmi•10h ago
Of course, I agree with the cap. In fact, I'm surprised people could submit 40 proposals - in my country, most grant calls are capped to one proposal (since long before LLMs) and the same goes for most EU-wide calls I know. Even without LLMs, I wouldn't see the point of giving someone 40x the chances of funding just because they somehow found the time to write 40 proposals.

My comment was more about the part that says

"Aside from the cap, the policy makes clear that NIH will not consider AI-generated proposals to be the original work of applicants. “NIH will continue to employ the latest technology in detection of AI-generated content to identify AI generated applications,” the agency’s notice says. If AI use is detected after an award has been granted, NIH warns, the agency may refer the matter to the Office of Research Integrity while imposing penalties."

I think it's just unrealistic to expect that scientists won't use LLMs for grant proposals. And detection doesn't even actually work without the risk of false positives (not that there are going to be actual false positives, because everyone is going to use LLMs for this so by definition you will only have true positives and false negatives, but it's still unfair if some scientists' proposals are flagged by an AI detector and others bypass it).

poulpy123•10h ago
Using tools, LLM or not, to improve the language is one thing, but if an LLM is able to write a grant proposal for a research project, that means the project is crap.
SecretDreams•10h ago
You've just, correctly, identified probably most research proposals in the current era. They're mostly* recycled crap. But that doesn't mean they should get funded. We just need to also recognize that modern research (at universities, at least) is mostly about training HQP (highly qualified personnel).
SecretDreams•7h ago
Shouldn't**
seydor•9h ago
It's true though. You need an idea that sounds sexy and a lot of hyperbole about the impact of your research, in particular with EU funds. Applications generally MUST overhype their impact, because there is a lot of competition, often because the guidelines encourage it, and because there is no restriction on making unrealistic claims. The application, the interview, the pitching, the networking, it all matters. And over the years EU funding by the ERC has adopted the processes of tech startups and VCs (pitching etc). So whatever works with hustling entrepreneurs works with researchers.
Al-Khwarizmi•9h ago
I would never use them to write a proposal from scratch (as in "write a grant proposal to submit to call X on topic Y"), but they can very effectively turn informally-written lists of bullet points with (human-generated) ideas into well-structured sections of flawless, effective text describing the ideas in detail. As well as iterate on the text (make this section longer, shorter, add idea X, emphasize aspect Y more, etc.), suggest risks and contingency plans, make a Gantt chart, etc.

All this represents the overwhelming majority of the work (in terms of hours) in writing a grant proposal.

poulpy123•9h ago
It will still take a lot of time to write down the bullet points and gather the supporting data (I suppose NIH grants work like the ones in my country). The people who propose 40 grants in a year, and who are targeted by this rule, aren't just using AI to help draft their proposals.

As a funny anecdote and a side note, I've noticed that the tricks people use to detect AI text are dangerously close to my natural writing habits in English. For example, I've seen people say that a lot of "fancier" words (closer to my native language than more natural words), or frequent use of "moreover" or "-" in sentences, helps them tell that a text is AI generated. I'm now very self-conscious about using "moreover".

tornikeo•10h ago
This reminds me of how I used to have a spam-filled email inbox before I switched over to GMail. It almost feels like we are back in that state. There's now a large demand for keeping the human context free of AI bullshit. I wonder what the solution to this would look like? Identity-based blacklisting?
eloisius•10h ago
Would it be cynical to think World(coin) is the answer?
arghwhat•10h ago
Not cynical, just wrong - yet another identification solution doesn't solve anything.
jeltz•10h ago
I don't think it is the answer, but I could definitely see it being intended as an attempt to answer it.
poulpy123•10h ago
What would a cryptocurrency even provide except more scammers?
seydor•9h ago
it's not like spammers are anonymous
Bluestein•10h ago
We are going to end up in a "personal certificate" cryptographically secure ID environment for users ...

... and everybody else is going to be assumed to be a software system.-

PS. Even worse (better): Your agent is going to be cryptographically bound to your identity.-

Al-Khwarizmi•10h ago
How do you ensure that I don't write the proposal with an LLM, have it on my mobile screen (or even printed), and then copy it typing with my human hands in the super secure environment?

I don't see a way of ensuring actual human input that doesn't involve panopticon-level surveillance.

d4rkn0d3z•9h ago
This would not be a problem because human hands are limited, their output is bounded.
Bluestein•9h ago
I mean, this is the "analogue gap" all over again.-

(Yes, I'd say it'd mitigate the issues somewhat, with the putative slowdown ...)

seydor•9h ago
the obvious solution is using AI to De-AI
krallistic•10h ago
Seems like a band-aid solution for a broken system.

But in general science will have to deal with that problem. Written text used to prove that the author put some level of thought into the topic. With AI that promise is broken.

paulluuk•10h ago
The question is: is AI breaking the system, or was it always broken and does AI merely show what is broken about it?

I'm not a scientist/researcher myself, but from what I hear from friends who are, the whole "industry" (which is really what it is) is riddled with corruption, politics, broken systems and lack of actual scientific interest.

jeltz•10h ago
Everyone even remotely close to the system knew that this was broken, and this is just a bandaid. More fundamental changes would be needed to fix this.
raphman•8h ago
"Broken" is a spectrum. Adding rapid-fire AI exacerbates the existing problems and makes it harder to fix them.
miltonlost•6h ago
Yeah, a single drip of water leaking out of a pipe is "broken" but is substantially different than a deluge flooding out constantly.
strangescript•10h ago
I was literally typing band-aid when I scrolled down.

Many systems are going to have to come up with better solutions to the problems AI will pose for legacy processes.

RhysU•7h ago
It's going to be really funny when the NIH eventually sits down the professors, hands them blue exam booklets, and makes them write proposals in freehand.
d4rkn0d3z•10h ago
This seems like a good example of a more general issue; when you have a machine that produces bullshit mixed with gem-like phraseology, at a pace that we cannot possibly match as humans, we may be faced with intellectual denial of service attacks.
maratc•10h ago
There was a natural barrier of investing time into writing the proposal.

This barrier is clearly broken now.

A different barrier could be money that people submitting grant proposals would need to pay. First grant proposal could be $0, second $1, third $10, fourth $100, etc.

nness•10h ago
Or more limited in impact — flag people who are behaving outside the norms for manual review, i.e. too many or too frequent submissions. Manually review, and if any are found to contain AI, assume all are AI, and charge only that person a fixed proposal-review fee going forward.
maratc•9h ago
That would need a clear definition of what "too many" or "too frequent" mean. Every time this definition is changed, you'd need to retroactively apply the change. Changing the "person" would circumvent this.

My idea doesn't involve any of that -- you want to submit 10 proposals? $111,111,111 please.
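For illustration, the escalating-fee schedule described above (first proposal free, each additional one ten times the previous) is easy to sketch; the function names here are my own, not anything official:

```python
def submission_fee(n: int) -> int:
    """Fee in dollars for the n-th proposal of the year:
    the first is free, then $1, $10, $100, ... (10x each time)."""
    return 0 if n == 1 else 10 ** (n - 2)

def total_cost(k: int) -> int:
    """Cumulative fee for submitting k proposals in one year."""
    return sum(submission_fee(n) for n in range(1, k + 1))

# 10 proposals cost 0 + 1 + 10 + ... + 10**8 = $111,111,111 in total
```

Summing the schedule for ten proposals gives the $111,111,111 figure quoted above.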

Cthulhu_•9h ago
The "if found to contain AI" part will likely become harder and harder to verify over time, at which point you have to assume all entries could be AI, and flagging them or making them pay a review fee would become the standard.

But it's similar to email: under the hood, your e-mail account has a 'trust score' based on previous behaviour, domain, etc. It could also come from the other side: if a scientist is attached to a university or other research body, the institution should sign off on a declaration that AI was not used (or was used but clearly marked as such), with a big fine and reputation damage for the university if their researchers violate it.

nxobject•10h ago
Where did the 6 application/year number come from? The justification seems a little fast-and-loose:

> According to the new notice, the number of PIs who submit more than six applications per year is “relatively low.”

I imagine that, given funding cuts, PIs are going to try to work harder to find funding opportunities (i.e. more proposals submitted) for insurance.

sampl3username•10h ago
Another "gift" from AI to the world. Another line on the long list of "minor side effects" from uncontrolled, unapologetic corporations releasing radical technology into a world that didn't agree to it.
chvid•10h ago
But AI can fix this.
spwa4•10h ago
Are you seriously blaming corporations for advancing science and how bad and inconvenient that is for some people?
sampl3username•9h ago
Yes I do indeed think companies should not be able to vastly reshape the world at will.
lazide•7h ago
Who should?
Nasrudith•5h ago
In a free society nobody decides who is allowed to reshape the world. The alternative is frankly horrifying.
Cthulhu_•10h ago
TBH it was already possible before, but you'd need to either write templates or find competent low-cost ghostwriters. AI has made this kind of thing easier and more accessible.

I wouldn't be surprised if institutions, or projects like CURL, create harsher measures to stop this flood of high-quality spam, like putting people on a list or requiring payment per submission.

I mean, that last one isn't a bad thing. Apple did something like it years ago, and I think asking for an upfront cost and having a strict (at the time) review process made it so that all apps were from serious developers and met minimum quality standards.

aitchnyu•7h ago
Agree. Before 2010, I read a get-rich-quick affiliate marketing book telling you to use an automated tool to publish SEO spam about goldfish to about 200 blogs.
lazide•7h ago
In Academia, this is what grad students have been sucked into doing for decades.
newsclues•9h ago
Blame the people not the tools.
sampl3username•9h ago
Technology is not neutral, nor does it exist in a vacuum. It is not neutral because humans are not impassive wielders of technology.
newsclues•8h ago
Assuming you define tools and technology as the same thing I disagree.

Hammers build homes, and smash faces

Encryption keeps honest people’s information safe, but it also can be used for drug trafficking

The internet is used for social media and news. But it’s also used for child porn.

Tools are neutral, it’s how they are used that makes a tool like a gun, something that feeds a family or something that kills innocent people.

dang•8h ago
Ok, but can you please not post like this to Hacker News? It just makes things worse, and it's against multiple site guidelines—these, for example:

"Please don't fulminate."

"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html

bbarnett•10h ago
One of the issues here, is trust.

If I write anything, and put my name as the author, I'm 100% lying if I am just copy pasting text.

This holds true 10 years ago, if I copied in any text without attribution. A novel, a book, a grant app, a paper, whatever.

Just because you're now copying large swaths of text from an LLM, doesn't make it better than copying from a person, eg plagiarism. And if you took a person's text 10 years ago, and modified a few words out of thousands, yes, that'd be called plagiarism too.

(No, a spell checker isn't that. It's correcting your word to the same word, properly spelled. If you think spellcheckers are the same as whole-paragraph insertion, please check your ethics meter; it's broken.)

If the work isn't yours, you need to say so. Otherwise you're being dishonest.

If people get upset at the notion of disclosing, that feels like guilty behaviour. Otherwise, why not disclose?

Now, taking a step back? We're in a period of transition.

I agree that vast imbalances are being created here. This is the true problem.

For example, an application process could state "LLM applications are fine", or not. Instead?

The current stance is "no" without clearly saying so, for obvious reasons (it's presenting work you didn't write as your own ... plagiarism), but any such "no" without a high incidence of detection and punishment is worse than anything.

The 6-application limit seems like a cop-out, although it is logical. It should also be coupled with an "OK, you win, use LLMs" statement too.

On another note, soon there will be two types of people, and only one of which I will engage in thoughtful email/text communication with.

Those who use LLMs, and people worth talking to.

Of what value is any meaningful conversation, if the other person's response is an LLM paste? Might as well just talk to chatgpt instead.

(note, I'm talking about friendly debate among friends or colleagues. Seeking their opinion or vice versa.)

seydor•9h ago
> I'm 100% lying if I am just copy pasting text.

I disagree. It's like doing collage art with text, like using samples to make music. We are already in the era of collating text, images, and video from AIs. We should learn to embrace it.

addicted•7h ago
I can't speak to whether 6 applications is the correct number, but it seems like a reasonable first pass to apply some limit, as long as the NIH is closely monitoring this and modifying the restrictions as needed.
Balgair•7h ago
I'm curious for an economist's take here.

It seems to me that the incentives are such that you've just guaranteed that all PIs will now submit 6 applications every year.

It may be fewer than what you were originally seeing, but I don't know the population stats.

It also may be that you've now poisoned all the other grant agencies (NSF, etc.) and they'll soon have to impose maximums too.

randomizedalgs•7h ago
For perspective, the CS programs in the NSF already have a two-submission limit per year [1].

Besides reducing the incentive to spam, this rule has had another positive effect: As a researcher without funding, you don't have to spend your whole year writing grants. You can, instead, spend your time on actual research.

With that said, NIH grants tend to be much narrower than CS ones, and I imagine that it takes a lot more grants to keep a lab going...

[1] https://www.nsf.gov/funding/opportunities/computer-informati...

elehack•6h ago
Describing this as a limit on "CS programs" is a common but erroneous understanding of the proposal limit.

This specific solicitation (CISE Core Programs) has a 2-proposal-per-year limit. However, that limit applies only to this solicitation, and it counts only proposals submitted to this solicitation. CISE Core Programs is an important CS funding mechanism, but there are quite a few other funding vehicles within CISE (Robust Intelligence, RETTL, SATC, and many more, including CAREER). Each has its own limits, which generally don't count against the Core Programs limit.

n20benn•7h ago
The same is happening in the academic computer security research realm. All four top conferences (USENIX Security, ACM CCS, IEEE S&P, and NDSS) have instituted a submission cap: you can't have your name on more than 6 papers submitted in a given cycle. This has all happened within the last year, likely due to the same GenAI abuse that puts undue burden on PC reviewers.
jasonhong•5h ago
Having been on the program committee for some of these conferences, I can say that limiting the number of submissions was being discussed long before GenAI. Specifically, there was talk of a few highly prolific security researchers who submitted 15-20 papers to these conferences each cycle, with pretty good quality too.