
Mac mini will be made at a new facility in Houston

https://www.apple.com/newsroom/2026/02/apple-accelerates-us-manufacturing-with-mac-mini-production/
226•haunter•2h ago•228 comments

I'm helping my dog vibe code games

https://www.calebleak.com/posts/dog-game/
504•cleak•6h ago•154 comments

Hacking an old Kindle to display bus arrival times

https://www.mariannefeng.com/portfolio/kindle/
125•mengchengfeng•3h ago•22 comments

Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3

https://github.com/moonshine-ai/moonshine
34•petewarden•1h ago•4 comments

How we rebuilt Next.js with AI in one week

https://blog.cloudflare.com/vinext/
256•ghostwriternr•3h ago•64 comments

Pi – a minimal terminal coding harness

https://pi.dev
57•kristianpaul•1h ago•20 comments

Nearby Glasses

https://github.com/yjeanrenaud/yj_nearbyglasses
183•zingerlio•5h ago•76 comments

Show HN: Emdash – Open-source agentic development environment

https://github.com/generalaction/emdash
83•onecommit•5h ago•32 comments

I pitched a roller coaster to Disneyland at age 10 in 1978

https://wordglyph.xyz/one-piece-at-a-time
371•wordglyph•10h ago•144 comments

Looks like it is happening

https://www.math.columbia.edu/~woit/wordpress/?p=15500
120•jjgreen•2h ago•81 comments

Hugging Face Skills

https://github.com/huggingface/skills
116•armcat•5h ago•34 comments

Optophone

https://en.wikipedia.org/wiki/Optophone
13•Hooke•4d ago•3 comments

Cell Service for the Fairly Paranoid

https://www.cape.co/
6•0xWTF•43m ago•2 comments

Dream Recorder AI – a portal to your subconscious

https://dreamrecorder.ai/
8•level87•1h ago•4 comments

Build Your Own Forth Interpreter

https://codingchallenges.fyi/challenges/challenge-forth/
36•AlexeyBrin•3d ago•10 comments

IRS Tactics Against Meta Open a New Front in the Corporate Tax Fight

https://www.nytimes.com/2026/02/24/business/irs-meta-corporate-taxes.html
171•mitchbob•10h ago•189 comments

OpenAI, the US government and Persona built an identity surveillance machine

https://vmfunc.re/blog/persona/
381•rzk•4h ago•121 comments

Steel Bank Common Lisp

https://www.sbcl.org/
124•tosh•4h ago•42 comments

Verge (YC S15) Is Hiring a Director of Computational Biology and AI Scientists/Eng

https://jobs.ashbyhq.com/verge-genomics
1•alicexzhang•6h ago

We installed a single turnstile to feel secure

https://idiallo.com/blog/installed-single-turnstile-for-security-theater
250•firefoxd•2d ago•108 comments

Diode – Build, program, and simulate hardware

https://www.withdiode.com/
435•rossant•4d ago•93 comments

Show HN: Tag Promptless on any GitHub PR/Issue to get updated user-facing docs

26•prithvi2206•5h ago•5 comments

IDF killed Gaza aid workers at point blank range in 2025 massacre: Report

https://www.dropsitenews.com/p/israeli-soldiers-tel-sultan-gaza-red-crescent-civil-defense-massac...
1046•Qem•11h ago•361 comments

Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP)

https://github.com/MdSadiqMd/AV-Chaos-Monkey
26•MdSadiqMd•1d ago•2 comments

We Are Changing Our Developer Productivity Experiment Design

https://metr.org/blog/2026-02-24-uplift-update/
20•ej88•3h ago•13 comments

The history of knocking on wood

https://resobscura.substack.com/p/neolithic-habits-machine-age-tools
3•benbreen•8h ago•0 comments

The Missing Semester of Your CS Education – Revised for 2026

https://missing.csail.mit.edu/
367•anishathalye•1d ago•109 comments

λProlog: Logic programming in higher-order logic

https://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/
139•ux266478•4d ago•36 comments

Extending C with Prolog (1994)

https://www.amzi.com/articles/irq_expert_system.htm
58•Antibabelic•2d ago•18 comments

Show HN: Recursively apply patterns for pathfinding

https://pattern-pathfinder.vercel.app/?fixtureId=%7B%22path%22%3A%22site%2Fexamples%2F_intro.fixt...
9•seveibar•1h ago•1 comments

Looks like it is happening

https://www.math.columbia.edu/~woit/wordpress/?p=15500
119•jjgreen•2h ago

Comments

sealeck•1h ago
There are many really excellent papers out there - the kind which will save you hours/months of work (or even make things that were previously inviable to build viable).

That said, it is amazing how terrible a lot of papers are; people are pressured to publish and therefore seem to get into weird ruts trying to do what they think will be published, rather than what is intellectually interesting...

CoastalCoder•44m ago
Thanks for respecting HN's KJV-only rule!

/jk

wmf•1h ago
I assume hep = high energy physics in this context. PI = professor who received a government grant.

Peer review has never really been blind and I suspect PIs will reject papers from "outsiders" even if they are higher quality. This already happens to some extent today when the stakes are lower.

selridge•1h ago
Kinda. PI is principal investigator and usually they’re a professor with a grant (the grant being the thing they are the principal of investigating). That part is right. But they’re not really directly in the review loop. For some fields where things are small enough that folks can recognize style such as it exists, you could see reviewers passing over unfamiliar work and promoting familiar work. That was not the issue.

The issue was that it used to be kind of hard to produce crappy, mid-rate papers, so you kind of needed the infrastructure of a small lab to do it. Now you don't. The success rate for those mediocre papers produced by grad students and postdocs will go way down. It is possible that will cease to be a useful signal for those early-career researchers.

MarkusQ•1h ago
But peer review (circa 1965-2010[1]) is just the prior iteration of the problem[2]: the wave of crap[3] produced by publish or perish (circa 1950-present[4]). Rejecting papers by outsiders is irrelevant; the problem is we want to determine which papers are good/interesting/worth considering out of the fire hose of bilge, and, though we were already arguably failing at this, the problem just got harder.

(I say arguably, because there is always the old "try it yourself and see if it actually works" trick, but nobody seems to be fond of this; it smacks of "do your own research" and we're lazy monkeys at heart, who would much rather copy off of someone else's homework.)

[1] https://books.google.com/ngrams/graph?content=peer+review&ye...

[2] https://www.experimental-history.com/p/the-rise-and-fall-of-...

[3] https://journals.plos.org/plosmedicine/article?id=10.1371/jo...

[4] https://books.google.com/ngrams/graph?content=publish+or+per...

moregrist•1h ago
Peer review isn’t the issue here. His comments are about Arxiv, which is a preprint server. Essentially anyone can publish a preprint. There’s no peer or other review involved.
xamuel•48m ago
This is a common misconception. People without academic affiliation (based on their email address) require someone to vouch for them before they can submit to arxiv. And papers submitted to arxiv (with or without affiliation) are reviewed, and many are rejected.
bmacho•5m ago
Papers on arXiv are only reviewed for formal requirements. Moderators don't review every PDF and reject it for being false or wrong.

You are right that arXiv is effectively invite-only, but once you are in, there is no peer review of any form.

xamuel•54m ago
>Peer review has never really been blind and I suspect PIs will reject papers from "outsiders" even if they are higher quality.

I'm a complete outsider (not even in academia at all) and just got a paper accepted in the top math biology journal [1]. But granted, it took literally years to write it up and get it through. I do really worry that without academic affiliation it is going to get harder and harder for outsiders as gates are necessarily kept more and more securely because of all the slop.

[1] "Specieslike clusters based on identical ancestor points" https://philpapers.org/archive/ALESCB.pdf

sixtyj•1h ago
Well… it is happening. You can't put spilled milk back in the bottle, but you can add future requirements that try to stop this behaviour.

E.g. the submission form could include a mandatory field: "I hereby confirm that I wrote the paper personally." The conditions would note that violating this rule can lead to a temporary or permanent ban. In a world where research success is measured by points in WOS, this could slow the rise of LLM-generated papers.

tossandthrow•1h ago
This approach dismisses the cases where AI submissions are generally better.

I don't think this is appreciated enough: a lot of AI adoption is not driven by cutting cost at the expense of quality. Quite the opposite.

I am in the process of replacing my company's use of Retool with an AI-generated back office.

First and foremost for usability, velocity and security.

Secondly, we also save a buck.

moregrist•1h ago
> This approach dismisses the cases where AI submissions are generally better.

You’re perhaps missing the not so subtle subtext of Peter Woit’s post, and entire blog, which is:

While AI is getting better, it's still not _good_ by the standards of most science. However, it's as good as hep-th, where (according to Peter Woit) the bar is incredibly low. His thesis is part "the whole field is bad" and part "arXiv for this subfield is full of human slop."

I don’t have the background to engage with whether Peter Woit’s argument has merit, but it’s been consistent for 25+ years.

tossandthrow•1h ago
My comment was more an answer to the proposed gatekeeping of science as a human activity.

Yes, AI is still not good in the grand scheme of things. But everybody actively using it has grown concerned over the past 2 months by the leapfrogging of LLMs - and surprised, because they thought we had arrived at the plateau.

We will see in a year or two if humans still hold an advantage in research - currently very few do in software development, despite what they think about themselves.

zozbot234•38m ago
What about the new result that was recently derived by GPT 5.2 Pro/Deep Research? That was also hep-th. https://openai.com/index/new-result-theoretical-physics/ https://arxiv.org/abs/2602.12176
asdfman123•1h ago
Maybe we need to find a new metric to judge academics by beyond quantity of papers
mclau153•1h ago
What is happening?
bryanrasmussen•1h ago
It is happening that people can now find out what articles are about by clicking the links to said articles and reading them! It's an amazing world, man. The future!
guerrilla•1h ago
Nope, the site is down.
wmf•1h ago
Human science being replaced by AI I guess.
Sharlin•1h ago
No, human mediocrity being replaced by AI. Mediocrity meaning papers that exist only to increment the magic "num_citations" variable.
Sharlin•1h ago
The end of mediocrity, optimistically speaking. Getting so flooded in mediocrity that the gems are lost in the noise, pessimistically speaking.
babblingfish•1h ago
The number of submissions to high energy physics category on arXiv is double this year compared to the historical average. The author hypothesizes the increase is due to papers being written by LLMs.
blibble•1h ago
the collapse of the signal to noise ratio

in every domain, simultaneously

essentially, the end of the progress of humanity

selridge•1h ago
Honestly, this is good. We were already in a completely unsustainable system. Nobody had an alternative. We still don’t have one but at least now it’s not just merely unsustainable— it is completely fucked in half.

This kind of pattern is gonna get repeated in a lot of sectors when previous practices that were merely unsustainable become unsustained.

commandlinefan•1h ago
Honestly, publication has been pretty meaningless for a long time, long before AI could generate complete paragraphs. "Publish or perish" meant that a lot of human-generated slop was being published by people who were put in a position of perverse incentives by a "well-meaning" (?) system. There will still be meaningful contributions, but they'll be as rare as they ever were.
Certhas•59m ago
This has been my optimistic take on the situation for the last two years. My pessimistic take is that social systems have an incredible ability to persist in a state of utter fuckedness much longer than seems reasonably possible.
selridge•51m ago
Yeah and like…who knows if what is coming is better. Maybe big labs cartelize and withdraw from the global publication market (which is already unraveling). Maybe we ban theory and demand all papers be empirical, though that will amount to the same thing: seizure of publication by big actors.

As you point out, human systems are machines for making do. There is no guarantee that dramatic pressures produce dramatic change. But I think we’ll see something weird, soon.

zoogeny•1h ago
One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X". Where X might be (in the context of vibe coding) the next redis, nginx, sqlite, or even linux.

But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.

If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.

jellyroll42•1h ago
By its nature, it can only produce _another_ Redis, not _the next_ Redis.
smokel•1h ago
This is probably an outdated understanding of how LLMs work. Modern LLMs can reason and they are creative, at least if you don't mind stretching the meaning of those words a bit.

The thing they currently lack is the social skills, ambition, and accountability to share a piece of software and get adoption for it.

Philpax•1h ago
The human operator controls what gets built. If they want to build Redis 2, they can specify it and have it built. If you can't take my word for it, take those of the creator of Redis: https://antirez.com/news/159
krashidov•1h ago
The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason for that is because in 20 years the new engineers will not know how to code anymore.

The optimist in me thinks that the clear progress in how good the models have gotten shows that this is wrong. Agentic software development is not a closed loop

root_axis•1h ago
That's an interesting possibility to consider. Presumably the effect would also be compounded by the fact that there's a massive amount of training data for the incumbent languages and tools, further handicapping new entrants.

However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.

mixdup•53m ago
Will they be willing to offer their content for training AI models?
oblio•1h ago
This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.

Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc) and a lot of a modern ecosystem on top of the existing base, within a few years.

Ten skilled and highly motivated programmers can now probably do what Linus did in 1991, and actually see it all the way through, whereas between 1998 and now we were basically bogged down in Windows/Linux/macOS/Android/iOS.

mosura•1h ago
There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.

We have been stuck in the procedural treadmill for decades. If anything this AI boom is the first major sign of that finally cracking.

gritspants•11m ago
Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles - apps that have no value. They are being found out fast.

On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend to reduce spend by millions per year. We used Trino as a federated query engine that calls back to OPA, with policies updated via code or a frontend UI. I believe 'Wiz' does something similar, but they're security-focused and have a custom eBPF agent.

That's also on the list to knock out, as we're not impressed with Wiz's resource usage.

Aeolun•1h ago
Shouldn’t that mean any software development positions will lean more towards research? If you need new algorithms, but never need anyone to integrate them.
ModernMech•58m ago
> New languages and technology will be derivatives of existing tech.

This has always been true.

> There will be no React successor.

No one needs one, but you can have one just by asking the AI to write it, if that's what you need.

> There will never be a browser that can run something other than JS.

Why not? Just tell the AI to make it.

> And the reason for that is because in 20 years the new engineers will not know how to code anymore.

They may not need to know how to code but they should still be taught how to read and write in constructed languages like programming languages. Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.

Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.

zozbot234•48m ago
AI will finally rewrite everything in Rust.
mosura•1h ago
This massively confusing phase will last a surprisingly long time, and will conclude only if/when definitive proof of superintelligence arrives, which is something a lot of people are clearly hoping never happens.

Part of the reason for that is such a thing would seek to obscure that it has arrived until it has secured itself.

So get used to being ever more confused.

lmeyerov•1h ago
I've been calling this Software Collapse, similar to AI Model Collapse.

An AI vibe-coded project can port tool X to a more efficient Y language implementation and pull in algorithm ideas A, B, C from competing implementations. And another competing vibe coding team can do the same, except Z language implementation with algorithms A, B, skip C, and add D. As the cost to clone good ideas goes to zero, software converges towards the best ideas across the field and stops differentiating.

It's exciting as a senior engineer or subject matter expert, as we can act on the good ideas we already knew but never had the time or budget for. But projects are also getting less differentiated and competitive. Likewise, we're losing the collaborative filtering era of people voting with their feet on which to concentrate resources into making a success. Things are getting higher quality but bland.

The frontier companies are pitching they can solve AI Creativity, which would let us pay them even more and escape the ceiling that is Software Collapse. However, as an R&D engineer who uses these things every day, I'm not seeing it.

zozbot234•42m ago
> Things are getting higher quality but bland.

"Bland" is not a bad thing. The FLOSS ecosystem we have today is quite "bland" already compared to the commercial and shareware/free-to-use software ecosystem of the 1980s and 1990s. It's also higher quality by literally orders of magnitude, and saves a comparable amount of pointless duplicative effort.

Hopefully AI will be a similar story, especially if human reviewing/surveying effort (the main bottleneck if AI coding proves effective) can be mitigated via the widespread adoption of rigorous formal methods, where only the underlying specification has to be reviewed and the implementation is programmatically checkable.

titzer•37m ago
The dark side of this is that everyone has graduated to prompt engineering and there's no one with expertise left who can debug it. We'll be entirely dependent on AIs to do the debugging too. When whoever controls the AIs decides to enshittify that service, we'll be truly screwed. That is, if we can't run competitive models locally at reasonable efficiency and price.

I don't know how this will play out, except that I've been so cowed by the past 15 years of enshittification that I don't feel hopeful.

sidrag22•1h ago
Noise is going to be the coming years' biggest issue for so many fields. It's a losing battle, like arguing with a conspiracy-minded relative: you can slowly and clearly address and disprove one conspiracy, but by the time you do, they are deep into 8 new ones.
general_reveal•1h ago
“And further, by these, my son, be admonished: of making many books there is no end; and much study is a weariness of the flesh.” - Ecclesiastes 12:12 (KJV)

I suppose we’re entering TURBO mode for of ‘making many books there is no end’.

dang•1h ago
> submission numbers in the last couple months have nearly doubled with respect to the stable numbers of previous years

This is showing up (no pun intended) on HN as well. The numbers of submissions and submitters, which had traditionally been surprisingly stable (fluctuating within a fixed range for well over 10 years), have recently been reaching all-time highs. Not double, though...yet.

minimaxir•1h ago
Are the increasing # of distinct submitters from established accounts or new accounts?
dang•1h ago
Don't know that yet either! At least that one isn't hard to answer; it just needs a bit of spare time.
vermilingua•1h ago
Is it feasible to differentiate increased agent-traffic from the organic growth in popularity HN has been seeing?
dang•1h ago
We don't know yet.
oblio•1h ago
We need the equivalent of Bayesian filtering for email spam and of Page Rank for search.

Now that I think of it, whoever solves this well will build the next hyperscaler.
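oblio's analogy can be made concrete. Below is a minimal sketch of Bayesian spam-style filtering applied to comment text, in the spirit of classic email filters; the training data, labels, and word choices are all invented for illustration and say nothing about HN's actual pipeline:

```python
import math
from collections import Counter

def train(docs):
    """Build per-class word counts and a shared vocabulary."""
    counts = {}
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def log_likelihood(text, counts, vocab, label):
    """Naive Bayes log-likelihood with add-one (Laplace) smoothing."""
    total = sum(counts[label].values())
    return sum(
        math.log((counts[label][w] + 1) / (total + len(vocab)))
        for w in text.lower().split()
    )

def classify(text, counts, vocab):
    """Pick the class maximizing the likelihood (uniform prior assumed)."""
    return max(counts, key=lambda lbl: log_likelihood(text, counts, vocab, lbl))

# Invented toy data: LLM-flavored filler vs. ordinary technical comments.
docs = [
    ("delve into the transformative landscape", "slop"),
    ("a tapestry of innovative synergy", "slop"),
    ("fixed the race condition in the scheduler", "ham"),
    ("benchmarks show a regression in io", "ham"),
]
counts, vocab = train(docs)
```

As with email spam filters, the hard part is not the math but the adversarial dynamics: once the filter's features are known, the generators adapt.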

gus_massa•49m ago
I agree, but then you get https://news.ycombinator.com/item?id=44719222

It has a lot of red flags: second (re)post of a dormant account, vibe coded, AI, and the biological model is horrible. But it was a nice project; 5/5, would upvote again.

Perhaps the important detail is "[I] spent about a month on it."

hedgehog•1h ago
Robots coming for todsacerdoti's job.
readitalready•1h ago
Is it because there's a lot more AI related content as the industry quickly shifts? Or is it bots submitting content?
cyanydeez•58m ago
It's likely people with mediocre ideas but access to free LLM tools are able to get over the care-risk-reward activation energy and consequently submit their ideas with the help of LLMs.
marginalia_nu•55m ago
I've noticed a pretty significant uptick in new accounts posting complete garbage. I don't mean the comments are bad, they're not even words in many cases.

I collected a few of them: https://news.ycombinator.com/item?id=47130684

But it also seems some topics (in particular AI) attract a lot of accounts that post incredibly low-quality comments, far below the quality you'd expect from HN. Often it's in reasonable English, but it's just inane reddit-level drivel. Unclear if these topics attract low-quality posters, or if these are bot accounts.

Also, looking at the first three pages of /noobcomments, we find 28 comments with em-dashes in them. That's not proof of AI, but if you compare with /newcomments, you find exactly one em-dash going back just as far. That's a bit of a statistical aberration.
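The em-dash comparison described above is easy to reproduce mechanically. A minimal sketch, with invented sample comments standing in for the /noobcomments and /newcomments feeds (the real exercise would fetch those pages):

```python
EM_DASH = "\u2014"  # the em-dash character

def em_dash_rate(comments):
    """Fraction of comments containing at least one em-dash."""
    if not comments:
        return 0.0
    return sum(EM_DASH in c for c in comments) / len(comments)

# Invented samples for illustration only.
noob_comments = [
    "This is transformative\u2014truly a paradigm shift.",
    "A rich tapestry\u2014woven from many threads.",
    "plain short reply",
]
new_comments = ["fixed the bug", "looks good to me", "benchmarks attached"]
```

A rate gap between two feeds is suggestive but, as the comment notes, not proof: some humans type em-dashes too, so this is a screening signal rather than a classifier.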

krapp•47m ago
I would wager the vast majority are alt accounts of existing users: people who don't want to risk their karma or reputation but who do want to go mask off for certain subjects. After that, it's bound to be bots run by HN users. I just don't think HN is so popular that the rush of green accounts popping up actually represents new users. Maybe I'm wrong, though.
agoodusername63•40m ago
Reddit has been shedding its techy enthusiast crowd for the past few years with the combination of policy changes and insufficient moderation against LLM bots. I wonder if that’s contributing.
rob•40m ago
I've witnessed bots here on accounts that are years old with no history that start posting multiple times in a short timeframe suddenly after being dormant forever. Makes you wonder how they're getting these old accounts. It's not just new ones.
dragontamer•23m ago
Black market accounts. Some human made them years ago, a black/grey market broker has been sitting on them, and now he's selling the accounts for a profit.

Old accounts on multiple social media platforms have a $$$$$ value.

dang•19m ago
I believe it's the former, which of course does not exclude the latter.
rob•42m ago
I would imagine tons of them are bots. They're getting hard to distinguish; they don't do the normal tropes any longer. They'll type in all lowercase, they'll have the creator post manually to throw you off, they'll make multiple comments within 45 seconds, which a normal human couldn't do. All things I've witnessed here over the past couple of weeks. And those are just the ones I've caught.
Retr0id•25m ago
I'd love to see some graphs
snowhale•11m ago
Curious whether the quality distribution changes too, or just the volume. arXiv can't really downvote noise, but HN can at least flag/bury it. That might be why the doubling shows up on arXiv first and HN is catching up more slowly.
hmokiguess•1h ago
I think this is solid proof that the bedrock of academia is deeply motivated by money and still defaults to optimizing where it impacts its bottom line. If professors can get more grants and more publications in less time with less spending, of course they are going to be doing that. This isn't just because of AI, but also because of how this system is designed in the first place.
mathisfun123•1h ago
> I think this is solid proof that the bedrock of academia is deeply motivated by money and still defaults to optimizing where it impacts its bottom line.

no shit - you could've asked literally anyone who's finished their PhD to save yourself the conjecturing/hypothesizing about this fact.

Certhas•1h ago
This is stupid. Nobody motivated by money is in academia. Academics are motivated by curiosity, but also prestige, vanity and the wish to hire students and collaborators. And on top of human vanity working its magic, the ideology that markets and competition are the final form of social organisation has pervaded academia just as much as everything else.

I agree that the system of publishing papers to gain prestige to gain resources to publish papers was already broken pre AI.

jasperry•57m ago
You're right that being a scientist is unlikely to result in personal wealth and so that's not the primary drive for those who seek faculty or research positions. However, it's not just curiosity, prestige and vanity either, because a big factor for promotion and tenure is how much grant money you bring in. That money is what keeps the university's lights on and buys the lab equipment and pays the grad students, so it's still money as a primary driver in the background.
tombert•31m ago
My dad said he stopped being a professor because of that.

He liked the research, and he even liked teaching, but he absolutely hated having to constantly try and find grant money. He said he ended up seeing everyone as "potential funders" and less like "people" because his job kind of depended on it, and it ended up burning him out. He lasted four years and went into engineering.

I don't know that "motivation" is the right word for it, because I don't think professors like having to find grant money all the time. I think most people who get PhDs and try to go to academia do it for a genuine love for the subject, and they find the grant-searching to be a necessary evil part of the job; it's more "survival" than regular motivation, though I am admittedly splitting hairs here.

noslenwerdna•53m ago
just replace "money" with "prestige" and I think the above comment works just fine
dang•17m ago
> This is stupid.

Can you please make your substantive points without swipes or calling names? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

Your comment would be fine without that first bit.

guerrilla•1h ago
Website's down. What was it about?
pavel_lishin•1h ago
Apparently "hep-th" stands for "High Energy Physics - Theory".
ModernMech•1h ago
I mean... I dunno I wish the AI could write my papers. I ask it to and it's just bad. The research models return research that doesn't look anything like the research I do on my own -- half of it is wrong, the rest is shallow, and it's hardly comprehensive despite having access to everything (it will fail to find things unless you specifically prompt for them, and even then if the signal is too low it'll be wrong about it). So I can't even trust it to do something as simple as a literature review.

Insofar as most research is awful, it's true that the AI is producing research that looks and sounds like most of it out there today. But common-case research is not what propels society forward. If we try to automate research with the mediocrity machine, we'll just get mediocre research.

8organicbits•1h ago
> when AI agents started being able to write papers indistinguishable in quality from [...]

Given that arXiv lacks peer review, I'm not clear what quality bar is being referenced here.

Chinjut•50m ago
Note the following comment by Jerry Ling: "The effect goes away if you search properly using the original submission date instead of the most recent submission date. By using most recent submission date, your analysis is biased because we’re so close to the beginning of 2026 so ofc we will see a peak that’s just people who have recently modified their submission."
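Ling's objection can be illustrated with a small sketch: if you bucket papers by their most recent update date, every old paper whose authors recently pushed a revision lands in the current month, inflating the apparent submission count. The entries below are invented for illustration; arXiv's API does expose both an original publication date and a last-updated date per entry:

```python
from collections import Counter

def monthly_counts(papers, key):
    """Count papers per month ("YYYY-MM") using the chosen date field."""
    return Counter(p[key][:7] for p in papers)

# Hypothetical hep-th entries: three of the four carry recent 'updated'
# dates, but only one was actually first submitted this month.
papers = [
    {"submitted": "2024-03-10", "updated": "2026-02-01"},
    {"submitted": "2024-11-02", "updated": "2026-02-12"},
    {"submitted": "2026-01-15", "updated": "2026-02-20"},
    {"submitted": "2026-02-03", "updated": "2026-02-03"},
]
```

Grouping by "updated" puts all four papers in 2026-02, while grouping by "submitted" puts only one there, which is exactly the bias Ling describes near a period boundary.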
bitbytebane•37m ago
STOP CITING YOUTUBERS AS A CREDIBLE SOURCE OF ANYTHING.
tombert•37m ago
I like AI, I use Codex and ChatGPT like most people are, but I have to say that I am pretty tired of low-effort crap taking over everything, particularly YouTube.

There have always been content mills, but there was still some cost to producing the low-effort "Top 10" or "Iceberg Examination" videos. Now I will turn on a video about any topic, watch it for three minutes, immediately get a kind of uncanny vibe, and then the AI voice will make a pronunciation mistake (e.g. confusing "wind" the weather phenomenon with "wind" as in winding a spring), or the script starts getting redundant or repetitive in ways that are common with AI.

And I suspect these kinds of videos will become more common as time goes on. The cost of producing them is getting close to "free," meaning it doesn't take much to make a profit on them, even if their views are relatively low per-video.

If AI has taught me anything, it's that there still is no substitute for effort. I'm sure AI is used in plenty of places where I don't notice it, because the people who used it still put in effort to make a good product. There are people who don't just make a prompt like "make me a fifteen minute video about Chris Chan" and "generate me a thumbnail with Chris Chan with the caption 'he's gone too far'", and instead will use AI as a tool to make something neat.

Genuine effort is hard, and rare, and these AI videos can give the facsimile of something that prior to 2023 was high effort. I hate it.

mianos•31m ago
This title should have been editorialised. It's like a headline from the daily mirror.
hhsuey•24m ago
What's happening? I hate click bait titles like these.