frontpage.

Unsolved Problems in MLOps

https://queue.acm.org/detail.cfm?id=3762989
1•aarghh•1m ago•0 comments

Show HN: A Cyberpunk Tuner

https://un.bounded.cc
1•hirako2000•3m ago•0 comments

Education in a Post Text World

https://anandsanwal.me/education-post-text-world/
1•herbertl•3m ago•0 comments

macOS 26 Tahoe review: Power under glass

https://sixcolors.com/post/2025/09/macos-26-tahoe-review-power-under-glass/
1•herbertl•4m ago•0 comments

Tips for Faster Rust Compile Times

https://corrode.dev/blog/tips-for-faster-rust-compile-times/
1•itzlambda•5m ago•0 comments

Bored Games

https://nik.art/bored-games/
1•herbertl•5m ago•0 comments

The Company Man

https://www.lesswrong.com/posts/JH6tJhYpnoCfFqAct/the-company-man
1•chlorokin•5m ago•0 comments

Delphi-2M LLM uses medical records, lifestyle to provide risks for 1k+ diseases

https://www.nature.com/articles/d41586-025-02993-x
1•rntn•6m ago•0 comments

Golang, JavaScript and C++ dancing together

https://github.com/sait/pdfmakego
1•igtztorrero•11m ago•1 comments

Aleph raises a $29M Series B to accelerate AI adoption in FP&A

https://www.getaleph.com/blog/series-b
1•mattkruk•12m ago•0 comments

Works in Progress is now in print

https://www.worksinprogress.news/p/works-in-progress-is-now-in-print
2•ortegaygasset•13m ago•0 comments

Microplastics May Trigger Alzheimer's-Like Brain Damage

https://scitechdaily.com/microplastics-may-trigger-alzheimers-like-brain-damage/
1•01-_-•14m ago•0 comments

Famous cognitive psychology experiments that failed to replicate

https://buttondown.com/aethermug/archive/aether-mug-famous-cognitive-psychology/
2•PaulHoule•15m ago•1 comments

The Case for an Iceberg-Native Database

https://www.warpstream.com/blog/the-case-for-an-iceberg-native-database-why-spark-jobs-and-zero-c...
1•ordinarily•17m ago•0 comments

Such a Classic

https://blog.hermesloom.org/p/debunking-the-myth-of-agi
1•sigalor•19m ago•0 comments

Smallest, Slimmest and Lightest Smartphones

https://phonesized.com/charts/
1•mgh2•20m ago•0 comments

Agent Process Intelligence – Map work and ground agents in reality

https://www.clearwork.io/clearwork-agent-process-intelligence
2•abrooks43•20m ago•1 comments

Show HN: AI Virtual Try-On and Garment Design Tool (No Login, Free)

https://tryon.aivory.space
1•aivoryZen•21m ago•0 comments

GuardDog is a CLI tool to identify malicious PyPI and NPM packages

https://github.com/DataDog/guarddog
1•jmsmtn•24m ago•0 comments

Optimizing ClickHouse for Intel's 280 core processors

https://clickhouse.com/blog/optimizing-clickhouse-intel-high-core-count-cpu
3•ashvardanian•25m ago•0 comments

The "Debate Me Bro" Grift: How Trolls Weaponized the Marketplace of Ideas

https://www.techdirt.com/2025/09/17/the-debate-me-bro-grift-how-trolls-weaponized-the-marketplace...
37•toomanyrichies•25m ago•7 comments

TraceFind – Email Osint Information Gathering Tool – 300 Modules

https://tracefind.info/
1•codinglive•26m ago•0 comments

Chimps likely ingest equivalent of several alcoholic drinks every day

https://news.berkeley.edu/2025/09/17/in-the-wild-chimps-likely-ingest-the-equivalent-of-several-a...
2•geox•27m ago•0 comments

The Hacker Who Helped Score a $243M Verdict Against Tesla

https://www.pcmag.com/articles/hacker-who-helped-score-243-million-verdict-against-tesla
2•fortran77•29m ago•0 comments

Ask HN: How do you choose what phone to buy

1•snjy7•30m ago•2 comments

Fed delivers normal-sized rate cut, sees steady pace of further reductions

https://www.reuters.com/business/fed-delivers-normal-sized-rate-cut-sees-steady-pace-further-redu...
4•SilverElfin•33m ago•1 comments

AI's ability to displace jobs is advancing quickly, Anthropic CEO says

https://www.axios.com/2025/09/17/anthropic-amodei-ai
2•jmsflknr•34m ago•0 comments

LLMs can't solve production issues

https://clickhouse.com/blog/llm-observability-challenge
3•mikeshi42•35m ago•1 comments

Faster Rust Builds on Mac

https://nnethercote.github.io/2025/09/04/faster-rust-builds-on-mac.html
1•itzlambda•35m ago•0 comments

Marimo: Is building data apps easier now?

https://www.lovelydata.cz/en/blog/marimo-is-building-data-apps-easier-now/
1•lovelydata•36m ago•0 comments

Ask HN: Is anyone else sick of AI splattered code

63•throwaway-ai-qs•1h ago
Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests, I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least 8 companies embrace AI-generated slop for coding, testing, and code reviews. Honestly, the best suggestions I've seen are found by linters in CI, and spell checkers. Is this what we've come to?
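To make the linter point concrete: a lot of what CI catches is mechanical but invisible to a skim, like Python's mutable-default-argument trap, which standard linters flag immediately. A minimal sketch (function names are hypothetical, not from the thread):

```python
def add_tag_buggy(tag, tags=[]):
    # Bug: the default list is created once at definition time and
    # shared across every call that omits the argument.
    tags.append(tag)
    return tags

def add_tag_fixed(tag, tags=None):
    # Fix: use None as the sentinel and build a fresh list per call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

# The buggy version leaks state between unrelated calls:
assert add_tag_buggy("a") == ["a"]
assert add_tag_buggy("b") == ["a", "b"]   # surprise: "a" is still there

# The fixed version behaves as expected:
assert add_tag_fixed("a") == ["a"]
assert add_tag_fixed("b") == ["b"]
```

This is exactly the category of defect a reviewer glosses over but a linter never misses.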

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

Comments

thesuperbigfrog•1h ago
If you eat lots of highly processed food, don't be surprised if it makes you less healthy.
MongooseStudios•1h ago
I'm sick of AI everything. Every day I hope today is the day the grift machine finally implodes.

In the short term it's going to make things suck even more, but I'm ready to rip that bandaid off.

P.S. To anyone that is about to reply to this, or downvote it, to tell me that AI is the future, you should be aware that I also hope someone places a rotting trout in your sock drawer each day.

StellaMary•1h ago
No hate, mate. It's true that AI can code just about anything you need, but the brutal fact is that you need to be much smarter to validate its code; what I mostly see is a skill and experience issue. As a Rust and Node.js dev, I made an ultra-fast HTTP framework using AI with just 200 lines of code, and it beat Fastify, Hono, and Express. That was only possible because of my experience with FFI and Rust, and my Node.js architecture lessons.

https://www.npmjs.com/package/brahma-firelight
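The general technique claimed here is FFI: binding fast native code into a scripting runtime. As a hedged illustration of the idea (not of brahma-firelight itself, which pairs Rust with Node.js), Python's ctypes can call straight into the C library with no glue code on POSIX systems:

```python
import ctypes

# On POSIX, CDLL(None) opens the running process, whose symbols
# include libc; Windows would need a different lookup.
libc = ctypes.CDLL(None)

# Declare the foreign signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"hello") == 5
```

The experience argument holds: the hard part of FFI is not the call itself but knowing the ownership and lifetime rules on both sides of the boundary.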

MongooseStudios•1h ago
For you, a trout in the sock drawer AND the closet.
pnw_throwaway•49m ago
You spam that enough yet?
Workaccount2•1h ago
I get this take because of the existential threat, but it ignores that at least right now, LLMs are enabling people to get more from their computer than ever before.

Maybe LLMs can't build you an enterprise back-end for thousands of users. But they sure as shit can make you a bespoke applet that easily tracks your garage sale items. LLMs really shine in greenfield <5k LOC programs.

I think it's largely a mistake for devs to think that LLMs are made for them, rather than for enabling regular people to get far more mileage out of their computers.

duxup•1h ago
I'm really not seeing a lot of code that I can say is bad AI code.

I and my coworkers use AI, but the incoming code seems pretty ok. But my view is just my current small employer.

whycome•28m ago
The breadth of the industry is so vast that people have wildly different takes on this. For a lot of simple coding tasks (e.g. custom plugins or apps), an LLM is not only efficient but extremely competent. Some traditional coders are having a harder time working with them because a major challenge comes from defining the problem and constraints well, which is usually something kept in one's head. So new skill sets are emerging and being refined. The ones who thrive here will not be coders but generalists with excellent management and communication skills.
nharada•1h ago
My biggest annoyance is that people aren't transparent about when they use AI, and thus you are forced to review everything through the lens that it may be human created and thus deserving of your attention and benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded on me, the reviewer.

kstrauser•1h ago
I disagree. Code is code: it speaks for itself. If it's high quality, I don't care whether it came from a human or an AI trained on good code examples. If it sucks, it's not somehow less awful just because someone worked really hard on it. What would change for me is how tactful I am in wording my response to it, in which case it's a little nicer replying to AIs because I don't care about being mean to them. The summary of my review would be the same either way: here are the bad parts I want you to re-work before I consider this.
jacknews•1h ago
Half of the point of code review is to provide expert or alternative feedback to junior and other developers, to share solutions and create a consistent style, standard and approach.

So no, code is not just code.

chomp•1h ago
> What would change for me is how tactful I am in wording my response to it

So code is not code? You’re admitting that provenance matters in how you handle it.

alansammarone•1h ago
I've had a similar discussion with a coworker whom I respect and know to be very experienced, and interestingly we disagreed on this very point. I'm with you: I think AI is just a tool, and people shouldn't be off the hook because they used AI code. If they consistently deliver bad code, bad PR descriptions, or fail to explain and articulate their reasoning, I don't see any particular reason we should treat it differently now that AI exists. It goes both ways, of course: the reviewer also shouldn't pay less attention when the code did not involve AI help in any form. I think these are completely orthogonal, and I honestly don't see why people have this view.

The person who created the PR is responsible for it. Period. Nothing changes.

skydhash•59m ago
It does, because the number of PRs goes up. So instead of reviewing, it's more like back-and-forth debugging where you are doing the checks that the author was supposed to do.
alansammarone•55m ago
So the author is not a great programmer/professional. I agree with you that they should have done their homework, tested it, have a mental model for why and how, etc. If they don't, it doesn't seem to be particularly relevant to me if that's because they had a concussion or because they use AI.
skydhash•42m ago
It's easy to skip quality in code, starting with coding only the happy path and bad design that hides bugs. Handling errors properly can take a lot of time, and designing to avoid errors takes longer.

So when you have a tool that can produce things that fit the happy path easily, don't be surprised that the number of PRs goes up. Before, by the time you could write the happy path that easily, experience had taught you all the error cases that you would otherwise have skipped.
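The gap between happy-path code and defensive code is easy to show. A hedged sketch (`parse_price` is a hypothetical helper invented for illustration):

```python
def parse_price_happy(text):
    # Happy path only: crashes on "", "abc", or "$1,000".
    return float(text)

def parse_price(text):
    # The defensive version is several times longer, because it has to
    # define what "invalid" means and decide how to report it.
    if not isinstance(text, str) or not text.strip():
        raise ValueError("price must be a non-empty string")
    cleaned = text.strip().lstrip("$").replace(",", "")
    try:
        value = float(cleaned)
    except ValueError:
        raise ValueError(f"not a price: {text!r}") from None
    if value < 0:
        raise ValueError(f"negative price: {text!r}")
    return value

assert parse_price("$1,234.50") == 1234.5
```

A generator that emits only the first function looks productive in a diff; the missing two-thirds is the part reviewers end up writing.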

roughly•1h ago
The problem with AI generated code is there’s no unifying theory behind the change. When a human writes code and one part of the system looks weird or different, there’s usually a reason - by digging in on the underlying system, I can usually figure out what they were going for or why they did something in a particular way. I can only provide useful feedback or alternatives if I can figure out why something was done, though.

LLM-generated code has no unifying theory behind it - every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no consistent thread tying it together and no reason why. It's like trying to figure out what the fuck is happening in a legacy codebase, except it's brand new. I've wasted hours trying to understand someone's MR, only to realize it's vibe code and there's no reason for any of it.

skydhash•1h ago
Adding to the sibling comment by @jacknews: code is much more than an algorithm; it's a description of the algorithm that is unambiguous and human-readable. Code review is a communication tool. The basic expectation is that you're a professional and I'm just adding another set of eyes, or you're a junior and I'm reviewing for the sake of training.

So when there’s some confusion, I’m going back to the author. Because you should know why each line was written and how it contributes to the solution.

But a complete review takes time. So in a lot of places, we only do a quick scan checking for unusual stuff instead of truly reviewing the algorithms. That’s because we trust our colleagues to test and verify their own work. Which AI users usually skip.

jcranmer•1h ago
The problem with reviewing AI-written code is that AI makes mistakes in very different ways from the way humans make mistakes, so you essentially have to retrain yourself to watch for the kinds of mistakes that AI makes.
risyachka•58m ago
This is all great except it doesn't give any reason not to label AI code.
elviejo•57m ago
Code is not only code.

It's like saying physics is just math. If we read:

F = m*a

there is a ton of knowledge encoded in that formula.

We cannot evaluate the formula alone. We need the knowledge behind it to see if it matches reality.

With LLMs, we know for a fact that if the code matches reality, or expectations, it's a happy accident.

bluGill•55m ago
Code doesn't always speak for itself. I've had to do some weird things that make no sense on their own. I leave comments, but they are not always easy to understand. Most of this is when I'm sharing data across threads: there is a good reason for each lock/atomic and each missing lock. (I avoid writing such code, but sometimes there is no choice.) If AI is writing such code, I don't trust it to figure out those details, while I have some (only a minority, but some) coworkers I trust to figure this out.
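The kind of code described here, where each lock needs a written justification, looks roughly like this sketch (a hypothetical shared counter using Python's threading module):

```python
import threading

class SharedCounter:
    def __init__(self):
        self._value = 0
        # Lock justification: += on the shared int is a read-modify-write,
        # so two threads can interleave and silently lose updates.
        self._lock = threading.Lock()

    def increment(self, n=1):
        with self._lock:
            self._value += n

    @property
    def value(self):
        # Taking the lock on read documents that _value is shared state.
        # It is exactly this recorded intent, not the syntax, that is
        # hard to trust when the code was generated.
        with self._lock:
            return self._value

counter = SharedCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 8000
```

Delete either comment and the code still passes every test; the reviewer just no longer knows whether the lock is deliberate.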
Workaccount2•1h ago
>My biggest annoyance is that people aren't transparent about when they use AI

You get shamed and dismissed for mentioning that you used AI, so naturally nobody mentions they used AI. They mention AI the first time, see the blow back, and never mention it again. It just shows how myopic group-think can be.

sys13•1h ago
> the best suggestions I've seen are found by linters in CI, and spell checkers

I don't think this is a rational take on the utility of AI. You really are not leveraging it well.

yomismoaqui•1h ago
AI coding would be better with a little professionalism thrown in. I mean, if you commit that code, you are responsible for it. Period.

And I say this as a grumpy senior who has found a lot of value in tools like Copilot and especially Claude Code.

vegancap•1h ago
Yeah, I get the feeling. I'm torn to be honest, because I quite enjoy using it, but then I sift through everything line by line, correct things, change the formatting. Alter parts it's gotten wrong. So for me, it's saving me a little bit of time manually writing it all out. My colleagues are either like me, or aren't sold on it. So I think there's a level of trust and recognition that even if we are using it, we're using it cautiously, and wouldn't just YOLO some AI generated code straight into main.

But we're a really small but mature engineering org. I can't imagine the bigger companies with hundreds of less experienced engineers just using it without care and caution; it must cause absolute chaos (or will soon).

barrell•1h ago
I'm not convinced it's what the future holds for three main reasons:

1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.

2. The maintenance burden of code that has no real author is felt months/years after the code is written. Organizations then react a few months/years after that.

3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by LLMs.

I very easily could be wrong, but I think there is hope and if anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.

I'm looking for another technical co-founder (in addition to me) to come work on fun hard problems in a hand written Elixir codebase (frontend is clojurescript because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app

koakuma-chan•1h ago
I agree on all points, but I also have PTSD from the pre-LLM era, when people kept telling me that my code was garbage because it wasn't SOLID or whatever. I prefer the way it is now.
majorbugger•56m ago
And what does the LLM have to do with your PTSD?
koakuma-chan•54m ago
It does, because those assholes will no longer tell me that I should have written an abstract factory or some shit. AI-generated code is so fucking clean and SOLID.
skydhash•55m ago
SOLID is a nice set of principles. And like all principles, there are valid reasons to break them. To use them or not is a decision best taken after you've become a master, when you know the tradeoffs and costs.

Learn the rules first, then learn when to break them.
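The disagreement is easier to see in code. A hedged sketch of the same behavior written directly versus through the kind of factory indirection being mocked upthread (all names hypothetical):

```python
from abc import ABC, abstractmethod

# Direct version: does the job in two lines.
def make_greeting(name):
    return f"Hello, {name}!"

# "SOLID" version: an abstract factory for a problem with exactly one
# implementation. The indirection only pays off once multiple
# interchangeable greeters actually exist.
class Greeter(ABC):
    @abstractmethod
    def greet(self, name): ...

class EnglishGreeter(Greeter):
    def greet(self, name):
        return f"Hello, {name}!"

class GreeterFactory:
    def create(self):
        return EnglishGreeter()

assert make_greeting("Ada") == "Hello, Ada!"
assert GreeterFactory().create().greet("Ada") == "Hello, Ada!"
```

Both pass the same tests; the question of which one is "quality" depends entirely on whether a second greeter is ever coming, which is the judgment call the principles alone can't make.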

koakuma-chan•28m ago
This is idealistic. Do you actually sit down and evaluate whether the code is SOLID, or is it more like you're just vibe-checking it? It doesn't actually matter whether you call that SOLID or DRY or whatever letters of the alphabet you prefer. Meanwhile your project is just a PostgreSQL proxy.
skydhash•18m ago
These are principles, not mathematical equations. It's like drawing a human face. The general rule is that the eyes are spaced one eye-length apart when viewed from the front, or that the intervals between the chin, the base of the nose, the eyebrows, and the hairline are equal. It does not fit every face, and artists do break these rules. But a beginner breaks them for the wrong reasons.

So there are a lot of heuristics in code quality. But sometimes, it's just plain bad.

james2doyle•1h ago
Totally agree. I use it for chores (generate an initial README, document the changes from this diff, summarize this release, scaffold out a new $LANG/$FRAMEWORK project) that are well understood. I have also been using it to work in languages that I can/have written in the past but am out of practice with (Python), but I'm still babysitting it.

I recently used it to write a Sublime Text plugin for me, and I forked a Chrome extension and added a bunch of features to it. Both are open source and pretty trivial projects.

However, I rarely use it to write code for me in client projects. I need to know and understand everything going out that we are getting paid for.

risyachka•54m ago
This.

If someone says "most of my code is AI", there are only 3 reasons for this:

1. They do something very trivial on a daily basis (and that's not a bad thing, you just need to be clear about it).

2. The skill is not there, so they have to use AI; otherwise it would be faster to DIY it than to explain the complex case and how to solve it to the AI.

3. They prefer explaining to an LLM rather than writing the code themselves. Again, no issue with this. But we must be clear here: it's not faster. It's just someone else writing the code for you while you explain in detail what to do.

codr7•1h ago
Not my future.
Herring•1h ago
AI will keep improving

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

https://epoch.ai/blog/what-will-ai-look-like-in-2030

There's a good chance that eventually reading code will become like inspecting assembly.

runjake•1h ago
> AI will keep improving

Agree. But most code already generated won't be improved until many years from now.

> There's a good chance that eventually reading code will become like inspecting assembly.

Also agree, but I believe it will be very inefficient and complex code, unlike most written assembly.

I'm not sure tight code matters to anyone but maybe 0.0001% of us programmers, anymore.

epicureanideal•1h ago
> There's a good chance that eventually reading code will become like inspecting assembly.

We don’t read assembly because we read the higher level code, which deterministically is compiled to lower level code.

The equivalent situation for LLMs would be if we were reviewing the prompts only, and if we had 100% confidence that the prompt resulted in code that does exactly what the prompt asks.

Otherwise we need to inspect the generated code. So the situation isn’t the same, at least not with current LLMs and current LLM workflows.

YeGoblynQueenne•46m ago
>> We don’t read assembly because we read the higher level code, which deterministically is compiled to lower level code.

I think the reason "we" don't read, or write, assembly is that it takes a lot of effort and a detailed understanding of computer architecture that are simply not found in the majority of programmers, e.g. those used to working with javascript frameworks on web apps etc.

There are of course many "we" who work with assembly every day: people working with embedded systems, for instance, or games programmers.

incomingpain•1h ago
>I think I'm finally ready to get off the ride.

I'm sorry you feel that way. Yes, this is probably the future.

AI is a new tool, or really a huge new category of different AI tools, that will take time to gain competency with.

AI doesn't eliminate the need for developers; it's just a whole new load of baggage, and we will NEVER get to the point where that new pile of problems becomes 0.

A tool that Gemini CLI really loves is Ruff; I run it often :)

sexyman48•1h ago
> I'm finally ready to get off the ride

c ya, wouldn't wanna b ya.

codingdave•1h ago
I think we're going to look back on this time as "Remember when basically all new software dev spun its wheels for years while everyone tried to figure out where AI fit in?"

I'm not sick of AI. I'm just sick of people thinking that AI should be everything in our industry. I don't know how many times I can say "It is just a tool." Because it is. We're 3 years deep into LLM-based products, and people are just now starting to even ask... "Hey, where are the strengths and weaknesses of this tool, and best practices for when to use it or not?"

andrewstuart•1h ago
No I love it.

When I see AI code I feel excited that the developer is building stuff beyond their previous limits.

madamelic•1h ago
As others have said, LLM generation of code is no excuse for not self-reviewing, testing, and understanding your own code.

It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.

The only caveat is verbosity of code. It drives me up the wall how these models try to one-shot production code and put a lot of cruft in. I am starting to have the expectation of having to go in and pare down overly ambitious code to reduce complexity.

I adopted LLM coding fairly early on (GPT3) and the difference between then and now is immense. It's a fast-moving technology still so I don't have the expectation that the model or tool I use today will be the one I use in 3 months.

I have switched modalities and models pretty regularly to try to keep cutting edge and getting the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind. It's going to be the equivalent, in my opinion, of keeping hard cover reference manuals on your desk versus using a search engine.

bigstrat2003•58m ago
I think people will eventually wake up and realize LLMs aren't actually good for generating code, but it might take a while. The hype train is rolling at full steam and a lot of people won't get off until they get personally burned.
gerash•55m ago
One downside IMHO is reimplementing the same building blocks rather than refactoring and reusing because it’s cheap to reimplement.
add-sub-mul-div•53m ago
I am so glad I spent 25 years in this field, made my bag, and got out right before it became the norm to stop doing the fun part of the job yourself.
apple4ever•52m ago
I'm sick of AI in general.
twalichiewicz•44m ago
I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.

When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”

But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.

whycome•37m ago
Did you just basically coin “artisanal code”?
greenavocado•44m ago
AI-generated code from Claude Sonnet, Kimi K2 0905, or GLM-4.5 is not good enough to simultaneously maintain structure and implement features in complex code without doing insane things like grossly violating every SOLID principle. If you impose too much structure on them, they fall apart, because too often they don't truly understand the long-range ramifications of their code. These assistants are best suited for generating highly testable snippets of code; pushing them to work in a large codebase pushes their capabilities too far, too often.
juancn•42m ago
I only use small local models like those of IntelliJ (under 100M each), which just save you the tedium of typing some common boilerplate.

But I don't prompt them, they typically just suggest a completion, usually better than what we had before from pure static analysis.

Anything more and it detracts. I learn nothing, and the code is believable crap, which requires mind-bogglingly boring and intense code reviews.

It's sometimes fine for prototyping throw-away code (especially if you don't intend to invest in learning the tech deeply), but I don't like what I miss by not doing the thinking myself.

breppp•39m ago
Due to Brandolini's Law, there's an asymmetry between the time it takes to generate crap code and the time it takes to review crap code.

That's what makes it seem disrespectful, as if someone is wasting your time when they could have done better.

throwacct•1m ago
I'm using "AI" almost exclusively to scaffold projects. I once spent 2 days trying to find the reason some code wasn't working the way it was supposed to. Where I work, we use it in moderation, knowing that if you generate code, you must double-check everything and confirm that what you generated doesn't smell. You'll be held accountable if something breaks because you were eager to push unreviewed code.