frontpage.

WASM 3.0 Completed

https://webassembly.org/news/2025-09-17-wasm-3.0/
255•todsacerdoti•1h ago•69 comments

Anthropic irks White House with limits on models’ use

https://www.semafor.com/article/09/17/2025/anthropic-irks-white-house-with-limits-on-models-uswhi...
110•mindingnever•1h ago•43 comments

Apple Photos app corrupts images

https://tenderlovemaking.com/2025/09/17/apple-photos-app-corrupts-images/
811•pattyj•8h ago•305 comments

Tinycolor supply chain attack post-mortem

https://sigh.dev/posts/ctrl-tinycolor-post-mortem/
80•STRiDEX•2h ago•35 comments

Depression Reduces Capacity to Learn to Actively Avoid Aversive Events

https://www.eneuro.org/content/12/9/ENEURO.0034-25.2025
73•PaulHoule•2h ago•19 comments

DeepSeek writes less secure code for groups China disfavors

https://www.washingtonpost.com/technology/2025/09/16/deepseek-ai-security/
110•otterley•2h ago•54 comments

DeepMind and OpenAI Win Gold at ICPC, OpenAI AKs

https://codeforces.com/blog/entry/146536
49•notemap•1h ago•27 comments

Drought in Iraq Reveals Ancient Tombs Created 2,300 Years Ago

https://www.smithsonianmag.com/smart-news/severe-droughts-in-iraq-reveals-dozens-of-ancient-tombs...
35•pseudolus•2h ago•2 comments

Optimizing ClickHouse for Intel's 280 core processors

https://clickhouse.com/blog/optimizing-clickhouse-intel-high-core-count-cpu
18•ashvardanian•49m ago•2 comments

Event Horizon Labs (YC W24) Is Hiring

https://www.ycombinator.com/companies/event-horizon-labs/jobs/U6oyyKZ-founding-engineer-at-event-...
1•ocolegro•2h ago

U.S. investors, Trump close in on TikTok deal with China

https://www.wsj.com/tech/details-emerge-on-u-s-china-tiktok-deal-594e009f
268•Mgtyalx•23h ago•248 comments

Ton Roosendaal to step down as Blender chairman and CEO

https://www.cgchannel.com/2025/09/ton-roosendaal-to-step-down-as-blender-chairman-and-ceo/
61•cma•2h ago•4 comments

Tau² benchmark: How a prompt rewrite boosted GPT-5-mini by 22%

https://quesma.com/blog/tau2-benchmark-improving-results-smaller-models/
142•blndrt•6h ago•40 comments

Alibaba's new AI chip: Key specifications comparable to H20

https://news.futunn.com/en/post/62202518/alibaba-s-new-ai-chip-unveiled-key-specifications-compar...
217•dworks•9h ago•230 comments

Ask HN: What's a good 3D Printer for sub $1000?

64•lucideng•2d ago•73 comments

Noise Cancelling a Fan

https://chillphysicsenjoyer.substack.com/p/noise-cancelling-a-fan
8•crescit_eundo•1d ago•1 comment

Launch HN: RunRL (YC X25) – Reinforcement learning as a service

https://runrl.com
31•ag8•3h ago•9 comments

How to motivate yourself to do a thing you don't want to do

https://ashleyjanssen.com/how-to-motivate-yourself-to-do-a-thing-you-dont-want-to-do/
163•mooreds•4h ago•146 comments

UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)

https://github.com/stateless-me/uuidv47
96•aabbdev•5h ago•53 comments

Determination of the fifth Busy Beaver value

https://arxiv.org/abs/2509.12337
218•marvinborner•9h ago•94 comments

YouTube addresses lower view counts which seem to be caused by ad blockers

https://9to5google.com/2025/09/16/youtube-lower-view-counts-ad-blockers/
161•iamflimflam1•5h ago•354 comments

Microsoft Python Driver for SQL Server

https://github.com/microsoft/mssql-python
54•kermatt•4h ago•21 comments

When Computer Magazines Were Everywhere

https://www.goto10retro.com/p/when-computer-magazines-were-everywhere
7•ingve•1h ago•0 comments

Procedural Island Generation (III)

https://brashandplucky.com/2025/09/17/procedural-island-generation-iii.html
88•ibobev•7h ago•17 comments

Famous cognitive psychology experiments that failed to replicate

https://buttondown.com/aethermug/archive/aether-mug-famous-cognitive-psychology/
8•PaulHoule•40m ago•1 comment

Just for fun: animating a mosaic of 90s GIFs

https://alexplescan.com/posts/2025/09/15/gifs/
12•Bogdanp•1d ago•1 comment

PureVPN IPv6 Leak

https://anagogistis.com/posts/purevpn-ipv6-leak/
147•todsacerdoti•9h ago•67 comments

Bringing fully autonomous rides to Nashville, in partnership with Lyft

https://waymo.com/blog/2025/09/waymo-is-coming-to-nashville-in-partnership-with-lyft
115•ra7•6h ago•156 comments

Stategraph: Terraform state as a distributed systems problem

https://stategraph.dev/blog/why-stategraph/
121•lawnchair•10h ago•55 comments

Slow social media

https://herman.bearblog.dev/slow-social-media/
131•rishikeshs•17h ago•112 comments

Ask HN: Is anyone else sick of AI-splattered code?

64•throwaway-ai-qs•2h ago
Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least 8 companies embrace AI-generated slop for coding, testing, and code reviews. Honestly, the best suggestions I've seen came from linters in CI and spell checkers. Is this what we've come to?
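As an illustration of the kind of mechanical catch a CI linter makes, here is a minimal Python sketch (my example, not from the original post):

    # Classic linter catch: a mutable default argument. The default
    # list is created once and shared across calls, so state silently
    # leaks between invocations.
    def add_tag(item, tags=[]):
        tags.append(item)
        return tags

    print(add_tag("a"))  # ['a']
    print(add_tag("b"))  # ['a', 'b'], not ['b']: shared default state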

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

Comments

thesuperbigfrog•2h ago
If you eat lots of highly processed food, don't be surprised if it makes you less healthy.
MongooseStudios•1h ago
I'm sick of AI everything. Every day I hope today is the day the grift machine finally implodes.

In the short term it's going to make things suck even more, but I'm ready to rip that bandaid off.

P.S. To anyone that is about to reply to this, or downvote it, to tell me that AI is the future, you should be aware that I also hope someone places a rotting trout in your sock drawer each day.

StellaMary•1h ago
No hate, mate. It's true that AI can code almost anything you need, but the brutal fact is that you need to be much smarter to validate its code; what I mostly see is a skill and experience issue. As a Rust/Node.js dev, I built an ultra-fast HTTP framework with AI in just 200 lines of code, and it beat Fastify, Hono, and Express. That was only possible because of my experience with FFI and Rust, and lessons learned as a Node.js architect.

https://www.npmjs.com/package/brahma-firelight

MongooseStudios•1h ago
For you, a trout in the sock drawer AND the closet.
pnw_throwaway•1h ago
You spam that enough yet?
Workaccount2•1h ago
I get this take because of the existential threat, but it ignores that at least right now, LLMs are enabling people to get more from their computer than ever before.

Maybe LLMs can't build you an enterprise back-end for thousands of users. But they sure as shit can make you a bespoke applet that easily tracks your garage sale items. LLMs really shine in greenfield <5k LOC programs.

I think it's largely a mistake for devs to think that LLMs are made for them, rather than for enabling regular people to get far more mileage out of their computers.
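To ground the "bespoke applet" point, a minimal Python sketch of the kind of small greenfield program being described (the file name and fields are my own assumptions, not from the comment):

    # A deliberately tiny garage-sale item tracker: the kind of
    # bespoke, well-under-5k-LOC program the comment describes.
    import json
    from pathlib import Path

    DB = Path("items.json")  # hypothetical storage file

    def load():
        return json.loads(DB.read_text()) if DB.exists() else []

    def save(items):
        DB.write_text(json.dumps(items, indent=2))

    def add(name, price):
        items = load()
        items.append({"name": name, "price": price, "sold": False})
        save(items)

    def mark_sold(name):
        items = load()
        for item in items:
            if item["name"] == name:
                item["sold"] = True
        save(items)

    add("lamp", 5.00)
    mark_sold("lamp")
    print(load())  # [{'name': 'lamp', 'price': 5.0, 'sold': True}]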

duxup•1h ago
I'm really not seeing a lot of code that I can say is bad AI code.

My coworkers and I use AI, but the incoming code seems pretty OK. Then again, my view is limited to my current small employer.

whycome•52m ago
The breadth of industry is so vast that people have wildly different takes on this. For a lot of simple coding tasks (e.g. custom plugins or apps) an LLM is not only efficient but extremely competent. Some traditional coders have a harder time working with them when a major challenge is defining the problem and constraints well; that's usually something kept in one's head. So new skill sets are emerging and being refined. The ones who thrive here will not be coders but generalists with excellent management and communication skills.
duxup•25m ago
Yeah, most of my team is using an LLM for "make this function better", or for learning, or for somewhat smaller bites of code that an LLM works well with. So we don't see the "hey, rewrite this whole 20-year-old complicated application, omg it didn't work" kind of situations.
nharada•1h ago
My biggest annoyance is that people aren't transparent about when they use AI, so you are forced to review everything through the lens that it may be human-created and thus deserving of your attention and the benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded on me, the reviewer.

kstrauser•1h ago
I disagree. Code is code: it speaks for itself. If it's high quality, I don't care whether it came from a human or an AI trained on good code examples. If it sucks, it's not somehow less awful just because someone worked really hard on it. What would change for me is how tactful I am in wording my response to it, in which case it's a little nicer replying to AIs because I don't care about being mean to them. The summary of my review would be the same either way: here are the bad parts I want you to re-work before I consider this.
jacknews•1h ago
Half of the point of code review is to provide expert or alternative feedback to junior and other developers, to share solutions and create a consistent style, standard and approach.

So no, code is not just code.

chomp•1h ago
> What would change for me is how tactful I am in wording my response to it

So code is not code? You’re admitting that provenance matters in how you handle it.

alansammarone•1h ago
I've had a similar discussion with a coworker whom I respect and know to be very experienced, and interestingly we disagreed on this very point. I'm with you: I think AI is just a tool, and people shouldn't be off the hook because they used AI for the code. If they consistently deliver bad code, bad PR descriptions, or fail to explain and articulate their reasoning, I don't see any particular reason we should treat it differently now that AI exists. It goes both ways, of course: the reviewer also shouldn't pay less attention when the code didn't involve AI help in any form. These are completely orthogonal, and I honestly don't see why people have this view.

The person who created the PR is responsible for it. Period. Nothing changes.

skydhash•1h ago
It does, because the number of PRs goes up. So instead of reviewing, it's more like back-and-forth debugging, where you are doing the checks the author was supposed to do.
alansammarone•1h ago
So the author is not a great programmer/professional. I agree with you that they should have done their homework, tested it, have a mental model for why and how, etc. If they don't, it doesn't seem to be particularly relevant to me if that's because they had a concussion or because they use AI.
skydhash•1h ago
It's easy to skip quality in code, starting with coding only the happy path and bad design that hides bugs. Handling errors properly can take a lot of time, and designing to avoid errors takes longer.

So when you have a tool that can produce things that fit the happy path easily, don't be surprised that the number of PRs goes up. Before, by the time you could write the happy path that easily, experience had taught you all the error cases you would otherwise have skipped.
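A small Python sketch (my illustration, not skydhash's) of the gap between the happy path and the version that actually handles errors:

    import json

    # The happy-path version a tool produces easily:
    def read_config(path):
        with open(path) as f:
            return json.load(f)

    # The slower version, where each branch encodes a failure mode
    # someone had to think about first:
    def read_config_safe(path):
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}  # missing file: fall back to empty defaults
        except json.JSONDecodeError as exc:
            raise ValueError(f"config at {path} is not valid JSON: {exc}")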

roughly•1h ago
The problem with AI generated code is there’s no unifying theory behind the change. When a human writes code and one part of the system looks weird or different, there’s usually a reason - by digging in on the underlying system, I can usually figure out what they were going for or why they did something in a particular way. I can only provide useful feedback or alternatives if I can figure out why something was done, though.

LLM-generated code has no unifying theory behind it - every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no consistent thread tying it together and no reason why. It's like trying to figure out what the fuck is happening in a legacy codebase, except it's brand new. I've wasted hours trying to understand someone's MR, only to realize it's vibe code and there's no reason for any of it.

skydhash•1h ago
Adding to the sibling comment by @jacknews: code is much more than an algorithm, it's a description of the algorithm that is unambiguous and human-readable. Code review is a communication tool. The basic expectation is that you're a professional and I'm just adding another set of eyes, or you're a junior and I'm reviewing for the sake of training.

So when there’s some confusion, I’m going back to the author. Because you should know why each line was written and how it contributes to the solution.

But a complete review takes time. So in a lot of places, we only do a quick scan checking for unusual stuff instead of truly reviewing the algorithms. That’s because we trust our colleagues to test and verify their own work. Which AI users usually skip.

jcranmer•1h ago
The problem with reviewing AI-written code is that AI makes mistakes in very different ways from the way humans make mistakes, so you essentially have to retrain yourself to watch for the kinds of mistakes that AI makes.
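A hypothetical Python illustration of the shape such mistakes can take: code that reads idiomatically and skims as correct, yet is quietly wrong:

    def chunk(seq, size):
        """Split seq into consecutive chunks of length size."""
        # Looks clean and handles the obvious cases, but the stop value
        # should be len(seq): as written, it silently drops a final
        # partial chunk, exactly the kind of error a skim won't catch.
        return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

    print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]]; the 5 is gone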
risyachka•1h ago
This is all great except it doesn't give any reason not to label AI code.
elviejo•1h ago
Code is not only code.

It's like saying physics is just math. If we read:

F = m*a

there is a ton of knowledge encoded in that formula.

We cannot evaluate the formula alone. We need the knowledge behind it to see if it matches reality.

With LLMs we know for a fact that if the code matches reality, or expectations, it's a happy accident.

bluGill•1h ago
Code doesn't always speak for itself. I've had to do some weird things that make no sense on their own. I leave comments, but they are not always easy to understand. Most of this is when I'm sharing data across threads - there is a good reason for each lock/atomic and each missing lock. (I avoid writing such code, but sometimes there is no choice.) If AI is writing such code, I don't trust it to figure out those details, while I have some coworkers (only a minority, but some) I trust to figure this out.
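A minimal Python sketch of the kind of code being described, where the lock is load-bearing and nothing on the surface says why:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            # This lock is load-bearing: "counter += 1" is a
            # read-modify-write, and without it concurrent increments
            # are lost. The code gives no hint of that on its own.
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000, reliably, because of the lock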
m463•23m ago
> Code is code

oh come on.

That's like saying "food is food" or "an AI howto is the same as a human-written howto".

The problem is that code that looks good is not the same as code that is good, but they are superficially similar to a reviewer.

and... you can absolutely bury reviewers in it.

Workaccount2•1h ago
>My biggest annoyance is that people aren't transparent about when they use AI

You get shamed and dismissed for mentioning that you used AI, so naturally nobody mentions it. They mention AI the first time, see the blowback, and never mention it again. It just shows how myopic groupthink can be.

sys13•1h ago
> the best suggestions I've seen are found by linters in CI, and spell checkers

I don't think this is a rational take on the utility of AI. You really are not leveraging it well.

yomismoaqui•1h ago
AI coding would be better with a little professionalism thrown in. I mean, if you have committed that code, you are responsible for it. Period.

And I say this as a grumpy senior who has found a lot of value in tools like Copilot and especially Claude Code.

vegancap•1h ago
Yeah, I get the feeling. I'm torn to be honest, because I quite enjoy using it, but then I sift through everything line by line, correct things, change the formatting. Alter parts it's gotten wrong. So for me, it's saving me a little bit of time manually writing it all out. My colleagues are either like me, or aren't sold on it. So I think there's a level of trust and recognition that even if we are using it, we're using it cautiously, and wouldn't just YOLO some AI generated code straight into main.

But we're a really small but mature engineering org. I can't imagine the bigger companies with hundreds of less experienced engineers just using it without care and caution; it must cause absolute chaos (or will soon).

barrell•1h ago
I'm not convinced it's what the future holds for three main reasons:

1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.

2. The maintenance burden of code that has no real author is felt months/years after the code is written. Organizations then react a few months/years after that.

3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by an LLM.

I very easily could be wrong, but I think there is hope and if anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.

I'm looking for another technical co-founder (in addition to me) to come work on fun hard problems in a hand written Elixir codebase (frontend is clojurescript because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app

koakuma-chan•1h ago
I agree on all points, but I also have PTSD from the pre-LLM era, when people kept telling me my code was garbage because it wasn't SOLID or whatever. I prefer the way it is now.
majorbugger•1h ago
And what do LLMs have to do with your PTSD?
koakuma-chan•1h ago
They're relevant because those assholes will no longer tell me that I should have written an abstract factory or some shit. AI-generated code is so fucking clean and SOLID.
skydhash•1h ago
SOLID is a nice set of principles. And like all principles, there are valid reasons to break them. Whether to use them is a decision best taken after you've become a master, when you know the tradeoffs and costs.

Learn the rules first, then learn when to break them.

koakuma-chan•53m ago
This is idealistic. Do you actually sit down and evaluate whether the code is SOLID, or is it more like you're just vibe-checking it, and it doesn't actually matter whether you call that SOLID or DRY or whatever letters of the alphabet you prefer? Meanwhile your project is just a PostgreSQL proxy.
skydhash•42m ago
These are principles, not mathematical equations. It's like drawing a human face. The general rule is that, viewed from the front, the eyes are spaced one eye-length apart, or that the intervals between the chin, the base of the nose, the eyebrows, and the hairline are equal. That doesn't fit every face, and artists do break these rules, but a beginner breaks them for the wrong reasons.

So there are a lot of heuristics in code quality. But sometimes it's just plain bad.

mattmanser•25m ago
I actually sat down to really learn what SOLID meant a few years ago when I was getting a new contract and it came up in a few job descriptions. Must have some deep wisdom if everyone wants SOLID code, right?

At least two parts of the SOLID acronym are basically anachronisms, nonsense in modern coding (O + L). And I is basically handled for you with DI frameworks. D doesn't mean what most people think it does.

S is the only bit left and it's pretty much open to interpretation.

I don't really see them as anything meaningful; these days it's basically just "make your classes have a single responsibility", as in the sketch below. It's on the level of KISS, but less general.
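A before/after sketch in Python (my example, not mattmanser's) of what "a single responsibility" cashes out to in practice:

    # Before: one function owns I/O, parsing, and reporting,
    # giving it three unrelated reasons to change.
    def report(path):
        text = open(path).read()
        rows = [line.split(",") for line in text.splitlines()]
        print(f"{len(rows)} rows")

    # After: each function has one responsibility and tests in isolation.
    def read_text(path):
        return open(path).read()

    def parse_rows(text):
        return [line.split(",") for line in text.splitlines()]

    def summarize(rows):
        return f"{len(rows)} rows"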

james2doyle•1h ago
Totally agree. I use it for chores (generate an initial README, document the changes from this diff, summarize this release, scaffold out a new $LANG/$FRAMEWORK project) that are well understood. I have also been using it to work in languages that I can write, or have written in the past, but am out of practice with (Python), though I'm still babysitting it.

I recently used it to write a Sublime Text plugin for me, and I forked a Chrome extension and added a bunch of features to it. Both are open source and pretty trivial projects.

However, I rarely use it to write code for me in client projects. I need to know and understand everything going out that we are getting paid for.

bdangubic•16m ago
> I need to know and understand everything going out that we are getting paid for.

What is preventing you from this even if you are not the one typing it up? You can actually understand more when you remove the burden of typing: keep asking questions, iterate on the code, do code review, security review, performance review… if done “right”, you can end up not only understanding better but learning a bunch of stuff you didn't know along the way.

risyachka•1h ago
This.

If someone says "most of my code is AI", there are only 3 reasons for it:

1. They do something very trivial on a daily basis (and it's not a bad thing, you just need to be clear about this).

2. The skill is not there, so they have to use AI; otherwise it would be faster to DIY it than to explain the complex case, and how to solve it, to the AI.

3. They prefer explaining to an LLM rather than writing the code themselves. Again, no issue with this. But we must be clear here: it's not faster. It's just someone else writing the code for you while you explain in detail what to do.

bdangubic•12m ago
there is a 4 and 5 and 6… :)

Here's 4: there are senior-level SWEs who have spent their entire careers automating everything they had to do more than once. It is one of the core traits that differentiates a "10x" SWE from the rest.

LLMs have taken the automation part to another level, and the best SWEs I know use them every hour of every day to automate shit we never had tools to automate before.

codr7•1h ago
Not my future.
Herring•1h ago
AI will keep improving

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

https://epoch.ai/blog/what-will-ai-look-like-in-2030

There's a good chance that eventually reading code will become like inspecting assembly.

runjake•1h ago
> AI will keep improving

Agree. But most code already generated won't be improved until many years from now.

> There's a good chance that eventually reading code will become like inspecting assembly.

Also agree, but I believe it will be very inefficient and complex code, unlike most hand-written assembly.

I'm not sure tight code matters to anyone but maybe 0.0001% of us programmers anymore.

epicureanideal•1h ago
> There's a good chance that eventually reading code will become like inspecting assembly.

We don't read assembly because we read the higher-level code, which is deterministically compiled to lower-level code.

The equivalent situation for LLMs would be if we were reviewing the prompts only, and if we had 100% confidence that the prompt resulted in code that does exactly what the prompt asks.

Otherwise we need to inspect the generated code. So the situation isn’t the same, at least not with current LLMs and current LLM workflows.

YeGoblynQueenne•1h ago
>> We don’t read assembly because we read the higher level code, which deterministically is compiled to lower level code.

I think the reason "we" don't read, or write, assembly is that it takes a lot of effort and a detailed understanding of computer architecture that are simply not found in the majority of programmers, e.g. those used to working with javascript frameworks on web apps etc.

There are of course many "we" who work with assembly every day: people working with embedded systems, for instance, or games programmers.

incomingpain•1h ago
>I think I'm finally ready to get off the ride.

I'm sorry you feel that way. Yes, this is probably the future.

AI is a new tool, or really a huge new category of AI tools, that will take time to gain competency with.

AI doesn't eliminate the need for developers; it's just a whole new load of baggage, and we will NEVER get to the point where that new pile of problems becomes zero.

A tool that gemini cli really loves is Ruff; I run it often :)

sexyman48•1h ago
> I'm finally ready to get off the ride

c ya, wouldn't wanna b ya.

codingdave•1h ago
I think we're going to look back on this time as "Remember when basically all new software dev spun its wheels for years while everyone tried to figure out where AI fit in?"

I'm not sick of AI. I'm just sick of people thinking that AI should be everything in our industry. I don't know how many times I can say "It is just a tool." Because it is. We're 3 years deep into LLM-based products, and people are just now starting to even ask... "Hey, where are the strengths and weaknesses of this tool, and best practices for when to use it or not?"

andrewstuart•1h ago
No, I love it.

When I see AI code I feel excited that the developer is building stuff beyond their previous limits.

madamelic•1h ago
As others have said, LLM generation of code is no excuse for not self-reviewing, testing, and understanding your own code.

It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.

The only caveat is verbosity of code. It drives me up the wall how these models try to one-shot production code and put a lot of cruft in. I am starting to have the expectation of having to go in and pare down overly ambitious code to reduce complexity.

I adopted LLM coding fairly early on (GPT-3), and the difference between then and now is immense. It's still a fast-moving technology, so I don't expect that the model or tool I use today will be the one I use in 3 months.

I have switched modalities and models pretty regularly to try to keep cutting edge and getting the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind. It's going to be the equivalent, in my opinion, of keeping hard cover reference manuals on your desk versus using a search engine.

bigstrat2003•1h ago
I think people will eventually wake up and realize LLMs aren't actually good for generating code, but it might take a while. The hype train is rolling at full steam and a lot of people won't get off until they get personally burned.
gerash•1h ago
One downside IMHO is reimplementing the same building blocks rather than refactoring and reusing because it’s cheap to reimplement.
add-sub-mul-div•1h ago
I am so glad I spent 25 years in this field, made my bag, and got out right before it became the norm to stop doing the fun part of the job yourself.
apple4ever•1h ago
I'm sick of AI in general.
twalichiewicz•1h ago
I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.

When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”

But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.

whycome•1h ago
Did you just basically coin “artisanal code”?
greenavocado•1h ago
AI-generated code from Claude Sonnet, Kimi K2 0905, or GLM-4.5 is not good enough to simultaneously maintain structure and implement features in complex code without doing insane things like grossly violating every SOLID principle. If you impose too much structure on them, they fall apart, because too often they don't truly understand the long-range ramifications of their code. These assistants are best suited to generating highly testable snippets of code; pushing them to work in a large codebase pushes their capabilities too far, too often.
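A minimal sketch (my example, not greenavocado's) of what a "highly testable snippet" looks like: a pure function with a contract small enough to verify on the spot:

    # A pure function with no hidden state: easy for an assistant to
    # generate and easy for a reviewer to verify exhaustively.
    def normalize_whitespace(s: str) -> str:
        return " ".join(s.split())

    assert normalize_whitespace("  a \t b\n c ") == "a b c"
    assert normalize_whitespace("") == ""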
juancn•1h ago
I only use small local models like those of IntelliJ (under 100M each), which just save you the tedium of typing some common boilerplate.

But I don't prompt them, they typically just suggest a completion, usually better than what we had before from pure static analysis.

Anything more and it detracts: I learn nothing, and the code is believable crap, which requires mind-bogglingly boring and intense code reviews.

It's sometimes fine for prototyping throw-away code (especially if you don't intend to invest in learning the tech deeply), but I don't like what I miss by not doing the thinking myself.

breppp•1h ago
Due to Brandolini's Law, there's an asymmetry between the time it takes to generate crap code and the time it takes to review crap code.

That's what makes it seem disrespectful: as if someone is wasting your time when they could have done better.

throwacct•26m ago
I'm using "AI" almost exclusively to scaffold projects. I once spent 2 days trying to find the reason the code wasn't working the way it was supposed to. Where I work, we use it in moderation, knowing that if you generate code, you must double-check everything and confirm that what you generated doesn't smell. You'll be held accountable if something breaks because you were eager to push unreviewed code.