frontpage.

Zed editor switching graphics lib from blade to wgpu

https://github.com/zed-industries/zed/pull/46758
96•jpeeler•1h ago•53 comments

Monosketch

https://monosketch.io/
234•penguin_booze•3h ago•47 comments

Open Source Is Not About You (2018)

https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9
41•doubleg•47m ago•13 comments

Apple, fix my keyboard before the timer ends or I'm leaving iPhone

https://ios-countdown.win/
47•ozzyphantom•1h ago•17 comments

Resizing windows on macOS Tahoe – the saga continues

https://noheger.at/blog/2026/02/12/resizing-windows-on-macos-tahoe-the-saga-continues/
710•erickhill•15h ago•353 comments

MinIO repository is no longer maintained

https://github.com/minio/minio/commit/7aac2a2c5b7c882e68c1ce017d8256be2feea27f
338•psvmcc•7h ago•216 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
20•mxfh•5d ago•4 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
36•todsacerdoti•5d ago•4 comments

GPT‑5.3‑Codex‑Spark

https://openai.com/index/introducing-gpt-5-3-codex-spark/
806•meetpateltech•21h ago•350 comments

Gemini 3 Deep Think

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/
946•tosh•22h ago•627 comments

Cache Monet

https://cachemonet.com
70•keepamovin•5d ago•21 comments

Gauntlet AI (YC S17) trains you to master building with AI, gives you a $200k+ job

http://qualify.gauntletAI.com
1•austenallred•2h ago

Faster Than Dijkstra?

https://systemsapproach.org/2026/02/09/faster-than-dijkstra/
5•drbruced•3d ago•0 comments

Tell HN: Ralph Giles has died (Xiph.org | Rust@Mozilla | Ghostscript)

367•ffworld•16h ago•19 comments

Advanced Aerial Robotics Made Simple

https://www.drehmflight.com
44•jacquesm•5d ago•5 comments

An AI agent published a hit piece on me

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
2037•scottshambaugh•23h ago•822 comments

Particle Lenia

https://znah.net/lenia/
38•memalign•4d ago•0 comments

MMAcevedo aka Lena by qntm

https://qntm.org/mmacevedo
183•stickynotememo•9h ago•121 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
69•lukastyrychtr•6d ago•5 comments

colorForth

https://colorforth.github.io/cf.htm
11•tosh•3h ago•1 comment

AWS Adds support for nested virtualization

https://github.com/aws/aws-sdk-go-v2/commit/3dca5e45d5ad05460b93410087833cbaa624754e
250•sitole•15h ago•98 comments

CSS-Doodle

https://css-doodle.com/
54•dsego•7h ago•2 comments

Polis: Open-source platform for large-scale civic deliberation

https://pol.is/home2
291•mefengl•21h ago•108 comments

Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed

http://blog.can.ac/2026/02/12/the-harness-problem/
728•kachapopopow•1d ago•267 comments

Ruby Newbie Is Joining the Ruby Users Forum

https://www.rubyforum.org/tag/getting-started
54•jvrc•4d ago•11 comments

Apocalypse no: how almost everything we thought we knew about the Maya is wrong

https://www.theguardian.com/news/2026/feb/12/apocalypse-no-how-almost-everything-we-thought-we-kn...
9•speckx•47m ago•5 comments

My Grandma Was a Fed – Lessons from Digitizing Hours of Childhood

https://sampatt.com/blog/2025-12-13-my-grandma-was-a-fed-lessons-from-digitizing-hundreds-of-hour...
169•SamPatt•5d ago•53 comments

Beginning fully autonomous operations with the 6th-generation Waymo driver

https://waymo.com/blog/2026/02/ro-on-6th-gen-waymo-driver
242•ra7•23h ago•303 comments

Major European payment processor can't send email to Google Workspace users

https://atha.io/blog/2026-02-12-viva
567•thatha7777•1d ago•388 comments

Ring owners are returning their cameras

https://www.msn.com/en-us/lifestyle/shopping/ring-owners-are-returning-their-cameras-here-s-how-m...
308•c420•9h ago•223 comments

I asked Claude Code to remove jQuery. It failed miserably

https://www.jitbit.com/alexblog/323-i-asked-claude-code-to-remove-jquery-it-failed-miserably/
52•speckx•2h ago

Comments

q3k•1h ago
You're holding it wrong. I just spent 14 hours (high on coke) working with Claude to generate an agent orchestration framework that has already increased my output to 20x over just using Copilot. Adapt or you'll be left behind and forever part of the permanent underclass.
defraudbah•1h ago
that's a pretty long time to be on someone's cok
bdangubic•1h ago
the time you are on coke = the time there is coke around to be had :)
re-thc•1h ago
It’s Claude Coke
Insanity•1h ago
Well, it’ll definitely make you hallucinate!
netdevphoenix•1h ago
For the oblivious: /s
snarf21•1h ago
This one is a lot harder to tell because there are some AI bros who claim similar things but are completely serious. Even look at Show HN now: There used to be ~20-40 posts per day but now there are 20 per HOUR.

(Please oh please can we have a Show HN AI. I'm not interested in people's weekend vibe-coded app to replace X popular tool. I want to check out cool projects where people invested their passion and time.)

zdw•1h ago
RFK Jr. is that you?
nananana9•1h ago
Tomorrow you'll write 20 agent orchestration frameworks in 14 hours!
q3k•1h ago
Amen! I'm pissing blood faster than I can increase my credit card limit for token use, but we'll make it. The 200x (10x from LLM + 20x from orchestration) means that by the end of 2026 we'll all be building $1MM ARR side projects daily.
esseph•32m ago
I would love to subscribe to your newsletter to hear more about this topic.
xcubic•1h ago
Can you share details about this? Do you have a repo?
gherkinnn•1h ago
Doesn't coke come with mania?

Either way, OP is holding it wrong and vague hypebro comments like yours don't help either. Be specific.

Here's an example: I told Claude 4.5 Opus to go through our DB migration files and the ORM model definitions and point out any DB indexes we might be missing based on how the data is being accessed. It did so, ingested all the controllers as well and a short while later presented me with a list of missing indexes, ordered by importance and listing why each index would speed up reads and how to test the gains.

Now, I have no way of knowing how exhaustive the analysis was, but the suggestions it gave were helpful, Claude did not recommend over-indexing, and considered read vs write performance.

The equivalent work would have taken me a day, Claude gave me something helpful in a matter of minutes.

Now, I for one could not handle the information stream of 20 such analyses coming in. I can't even handle 2 large feature PRs in parallel. This is where I ask for more specifics.
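
For a concrete picture, the kind of mismatch such a review surfaces looks roughly like this (a made-up node-postgres-style sketch, not the actual code or output from that session):

  // Hypothetical controller query: it filters orders by customer_id and sorts
  // by created_at on every page load.
  const { Pool } = require("pg");
  const db = new Pool(); // connection settings come from the PG* env vars

  async function recentOrders(customerId) {
    // Without a supporting index this is a sequential scan plus an explicit sort.
    const { rows } = await db.query(
      "SELECT id, total FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 20",
      [customerId]
    );
    return rows;
  }

  // The migrations only ever created the primary key, so the suggestion is a
  // composite index matching the access pattern:
  //   CREATE INDEX orders_customer_created_idx ON orders (customer_id, created_at DESC);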

dmbche•1h ago
Parent comment seems sarcastic
morkalork•35m ago
I believe it's in reference to things like this:

https://steve-yegge.medium.com/gas-town-emergency-user-manua...

weakfish•1h ago
Parent comment is a joke I think, but there’s something ironic (Poe’s law?) about it being possibly _not_ a joke
beepbooptheory•1h ago
Why go through all migration files if you're looking for missing indices in the present? That doesn't seem to make sense when you could just look at the schema as it stands? Either way, why would this take you a day? How many tables do you have?
bogzz•1h ago
Sniped.
SJMG•56m ago
There's a parenthetical offset about being high on coke for 14 hours. It's obviously a joke.
chasd00•1h ago
That’s nothing, I used Claude Code to put together a totally new agent harness model architecture that can cook 30-minute brownies in only 20 minutes!
bogzz•1h ago
CDDOL is undoubtedly the future, it is just sad seeing all these negative comments. It's like those people don't even know they've been made redundant already.

It's not too late to jump on the Cocaine-Driven Development Orchestrated by LLMs train.

neya•1h ago
I built a windmill with Claude. I created a skills.md and followed everything by the book. But now, I have to supply power to keep the windmill running. What am I doing wrong?
ladyprestor•54m ago
You didn't mention the $1M ARR!
aurareturn•1h ago

  The moment you point it at a real, existing codebase - even a small one - everything falls apart.
Not my experience. It excels in existing codebases too.

I often ask it "I have this bug. Why?" And it almost always figures it out and fixes it. Huge code base.

Codex user, not Claude Code.

bsaul•1h ago
Not my experience either, and I'm on Claude Code. I'd be really curious to see what went wrong in OP's case. Maybe too many instructions? Could it be that it used a fast model instead of the deep ones?
n4r9•1h ago
They say explicitly what model they're using.
aurareturn•1h ago
No, OP said he used the Max Opus 4.6.

Anyways, I think one area where Codex and Claude Code fall short is that they do not test the changes they made by actually using the app.

In this case, the LLM should ideally render the page in a real browser and actually click the buttons to verify. Ideally it would test before the changes and again after, to confirm the behavior is the same. Maybe it should take a screenshot before the change, then a screenshot after, and compare the two.

I asked why Codex and Claude don't do this here: https://news.ycombinator.com/item?id=46792066
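
As a rough sketch of what that could look like with @playwright/test's built-in screenshot comparison (the URL, route and button name here are made up):

  // visual-regression.spec.js: hypothetical before/after check.
  const { test, expect } = require("@playwright/test");

  test("ticket list looks the same after the jQuery removal", async ({ page }) => {
    await page.goto("http://localhost:3000/tickets"); // assumed local dev server
    await page.getByRole("button", { name: "New ticket" }).click(); // assumed UI
    // Compares against a stored baseline image and fails on visual drift.
    await expect(page).toHaveScreenshot("ticket-list.png", { maxDiffPixelRatio: 0.01 });
  });

Run it once against the pre-change code so Playwright records the baseline, then re-run after the refactor; anything beyond the threshold fails the test.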

threetonesun•1h ago
Yeah, if you have these tools in place to validate its changes you can quickly iterate with it to the right results. But think through how it's making UI changes and it quickly becomes obvious why it can make absolutely wrong and terrible guesses about the implementation details: it can't _see_ what it's doing, or interact with it; it's just pattern-matching other implementations it's seen.
aurareturn•1h ago
Yea, the next breakthrough for Codex or Claude Code would be to actually use/test the app like a real human would during the development process.
simonw•42m ago
Here's a document produced by Claude Code using my Showboat testing tool this morning to help explore SeaweedFS (a local S3 clone) - it includes trying things out with curl and getting screenshots from Chrome using my Rodney tool: https://github.com/simonw/research/blob/main/seaweedfs-testi...
throwup238•1h ago
See the /chrome command in Claude Code.
mwigdahl•59m ago
You can easily do this, at least with Claude Code. Ask it to install and use Playwright to confirm rendering and flow. You're correct that it is a failing to not do this. When you do, it definitely helps cut down on bugs.

EDIT: Sorry, just noticed you said "real browser". Haven't tried this but Playwright gets you a long way down the road.

aurareturn•56m ago
Will check it out. Looks like there is also chrome-devtools-mcp for Codex.
lenerdenator•57m ago
FWIW, I've found Playwright tests to be a decent way of getting Claude to do what you're talking about.
netdevphoenix•1h ago
> Not my experience. It excels in existing codebases too.

Why don't you prove it?

1. Find an old large codebase on Codeberg (avoiding the octopus for obvious reasons)

2. Video stream the session and make the LLM convo public

3. Ask your LLM to remove jQuery from the repo and submit regular commits to a public remote branch

Then we will be able to judge if the evidence stands

aurareturn•1h ago
I don't have to prove it. I do it every single day at work in a real production codebase that my business relies on.

And I don't remove jQuery every day. Maybe the OP is right that Opus 4.6 sucks at removing jQuery. I don't know. I've never asked an AI to do it.

    The moment you point it at a real, existing codebase - even a small one - everything falls apart.
This statement is absolutely not true based on my experience. Codex has been amazing for me at existing code bases.
netdevphoenix•1h ago
Extraordinary claims require extraordinary evidence. "Works on my machine" ain't it.
aurareturn•1h ago
Is it an extraordinary claim that Opus 4.6 or GPT 5.3 works amazingly well on existing code bases in my experience?

That's funny. I feel like it's the opposite. Claiming that Opus 4.6 or GPT 5.3 fails as soon as you point them to an existing code base, big or small, is a much more extraordinary claim.

simonw•1h ago
What are the obvious reasons?
uludag•1h ago
There could be a whole spectrum of types of repositories where these tools excel and fail. I can imagine that a large repository, poorly documented, with confusing, inconsistent usages/patterns, in a dynamic language, with poor tests will almost always lead to failure.

I honestly think that size and age alone are sufficient to lead these tools into failure cases.

aurareturn•1h ago
It could be. I mainly use LLMs with TypeScript and Go, both typed languages.
netdevphoenix•1h ago
> I often ask it "I have this bug. Why?" And it almost always figures it out and fixes it. Huge code base.

Is your AI PR publicly available on GitHub?

aurareturn•1h ago
No. I don't do any open source work. I work for a private company.
whiplash451•1h ago
These two things are not mutually exclusive.
re-thc•1h ago
You don't remove jQuery. EVER. You'll lose all the $.
padjo•1h ago
This sounds like something I would have done with sed
rado•1h ago
Refactoring jQuery to vanilla JS was one of my first AI dev experiences a couple of years ago and it was great.
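
Most of those conversions are mechanical, roughly along these lines (a generic sketch, not the code from that project):

  // Before: typical jQuery wiring.
  $("#save").on("click", function () {
    $(".status").text("Saving...").addClass("busy");
  });

  // After: the vanilla equivalent. Unlike jQuery, querySelector returns null
  // when #save is missing, so guard for that if it can happen.
  document.querySelector("#save").addEventListener("click", () => {
    document.querySelectorAll(".status").forEach((el) => {
      el.textContent = "Saving...";
      el.classList.add("busy");
    });
  });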
coldcode•1h ago
For any AI post, there always seems to be that one person for whom it worked great, and a whole lot for whom it didn't. Your mileage may vary...

Some things AI does well, many things it may not be worth the effort entailed, and some it downright sucks at and may even be harmful. The question is: will it ever change the curve to where it is useful most of the time?

mingus88•1m ago
Like any tool, you get better at using it. YMMV indeed.

The author of this article could probably have, for example, written most of this into the project's CLAUDE.md and the AI would learn what not to do.

Instead they wrote it up as a blog post which is unsurprisingly not going to net quality software.

Having some way for Claude to test what it wrote is critical as well. It will learn on its own very fast if it can see the error messages and iterate on it like any other developer would

Sounds like the author had tests that Claude never ran. Sounds misconfigured to me. Again, did the author learn how to use the tool?

Arubis•1h ago
That sounds like a realistic outcome for a real engineer, too.
simonw•1h ago
How did you have it testing its code changes? Did you tell it to use Playwright or agent-browser or anything like that?

If coding agents can't test the code as they're editing it they're no different from pasting your entire codebase into ChatGPT and crossing your fingers.

At one point you mention it hadn't run "npm test" - did it run that once you directly told it to?

I start every one of my coding agent sessions with "run uv run pytest" purely to confirm that it can run the tests and seed the idea with it that tests exist and matter to me.

Your post ends with a screenshot showing you debating a C# syntax thing with the bot. I recommend telling it "write code that demonstrates if this works or not" in cases like that.

aurareturn•1h ago

  If coding agents can't test the code as they're editing it they're no different from pasting your entire codebase into ChatGPT and crossing your fingers.
Out of curiosity, how do you get Claude Code or Codex to actually do this? I asked this question here before:

https://news.ycombinator.com/item?id=46792066

SJMG•54m ago
Instruct it to test as it goes along. Add whatever testing base command to your list of trusted tools.
simonw•49m ago
I don't use CLAUDE.md, I instead use simple token-efficient conventions.

Most importantly all of my Python projects use a pyproject.toml file with this pattern:

  [dependency-groups]
  dev = ["pytest"]
Which means I can tell the agent:

  Run "uv run pytest"
And it will run the tests - without first needing to setup a virtual environment or install dependencies or anything like that. I wrote more about that pattern here: https://til.simonwillison.net/uv/dependency-groups

For more complex test suites I'll give it more detailed instructions.

For testing web apps I used to tell it "use playwright" or "use playwright Python".

I'm currently experimenting with my own simple CLI browser automation tool. This means I can tell it:

  Run "uvx rodney --help" and then use 
  rodney to test this change
The --help output tells it everything it needs to use the tool - here's that document in the repo: https://github.com/simonw/rodney/blob/10b2a6c81f9f3fb36ce4d1...

I've recently started having the bots "manually" test changes with a new tool I built called Showboat. It's less than a week old but it's so far been working really well: https://simonwillison.net/2026/Feb/10/showboat-and-rodney/

dana321•1h ago
It's a slot machine, you need to revert the changes and try again!
lenerdenator•1h ago
jQuery simply turned the tables and executed a `$( ".Claude_Code" ).remove();`. Now Anthropic's services are down across several regions and emergency meetings are being held with stakeholders.

jQuery: It's Going Absolutely Nowhere™

cbg0•1h ago
Seeing some of the pictures where OP says "MOTHERFUCKER" in the prompts and how simplistic some of the questions provided are gives me a feeling that CC is being used incorrectly.

My experience with 4.6 has been that it gobbles up tokens like crazy but it's pretty smart otherwise. Even the latest LLMs need a lot of context to know what they're working on and which versions to target, plus access to some MCP server like Context7 to get up-to-date documentation (especially for JS/TS).

My non-tech friends have a tendency to talk to AI like a person and then complain about the quality of the answers and I always tell them: ask your question, with one or two follow-ups max then start a new conversation. Also, provide as much relevant context as possible to get the best answer, even if it seems obvious. I'd expect a SWE to already be aware of this stuff.

I've been able to find obscure edge cases thanks to Claude and I've also had it produce code that does the opposite of what I asked even with a clear prompt, but that's the nature of LLMs.

Anon1096•1h ago
> Also, why not run "npm run test" at some point? We have tons of tests. I even have an integration test that crawls the entire fucking app recursively link-by-link in a headless browser and reports on JS errors. CLAUDE.md has all the info.

I'm a little baffled by this post. The author claims to have "Wrote a comprehensive CLAUDE.md with detailed instructions." and yet didn't have "run the tests" anywhere? I realize this post is going to be a playground for bashing on AI, but I just wish the prompt was published or, even better, if it's open source, let other people try. Seems like the perfect case to throw Claude Code at in a wiggum loop overnight.

kittikitti•1h ago
Removing jQuery is a great task and one I hope to take on in some of my JavaScript code bases. Thank you for this post. I don't know exactly why, but I've found these agents to be less useful when the task runs counter to popular coding methods. Although there are many reasons why replacing jQuery is a great idea, coding agents may fail on this because so much of their training data relies on jQuery. For example, many top answers on StackOverflow use jQuery, perhaps to address the same logic you are trying to replace.
josefritzishere•1h ago
Surprise factor zero.
lenerdenator•59m ago
> Why AI is so bad at vanilla JS and HTML, when there's no React/Vue in a project?

Because we're still paying for Brendan Eich's mistakes 30 years later (though Brendan isn't, apparently), and even an LLM trained on an unfathomably-large corpus of code by experts at hundreds of millions of dollars of expense can't unscrew it. What, like, even is a language's standard library, man?

> The moment you point it at a real, existing codebase - even a small one - everything falls apart

That's not been my experience with running Claude to create production code. Plan mode is absolutely your friend, as is tuning your memory files and prompts. You'll need to do code reviews as before, and when it makes changes that you don't like (like patching in unit tests), you need to correct it.

Also, we use hexagonal architecture, so there are clean patterns for it to gather context from. FWIW, I work in Python, not JS, so when Claude was trained on it, there weren't twenty wildly different flavor-of-the-week-fifteen-years-ago frameworks and libraries to confuse it.

If JS sucks to write as a human, it will suck even more to write as a LLM.

littlecranky67•48m ago
Not surprised. The amount of jQuery pasta code from the 2010s that the models are trained on probably makes it look like all the jQuery-specific stuff is plain JavaScript. Plus, in my experience (and luckily for me as a mostly FE dev) AIs suck at all things frontend (relative to other scenarios). They just never got trained on the real, rendered output in the browser, so they can't "see" and complete the feedback loop during training. Most tests in JavaScript projects generate <div>-soup - so the AI gets trained on that output as feedback, vs. the actual browser-rendered image.
simonw•40m ago
Were you using --dangerously-skip-permissions or were you approving every edit and every tool use?

Which tools did it use?

tommy_axle•21m ago
If doing it directly fails (not surprising), wouldn't the next thing (maybe the first thing) to do be to have the AI write a codemod to do what needed to be done, then apply the codemod? Then all you need to do is get the codemod right and apply it to as many files as you need. Seems much more predictable and context-efficient.
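
For the jQuery case, a first cut might look something like this with jscodeshift (a sketch that only handles the trivial $(selector).hide() pattern; a real codemod would cover far more of the API):

  // remove-jquery-hide.js: hypothetical jscodeshift transform that rewrites
  // $(selector).hide() into document.querySelectorAll(selector).forEach(...).
  module.exports = function transformer(fileInfo, api) {
    const j = api.jscodeshift;
    const root = j(fileInfo.source);

    root
      .find(j.CallExpression, {
        callee: {
          type: "MemberExpression",
          property: { name: "hide" },
          object: { type: "CallExpression", callee: { name: "$" } },
        },
      })
      .replaceWith((path) => {
        // The selector originally passed to $()
        const selector = path.node.callee.object.arguments[0];
        // Build: document.querySelectorAll(selector).forEach(el => el.style.display = "none")
        return j.callExpression(
          j.memberExpression(
            j.callExpression(
              j.memberExpression(j.identifier("document"), j.identifier("querySelectorAll")),
              [selector]
            ),
            j.identifier("forEach")
          ),
          [
            j.arrowFunctionExpression(
              [j.identifier("el")],
              j.assignmentExpression(
                "=",
                j.memberExpression(
                  j.memberExpression(j.identifier("el"), j.identifier("style")),
                  j.identifier("display")
                ),
                j.literal("none")
              )
            ),
          ]
        );
      });

    return root.toSource();
  };

You would run it with something like "npx jscodeshift -t remove-jquery-hide.js src/" and review the diff; the transform is reviewable and repeatable in a way that an agent freestyling edits across the tree is not.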
simonw•19m ago
This should work really well, but you still need to first ensure the agent is able to test the code (both through automated tests and "manually" poking at it) so it can verify the changes made actually work.