

I've spent 4 months and $800/mo AI bill on Cursor, Claude Code. Later is better?

3•ianberdin•6mo ago
Hi HN. There is huge hype around Claude Code and AI agents in general.

After four months with Cursor and one with Claude Code, I'm a super-user. I was paying up to $700/mo for Cursor on a usage basis before switching to their new subscription, and I've been on a paid Claude Code plan for the last month. I code every day with these tools, using Sonnet 4.0 and Gemini 2.5 Pro. This is a guide born from experience and frustration.

First, the verdict on Claude Code (the CLI agent). The idea is great: programming from the terminal, even on a server. But in practice it's inferior. You can't easily track its changes, and within days the codebase becomes a mess of hacks and workarounds. Compared to Cursor, the quality and productivity are at least three times worse. It's a step backward. It is nice, though, for one-off prototypes where you don't care about the codebase.

Now, let's talk about LLMs. This is the most important lesson: models do not think. They are not your partner. They are hyper-sensitive calculators. The best analogy is time travel: change one tiny detail in the past, and the entire future is different. It’s the same with an LLM. One small change in your input context completely alters the output. Garbage in, garbage out. There is no room for laziness.

Understanding this changes everything. You stop hoping the AI will "figure it out" and start engineering the perfect input. After extensive work with LLMs both in my editor and via their APIs, here are the non-negotiable rules for getting senior-level code instead of junior-level spaghetti.

Absolute Context is Non-Negotiable. You must provide 99% of the relevant code in the context. If you miss even a little, the model will not know its boundaries; it will hallucinate to fill the gap. This is the primary source of errors.

Refactor Your Code for the AI. If your code is too large to fit in the context window (Cursor's max is 200k tokens), the LLM is useless for complex tasks. You must write clean, modular code broken into small pieces that an AI can digest. The architecture must serve the AI.
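
A rough way to check whether a slice of the codebase even fits is to estimate tokens per file. This is only a sketch with assumed values (a ~4-characters-per-token heuristic, a src/ directory, Python files); swap in a real tokenizer if you need accuracy:

    # Rough check: does this module fit in a 200k-token window?
    # Assumes ~4 characters per token; directory and file glob are illustrative.
    import pathlib

    CONTEXT_LIMIT = 200_000   # Cursor's stated max, per this post
    CHARS_PER_TOKEN = 4       # rule-of-thumb estimate, not exact

    def estimate_tokens(path: pathlib.Path) -> int:
        return len(path.read_text(errors="ignore")) // CHARS_PER_TOKEN

    total = 0
    for f in sorted(pathlib.Path("src").rglob("*.py")):
        tokens = estimate_tokens(f)
        total += tokens
        print(f"{tokens:>8}  {f}")

    print(f"~{total} tokens total; fits in one window: {total < CONTEXT_LIMIT}")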

Force-Feed the Context. Cursor tries to save money by limiting the context it sends. This is a fatal flaw. I built a simple CLI tool that uses regex to grab all relevant files, concatenates them into a single text block, and prints it to my terminal. I copy this entire 150k-200k token block and paste it directly into the chat. This is the single most important hack for good results.
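
The idea is simple enough to sketch. Something along these lines (the file walk, regex argument, and header format here are illustrative, not the exact tool):

    # Context-dump sketch: collect files whose paths match a regex,
    # concatenate them with headers, and print the whole block to stdout.
    import pathlib
    import re
    import sys

    def dump_context(root: str, pattern: str) -> None:
        matcher = re.compile(pattern)
        for path in sorted(pathlib.Path(root).rglob("*")):
            if path.is_file() and matcher.search(str(path)):
                print(f"\n===== {path} =====")
                print(path.read_text(errors="ignore"))

    if __name__ == "__main__":
        # e.g. python dump_context.py . "src/(auth|billing)/.*\.py$"
        dump_context(sys.argv[1], sys.argv[2])

Pipe the output through pbcopy or xclip and the whole block is ready to paste into the chat.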

Isolate the Task. Only give the LLM a small, isolated piece of work that you can track yourself. If you can't define the exact scope and boundaries of the task, the AI will run wild and you will be left with a mess you can't untangle.

"Shit! Redo." Never ask the AI to fix its own bad code. It will only dig a deeper hole. If the output is wrong, scrap it completely. Revert the changes, refine your context and prompt, and start from scratch.

Working with an LLM is like handling an aggressive, powerful pit bull. You need a spiked collar (strict rules and perfect context) to control it.