
I've spent 4 months and an $800/mo AI bill on Cursor and Claude Code. Is the latter better?

3•ianberdin•6mo ago
Hi HN. There is huge hype around Claude Code and AI agents in general.

After four months with Cursor and one with Claude Code, I'm a super-user. I was paying up to $700/mo for Cursor on usage-based billing before switching to their new subscription, and I've been on a paid Claude Code plan for the last month. I code every day with these tools, using Sonnet 4.0 and Gemini 2.5 Pro. This is a guide born from experience and frustration.

First, the verdict on Claude Code (the CLI agent). The idea is great: programming from the terminal, even on a server. But in practice, it's inferior. You can't easily track its changes, and within days the codebase becomes a mess of hacks and workarounds. Compared to Cursor, the quality and productivity are at least three times worse. It's a step backward. That said, it's nice for one-off prototypes where you don't care about the codebase.

Now, let's talk about LLMs. This is the most important lesson: models do not think. They are not your partner. They are hyper-sensitive calculators. The best analogy is time travel: change one tiny detail in the past, and the entire future is different. It’s the same with an LLM. One small change in your input context completely alters the output. Garbage in, garbage out. There is no room for laziness.

Understanding this changes everything. You stop hoping the AI will "figure it out" and start engineering the perfect input. After extensive work with LLMs both in my editor and via their APIs, here are the non-negotiable rules for getting senior-level code instead of junior-level spaghetti.

Absolute Context is Non-Negotiable. You must provide 99% of the relevant code in the context. If you miss even a little, the model will not know its boundaries; it will hallucinate to fill the gap. This is the primary source of errors.

Refactor Your Code for the AI. If your code is too large to fit in the context window (Cursor's max is 200k tokens), the LLM is useless for complex tasks. You must write clean, modular code broken into small pieces that an AI can digest. The architecture must serve the AI.
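A quick way to sanity-check whether a set of modules will fit before pasting them in is to estimate token counts from file sizes. A minimal sketch (the ~4 characters-per-token ratio is a common rule of thumb for code, not Cursor's actual tokenizer, and the function names are mine):

```python
CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary with language and style


def estimate_tokens(path: str) -> int:
    """Estimate a file's token count from its length in characters."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return len(f.read()) // CHARS_PER_TOKEN


def fits_in_window(paths: list[str], window: int = 200_000) -> bool:
    """True if the combined estimate for all files fits the context window."""
    return sum(estimate_tokens(p) for p in paths) <= window
```

If a feature's relevant files blow past the window under even this crude estimate, that's the signal to split the module before asking the AI to touch it.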

Force-Feed the Context. Cursor tries to save money by limiting the context it sends. This is a fatal flaw. I built a simple CLI tool that uses regex to grab all relevant files, concatenates them into a single text block, and prints it to my terminal. I copy this entire 150k-200k token block and paste it directly into the chat. This is the single most important hack for good results.
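The tool itself is trivial. A minimal sketch of the idea (the function name, header format, and regex usage are my assumptions, not the author's actual script):

```python
import re
import sys
from pathlib import Path


def dump_context(root: str, pattern: str) -> str:
    """Concatenate every file under `root` whose path matches `pattern`
    into one labeled text block, ready to paste into a chat."""
    rx = re.compile(pattern)
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and rx.search(str(path)):
            body = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f"===== {path} =====\n{body}")
    return "\n\n".join(parts)


if __name__ == "__main__" and len(sys.argv) >= 3:
    # e.g. python dump_context.py src '\.(py|ts)$'
    print(dump_context(sys.argv[1], sys.argv[2]))
```

Pipe the output to your clipboard and paste the whole block into the chat, so the model sees every boundary instead of whatever slice the editor decides to send.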

Isolate the Task. Only give the LLM a small, isolated piece of work that you can track yourself. If you can't define the exact scope and boundaries of the task, the AI will run wild and you will be left with a mess you can't untangle.

"Shit! Redo." Never ask the AI to fix its own bad code. It will only dig a deeper hole. If the output is wrong, scrap it completely. Revert the changes, refine your context and prompt, and start from scratch.

Working with an LLM is like handling an aggressive, powerful pitbull. You need a spiked collar—strict rules and perfect context—to control it.