frontpage.

My First Impressions of MeshCore Off-Grid Messaging

https://mtlynch.io/first-impressions-of-meshcore/
1•mtlynch•1m ago•0 comments

I built a tool to restore old family photos without ruining them with AI

https://forevi.ai
1•poznerd•1m ago•1 comments

Designing Electronics That Work

https://nostarch.com/designingelectronics
1•0x54MUR41•1m ago•0 comments

Most LLM cost isn't compute – it's identity drift (110-cycle GPT-4o benchmark)

https://github.com/sigmastratum/documentation/blob/main/sigma-runtime/SR-EI-03/benchmark_report_S...
1•teugent•2m ago•1 comments

Show HN: PlanEat AI, an AI iOS app for weekly meal plans and smart grocery lists

1•franklinm1715•2m ago•0 comments

A Post-Incident Control Test for External AI Representation

https://zenodo.org/records/17921051
1•businessmate•3m ago•1 comments

اdifference gbps overview find answers

1•shahrtjany•4m ago•0 comments

Measuring Impact of Early-2025 AI on Experienced Open-Source Dev Productivity

https://arxiv.org/abs/2507.09089
1•vismit2000•5m ago•0 comments

Show HN: Lazy Demos

http://demoscope.app/lazy
1•admtal•6m ago•0 comments

AI-Driven Facial Recognition Leads to Innocent Man's Arrest (Bodycam Footage) [video]

https://www.youtube.com/watch?v=B9M4F_U1eEw
1•niczem•7m ago•1 comments

Annual Production of 1/72 (22mm) scale plastic soldiers, 1958-2025

https://plasticsoldierreview.com/ShowFeature.aspx?id=27
1•YeGoblynQueenne•8m ago•0 comments

Error-Handling and Locality

https://www.natemeyvis.com/error-handling-and-locality/
1•Theaetetus•9m ago•0 comments

Petition for David Sacks to Self-Deport

https://form.jotform.com/253464131055147
1•resters•9m ago•0 comments

Get found where people search today

https://kleonotus.com/
1•makenotesfast•12m ago•1 comments

Show HN: An early-warning system for SaaS churn (not another dashboard)

https://firstdistro.com
1•Jide_Lambo•13m ago•1 comments

Tell HN: Musk has never *tweeted* a guess for real identity of Satoshi Nakamoto

1•tokenmemory•13m ago•2 comments

A Practical Approach to Verifying Code at Scale

https://alignment.openai.com/scaling-code-verification/
1•gmays•15m ago•0 comments

Show HN: macOS tool to restore window layouts

https://github.com/zembutsu/tsubame
1•zembutsu•17m ago•0 comments

30 Years of <Br> Tags

https://www.artmann.co/articles/30-years-of-br-tags
2•FragrantRiver•24m ago•0 comments

Kyoto

https://github.com/stevepeak/kyoto
2•handfuloflight•25m ago•0 comments

Decision Support System for Wind Farm Maintenance Using Robotic Agents

https://www.mdpi.com/2571-5577/8/6/190
1•PaulHoule•25m ago•0 comments

Show HN: X-AnyLabeling – An open-source multimodal annotation ecosystem for CV

https://github.com/CVHub520/X-AnyLabeling
1•CVHub520•28m ago•0 comments

Penpot Docker Extension

https://www.ajeetraina.com/introducing-the-penpot-docker-extension-one-click-deployment-for-self-...
1•rainasajeet•29m ago•0 comments

Company Thinks It Can Power AI Data Centers with Supersonic Jet Engines

https://www.extremetech.com/science/this-company-thinks-it-can-power-ai-data-centers-with-superso...
1•vanburen•32m ago•0 comments

If AIs can feel pain, what is our responsibility towards them?

https://aeon.co/essays/if-ais-can-feel-pain-what-is-our-responsibility-towards-them
3•rwmj•36m ago•5 comments

Elon Musk's xAI Sues Apple and OpenAI over App Store Drama

https://mashable.com/article/elon-musk-xai-lawsuit-apple-openai
1•paulatreides•39m ago•1 comments

Ask HN: Build it yourself SWE blogs?

1•bawis•39m ago•1 comments

Original Apollo 11 Guidance Computer source code

https://github.com/chrislgarry/Apollo-11
3•Fiveplus•45m ago•0 comments

How Did the CIA Lose a Nuclear Device?

https://www.nytimes.com/interactive/2025/12/13/world/asia/cia-nuclear-device-himalayas-nanda-devi...
1•Wonnk13•45m ago•1 comments

Is vibe coding the new gateway to technical debt?

https://www.infoworld.com/article/4098925/is-vibe-coding-the-new-gateway-to-technical-debt.html
3•birdculture•49m ago•1 comments

I made 4000 agent calls in Cursor last month. Each model has a personality

8•mike210•7mo ago
The lazy architect (OpenAI’s o3). o3 is incredibly lazy at writing code, but very good at planning. Will happily read tens of files and do deep analysis, but often struggles in scenarios where it needs to edit more than one file.

The over-eager child (Claude Sonnet 3.7 Thinking). Claude Sonnet is eager to just get going, man! It’s not the most careful, and in longer strings of tool calls it may start editing something completely unrelated to what you asked it to do.

Pretty balanced? (Gemini 2.5 Pro). Gemini 2.5 is a little more intelligent, and significantly faster and more reserved than Sonnet 3.7. Usually the best choice for writing code in multiple files.

I’ve found o4-mini to be incredibly slow and fairly mediocre, and GPT 4.1 useful only in very specific situations. My tips:

- Use o3 to plan and/or write code in one, or at most two, files. If you ask for more, it may openly revolt and just refuse to write any further.

- Always make sure Sonnet 3.7 is following a tightly scoped plan on a relatively small section of the product, and supervise it. If you have an easy change to make across many areas of your codebase, for example, letting Sonnet run (still supervised) is a perfect use of the model’s persona.

Generally what I do:

- Medium complexity: editing one file: o3. Editing multiple files: plan with o3, write with gemini-2.5.

- Low complexity: editing many files, very simple: plan with o3 if needed, write with claude-3.7. Editing many files, simple but formulaic: write a detailed prompt for GPT 4.1.

- High complexity: plan with o3, separate the work into multiple chunks, write small chunks at a time with gemini-2.5, and be very careful with each section. If I'm feeling super lazy, sometimes I just YOLO all of the sections and then fix all the bugs at the end, but this probably leads to code issues later down the line.

Would love to hear how other people are using the different models!

Comments

joegibbs•7mo ago
I really like the code that Gemini 2.5 Pro writes but it tends to stop for no reason and needs to be reprompted to start again. I'm not sure why this is. Also, what's the difference between 2.5 Pro and 2.5 Pro Max? Or Claude 3.7 and 3.7 Max?

Aside: it would be good for Cursor to add something to tell their agents not to run tool calls that run forever (like test watchers). I add this in my .mdc files, but I think it would be a good default, so that the agent can run the tests, update the code, and run them again until everything passes.
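
For illustration, here is a rough sketch of what such a rule could look like as a .mdc project rule; the frontmatter fields and the specific test commands are assumptions and would need adjusting for your own setup:

    ---
    description: Never start long-running or watch-mode commands
    alwaysApply: true
    ---
    - Do not start dev servers, test watchers, or anything else that runs
      indefinitely (e.g. `npm run dev`, vitest/jest in watch mode).
    - Run tests as a single pass instead, e.g. `npx vitest run` or
      `npx jest --ci --watchAll=false`, read the output, fix the code,
      and run the tests again until they pass.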

mike210•7mo ago
I sometimes, but rarely, turn on Max for Gemini when I need more context in a long conversation. The tool use (5 cents per tool call) can get pretty ridiculous on Claude 3.7 Sonnet Max, and I've had single requests come out to ~$2 (roughly 40 tool calls).

muzani•7mo ago
Sonnet 3.5 has a very different personality. It's less skilled, but often I opt for it because of the personality.

Deepseek is actually pretty good and underappreciated too, though it feels unreliable. The downside is tool use, but I prefer it over o3.

mike210•7mo ago
Interesting - what kinds of tasks make you reach for 3.5?

muzani•7mo ago
Pretty much whatever you're using 3.7 for. You don't need as tight a scope. It does easy things well.

A situation I had yesterday: we had two dropdowns. For simplicity, let's say they're country and state. When you pick a country, it shows that country's states. When you select a state and then change the country, it crashes because the selected state doesn't exist in the new country.

The standard solution is simple – just reset the state to null when switching countries, or better yet, check whether the selected state still exists in the new country. But the thinking models will overengineer the hell out of this. They'll add the checks deep in the service layer when they can be made just below the view layer.
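
For reference, the "simple" fix described here is only a few lines at the component level. A minimal React/TypeScript sketch (the component, data, and state names are made up for illustration):

    import { useState } from "react";

    // Illustrative data; in a real app this would come from an API.
    const statesByCountry: Record<string, string[]> = {
      US: ["California", "Texas"],
      MY: ["Selangor", "Penang"],
    };

    export function LocationPicker() {
      const [country, setCountry] = useState("US");
      const [state, setState] = useState<string | null>(null);

      function handleCountryChange(newCountry: string) {
        setCountry(newCountry);
        // The entire fix, just below the view layer: drop the selected
        // state if it doesn't exist in the newly chosen country.
        if (state !== null && !statesByCountry[newCountry]?.includes(state)) {
          setState(null);
        }
      }

      return (
        <div>
          <select value={country} onChange={(e) => handleCountryChange(e.target.value)}>
            {Object.keys(statesByCountry).map((c) => (
              <option key={c}>{c}</option>
            ))}
          </select>
          <select value={state ?? ""} onChange={(e) => setState(e.target.value)}>
            <option value="" disabled>Select a state</option>
            {(statesByCountry[country] ?? []).map((s) => (
              <option key={s}>{s}</option>
            ))}
          </select>
        </div>
      );
    }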