frontpage.

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

43•UmYeahNo•1d ago•27 comments

Ask HN: Non-profit, volunteer-run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•3h ago•1 comment

Ask HN: Non AI-obsessed tech forums

18•nanocat•6h ago•13 comments

Ask HN: Ideas for small ways to make the world a better place

9•jlmcgraw•8h ago•17 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

42•Invictus0•1d ago•11 comments

AI Regex Scientist: A self-improving regex solver

6•PranoyP•10h ago•1 comment

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•512 comments

Ask HN: Who is hiring? (February 2026)

312•whoishiring•4d ago•511 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•6h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Why LLM providers sell access instead of consulting services?

4•pera•16h ago•13 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•18h ago•7 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

17•jchung•1d ago•12 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Is it just me or are most businesses insane?

7•justenough•1d ago•5 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•3d ago•122 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•2 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•1d ago•1 comment

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•4 comments

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Have you been fired because of AI?

17•s-stude•3d ago•15 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•3d ago•1 comment

Test management tools for automation heavy teams

2•Divyakurian•1d ago•2 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Has anybody moved their local community off of Facebook groups?

23•madsohm•4d ago•17 comments

Ask HN: How do you get comfortable with shipping code you haven't reviewed?

7•fnimick•1mo ago
This is the advice I've gotten on how to adapt to AI-driven development at breakneck speed, to the point of having AI tooling write and ship projects in languages the 'operator' doesn't even know. How do you get confidence in a workflow where, e.g., a team of agents does development, another team of agents does code review and testing, and then it is shipped without a human ever verifying the implementation?

I hear stories of startup devs deploying 10-30k+ lines of code per day and that a single dev should now be able to build complete products that would ordinarily take engineer-years in under a month. Is this realistic? How do you learn to operate like this?

Comments

Sevii•1mo ago
You get confidence in things by doing them. If you don't have experience doing something, you aren't going to be confident at it. Try vibe coding a few small projects. See how it works out. Try different ways of structuring your instructions to the 'agents'.
fnimick•1mo ago
Are there public examples of "good instruction" and an iteration process? I have tried and have not been very successful at getting Claude Code to generate correct code for medium-sized projects or features.
Sevii•1mo ago
Anthropic has a short training course: https://www.coursera.org/learn/claude-code-in-action. There aren't really a lot of best practices at this point because the technology has improved significantly in 2025.
mkranjec•1mo ago
Heads up, this is a paid course.
Jeremy1026•1mo ago
I had Claude write a piano webapp (https://webpiano.jcurcioconsulting.com) as a "let's see how this thing works" project. I was pleasantly surprised by the ease of it.

I actually just put together a write up showing my prompts and explaining what was generated after each, if you're interested at all https://jcurcioconsulting.com/posts/how-i-used-claude-code-t...

mattmanser•1mo ago
You don't. Whoever's telling you those stories has a very long nose.
2rsf•1mo ago
100% agree with the "you don't", but I wouldn't be surprised if young startups or highly stressed teams delivering low-risk products will do just that and deliver unreviewed code
didgetmaster•1mo ago
Sometimes it feels like there is an awful lot of software out there that shipped without much review. This was happening long before AI arrived on the scene.

Hard to tell if anyone was 'comfortable' with that.

muzani•1mo ago
Likely a good deal of test coverage. At the far end of this is something like Facebook, which has everything monitored by A/B tests. If a change breaks something serious, an alarm triggers. Move fast and break things isn't a new way of doing things, so you might as well pick up a framework that works.
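The "alarm triggers" idea above can be sketched as a simple guardrail check on an experiment metric. Everything here is hypothetical for illustration (the metric name, the 5% threshold, the function itself); it is not Facebook's actual tooling:

```python
# Illustrative guardrail: compare a key metric between control and treatment
# and flag a drop larger than the allowed margin. Names and thresholds are
# made up for the sketch.

def guardrail_triggered(control_rate: float, treatment_rate: float,
                        max_relative_drop: float = 0.05) -> bool:
    """Return True if the treatment metric fell more than the allowed margin."""
    if control_rate <= 0:
        return False  # nothing meaningful to compare against
    relative_drop = (control_rate - treatment_rate) / control_rate
    return relative_drop > max_relative_drop

# Example: conversion dropped from 10% to 9% (a 10% relative drop),
# which exceeds the 5% guardrail, so a rollout would be halted.
print(guardrail_triggered(0.10, 0.09))   # True
print(guardrail_triggered(0.10, 0.098))  # False (2% relative drop)
```

In a real system this check would run continuously against live metrics and page someone (or auto-revert) when it fires, which is what makes shipping lightly-reviewed code survivable.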
chaidhat•1mo ago
Hi, I'm running a 4-person startup based in Bangkok, Thailand, and we differentiate code quality based on priority. We try to ship only clean code on master, but when we talk with clients and they want a demo of a new product/feature, we use AI to rapidly create an MVP to see if it aligns with their needs. If they are happy, we then refine this MVP until we are happy with the code through manual review and refactoring, or we even rewrite it.

We make sure our data is shaped correctly, hot paths are tested, and things are well separated by domain (domain-driven design). DDD ensures that if the code is shitty, only that part of the project is shitty. Only when the code is acceptable do we rebase to master.

I try to let engineers talk to clients so that they learn the most from them first-hand, and then let them dictate smaller tasks for AI to do; they are more product-managery than a typical engineer would have been ten years ago. Do you think this is a good approach? I'm also curious what other startups do.

tl;dr: we aren't confident in the code we write quickly, but we take the time to make sure we're confident before we merge to master
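For what it's worth, the "hot paths are tested" part of a workflow like this can be as simple as cheap assertion checks that run on every merge. This is a minimal sketch with an invented order-total function; the names, VAT rate, and rules are hypothetical, not from the commenter's actual codebase:

```python
# Hypothetical hot-path function: sum (name, qty, unit_price) line items
# and apply VAT. Invented purely to illustrate a cheap pre-merge check.

def order_total(items: list[tuple[str, int, float]], vat_rate: float = 0.07) -> float:
    """Return the VAT-inclusive total for a list of line items."""
    subtotal = sum(qty * price for _, qty, price in items)
    return round(subtotal * (1 + vat_rate), 2)

# Hot-path checks: fast enough to run on every merge, and they catch
# an AI-generated rewrite that silently changes the billing math.
assert order_total([("widget", 2, 10.0)]) == 21.4
assert order_total([]) == 0.0
```

Checks this cheap don't prove the code is clean, but they do make "refine the MVP, then rebase to master" safer, since the critical behavior is pinned down before any rewrite.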

binsquare•1mo ago
The same way you learn to trust other devs to do work.

I see AI as a tool, not a peer. I trust a peer when we've aligned on the requirements of the project and where we want to go.

So what's the answer to getting confidence in a workflow where agents develop, review, and test without a human verifying the implementation?

I personally don't see me getting there.

LarryMade2•1mo ago
I can see companies getting away with it with new coders who don't understand the repercussions of releasing buggy code or know much about long-term maintenance.

You know, this sounds familiar: the big outsourcing boom to less expensive development in other countries. Sure, they can write the code, but it's either shipped buggy or takes a lot of management and hand-holding of the outsourced development teams to get everything right.