
Programming with AI: You're Probably Doing It Wrong

https://www.devroom.io/2025/08/08/programming-with-ai-youre-probably-doing-it-wrong/
22•ariejan•6mo ago

Comments

ActionHank•6mo ago
"If you are only using your hammer to hammer nails, you're doing it wrong" then goes on to explain how you should use agents.

I would've thought that following the initial argument and the progression to the latest trend we would've ended at use agents and write specs and these several currently popular MCPs.

I guess my rant is it to arrive at the point that no one knows what the "correct" way to use them is yet. A hammer has many uses.

mattkrick•6mo ago
I want to believe, and I promise I'm not trying to be a luddite here. Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

Agents are great at familiarizing me with a new codebase. They're great at debugging because even when they're wrong, they get me thinking about the problem differently so I ultimately get the right solution quicker. I love using it like a super-powered search tool and writing single functions or SQL queries about the size of a unit test. However, reviewing a junior's code ALWAYS takes more time than writing it myself, and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?

9rx•6mo ago
> Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

I would say yes. I have been blown away a couple of times. But find it is like playing a slot machine. Occasionally you win — most of the time you lose. As long as my employer is willing to continue to cover the bet, I may as well pull the handle. I think it would be pretty hard to convince myself to pay for it myself, though.

ath3nd•6mo ago
> and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?

The emperor is stark naked, but the hype is making people see clothes where there is only a hairy, shriveled old man.

Sure, I can produce "working" code with Claude, but I have never been able to produce good working code. Yes, it can write an okay-ish unit test (almost 100% identical to how I'd have written it), and on a well-structured codebase (not built with Claude), with some preparation, it can kind of produce a feature. On more interesting problems, though, it's just slop, and you've got to keep trying and prodding until it produces something remotely reasonable.

It's addictive to watch it conjure up trash while you constantly try to steer it in the right direction, but I have never ever ever been able to achieve the code quality level that I am comfortable with. Fast prototype? Sure. Code that can pass my code review? Nah.

What is also funny is how non-deterministic the quality of the output is. Sometimes it really does feel like you almost fly off with it, and then bam, garbage. It feels like roulette, and you gotta keep spinning the wheel to get your dopamine hit/reward.

All while wasting money and time, and still it ends up far far worse than you doing it in the first place. Hard pass.

rco8786•6mo ago
“Kinda”. I run Claude code on a parallel copy of our monorepo, while I use my primary copy.

I typically only give Claude the boring stuff. Refactors, tech debt cleanup, etc. But occasionally will give it a real feature if the urgency is low and the feature is extremely well defined.

That said, I still spend a considerable amount of time reviewing and massaging Claude’s code before it gets to PR. I haven’t timed myself or anything, but I suspect that when the task is suitable for an LLM, it’s maybe 20-40% faster. But when it’s not, it’s considerably slower and sometimes just fails completely.

ramesh31•6mo ago
>Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

Yes. Claude Code has turned quarter-long initiatives into a few afternoons of prompting for me, in the context of multiple different massive legacy enterprise codebases. It all comes down to reaching that "Jesus take the wheel" level of trust in it. You have to be okay with letting it go off and potentially waste hundreds of dollars in tokens giving you nonsense, which it will sometimes. But when it doesn't, it's like magic, and that makes the times it does worth the cost. Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place. It has fundamentally changed what my team and I are able to accomplish.

glhaynes•6mo ago
>Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place.

Just speculating here, but I wouldn't be surprised if the truth of both parts of this sentence varies quite a bit amongst users of AI coding tools and their various applications; and, if so, if that explains a lot of the discrepancy amongst reports of success/enthusiasm levels.

jtfrench•6mo ago
This article was a bit confusing for me. It starts off by describing what "doing it wrong" looks like (okay). It then goes on to talk about Agents. Perhaps it's just that my human brain needs a firmware update, but I was expecting the "what doing it wrong looks like" section to be followed by a "what doing it right looks like" section. Instead, the next paragraph just begins with "Agents".

Sure, one could surmise that perhaps "doing it right" means "using Agents", but that's not even how the article reads:

> "To make AI development work for you, you’ll need to provide your AI assistant with two things: the proper context and specific instructions (prompts) on how to behave under certain circumstances."

This, to me, doesn't necessitate the use of agents, so to then enter a section on agents seems to skip over a potentially implied logical connection between the problem in the "doing it wrong" section and how it's solved in the "Agents" section.

Copying code snippets into web UIs and testing manually is slow and clunky, but Agents are essentially just automations around these same core actions. I feel this article could've made a stronger point by getting at the core of what it means to do it wrong.

• Is "doing it wrong" indicated by the time wasted by not using an agentic mechanism vs manual manipulation?

• Is "doing it wrong" indicated by manually switching between tools instead of using MCP to automate tool delegation?

Having written several non-trivial agents myself using Gemini and OpenAI's APIs, the main difference between handing off a task to an agent and manually copy/pasting into chat UIs is efficiency — I usually first do a task manually using chat UIs, but once I have a pattern established, or have identified a set of tools to validate responses, I can then "agentify" it if it's something I need to do repeatedly. But the quality of both approaches is still dependent on the same core principles: adequate context (no more nor less than what keeps the LLM's attention on the task at hand) and adequate instructions for the task (often with a handful of examples). In this regard, I agree with the author, as correct context + instructions are the key ingredients to a useful response. The agentic element is an efficiency layer on top of those key ingredients which frees up the dev from having to manually orchestrate, and potentially avoids human error (and potentially introduces LLM error).
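To make the "agentify it once the pattern is established" step concrete, here's a rough sketch of what that loop tends to look like. All the names here are hypothetical, and call_model is a stub standing in for a real LLM API request — the point is just that the "agent" is fixed context + fixed instructions + your own validation tools wrapped in a retry loop:

```python
# Sketch of "agentifying" a repeated chat task. The agent is a loop that feeds
# the model the same context and instructions you'd paste into a chat UI, then
# validates the response with the same tools you'd use to check it by hand.

def call_model(context: str, instructions: str, task: str) -> str:
    # Placeholder for an actual chat-completions API call.
    return f"result for: {task}"

def validate(response: str) -> bool:
    # The same check you'd run manually on a copy-pasted answer
    # (a linter, a test run, a schema check, etc.).
    return response.startswith("result for:")

def run_agent(tasks, context, instructions, max_retries=2):
    results = {}
    for task in tasks:
        for _ in range(max_retries + 1):
            response = call_model(context, instructions, task)
            if validate(response):   # keep only answers that pass the tools
                results[task] = response
                break
        else:
            results[task] = None     # gave up: flag for human review
    return results

print(run_agent(["rename helper", "add a test"], "repo summary...", "be terse"))
```

Nothing here improves quality over doing it in a chat window; it only removes the manual orchestration, which is exactly the efficiency-layer point above.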

Am I missing something here?

kingkawn•6mo ago
Will there be any more junior engineers?

By the end of the current generation's careers, a few decades from now, surely it will be able to do everything.

jmull•6mo ago
Well, F-.

I can't believe how far I got on that article before it finally dawned on me that it's just some AI slop. (Well, that's the charitable explanation.)

Chinjut•6mo ago
It claims at the bottom to be hand-written. But one can manually write slop too.