
The Future of Everything Is Lies, I Guess: New Jobs

https://aphyr.com/posts/419-the-future-of-everything-is-lies-i-guess-new-jobs
107•aphyr•2h ago•55 comments

God Sleeps in the Minerals

https://wchambliss.wordpress.com/2026/03/03/god-sleeps-in-the-minerals/
155•speckx•2h ago•47 comments

Show HN: Every CEO and CFO change at US public companies, live from SEC

https://tracksuccession.com/explore
95•porsche959•2h ago•43 comments

Want to Write a Compiler? Just Read These Two Papers (2008)

https://prog21.dadgum.com/30.html
293•downbad_•5h ago•83 comments

Good Sleep, Good Learning (2012)

https://super-memory.com/articles/sleep.htm
212•downbad_•6h ago•103 comments

MCP as Observability Interface: Connecting AI Agents to Kernel Tracepoints

https://ingero.io/mcp-observability-interface-ai-agents-kernel-tracepoints/
30•ingero_io•2h ago•11 comments

Elevated errors on Claude.ai, API, Claude Code

https://claudestatus.com/
153•redm•52m ago•123 comments

Gemini Robotics-ER 1.6

https://deepmind.google/blog/gemini-robotics-er-1-6/
66•markerbrod•1h ago•9 comments

Costasiella kuroshimae – Solar Powered animals, that do indirect photosynthesis

https://en.wikipedia.org/wiki/Costasiella_kuroshimae
94•vinnyglennon•3d ago•38 comments

Do you even need a database?

https://www.dbpro.app/blog/do-you-even-need-a-database
34•upmostly•3h ago•56 comments

Wacli – WhatsApp CLI

https://github.com/steipete/wacli
166•dinakars777•8h ago•120 comments

Fixing a 20-year-old bug in Enlightenment E16

https://iczelia.net/posts/e16-20-year-old-bug/
210•snoofydude•10h ago•107 comments

Metro stop is Ancient Rome's new attraction

https://www.bbc.com/travel/article/20260408-a-150-metro-ticket-to-ancient-rome
68•Stevvo•5d ago•13 comments

Forcing an Inversion of Control on the SaaS Stack

https://www.100x.bot/a/client-side-injection-inversion-of-control-saas
4•shardullavekar•4d ago•0 comments

Google Gemma 4 Runs Natively on iPhone with Full Offline AI Inference

https://www.gizmoweek.com/gemma-4-runs-iphone/
177•takumi123•10h ago•110 comments

We ran Doom on a 40 year old printer controller (Agfa Compugraphic 9000PS) [video]

https://www.youtube.com/watch?v=cltnlks2-uU
18•zdw•3d ago•3 comments

The Deepfake Nudes Crisis in Schools Is Worse Than You Thought

https://www.wired.com/story/deepfake-nudify-schools-global-crisis/
27•smurda•46m ago•29 comments

Pretty Fish: A better mermaid diagram editor

https://pretty.fish/
47•pastelsky•5d ago•11 comments

AI ruling prompts warnings from US lawyers: Your chats could be used against you

https://www.reuters.com/legal/government/ai-ruling-prompts-warnings-us-lawyers-your-chats-could-b...
60•alephnerd•2h ago•30 comments

US v. Heppner (S.D.N.Y. 2026) no attorney-client privilege for AI chats [pdf]

https://fingfx.thomsonreuters.com/gfx/legaldocs/xmvjyjekkpr/Rakoff%20-%20order%20-%20AI.pdf
45•1vuio0pswjnm7•1h ago•30 comments

Academic fraud may be the symptom of a more systemic problem

https://www.voxweb.nl/en/academic-fraud-may-be-the-symptom-of-a-much-more-systemic-problem
24•the-mitr•4h ago•18 comments

Study: Back-to-basics approach can match or outperform AI in language analysis

https://www.manchester.ac.uk/about/news/back-to-basics-approach-can-match-or-outperform-ai/
11•giuliomagnifico•3h ago•0 comments

Your Backpack Got Worse on Purpose

https://www.worseonpurpose.com/p/your-backpack-got-worse-on-purpose
134•113•4h ago•126 comments

New Modern Greek

https://redas.dev/NewModernGreek/
4•holoflash•2d ago•3 comments

Sam Vimes 'Boots' Theory of Socio-Economic Unfairness

https://terrypratchett.com/explore-discworld/sam-vimes-boots-theory-of-socio-economic-unfairness/
45•latexr•1h ago•33 comments

Dependency cooldowns turn you into a free-rider

https://calpaterson.com/deps.html
160•pabs3•13h ago•110 comments

MIT Radiation Laboratory

https://www.ll.mit.edu/about/history/mit-radiation-laboratory
27•stmw•3d ago•7 comments

A communist Apple II and fourteen years of not knowing what you're testing

https://llama.gs/blog/index.php/2026/04/10/friday-archaeology-a-communist-apple-ii-and-fourteen-y...
215•major4x•4d ago•97 comments

My adventure in designing API keys

https://vjay15.github.io/blog/apikeys/
90•vjay15•3d ago•70 comments

Direct Win32 API, Weird-Shaped Windows, and Why They Mostly Disappeared

https://warped3.substack.com/p/direct-win32-api-weird-shaped-windows
145•birdculture•6h ago•82 comments

Why Vibe Coding Fails

6•10keane•1h ago
i am using claude to maintain an agent loop that pauses to ask for user approval before important tool calls. while doing some bug fixes, i identified some clear patterns and reasons why vibe coding can fail for people who don't have technical knowledge and architecture expertise.

let me describe my workflow first - this has been my workflow across hundreds of successful sessions:

1. identify bugs through dogfooding
2. ask claude code to investigate the codebase for three potential root causes
3. paste the root causes and proposed fixes into a claude project where i store all the architecture docs and design decisions, for it to evaluate
4. discuss with claude in the project to write a detailed task spec - the task spec has a specified format with all sorts of tests
5. give it back to claude code to implement the fix

in today's session, the root cause analysis was still great, but the proposed fixes were so bad that i really think this is how most vibe-coded projects lose maintainability in the long run.

here are two of the root causes and proposed fixes:

bug: the agent asks for user approval, but sometimes the approval popup doesn't show up. i tried sending a message to unstick it; the message got silently swallowed. the agent looks dead, and i needed to restart the entire thing.

claude's evaluation, root cause 1: the approval popup is sent once over a live connection. if the user's ui isn't connected at that moment (page refresh, phone backgrounded, flaky connection), they never see it. no retry, no recovery.
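the failure mode (and the obvious shape of a real fix: keep the request pending and re-deliver on reconnect) fits in a few lines. this is a minimal sketch, not the author's actual code; every class and field name here is invented for illustration.

```python
# Sketch of the failure mode: the approval request is pushed exactly
# once over whatever connection happens to be live. All names are
# hypothetical, not the author's actual implementation.

class OneShotApprovalBus:
    def __init__(self):
        self.client_connected = True
        self.delivered = []  # approval requests the UI actually saw

    def request_approval(self, tool_call_id):
        if self.client_connected:
            self.delivered.append(tool_call_id)
            return True
        return False  # silently lost: no queue, no retry, no recovery


class ResendOnReconnectBus(OneShotApprovalBus):
    """One plausible real fix: remember unresolved approvals and
    re-deliver them when the UI reconnects."""

    def __init__(self):
        super().__init__()
        self.pending = []

    def request_approval(self, tool_call_id):
        self.pending.append(tool_call_id)  # remember until resolved
        return super().request_approval(tool_call_id)

    def on_reconnect(self):
        self.client_connected = True
        for tc in self.pending:
            if tc not in self.delivered:
                self.delivered.append(tc)  # re-deliver missed requests
```

the point of the second class is that the fix lives in delivery (re-send on reconnect), not in persistence.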

this is actually true.

proposed fix: "let's save approval state to disk so it survives crashes." sounds fine, but the key is that by design, if things crash, the agent cold-resumes from the session log, and it won't pick up the approval state anyway. the fix just adds schema complexity and is completely useless.
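why the persisted state is dead weight can be shown in a few lines. this is a hypothetical sketch of the design described above; the entry shapes and ids are invented.

```python
# Hypothetical sketch: approval state persisted to disk (the proposed
# fix) sitting next to a session log that is, by design, the *only*
# input to crash recovery. Names and shapes are invented.

approval_state_on_disk = {"tc_42": "awaiting_approval"}  # the proposed fix

session_log = [
    {"type": "user_message", "text": "please refactor the parser"},
    {"type": "tool_call", "id": "tc_42", "name": "delete_file"},
    # crash happened here; no tool_result was ever recorded
]

def cold_resume(log):
    # by design, recovery replays only the session log;
    # the approval-state file is never consulted
    return list(log)

resumed = cold_resume(session_log)
# approval_state_on_disk is never read: schema complexity, zero benefit
```

nothing in the recovery path ever touches the persisted approval state, so persisting it changes nothing about the bug.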

root cause 2: when an approval gets interrupted (daemon crash, user restart), there's an orphan tool_call in the session history with no matching tool_result.

proposed fix: "write a synthetic tool_result to keep the session file structurally valid." sounds clean. but i asked: who actually breaks on this? the LLM API? no, it handles missing results. the session replay? no, it reads what's there. the orphan tool_call accurately represents what happened: the tool was called but never completed. that's the truth. writing a fake result to paper over it introduces a new write-coordination concern (when exactly do you write the fake result? what if the daemon crashes during the write?) to solve a problem that doesn't exist. the session file isn't "broken." it's accurate.
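the claim that nothing actually breaks on an orphan tool_call is easy to demonstrate: a replay that reads what's there can classify the orphan directly, no synthetic tool_result required. a minimal sketch with invented entry shapes:

```python
# Sketch of a session log containing an orphan tool_call (tc_2).
# The replay below reads what's there and treats the orphan as
# exactly what it is: called but never completed. Entry shapes
# are invented for illustration.

session = [
    {"type": "tool_call", "id": "tc_1", "name": "run_tests"},
    {"type": "tool_result", "id": "tc_1", "output": "ok"},
    {"type": "tool_call", "id": "tc_2", "name": "deploy"},
    # daemon crashed here: tc_2 has no matching tool_result
]

def replay(entries):
    completed = {e["id"] for e in entries if e["type"] == "tool_result"}
    for e in entries:
        if e["type"] != "tool_call":
            continue
        if e["id"] in completed:
            yield (e["id"], "completed")
        else:
            # the orphan accurately records the interruption;
            # no synthetic tool_result is needed to "repair" the file
            yield (e["id"], "interrupted")

statuses = list(replay(session))
```

the consumer derives "interrupted" from the absence of a result, so the log stays an honest record instead of one patched with fake events.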

claude had the full architecture docs, the codebase, and over a hundred sessions of project history in context. it still reached for the complex solution because it LOOKS like good engineering. it never asked "does this even matter after a restart?"

i have personally encountered this preference for seemingly-robust over-engineering multiple times. and i genuinely believe this is where a human operator actually should step in, instead of giving a one-sentence requirement and watching the agent do all sorts of "robust" engineering.

Comments

boesboes•1h ago
> because it LOOKS like good engineering

That is the whole problem imho. I've found that I can use LLMs to do programming only if I fully understand the problem and solution. Because if I don't, it will just pretend that I'm right and happily spend hours trying to implement a broken idea.

The problem is that it's very hard to know whether my understanding of something is sufficient to have claude propose a solution, and for me to know if it is going to work. If my understanding of the problem is incorrect or incomplete, the plan will look fine to me, but it will be wrong.

If I start working on something from poor understanding, I will notice and improve my understanding. An LLM will just deceive you and try to do the impossible anyway.

Also, it overcooks everything; at least 50-60% of the code it generates is pointlessly verbose abstraction. again: imho, ymmv, ianal, not financial advice ;)

10keane•1h ago
exactly. vibe coding only works when you fully understand the problem and know precisely how to solve it. the ai just does the dirty implementation work for you.

that is another reason why i separate product/architecture design and implementation into two agents with isolated context in my workflow. i can always iterate with the product agent to refine my understanding and THEN ask the coding agent to implement it. by that time i already have the ability to make proper judgments and evaluate the coding agent's output.