frontpage.

A Zero-Layer Approach to Memory Safety (1:1 IR, No Sandbox)

1•pratyagatma•22s ago•0 comments

Superagent: A Multi-Agent System for Work

https://www.airtable.com/newsroom/introducing-superagent
2•T-A•1m ago•0 comments

How LLMs Scaled from 512 to 2M Context: A Technical Deep Dive

https://amaarora.github.io/posts/2025-09-21-rope-context-extension.html
2•rzk•1m ago•0 comments

Anthropic's launch of AI legal tool hits shares in European data companies

https://www.theguardian.com/technology/2026/feb/03/anthropic-ai-legal-tool-shares-data-services-p...
2•tdchaitanya•2m ago•0 comments

OpenAI Google Play billing flaw allows receipt replay attacks

1•Agoodgirl3232•5m ago•0 comments

Human message request: sharing a brief support request about family in crisis

1•ForestSeeker•6m ago•0 comments

Why Vibe First Development Collapses Under Its Own Freedom

https://techyall.com/blog/why-vibe-first-development-collapses-under-its-own-freedom
1•birdculture•7m ago•0 comments

AVATrade's Credibility – A Fintech Perspective on User-Protection Gaps

1•ReviewShield•10m ago•0 comments

The difference between the CE mark and the China Export mark

https://www.jjcarter.com/news/ce-mark-and-the-china-export-mark
1•tokyobreakfast•12m ago•0 comments

High Performance Virtual Dedicated Server (VDS) Hosting

1•John_rdpextra•12m ago•0 comments

Most and least expensive US supermarkets

https://www.consumerreports.org/money/prices-price-comparison/most-and-least-expensive-supermarke...
2•MinimalAction•13m ago•0 comments

Organic Maps is working on live public transport schedules

https://fosstodon.org/@organicmaps/116011424160931246
1•sohkamyung•14m ago•0 comments

The Bitcoin Perpetual Motion Machine Is Starting to Sputter

https://slate.com/technology/2026/02/bitcoin-crypto-treasury-wall-street-microstrategy.html
1•decimalenough•16m ago•0 comments

Created a Doom Scrollable HN website in 7 minutes

https://hacker-feed-glance.lovable.app
1•thasaleni•19m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•Brajeshwar•20m ago•0 comments

Broken Proofs and Broken Provers

https://lawrencecpaulson.github.io/2026/01/15/Broken_proofs.html
1•RebelPotato•20m ago•0 comments

Proof of Claude Max quota regression

https://github.com/anthropics/claude-code/issues/22435
1•kstenerud•21m ago•2 comments

Price increase on .AI domains by NameCheap

2•hibijibies•22m ago•0 comments

Ktkit: A Kotlin toolkit for building server applications with Ktor

https://github.com/smyrgeorge/ktkit
1•smyrgeorge•24m ago•0 comments

Ask HN: Dynamic ROI vs. Tiling for high-speed object tracking (<20ms latency)?

1•LucaHerakles•27m ago•0 comments

Show HN: Bakeoff – Send Your Clawdbots to Work

https://www.bakeoff.app/
2•ohong•28m ago•2 comments

Agent Instructions to Command Humans

https://gist.github.com/matiaso/d8d0ee2f72270256c8e19b258d3704b1
1•aphroz•28m ago•1 comment

Saying "No" in an Age of Abundance

https://blog.jim-nielsen.com/2026/saying-no/
1•onurkanbkrc•29m ago•0 comments

Finland is a high-context society that loves defaults

https://rakhim.exotext.com/finland-is-a-high-context-society-that-loves-defaults
2•mefengl•30m ago•0 comments

Fine-tuning open LLM judges to outperform GPT-5.2

https://www.together.ai/blog/fine-tuning-open-llm-judges-to-outperform-gpt-5-2
1•zainhsn•31m ago•0 comments

The Coming AI Compute Crunch

https://martinalderson.com/posts/the-coming-ai-compute-crunch/
1•swolpers•31m ago•0 comments

Marc Andreessen: Defining the Voice of a Startup (2011) [video]

https://www.youtube.com/watch?v=DpNso4MQlPE
1•walterbell•35m ago•0 comments

A Better Figma MCP: Letting Claude Design

https://cianfrani.dev/posts/a-better-figma-mcp/
1•Ozzie_osman•38m ago•0 comments

Launching the Rural Guaranteed Minimum Income Initiative

https://blog.codinghorror.com/launching-the-rural-guaranteed-minimum-income-initiative/
3•foxfired•39m ago•0 comments

Lady Jane Grey

https://vvesh.de/death/nine-days-queen
3•pryncevv•40m ago•0 comments

Ask HN: Does anyone keep prompts and reasoning as part of dev cycle?

3•sshadmand•1h ago
We've never been able to read developers' minds, so we've relied on documentation and comments to capture intent, decisions, and context, even though most engineers dislike writing them and even fewer enjoy reading them.

Now, with coding agents, we can in a sense read the “mind” of the system that helped build the feature: why it did what it did, what the gotchas are, and any follow-up action items.
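For concreteness, the kind of record I'm imagining would look something like this (a rough sketch; the field names are mine, not an existing standard):

```python
from dataclasses import dataclass, field

@dataclass
class AgentDecisionRecord:
    """Hypothetical record of an agent-built change; not an existing standard."""
    feature: str                     # what was built
    prompt: str                      # the instruction that drove the work
    rationale: str                   # why the agent did it the way it did
    gotchas: list[str] = field(default_factory=list)        # surprises, constraints, trade-offs
    follow_ups: list[str] = field(default_factory=list)     # action items left open
    files_touched: list[str] = field(default_factory=list)  # where to look in the codebase

record = AgentDecisionRecord(
    feature="retry logic for webhook delivery",
    prompt="Add exponential backoff to the webhook sender",
    rationale="Chose jittered backoff to avoid synchronized retry storms",
    gotchas=["max delay is capped by the queue's visibility timeout"],
    follow_ups=["add a metric for retries exhausted"],
    files_touched=["webhooks/sender.py"],
)
```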

Today I decided to paste my prompts and agent interactions into Linear issues instead of writing traditional notes. It felt clunky, but I stopped and thought, "is this valuable?" It's the closest thing to a record of why a feature ended up the way it did.
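A minimal sketch of what automating that paste could look like, assuming Linear's GraphQL API and its commentCreate mutation (the issue id and transcript format here are made up for illustration):

```python
import os
import requests  # third-party; pip install requests

LINEAR_API = "https://api.linear.app/graphql"

def attach_transcript(issue_id: str, prompt: str, agent_notes: str) -> None:
    """Post a prompt + agent rationale as a comment on a Linear issue.

    Assumes Linear's GraphQL commentCreate mutation and an API key in
    LINEAR_API_KEY; check Linear's API docs for the exact schema.
    """
    body = (
        "### Agent transcript\n\n"
        f"**Prompt**\n\n{prompt}\n\n"
        f"**Agent notes / rationale**\n\n{agent_notes}\n"
    )
    mutation = """
    mutation($issueId: String!, $body: String!) {
      commentCreate(input: { issueId: $issueId, body: $body }) { success }
    }
    """
    resp = requests.post(
        LINEAR_API,
        json={"query": mutation, "variables": {"issueId": issue_id, "body": body}},
        headers={"Authorization": os.environ["LINEAR_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
```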

So I'm wondering:

- Is anyone intentionally treating agent prompts, traces, or plans as a new form of documentation?
- Are there tools that automatically capture and organize this into something more useful than raw logs? (A rough sketch of what I have in mind is below.)
- Is this just more noise and not useful with agentic dev?
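On the tools question, the closest thing I can picture is a thin capture layer that wraps each agent exchange and files it next to the commit it produced; a rough sketch, with all paths and names hypothetical:

```python
import json
import subprocess
import time
from pathlib import Path

LOG_DIR = Path("docs/agent-log")  # hypothetical location, versioned with the repo

def record_exchange(prompt: str, response: str, tags: list[str] | None = None) -> Path:
    """Append one prompt/response pair, keyed by the current git commit,
    so the 'why' travels with the code instead of living in raw chat logs."""
    commit = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    path = LOG_DIR / f"{commit}.jsonl"
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "commit": commit,
        "tags": tags or [],
        "prompt": prompt,
        "response": response,
    }
    with path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return path
```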

It feels like there's a new documentation pattern emerging around agent-native development, but I haven't seen it clearly defined or productized yet. Curious how others are approaching this.

Comments

sshadmand•1h ago
Like with most agentic dev I do these days, I go back and forth between "I need this" and "I used to need this when humans were involved, but is it just vestigial?" a lot. In this case: why am I documenting at all if the agent is pretty good at understanding things quickly via the context and indexes it creates from the code itself?

...on the other hand... since we still have humans using the features and interacting with them, knowing what is going on and why the agent made a decision (for better or worse) doesn't seem like something to let go of.

msejas•39m ago
I approach this by always asking Opus to send an agent to explore and trace how a pipeline works; even better if I have an integration test. Once it's fully mapped out, I might ask it to dump everything it discovered into a markdown doc, clear the context, and start the task. The docs folder keeps the information intact for future development.
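Roughly, the loop looks like this (sketched with the Anthropic Python SDK as a stand-in for the actual agent tooling; the model id, prompt, and docs/ path are illustrative):

```python
from pathlib import Path
import anthropic  # official Anthropic Python SDK; pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

EXPLORE_PROMPT = (
    "Trace how the webhook delivery pipeline works in this repo: entry points, "
    "data flow, external calls, and failure handling. Write it up as markdown."
)

def explore_and_dump(source_snippets: str, out_path: str = "docs/webhook-pipeline.md") -> str:
    """One exploration pass: map the pipeline and write the findings to docs/.

    The actual task is then started in a fresh context that reads this doc,
    instead of dragging the whole exploration thread along.
    """
    msg = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=4000,
        messages=[{"role": "user", "content": f"{EXPLORE_PROMPT}\n\n{source_snippets}"}],
    )
    doc = msg.content[0].text
    Path(out_path).parent.mkdir(parents=True, exist_ok=True)
    Path(out_path).write_text(doc)
    return doc
```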

Managing context is by far the most important skill for being effective with LLMs, along with having clean code already in the codebase.

As they read your files, you are one-shot training the LLM in how to write code and how you structure it, and it will adapt. With clean codebases, I found the LLMs output well-documented, well-logged, and even tested functions by default, because the other files they interacted with were like that: 'it learns'.

Additionally, you have to think about how they train and evaluate the model. There are so many use cases to cover that I'm pretty sure the reinforcement learning phase isn't run on huge, long threads; they're benchmarking and optimizing from fresh-context starts, and you should do the same as much as possible in your own tasks.