
Comparing manual vs. AI requirements gathering: 2 sentences vs. 127-point spec

2•thesssaism•10h ago
We took a vague 2-sentence client request for a "Team Productivity Dashboard" and ran it through two different discovery processes: a traditional human analyst approach vs an AI-driven interrogation workflow.

The results were uncomfortable. The human produced a polite paragraph summarizing the "happy path." The AI produced a 127-point technical specification that highlighted every edge case, security flaw, and missing feature we usually forget until Week 8.

Here is the breakdown of the experiment and why I think "scope creep" is mostly just discovery failure.

The Problem: The "Assumption Blind Spot"

We’ve all lived through the "Week 8 Crisis." You’re 75% through a 12-week build, and suddenly the client asks, "Where is the admin panel to manage users?" The dev team assumed it was out of scope; the client assumed it was implied because "all apps have logins."

Humans have high context. When we hear "dashboard," we assume standard auth, standard errors, and standard scale. We don't write it down because it feels pedantic.

AI has zero context. It doesn't know that "auth" is implied. It doesn't know that we don't care about rate limiting for a prototype. So it asks.

The Experiment

We fed the same input to a senior human analyst and an LLM workflow acting as a technical interrogator.

Input: "We need a dashboard to track team productivity. It should pull data from Jira and GitHub and show us who is blocking who."

Path A: Human Analyst
Output: ~5 bullet points, focused on the UI and the "business value."
Assumed: Standard Jira/GitHub APIs, single tenant, standard security.
Result: A clean, readable, but technically hollow summary.

Path B: AI Interrogator
Output: 127 distinct technical requirements, focused on failure states, data governance, and edge cases.
Result: A massive, boring, but exhaustive document.
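The author offers the actual prompts in the comments, so the workflow below is not theirs — it is a deterministic sketch of the *shape* of an interrogator pass: instead of summarizing the request, cross every integration it mentions with a fixed set of pedantic question categories. The category list, templates, and `interrogate` function are all hypothetical.

```python
# Illustrative sketch of an "interrogator" pass (not the author's actual
# LLM prompts): breadth comes from enumerating question categories per
# integration, not from the model being clever.

CATEGORIES = {
    "auth": "Who can see {subject} data, and what roles exist (admin/member/viewer)?",
    "rate_limits": "How do we handle 429 responses from the {subject} API during a sync?",
    "retention": "How long do we store {subject} records, and is there a purge policy?",
    "empty_state": "What does the dashboard show when {subject} returns zero items?",
    "failure": "What happens if the {subject} integration is down mid-sync?",
}

def interrogate(request: str, known_integrations=("Jira", "GitHub")) -> list[str]:
    """Expand a vague request into explicit questions, one per
    (integration, category) pair the request implies."""
    mentioned = [name for name in known_integrations if name.lower() in request.lower()]
    return [
        template.format(subject=name)
        for name in mentioned
        for template in CATEGORIES.values()
    ]

questions = interrogate(
    "We need a dashboard to track team productivity. "
    "It should pull data from Jira and GitHub and show us who is blocking who."
)
print(len(questions))  # 2 integrations x 5 categories = 10 questions
```

In the real workflow an LLM generates the questions; the point of the sketch is only that exhaustiveness falls out of enumeration, which is also why the output skews long and noisy.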

The Results

The volume difference (5 vs 127) is striking, but the content difference is what matters. The AI explicitly defined requirements that the human completely "blind spotted":

- Granular RBAC: "What happens if a junior dev tries to delete a repo link?"
- API Rate Limits: "How do we handle 429 errors from GitHub during a sync?"
- Data Retention: "Do we store the Jira tickets indefinitely? Is there a purge policy?"
- Empty States: "What does the dashboard look like for a new user with 0 tickets?"

The human spec implied these were "implementation details." The AI treated them as requirements. In my experience, treating RBAC as an implementation detail is exactly why projects go over budget.
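To make one of those "implementation details" concrete: handling a 429 during a GitHub sync usually means honoring the `Retry-After` header when present and otherwise backing off exponentially. A minimal sketch, not from the post — `fetch` and `sleep` are injected callables so the retry policy itself stays pure and testable; real code would wrap an HTTP client.

```python
# Minimal retry-on-429 policy, the kind of requirement the 127-point
# spec made explicit. fetch() returns (status, headers, body); sleep()
# is injected so the policy can be exercised without real waiting.

def sync_with_retry(fetch, sleep, max_attempts=5):
    """Call fetch() until it succeeds, honoring Retry-After on 429s
    and falling back to exponential backoff (capped at 60s)."""
    for attempt in range(max_attempts):
        status, headers, body = fetch()
        if status != 429:
            return body
        retry_after = headers.get("Retry-After")
        delay = float(retry_after) if retry_after else min(60, 2 ** attempt)
        sleep(delay)
    raise RuntimeError(f"sync failed: rate limited after {max_attempts} attempts")

# Simulate: two rate-limited responses, then success.
responses = iter([
    (429, {"Retry-After": "1"}, None),
    (429, {}, None),
    (200, {}, {"synced": 42}),
])
delays = []
result = sync_with_retry(lambda: next(responses), delays.append)
print(result, delays)  # {'synced': 42} [1.0, 2]
```

The human spec never had to say any of this; the AI spec forced the question of what the policy actually is.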

Trade-offs and Limitations

To be fair, reading a 127-point spec is miserable. There is a serious signal-to-noise problem here.

- Bloat: The AI can be overly rigid. It suggested a microservices architecture for what should be a monolith. It hallucinated complexity where none existed.
- Paralysis: Handing a developer a 127-point list for a prototype is a great way to kill morale.
- Filtering: You still need a human to look at the list and say, "We don't need multi-tenancy yet, delete points 45-60."

However, I'd rather delete 20 unnecessary points at the start of a project than discover 20 missing requirements two weeks before launch.

Discussion

This experiment made me realize that our hatred of writing specs—and our reliance on "implied" context—is a major source of technical debt. The AI is useful not because it's smart, but because it's pedantic enough to ask the questions we think are too obvious to ask.

I’m curious how others handle this "implied requirements" problem:

1. Do you have a checklist for things like RBAC/Auth/Rate Limits that you reuse?
2. Is a 100+ point spec actually helpful, or does it just front-load the arguments?
3. How do you filter the "AI noise" from the critical missing specs?
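On question 1: even a dumb keyword checklist catches most of the blind spots named above, without generating 127 points of noise. A hypothetical sketch (the categories and keywords are illustrative, not a claim about what any real spec tool does) that scans a draft spec and flags the categories it never mentions:

```python
# Hypothetical coverage check for a draft spec: flag which standard
# requirement categories it never mentions. Keywords are illustrative.

CHECKLIST = {
    "RBAC/Auth": ["rbac", "role", "permission", "auth"],
    "Rate limits": ["rate limit", "429", "throttle"],
    "Data retention": ["retention", "purge", "delete after"],
    "Empty states": ["empty state", "zero tickets", "no data"],
}

def missing_categories(spec_text: str) -> list[str]:
    """Return the checklist categories the spec text never touches."""
    text = spec_text.lower()
    return [
        category
        for category, keywords in CHECKLIST.items()
        if not any(kw in text for kw in keywords)
    ]

human_spec = "A clean dashboard pulling Jira and GitHub data, showing blockers per team member."
print(missing_categories(human_spec))
# The happy-path summary misses all four categories.
```

This inverts the AI approach: instead of generating everything and deleting, it audits what a human wrote and surfaces only the gaps.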

If anyone wants to see the specific prompts we used to trigger this "interrogator" mode, happy to share in the comments.

Comments

mackatsol•7h ago
Both are bad IMHO... a senior human analyst who accepts a 2-sentence product description? What kind of weird reality is that coming from? I've had a client do that too, but it's the analyst's job to ask all the pointed follow-up questions so they end up with a proper requirements list. That ends up being multiple pages long... feed that to the AI! I agree the AI as stated above has broader coverage, but it's not doing a better job; it's being just as lazy and adding a ton of filler to a lousy prompt. Yeah, this set me off. Great topic! Looking forward to reading the discussion. :)
allinonetools_•7h ago
I have seen this play out on real projects. The missing edge cases are usually what cause delays, not the main features. Using AI as a checklist and then trimming it down with human judgment seems to work better than relying on assumptions alone.
muzani•1h ago
Movie screenwriters seem to have found the perfect balance.

A character description looks like this: "APOLLO CREED. Creed is twenty-eight years old. He is a tall, smooth-muscled Black with barely a scar on his light coffee-colored face."

It includes everything important for casting this guy. It doesn't say what he's wearing, his hairstyle, or things like a wedding ring. Once they cast an actor, the actor fills this in.

Fight scenes are designed to be as fast to read as the action. Poor writing is something like, "Terrorists B and C fire RPGs at the van. The van makes evasive maneuvers. After the third rocket, the van flips off the road." It makes neither the story nor the scene clearer.

The better script: "The van takes evasive maneuvers to dodge the RPGs. BLAM! BOOM! BANG! The van flips off road."

Maybe when filming, they realize two rockets make more sense. Leave the implementation details to the experts.

However, dialogue forms a large part of these scripts. Dialogue is engineered by writers, right down to the syllables. (Funnily enough, AI screenwriters often forget syllables exist, and you can tell because their lines are difficult to actually speak.)

What's the purpose of the spec? Instructions? To iron out risks and roadblocks? The document should aim for the bare minimum for that. What's your "dialogue" part - the thing that you need analysts to plan out precisely?