

Ask HN: How do you know if AI agents will choose your tool?

15•dmpyatyi•5h ago
YC recently put out a video about the agent economy - the idea that agents are becoming autonomous economic actors, choosing tools and services without human input.

It got me thinking: how do you actually optimize for agent discovery? With humans you can do SEO, copywriting, and word of mouth. But an agent just looks at the tools available in its context and picks one based on the description, schema, and examples.

Has anyone experimented with this? Does better documentation measurably increase how often agents call your tool? Does the wording of your tool description matter across different models (GPT vs Claude vs Gemini)?

Comments

jackfranklyn•5h ago
We've been exposing tools via MCP and the biggest lesson so far: the tool description is basically a meta tag. It's the only thing the model reads before deciding whether to call your tool.

Two things that surprised us: (1) being explicit about what the tool doesn't do matters as much as what it does - vague descriptions get hallucinated calls constantly, and (2) inline examples in the description beat external documentation every time. The agent won't browse to your docs page.

The schema side matters too - clean parameter names, sensible defaults, clear required vs optional. It's basically UX design for machines rather than humans. Different models do have different calling patterns (Claude is more conservative, will ask before guessing; others just fire and hope) so your descriptions need to work for both styles.
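Concretely, a tool definition along these lines would check those boxes. This is a hypothetical MCP-style sketch using JSON Schema for the input, not any real server's definition; the tool name, fields, and defaults are all illustrative:

```python
# Hypothetical MCP-style tool definition. The description carries an
# inline usage example and an explicit "does NOT" boundary, and the
# schema uses clean parameter names, one required field, and sensible
# defaults for everything optional.
SEARCH_INVOICES_TOOL = {
    "name": "search_invoices",
    "description": (
        "Search existing invoices by customer name and date range. "
        "Example: search_invoices(customer='Acme', since='2025-01-01'). "
        "Does NOT create, edit, or refund invoices."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer": {
                "type": "string",
                "description": "Customer name, exact or partial match.",
            },
            "since": {
                "type": "string",
                "description": "ISO 8601 date lower bound.",
                "default": "1970-01-01",
            },
            "limit": {
                "type": "integer",
                "description": "Maximum number of results.",
                "default": 20,
            },
        },
        # Only one required field; everything else has a default, so a
        # conservative model and a fire-and-hope model both get a valid call.
        "required": ["customer"],
    },
}
```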

zahlman•5h ago
> inline examples in the description beat external documentation every time. The agent won't browse to your docs page.

That seems... surprising, and if it's a real limitation, something that could easily be corrected on the harness side.

> The schema side matters too - clean parameter names, sensible defaults, clear required vs optional. It's basically UX design for machines rather than humans.

I don't follow. Wouldn't you do all those things to design for humans anyway?

JacobArthurs•5h ago
Tool description quality matters way more than people expect. In my experience with MCP servers, the biggest win is specificity about when not to use the tool. Agents pick confidently when there's a clear boundary, not a vague capability statement.
snowhale•2h ago
tool description wording does matter, at least in my testing. models seem to use the description to reason about whether a tool "should" apply, not just whether it can. two things that helped: (1) explicit input format with an example, (2) a one-sentence note about what the tool does NOT handle. the negative case helps models avoid calling it on edge cases and then failing, which trains them (in context) to prefer it when it's actually the right fit.
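To make those two points concrete, here is a hypothetical before/after for the same imagined tool (both descriptions are invented for illustration): the second adds the explicit input format with an example and the one-sentence negative case.

```python
# A vague capability statement: the model has to guess the input
# format and the tool's boundaries.
VAGUE = "Looks up shipping information."

# The same tool described with (1) an explicit input format plus an
# inline example, and (2) a one-sentence note about what it does NOT
# handle, so the model can rule it out on edge cases.
SPECIFIC = (
    "Look up the shipping status of a single order. "
    "Input: an order ID string like 'ORD-48213'. "
    "Example: get_shipping_status(order_id='ORD-48213'). "
    "Does NOT handle returns, refunds, or bulk queries."
)
```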