
Jemalloc un-abandoned by Meta

https://engineering.fb.com/2026/03/02/data-infrastructure/investing-in-infrastructure-metas-renew...
79•hahahacorn•47m ago•17 comments

The “small web” is bigger than you might think

https://kevinboone.me/small_web_is_big.html
99•speckx•1h ago•29 comments

My Journey to a reliable and enjoyable locally hosted voice assistant

https://community.home-assistant.io/t/my-journey-to-a-reliable-and-enjoyable-locally-hosted-voice...
200•Vaslo•5h ago•64 comments

Apideck CLI – An AI-agent interface with much lower context consumption than MCP

https://www.apideck.com/blog/mcp-server-eating-context-window-cli-alternative
77•gertjandewilde•3h ago•81 comments

Launch HN: Voygr (YC W26) – A better maps API for agents and AI apps

34•ymarkov•2h ago•16 comments

Why I love FreeBSD

https://it-notes.dragas.net/2026/03/16/why-i-love-freebsd/
224•enz•7h ago•87 comments

Where does engineering go? Retreat findings and insights [pdf]

https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_d...
12•danebalia•4d ago•2 comments

Language Model Teams as Distributed Systems

https://arxiv.org/abs/2603.12229
13•jryio•1h ago•1 comment

Cert Authorities Check for DNSSEC from Today

https://www.grepular.com/Cert_Authorities_Check_for_DNSSEC_From_Today
52•zdw•20h ago•52 comments

Kaizen (YC P25) Hiring Eng, GTM, Cos to Automate BPOs

https://www.kaizenautomation.com/careers
1•michaelssilver•1h ago

Polymarket gamblers threaten to kill me over Iran missile story

https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if...
938•defly•6h ago•585 comments

Corruption erodes social trust more in democracies than in autocracies

https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2026.1779810/full
558•PaulHoule•7h ago•277 comments

Launch HN: Chamber (YC W26) – An AI Teammate for GPU Infrastructure

https://www.usechamber.io/
8•jshen96•1h ago•2 comments

US Job Market Visualizer

https://karpathy.ai/jobs/
265•andygcook•3h ago•220 comments

Lazycut: A simple terminal video trimmer using FFmpeg

https://github.com/emin-ozata/lazycut
95•masterpos•6h ago•33 comments

The return-to-the-office trend backfires

https://thehill.com/opinion/technology/5775420-remote-first-productivity-growth/
37•penguin_booze•46m ago•13 comments

Starlink Mini as a failover

https://www.jackpearce.co.uk/posts/starlink-failover/
57•jkpe•10h ago•81 comments

Speed at the cost of quality: Study of use of Cursor AI in open source projects

https://arxiv.org/abs/2511.04427
41•wek•1h ago•16 comments

Home Assistant waters my plants

https://finnian.io/blog/home-assistant-waters-my-plants/
200•finniananderson•4d ago•92 comments

MoD sources warn Palantir role at heart of government is threat to UK security

https://www.thenerve.news/p/palantir-technologies-uk-mod-sources-government-data-insights-securit...
470•vrganj•7h ago•176 comments

Even faster asin() was staring right at me

https://16bpp.net/blog/post/even-faster-asin-was-staring-right-at-me/
76•def-pri-pub•6h ago•39 comments

Kona EV Hacking

http://techno-fandom.org/~hobbit/cars/ev/
77•AnnikaL•4d ago•47 comments

Lies I was told about collaborative editing, Part 2: Why we don't use Yjs

https://www.moment.dev/blog/lies-i-was-told-pt-2
141•antics•3d ago•73 comments

Comparing Python Type Checkers: Typing Spec Conformance

https://pyrefly.org/blog/typing-conformance-comparison/
65•ocamoss•6h ago•22 comments

Agent Skills – Open Security Database

https://index.tego.security/skills/
5•4ppsec•1h ago•1 comment

Palestinian boy, 12, describes how Israeli forces killed his family in car

https://www.bbc.com/news/articles/c70n2x7p22do
143•tartoran•21m ago•20 comments

AirPods Max 2

https://www.apple.com/airpods-max/
85•ssijak•5h ago•163 comments

Event Publisher enables event integration between Keycloak and OpenFGA

https://github.com/embesozzi/keycloak-openfga-event-publisher
22•mooreds•4h ago•4 comments

On The Need For Understanding

https://blog.information-superhighway.net/on-the-need-for-understanding
16•zdw•4d ago•4 comments

The bureaucracy blocking the chance at a cure

https://www.writingruxandrabio.com/p/the-bureaucracy-blocking-the-chance
34•item•1d ago•55 comments

Apideck CLI – An AI-agent interface with much lower context consumption than MCP

https://www.apideck.com/blog/mcp-server-eating-context-window-cli-alternative
76•gertjandewilde•3h ago

Comments

hparadiz•3h ago
10 years from now: "Can you believe they did anything with such a small context window?"
mbreese•3h ago
10 years from now: “what’s a context window?”
sghiassy•3h ago
10 years from now: “come with me if you want to live”

Terminator 2 Clip: https://youtu.be/XTzTkRU6mRY?t=72&si=dmfLNDqpDZosSP4M

berziunas•3h ago
“640K ought to be enough for anybody”
hparadiz•3h ago
I dunno why you're getting downvoted. This is funny.
this_user•3h ago
More likely: "Can you believe they were actually trying to use LLMs for this?"
nipponese•2h ago
OSes and software engineers did not end up using less RAM.
gitonup•1h ago
Measurable responses to the environment lag: Moore's law has been slowing down (edit: and demand has been speeding up, a lot).

From just a sustainability point, I really hope that the parent post's quote is true, because otherwise I've personally seen LLMs used over and over to complete the same task that it could have been used for once to generate a script, and I'd really like to be able to still afford to own my own hardware at home.

MattGaiser•3h ago
I am kind of already at that point. For all the complaining about context windows being stuffed with MCPs, I am curious what people are doing, and how many MCPs they have connected, that this is a problem.
lionkor•3h ago
10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"
cheevly•3h ago
Imagine believing humans don’t make the same mistakes. You live in a different universe than me buddy.
recursive•2h ago
Sometimes we repeat mistakes. But humans are capable of occasionally learning. I've seen it!
saalweachter•1h ago
I've always wanted a better way to test programmers' debugging in an interview setting. Sometimes just working problems gets at it, but usually it's only the "can you re-read your own code and spot a mistake" sort of debugging.

Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to be able to fake it well enough on common mistakes in common idioms, which might get you pretty far, and fall flat on novel code.

The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.

Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.

And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed and ... maybe it is? They never understand the problem well enough to truly fix it.

creesch•2h ago
I mean, that is not what they are writing buddy.
smrtinsert•2h ago
"That was back when models were so slow and weighty they had to use cloud based versions. Now the same LLM power is available in my microwave"
austinhutch•3h ago
> Not a protocol error, not a bad tool call. The connection never completed.

Very interesting topic, but this LLM-written structure is instant anathema; I have to stop reading once I smell it.

nicoritschel•3h ago
While I generally prefer CLI over MCP locally, this is outdated information.

The major harnesses like Claude Code + Codex have had tool search for months now.

injidup•2h ago
Can you explain how to take advantage. Is there any specific info from anthropic with regards to context window size and not having to care about MCP?
amzil•2h ago
Fair point on tool search. Claude Code and Codex do have it.

But tool search is solving the symptom, not the cause. You still pay the per-tool token cost for every tool the search returns. And you've added a search step (with its own latency and token cost) before every tool call.

With a CLI, the agent runs `--help` and gets 50-200 tokens of exactly what it needs. No search index, no ranking, no middleware. The binary is the registry.

Tool search makes MCP workable. CLIs make the search unnecessary.
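The "--help as the manifest" idea is easy to sketch. In the toy Python CLI below, the `acme` binary, its subcommands, and the 4-characters-per-token figure are all invented for illustration; the point is only that the help text the agent reads on demand is a few hundred characters, not a full schema dump:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI: the --help output below is the agent's entire
    # "tool manifest". Nothing else is loaded until the agent asks.
    parser = argparse.ArgumentParser(
        prog="acme",  # invented binary name, not any vendor's actual CLI
        description="Query ACME resources",
    )
    sub = parser.add_subparsers(dest="command")
    ls = sub.add_parser("list", help="List resources")
    ls.add_argument("--kind", default="all", help="Resource kind to list")
    return parser

help_text = build_parser().format_help()
# Rough rule of thumb: ~4 characters per token.
approx_tokens = len(help_text) // 4
print(approx_tokens < 200)  # the whole discoverable surface fits in well under 200 tokens
```

A sub-agent drilling into `acme list --help` would pay a similarly small, on-demand cost rather than a fixed up-front one.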

caust1c•3h ago
I'm getting tired of everyone saying "MCP is dead, use CLIs!".

Yes, MCP eats up context windows, but agents can also be smarter about how they load the MCP context in the first place, using a similar strategy to skills.

The problem with tossing it out entirely is that it leaves a lot more questions for handling security.

When using skills, there's no built-in way to apply policies in the same way across many different servers.

MCP gives us a registry such that we can enforce MCP chain policies, i.e. no doing web search after viewing financials.

Doing the same with skills is not possible in a programmatic and deterministic way.

There needs to be a middle ground instead of throwing out MCP entirely.
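The chain-policy example above ("no doing web search after viewing financials") can be sketched as a stateful gatekeeper sitting between the agent and the tool registry. The tool names and the rule here are hypothetical, and this is not part of MCP itself, just an illustration of why a registry makes such enforcement deterministic:

```python
class ChainPolicy:
    """Deny a tool call based on which tools have already run this session."""

    def __init__(self, forbidden_after: dict[str, set[str]]):
        # Maps a tool name to the set of tools banned once it has executed.
        self.forbidden_after = forbidden_after
        self.called: set[str] = set()

    def allow(self, tool: str) -> bool:
        for prior in self.called:
            if tool in self.forbidden_after.get(prior, set()):
                return False  # chain rule violated; do not record the call
        self.called.add(tool)
        return True

policy = ChainPolicy({"view_financials": {"web_search"}})
print(policy.allow("view_financials"))  # True
print(policy.allow("web_search"))       # False: blocked by the chain rule
```

With skills there is no single choke point where every call passes through, which is the deterministic-enforcement gap being described.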

yoyohello13•2h ago
It is a weird trend. I see the appeal of Skills over MCP when you are just a solo dev doing your work. MCP is incredibly useful in an organization context when you need to add controls and process. Both are useful. I feel like the anti-MCP push is coming from people who don't need to work in a large org.
krzyk•1h ago
Not sure. Our big org banned MCPs because they are unsafe and there is no way to enforce allowing only certain MCPs (in GitHub Copilot).
thenewnewguy•1h ago
But skills where you tell the LLM to shell out to some random command are safe? I'm not sure I understand the logic.
toomuchtodo•49m ago
You can control an execution context in a superior manner than a rando MCP server.

MCP Security 2026: 30 CVEs in 60 Days - https://news.ycombinator.com/item?id=47356600 - March 2026

(securing this use case is a component of my work in a regulated industry and enterprise)

newswasboring•24m ago
I think big companies already protect against random commands causing damage. Work laptops are tightly controlled for both networking and software.
yoyohello13•1h ago
We only allow custom MCP servers.
mbreese•50m ago
Isn’t it possible to proxy LLM communication and strip out unwanted MCP tool calls from conversations? I mean if you’re going to ban MCPs, you’re probably banning any CLI tooling too, right?
systima•44m ago
Maybe https://usepec.eu ?
thecopy•28m ago
Shameless plug: I'm working on a product that aims to solve this: https://www.gatana.ai/
9rx•1h ago
> I feel like the anti-MCP push is coming from people who don't need to work in a large org.

Any kind of social push like that is always understood to be something to ignore if you understand why you need to ignore it. Do you agree that a typical solo dev caught in the MCP hype should run the other way, even if it is beneficial to your unique situation?

yoyohello13•1h ago
I'd agree solo devs can lean toward skills. I liken skills to a sort of bash-scripts directory. And for personal stuff I generally use skills only.
skybrian•2h ago
Towards the end of the article, they do write about some things that MCP does better.
il•2h ago
Tool search pretty much completely negates the MCP context window argument.
CuriouslyC•2h ago
Skills are just prompts, so policy doesn't apply there. MCP isn't giving you any special policy control there, it's just a capability border. You could do the same thing with a service mesh or any other capability compartmentalization technique.

The only value in MCP is that it's intended "for agents" and it has traction.

consumer451•2h ago
> Yes, MCP eats up context windows, but agents can also be smarter about how they load the MCP context in the first place, using similar strategy to skills.

I have been keeping an eye on MCP context usage with Claude Code's /context command.

When I ran it a couple months ago, supabase used 13.2k tokens all the time, with the search_docs tool using 8k! So, I disabled that tool in my config.

I just ran /context now, and when not being used it uses only ~300 tokens.

I have a question. Does anyone know a good way to benchmark actual MCP context usage in Claude Code now? I just tried a few different things and none of them worked.

ewild•36m ago
I feel like I don't fully understand MCP. I've done research on it, but I definitely couldn't explain it. As I understand it, it's a server with API endpoints that are well defined in a JSON schema, which is then sent to the LLM; the LLM parses that and decides which endpoints to hit (I'm aware some LLMs use smart calling now, so they load the tool name and description but nothing else until it's called). How exactly are you stopping the LLM from using web search after it hits a certain endpoint in your MCP server? Or does that refer strictly to when you own the whole workflow, where you can then deny web-search capabilities on the next LLM step?
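That mental model is roughly right. For reference, a minimal tool entry of the kind an MCP server advertises via tools/list looks like this; the `get_invoice` tool is invented, while the name/description/inputSchema envelope follows the MCP spec:

```python
import json

# One invented tool, shaped like an MCP tools/list entry.
tools_list_result = {
    "tools": [
        {
            "name": "get_invoice",
            "description": "Fetch a single invoice by id",
            "inputSchema": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        }
    ]
}

manifest = json.dumps(tools_list_result)
# Every entry like this goes to the model up front unless the client
# lazy-loads, which is where the context bloat discussed here comes from.
print(len(manifest) // 4)  # rough token estimate at ~4 chars/token
```

Multiply that by 40+ tools and the fixed per-session cost adds up quickly.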
polynomial•9m ago
This is the right framing. The chain policy problem is what happens when you ask the registry to be the entitlement layer.

Here's a longer piece on why the trust boundary has to live at the runtime level, not the interface level, and what that means for MCP's actual job: https://forestmars.substack.com/p/twilight-of-the-mcp-idols

robot-wrangler•3m ago
> I'm getting tired of everyone saying "MCP is dead, use CLIs!".

The people saying this and the people attacking it should first agree on the question.

Are you combining a few tools in the training set into a logical unit to make a cohesive tool-suite, say for reverse engineering or network-debugging? Low stakes for errors, not much on-going development? Great, you just need a thin layer of intelligence on top of stack-overflow and blog-posts, and CLI will probably do it.

Are you trying to weld together basically an AI front-end for an existing internal library or service? Is it something complex enough that you need to scale out and have modular access to? Is it already something you need to deploy/develop/test independently? Oops, there's nothing quite like that in the training set, and you probably want some guarantees. You need a schema, obviously. You can sort of jam that into prompts and prayers, hope for the best with skills, skip validation and risk annotations being ignored, trust that a future opaque model change will be backwards compatible with how skills are even selected/dispatched. Or.. you can use MCP.

Advocating really hard for one or the other in general is just kind of naive.

rirze•3h ago
At this point, I feel like MCP servers are just not feasible at the current level of context windows and LLMs. Good idea, but we're way too early.
bkummel•3h ago
There's already an open source tool that does exactly the same thing: https://github.com/knowsuchagency/mcp2cli
amzil•2h ago
Great tool. However, we went with a dedicated CLI client (think gh, aws, stripe) written in Go.
kristjansson•2h ago
CLIs are great for some applications! But 'progressive disclosure' means more mistakes to be corrected and more round trips to the model - every time[1] you use the tool in a new thread. You're trading latency for lower cost/more free context. That might be great! But it might not be, and the opposite trade (more money/less context for lower latency) makes a lot of sense for some applications. esp. if the 'more money' part can be amortized over lots of users by keeping the tool definitions block cached.

[1]: one might say 'of course you can just add details about the CLI to the prompt' ... which reinvents MCP in an ad hoc underspecified non-portable mode in your prompt.

amzil•1h ago
This is a fair trade-off and the post should probably be more explicit about it. You're right that progressive disclosure trades latency for cost and context space. For some workloads that's the wrong trade.

The amortization point is interesting too. If you're running a support agent that calls the same 5 tools thousands of times a day, paying the schema cost once and caching it makes total sense. The post covers this in the "tightly scoped, high-frequency tools" section but your framing of it as a caching problem is cleaner.

On the footnote: guilty as charged, partially. The ~80 token prompt is a minimal bootstrap, not a full schema. It tells the agent how to discover, not what to call. But yeah, the moment you start expanding that prompt with specific flags and patterns, you're drifting toward a hand-rolled tool definition. The difference is where you stop. 80 tokens of "here's how to explore" is different from 10,000 tokens of "here's everything you might ever need." But the line between the two is blurrier than the post implies. Fair point.

machinecontrol•2h ago
The trend is obviously towards larger and larger context windows. We moved from 200K to 1M tokens being standard just this year.

This might be a complete non issue in 6 months.

amzil•2h ago
Context windows getting bigger doesn't make the economics go away. Tokens still cost money. 50K tokens of schemas at 1M context is the same dollar cost as 50K tokens at 200K context, you just have more room left over.

The pattern with every resource expansion is the same: usage scales to fill it. Bigger windows mean more integrations connected, not leaner ones. Progressive disclosure is cheaper at any window size.

magospietato•2h ago
Context caching deals with a lot of the cost argument here.
amzil•1h ago
It helps with cost, agreed. But caching doesn't fix the other two problems.

1) Models get worse at reasoning as context fills up, cached or not. 2) The usage-expansion problem still holds: cheaper context means teams connect more services, not fewer. You cache 50K tokens of schemas today, then it's 200K tomorrow because you can "afford" it now. The bloat scales with the budget.

Caching makes MCP more viable. It doesn't make loading 43 tool definitions for a task that uses two of them a good architecture.

hrmtst93837•1h ago
Those bigger windows come with lovely surcharges on compute, latency, and prompt complexity, so "just wait for more tokens" is a nice fantasy that melts the moment someone has to pay the bill. If your use case is tiny or your budget is infinite, fine, but for everyone else the "make the window bigger" crowd sounds like they're budgeting by credit card. Quality still falls off near the edge.
dend•2h ago
One of the MCP Core Maintainers here, so take this with a boulder of salt if you're skeptical of my biases.

The debate around "MCP vs. CLI" is somewhat pointless to me personally. Use whatever gets the job done. MCP is much more than just tool calling - it also happens to provide a set of consistent rails for an agent to follow. Besides, we as developers often forget that the things we build are also consumed by non-technical folks - I have no desire to teach my parents to install random CLIs to get things done instead of plugging a URI to a hosted MCP server with a well-defined impact radius. The entire security posture of "Install this CLI with access to everything on your box" terrifies me.

The context window argument is also an agent harness challenge more than anything else - modern MCP clients do smart tool search that obviates the entire "I am sending the full list of tools back and forth" mode of operation. At this point it's just a trope that is repeated from blog post to blog post. This blog post too alludes to this and talks about the need for infrastructure to make it work, but it just isn't the case. It's a pattern that's being adopted broadly as we speak.

o_____________o•59m ago
> modern MCP clients do smart tool search that obviates the entire "I am sending the full list of tools back and forth" mode of operation

How, via "Dynamic Tool Discovery"? Has this been codified anywhere? I've only seen somewhat hacky implementations of this idea:

https://github.com/modelcontextprotocol/modelcontextprotocol...

Or are you talking about the pressure being on the client/harnesses as in,

https://platform.claude.com/docs/en/agents-and-tools/tool-us...

dend•24m ago
More of the latter than the former. The protocol itself is constrained to a set of well-defined primitives, but clients can do a bunch of pre-processing before invoking any of them.
ekropotin•2h ago
Let me guess: another article about how CLIs are superior to MCP?
nzoschke•2h ago
The industry is talking in circles here. All you need is "composability".

UNIX solved this with files and pipes for data, and processes for compute.

AI agents are solving this with sub-agents for data, and "code execution" for compute.

The UNIX approach is both technically correct and elegant, and what I strongly favor too.

The agent + MCP approach is getting there. But not every harness has sub-agents, or their invocation is non-deterministic, which is where "MCP context bloat" happens.

Source: building an small business agent at https://housecat.com/.

We do have APIs wrapped in MCP. But we only give the agent Bash, a CLI wrapper for the MCPs, and the ability to write code, and it works great.

"It's a UNIX system! I know this!"

m3kw9•2h ago
The thing with CLIs is that you also need to return results efficiently. If both MCP and CLI return results efficiently, CLI wins.
enraged_camel•1h ago
With context windows starting to get much larger (see the recent 1M context size for Claude models), I think this will be a non-issue very soon.
mihir_kanzariya•1h ago
The real issue isn't MCP vs Skills/CLIs, it's that most MCP servers dump their entire schema into context on init regardless of whether you'll actually use those tools. Lazy loading tool definitions based on what the agent is actually doing would solve like 80% of the bloat problem without throwing out the protocol entirely.

The security/registry point in the thread is underrated too. Being able to enforce policies at the protocol level is something you lose completely with ad hoc skill files.

JohnMakin•1h ago
This matches my experience building in-house MCP servers. The mechanism I prefer on load is something like a quick FTS5 with BM25 ranking lookup to find what it needs, and then serve those. I think a lot of these things are implemented pretty naively - for instance, we ran into the huge context problem with Jira, so we just built our own Jira MCP interface that doesn't have all the bloat. If the agent finds it needs something it doesnt have, it can ask again.
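That FTS5-with-BM25 lookup can be sketched with Python's stdlib sqlite3, assuming the SQLite build includes FTS5 (most do); the tool names and descriptions below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 index over tool names and descriptions, standing in for the
# kind of on-load lookup table described above.
conn.execute("CREATE VIRTUAL TABLE tools USING fts5(name, description)")
conn.executemany(
    "INSERT INTO tools VALUES (?, ?)",
    [
        ("create_issue", "Create a new Jira issue in a project"),
        ("search_issues", "Full text search across Jira issues"),
        ("list_projects", "List all visible Jira projects"),
    ],
)
# bm25() returns a rank where lower is better, so ORDER BY ascending.
rows = conn.execute(
    "SELECT name FROM tools WHERE tools MATCH ? ORDER BY bm25(tools) LIMIT 2",
    ("search issues",),
).fetchall()
print([r[0] for r in rows])
```

Only the matching tools get served into context, and the agent can re-query if it later needs something it wasn't given.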
esafak•57m ago
This is becoming a solved problem with tool search; MCP is back.
rob•55m ago
The real issue isn't MCP, it's these fucking bots posting here every day.
mritchie712•41m ago
claude code solved this about a month ago
Havoc•1h ago
Getting LLMs to reliably trigger CLI functions is quite hard in my experience, though, especially if it's a custom tool.
drewbitt•47m ago
https://github.com/RhysSullivan/executor
robot-wrangler•38m ago
> Limit integrations → agent can only talk to a few services

The idea that people see this as one horn of a trilemma instead of just good practice is a bit strange. Who would complain that every import isn't a star-import? Bring in what you need at first, then load new things dynamically with good semantics for cascade / drill-down. Let's maybe abandon simple classics like namespacing and the unix philosophy for the kitchen-sink approach after the kitchen-sink thing is shown to work.

mt42or•5m ago
Tired of this shit. Be less stupid.
bazhand•4m ago
I ran into this exact problem building an MCP server. 85 tools in experimental mode, ~17k tokens just for the tool manifest before any work starts.

The fix I (well, Codex actually) landed on was toolset tiers (minimal/authoring/experimental) controlled by an env var, plus phase-gating: tools are registered, but ~80% are "not connected" until you call _connect. The effective listed surface stays pretty small.

Lazy loading basically, not a new concept for people here.
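That env-var tier gating can be sketched in a few lines; the tier names and tools here are hypothetical stand-ins for the minimal/authoring/experimental scheme described:

```python
import os

# Each tier is a superset of the one below it; only the active tier's
# tools get listed to the model.
TIERS = {
    "minimal": ["read_page", "search"],
    "authoring": ["read_page", "search", "create_page", "update_page"],
    "experimental": ["read_page", "search", "create_page", "update_page",
                     "bulk_import", "migrate_schema"],
}

def listed_tools() -> list[str]:
    # The env var is the switch; unknown or unset values fall back to minimal.
    tier = os.environ.get("TOOLSET_TIER", "minimal")
    return TIERS.get(tier, TIERS["minimal"])

print(listed_tools())
```

Phase-gating ("registered but not connected until `_connect`") is the same idea applied per tool rather than per tier.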