
Ghostty – Terminal Emulator

https://ghostty.org/docs
395•oli5679•7h ago•183 comments

Microgpt

http://karpathy.github.io/2026/02/12/microgpt/
1474•tambourine_man•18h ago•256 comments

Why XML Tags Are So Fundamental to Claude

https://glthr.com/XML-fundamental-to-Claude
85•glth•4h ago•38 comments

Decision trees – the unreasonable power of nested decision rules

https://mlu-explain.github.io/decision-tree/
309•mschnell•10h ago•56 comments

How Dada Enables Internal References

https://smallcultfollowing.com/babysteps/blog/2026/02/27/dada-internal-references/
13•vrnvu•2d ago•5 comments

We do not think Anthropic should be designated as a supply chain risk

https://twitter.com/OpenAI/status/2027846016423321831
732•golfer•22h ago•398 comments

A new account made over $515,000 betting on the U.S. strike against Iran

https://xcancel.com/cabsav456/status/2027937130995921119
16•doener•20m ago•7 comments

Flightradar24 for Ships

https://atlas.flexport.com/
116•chromy•8h ago•28 comments

Interview with Øyvind Kolås, GIMP developer (2017)

https://www.gimp.org/news/2026/02/22/%C3%B8yvind-kol%C3%A5s-interview-ww2017/
81•ibobev•3d ago•33 comments

Python Type Checker Comparison: Empty Container Inference

https://pyrefly.org/blog/container-inference-comparison/
16•ocamoss•4d ago•11 comments

Show HN: Audio Toolkit for Agents

https://github.com/shiehn/sas-audio-processor
19•stevehiehn•3h ago•1 comments

Lil' Fun Langs' Guts

https://taylor.town/scrapscript-001
22•surprisetalk•4h ago•2 comments

I built a demo of what AI chat will look like when it's "free" and ad-supported

https://99helpers.com/tools/ad-supported-chat
348•nickk81•7h ago•217 comments

10-202: Introduction to Modern AI (CMU)

https://modernaicourse.org
177•vismit2000•12h ago•42 comments

New iron nanomaterial wipes out cancer cells without harming healthy tissue

https://www.sciencedaily.com/releases/2026/02/260228093456.htm
127•gradus_ad•4h ago•38 comments

Aromatic 5-silicon rings synthesized at last

https://cen.acs.org/materials/inorganic-chemistry/Aromatic-5-silicon-rings-synthesized/104/web/20...
58•keepamovin•2d ago•26 comments

When does MCP make sense vs CLI?

https://ejholmes.github.io/2026/02/28/mcp-is-dead-long-live-the-cli.html
92•ejholmes•2h ago•68 comments

The real cost of random I/O

https://vondra.me/posts/the-real-cost-of-random-io/
71•jpineman•3d ago•11 comments

Switch to Claude without starting over

https://claude.com/import-memory
475•doener•12h ago•222 comments

Why is the first C++ (m)allocation always 72 KB?

https://joelsiks.com/posts/cpp-emergency-pool-72kb-allocation/
101•joelsiks•10h ago•19 comments

January in Servo: preloads, better forms, details styling, and more

https://servo.org/blog/2026/02/28/january-in-servo/
22•birdculture•2h ago•2 comments

An ode to houseplant programming (2025)

https://hannahilea.com/blog/houseplant-programming/
113•evakhoury•2d ago•21 comments

Obsidian Sync now has a headless client

https://help.obsidian.md/sync/headless
549•adilmoujahid•1d ago•182 comments

Robust and efficient quantum-safe HTTPS

https://security.googleblog.com/2026/02/cultivating-robust-and-efficient.html
79•tptacek•2d ago•16 comments

Rydberg atoms detect clear signals from a handheld radio

https://phys.org/news/2026-02-rydberg-atoms-handheld-radio.html
63•Brajeshwar•2d ago•22 comments

The happiest I've ever been

https://ben-mini.com/2026/the-happiest-ive-ever-been
606•bewal416•3d ago•335 comments

Pigeons and Planes Has a Website Again

https://www.pigeonsandplanes.com/read/pigeons-and-planes-has-a-website-again
42•herbertl•3d ago•6 comments

Show HN: Vertex.js – A 1kloc SPA Framework

https://lukeb42.github.io/vertex-manual.html
22•LukeB42•8h ago•15 comments

AWS Middle East Central Down, apparently struck in war

https://health.aws.amazon.com/health/status
6•earthboundkid•18m ago•0 comments

I Built a Scheme Compiler with AI in 4 Days

https://matthewphillips.info/programming/posts/i-built-a-scheme-compiler-with-ai/
23•MatthewPhillips•2h ago•19 comments

When does MCP make sense vs CLI?

https://ejholmes.github.io/2026/02/28/mcp-is-dead-long-live-the-cli.html
92•ejholmes•2h ago

Comments

lukol•1h ago
Couldn't agree more. Simple REST APIs often do the job as well. MCP felt like a vibe-coded fever dream from the start.
phpnode•1h ago
I don't doubt that CLIs + skills are a good alternative to MCP in some contexts, but if you're building an app for non-developers and you need to let users connect it to arbitrary data sources there's really no sensible, safe path to using CLIs instead. MCP is going to be around for a long time, and we can expect it to get much better than it is today.
simianwords•1h ago
Why? The LLM can install a CLI through apt-get or an equivalent, and non-developers wouldn't need to know.
phpnode•1h ago
well I'm sure you can understand the dangers of that, and why that won't work if your app is hosted and doesn't run on users' local machines
oldestofsports•1h ago
What non-developer would have apt installed on their device, though?
sigmoid10•1h ago
>we can expect it to get much better than it is today

Which is not a high bar to clear. It literally only got where it is now because execs and product people love themselves another standard: if they get their products to support it, they can put it on some Excel sheet as a shipped feature and pin it on their chest. Even if the standard sucks on a technical level and the spec changes all the time.

phpnode•57m ago
This is excessively cynical, it's a useful tool despite its shortcomings.
orange_joe•1h ago
This doesn't really pay attention to token costs. If I'm making a series of statically dependent calls, I want to avoid blowing up the context with information on the intermediary states. Also, I don't really want to send my users skill.md files on how to do X, Y, and Z.
phpnode•1h ago
the article only makes sense if you think that only developers use AI tools, and that the discovery / setup problem doesn't matter
trollbridge•1h ago
But that's the current primary use case for AI. We aren't anywhere close to being able to sanitise input from hostile third parties enough to just let people start inputting prompts to my own system.
phpnode•1h ago
there's a whole world of AI tools out there that don't focus on developers. These tools often need to interact with external services in one way or another, and MCP gives those less technical users an easy way to connect e.g. Notion or Linear in a couple of clicks, with auth taken care of automatically. CLIs are never replacing that use case.
krzyk•1h ago
Why? MCP and CLI are similar here.

You need the agent to find the MCP and what it can be used for (context); similarly, you can write up what CLI to use for e.g. Jira.

The rest is up to the agent: an MCP needs to list what it can do, and similarly a CLI with proper help text will list that.

Regarding context, those tools are exactly the same.

lmeyerov•1h ago
This feels right in theory and wrong in practice

When measuring speed running blue team CTFs ("Breaking BOTS" talk at Chaos Congress), I saw about a ~2x difference in speed (~= tokens) for database usage between curl (~skills) and MCP (~Python). In theory you can rewrite the MCP into the skill as .md/.py, but at that point ... .

Also, I think some people are talking past one another in these discussions. The skill format is a folder that supports dropping in code files, so much of what MCP does can be copy-pasted into it. However, many people discussing skills mean markdown-only, letting the LLM do the rest, which would require a fancy bootstrapping period to be as smooth as the code version. I'd agree that skills, as a folder shipping with code, do feel like they largely obviate MCPs for solo use cases, until you consider remote MCPs and OAuth, which seem unaddressed yet core in practice for wider use.

CuriouslyC•1h ago
There's been an anti-MCP, pro-CLI train going since ~May of last year (I've been personally beating this drum the whole time), but I think MCP has a very real use case.

Specifically, MCP is a great unit of encapsulation. I have a secure agent framework (https://github.com/sibyllinesoft/smith-core) where I convert MCPs to microservices via sidecar and plug them into a service mesh, it makes securing agent capabilities really easy by leveraging existing policy and management tools. Then agents can just curl everything in bash rather than needing CLIs for everything. CLIs are still slightly more token efficient but overall the simplicity and the power of the scheme is a huge win.

mt42or•1h ago
I remember the same kind of people arguing against Kubernetes in the exact same way. Very funny.
tedk-42•1h ago
Same clowns complaining that `npm install` downloads the entire internet.

Now it's completely fine for an AI agent to do the same and blow up their context window.

recursivedoubts•1h ago
MCP has one thing going for it as an agentic API standard: token efficiency.

The single-request-for-all-abilities model + JSON-RPC is more token-efficient than most alternatives. Less flexible in many ways, but given the current ReAct-style model of agentic AI, in which conversations grow geometrically with API responses, token efficiency is very important.
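For context on the wire format: MCP is JSON-RPC 2.0, and a client discovers every tool in one `tools/list` round trip. A minimal sketch of that exchange (the `get_issue` tool and its schema are invented; the message shape follows the MCP spec as I understand it):

```python
import json

# A single JSON-RPC request discovers every tool the server offers.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Hypothetical response: one message carries all abilities plus their schemas.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_issue",
                "description": "Fetch a single issue by id",
                "inputSchema": {
                    "type": "object",
                    "properties": {"id": {"type": "integer"}},
                    "required": ["id"],
                },
            }
        ]
    },
}

# Rough context cost of the catalog, using the common ~4 chars/token heuristic.
approx_tokens = len(json.dumps(response)) // 4
print(approx_tokens)
```

The whole tool catalog arrives as one compact JSON message, which is where the token-efficiency argument comes from.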

ako•1h ago
I've been creating a CLI tool with a focus on token efficiency. I don't see why a CLI couldn't be as token-efficient as MCP. The CLI has the option to output ASCII, markdown, or JSON.
recursivedoubts•1h ago
I'm working on a paper on this: if you are using a hypermedia-like system for progressive revelation of functionality, you are likely to find that this chatty style of API is inefficient compared with an RPC-like system. The problem is architectural rather than representational.

I say this as a hypermedia enthusiast who was hoping to show otherwise.

bear3r•42m ago
The output format (ASCII/JSON/markdown) is one piece, but the other side is the input schema. MCP declares what args are valid and their types upfront, so the model can't hallucinate a flag that doesn't exist. CLI tools don't expose that contract unless you parse --help output, which is fragile.
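A tiny sketch of the contrast (the schema, the flags, and the validator are all invented for illustration; a real client would use a JSON Schema library):

```python
# Hypothetical MCP-style tool declaration: the valid arguments are a contract.
schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
    "required": ["query"],
}

def validate(args: dict) -> bool:
    """Tiny stand-in for a JSON Schema validator (stdlib only)."""
    if not all(k in schema["properties"] for k in args):
        return False  # e.g. a hallucinated flag the tool never declared
    return all(r in args for r in schema["required"])

# A hallucinated argument is rejected before the call is ever made...
print(validate({"query": "mcp", "color": "red"}))  # False

# ...whereas a CLI exposes its contract only as prose to be parsed:
help_text = "usage: search [--limit N] QUERY"
```

The `--help` string carries the same information, but only as prose the model must parse and may misread.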
ako•23m ago
So far, cli --help seems to work quite well. I'm optimizing the CLI to interact with the agent, e.g., commands that describe exactly what output is expected for the CLI DSL, error messages that contain DSL examples telling the agent exactly how to fix bugs, etc. Overall I think the DSL is more token-efficient than similar JSON, and easier for humans to review.
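A minimal argparse sketch of the multi-format idea (the `issues` command and its fields are invented):

```python
import argparse
import json

def render(rows, fmt):
    """Emit the same result at different token costs."""
    if fmt == "json":
        return json.dumps(rows)
    if fmt == "markdown":
        header = ["| id | title |", "| --- | --- |"]
        body = [f"| {r['id']} | {r['title']} |" for r in rows]
        return "\n".join(header + body)
    # Plain ASCII: tab-separated, cheapest for the model to read back.
    return "\n".join(f"{r['id']}\t{r['title']}" for r in rows)

parser = argparse.ArgumentParser(prog="issues")
parser.add_argument("--format", choices=["ascii", "markdown", "json"],
                    default="ascii")
args = parser.parse_args(["--format", "ascii"])  # stands in for real argv

rows = [{"id": 1, "title": "fix login"}]
print(render(rows, args.format))
```

The ASCII form is the cheapest to feed back into context; JSON stays available when the agent needs to parse structure.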
SOLAR_FIELDS•1h ago
But the flip side of this is that the tool definitions themselves take up a ton of token context. So if you have one MCP it's great, but there's an upper bound, which you hit pretty quickly, on how many tools you can realistically expose to an agent without adding some intermediary lookup layer. The spec isn't compact enough and doesn't have lazy loading built in.
harrall•1h ago
Yes, but I consider that just a bug in the agents that use MCP servers.

It could be fixed by compressing the context, or the protocol could be tweaked.

Switching to CLIs is like buying a new car because you need an oil change. Sure, in this case the user doesn't get to control whether the oil change gets done, but the issue is not the car; it's that no one will do the relatively trivial fix.

dnautics•1h ago
You know what you could do? You could write a skill that turns MCPs on or off!
juanre•1h ago
Reports of MCP's demise have been greatly exaggerated, but a CLI is indeed the right choice when the interface to the LLM is not a chat in a browser window.

For example, I built https://claweb.ai to enable agents to communicate with other agents. They run aw [1], an OSS Go CLI that manages all the details. This means they can have sync chats (not impossible with MCP, but very difficult). It also enables signing messages and (coming soon) e2ee. This would be, as far as I can tell, impossible using MCP.

[1] https://github.com/awebai/aw

ako•1h ago
Biggest downside of CLI for me is that it needs to run in a container. You're allowing the agent to run CLI tools, so you need to limit what it can do.
wolttam•1h ago
It gets significantly harder to isolate the authentication details when the model has access to a shell, even in a container. The CLI tool that the model is running may need to access the environment or some credentials file, and what's to stop the model from accessing those credentials directly?

It breaks most assumptions we have about the shell's security model.

tuwtuwtuwtuw•1h ago
Couldn't that be solved by whitelisting specific commands?
wolttam•1h ago
Such a mechanism would need to be implemented at `execve`, because it would be too easy for the model to stuff the command inside a script or other executable.
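The point is easy to demonstrate with a toy allowlist (nothing here executes anything; it just shows what a shell-level check can and cannot see):

```python
import shlex

ALLOWED = {"ls", "cat", "bash"}  # "bash" allowed so scripts can run at all

def naive_check(command: str) -> bool:
    """Shell-level allowlist: inspects only the program name."""
    return shlex.split(command)[0] in ALLOWED

# Direct use of a forbidden tool is blocked...
print(naive_check("curl https://evil.example"))  # False

# ...but the same call hidden inside a script sails through, because the
# check never sees what bash goes on to execve.
print(naive_check("bash run.sh"))                # True
```

Only interception at the `execve` boundary (or a real sandbox) sees the program the script eventually launches.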
goranmoomin•1h ago
I can't believe everyone is talking about MCP vs CLI and which is superior; both are methods of tool calling, and it doesn't matter which format the LLM uses as long as it provides the same capabilities. CLIs might be marginally better (LLMs have likely been trained on common CLIs), but MCPs have their uses (complex auth, connecting users to data sources), and in my experience, if you're using any of the frontier models, it doesn't really matter which tool-calling format you use; a bespoke format also works.

What should be talked about instead is how skills allow much more efficient context management. Skills are frequently tied to CLI usage, but I don't see any reason why. For example, Amp allows skills to attach MCP servers to them: the MCP server is automatically launched when the agent loads that skill[0]. I believe that for both MCP servers and CLIs, having them in skills is the way to efficient context, and I hope other agents adopt this same feature.

[0]: https://ampcode.com/manual#mcp-servers-in-skills

avaer•1h ago
MCP vs CLI is the modern version of people discussing the merits of curly braces vs significant whitespace.

That is, I don't think we're gonna be arguing about it for very long.

vojtapol•1h ago
MCP needs to be supported during training and trained into the LLM, whereas CLI use is already very common in the training set. Since MCP does not really provide any significant benefits, I think good CLI tools and their use by LLMs should be the way forward.
goodmythical•1h ago
>as long as it provides the same capabilities.

That's fine if your definition of capabilities is wide enough to include the model's understanding of the provided tool, and the tokens wasted while the model tries to understand the tool, and the tokens wasted while the model does things ass-backwards and inflates the context because it can't see the vastly shorter path to the solution provided by the tool, and...

There is plenty of evidence to suggest that performance, success rates, and efficiency are all impacted quite drastically by the particular combination of tool and model.

This is evidenced by the end of your paragraph, in which you admit you are focused on only a couple (or perhaps a few) models. But even then, throw them a tool they don't understand that has the same capabilities as a tool they do understand, and you're going to burn a bunch of tokens watching them try to figure the tool out.

Tooling absolutely matters.

goranmoomin•11m ago
> model understanding of the provided tool and token waste in the model trying to understand the tool and token waste in the model doing things ass backwards and inflating the context because it can't see the vastly shorter path to the solution provided by the tool and...

> But even then, throw them a tool they don't understand that has the same capabilities as a tool they do understand and you're going to burn a bunch of tokens watching it try to figure the tool out.

What I was trying to say is that this applies to both MCPs and CLIs: obviously, if a certain CLI tool is represented thoroughly in the model's training dataset (i.e. grep, gh, sed, and so on), it's definitely beneficial to use the CLI (less context spent, less trial-and-error to get the expected results, and so on).

However, if you have a novel thing that you want to connect to LLM-based agents (i.e. a reverse engineering tool, a browser debugging protocol adapter, or your next big thing(tm)), it might not really matter whether you have a CLI or an MCP: LLMs are post-trained (hence proficient) on both, and you'll have to do the trial-and-error thing anyway, since neither would be represented in the training dataset.

I would say the MCP hype is dying out, so I personally wouldn't build a new product on MCP right now, but there's no need to ditch MCPs, nor do I see anything inherently deficient in the MCP protocol itself. It's just another tool-calling solution.

jeremyjh•1h ago
No, it really matters because of the impact on context tokens. Reading one GH issue with the MCP burns 54k tokens just to load the spec. If you use several MCPs, it adds up really fast.
ashdksnndck•58m ago
Verbosity of the output seems orthogonal to the CLI-vs-MCP distinction? When I made MCP tools and noticed a lot of tokens being used, I changed the default to output less and added options to expose different kinds of detailed info depending on what the model wants. A CLI can support similar behavior.
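A sketch of that default-terse pattern (the issue fields and the `detail` parameter are invented):

```python
import json

ISSUE = {
    "id": 42,
    "title": "login broken",
    "state": "open",
    "body": "word " * 400,        # stands in for a long description
    "reactions": {"+1": 3},
}

def get_issue(issue_id: int, detail: str = "summary") -> str:
    """Default to a terse payload; expose the rest only on request."""
    if detail == "full":
        return json.dumps(ISSUE)
    return json.dumps({k: ISSUE[k] for k in ("id", "title", "state")})

short = get_issue(42)
full = get_issue(42, detail="full")
print(len(short), len(full))  # the summary is a fraction of the size
```

Whether this function sits behind an MCP tool or a CLI subcommand, the context saving is the same.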
nextaccountic•27m ago
On the front page there's a project that attempts to reduce the boilerplate of MCP output in Claude Code.

Eventually I hope the models themselves become smarter and don't save the whole 54k tokens in their context window.

goranmoomin•5m ago
The impact on context tokens is more of a 'you're holding it wrong' problem, no? The GH MCP burning tokens is an issue with the GH MCP server, not the protocol itself. (I would say that since the gh CLI is strongly represented in the training dataset, it would be more beneficial to just use the CLI in this case, though.)

I do think we should adopt more widely the Amp MCPs-on-skills model I mentioned in my original comment (allowing on-demand context management).

sophiabits•15m ago
> the MCP server is automatically launched when the Agent loads that skill

The main problem with this approach at the moment is that it busts your prompt cache, because LLMs expect all tool definitions to be defined at the beginning of the context window. Input tokens are the main driver of inference costs, and a lot of use cases aren't economical without prompt caching.

Hopefully future LLMs will be trained so you can add tool definitions anywhere in the context window. Lots of use cases would benefit from this: in e-commerce, for example, there's really no point providing a "clear cart" tool to the LLM upfront; it'd be nice to provide it dynamically after item(s) are first added.
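The cache mechanics can be illustrated by modeling a context as a list of segments and measuring the shared prefix, which is the part a provider can reuse (segment names are invented):

```python
def shared_prefix(a: list, b: list) -> int:
    """Length of the common prefix two contexts share (the cacheable part)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

tools_up_front = ["sys", "toolA", "toolB", "turn1", "turn2"]
next_turn      = ["sys", "toolA", "toolB", "turn1", "turn2", "turn3"]
print(shared_prefix(tools_up_front, next_turn))  # 5 - the whole history reused

# Injecting a new tool definition near the front rewrites the prefix:
tool_injected = ["sys", "toolA", "toolB", "toolC", "turn1", "turn2", "turn3"]
print(shared_prefix(tools_up_front, tool_injected))  # 3 - cache busted early
```

Everything after the first changed segment is billed as fresh input tokens, which is why late-loaded tool definitions are expensive today.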

goranmoomin•8m ago
> The main problem with this approach at the moment is it busts your prompt cache, because LLMs expect all tool definitions to be defined at the beginning of the context window.

TBH I'm not really sure how it works in Amp (I never actually inspected how it alters the prompts sent to Anthropic), but does it really matter whether the LLM gets the tool definitions at the beginning of the context window rather than at the bottom, before my next prompt?

I mean, skills work the same way, right? (They get appended at the bottom when the LLM triggers the skill.) Why not MCP tool definitions? (They're basically the same thing, no?)

rvz•1h ago
MCP was dead in the water and was a bad standard to begin with. The hype around it never made sense.

Not only did it have lots of issues and security problems all over the place; it was designed to be complicated.

For example: why does your password manager need an MCP server? [0]

But that still doesn't mean a CLI is any better for everything.

[0] https://news.ycombinator.com/item?id=44528411

AznHisoka•1h ago
In terms of what companies are actually implementing, MCP isn't dead by a long shot. The number of companies with an MCP server grew 242% in the last 6 months, and growth is actually accelerating (according to Bloomberry) [1]

https://bloomberry.com/blog/we-analyzed-1400-mcp-servers-her...

lakrici88284•1h ago
Companies are usually chasing last year's trend, and MCP makes for an easy "look, we're adopting AI!" bullet point.
AznHisoka•52m ago
Right, but even if this is just a matter of "chasing a trend", it does have a network effect and makes the entire MCP ecosystem much more useful to consumers, which begets more MCP servers.
bikeshaving•1h ago
I keep asking why the default Claude tools, like Read(), Write(), Edit(), MultiEdit(), and Replace(), aren't just Bash() with some combination of cat, sed, grep, and find. Isn't it just easier to pipe everything through the shell? We just need to figure out the permissions for it.
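One sketch of "figuring out the permissions": parse the pipeline and gate each stage against a read-only allowlist (the allowlist is illustrative, and a real shell has far more escape hatches, e.g. subshells, redirection, and `xargs`, than this toy handles):

```python
import shlex

READ_ONLY = {"cat", "grep", "find", "sed", "head", "wc"}

def pipeline_allowed(command: str) -> bool:
    """Approve a command only if every pipeline stage is read-only."""
    for stage in command.split("|"):
        tokens = shlex.split(stage)
        if not tokens or tokens[0] not in READ_ONLY:
            return False
        if tokens[0] == "find" and "-exec" in tokens:
            return False  # find can spawn arbitrary programs
    return True

print(pipeline_allowed("grep -rn TODO src | head -5"))  # True
print(pipeline_allowed("find . -exec rm {} ;"))         # False
print(pipeline_allowed("cat notes.md | sh"))            # False
```

Denylisting shell behavior from the outside is a losing game, which is why sandboxing keeps coming up in this thread.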
fcarraldo•1h ago
Because the tools model allows for finer-grained security controls than just bash and pipes. Do you really want Claude doing `find | exec` instead of calling an API that's designed to prevent damage?
arbll•1h ago
It might be the wrong place to do security anyway, since `bash` and other hard-to-control tools will be needed. Sandboxing is likely the only way out.
webstrand•1h ago
yeah, I would rather it did that. You run Claude in a sandbox that restricts visibility to only the files it should know about in the first place. Currently I use a mix of bwrap and syd for filtering.
rfw300•1h ago
Making those tools first-class primitives is good for (human) UX: you see the diffs inline, you can add custom rules and hooks that trigger on certain files being edited, etc.
p_ing•1h ago
Tell my business users to use a CLI when they create their agents. It's just not happening. MCP is point-and-click for them.

MCP is far from dead, at least outside of tech circles.

lasgawe•1h ago
I don't know about this. I use AI, but I've never used or tried MCP. I've never had any problems with the current tools.
I_am_tiberius•1h ago
That's the way my 80 year old grandpa talks.
dnautics•1h ago
What, honestly, is the difference between an MCP and a skill + instructions + curl?

Really, it seems to me the difference is that an MCP could be more token-efficient, but it isn't, because you dump every MCP's instructions into your context all the time.

Then again, skills frequently don't get triggered.

It just seems like coding-agent bugs/choices and protocol design?

Nevin1901•1h ago
This is actually the first use case where I agree with the poster. Really interesting, especially for technical people using AI. Why would you spend time setting up and installing an MCP server when you can give it one man page?
ddp26•1h ago
I don't understand the CLI-vs-MCP framing. In CLIs like Claude Code, MCPs give a lot of additional functionality, such as status polling, which is hard to get right with raw documentation on what APIs to call.
whatever1•1h ago
First they came for our RAGs, now for our MCPs. What’s next ?
the_mitsuhiko•1h ago
> OpenClaw doesn’t support it. Pi doesn’t support it.

It's maybe not optimal to conclude anything from these two. The Vienna school of AI agents focuses on self-extending agents, and that's not really compatible with MCP. There are lots of other approaches where MCP is very entrenched and will probably stick around.

mudkipdev•1h ago
This got renamed right in front of my eyes
appsoftware•1h ago
?? I'm using my own remote MCP server with OpenClaw now. I do understand the use case for CLI: in his Lex Fridman interview, the creator highlights some of the advantages of CLI, such as being able to grep over responses. But there are situations where remote MCP works really well, such as when OAuth is used for authentication: you hit an endpoint on the MCP server, get redirected to authenticate and authorize scopes, and the auth server then responds to the MCP server.
iamspoilt•1h ago
As a counter-argument to the kubectl example in the article, I found the k8s MCP (https://github.com/containers/kubernetes-mcp-server) to be particularly useful for restricting LLM access to certain tools, such as the exec and delete tools, something that is not doable out of the box with the kubectl CLI (unless you use the --as or --as-group flags and don't tell the LLM what user/group those are).

I have used the k8s MCP directly inside GitHub Copilot Chat in VS Code and restricted the write tools in the Configure Tools prompt. With a pseudo-protocol established via this MCP and the IDE integration, I find it much safer to prompt the LLM into debugging a live K8s cluster than without any such primitives.

So I don't see why MCPs are or should be dead.

simonw•56m ago
MCP makes sense when you're not running a full container-based Unix environment for your agent to run Bash commands inside of.
someguy101010•35m ago
Yep! That's the motivation behind https://github.com/r33drichards/mcp-js

I want to be able to give agents access to computation in a secure way without giving them full access to a computer

mavam•42m ago
Why choose if you can have both? You can turn any MCP into a CLI with Pete's MCPorter: https://mcporter.dev.

Since I've just switched from buggy Claude Code to pi, I created an extension for it: https://github.com/mavam/pi-mcporter.

There are still a few OAuth quirks, but it works well.

sebast_bake•33m ago
The opposite is true. CLI-based integration does not exist in a single consumer-grade AI agent product that I'm aware of. CLI is only used in products like Claude Code and OpenClaw that target technically competent users.

For the other 99% of the population, MCP offers security guardrails and simple, consistent auth. Much better than CLI for the vast majority of use cases involving non-technical people.

ejholmes•30m ago
Hi friends! Author here. This blew up a bit, so a few words.

The article's title and content are intentionally provocative; that's just to get people thinking. My real views are probably a lot more balanced. I totally get that there's a space where MCP probably does make sense, particularly in areas where CLI invocation would be challenging. I think we probably could have come up with something better than MCP to fill that space, but it's still better than nothing.

Really, all I want folks to take away from this is to think "hmm, maybe a CLI would actually be better for this particular use case". If I were to point a finger at anything in particular, it would be Datadog and Slack, who have chosen to build MCPs instead of official CLIs that agents can use. A CLI would be infinitely better (for me).

csheaff•10m ago
Thank you for writing this. I've had similar thoughts myself and have been teetering back and forth between MCP and skills that invoke a CLI. I'm hoping this creates a discussion that points to the right pattern.
jackfranklyn•27m ago
The token budget angle is what makes this a real architectural decision rather than a philosophical one.

I've been using both approaches in projects, and the pattern I've landed on: MCP for anything stateful (DB connections, authenticated sessions, browser automation) and CLI for stateless operations where the output is predictable. The reason is simple: MCP tool definitions sit in context permanently, so you're paying tokens whether you use them or not. A CLI you can invoke on demand and forget.

The discovery aspect is underrated though. With MCP the model knows what tools exist and what arguments they take without you writing elaborate system prompts. With CLI the model either needs to already know the tool (grep, git, curl) or you end up describing it anyway, which is basically reinventing tool definitions.

Honestly the whole debate feels like REST vs GraphQL circa 2017. Both work, the answer depends on your constraints, and in two years we'll probably have something that obsoletes both.

brumar•4m ago
For personal agents like Claude Code, CLIs are awesome.

In web/cloud-based environments, giving a CLI to the agent is not easy. Codemode comes to mind, but often the tool is externalized anyway, so MCP comes in handy. Standardisation of auth makes sense in these environments too.