Although Skills are just markdown files, it's good to see them "donate" it.
Their goal seems simple: focus on coding and keep improving it. They've found a great niche there, and hopefully a revenue-generating business.
OpenAI, on the other hand, doesn't give me the same vibes; they don't seem very focused. They're playing catch-up with both Google's models and Anthropic.
Apple has Shortcuts, but they haven't promoted it as a standard that other people can build on.
By contrast, this is something you can use even if you have nothing to do with Claude, and the tools you create will be compatible with the wider ecosystem.
Many, many MCPs could and should just be skills instead.
Might add this to the next https://hackernewsai.com/ newsletter.
We'll see how many of these are around in a few years.
The reason I ask is that the pace of new things arriving is overwhelming, hence I was tempted to just ignore it. Not because things showed signs of transience, but because I was drowning and didn't know where to start. That is not the same thing as actually observing signs of things being too frothy.
Right now, models have roughly all of the written knowledge available to mankind, minus some obscure held-out private archives and so on. They have excellent skills and a general ability to construct plausible sequences of actions to accomplish work, but we need to hold their hands to get decent performance across a wide range of activities. Skills, agent frameworks, and MCP carve out different domains of that problem, and the successful solutions provide training data for future models: either the capability gets generalized, or the labs can create a vast mountain of synthetic data following the successful patterns and make the next generation of models incredibly useful for a huge number of tasks, by default.
It might also be possible that, by studying the problem and identifying where mode collapse and training issues prevent the right sort of generalization, they can tweak the architecture, solve the deficiency through normal training runs, and thereby discard the need for all the bespoke, artisanal agent specifications.
However, the "wait it out" strategy needs a timeout. It might happen that agentic crutches around LLMs bear fruit much sooner than high-quality LLMs arrive. If you don't have a timeout or a decent exit criterion, you may end up waiting indefinitely, or at least until the reality becomes too painful to ignore.
The "ski rental problem" comes to mind here (keep renting until you've paid the purchase price, then buy; that caps your cost at twice the optimum), but maybe there is another "wait it out" exit strategy?
https://github.com/alganet/skills/blob/main/skills/left-padd...
Either way, that’s hilarious. Well done.
It's a much better system in my experience.
There's no real benefit to the MCP protocol over a regular API with a published "client" that a local LLM can invoke. The only downside is that you'd have to pull the client first.
I am using "skill" here loosely to mean an executable function, not specifically Claude Skills.
If the LLM/agent executes tools via code in a sandbox (which is what things are moving towards), all LLM tools can simply be defined as regular functions that have the flexibility to do anything.
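As a sketch of that idea (everything here is invented: the function name, arguments, and output shape are hypothetical), a "tool" in that world is just an ordinary program the sandbox compiles and runs:

    // Hypothetical sketch: the "tool" is a plain Go program the agent's
    // sandbox runs directly; no MCP server or protocol layer in between.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // SearchIssues is an ordinary function (name and behavior made up);
    // in practice it would just call a regular HTTP API.
    func SearchIssues(repo, query string) []string {
        return []string{fmt.Sprintf("results for %q in %s", query, repo)}
    }

    func main() {
        if len(os.Args) < 3 {
            fmt.Fprintln(os.Stderr, "usage: search-issues <repo> <query>")
            os.Exit(1)
        }
        // Emit JSON so the agent can parse the result back into context.
        json.NewEncoder(os.Stdout).Encode(SearchIssues(os.Args[1], os.Args[2]))
    }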
I seriously doubt MCP will exist in any form a few years from now
Paper & applications published here: https://earthpilot.ai/metaskills/
---
persona: hacker
description: logical, talks about computers a lot, enjoys coffee, somewhat snarky and arrogant
---
<more details here>

"you're absolutely right!"
Please tell us how you REALLY feel about JavaScript.
Inversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in.
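For example (contents entirely invented, following the same frontmatter shape as the persona example above), the persisted skill might just be:

---
name: project-context
description: decisions and conventions summarized from an earlier session
---
We use sqlite, not postgres. Migrations live in db/migrations.
Run the full test suite before proposing any diff.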
So yes, it's just turtles, sorry, prompts all the way down.
This may all be very wrong, though, as it's mostly conjecture from the little I've worked with skills.
BUT what makes them powerful is that you can include code with the skill package.
Like I have a skill that uses a Go program to traverse the AST of a Go project to find different issues in it.
You COULD just prompt it, but then the LLM would have to dig around using find and grep. Now it runs a single executable which outputs an LLM-optimised clump of text for processing.
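A rough sketch of the kind of program such a skill might bundle (the specific check is invented, not the commenter's actual tool): walk the package ASTs with go/parser and go/ast and print compact, one-per-line findings.

    // Invented example check: flag exported functions with no doc comment,
    // printed one per line so the output stays compact for the LLM.
    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: astcheck <dir>")
            os.Exit(1)
        }
        fset := token.NewFileSet()
        pkgs, err := parser.ParseDir(fset, os.Args[1], nil, parser.ParseComments)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, pkg := range pkgs {
            for _, file := range pkg.Files {
                ast.Inspect(file, func(n ast.Node) bool {
                    if fn, ok := n.(*ast.FuncDecl); ok && fn.Name.IsExported() && fn.Doc == nil {
                        pos := fset.Position(fn.Pos())
                        fmt.Printf("undocumented export %s at %s:%d\n", fn.Name.Name, pos.Filename, pos.Line)
                    }
                    return true
                })
            }
        }
    }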
Apart from Google Inc., I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard. [0]
"MCP" is one of the worst so-called "standards" built since JWT was proposed. So I don't take Anthropic seriously when they create so-called "open standards", especially when the reference implementation is in JavaScript or TypeScript.
> I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard.
Why would the IETF have anything to do with LLM/agent standards? This seems like a category error. They also don’t ratify web standards, for example.
But skills don't really solve the problem. Turning that workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here.
I think that they often do solve the problem; they just may have some other side effects/trade-offs.
The best one we have thought of so far.
Marketing. That defines pretty much everything Anthropic does beyond frontier model training. They're the same people producing sensationalized research headlines about LLMs trying to blackmail folks to avoid being deleted.
It has been published as an open specification.
Whether it is a standard isn't for them to declare.
It is functionally a skill. I suppose once Antigravity supports skills, I will make it one officially.