Furthermore, with all the hype around MCP servers, and simply the number of servers that now exist, do they just immediately become obsolete? It's also a bit fuzzy to me exactly how an LLM will choose an MCP tool over a skill and vice versa...
If you're running an MCP server just to expose local filesystem resources, then it's probably obsolete. But skills don't cover a lot of the functionality that MCP offers.
I also think "skills" is a bad name. I guess it's a reference to the fact that it can run scripts you provide, but the announcement really seems to be more about the hierarchical docs. It's really more of a selective context-loading system than a "skill".
What bugs me: if we're optimizing for LLM efficiency, we should use structured schemas like JSON. I understand the thinking that Markdown is a happy medium between human and computer understanding, but Markdown parsing is non-deterministic. Highly structured data would be more reliable for programmatic consumption while still being readable.
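For illustration, a structured skill descriptor might look something like the sketch below (the field names here are invented, not from any announced schema):

    {
      "name": "adding-css",
      "description": "Conventions for adding CSS to this project",
      "instructions": "docs/ADDING_CSS.md",
      "scripts": ["scripts/lint_css.py"]
    }

A JSON parse either succeeds or fails outright, whereas two Markdown parsers can disagree about where a heading's section ends.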
*I use a TUI to manage the context.
Over time I would systematically create separate specialized docs around certain topics and link them in my CLAUDE.md file, but notably without using the "@" symbol, which to my understanding always causes Claude to ingest the linked files, unnecessarily bloating your prompt context.
So my CLAUDE.md file would have a header section like this:
# Documentation References
- When adding CSS, refer to: docs/ADDING_CSS.md
- When adding or incorporating images, refer to: docs/ADDING_IMAGES.md
- When persisting data for the user, refer to: docs/STORAGE_MANAGER.md
- When adding logging information, refer to: docs/LOGGER.md
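For contrast, the "@" form I avoid would look like the line below; to my understanding, Claude Code ingests each @-referenced file into context at session start rather than on demand:

    - When adding CSS, refer to: @docs/ADDING_CSS.md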
It seems like this is less of a breakthrough and more of an iterative improvement that formalizes this process from an organizational perspective.
https://github.com/anthropics/skills/blob/main/document-skil...
There are many edge cases when writing/reading Excel files with Python, and this nails many of them.
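One classic example of the kind of gotcha involved, sketched here with openpyxl (the skill's actual docs may cover different cases):

    from openpyxl import load_workbook

    # By default, cells that contain formulas come back as the formula
    # string, not a computed value.
    wb = load_workbook("report.xlsx")
    print(wb.active["A1"].value)   # e.g. '=SUM(B1:B10)'

    # data_only=True instead returns the value Excel cached on last save.
    # If the file was generated by a script and never opened in Excel,
    # that cache is empty and you get None -- a classic surprise.
    wb = load_workbook("report.xlsx", data_only=True)
    print(wb.active["A1"].value)   # e.g. 42, or None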
MCP gives the LLM access to your APIs. These skills are just text files with context about how to perform specific tasks.
RAG was originally about adding extra information to the context so that an LLM could answer questions that needed that extra context.
On that basis I guess you could call skills a form of RAG, but honestly at that point the entire field of "context engineering" can be classified as RAG too.
Maybe RAG as a term is obsolete now, since it really just describes how we use LLMs in 2025.
And, this is why I usually use simple system prompts/direct chat for "heavy" problems/development that require reasoning. The context bloat is getting pretty nutty, and is definitely detrimental to performance.
If we're considering primarily coding workflows and CLI-based agents like Claude Code, I think it's true that CLI tools can provide a ton of value. But once we go beyond that to other roles (e.g., CRM work, sales, support, operations, finance), MCP-based tools are going to have a better form factor.
I think Skills go hand in hand with MCPs; it's not a competition between the two, and they serve different purposes.
I am interested, though, in the point where the Python code in Skills can call MCPs directly via the interpreter... that is the big unlock (something we have tried and found to work really well).
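A minimal sketch of what that might look like using the official MCP Python SDK (the server command and tool name here are placeholders):

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Launch an MCP server as a subprocess and talk to it over stdio,
        # from inside a skill's own script rather than via the model.
        params = StdioServerParameters(command="my-mcp-server", args=[])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool("list_records", {"limit": 10})
                print(result)

    asyncio.run(main())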
You can drive one or two MCPs off a model that happily runs on a laptop (or even a phone). I wouldn't trust those models to go read a file and then successfully make a bunch of curl requests!
I really enjoyed seeing Microsoft Amplifier last week, which similarly has a bank of different specialized sub-agents. These other banks of markdowns that get turned on for special purposes feels very similar. https://github.com/microsoft/amplifier?tab=readme-ov-file#sp... https://news.ycombinator.com/item?id=45549848
One of the major twists with Skills seems to be that skills also have "frontmatter YAML" that is always loaded. It still sounds like it's at least somewhat up to the user to engage the Skills, but this frontmatter offers… something that purports to help.
> There’s one extra detail that makes this a feature, not just a bunch of files on disk. At the start of a session Claude’s various harnesses can scan all available skill files and read a short explanation for each one from the frontmatter YAML in the Markdown file. This is very token efficient: each skill only takes up a few dozen extra tokens, with the full details only loaded in should the user request a task that the skill can help solve.
I'm not sure what exactly this does, but conceptually it sounds smart to have a top-level awareness of the specializations available.
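For reference, the frontmatter in question is the YAML header at the top of a skill's Markdown file; a sketch (the skill shown here is made up, but the name/description shape matches the examples in Anthropic's skills repo):

    ---
    name: excel-reports
    description: Read and write .xlsx files, handling formulas, merged cells, and other edge cases
    ---

    # Excel Reports
    Full instructions and scripts live below the frontmatter and are
    only loaded when a task actually calls for this skill.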
I do feel like I could be missing some significant aspects of this. But the mod-launched paradigm feels like a fairly close parallel?
I hate how we are focusing on just adding more information to lookup maps, instead of focusing on deriving those maps from scratch.
Rather than defining skills and execution agents, let a meta-planning agent determine the best path based on objectives.
How are skills different from the SlashCommand tool in claude-code, then?
Basically, the way it would work is: in the next model, it would avoid role-playing-type instructions unless they come from skill files; internally they would keep track of how often users changed their skill files, and it would be a TOS violation to change them too often.
Though I gave up on Anthropic in terms of true AI alignment long ago, I know they are working on a trivial sort of alignment, one that prevents the model from being useful to pen testers, for example.
https://ampcode.com/news/toolboxes
Those are nice too — a much more hackable way of building simple personal tools than MCP, with less token and network use.
I do not understand this. cli-tool --help output still occupies tokens, right?