
Claude Skills

https://www.anthropic.com/news/skills
168•meetpateltech•2h ago
https://www.anthropic.com/engineering/equipping-agents-for-t...

Comments

j45•1h ago
I wonder if Claude Skills will help return Claude to the level of performance it had a few months ago.
bicx•1h ago
Interesting. For Claude Code, this seems to have generous overlap with existing practice of having markdown "guides" listed for access in the CLAUDE.md. Maybe skills can simply make managing such guides more organized and declarative.
kfarr•1h ago
Yeah my first thought was, oh it sounds like a bunch of CLAUDE.md's under the surface :P
crancher•1h ago
It's interesting (to me) visualizing all of these techniques as efforts to replicate A* pathfinding through the model's vector space "maze" to find the desired outcome. The potential to "one shot" any request is plausible with the right context.
candiddevmike•1h ago
> The potential to "one shot" any request is plausible with the right context.

You too can win a jackpot by spinning the wheel just like these other anecdotal winners. Pay no attention to your dwindling credits every time you do though.

NitpickLawyer•54m ago
On the other hand, our industry has always chased the "one baby in one month out of 9 mothers" paradigm. While you couldn't do that with humans, it's likely you'll soon (tm) be able to do it with agents.
j45•1h ago
If so, it would be a better way than encapsulating functionality in markdown.

I have been using claude code to create some and organize them, but they can have diminishing returns.

guluarte•16m ago
it may also suggest that a solution for context rot isn't coming in the foreseeable future
phildougherty•1h ago
getting hard to keep up with skills, plugins, marketplaces, connectors, add-ons, yada yada
prng2021•1h ago
Yep. Now I need an AI to help me use AI
consumer451•1h ago
I mean, that is a very common thing that I do.
wartywhoa23•57m ago
That's why the key word for all the AI horror stories that have been emerging lately is "recursion".
consumer451•56m ago
Does that imply no human in the loop? If so, that's not what I meant, or do. Whoever is doing that at this point: bless your heart :)
gordonhart•1h ago
Agree — it's a big downside as a user to have more and more of these provider-specific features. More to learn, more to configure, more to get locked into.

Of course this is why the model providers keep shipping new ones; without them their product is a commodity.

hansonkd•1h ago
That's the start of the singularity. The changes will keep accelerating and fewer and fewer people will be able to keep up, until only the AIs themselves know how to use them.
matthewaveryusa•1h ago
Nah, we'll create AI to manage the AI....oh
skybrian•47m ago
People thought the same in the ’90s. The argument that technology accelerates and “software eats the world” doesn’t depend on AI.

It’s not exactly wrong, but it leaves out a lot of intermediate steps.

xpe•41m ago
Yes, and as we rely on AI to help us choose our tools... the phenomenon feels very different, don't you think? Human thinking, writing, talking, etc. is becoming less important in this feedback loop, it seems to me.
xpe•44m ago
abstractions all the way down:

    abstraction
      abstraction
        abstraction
          abstraction
            ...
marcusestes•57m ago
Agreed, but I think it's actually simple.

Plugins include:

* Commands
* MCPs
* Subagents
* Now, Skills

Marketplaces aggregate plugins.

xpe•49m ago
If I were to say "Claude Skills can be seen as a particular productization of a system prompt" would I be wrong?

From a technical perspective, it seems like unnecessary complexity in a way. Of course I recognize there are a lot of product decisions that seem to layer on 'unnecessary' abstractions but still have utility.

In terms of connecting with customers, it seems sensible, under the assumption that Anthropic is triaging customer feedback well and leading to where they want to go (even if they don't know it yet).

Update: a sibling comment just wrote something quite similar: "All these things are designed to create lock in for companies. They don’t really fundamentally add to the functionality of LLMs." I think I agree.

tempusalaria•48m ago
All these things are designed to create lock in for companies. They don’t really fundamentally add to the functionality of LLMs. Devs should focus on working directly with model generate apis and not using all the decoration.
tqwhite•6m ago
Me? I love some lock in. Give me the coolest stuff and I'll be your customer forever. I do not care about trying to be my own AI company. I'd feel the same about OpenAI if they got me first... but they didn't. I am team Anthropic.
dominicq•46m ago
Features will be added until morale improves
hansmayer•37m ago
Well, have some understanding: the good folks need to produce something, since their main product is not delivering the much-yearned-for era of joblessness yet. It's not for you, it's signalling to their investors: see, we're not burning your cash paying a bunch of PhDs to tweak the model weights without visible results. We are actually building products. With a huge and willing A/B testing base.
hiq•37m ago
IMHO, don't, don't keep up. Just like "best practices in prompt engineering", these are just temporary workarounds for current limitations, and they're bound to disappear quickly. Unless you really need the extra performance right now, just wait until models get you this performance out of the box instead of investing in learning something that'll be obsolete in months.
BoredPositron•1h ago
It is a bit ironic that the better the models get, the more user input they seem to need.
quintu5•35m ago
More like they can better react to user input within their context window. With older models, the value of that additional user input would have been much more limited.
nozzlegear•1h ago
It superficially reminds me of the old "Alexa Skills" thing (I'm not even sure if Alexa still has "Skills"). It might just be the name making that connection for me.
j45•1h ago
Seems to be a bit more than that.
phildougherty•1h ago
Alexa skills are 3rd-party add-ons/plugins. Want to control your Hue lights? Add the Philips Hue skill. I think Claude skills in an Alexa world would be like having to seed Alexa with a bunch of context for it to remember how to turn my lights on and off, or it will randomly attempt a bunch of incorrect ways of doing it until it gets lucky.
candiddevmike•1h ago
And how many of those Alexa Skills are still being updated...

This is where waiting for this stuff to stabilize/standardize, and then writing a "skill" based on an actual RFC or standard protocol, makes more sense, IMO. I've been burned too many times building vendor-locked chatbot extensions.

nozzlegear•57m ago
> And how many of those Alexa Skills are still being updated...

Not mine! I made a few when they first opened it up to devs, but I was trying to use Azure Logic Apps (something like that?) at the time which was supremely slow and finicky with F#, and an exercise in frustration.

joilence•1h ago
If I understand correctly, it looks like a `skill` is an instructed usage pattern for tools, so it saves the LLM agent the trial and error of using tools? And it's basically just a prompt.
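For reference, Anthropic's docs describe a skill as a folder whose SKILL.md starts with YAML frontmatter (a name plus the short description Claude skims when deciding whether to load it), followed by the instructions themselves. The skill name, steps, and script path below are made up for illustration:

```markdown
---
name: pdf-report
description: Extract tables from PDF files and summarize them. Use when the user asks to analyze a PDF.
---

# PDF report

1. Run `scripts/extract_tables.py <input.pdf>` to dump each table as CSV.
2. Summarize the CSVs and cite page numbers in the final answer.
```

Only the frontmatter is loaded up front; the body (and any bundled scripts) is read when the skill is actually triggered.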
sshine•1h ago
I love how the promise of free labor motivates everyone to become API first, document their practices, and plan ahead in writing before coding.
ebiester•1h ago
It helps that you can have the "free" labor document the processes and build the plan.
skybrian•55m ago
Cheaper, not free. Also, no training to learn a new skill.

Building a new one that works well is a project, but then it will scale up as much as you like.

This is bringing some of the advantages of software development to office tasks, but you give up some things like reliable, deterministic results.

sshine•46m ago
There is an acquisition cost of researching and developing the LLM, but the running cost should not be classified as a wage, hence cost of labor is zero.
maigret•36m ago
It’s still opex for finance
_pdp_•1h ago
At first I wasn't sure what this is. Upon further inspection skills are effectively a bunch of markdown files and scripts that get unzipped at the right time and used as context. The scripts are executed to get deterministic output.

The idea is interesting and something I shall consider for our platform as well.

nperez•1h ago
Seems like a more organized way to do the equivalent of a folder full of md files + instructing the LLM to ls that folder and read the ones it needs
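The manual version of that pattern can be sketched in a few lines: scan a guides folder and surface only the one-line blurbs, so the model skims descriptions and reads a full file only when needed (folder layout and blurb convention are assumptions, not anything Claude-specific):

```python
import pathlib

def list_guides(folder: str) -> dict[str, str]:
    """Map each markdown guide to its first heading line, which serves as
    the short 'blurb' the model skims when deciding which guide to load."""
    guides = {}
    for path in sorted(pathlib.Path(folder).glob("*.md")):
        first_line = path.read_text().splitlines()[0]
        # Strip leading '#' markers and whitespace from the heading.
        guides[path.name] = first_line.lstrip("# ").strip()
    return guides
```

The full file contents stay out of context until one is actually selected, which is the same token-saving trick skills formalize.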
j45•1h ago
If so it would be most welcome, since LLMs don't always follow a folder full of MD files with the same depth and consistency.
RamtinJ95•1h ago
what makes it more likely that claude would read these .md files then?
phildougherty•52m ago
trained to
meetpateltech•1h ago
Detailed engineering blog:

"Equipping agents for the real world with Agent Skills" https://www.anthropic.com/engineering/equipping-agents-for-t...

dang•1h ago
Thanks, we'll put that link in the toptext as well
jampa•1h ago
I think this is great. A problem with huge codebases is that CLAUDE.md files become bloated with niche workflows like CI and E2E testing. Combined with MCPs, this pollutes the context window and eventually degrades performance.

You get the best of both worlds if you can select tokens by problem rather than by folder.

The key question is how effective this will be with tool calling.

crancher•1h ago
Seems like the exact same thing, from front page a few days ago: https://github.com/obra/superpowers/tree/main
Flux159•1h ago
I wonder how this works with mcpb (renamed from dxt Desktop extensions): https://github.com/anthropics/mcpb

Specifically, it looks like skills are a different structure than mcp, but overlap in what they provide? Skills seem to be just markdown file & then scripts (instead of prompts & tool calls defined in MCP?).

Question I have is why would I use one over the other?

irtemed88•1h ago
Can someone explain the differences between this and Agents in Claude Code? Logically they seem similar. From my perspective it seems like Skills are more well-defined in their behavior and function?
j45•1h ago
Skills might be used by Agents.

Skills can merge together like lego.

Agents might be more separated.

ryancnelson•1h ago
The uptake on Claude-skills seems to have a lot of momentum already! I was fascinated on Tuesday by “Superpowers” , https://blog.fsck.com/2025/10/09/superpowers/ … and then packaged up all the tool-building I’ve been working on for awhile into somewhat tidy skills that i can delegate agents to:

http://github.com/ryancnelson/deli-gator I’d love any feedback

mousetree•1h ago
I'm perplexed why they would use such a silly example in their demo video (rotating an image of a dog upside down and cropping). Surely they can find more compelling examples of where these skills could be used?
alansaber•41m ago
Dog photo >> informing the consumer
Mouvelie•27m ago
You'd think so, eh ? https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
antiloper•14m ago
The developer page uses a better example, a PDF processing skill: https://github.com/anthropics/skills/tree/main/document-skil...

I've been emulating this in claude code by manually @tagging markdown files containing guides for common tasks in our repository. Nice to see that this step is now automatic as well.

bgwalter•1h ago
"Skills are repeatable and customizable instructions that Claude can follow in any chat."

We used to call that a programming language. Here, they are presumably repeatable instructions how to generate stolen code or stolen procedures so users have to think even less or not at all.

azraellzanella•1h ago
"Keep in mind, this feature gives Claude access to execute code. While powerful, it means being mindful about which skills you use—stick to trusted sources to keep your data safe."

Yes, this can only end well.

m3kw9•1h ago
I feel like this is making things more complicated than they need to be. LLMs should do this automatically behind the scenes; you won't even see it.
Imnimo•1h ago
I feel like a danger with this sort of thing is that the capability of the system to use the right skill is limited by the little blurb you give about what the skill is for. Contrast with the way a human learns skills - as we gain experience with a skill, we get better at understanding when it's the right tool for the job. But Claude is always starting from ground zero and skimming your descriptions.
j45•55m ago
LLMs are probability-based calculations, so they will always skim to some degree, always guess to some degree, and often pick the best choice available to them, even though it might not be the right one.

For folks to whom this seems elusive, it's worth learning how the internals actually work; it helps a great deal in how to structure things in general, and then over time, as the parent comment said, specifically for individual cases.

zobzu•54m ago
IMO this is a context window issue. Humans are pretty good at memorizing super broad context without great accuracy. Sometimes our "recall" function doesn't even work right ("How do you say 'blah' in German again?"), so the more you specialize (say, 10k hours / mastery), the better you are at recalling a specific set of "skills", but perhaps not other skills.

On the other hand, LLMs have a programmatic context with consistent storage and the ability to have perfect recall; they just don't always generate the expected output in practice, as the cost to go through ALL context is prohibitive in terms of power and time.

Skills.. or really just context insertion is simply a way to prioritize their output generation manually. LLM "thinking mode" is the same, for what it's worth - it really is just reprioritizing context - so not "starting from scratch" per se.

When you start thinking about it that way, it makes sense - and it helps using these tools more effectively too.

dwaltrip•40m ago
There are ways to compensate for lack of “continual learning”, but recognizing that underlying missing piece is important.
ryancnelson•37m ago
I commented here already about deli-gator ( https://github.com/ryancnelson/deli-gator ) , but your summary nailed what I didn’t mention here before: Context.

I’d been re-teaching Claude to craft REST API calls with curl every morning for months before I realized that skills would let me delegate that to cheaper models, re-using cached-token queries, and save my context window for my actual problem-space CONTEXT.

mbesto•9m ago
> IMO this is a context window issue.

Not really. It's a consequential issue. No matter how big or small the context window is, LLMs simply do not have the concept of goals and consequences. Thus, it's difficult for them to acquire dynamic and evolving "skills" like humans do.

seunosewa•49m ago
The blurbs can be improved if they aren't effective. You can also invoke skills directly.

The description is equivalent to your short term memory.

The skill is like your long term memory which is retrieved if needed.

These should both be considered as part of the AI agent. Not external things.

blackoil•44m ago
Most of the experience is general information not specific to project/discussion. LLM starts with all that knowledge. Next it needs a memory and lookup system for project specific information. Lookup in humans is amazingly fast, but even with a slow lookup, LLMs can refer to it in near real-time.
andruby•12m ago
Would this requirement to start from ground zero in current LLMs be an artefact of the requirement to have a "multi-tenant" infrastructure?

Of course OpenAI and Anthropic want to be able to reuse the same servers/memory for multiple users, otherwise it would be too expensive.

Could we have "personal" single-tenant setups? Where the LLM incorporates every previous conversation?

mbesto•11m ago
> Contrast with the way a human learns skills - as we gain experience with a skill, we get better at understanding when it's the right tool for the job.

Which is precisely why Richard Sutton doesn't think LLMs will evolve to AGI[0]. LLMs are based on mimicry, not experience, so it's more likely (according to Sutton) that AGI will be based on some form of RL (reinforcement learning) and not neural networks (LLMs).

More specifically, LLMs don't have goals and consequences of actions, which is the foundation for intelligence. So, to your point, the idea of a "skill" is more akin to a reference manual, than it is a skill building exercise that can be applied to developing an instrument, task, solution, etc.

[0] https://www.youtube.com/watch?v=21EYKqUsPfg

buildbot•7m ago
The industry has been doing RL on many kinds of neural networks, including LLMs, for quite some time. Is this person saying we should do RL on some kind of non-neural-network design? Why is that more likely to bring AGI than an LLM?

> More specifically, LLMs don't have goals and consequences of actions, which is the foundation for intelligence.

Citation?

jfarina•3m ago
Why are you asking them to cite something for that statement? Are you questioning whether it's the foundation for intelligence or whether LLMS understand goals and consequences?
fridder•1h ago
All of these random features is just pushing me further towards model agnostic tools like goose
xpe•28m ago
Thanks for sharing goose.

This phase of LLM product development feels a bit like the Tower of Babel days with Cloud services before wrapper tools became popular and more standardization happened.

asdev•57m ago
I wonder what the accuracy is for Claude to always follow a Skill accurately. I've had trouble getting LLMs to follow specific workflows 100% consistently without skipping or missing steps.
rob•57m ago
Subagents, plugins, skills, hooks, mcp servers, output styles, memory, extended thinking... seems like a bunch of stuff you can configure in Claude Code that overlap in a lot of areas. Wish they could figure out a way to simplify things.
singularity2001•42m ago
Also, the post does not contain a single word about how it relates to the very similar agents in Claude Code. Capabilities, connectors, tasks, apps, custom GPTs... the space needs some serious consolidation and standardization!

I noticed the general tendency for overlap also when trying to update claude since 3+ methods conflicted with each other (brew, curl, npm, bun, vscode).

Might this be the handwriting of AI? ;)

kordlessagain•30m ago
The post is simply "here's a folder with crap in it I may or may not use".
CuriouslyC•28m ago
My agent has handlebars system prompts that you can pass variables to at orchestration time. You can cascade imports and such; it's really quite powerful. A few variables can result in a radically different system prompt.
_greim_•51m ago
> Developers can also easily create, view, and upgrade skill versions through the Claude Console.

For coding in particular, it would be super-nice if they could just live in a standard location in the repo.

GregorStocks•44m ago
Looks like they do:

> You can also manually install skills by adding them to ~/.claude/skills.
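Assuming that path is right, one way to approximate the repo-local setup the parent comment wants is to keep skill folders versioned in the repository and symlink them into the per-user directory (the repo layout and skill name here are hypothetical):

```shell
# Keep skills versioned in the repo, then link each one into the
# directory Claude Code reads from (per the quoted docs).
# ".claude/skills/pdf-report" is an illustrative repo layout.
mkdir -p ~/.claude/skills
ln -sfn "$(pwd)/.claude/skills/pdf-report" ~/.claude/skills/pdf-report
```

That keeps the skill reviewable in version control while still being picked up locally.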

deeviant•48m ago
Basically just rules/workflows from cursor/windsurf, but with a UI.
pixelpoet•47m ago
Aside: I really love Anthropic's design language, so beautiful and functional.
maigret•39m ago
Yes and fantastically executed, consistently through all their products and website - desktop, command line, third parties and more.
jasonthorsness•42m ago
When the skill is used locally in Claude Code does it still run in a virtual machine? Like some sort of isolation container with the target directory mounted?
xpe•40m ago
Better when blastin' Skills by Gang Starr (headphones recommended if at work):

https://www.youtube.com/watch?v=Lgmy9qlZElc

999900000999•40m ago
Can I just tell it to read the entire Godot source repo as a skill ?

Or is there some type of file limit here. Maybe the context windows just aren't there yet, but it would be really awesome if coding agents would stop trying to make up functions.

s900mhz•4m ago
Download the godot docs and tell the skill to use them. It won’t be able to fit the entire docs in the context but that’s not the point. Depending on the task it will search for what it needs
dearilos•39m ago
We're trying to solve a similar problem at wispbit - this is an interesting way to do it!
CuriouslyC•30m ago
Anything the model chooses to use is going to waste context and get utilized poorly. Also, the more skills you have, the worse they're going to be. It's subagents v2.

Just use slash commands, they work a lot better.

just-working•28m ago
I simply do not care about anything AI now. I have a severe revulsion to it. I miss the before times.
sega_sai•25m ago
There seems to be a lot of overlap of this with MCP tools. Also presumably if there are a lot of skills, they will be too big for the context and one would need some way to find the right one. It is unclear how well this approach will scale.
guluarte•17m ago
great! another set of files the models will completely ignore like CLAUDE.md
simonw•12m ago
I accidentally leaked the existence of these last Friday, glad they officially exist now! https://simonwillison.net/2025/Oct/10/claude-skills/
sva_•10m ago
All this AI, and yet it can't render properly on mobile.
mikkupikku•5m ago
I'd love a Skill for effective use of subagents in Claude Code. I'm still struggling with that.
