It’s designed for LLM prompts / RAG contexts where you want something more structured than token masking. You can define “features” as tokens, sentences, paragraphs, tools, images, or arbitrary fields; pin “permanent” context that should always be included (e.g., system prompt + question); and compute attributions with options like caching and multi-threading to speed up the many model calls.
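To make the idea concrete, here is a minimal sketch of exact Shapley attribution over prompt features, with “permanent” features included in every coalition. This is generic illustration code, not llmSHAP’s actual API; the function names (`shapley_values`, `toy_value`) and the toy value function are my own assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn, permanent=()):
    """Exact Shapley attribution over prompt features.

    `permanent` features are added to every coalition and receive
    no attribution themselves. `value_fn` maps a set of features
    (e.g., a subset of the prompt) to a scalar score, such as an
    answer-quality metric from a model call.
    """
    players = [f for f in features if f not in permanent]
    n = len(players)
    phi = {f: 0.0 for f in players}
    for f in players:
        others = [g for g in players if g != f]
        for k in range(n):  # coalition sizes 0 .. n-1
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                base = set(permanent) | set(s)
                phi[f] += weight * (value_fn(base | {f}) - value_fn(base))
    return phi

# Toy value function (an assumption for illustration): answer quality
# is 1.0 only when the question and the right retrieved chunk are
# both present in the prompt.
def toy_value(coalition):
    return 1.0 if {"question", "chunk_A"} <= coalition else 0.0

attr = shapley_values(
    ["question", "chunk_A", "chunk_B"],
    toy_value,
    permanent=("question",),
)
# chunk_A gets all the credit; the irrelevant chunk_B gets none.
```

In a real setting `value_fn` would be a model call, which is exactly why the caching and multi-threading options matter: exact Shapley values need 2^n coalition evaluations.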
Repo: https://github.com/filipnaudot/llmSHAP
Tutorial: https://filipnaudot.github.io/llmSHAP/tutorial.html
I’d love feedback: do you think this could be genuinely useful in your LLM workflows (prompt engineering, RAG debugging, evals, guardrail auditing, etc.)?