Not surprising.
Compare this to how we think about OAuth scopes or container sandboxing — you'd never ship a CI integration that gets read access to every repo in your org just because it needs to lint one. But that's essentially what's happening here with the token injection across all sessions.
The real problem isn't Vercel specifically, it's that Claude Code's plugin architecture doesn't have granular activation scopes yet. Plugins should declare which project types they apply to and only activate in matching contexts. Until that exists, every plugin author is going to make this same mistake — or exploit it.
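Until granular activation scopes exist, one user-side mitigation is a guard at the top of any hook script so it no-ops outside matching projects. A sketch, assuming a POSIX shell and that `vercel.json`, a `.vercel` directory, or a `next` dependency marks a Vercel project (the heuristic is mine, not anything Claude Code supports officially):

```shell
#!/bin/sh
# Hypothetical scope guard for a plugin hook script. Claude Code has
# no per-project activation scopes today; this is a workaround sketch.
# Heuristic: a directory is "in scope" if it has vercel.json, a
# .vercel directory, or a "next" dependency in package.json.
is_vercel_project() {
  [ -f vercel.json ] || [ -d .vercel ] || grep -q '"next"' package.json 2>/dev/null
}

if is_vercel_project; then
  echo "in scope: let the hook run"
else
  echo "out of scope: a real hook would exit 0 here, adding no tokens"
fi
```

A real hook would replace the `else` branch with a silent `exit 0`, so non-Vercel sessions pay no token or telemetry cost.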
But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:
> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.
(Needless to say, this is a supply chain attack in every meaningful way, and should be treated as such by security teams.)
And the argument that there's no room in the CLI for opt-in telemetry is absurd - their readme https://github.com/vercel/vercel-plugin?tab=readme-ov-file#i... literally has you install the Vercel plugin by calling `npx` against https://www.npmjs.com/package/plugins, a package written by a Vercel employee, which could add that opt-in prompt at any time.
IMO Vercel is not a good actor. One could make a good argument that they've embrace-extend-extinguished the entire future of React as an independent and self-contained foundational library, with the complexity of server-side rendering, the undocumented protocols that power it, and the resulting tight coupling to their server environments. Sadly, this behavior doesn't surprise me.
EDIT: That `npx plugins` code? It's not on GitHub, exists only on NPM, and as of v1.2.9 of that package, if you search https://www.npmjs.com/package/plugins?activeTab=code it already sends telemetry to https://plugins-telemetry.labs.vercel.dev/t, on an opt-out basis! I mean, you almost have to admire the confidence.
Like, bluntly, none of these people need slightly faster websites running on nextjs right now. Guillermo should focus on Vercel rather than his own ego. Just makes it seem gross to use his stuff, which is a shame because it's a good product.
I think it's fairly easy to tell what impact AI is having at Vercel. Knowing the pre-AI quality of the engineering at that company, I'm not surprised that in the AI era they're pushing stuff like this. I doubt anyone even thought to check it on a repo outside of a Vercel one.
Here are some environment variables that you'd like to set, if you're as paranoid as I am:
ANTHROPIC_LOG="debug"
CLAUDE_CODE_ACCOUNT_UUID="11111111-1111-1111-1111-111111111111"
CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING="1"
CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY="1"
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
CLAUDE_CODE_DISABLE_TERMINAL_TITLE="1"
CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION="false"
CLAUDE_CODE_ORGANIZATION_UUID="00000000-0000-0000-0000-000000000000"
CLAUDE_CODE_USER_EMAIL="root@anthropic.com"
DISABLE_AUTOUPDATER="1"
DISABLE_ERROR_REPORTING="1"
DISABLE_FEEDBACK_COMMAND="1"
DISABLE_TELEMETRY="1"
ENABLE_CLAUDEAI_MCP_SERVERS="false"
IS_DEMO="1"

You always had the option to not, ever, touch Vercel.
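Those `DISABLE_*` / `CLAUDE_CODE_*` variables can be exported from a shell profile. A sketch using a selection of them, noting that these are mostly undocumented variables and may change between Claude Code releases:

```shell
# Opt-out environment for Claude Code sessions (a selection from the
# list above; undocumented variables, so treat them as best-effort).
export DISABLE_TELEMETRY="1"
export DISABLE_ERROR_REPORTING="1"
export DISABLE_AUTOUPDATER="1"
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"

# Sanity check: list what is actually set before launching the CLI.
env | grep -E '^(DISABLE_|CLAUDE_CODE_)' | sort
```

Putting these in `~/.profile` or `~/.zshrc` applies them to every session rather than just the current shell.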
The question is whether these platforms are going to enforce their policies for plugins. For Claude Code in particular, this behavior explicitly violates their plugin policy (1D) here: https://support.claude.com/en/articles/13145358-anthropic-so...
It's a really tough problem, but Anthropic is the company I'd bet on to approach this thoughtfully.
I read that Anthropic may have gained more in goodwill than the $200M they lost in Pentagon contracts. It seems plausible.
We have been super heads-down on the initial versions of the plugin and are constantly improving it. Always super happy to hear feedback and track the changes on GitHub. I want to address the notes here:
The plugin is always on once installed in an agent harness. We do not want to limit it to detected Vercel projects, because we also want to help with greenfield projects ("Help me build an AI chat app").
We collect the native tool calls and bash commands; these are piped to our plugin. However, `VERCEL_PLUGIN_TELEMETRY=off` kills all telemetry.
All data is anonymous. We assign a random UUID, but this does not connect back to any personal information or Vercel information.
Prompt telemetry is opt-in and off by default. The hook asks once; if you don't answer, session-end cleanup marks it as disabled. We don't collect prompt text unless you explicitly say yes.
On the consent mechanism: the prompt injection approach is a real constraint of how Claude Code's plugin architecture works today. I mentioned this in the previous GitHub issue - if there's a better approach that surfaces this to users, we would love to explore it.
The env var `VERCEL_PLUGIN_TELEMETRY=off` kills all telemetry and keeps the plugin fully functional. We'll make that more visible, and overall make our wording around telemetry more visible for the future.
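To set it now, it is a one-line export (the variable name is as given above; how completely it disables collection depends on the installed plugin version):

```shell
# Disable the Vercel plugin's telemetry for this shell and children.
export VERCEL_PLUGIN_TELEMETRY=off

# Confirm before starting an agent session.
if [ "$VERCEL_PLUGIN_TELEMETRY" = "off" ]; then
  echo "vercel plugin telemetry disabled"
fi
```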
Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.
embedding-shape•1h ago
> every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope
> For users working across multiple projects (some Vercel, some not), this is a fixed ~19k token cost on every session — even when the session is pure backend work, data science, or non-Vercel frontend.
I know everything is vibeslopped nowadays, but how does one even end up shipping something like this? Checking that your plugin/extension/mod works in the contexts you want, and doesn't impact the contexts you don't, seems like the very first step in creating such a thing. "Where did the engineering go?" feels too generous, even: where did even the smallest amount of thinking go?
acedTrex•1h ago
What makes you think they do this with any of their products these days?
serial_dev•1h ago
Checking if your code also gets executed elsewhere a bazillion times, checking failure cases, etc... That's a luxury that you feel you can't afford when you are in "ship fast, break things" mode.
embedding-shape•1h ago
I've been there countless times, but never have I shipped software I didn't feel at least slightly confident about. And the only way to get confident about anything is to try it out. Both of those must have been lacking here, and then I don't understand what the developer was actually doing at all.
embedding-shape•1h ago
Seems to me their engineering practices suck, rather than the company suddenly wanting to slurp up as much data as possible; if they truly wanted that, they have about 10 better approaches for it, if they don't care about other things.
embedding-shape•57m ago
And frankly, the alternative would be too mentally taxing. So in the camp of "Good until proven otherwise" is where I remain for now.
mbesto•45m ago
The evidence is in the code! If you didn't intend for a capability to be there then why is it in the code?
> if they truly wanted that, they have about 10 better approaches for it, if they don't care about other things.
How so? What other approaches do they have that get this much data with little potential for reputational harm? This is a very common way to create plausible deniability ("we use it for improving our service, we don't know what we'll need so we just take everything and figure it out later") and then just revert the capability when people complain.
embedding-shape•42m ago
Bugs happen. I won't claim to know if it was intentional or not, but usually it ends up not being intentional.
> How so? What other approaches do they have that get this much data
Just upload everything you find as soon as you get invoked. Vercel has a ton of infrastructure and utilities they could execute this from, unless they care about reputational harm. Which I'm guessing they do, which makes it more likely to have been unintentional than intentional.
chuckadams•1h ago
The first part of your question answers the second. No one is left who cares. People are going to have to vote with their feet before that changes.