Token pricing and usage in Cursor changed significantly recently. To test this, I gave Cursor a simple task on a 192-line Swift file (including comments). The file is 1,213 tokens according to https://platform.openai.com/tokenizer .
Here is my prompt: "There is a bug when i click on a raw it plays the audio and it deletes it afterwards, even though without clicking the delete button."
The Cursor dashboard says it used 269,738 tokens, and records the usage accordingly. I am a Pro plan user.
Is there anyone else having similar issues?
muzani•3h ago
Most of the tokens are cache reads and cache writes. It's possible it's searching the whole codebase with these. My guess is they're embedding tokens, which are substantially cheaper.
Your 1,213 tokens should appear somewhere in the input column, probably fewer because of the caching. This is the magic by which your $20 buys more than $20 worth of raw API usage.
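To see why caching stretches the budget, here's a back-of-envelope sketch. The prices are assumptions (roughly in line with typical Sonnet-class API rates, where cache reads cost about a tenth of fresh input), not official Cursor or Anthropic figures:

```python
# Rough illustration of why cached tokens make $20 go further:
# re-reading the same 1,213-token file from cache costs ~10% of a fresh read.
# Both rates below are assumed, not official pricing.
INPUT_PRICE_PER_M = 3.00      # USD per 1M fresh input tokens (assumed)
CACHE_READ_MULTIPLIER = 0.1   # cache reads at ~0.1x input price (assumed)

tokens = 1213
fresh = tokens / 1e6 * INPUT_PRICE_PER_M
cached = fresh * CACHE_READ_MULTIPLIER
print(f"fresh: ${fresh:.6f}, cached: ${cached:.6f}")
```

So even if the file is re-read many times per request, the cached reads stay cheap.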
sarpdag•2h ago
The screenshot for my dashboard: https://ibb.co/2Vy5CRJ
Cursor Version: 1.2.4 (Universal)
muzani•1h ago
But the Usage panel at the top gives a clear view too. On Claude Sonnet (thinking), you've used about $10 worth: 10k input tokens (your prompt and code), 185k output tokens (presumably written code, maybe thinking tokens too), ~900k cache writes (probably Cursor indexing the code), and 14M cache reads (it scanning and re-reading your codebase).
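That ~$10 figure roughly checks out if you plug the dashboard's numbers into typical per-million-token rates. All four prices here are assumptions (cache writes around 1.25x input, cache reads around 0.1x), not Cursor's actual billing:

```python
# Back-of-envelope cost check using the usage numbers above.
# Prices are assumed USD-per-1M-token rates, not official figures.
PRICE_PER_M = {
    "input": 3.00,
    "output": 15.00,
    "cache_write": 3.75,   # assumed ~1.25x the input rate
    "cache_read": 0.30,    # assumed ~0.1x the input rate
}
usage = {                  # token counts read off the dashboard
    "input": 10_000,
    "output": 185_000,
    "cache_write": 900_000,
    "cache_read": 14_000_000,
}
cost = sum(usage[k] / 1e6 * PRICE_PER_M[k] for k in usage)
print(f"${cost:.2f}")  # lands near the ~$10 shown in Usage
```

Note that under these assumed rates the 14M cache reads are the single biggest line item, even at a tenth of the input price.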