[1]: https://gist.github.com/lucasmrdt/4215e483257e1d81e44842eddb...
I’m sure they are trying to slash tokens where they can, and removing potentially irrelevant tool descriptors seems like low-hanging fruit to reduce token consumption.
The Gist you shared is a good resource too though!
I wonder how hard it would be to build a local apply model; surely that would be faster on a MacBook.
From the extracted prompting Cursor is using:
> Each time the USER sends a message, we may automatically attach some information about their current state…edit history in their session so far, linter errors, and more. This information may or may not be relevant to the coding task, it is up for you to decide.
This is the context bloat that limits the effectiveness of LLMs in solving very hard problems.
This particular .env example illustrates the low-stakes type of problem Cursor is great at solving, but it also lacks the complexity that will keep SWEs employed.
Instead, I suggest folks working with AI start at the chat interface and work on editing conversations to keep contexts clean as they explore a truly challenging problem.
This often includes meeting and slack transcripts, internal docs, external content and code.
I’ve built a tool for surgical use of code called FileKitty: https://github.com/banagale/FileKitty and, more recently, slackprep: https://github.com/banagale/slackprep
These let a person be more intentional about the problem they are trying to solve by including only the information relevant to it.
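The curation idea is simple enough to sketch in a few lines of Python. This is an illustration of the workflow (hand-pick files, label each one, concatenate into a prompt-ready block), not FileKitty's actual implementation; the file paths in the usage comment are hypothetical:

```python
from pathlib import Path

def build_context(paths, root="."):
    """Concatenate only the chosen files into one prompt-ready string,
    labeling each with its relative path so the model can tell them apart."""
    parts = []
    for rel in paths:
        text = (Path(root) / rel).read_text()
        parts.append(f"--- {rel} ---\n{text}")
    return "\n\n".join(parts)

# Hypothetical usage: include only the two files relevant to the bug,
# rather than letting a tool attach the whole workspace.
# context = build_context(["app/settings.py", "app/env_loader.py"])
```

The point of doing this by hand (or with a small tool) is that every token in the context is there because you decided it was relevant.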
We plan to continue investigating how it works (+ optimize the models and prompts using TensorZero).
Emailed them multiple times over weeks about billing questions -- not a single response. These weren't VS Code questions, either -- they needed Cursor staff intervention.
No problem getting promo emails though!
The quicker their 'value' can be spread to other services the better, imo. Maybe the next group will answer emails.
Had no idea this was possible on Mac either; I've never seen any other app do this, though it's commonplace on Windows.
Mail to <hi@cursor.com> gets a reply from “Sam from Cursor,” which is “Cursor's AI Support Assistant,” and after a few back-and-forths it says “I'm connecting you with a teammate who can better investigate.” Guess what? It’s been a month with no further communication whatsoever.
I don’t have high hopes for their customer service.
Maxious•8mo ago
(that being said, mitmproxy has gotten pretty good for just looking lately https://docs.mitmproxy.org/stable/concepts/modes/#local-capt... )
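For reference, local capture in recent mitmproxy versions is enabled via the `--mode local` flag, optionally scoped to a single process by name; scoping it to Cursor's process name is an assumption here:

```shell
# Capture traffic from all local processes
mitmproxy --mode local

# Or scope capture to one process by name (process name is an assumption)
mitmproxy --mode local:cursor
```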
spmurrayzzz•8mo ago
Like you, I also landed on mitmproxy after starting with tcpdump/wireshark. I recently started building a tiny streaming textual-gradient-based optimizer (similar to what AdalFlow is doing) by parsing the mitmproxy outputs in realtime. Having a turnkey solution for this sort of thing will definitely be valuable, at least in the near to mid term.
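The parsing step of that pipeline can be sketched as follows. This is a minimal sketch, not the commenter's actual code: it assumes the proxy has been configured to emit each completed LLM call as one JSON object per line, and the `prompt`/`completion` field names are hypothetical:

```python
import json

def iter_llm_calls(lines):
    """Yield (prompt, completion) pairs from a stream of JSON lines,
    skipping records that aren't chat-completion traffic."""
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # partial or malformed line from the live stream; skip it
        if "prompt" in rec and "completion" in rec:
            yield rec["prompt"], rec["completion"]

# Each yielded pair can then be fed to a textual-gradient optimizer
# as one (input, output) example, while the capture is still running.
```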
vrm•8mo ago
https://github.com/TensorZero/tensorzero
spmurrayzzz•8mo ago
Looking great so far though!