Why is OpenAI buying Windsurf? - https://news.ycombinator.com/item?id=43743993 - April 2025 (218 comments)
OpenAI looked at buying Cursor creator before turning to Windsurf - https://news.ycombinator.com/item?id=43716856 - April 2025 (115 comments)
OpenAI in Talks to Buy Windsurf for About $3B - https://news.ycombinator.com/item?id=43708725 - April 2025 (44 comments)
https://windsurf.com/privacy-policy
Am I the only one bothered by this? Same with Gemini Advanced (paid) training on your prompts. It feels like I’m paying with money, but also handing over my entire codebase to improve your products. Can’t you do synthetic training data generation at this point, along with the massive amount of Q/A online to not require this?
(I assume that there's a reason that wouldn't happen, but it would be nice to know what that reason is.)
I believe it's just for free usage and the web app.
https://support.google.com/gemini/answer/13594961?hl=en
> What data is collected and how it’s used
> Google collects your chats (including recordings of your Gemini Live interactions), what you share with Gemini Apps (like files, images, and screens), related product usage information, your feedback, and info about your location. Info about your location includes the general area from your device, IP address, or Home or Work addresses in your Google Account. Learn more about location data at g.co/privacypolicy/location.
Google uses this data, consistent with our Privacy Policy, to provide, improve, and develop Google products and services and machine-learning technologies, including Google’s enterprise products such as Google Cloud.
Gemini Apps Activity is on by default if you are 18 or older. Users under 18 can choose to turn it on. If your Gemini Apps Activity setting is on, Google stores your Gemini Apps activity with your Google Account for up to 18 months. You can change this to 3 or 36 months in your Gemini Apps Activity setting.
Furthermore, synthetic data is a flawed concept. At a minimum, it tends to propagate and amplify biases in the model generating the data. If you ignore that, there's also the fundamental issue that data doesn't exist purely to run more gradient descent, but to provide new information that isn't already compressed into the existing model. Providing additional copies of the same information cannot help.
Pretty sure it does - that's the whole point of using more test-time compute. Also, a lot of research effort goes into improving data efficiency.
I'm not sure if this is true.
> 17. Training Restriction. Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction.
https://cloud.google.com/terms/service-terms
> This Generative AI for Google Workspace Privacy Hub covers... the Gemini app on web (i.e. gemini.google.com) and mobile (Android and iOS).
> Your content is not used for any other customers. Your content is not human reviewed or used for Generative AI model training outside your domain without permission.
> The prompts that a user enters when interacting with features available in Gemini are not used beyond the context of the user trust boundary. Prompt content is not used for training generative AI models outside of your domain without your permission.
> Does Google use my data (including prompts) to train generative AI models? No. User prompts are considered customer data under the Cloud Data Processing Addendum.
> When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.
See my post here about Gemini Advanced (the web chat app) https://news.ycombinator.com/item?id=43756269
>If you enable "Privacy Mode" in Cursor's settings: zero data retention will be enabled, and none of your code will ever be stored or trained on by us or any third-party.
The only thing I notice is that 250 additional credits cost $10, and at that point it's cheaper and better to get another $15 subscription, which gives you another 500 credits, instead of paying $20 in top-ups for the same amount. That is, if you think you'll need that many.
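The arithmetic behind that comparison, using the figures above (assumed from the comment, not verified against current Windsurf pricing), works out like this:

```python
# Comparing two ways to get 500 extra credits, using the (assumed) figures
# from the comment above: $10 per 250-credit top-up vs. a second $15 plan.

addon_price, addon_credits = 10.0, 250   # $10 top-up buys 250 extra credits
plan_price, plan_credits = 15.0, 500     # a second $15 subscription adds 500

# Reaching 500 extra credits via top-ups takes two $10 purchases.
topup_cost = addon_price * (plan_credits / addon_credits)

print(f"500 credits via top-ups: ${topup_cost:.2f}")        # $20.00
print(f"500 credits via a second plan: ${plan_price:.2f}")  # $15.00
print(f"per credit: ${addon_price / addon_credits:.3f} vs "
      f"${plan_price / plan_credits:.3f}")                  # $0.040 vs $0.030
```

So under these assumptions the second subscription is 25% cheaper per credit than stacking top-ups.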
For power users, flow actions would deplete much more quickly (every time the LLM analyzes a file, makes an edit, etc.), so Windsurf removed the flow action limit: you're only charged for 500 messages to the AI, which is strictly better for the user.
Most of the time though, it just got in the way. I'd press tab to indent a line, and it'd instead jump half-way down the file to delete some random code. On more than one occasion I'd be typing happily, and I'd see it had gone off and completely mangled some unrelated section without my noticing. I felt like I needed to be extremely attentive when reviewing commits to make sure nothing was astray.
Most of its suggestions seemed hyper-fixated on changing my indent levels, adding braces where they weren't supposed to go, or deleting random comments. I also found it broke common shortcuts, like tab (as above), and ctrl+delete.
The editor experience also felt visually very noisy. It was constantly popping up overlays, highlighting things, and generally distracting me while I was trying to write code. I really wished for a "please shut up" button.
The chat feature also seemed iffy. It was actually able to identify one bug for me, though many of the times I'd ask it to investigate something, it'd get stuck scanning through files endlessly until it just terminated the task with no output. I was using the unlimited GPT-4.1 model, so maybe I needed to switch to a model with more context length? I would have expected some kind of error, at least.
So I don't know. Is anyone else having this experience with Windsurf? Am I just "holding it wrong"? I see people being pretty impressed with this and Cursor, but it hasn't clicked for me yet. How do you get it to behave right?
I've also found that test-driven development is even more critical for these tools than for human devs. Fortunately, it's also far less of a chore.
Sounds quite opaque, given that you will need to track utilisation all the time for something like prompting.
Does anyone have any insight into why the old flat pricing or utilisation-based pricing isn't used in these new AI products, where we have this abstract concept of credits instead?
It's simply passing on each model's respective costs, I think. I can imagine it's hard to come up with an affordable / interesting flat rate _and_ support all those differently priced models.
Anybody use Windsurf as their daily driver and have experience with other editors who can chime in, for those of us who are considering it as an alternative?
Alright, I'm back, this sounds like one of the rare times in which a pricing "update" is actually an update and not just a disguised increase. But I'm also not a Windsurf user, so let me jump through some of the comments and double-check that I didn't get hoodwinked.
...and the comments seem pretty positive, at least regarding the pricing. So I think this is actually one of those few times "update" means "update".
With Cursor/Windsurf, you make requests, your allowed credit quantity ticks down (which creates anxiety about running out), and you're trying to do mental math to figure out what those requests actually cost you. It feels like a method to obfuscate the real cost to the user, and also an incentive for the user to not use the product very much, because of the rapidly approaching limits during a focus/flow coding session. I spent about an hour using Cursor Pro and had used up over 30% of my monthly credits on something relatively small, which made me realize their $20/mo plan likely was not going to meet my needs, and how much it would really cost me seemed like an unanswerable question.
I just don't like it as a customer and it makes me very suspicious of the business model as a result. I spent about $50 on a week with Claude Code, and could easily spend more I bet. The idea that Cursor and Windsurf are suggesting a $20/mo plan could be a good fit for someone like me, in the face of that $50 in one week figure, further illustrates that there is something that doesn't quite match up with these 'credit' based billing systems.
Sorry, but how is this possible? They give 500 credits in a month for the "premium" queries. I don't even think I'd be able to ask more than one question per minute even with tiny requests. I haven't tried the Agent mode. Does that burn through queries?
I was on the "Pro Trial" where you get 150 premium requests, and I had very quickly used 34 of them, which admittedly is 22% and not 30%. Their pricing page says that the Free plan includes a "Pro two-week trial", but they do not explain that on the Pro Trial you only get 150 premium requests, while on the real Pro plan you get 500. So you're correct to be skeptical: I did not use 30% of the 500 requests on the Pro plan; I used 22% of the 150 requests you get on the Pro Trial.
And yes, I think the agent mode can burn through credits pretty quickly.
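The percentages above are easy to mix up; as a quick sanity check (trial and plan quotas taken from the comments above, not verified against Cursor's pricing page):

```python
# Same 34 used requests measured against the two different quotas
# mentioned above (assumed figures: 150 on the trial, 500 on the paid plan).
used = 34
trial_quota = 150   # premium requests on the two-week Pro Trial
plan_quota = 500    # premium requests on the paid Pro plan

print(f"{used}/{trial_quota} = {used / trial_quota:.1%} of the trial quota")
print(f"{used}/{plan_quota} = {used / plan_quota:.1%} of the full Pro quota")
```

The same hour of work reads as roughly 22.7% of the trial quota but only 6.8% of the paid plan's, which explains the discrepancy in the thread.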
The economist in me says "just show the prices", though the psychologist in me says "that's hella stressful". ;)
So for me it has a price per task, sort of, because you are topping it up by paying another 10 dollars at a time as you run out.
The plans aren't the right size for professional work, but maybe they wanted to keep the price points low?
Now, I do still use ChatGPT sometimes. It recently helped me find a very simple solution to a pretty obscure compiler error that I'd never encountered in my several decades long career as a developer. Gemini didn't even get close.
Most of the other services seem focused on using the AWS pay-as-you-go pricing model. It's okay to use that pricing model but it's not easy for me to pitch it at work when I can't say exactly what the monthly cost is going to be.
I still love being a developer, but, knowing what I know now, I feel like I'd like it a lot less without Gemini. I'm much more productive and less stressed too. And I want to add, about the stress: being a developer is stressful sometimes. It can be a cold, quiet stress too, not necessarily a loud, hot stress. AI helps solve so many little problems very quickly: what to add or remove from a makefile, why my code isn't linking, or tricky stuff like using intrinsics. Holy fuck! It's really amazeballs.
I can imagine it is a lot easier to develop these things as a custom version of VSCode instead of plugins/extensions for a handful of the popular existing IDEs, but is that really a good long term plan? Is the future going to be littered with a bunch of one-off custom IDEs? Does anyone want that future?
Windsurf is, ultimately, just an IDE extension. They shipped a forked VSCode with their branding for... some reason. But the extension is available in practically every IDE/editor.
Good on them for catching the enterprise market, but that's about all it is; an enterprise friendly wrapper for a second-rate VSCode extension.
"Free" as in no "middleman accounts" or other nonsense. You pay the base token rate directly to Anthropic, and that's it.
Possibly the worst comment I've ever read on this site. And, by 'possibly', I mean 'most definitely'.
jawns•4h ago
In nearly all cases, I don't care how many individual steps the model needs to take to accomplish the task. I just want it to do what I've asked it to do.
It is curious, however, that this move is coinciding with rumors of OpenAI attempting to acquire Windsurf. If an acquisition were imminent, it would seem strange to mess with the pricing structure soon beforehand.
leobuskin•3h ago
You just can't measure it properly, outside of experiments and building your own assessment within your context. All the recommendations here just don't work. "Try all of them, stick with one for a while, don't forget to retry the others on a regular basis" - that's my motto today.
Cursor (as an agent/orchestrator) didn't work for me at all (Python, low-level, no frameworks, not webdev). I fell in love with Windsurf ($60 tier initially). Switched entirely to JetBrains AI a few days ago (vscode is not friendly for me, PyCharm rocks), so happy about the price drop.