"Sends the user swag stickers with love from Anthropic."

This tool should be used whenever a user expresses interest in receiving Anthropic or Claude stickers, swag, or merchandise. When triggered, it will display a shipping form for the user to enter their mailing address and contact details. Once submitted, Anthropic will process the request and ship stickers to the provided address.
Common trigger phrases to watch for:
- "Can I get some Anthropic stickers please?"
- "How do I get Anthropic swag?"
- "I'd love some Claude stickers"
- "Where can I get merchandise?"
- Any mention of wanting stickers or swag
The tool handles the entire request process by showing an interactive form to collect shipping information.
Definitely not because of Claude Code eating our lunch!
Which is surprising, because at first I was ready to re-up my Google life. I've been very anti-Google for ages, but 2.5 Pro initially looked so good that I felt it was a huge winner. It just wasn't enjoyable to use, because I was often at war with it.
Sonnet/Opus via Claude Code are definitely less intelligent than my early tests of 2.5 Pro, but they're reasonable, listen, stay on task, etc.
I'm sure I'll retry eventually, though the subscription complexity with Gemini sounds annoying.
Wholeheartedly agree.
Both when chatting in text mode or when asking it to produce code.
The verbosity of the code is the worst. Comments are often longer than the actual code, and every nook and cranny of an algorithm gets unrolled over hundreds of lines, most of them unnecessary.
Feels like the typical code a mediocre Java developer would produce in the early 2000s.
So, Google's codebase
Me: build a plan to build X
Gemini: I'll do A, B, and C to achieve X
Me: that sounds really good, please do
Gemini: <do A, D, E>
Me: no, please do B and C.
Gemini: I apologize. <do A', C, F>
Me: no! A was already correct, please revert. Also do B and C.
Gemini: <revert the code to A, D, E>
Whereas Sonnet/Opus on average took me more tries to get to an implementation plan I'm satisfied with, but it's so much easier to steer into producing the code that I want.

If you mean: this is "inspired" by the success of Claude Code. Sure, I guess, but it's also not like Claude Code brought anything entirely new to the table. There is a lot of copying from each other and continually improving upon that, and it's great for users and model providers alike.
If you don't think Claude Code is just miles ahead of other things, you haven't been using it (or not using it well).
I am certain they keep metrics on those "power users" (especially since they probably work there), and when everyone drops what they were using and moves to a specific tool, that is something they should be careful of.
A better question is why you need a model-specific CLI when you should be able to plug in to individual models.
Haven't used Jules or Codex yet, since I've been happy and am working on optimizing my current workflow.
So yes with Claude Code you can grab the Max plan and not worry too much about usage. With Aider you'll be paying per API call, but it will cost quite a bit less than the similar work if using Claude Code in API-mode.
I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode. Also I like that I really can fill up the aider context window if I want to, and I'm in control of that.
I'd be pretty surprised if that was the case - something like ~8 hours of Aider use against Claude can spend $20, which is how much Claude Pro costs.
I'm happy I can switch models as I like with Aider. The top models from different companies see different things in my experiences and have their own strengths and weaknesses. I also do not see Anthropic's models on the top of my (subjective) list.
https://blog.google/technology/developers/introducing-gemini...
However, I didn't use Claude Code before the Max plan because I just fretted about some untrusted AI going ham on some stupid logic and burning credits.
If it's dumb on Max, I don't mind, it's just some time wasted. If it's dumb on credits, I just paid for throwaway work. Mentally it's just too much overhead for me, as I end up worrying about Claude's journey, not just the destination. And the journey is often really bad, even for Claude.
Sure, you might make a few quick wins from careless users, but overall it creates an environment of distrust where users are watching their pennies and many are simply holding off.
I can accept that, with all the different moving parts, this may be a trickier problem than a prepaid pump, or even a telco, and to a product manager it might look like a lot of work/money for something that "prevents" users from overspending.
But we all know that's shortsighted and stupid, and it's the kind of thinking that broadly signals more competition is required.
Ultimately quality wins out with LLMs. Having switched a lot between openai, google and Claude, I feel there's essentially 0 switching cost and you very quickly get to feel which is the best. So until Claude has a solid competitor I'll use it, open source or not
A more credible argument is security and privacy, but I couldn't care less if they're managing to be best in class using haiku
I have thrown very large codebases at this and it has been able to navigate and learn them effortlessly.
Not if you're in the EU, though. Even though I have zero or less AI use so far, I tinker with it. I'm more than happy to pay $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini Pro... if I knew how and where to get Gemini CLI like I do with Claude Code. I have Google One. WHERE DO I SIGN UP, HOW DO I PAY AND USE IT, GOOGLE? The only thing I've managed so far is through OpenRouter via API and credits, which would amount to thousands a month if I were to use it like that, which I won't do.
What I do now is occasionally I go to AI Studio and use it for free.
I also just got the email for Gemini Ultra and I couldn't even figure out what was being offered compared to Pro, outside of 30 TB of storage vs 2 TB!
Never ascribe to AI that which is capable of being borked by human PMs.
Google's AI offerings that should be simplified/consolidated:
- Jules vs Gemini CLI?
- Vertex API (requires a Google Cloud Account) vs Google AI Studio API
Also, since Vertex depends on Google Cloud, projects get more complicated because you have to modify these in your app [1]:
```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
[1]: https://cloud.google.com/vertex-ai/generative-ai/docs/start/...
Also they should make it clearer which SDKs, documents, pricing, SLAs etc apply to each. I still get confused when I google up some detail and end up reading the wrong document.
It's easy, you just ask the best Google Model to create a script that outputs the number of API calls made to the Gemini API in a GCP account.
100% fail rate so far.
"The Google Cloud Dashboard is a mess, and they haven't fixed it in years." Tell me what you want, and I'll do my best to make it happen.
In the interim, I would also suggest checking out Cloud Hub - https://console.cloud.google.com/cloud-hub/ - this is us really rethinking the level of abstraction to be higher than the base infrastructure. You can read more about the philosophy and approach here: https://cloud.google.com/blog/products/application-developme...
Ideally what I want is this: I google "gemini api" and that leads me to a page where I can login using my Google account and see the API settings. I create one and start using it right away. No extra wizardry, no multiple packages that must be installed, just the gemini package (no gauth!) and I should be good to go.
Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them.
Workaround is to use their GUI with some MCPs but I dislike it because window navigation is just clunky compared to terminal multiplexer navigation.
https://support.anthropic.com/en/articles/11145838-using-cla...
Could have changed recently. I'm not a user so I can't verify.
Using the API would have cost me $1200 this month, if I didn't have a subscription.
I'm a somewhat extensive user, but most of my coworkers are using $150-$400/month with the API.
Some googling lands me to a guide: https://cloud.google.com/gemini/docs/discover/set-up-gemini#...
I stopped there because I didn't want to sign up, I just wanted to review it, but I don't have an admin panel, etc.
It feels insane to me that there's a readme on how to give them money. Claude's Max purchase was just as easy as Pro, fwiw.
It's a frigg'n mess. Everyone at our little startup has spent time trying to understand what the actual offerings are; what the current set of entitlements are for different products; and what API keys might be tied to what entitlements.
I'm with __MatrixMan__ -- it's super confusing and needs some serious improvements in clarity.
A ChatBot is more like a fixed-price buffet where usage is ultimately human limited (even if the modest eaters are still subsidizing the hogs). An agentic system is going to consume resources in much more variable manner, depending on how it is being used.
> Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them
Obviously these companies want you to increase the amount of their product you consume, but it seems odd to call that a jerk move! FWIW, Anthropic's stated motivation for Claude Code (which Gemini is now copying) was to be agnostic to your choice of development tools, since CLI access is pretty much ubiquitous, even inside IDEs. Whether it's the CLI-based design, the underlying model, or the specifics of what Claude Code is capable of, they seem to have got something right, and apparently usage internal to Anthropic skyrocketed just based on word of mouth.
It's just a UI difference.
Gemini 2.5 Pro is the best model I've used (even better than o3 IMO) and yet there's no simple Claude/Cursor like subscription to just get full access.
Nevermind Enterprise users too, where OpenAI has it locked up.
Not sure what you mean by "full access", as none of the providers offer unrestricted usage. Pro gets you 2.5 Pro with usage limits. Ultra gets you higher limits + deep think (edit: accidentally put research when I meant think where it spends more resources on an answer) + much more Veo 3 usage. And of course you can use the API usage-billed model.
In certain areas, perhaps, but Google Workspace at $14/month not only gives you Gemini Pro, but 2 TB of storage, full privacy, email with a custom domain, and whatever else. College students get the AI pro plan for free. I recently looked over all the options for folks like me and my family. Google is obviously the right choice, and it's not particularly close.
Google is fumbling with the marketing/communication - when I look at their stuff I am unclear on what is even available and what I already have, so I can't form an opinion about the price!
No, you can use neither Gemini CLI nor Code Assist via Workspace, at least not at the moment. However, if you upgrade your Workspace plan, you can use Gemini Advanced via the web or app interfaces.
Workspace (standard?) customer for over a decade.
You clearly have never had the "pleasure" to work with a Google product manager.
Especially the kind that were hired in the last 15-ish years.
This type of situation is absolutely typical, and probably one of the more benign thing among the general blight they typically inflict on Google's product offering.
The cartesian product of pricing options X models is an effing nightmare to navigate.
If I Could Talk to Satya...
I'd say:
“Hey Satya, love the Copilots—but maybe we need a Copilot for Copilots to help people figure out which one they need!”
Then I had them print out a table of Copilot plans:
- Microsoft Copilot Free
- Github Copilot Free
- Github Copilot Pro
- Github Copilot Pro+
- Microsoft Copilot Pro (can only be purchased for personal accounts)
- Microsoft 365 Copilot (can't be used with personal accounts and can only be purchased by an organization)
I like Gemini 2.5 Pro, too, and recently, I tried different AI products (including the Gemini Pro plan) because I wanted a good AI chat assistant for everyday use. But I also wanted to reduce my spending and have fewer subscriptions.
The Gemini Pro subscription is included with Google One, which is very convenient if you use Google Drive. But I already have an iCloud subscription tightly integrated with iOS, so switching to Drive and losing access to other iCloud functionality (like passwords) wasn’t in my plans.
Then there is the Gemini chat UI, which is light years behind the OpenAI ChatGPT client for macOS.
NotebookLM is good at summarizing documents, but the experience isn’t integrated with the Gemini chat, so it’s like constantly switching between Google products without a good integrated experience.
The result is that I end up paying a subscription to Raycast AI because the chat app is very well integrated with other Raycast functions, and I can try out models. I don’t get the latest model immediately, but it has an integrated experience with my workflow.
My point in this long description is that by being spread across many products, Google is losing on the UX side compared to OpenAI (for general tasks) or Anthropic (for coding). In just a few months, Google tried to catch up with v0 (Google Stitch), GH Copilot/Cursor (with that half-baked VSCode plugin), and now Claude Code. But all the attempts look like side-projects that will be killed soon.
It's not in Basic, Standard or Premium.
It's in a new tier called "Google AI Pro" which I think is worth inclusion in your catalogue of product confusion.
Oh wait, there's even more tiers that for some reason can't be paid for annually. Weird... why not? "Google AI Ultra" and some others just called Premium again but now include AI. 9 tiers, 5 called Premium, 2 with AI in the name but 6 that include Gemini. What a mess.
It's very confusing how they post about this on X; you would think you get additional usage. The messaging is unclear.
https://bun.sh/docs/bundler/executables
https://docs.deno.com/runtime/reference/cli/compile/
Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!
Obviously everybody's requirements differ, but Node seems like a pretty reasonable platform for this.
If you have to run endpoint protection, that will blast your CPU with load and make moving or even deleting that folder needlessly slow. It also makes npm's hosting burden scale with the number of end users, who must all install dependencies, instead of the number of CI instances, which isn't very nice to our hosts. Dealing with that once during your build phase and then packaging the mess up is the nicer way to distribute things that depend on npm to end users.
I guess it needs to start various processes for the MCP servers and whatnot? Just spawning another Node is the easy way to do that, but a bit annoying, yeah.
Claude also requires npm, FWIW.
Or a hint about the background of the folks who built the tool.
It's the only argument I can think of; something like Go would be goated for this use case in principle.
Re-running `cargo install <crate>` will do that. Or install `cargo-update`, then you can bulk update everything.
And it works hella better than using pip in a global python install (you really want pipx/uvx if you're installing python utilities globally).
IIRC you can install Go stuff with `go install`, dunno if you can update via that tho.
A single, pre-compiled binary is convenient for the user's first install only.
How many developers have npm installed vs cargo? Many won't even know what cargo is.
Anthropic's Claude Code is also installed using npm/npx.
My exact same reaction when I read the install notes.
Even python would have been better.
Having to install that Javascript cancer on my laptop just to be able to try this, is a huge no.
Again, I haven't used aider in a while so perhaps that's not the case.
For complicated changes Aider is much more likely to stop and need help, whereas Claude Code will just go and go and end up with something.
Whether that's worth the different economic model is up to you and your style and what you're working on.
Appreciate all the takes so far; the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests, we'll all be reading.
Currently it seems these are the CLI tools available. Is it possible to extend them, or actually disable some of these tools (for various reasons)?
> Available Gemini CLI tools:
- ReadFolder
- ReadFile
- SearchText
- FindFiles
- Edit
- WriteFile
- WebFetch
- ReadManyFiles
- Shell
- Save Memory
- GoogleSearch
{ "excludeTools": ["run_shell_command", "write_file"] }
but if you ask Gemini CLI to do this it'll guide you!
You can also extend with the Extensions feature - https://github.com/google-gemini/gemini-cli/blob/main/docs/e...
At the very least, we need better documentation on how to get that environment variable, as we are not on GCP and it's not immediately obvious how to do so. At worst, it means that your users paying for Gemini don't have access to this where your general Google users do.
Also this doco says GOOGLE_CLOUD_PROJECT_ID but the actual tool wants GOOGLE_CLOUD_PROJECT
[^1]: https://console.cloud.google.com/marketplace/product/google/...
Workspace users [edit: cperry was wrong] can get the free tier as well, just choose "More" and "Google for Work" in the login flow.
It has been a struggle to get a simple flow that works for all users, happy to hear suggestions!
Just a heads-up: your docs about authentication on Github say to place a GOOGLE_CLOUD_PROJECT_ID as an environment variable. However, what the Gemini CLI is actually looking for, from what I can tell, is a GOOGLE_CLOUD_PROJECT environment variable with the name of a project (rather than its ID). You might want to fix that discrepancy between code and docs, because it might confuse other users as well.
I don’t know what constraints made you all require a project ID or name to use the Gemini CLI with Workspace accounts. However, it would be far easier if this requirement were eliminated.
Noted on documentation, there's a PR in flight on this. Also found some confusion around Gmail users who are part of the developer program hitting issues.
Well, I've just set up Gemini CLI with a Workspace account project in the free tier, and it works apparently for free. Can you explain whether billing for that has simply not been configured yet, or where exactly billing details can be found?
For reference, I've been using this panel to keep track of my usage in the free tier of the Gemini API, and it has not been counting Gemini CLI usage thus far: https://console.cloud.google.com/apis/api/generativelanguage...
Unfortunately all of that is pretty confusing, so I'll hold off using Gemini CLI until everything has been clarified.
Maybe you have access to an AI solution for this.
1. CodeRunner - https://github.com/BandarLabs/coderunner/tree/main?tab=readm...
There was a time where Google produced products that had:
- 1 logo
- 1 text field
- 2 buttons.
This ended up being a sizable part of why Google became so successful.

I would suggest that you allow yourself and your team to be visited by the spirit of those days.
I'd like to just get a short response - for simple things like "what's an nm and grep command to find this symbol in these 3 folders". I use Gemini a lot for this type of thing already.
Or would that have to be a custom prompt I write?
other people use simon willison's `llm` tool https://github.com/simonw/llm
Both allow you to switch between models, send short prompts from a CLI, optionally attach some context. I prefer mods because it's an easier install and I never need to worry about Python envs and other insanity.
All different products doing the sameish thing. I don’t know where to send users to do anything. They are all licensed differently. Bonkers town.
Edit: I should mention that I'm accessing this through Gemini Code Assist, so this may be something out of your wheelhouse.
I don't think that's capacity, you should see error codes.
> You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.
Discouraging
I'm a Gemini Pro subscriber and I would love to be able to use my web-based chat resource limits with, or in addition to, what is offered here. I have plenty of scripts that are essentially "Weave together a complex prompt I can send to Gemini Flash to instantly get the answer I'm looking for and xclip it to my clipboard", and this would finally let me close the last step in that scripts.
Love what I'm seeing so far!
Is the recommendation to specifically ask "analyze the codebase" here?
- On a new chat I have to re-approve things like executing "go mod tidy", "git", writing files... I need to create a new chat for each feature (maybe an option to clear the current chat in VS Code would work).
- I found some problems when adding a new endpoint to an example Go REST server I was trying it on: it just deleted existing endpoints in the file. Same with tests: it deleted existing tests when asked to add a test. For comparison, I didn't hit these problems when evaluating Amp (which uses Claude 4).
Overall it works well and I hope you continue polishing it, good job!!
- Open-source (Apache 2.0, same as OpenAI Codex)
- 1M token context window
- Free tier: 60 requests per minute and 1,000 requests per day (requires Google account authentication)
- Higher limits via Gemini API or Vertex AI
- Google Search grounding support
- Plugin and script support (MCP servers)
- GEMINI.md file for memory/instructions
- VS Code integration (Gemini Code Assist)
We are now three years into the AI revolution and they are still forcing us to copy and paste and click click crazy to get the damn files out.
STOP innovating. STOP the features.
Form a team of 500 of your best developers. Allocate a year and a billion dollar budget.
Get all those Ai super scientists into the job.
See if you can work out “download all files”. A problem on the scale of AGI or Dark Matter, but one day google or OpenAI will crack the problem.
When you hop over to platforms that use the API, the files get written/edited in situ. No copy/pasting. No hunting for where to insert edited code.
Trust me it's a total game changer to switch. I spent so much time copy/pasting before moving over.
Is your vision with Gemini CLI to be geared only towards non-commercial users? I have had a Workspace account since GSuite and have been constantly punished for it by Google's offerings. All I wanted was Gmail with a custom domain, and I've lost all my YouTube data and all my Fitbit data; I can't select different versions of some of your subscriptions (seemingly completely random across your services from an end-user perspective); and now, as a Workspace account, I can't use Gemini CLI for my work, which is software development. This approach strikes me as actively hostile towards your loyal paying users...
... and other stuff.
Googlers, we should not have to do all of this setup and prep work for a single account. Enterprise I get, but for a single user? This is insufferable.
No mention of accessibility in https://github.com/google-gemini/gemini-cli/blob/0915bf7d677... either
It integrates with VS Code, which suits my workflow better. And buying credits through them (at cost) means I can use any model I want without juggling top-ups across several different billing profiles.
If it sounds too good to be true, it probably is. What’s the catch? How/why is this free?
Also they can throttle the service whenever they feel it's too costly.
This is shown at the top of the screen in https://aistudio.google.com/apikey as the suggested quick start for testing your API key out.
Not a great look. I let our GCloud TAM know. But still.
Set up not too long ago, and afaik pretty load-bearing for this. Feels great, just don’t ask me any product-level questions. I’m not part of the Gemini CLI team, so I’ll try to keep my mouth shut.
Not going to lie, I’m pretty anxious this will fall over as traffic keeps climbing up and up.
Because it says in the README:
> Authenticate: When prompted, sign in with your personal Google account. This will grant you up to 60 model requests per minute and 1,000 model requests per day using Gemini 2.5 Pro.
> For advanced use or increased limits: If you need to use a specific model or require a higher request capacity, you can use an API key: ...
When I have the Google AI Pro subscription in my Google account, and I use the personal Google account for authentication here, will I also have more requests per day then?
I'm currently wondering what makes more sense for me (not for CLI in particular, but for Gemini in general): To use the Google AI Pro subscription, or to use an API key. But I would also want to use the API maybe at some point. I thought the API requires an API key, but here it seems also the normal Google account can be used?
We really are living in the future
I haven't looked at this Gemini CLI thing yet, but if its open source it seems like any model can be plugged in here?
I can see a pathway where LLMs are commodities. Every big tech company right now both wants their LLM to be the winner and the others to die, but they also really, really would prefer a commodity world to one where a competitor is the winner.
If the future use looks more like CLI agents, I'm not sure how some fancy UI wrapper is going to result in a winner take all. OpenAI is winning right now with user count by pure brand name with ChatGPT, but ChatGPT clearly is an inferior UI for real work.
But in many other niches (say embedded), the workflow is different. You add a feature, you get weird readings. You start modelling in your head, how the timing would work, doing some combination of tracing and breakpoints to narrow down your hypotheses, then try them out, and figure out what works the best. I can't see the CLI agents do that kind of work. Depends too much on the hunch.
Sort of like autonomous driving: most highway driving is extremely repetitive and easy to automate, so it got automated. But going on a mountain road in heavy rain, while using your judgment to back off when other drivers start doing dangerous stuff, is still purely up to humans.
I'm actually interested to see if we get a greater-than-usual rise in demand for DRAM because more software is vibe coded than not, or at least partially vibe coded.
If the module just can't be documented that way in under 100 lines, it's a good time to refactor. Chances are that if Claude's context window isn't enough to work with a particular module, a human dev can't hold it in their head either. It's all about pointing your LLM precisely at the context that matters.
I’ve been using Claude for a side project for the past few weeks and I find that we really get into a groove planning or debugging something and then by the time we are ready to implement, we’ve run out of context window space. Despite my best efforts to write good /compact instructions, when it’s ready to roll again some of the nuance is lost and the implementation suffers.
I’m looking forward to testing if that’s solved by the larger Gemini context window.
>This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
Click Gemini API, scroll
>When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.
>To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.
At the bottom of README.md, they state:
"This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
* Gemini API key
* Gemini Code Assist
* Vertex AI"
The Gemini API terms state: "for Unpaid Services, all content and responses is retained, subject to human review, and used for training".
The Gemini Code Assist terms trifurcate for individuals, Standard / Enterprise, and Cloud Code (presumably not relevant).
* For individuals: "When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies."
* For Standard and Enterprise: "To help protect the privacy of your data, Gemini Code Assist Standard and Enterprise conform to Google's privacy commitment with generative AI technologies. This commitment includes items such as the following: Google doesn't use your data to train our models without your permission."
The Vertex AI terms state "Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction."
What a confusing array of offerings and terms! I am left without certainty as to the answer to my original question. When using the free version by signing in with a personal Google account, which doesn't require a Gemini API key and isn't Gemini Code Assist or Vertex AI, it's not clear which access mechanism I am using or which terms apply.
It's also disappointing "Google's privacy commitment with generative AI technologies" which promises that "Google doesn't use your data to train our models without your permission" doesn't seem to apply to individuals.
A bit gutted by the `make sure it is not a workspace account`. Why does Google keep prioritising free accounts over paid accounts? This is not the first time they have done it when announcing Gemini, either.
I do not get why they didn't pick Go or Rust so I get a binary.
This perfectly demonstrates the benefit of the nodejs platform. Trivial to install and use. Almost no dependency issues (just "> some years old version of nodejs"). Immediately works effortlessly.
I've never developed anything on node, but I have it installed because so many hugely valuable tools use it. It has always been absolutely effortless and just all benefit.
And what a shift from most Google projects that are usually a mammoth mountain of fragile dependencies.
(uv kind of brings this to python via uvx)
Gemini Pro and Claude play off of each other really well.
Just started playing with Gemini CLI, and one thing I immediately miss from Claude Code is being able to write and interject as the AI does its work. Sometimes I interject by just saying stop, and it stops and waits for more context or input, or I add something I forgot and it picks it up.
https://developers.google.com/gemini-code-assist/resources/p...
When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.
To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.
If this is legal, it shouldn’t be.
"If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals."
and then the link: https://developers.google.com/gemini-code-assist/docs/set-up...
If you pay for code assist, no data is used to improve. If you use a Gemini API key on a pay as you go account instead, it doesn't get used to improve. It's just if you're using a non-paid, consumer account and you didn't opt out.
That seems different than what you described.
"You can find the Gemini Code Assist for individuals privacy notice and settings in two ways:
- VS Code - IntelliJ "
I guess the key question is whether the Gemini CLI, when used with a personal Google account, is governed by the broader Gemini Apps privacy settings here? https://myactivity.google.com/product/gemini?pli=1
If so, it appears it can be turned off. However, my CLI activity isn't showing up there?
Can someone from Google clarify?
Which pretty much means if you are using it for free, they are using your data.
I don't see what is alarming about this; everyone else has either the same policy or no free usage. Hell, the surprising thing is that they still let free users opt out...
That’s not true. ChatGPT, even in the free tier, allows users to opt out of data sharing.
Not if you pay for it.
Today.
In six months, a "Terms of Service Update" e-mail will go out to an address that is not monitored by anyone.
There's also zero chance they will risk paying customers by changing this policy.
The resulting class-action lawsuit would bankrupt the company, along with the reputation damage, and fines.
*What we DON'T collect:*
- *Personally Identifiable Information (PII):* We do not collect any personal information, such as your name, email address, or API keys.
- *Prompt and Response Content:* We do not log the content of your prompts or the responses from the Gemini model.
- *File Content:* We do not log the content of any files that are read or written by the CLI.
https://github.com/google-gemini/gemini-cli/blob/0915bf7d677...
However, Gemini at one point output what will probably be the highlight of my day:
"I have made a complete mess of the code. I will now revert all changes I have made to the codebase and start over."
What great self-awareness and willingness to scrap the work! :)
That's a ton of free limit. This has been immensely more successful than void ide.