Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks, elevating all aspects of software development.
via: https://www.linkedin.com/showcase/google-antigravity/about/
I've been using my current IDE for 17 years, and plan to continue using it for at least another 15
I wouldn't even be surprised if internally the AS team's financials are counted under the Play Store umbrella.
I still wouldn't trust a Google product to stick around, but these hints aren't a reliable oracle either.
It is a product launched in the hype cycle of AI. Google has plenty of other products (launched during hype cycles) that are gathering dust.
That's not a guaranteed signal that it will meet the same fate, but it's something strong enough to be wary of.
https://www.youtube.com/watch?v=YX-OpeNZYI4
- It's VS Code
Like clockwork!
- ai therapist for your ai agents
- genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives
- prompt engineering as a service
- social media post generator so you can post "what if ai fundamentally changes everything about how people interact with software" think pieces even faster
- vector db but actually json for some reason
(Edit: formatting)
AI Agent Orchestration Battle Bots. Old school VMs are the brute tanks just slowly ramming everybody off the field. A swarm of erratically behaving lightweight K8s Pods madly slicing and dicing everything coming their way. Winner takes control of the host capacity.
I might need this in my life.
Presumably that hasn't changed much. If you want to do any large-scale edits of the UI you need to spin up a fork.
Weirdly, out of all the vscode forks the best UI is probably bytedance's TRAE
You mean Chromium wrapper?
> Come join us! Programming is fun again! It's a whole new world up here!
- Nano Banana => Mockup
- Antigravity/IDE => Comments/note
- Gemini => Turn to code
- Antigravity/IDE => Adjust/code
All on the same platform, so you can maximally automate / go "agentic"
Google at its finest
I'm not sure many engineers will welcome this "promotion".
If existing engineers don't change it doesn't matter because new engineers will take their place.
Car manufacturers made profit
The problem is that the engineer turning what you want into code isn't normally the bottleneck. I would say about 50% of my job is helping people specify what they want sufficiently for someone to implement.
Non-technical people are used to a world of squishy definition where you can tell someone to do something and they will fill in the blanks and it all works out fine.
The problem with successful software is that the users are going to do all the weird things. All the things the manager didn't think about when they were dreaming up their happy path. They are going to try to update the startTime to the past, or to next year and then back to next week. They are going to get their account into some weird state and click the button you didn't think they could. And this is just the users that are trying to use the site without trying to intentionally break it.
I think if managers try to LLM up their dreams it'll go about as well as low/no-code. They will probably be able to get a bit further because the LLM will be willing to bolt on feature after feature and bug fix after bug fix until they realize they've just been piling up bandaids.
I am cautiously optimistic that there will be a thriving market for skilled engineers to come in and fix these things.
Later edit: Probably this one [1], which is par for the course for Alphabet, they're, conceptually, still living in the early 2010s, when this stuff was culturally relevant.
Console error:
> Loading module from “https://antigravity.google/main-74LQFSAF.js” was blocked because of a disallowed MIME type (“text/html”).
Lotta people mining science fiction for cool names and then applying them to their crappy products, cheapening the source ideas.
We are in the future, it’s just a much more rubbish version than people imagined in scifi
Ah Google misconfigured their web server:
> Loading module from “https://antigravity.google/main-74LQFSAF.js” was blocked because of a disallowed MIME type (“text/html”).
Edit: And a couple minutes later, it is now working. Guess Google is reading HN.
But there is a 13 minute demo video.
I'm concerned that the new role of "manager of agents" (as Google puts it) will be soul-destroying, brain-dead work and that morale won't be good.
> Model quota limit exceeded. You have reached the quota limit for this model.
Would be willing to bet this is the issue. Adding html files to context for gemini models results in a ton of token use.
EDIT: why must users care?
Maybe the questioner is also in full control of the HTML creation and they don’t need a parser for all possible HTML edge cases.
It seems that even the very conceptually simple example given by the questioner is impossible.
Free tier users get to use what's left over from Google's capacity. They pay with their data, Google uses their inputs for training.
Paid tier users pay with money, Google doesn't use their inputs. They get priority when capacity is running out (like right after a launch as happened here).
It's the same problem with OpenRouter's free tiers for a long time. If something is truly $0 and widely available, people will absolutely bleed it dry.
They don't seem to be getting any rate-limiting issues, which I don't understand; maybe a bug in Antigravity is allowing them to use it more. They are really confident in the IDE after a few hours and the output given is really good.
- Gemini 3 Pro (High)
- Gemini 3 Pro (Low)
- Claude Sonnet 4.5
- Claude Sonnet 4.5 (Thinking)
- GPT-OSS 120B (Medium)

Google using Electron tells us that quality control is completely out of the window.
Unbelievable.
I wasted a day on trying to get some PNGs to render correctly, but no matter the config I used, the colors came out wrongly oversaturated.
I used Tauri with a WebView, and the app was rendering the images perfectly fine. On top of that the UI looked much better, and I was done in half the time I spent trying to fix the rendering issue in WinUI 3.
Never again will I go native.
... Aren't we talking about a programming IDE here? When did mobile become anything like the primary market for that? Are people expected to sit around for hours inputting symbols with an OSK?
What are all these LLMs for when everything is just a fork of an Electron app? It does not look like good marketing.
Also I'm used to vim and sensitive to lag, so I always hated vscode, but it seems a lot of people don't notice or something. And when you're using AI for 90% of the LOC, it matters less.
They have the revenues to support all of this.
They spent time learning from all the players and can now fast follow into every market. Now they're fast and nimble and are willing to clone other products wholesale, fork VSCode, etc.
They're developing all of this, meanwhile Pichai is calling it a "bubble" to put a chill on funding (read: competition). It's not like Google is slowing down.
We had a chance to break them up with regulation, and we didn't. Now they're going to kill every market participant.
This isn't healthy. We have an invasive species in the ecology eating up all the diverse, healthy species.
a16z and YC must hate this. It puts a cap on their returns.
As engineers, you should certainly hate this. Google does everything it can to push wages down. Layoffs, offshoring, colluding with competitors. Fewer startups mean fewer rewards for innovation capital and more accrual to the conglomerate taxing the entire internet.
Chrome, Android, Search, Ads, YouTube, Cloud, Workspace, Other Bets, and AI/Deepmind need to be split into separate companies.
Call or email your legislators and ask for antitrust enforcement: https://pluralpolicy.com/find-your-legislator/
Demand a Google breakup.
Google has never successfully done that? Maybe once?
Or perhaps it would be most correct to say Microsoft assassinated Nokia by sending in Stephen Elop as a double agent?
> most correct to say Microsoft assassinated Nokia by sending in Stephen Elop
That is exactly how I see it. It also makes perfect sense for all parties involved.

Google spreadsheet was another amazing product back in the day.
https://static01.nyt.com/newsgraphics/documenttools/f6ab5c36...
> Plaintiffs maintain that Google has monopoly power in the product market for general search services in the United States.
> According to Plaintiffs, Google has a dominant and durable share in that market (general search), and that share is protected by high barriers to entry.
> Google counters that there is no such thing as a product market for general search services.
> What exists instead, Google insists, is a broader market for query response.
(+ yes obviously, products like Sheets or Maps were amazing, and are still very much the best.
It was a joke to say that even Google denies its own success, the same way as the earlier comment).
Prior to the push into Cloud computing, Ad revenue was well over 90% of all Google gross income, and Cloud was the first big way they diversified. GCP is definitely a credible competitor these days, but it did not devour AWS. Other commercial Google services didn't even become credible competitors, e.g. Google Stadia was a technically exceptional platform that got nowhere with customers.
The question now is whether Google carves out an edge in AI that makes it profitable overall, directly or strategically. Like many companies, there seems to be a presumption of potentially infinite upside, which is what it would take to justify the astronomical costs.
It's not like Youtube where they legitimately bought their way to dominance. And I'd argue that even in the case of DoubleClick, google was already dominating the search advertising market when they bought DoubleClick to consolidate their dominance.
You love a Google product because of its features but never actually because of the product itself. But you can’t win everything by engineering and sometimes Google struggles with the product side.
AI is part engineering so we’ll see.
They're struggling on cloud, AI, ISP, videoconferencing, and others...
(presumably because if they touch the ad system it might break)
> a16z and YC must hate this. It puts a cap on their returns.
And a16z's main business is investing in financial scams.
The intro checklist for Antigravity includes watching VS Code tutorials!
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/v...
We've come full circle.
Has there been any indication that these folks are "rubbing Google the wrong way"? I think Chromium, as a project, is actually very happy that more people are using their engine.
Should have just been an extension with a paid plan.
Quick, someone throw the Linux kernel source at it and report back! xD
Nothing in there about chromium.
They knew exactly what they were saying.
Really? Madness? "We started with VS Code to develop our IDE...".
Oh, so onerous.
A thank you to the principal developers is a minimum if you're using someone else's work commercially and aren't an asshole.
No it's not a legal requirement, it's just about being a good part of society.
Stallman said that in 1997 there were 75 acknowledgements in a single piece of software. With today’s trend of micro libraries on npm, there will be at least thousands of acknowledgements in one piece of software.
VSCode runs on chromium, like any website you visit when using a Chromium browser.
VSCode -> Electron (essentially purpose specific web browser) -> Chromium
news.ycombinator.com -> General Purpose Web Browser -> Chromium
Which is exactly why VSCode has a near parity version that runs on the web, under any browser engine.
Saying VSCode is based on Chromium doesn't make any sense.
There's nothing bad about using code other people made open. Our whole industry is built on this.
I never understood why people scoff at VS Code forks. I'd honestly tend to be more skeptical of new editors that don't fork VS Code, because then they're probably missing a ton of useful capabilities and are incompatible with all the VSC extensions everyone's gotten used to.
forking vscode? simple. extensions not so simple. they are controlled by microsoft. without them you’ll run into continual papercuts as a vendor who has forked vscode.
We finally discovered IDEA and never looked back.
Which is on top of 'Chrome'.
Interesting sandwich: Google-Microsoft-Google.
> Fork VS Code, add a few workflow / management ideas on top.
> "Agentic development platform"
I'm Jack's depressed lack of surprise.
Please someone, make me feel something with software again.
Unfortunately, once money came into the picture, quality, innovation, and anything resembling true progress flew out the window.
The masses will get their metaphorical fake leather belts made by slave labor operating machines and won’t know better.
A few artisans will make very little money making the real thing manually, mostly for their own enjoyment.
This is assuming AI can actually be made to lower knowledge work’s training/education requirements, and that debate is still ongoing.
Work with what you love, and you will never love anything again.
Trying to understand how this is anything net new in the space.
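// Paste into the browser devtools console to undo the scroll hijacking and restore native scrolling: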
var css = 'body { height: auto !important; overflow: auto !important; } .smooth-scroll-wrapper { transform: none !important; position: static !important; } div[style*="position: fixed"] { position: static !important; overflow: visible !important; inset: auto !important; }';
var style = document.createElement('style');
style.innerHTML = css;
document.head.appendChild(style);
console.log("Default scroll forced.");And I don’t mean like some designers will highjack scroll to deliver a different experience like slide-like transitions or something (which may or may not be, differently, awful) but they’ll override it just to give you ordinary scrolling, except much worse (as on this page).
Seems like a lot of work to do just to make something shittier, but what do I know, I probably can't implement A* on a whiteboard from memory or whatever.
Apple is the worst offender here. Their product pages are always sluggish.
wut? scroll hijacking is bad, and it doesn't matter who does it.
It's incredible to think how many employees of this world-leading Web technology company must have visited this site before launch, yet felt nothing wrong with its basic behavior.
The software of the future, where nobody on staff knows how anything is built, no one understands why anything breaks, and cruft multiplies exponentially.
But at least we're not taken out of our flow!
And it's not like any of your criticisms don't apply to human teams. They also let cruft develop, are confused by breakages, and don't understand the code because everyone on the original team has since left for another company.
Nature does select for laziness. The laziest state that can outpace entropy in diverse ways? Ideal selection.
This is actually a cool use that's being explored more and more. I first saw it in the wiki thing from the devin people, and now google released one as well.
Where they use Claude to analyse an old (demo) COBOL application.
And it understands the context of the files, decrypts the process and even draws graphs to the documentation it creates.
I wish I had this 20 years ago when I was consulting and had to jump into really funky client codebases with zero documentation and everything on fire.
I do think the primary strengths of genai are more in comprehension and troubleshooting than generating code - so far. These activities play into the collaboration and communication narrative. I would not trust an AI to clean up cruft or refactor a codebase unsupervised. Even if it did an excellent job, who would really know?
I wish that were true.
In my experience, most of the time they're not doing the things you talk about -- major architectural decisions don't get documented anywhere, commit messages give no "why", and the people who the knowledge got socialized to in unrecorded conversations then left the company.
If anything, LLM's seem to be far more consistent in documenting the rationales for design decisions, leaving clear comments in code and commit messages, etc. if you ask them to.
Unfortunately, humans generally are not better at communicating about their process, in my experience. Most engineers I know enjoy writing code, and hate documenting what they're doing. Git and issue-tracking have helped somewhat, but it's still very often about the "what" and not the "why this way".
This is so far outside of common industry practices that I don't think your sentiment generalizes. Or perhaps your expectation of what should go in a single commit message is different from the rest of us...
LLMs, especially those with reasoning chains, are notoriously bad at explaining their thought process. This isn't vibes, it is empiricism: https://arxiv.org/abs/2305.04388
If you are genuinely working somewhere where the people around you are worse than LLMs at explaining and documenting their thought process, I would look elsewhere. Can't imagine that is good for one's own development (or sanity).
I'm not really interested in what some academic paper has to say -- I use LLM's daily and see first-hand the quality of the documentation and explanations they produce.
I don't think there's any question that, as a general rule, LLM's do a much better job documenting what they're doing, and making it easy for people to read their code, with copious comments explaining what the code is doing and why. Engineers, on the other hand, have lots of competing priorities -- even when they want to document more, the thing needs to be shipped yesterday.
Your initial comment made it sound like you were commenting on a genuine apples-for-apples comparisons between humans and LLMs, in a controlled setting. That's the place for empiricism, and I think dismissing studies examining such situations is a mistake.
A good warning flag for why that is a mistake is the recent article that showed engineers estimated LLMs sped them up by like 24%, but when measured they were actually slower by 17%. One should always examine whether or not the specifics of the study really applies to them--there is no "end all be all" in empiricism--but when in doubt the scientific method is our primary tool for determining what is actually going on.
But we can just vibe it lol. Fwiw, the parent comment's claims line up more with my experience than yours. Leave an agent running for "hours" (as specified in the comment) coming up with architectural choices, ask it to document all of it, and then come back and see it is a massive mess. I have yet to have a colleague do that, without reaching out and saying "help I'm out of my depth".
That said, the first comment of the person I replied to contained: "You can ask agents to identify and remove cruft", which is pretty explicitly speaking to agent mode. He was also responding to a comment that was talking about how humans spend "hours talking about architectural decisions", which as an action mapped to AI would be more plan mode than ask mode.
Overall I definitely agree that using LLM tools to just tell you things about the structure of a codebase are a great way to use them, and that they are generally better at those one-off tasks than things that involve substantial multi-step communications in the ways humans often do.
I appreciate being in the weeds here haha--hopefully we all got a little better at talking about the nuances of these things :)
I guess in this case we are comparing an idealized human to an idealized AI, given AI has equally its own failings in non-idealized scenarios (like hallucination).
> And it's not like any of your criticisms don't apply to human teams.
Every time the limitations of AI are discussed, we see this unfair standard applied: ideal AI output is compared to the worst human output. We get it, people suck, and sometimes the AI is better.
At least the ways that humans screw up are predictable to me. And I rarely find myself in a gaslighting session with my coworkers where I repeatedly have to tell them that they're doing it wrong, only to be met with "oh my, you're so right!" and watch them re-write the same flawed code over and over again.
Doesn't this apply to people who code in high level languages?
This is more akin to manager-level view of the code (who need developers to go and look at the "deterministic" instructions); the abstraction is a lot lot more leaky than high->low level languages.
:chuckles nervously:
I dont know what i expected tbh
If you are not paying, or are paying a consumer-level price ($20/mo), you will be trained on.
ETA: In the terms they say they use your data because "free" is the only option available in preview. However it does say you can disable sharing in your settings...
And of course I would need to look at all the implications of spying, being locked out of a Google account, and the absence of support that are Google's MO. No time for that. Not for them.
The task was to create a header, putting the company logo in the corner and the text in the middle.
The resulting CSS was an abomination - I threw it all away and rewrote it from scratch (using my somewhat anemic CSS knowledge), ending up with like 3 selectors with like 20 lines of styles in total.
This made me think that 1: CSS and the way we do UI sucks; I still don't get why we don't have a graphical editor that can at least do the simple stuff well. 2: when these models don't wanna do what you want them to the way you want them, they really don't wanna.
I think AI has shown us there's a need for a new generation of simple to write software and libraries, where translating your intent into actual code is much simpler and the tools actually help you work instead of barely letting you fight all the accidental complexity.
We were much closer to this reality back in the 90s when you opened up a drag and drop UI editor (like VB6, Borland Delphi, Flash), wrote some glue code and out came an .exe that you could just give to people.
Somewhere along the way, the cool kids came up with the idea that GUIs are bad, and everything needs to go through the command line.
Nowadays I need a shell script that configures my typescript CDK template (with its own NPM repo), that deploys the backend infra (which is bundled via node), the database schema, compiles the frontend, and puts the code into the right places, and hope to god that I don't run into all sorts of weird security errors because I didn't configure the security the way the browser/AWS/security middleware wanted to.
It's important for people to feel like "hackers"; that is the primary reason the command line sort of exploded among devs. Most devs will never admit this... they may not even realize it, but I think this is the main reason it went big.
The irony is that the very thing that makes devs feel like "hackers" is the very thing that's enabling agentic AI and making developers get all resistant because they're feeling dumber.
Antigravity would be a world-changing technology. This isn't.
And agentic coding is about working at a much higher conceptual level. Further from the ground. Antigravity is a functional metaphor.
My only issue with it is that it's too long at five syllables, and "anti-" carries an inherently negative connotation. I'm guessing this will eventually get renamed if it gets popular, much like Bard was.
Working at a higher conceptual level is just project management. You're the legislator giving out unfunded mandates rather than the agency staff that has to figure out how to comply. There's power there, but it isn't anti-gravity.
That said, I suspect this is really meant to allude to https://xkcd.com/353/.
That's why it's metaphor. "Operation Warp Speed" also delivered vaccines quickly, but not faster than the speed of light.
The list of company and product names that are based on a metaphor that is very obviously exaggerated is endless. Google doesn't index a googol number of pages either.
I just continue to stand by the fact that naming products using exaggerated metaphor is standard practice. The idea that it is "shameful" or "ignorant" seems absurd. I think it's OK not to take it too seriously. Nobody is going to be confused and walk off of a cliff or something because the product is named "antigravity"...
Do you get upset that the Milky Way candy bar doesn't actually contain a galaxy within? Or that the Chicago Bulls aren't as strong as actual bulls?
Geez, people are still this impressed by big tech?
"Geez," it's just a name. Is it too much to not get worked up over a perfectly innocent and fun name?
We're not talking about monopolistic business strategy here or anything. We're talking about a product name. So yeah, I think the name is perfectly innocent and fun. I cannot understand the level of conspiratorial thinking that must be involved to think "antigravity" is some kind of offensive choice. Bizarre.
The name is fine. You are bringing some kind of anti-Google prejudice to this that is irrelevant and frankly baffling.
Quite shocking to see that Google would consider using this crass software, one of the most inefficient software libraries ever made.
What were the engineers thinking?
..."Youre absolutely right! I did mess up the internals of that feature and incorrectly reported that it works. let me try again..."
2024: every day a new Chrome fork browser is announced
2025: every day a new AI IDE vscode fork is announced
I wonder why they are not trying to fixup something based on their own GUI stacks like Flutter or Compose Multiplatform.
It seems only Zed is truly innovating in this space.
IMO, it's an absolutely crappy IDE, crappy editor, with absolutely incomprehensible hostile UI.
I have almost two decades of experience with Vim, Emacs and IntelliJ. FWIW, I was able to easily find my ways in helix, kakoune and Zed.
- Icons on the toolbar in the left panel have no labels or even tooltips. No way to know what they do without clicking and checking.
- Space in the file explorer in the left panel opens a file (haven't noticed such behavior in other editors -- totally unexpected).
- Maybe that's an artifact of me installing the Vim plugin, but keyboard shortcuts displayed in the main menu don't do what they say they do.
- It often offers installing some plugins, and I've absolutely no idea why, and what will happen if I do or if I don't.
I'm talking about Cursor, which I assume is exactly like VS Code. Tried VS Code only once very long ago.
I just opened the app to see what else I can bring up, and while clicking through UI I noticed I had some crappy key bindings extension installed, which apparently caused many of my annoyances.
I've probably installed it very long ago, or even by accident.
For example, I was always annoyed that the open file/directory shortcut (one of the most common operations) is not assigned and requires mouse interaction -- fixed by disabling the extension. The go-to-file shortcut does something completely different -- fixed by disabling the extension.
I likely won't adopt Cursor as my main IDE/Editor, but it's miles better than I thought just an hour ago.
Thanks for your question :D
Decided to ditch it for claude code right after that, since I cannot be bothered to go over the entire list of keyboard shortcuts and see what else it overrode/broke.
That said I also have moved to CLI agents like Claude Code and Codex because I just find them more convenient and, for whatever reason, more intelligent and more likely to correctly do what I request.
At home I use claude and gemini in terminal, both work great for me
I don’t know and honestly I hate the assumption of the software industry that everyone knows or uses vs code. I stuck to sublime for years until I made the switch to Jetbrains IDEs earlier this year.
I quickly looked up the market share and VS code seems to have about 70% which is a lot but the 30% that don’t use it is not that small of a number either.
Like I get it it’s very popular but it’s far from the only editor/IDE people use.
[1]: https://www.gpui.rs
FWIW, the Fuchsia team was working on an editor that had a Flutter UI when run in Fuchsia:
But we're probably 1-2 years away from there still, so we'll live with skinned-forks, VSCode extensions and TUIs for now.
2026: every day ...
I always wonder how this works legally. VSCode needs to comply with the LGPL (it's based on Chromium/Blink which is LGPL); they should provide the entire sources that allow us to rebuild our own "official" VSCode binary
https://code.visualstudio.com/blogs/2025/05/19/openSourceAIE...
I've had a Github Copilot subscription from work for 1yr+ and switch between the official Copilot and Roo/Kilo Code from time to time. The official Copilot extension has improved a lot in the last 3-6 months but I can't recall ever seeing Copilot do something that Roo/Kilo can't do, or am I missing something obvious?
I think forking VS Code is probably the most sensible strategy and I think that will remain the case for many years. Really, I don't think it's changing until AI agents get so ridiculously good that you can vibe code an entire full-featured polished editor in one or a few sessions with an LLM. Then we'll be seeing lots of de novo editors.
Creepy stuff :)
That started to change in May.
https://code.visualstudio.com/blogs/2025/05/19/openSourceAIE...
https://code.visualstudio.com/blogs/2025/06/30/openSourceAIE...
https://code.visualstudio.com/blogs/2025/11/04/openSourceAIE...
2024: every day a new electron fork is announced
2025: every day a new electron fork is announced
This is still happening. Didn't you see OpenAI's release of Atlas?
The issue with Eclipse and that approach is the complexity of mixing plugins to do everything, which kills the UX.
When VSCode started, the differentiator from Atom and Eclipse was that the extension points were intentionally limited to optimize the user experience. But with the introduction of Copilot that wasn’t enough, hence the amount of forks.
I think that the Zed approach of having a common protocol to talk with agents (like a LSP but for agents) is much better. The only thing that holds me from switching to Zed is that so far my experience using it hasn’t been that good (it still has many rough edges)
I was an Atom user. Even before the acquisition of GitHub the major feature of VSCode was its speed and TS integration. AFAIK, the only common part between Atom and VSCode is Electron. Other than that, VSCode started with a different codebase based on TypeScript, while Atom was originally written in CoffeeScript.
Multiple design decisions helped VSCode to thrive (btw Erich Gamma was also part of Eclipse):
- The creation of the LSP. Each release of VSCode is also tied to TypeScript releases and improvements. There is a lot of collaboration between the two teams. That gave VSCode the best support for TS and JS. I used Atom and WebStorm regularly when VSCode came out, and VSCode auto-complete and TS support were orders of magnitude better. Everybody caught up since then, but I guess many users like me switched because of that.
- Unlike Atom, VSCode was designed with web integration in mind. A lot of sites started to use Monaco for code editing, and a lot of web-based IDEs use parts of it (CodeSandbox, StackBlitz, etc).
- Gradual rollout of plugin integration. While Atom has the philosophy of everything is pluggable, VSCode was intentionally limited. Which was a good thing given the poor loading performance of Atom.
By the time MS acquired GitHub, Atom usage was already in decline.
* Note: my side of the history comes from my experience of working in a company that did a custom Eclipse IDE. We evaluated Atom and then VSCode as alternatives to “modernize” our IDE. So I have experience in looking at both Atom and VSCode code bases: they are totally different. Also, the main problem with VSCode for us was the limited extensibility.
I wanted to second this: I compared both, with Atom experience starting before the first VSC release. Atom had performance and stability problems continuously through that period, and never really won any of my coworkers over. A lot of that was simple performance: I remember using Is It Snappy? to test my subjective impression and finding that input latency was a full order of magnitude worse, which is the kind of thing which really colors your impression of an editor.
I think this was more accurate around 2012. My local tech magazine had their own fork, and they attached a CD to the magazine which included the browser.
I'm going to need an AI summary of this page to even start comprehending this... It doesn't help that the scrolling makes me nauseous, just like real anti-gravity probably would.
"A more intuitive task-based approach to monitoring agent activity, presenting you with essential artifacts and verification results to build trust."
The whole thing around "trust" is really weird. Why would I as a user care about that? It's not as if LLMs are perfect oracles and the only thing between us and a brave new world is blind trust in whatever the LLM outputs.
I mean, google doesn't have the greatest track record.
Also, why is that site's scroll behavior so weird? Just use the browser's default, for Ford's sake!
And now they can’t even ship a desktop app without forking VSCode? Look, I get it. There’s this huge ecosystem. Everyone uses it. I’m not saying it’s damning or even bad to fork it.
But why is this being painted as something revolutionary? It’s a reskin of all the other tools which are variations on the same theme, dressed up in business speak (an agent-first UX!). I’m sure it’s OK. I downloaded it. The default Tokyo Night theme is unusable; the contrast is so low it can’t be read. I picked Vim bindings, but as soon as I tried to edit a file I noticed that was ignored.
What happened? Is this how these beautiful, innovative companies are bound to end up?
I know there's a "free plan with generous rate limits" but it's obvious that they're losing money there.
But for writing code in some domain I am good in, they are pretty much useless.. I would spend a lot longer struggling to get something that barely functions from them VS writing it myself, and the one I write myself will be terse and maintainable + if it has bugs they will be like obvious ones, not insane ones that a human would never do.
Even just when getting them to write individual functions with very clear and small scopes.
What about a demo that shows how this can be used to fix for example https://github.com/emscripten-core/emscripten/issues/24792?
Those quota limits brought me back down to earth quickly.
There is currently no support for:
- Paid tiers with guaranteed quotas and rate limits
- Bring-your-own-key or bring-your-own-endpoint for additional rate limits
- Organizational tiers (self-serve or via contract)
So basically just another case of vendor lock-in. No matter whether the IDE is any good - this kills it for me.

very interesting times; i'm glad to see browser automation becoming more mainstream as part of the ai-assisted dev loop for testing. (disclosure: started the selenium project, now working on something similar for a vibe coding context)
Looks like I'll wait to see if Google cares about putting the polish into a VSCode fork that at least comes close to what Cursor did.
That's 100% what it is, and rushed at that. Competition is (generally) a good thing though, only time can tell which IDE comes out on top.
I can't really explain what the issue is, I'd assume it's about lock-in, but I don't see a VS Code fork or yet another Chromium browser being something that a person couldn't easily replace with another similar fork, but with a different AI. Is that the pitch internally? Lock users into a browser or IDE, so they'll be forced to use a certain AI?
Shrugs. That's the only reason that makes any sense, short of them just being blindly mimetic (which, let's be honest, isn't outside of the realm of possibility these days).
That seems bad.
I like this tool.
edit: Scratch that, GP3L is erroring out too. Global hug I guess. I still like this.
Why can I not authenticate into Google Antigravity?
Google Antigravity is currently available for non-Workspace personal Google accounts in approved geographies. Please try using an @gmail.com email address if having challenges with Workspace Google accounts (even if used for personal purposes).
https://antigravity.google/docs/faq

Not that I have any desire to try this at this point, but it's always felt ironic.
I used to love leaving that site open on public PCs and watching the reactions that resulted :)
Additionally... Google Code was shut down in 2016? I have zero confidence in such a user hostile company. They gave you a Linux phone, they extended it, and made it proprietary. They gave you a good email account, extended it and made it proprietary. They took away office software from you via Google Docs, so now you don't even own the software they do.
No thanks.
My crystal ball says it will be shutdown next year.
Their only real product is advertising; everything else is a pretense to capture users' attention and behaviors that they can auction off.
If Google doesn't adapt, they could easily be dead in a decade.
My primary workflow is asking AI questions vaguely to see if it successfully explains information I already know or starts to guess. My average context length of a chat is around 3 messages, since I create new chats with a rephrased version of the question to avoid the context poison. Asking three separate instances the same question in slightly different way regularly gives me 2 different answers.
This is still faster than my old approach of finding a dry ground source like a standards document, book, reference, or datasheet, and chewing through it for everything. Now I can sift through 50 secondary sources for the same information much faster because the AI gives me hunches and keywords to google. But I will not take a single claim for an AI seriously without a link to something that says the same thing.
Most of the other people (so far) in this sub-thread do not think this. They essentially have a conspiratorial view on it.
There is no evidence to support any other motive.
Any experienced (as in, 10+ years) developer knows better than to trust google with dev tools.
Colab is still going strong. Chrome inspector is still going strong.
They've never released a full-fledged IDE before, have they? Which I don't count Apps Script editor as one, but that's been around for a long time as well.
I think it's much more likely that Google believes this is the future of development and wants to get in on the ground floor. As they should.
This is hardly possible, as this is definitely not the future of development, which is obvious to the developers who created this. Or to any developer, for that matter.
This is a stakeholders' feature.
None of that matters for actual development work.
A lot of people find it's actually quite valuable for "actual development work". If you want to ignore all that, then I guess go ahead.
But just know that what you're claiming is "obvious", is clearly not. There seems to be large disagreement over it, so it is objectively not obvious, but rather quite debatable.
I've played with Antigravity for the past 48 hours for lots of different tasks. Is it revolutionizing development for me? No. Do I think they want it to do that and are working extremely hard to try to achieve that? I think the answer is very obviously: of course. Will it maybe get closer to that within a few months or a year? Maybe.
I think the comment you’re replying to was addressing the “shutting down” part, not the “investors” part.
also i was alluding to the way their promotion policy encourages people to start rather than maintain projects.
Google is highly profitable. It's not looking for investment, it's the one investing.
Maybe you are confusing it with OpenAI?
edit: Also Jules...
snark off:
I think the Google PMs should have coffee together and see if all of this sprawl makes any sense.
Google AI studio is their developer dashboard.
Google Vertex is their equivalent of Amazon Bedrock.
Google Gemini Chat is their ChatGPT app for normies.
Google Antigravity is their Cursor equivalent.
But AI Studio is getting vibe coding tools. AI Studio also has an API that competes with Vertex. They have IDE plugins for existing IDEs to expose Chat, Agents, etc. They also have Gemini CLI for when those don’t work. There is also Firebase Studio, a browser-based IDE for vibe coding. Jules, a browser-based code agent orchestration tool. Opal, a node-based tool to build AI… things? Stitch, a tool to build UIs. Colab with AI for a different type of coding. NotebookLM for AI research (many features now available in the Gemini app). AI Overviews and AI Mode in search, which now feature a generic chat interface.
That's just new stuff, and not including all the existing products (Gmail, Home) that have Gemini added.
This is the benefit of a big company vs startups. They can build out a product for every type of user and every user journey, at once.
In another 2 years we'll probably be back to just "Google" as digital agent that can do any research, creative, or coding task you can imagine.
Well, that clears that up.
Google ADK (agent development kit, awesome)
The whole webpage looks like something from Apple.
I remember it took me a while early in my career to change my resume away from saying "I want to do this at my next job and make a lot of money" and towards "here is how I can make money and save costs for your company".
Google didn't learn that lesson here. They are describing why us using Antigravity is good for Google, not why us using Antigravity is good for us.
Why would I even bother getting mildly invested in this when the product launch/promotion incentive structure at Google is so well known?
Wow, was Google researching some kind of anti-gravity device behind the curtains for real, and then dropped it out of nowhere?
Ah damn, yet another ai-assisted-something. Crap.
But honestly Google software seems so buggy. The management class took over there long ago and are quietly ruining the company.
It really seems like it's just standardizing into a first-class UI what a lot of people have already been doing.
I don't think I'm the target for this - I already use Claude Code with jj workspaces and a mostly design-doc first workflow, and I don't see why I would switch to this, but I think this could be quite useful for people who don't want to dive in so deep and combine raw tooling themselves.
Can you elaborate on how you personally use jj workspaces with command-line coding agents?
After a couple of iterations on this, I've ended up having Claude Code vibe-code a helper CLI in Go for me, which I can invoke with `ontheside <new-workspace-name> <base-change>` and which will:
- create a new jj workspace based on the given change
- create a docker container configured with everything my unit tests need to run, my claude code config mounted, and jj configured
- it also sets up a claude code hook to run `jj` (no arguments) every time it changes a file, so that jj does a snapshot
- finishes by starting an interactive claude code session with `--dangerously-skip-permissions`
- it also cleans it all up once I exit the claude code session and fish shell that's running it
With this I can have Claude Code working asynchronously on something, while I can review (or peek) the changes in batch from my main editor by running `jj show <change-id>` / `jj diff -r "..."` (which in my case opens it up in the Goland multi-file diff viewer). I can also easily switch to the change it's working on in my main editor, to make some manual modifications if necessary.
This is, in general, primarily for "in the background async stuff" I want to have it work on. Most of the time I just have a dead-normal claude code session running in my main workspace.
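For a rough sense of the shape, here's a minimal sketch of what such a helper could look like (this isn't my actual tool; the image name, mount paths, and exact jj/docker flags are placeholder assumptions, and it leaves out the file-change snapshot hook and the fish shell integration):

    // Hypothetical sketch only -- flags, image name, and paths are assumptions.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // run wires a child process to the current terminal and executes it.
    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if len(os.Args) != 3 {
            fmt.Fprintln(os.Stderr, "usage: ontheside <new-workspace-name> <base-change>")
            os.Exit(1)
        }
        name, base := os.Args[1], os.Args[2]

        // Create a sibling jj workspace whose working copy starts at the given change.
        dir, err := filepath.Abs(filepath.Join("..", name))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := run("jj", "workspace", "add", "--revision", base, dir); err != nil {
            fmt.Fprintln(os.Stderr, "creating workspace:", err)
            os.Exit(1)
        }
        // Forget the workspace again once the agent session below ends.
        defer run("jj", "workspace", "forget", name)

        // Throwaway container with the workspace and Claude Code config mounted,
        // dropping straight into an interactive, unattended agent session.
        home, _ := os.UserHomeDir()
        if err := run("docker", "run", "--rm", "-it",
            "-v", dir+":/work",
            "-v", filepath.Join(home, ".claude")+":/root/.claude",
            "-w", "/work",
            "dev-sandbox:latest", // placeholder image with jj and test deps baked in
            "claude", "--dangerously-skip-permissions"); err != nil {
            fmt.Fprintln(os.Stderr, "agent session ended with error:", err)
        }
    }

The real helper also wires up the post-file-change `jj` snapshot hook mentioned above, which this sketch skips.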
Minor self-plug - if you want, I posted a jj intro article a while ago[0], though it doesn't include my current workspace usage.
[0]: https://kubamartin.com/posts/introduction-to-the-jujutsu-vcs...
https://news.ycombinator.com/item?id=36952796
https://news.ycombinator.com/item?id=30398662
I’m already at full mental capacity planning and reviewing the work of two agents (one foreground which almost never asks for approval, and one background which never asks for approval).
I don’t really need the ability to juggle more of them, and noticing their messages is not a bottleneck for me, while I’m happy with the customizability and adaptability of my raw’er workflow.
Maybe if they’re as slow as codex…
Connecting custom mcp servers.
No thanks...
That's not exactly really where I hoped my career would lead. It's like managing junior developers, but without having nice people to work with.
With a human, you give them feedback or advice, and generally by the 2nd or 3rd time the same kind of thing happens they can figure it out and improve. With an LLM, you have to specifically set up a convoluted (and potentially financially and electrical-power expensive) system in order to provide MANY MORE examples of how to improve via fine tuning or other training actions.
The only way that an AI model can "learn" is during model creation, which is then fixed. Any "instructions" or other data or "correcting" you give the model is just part of the context window.
> You can verify code quality at a glance, and ship with absolute confidence.
> You can confidently trust and merge the code without hours of manual review.
I couldn't possibly imagine that going wrong.
The burden of human interaction is removed from building.
I just need some time by myself to recharge after all the social interactions.
I actually checked that before commenting and went off the Google AI overview -.- eugh
- can write code
- tireless
- have no aspirations
- have no stylistic or architectural preferences
- have massive, but at the same time not well defined, body of knowledge
- have no intrinsic memories of past interactions.
- change in unexpected ways when underlying models change
- ...
Edit: Drones? Drains?
They can usually write code, but not that well. They have lots of energy and little to say about architecture and style. Don't have a well defined body of knowledge and have no experience. Individual juniors don't change, but the cast members of your junior cohort regularly do.
But they don't have a grasp for the project's architecture and will reinvent the wheel for feature X even when feature Y has it or there is an internal common library that does it. This is why you need to be the "manager of agents" and stay on top of their work.
Sometimes it's just about hitting ESC and going "waitaminute, why'd you do that?" and sometimes it's about updating the project documentation (AGENTS.md, docs/) with extra information.
Example: I have a project with a system that builds "rules" using a specific interpreter. Every LLM wants to "optimise" it by using a pattern that looks correct, but will in fact break immediately when there's more than one simultaneous user - and I have a unit test that catches it.
I got bored of LLMs trying to optimise that bit the wrong way, so I added a specific instruction, with reasoning why it shouldn't be attempted and has been tried and failed multiple times. And now they stopped doing it =)
- don't have career growth that you can feel good about having contributed to
- don't have a genuine interest in accomplishment or team goals
- have no past and no future. When you change companies, they won't recognize you in the hall.
- no ownership over results. If they make a mistake, they won't suffer.
We'll fix that, eventually.
> - don't have career growth that you can feel good about having contributed to
Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
> - don't have a genuine interest in accomplishment or team goals
Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.
> - have no past and no future. When you change companies, they won't recognize you in the hall.
Or on the picket line.
> - no ownership over results. If they make a mistake, they won't suffer.
Good deal. Less human suffering is usually worth striving for.
Have you ever spent any time around children? How about people who think they're accomplishing a great mission by releasing truly noxious ones on the world?
You just dismissed the entire notion of accountability as an unnecessary form of suffering, which is right up there with the most nihilistic ideas ever said by, idk, Dostoevsky's underground man or Raskolnikov.
Don't waste your life on being the Joker.
It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.
In any case, the matrix wasn't my inspiration here, but it is a pithy way to describe the concept. It's hard to imagine how humans maintain relevancy if we really do manage to invent something smarter than us. It could be that my imagination is limited though. I've been accused of that before.
> Humans are on the verge of building machines that are smarter than we are.
You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".
Most AI experts not heavily invested in the stocks of inflated tech companies seem to agree that current architectures cannot reach AGI. It's a sci-fi dream, but hyping it is real profitable. We can destroy ourselves plenty with the tech we already have, but it won't be a robot revolution that does it.
What I really need to ask an LLM for is a pointer to a forum that doesn't cultivate proud exhibition of ignorance, Luddism, and general stupidity at the level exhibited by commenters in this entire HN story, and in this subthread in particular.
We already had one Reddit, we didn't need two.
Why?
It's a tool, not an intelligent being
Next year there will be an AI screwdriver your employer forces you to use.
Then I realised that this will actually happen, and was sadly reminded we’re now in the post-sarcasm era.
Whenever I have a model fix something new I ask it to update the markdown implementation guides I have in the docs folder in my projects. I add these files to context as needed. I have one for implementing routes and one for implementing backend tests and so on.
They then know how to do stuff in the future in my projects.
Key words are these.
> They then know how to do stuff in the future in my projects.
No. No, they don't. Every new session is a blank slate, and you have to feed those markdown files manually to their context.
The AI hype folks write massive fan fiction style novellas that don't have any impact.
But there's middle ground where you tell the agent the specific things about your repo that it doesn't know based on its training. Like if your application has a specific way to run tests headless or it's compiled a certain way that's not the default average.
Unless, of course, the phase of the moon is wrong and Claude itself is stupid beyond all reason
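To make that concrete, a repo-level instruction file of that sort might look something like this (hypothetical contents, not from any real project; the commands and paths are made up):

    ## Running tests
    - Use `make test-headless`; plain `make test` opens a real browser and hangs in CI.

    ## Build quirks
    - The frontend is compiled with the custom config in build/; don't add a default bundler setup.

    ## Rules interpreter
    - Do not "simplify" rule evaluation into a shared cache; it breaks with more than one simultaneous user (there is a unit test that catches this).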
AGENTS.md exists, Codex and Crush support it directly. Copilot, Gemini and Claude have their own variants and their /init commands look at AGENTS.md automatically to initialise the project.
Nobody is feeding anything "manually" to agents. Only people who think "AI" is a web page do that.
All of them often can't even find/read relevant docs in a new session without prompting
And of course it's up to the developer to keep the documentation up to date. Just like when working with humans. Stuff doesn't magically document itself.
Yes "good code is self-documenting", but it still takes ages to find anything without docs to tell you the approximate direction.
It's literally a text file the agent can create and update itself. Not hard. Try it.
Humans actually learn from codebases they work with. They don't start with a clean slate every time they wake up in the morning. They know where to find information and how to search for them. They don't need someone to constantly update docs to point to changes.
> but it still takes ages to find anything without docs to tell you the approximate direction.
Which humans, unsurprisingly, can do without wiping their memory every time.
That sounds a lot like '50 First Dates' but for programming.
Yes, this is something people using LLMs for coding probably pick up on the first day. They're not "learning" as humans do, obviously. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and then restart from the beginning. The "learning" is you keeping track of what you need to include in the context; how that process exactly works is up to you. For some it's very automatic and you don't add/remove things yourself, for others it's keeping a text file around that they copy-paste into a chat UI.
This is what people mean when they say "you can kind of do "learning" (not literally) for LLMs"
It's functionally working the same as learning.
If you look at it like a black box, then you can't tell the difference from the input and output.
For example, let's say LLMs did not have examples of chess gameplay examples in their training data. Would one be able to have an LLM play chess by listing the rules and examples in the context? Perhaps, to some extent, but I believe it would be much worse than if it was part of the training (which of course isn't great either).
Coincidentally, the hippocampus looks like a seahorse (emoji). It's all connected.
Not to mention; hippocampus literally means "seahorse" in Greek. I knew neither of those things before today, thanks!
- constantly give wrong answers, with surprising confidence
- constantly apologize, then make the same mistake again immediately
- constantly forget what you just told them
- ...
Sadly, this is not sustainable and I am not sure what I'm going to do.
Nice? I thought all sycophant LLMs were exceedingly nice.
Someone gave me a great tip though - at least for ChatGPT there's a setting where you can change its personality to "robot". I guess that affects the system prompt in some way but it basically fixes the issue.
NO ONE TALKS TO EACH OTHER unless absolutely necessary for work.
We get on Zooms to talk. Even with the person 1 cubicle over.
Who normalized this?!!
But why? Required? Culture? Maybe it's the company?
It's clear now that "agents" in the context of "AI" is really about answering the question "How can we make users make 10x more calls to our models in a way that makes it feel like we're not just squeezing money out of them?" I've seen so many people that think setting some "agents" off on a minutes-to-hours-long task of basically just driving up internal KPIs at LLM providers is cutting edge work.
The problem is, I haven't seen any evidence at all that spending 10x the number of API calls on an agent results in anything closer to useful than last year, when people were purely vibe coding all the time. At least then people would interactively learn about the slop they were building.
It's astounding to watch a coworker walk through a PR with hundreds of added new files and repeatedly mention "I'm not sure if these actually work, but it does look like there's something here".
Now I'm sure I'll get some fantastic "no true Scotsman" replies about how my coworkers must not be skilled enough or how they need to follow xyz pattern, but the entire point of AI was to remove the need for specialize skills and make everyone 10x more productive.
Not to mention that the shift in focus on "agents" is also useful in detracting from clearly diminishing returns on foundation models. I just hope there are enough people that still remember how to code (and think in some cases) to rebuild when this house of cards falls apart.
At least for programming tools, for everything (well, the vast majority, at least) that is sold that way—since long before generative AI—it actually succeeds or fails based not on whether it eliminates need for specialized skills and makes everyone more productive, but whether it further rewards specialized skills, and makes the people who devote time to learning it more productive than if they devoted the same time to learning something else.
If Google has forgotten how to do software, then the future doesn't look bright.
It has jamf among other stuff
Too early in my career to not give a shit and retire, but too late to be excited about these things and eager to learn. What a time...
This just feels... a little too dystopian. Companies hoovered up more or less the entirety of our collective thoughts, writings, and output, and now want to sell it back to us, and I fear the cost is going to be extremely steep.
It's impressive, but at the same time it feels like it's going to somehow be a net detractor to society, and yet I feel I need to keep up with each new iteration or potentially get washed over and left behind by the wave.
I am somewhat fortunate to be towards the top of the pyramid and also in a position where I could theoretically ride off into the sunset, but I fear the societal implications and the pain that is going to come for vast numbers of people.
One thing I’ve noticed, though, is that when actually coding (without the use of AI, maybe a bit of tab auto-complete) I’m way faster working in my domain than I am when using AI tools.
Every time I use AI tools in my domain-expertise area, I find they end up slowing me down: introducing subtle bugs, making me provide an insane amount of context and detail (at which point it becomes way faster to do it myself).
Just code and chill, man. Having spent the last 6 months really trying everything (all these context engineering strategies, agents, CLAUDE.md files in every directory, etc., etc.), it really is still more productive to just code yourself if you know what you’re doing.
The thing I love most, though, is having discussions with an LLM about an implementation, having it write some quick unit tests and performance tests for certain base cases, having it write a quick shell script, etc. For things like this it’s amazing and makes me really enjoy programming, since I save time and can focus on the actual fun stuff.
But it's like you said: I like using LLMs for completing smaller parts, asking for specific kinds of help, or having conversations about solutions, but for anything larger it just feels like banging my head against a wall.
LLMs are not useful in this workflow, because they are too verbose. Their answers are generic and handle scenarios you don't even support yet. What's useful is good documentation (as in truthful) and the code if it's open.
This approach has worked really well in my career. It gives me KISS and YAGNI for free. And every line of code is purposeful and has a reason to be there.
I’ve been actively using the first tier paid version of:
- GPT
- Claude
- Gemini
Usually it’s via the cli tool. (Codex, Claude code, Gemini cli)
I have a bunch of scripts set up that write to the tmux pane that has these chats open, so I’ll visually highlight something in nvim and pipe it into whichever pane has one of these tools open and start a discussion (a rough sketch of that glue is below).
If I want it to read the full file, I’ll just use the TUI’s search (they all use the @ prefix to search for files) and then discuss. If I want to pipe a few files, I’ll add them to the nvim quickfix list or literally pipe the files I want to a markdown file (with full paths) and discuss.
So yes - the chat interface in these cli tools mostly. I’m one of those devs that don’t leave the terminal much lol
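For what it's worth, the glue for that kind of setup can be tiny. A sketch under assumptions (the target pane id arrives as the first argument and the highlighted text on stdin; the commenter's actual scripts may look nothing like this):

    # send_to_pane.py -- pipe a visual selection into the tmux pane running
    # codex / claude / gemini. Assumptions: the target pane id is argv[1]
    # and the text to send arrives on stdin.
    import subprocess
    import sys

    def send_to_pane(pane: str, text: str) -> None:
        # load the text into tmux's paste buffer, then paste it into the pane
        subprocess.run(["tmux", "load-buffer", "-"], input=text.encode(), check=True)
        subprocess.run(["tmux", "paste-buffer", "-t", pane], check=True)

    if __name__ == "__main__":
        # usage: python send_to_pane.py %2 < snippet.txt   (%2 being a tmux pane id)
        send_to_pane(sys.argv[1], sys.stdin.read())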
Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless if one takes issue with this study, there are enough others of its kind to suggest skepticism regarding how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...
There are definitely ways in which LLMs, and the agentic coding tools scaffolded on top, help with aspects of development. But to say anyone who claims otherwise is either being disingenuous or doesn't know what they're doing is not an informed take.
"""
1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.
2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.
3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.
4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect.
"""
Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially.
Can you link any? All I've seen is stuff like Anthropic claiming 90% of internal code is written by Claude--I think we'd agree that we need an unbiased source and better metrics than "code written". My concern is that whenever AI usage in professional developers is studied empirically, as far as I have seen, the results never corroborate your claim: "Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially."
I'm open to it being possible, but as someone who was a developer before March 2023 and is surrounded by many professionals who were also so, our results are more lukewarm than what I see boosters claim. It speeds up certain types of work, but not everything in a manner that adds up to all work "sped up substantially".
I need to see data, and all the data I've seen goes the other way. Did you see the recent Substack looking at public Github data showing no increase in the trend of PRs all the way up to August 2025? All the hard data I've seen is much, much more middling than what people who have something to sell AI-wise are claiming.
https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
I also have a personal rule that I will try something for at least 4 months actively before making my decision about it (programming language, new tools, or in this case AI assisted coding)
I made the claim that, in my area of expertise, I have found that *most of the time* it is faster to write something myself than to write out a really detailed md file / prompt. It becomes more tedious to express myself via natural language than it is with code when I want something very specific done.
In these types of cases, writing the code myself allows me to express the thing I want faster. Also, I like to code with the AI auto-complete, but while this can be useful, I sometimes disable it because it’s distracting and consistently incorrect in its predictions.
---
claim you made: "One thing I’ve noticed though that actually coding (without the use of AI; maybe a bit of tab auto-complete) is that I’m actually way faster when working in my domain than I am when using AI tools."
---
You did make that claim but I'm aware my approach would bring the defensiveness out of anyone :P
This is what you said - and I didn’t make this claim. I specifically said that in “my domain”. Meaning a code base I know fully well and own, and it’s a language, framework and patterns that I’ve worked with for years.
For certain things - yes, it’s faster to do myself than write a long prompt with context (or a predefined one) because it’s faster to express what I want with the code than natural language.
It feels like just writing my own code but at 50% higher wpm. Especially if I can limit it to only suggest a single row; it prevents it from affecting my thought process or approach.
This is how the original GitHub copilot worked until it switched to a chat based more agentic behavior. I set it up locally with an old llama on my laptop and it's plenty useful for bash and c, and amazing for python. I ideally want a model trained only for code and not conversational at all, closer to the raw model trained to next-token predict on code.
I think this style just doesn't chew enough tokens to make tech CEOs happy. It doesn't benefit from a massive model, and running it in the cloud would almost cost more in networking than in compute.
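As a rough illustration of that non-chat, raw-completion style (not the commenter's actual setup): a sketch against a local llama.cpp server assumed to be running on localhost:8080 with a code model loaded, using its /completion endpoint.

    # complete.py -- ask a local llama.cpp server for a raw (non-chat)
    # continuation of a code snippet. Assumes llama-server is running on
    # localhost:8080; endpoint and fields follow llama.cpp's HTTP API.
    import json
    import urllib.request

    def complete(prefix: str, n_predict: int = 64) -> str:
        body = json.dumps({"prompt": prefix, "n_predict": n_predict}).encode()
        req = urllib.request.Request(
            "http://127.0.0.1:8080/completion",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["content"]

    if __name__ == "__main__":
        print(complete("def fizzbuzz(n):\n"))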
Ditto. And we still can.
I've yet to use an "agent", and still use a chat UI to an LLM in Emacs. I rely on these tools for design discussion, rough prototyping, and quick reference, but they still waste my time roughly a quarter of the time I use them. They have gotten better in the last year, though, and I've been able to broaden my reach into stacks and codebases I wouldn't have felt comfortable with before, which is good.
I just have no interest in "agents". I don't want to give these companies more access to my system and data, and I want to review everything these tools generate. If this makes me slower than a vibe coder, that's intentional. Thankfully, there are still sane people and companies willing to pay me for this type of work, so I'm not worried about being displaced any time soon. Once that happens, I'll probably close up shop, figure out an alternative income stream, and continue coding as a hobby.
The very last thing I want is to be "elevated to a manager of agents," as they so smugly say in the video.
Ability to code within an IDE will not make shareholders happy while shoving AI down developers' throats most definitely will.
I guess it must have been the GPL which isn’t compatible with their AI agents.
Oh, wait I was meant to take this announcement seriously?
I'm going to treat this like Kiro, and just use it until they start charging for it and then probably switch back to VS code with its built-in agent support.
Eventually they're going to do a rug pull, and instead of paying $10 a month for tons of AI code requests, it's going to be $200 or $300 for that. The economics just aren't there to actually make a profit; hopefully before the rug pull happens, local models on normal hardware will be fast enough.
Why does the IDE eat your files? If an editor shuts down, open up another one and continue. What's with the melodrama?
Using agents effectively is this whole other skillset including managing requirements, prioritization and, worse yet, I'm rarely left with any knowledge. I don't nearly get the same joy out of "I finished a task with an agent" like I do with "I had a problem, I delved deep to understand it, learned something new and solved it"
Then again, I bet people making furniture out of wood felt the same about industrial furniture factories. And it can be argued that not every use case needs custom tailored furniture...
It is a vs code fork. There were some UI glitches. Some usability was better. Cursor has some real annoying usability issues - like their previous/next code change never going away and no way to disable it. Design of this one looks more polished and less muddy.
I was working on a project and just continued with it. It was easy because they import settings from Cursor. Feels like the browser wars.
Anyway, I figured it was the only way to use Gemini 3, so I got started. A fast model that doesn't look for much context. Could be a preprompt issue. But you have to prod it to do stuff: no ambition and a kind of off-putting attitude, like 2.5.
But hey, a smarter, less context-rich Cursor composer model. And that's a compliment, because the latest composer is a hidden gem. Gemini has potential.
So I start using it for my project and after about 20 mins - oh, no. Out of credits.
What can I do? Is there a buy a plan button? No? Just use a different model?
What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.
I switched back to Cursor. And you know what? It had Gemini 3 Pro. Likely a less hobbled version. Day one. Seems like a mistake in the eyes of the big evil companies, but I'll take it.
Real developers want to pay real money for real useful things.
Google needs to not set themselves up for failure with every product release.
If you release a product, let those who actually want to use it have a path to do so.
Google may have won the browser wars with Chrome, but Microsoft seems to be winning the IDE wars with VSCode
Alternatives have a lot of features to implement to reach parity
Microsoft made a great decision to jump on the trend and just pour money to lap Atom and such in optimization and polish.
Especially when you compare it to Microsoft's efforts on the desktop. They've accumulated several component libraries, more or less, over the years, and I still prefer WinForms.
The extent to which electron apps run well depends on how many you're running and how much ram you had to spare.
When I complain about electron it has nothing to do with ideology, it's because I do run out of memory, and then I look at my process lists and see these apps using 10x as much as native equivalents.
And the worst part of wasting memory is that it hasn't changed much in price for quite a while. Current model memory has regularly been available for less than $4/GB since 2012, and as of a couple months ago you could get it for $2.50/GB. So even a 50% boost in use wipes out the savings since then. And sure the newer RAM is a lot faster, but that doesn't help me run multiple programs at the same time.
2x as many chrome instances, no issues
If you didn't have those gigabytes of memory sitting idle, you would notice. Either ugly swapping behaviors or programs just dying.
I use all my memory and can't add more, so electron causes me slowdowns regularly. Not constantly, but regularly, mostly when switching tasks.
Visual Studio Code is a developer tool, so there’s no reason to complain about that.
I run multiple Electron apps at a time even on low spec machines and it’s fine. The amount of hypothetical complaining going on about this topic is getting silly.
You know these apps don’t literally need to have everything resident in RAM all the time, right?
"Multiple" isn't too impressive when you compare that a blank windows install has more than a hundred processes going. Why accept bloat in some when it would break the computer if it was in all of them?
> Visual Studio Code is a developer tool, so there’s no reason to complain about that.
Even then, I don't see why developers should be forced to have better computers just to run things like editors. The point of a beefy computer is to do things like compile.
But most of what I'm stuck with Electron-wise is not developer tools.
> The amount of hypothetical complaining going on about this topic is getting silly.
I am complaining about REAL problems that happen to me often.
> You know these apps don’t literally need to have everything resident in RAM all the time, right?
Don't worry, I'm looking specifically at the working set that does need to stay resident for them to be responsive.
Here's the other unspoken issue: WHAT ELSE DO YOU NEED SO MUCH MEMORY FOR!?
When I use a computer, I am in the minority of users who run intensive stuff like a compiler or ML training run. That's still a minute portion of the total time I spend on my computer. You know what I always have open? A browser and a text editor.
Yes, they could use less memory. But I don't need them to use less memory, I need them to run quickly and smoothly because even a 64GB stick of RAM costs almost nothing compared to how much waiting for your browser sucks.
And price is a pathetic excuse for bad work. RAM gets 50x cheaper and some devs think it's fine to use 50x as much of it making their app work? That's awful. That's why computers are still unresponsive half the time despite miracles of chipmaking.
Devs getting good computers compounds this problem too, when they get it to "fast enough" on their machine and stop touching it.
And memory being cheap is an especially bad justification when a program is used by many people. If you make 50 million people use $4 of RAM, that's a lot. Except half the time the OEM they bought the computer from charges $20 for that much extra RAM. Now the bloat's wasting a billion dollars.
And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.
That's ABSURD.
> That's why computers are still unresponsive half the time despite miracles of chipmaking.
Have you ever actually used VSCode? It's pretty snappy even on older hardware.
Of course, software can be written poorly and still fit in a small amount of memory, too :)
> Now the bloat's wasting a billion dollars.
Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.
Do you think most users spec their machines with the exact 4GB of RAM that it takes to run a single poorly-written Electron app?
> And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.
Dude, it's 2025.
I googled "cheapest smartphones India" and the first result was for the Xiaomi POCO F1. It has 8GB of RAM and costs ₹6,199 - about $62. That's a whole-ass _phone_, not just the RAM.
If you want to buy a single 8GB stick of DDR3? That's about $15 new.
> My motherboard does not support more memory. Closer to hundreds of dollars than $4.
If you are buying HUNDREDS of dollars of RAM, you are building a powerful system which almost certainly is sitting idle most of the time.
> And no I will not justify my memory use to you.
Nobody is forcing you to run an electron app, they're just not catering to this weird fetish for having lots of unused RAM all the time.
What is? The devs or my claim? There are apps that use stupid amounts of memory to do the same thing a windows 98 app could do.
And you can do good or bad within the framework of electron but the baseline starts off fat.
> Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.
If they want to do both at the same time, they need the extra. Things like music or chat apps are a constant load.
> Dude, it's 2025.
As recently as 2024 a baseline Mac came with 8GB. Soldered, so you can't buy a stick of anything.
> If you are buying HUNDREDS of dollars of RAM
Not hundreds of dollars of RAM, hundreds of dollars to get a different platform that accepts more RAM.
> Nobody is forcing you to run an electron app
I either don't get to use many programs and services, or I have to deal with these problems that they refuse to solve. So it's reasonable to complain even though I'm not forced.
> weird fetish for having lots of unused RAM
I have no idea why you think I'm asking for unused RAM.
When I run out, I don't mean that my free amount tipped below 10GB, I mean I ran out and things lag pretty badly while swapping, and without swap would have crashed entirely.
Amazon just released an OS that uses React Native for its entire GUI.
Lots of Electron apps are great to use.
Thereby adapted to devs' needs, rather than users'.
IMO The next best cross-platform GUI framework is Qt (FreeCAD, QGIS, etc.)
Qt6 can look quite nice with QSS/QStyle themes, these days, and its native affordances are fairly good.
But it's not close. VSCode is nice-looking, to me.
I've been playing around with different GUI approaches for the desktop, and what impresses me the most about Godot is how lightweight and self-contained it can be while still being cross-platform on both ends.
When did they add that? Last time I used it, it was still based on xterm.js.
Also, technically Chromium/Blink has GPU rendering built in for web pages, so everything could run on GPU.
> GPU acceleration driven by the WebGL renderer is enabled in the terminal by default. This helps the terminal work faster and display at a high FPS by significantly reducing the time the CPU spends rendering each frame
https://code.visualstudio.com/docs/terminal/appearance#_gpu-...
Before that from v1.17 (~October 2017) it was using a 2d canvas context: https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...
My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.
I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:
- Visual Studio Code: 589.4 MB
- Visual Studio 2022: 732.6 MB
update:
I see a lot of love for Jetbrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!
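If anyone wants to repeat that comparison, note that Electron apps and the JVM both spawn several processes, so a fairer number is the summed resident set size across all of them. A rough sketch using psutil (the substring match on the process name is the obvious fudge factor, and shared pages get double counted):

    # rss_total.py -- sum resident memory across all processes whose name
    # contains a substring. Rough by design: shared pages are double counted
    # and the name match (e.g. "Code") is a crude filter.
    import sys
    import psutil

    def total_rss_mb(name_fragment: str) -> float:
        total = 0
        for proc in psutil.process_iter(["name", "memory_info"]):
            try:
                if name_fragment.lower() in (proc.info["name"] or "").lower():
                    total += proc.info["memory_info"].rss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        return total / (1024 * 1024)

    if __name__ == "__main__":
        # usage: python rss_total.py Code
        print(f"{total_rss_mb(sys.argv[1]):.1f} MB")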
Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than text editors.
Compared to 20 years ago that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes RAM actually costs more today than 10 years ago. If we ignore spikes and buy when the cycle of memory prices is low, DDR3 in 2012 was not much more than the price DDR5 was sitting at for the last two years.
I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, DB in DataGrip sometimes) then it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.
Had that issue with 32 GB of RAM on a work laptop, in part also because the services themselves took between 512 MB and 2 GB of memory to run (thanks to Java and Spring/Boot).
- notepad.exe: 54.3 MB
- emacs: 15.2 MB
- vim: 5.5MB
I would argue that notepad++ is not really comparable to VSCode, and that VSCode is closer to an IDE, especially given the context of this thread. TUIs are not offering a similar GUI app experience, but vim serves as a nice baseline.
I think that when people dump on Electron, they are picturing an alternative implementation like win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when these are suggested.
It is obviously possible to beat a browser-wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management is often even worse.
Also, Emacs has been a GUI app since the '90s.
(I've been aware of Qt for like two decades; back in the early 2000s my employer was evaluating such options as Tk, wxWindows, and ultimately settled on Java, I think with AWT. Qt seems to have a determined survival niche in "embedded systems that aren't android"?)
It’s part of the furniture at this point, for better or worse. Maybe don’t bet on it, but certainly wouldn’t be smart to bet against it, either.
JetBrains, Visual Studio, Eclipse, Netbeans…
VS Code does well with performance. Maybe one of the new ones usurps, but I wouldn’t put my money on it.
VSCode has even fewer features than Emacs, OOTB. Complaining about full IDEs' slowness is irrelevant here. Full IDEs provide an end-to-end experience in implementing a project. Whatever you need, it's there. I think the only plugin I've installed on JetBrains's ones is IdeaVim, and I've never needed anything else for XCode.
It's like complaining about a factory's assembly line, saying it's not as portable as the set of tools in your pelican case.
No way that is true. In fact, it's the opposite, which is the exact reason I use VS Code.
VSCode is more popular, which makes it easy to find extensions. But you don’t see those in the Emacs world because the equivalent is a few lines of config.
So what you will see are more like meta-extensions. Something that either solve a whole class of problems, could be a full app, or provides a whole interaction model.
I've used Emacs.
> But you don’t see those in the Emacs world because the equivalent is a few lines of config.
That is really quite false. It's a common sentiment that people spend their lives in their .emacs file. The exact reason I left Emacs was that getting a remote development setup was incredibly fragile and meant I was spending all this time in .emacs only to get substandard results. The worst you have to do in VS Code is set high-level settings for VS Code or the various extensions.
Nothing in the Emacs world comes close to VS Code's remote extensions for SSH and Docker containers, nor its Copilot and general AI integration. I can simply install VS Code on any machine, log in via GitHub, and have all of my settings, extensions, etc. loaded up. I don't have to mess around with cross-platform issues and Git-syncing my .emacs file. Practically any file format has good extensions, and I can embed Mermaid, Draw.io, Figma, etc. all in my VS Code environment.
Now, I'm sure someone will come in and say "but Emacs does that too!". If so, it's likely a stretch, and it won't be as easy as it is in VS Code.
> the only plugins I've installed on Jetbrains's ones
By default, JetBrains' IntelliJ-based IDEs have a huge number of plug-ins installed. If you upgrade from Community Edition to a paid license, the number only increases. Your comment is slightly misleading to me.
So? No excuse for a poor interactive experience.
It's still kinda slow for me. I've moved everything but WinForms off it now, though.
VS is much faster considering it is a full-blown IDE, not a text editor, being mostly C++/COM and a couple of .NET extensions alongside the WPF-based UI.
Load VSCode with the same amount of plugins, written in JavaScript, to see where performance goes.
Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.
Even more importantly, though, the more we move towards "I'm supervising a fleet of 50+ concurrent AI agents developing code on separate branches" the more the notion of the IDE starts to look like something you want to be able to launch in an unconfigured cloud-based environment, where I can send a link to my PM who can open exactly what I'm seeing in a web browser to unblock that PR on the unanswered spec question.
Sure, there's a world where everyone in every company uses Zed or similar, all the way up to the C-suite.
But it's far more likely that web technologies become the things that break down bottlenecks to AI-speed innovation, and if that's the case, IDEs built with an eye towards being portable to web environments (including their entire extension ecosystems) become unbeatable.
The last thing I want is to install dozens of JS extensions written by people who crossed that lower barrier. Most of them will probably be vibe coded as well. Browser extensions are not the reason I use specific browsers. In fact, I currently have 4 browser extensions installed, one of which I wrote myself. So the idea that JS extensions will be a net benefit for an IDE is the wrong way of looking at it.
Besides, IDEs don't "win" by having more users. The opposite could be argued, actually. There are plenty of editors and IDEs that don't have as many users as the more popular ones, yet still have an enthusiastic and dedicated community around them.
The most successful IDE of all time is ed, which is enthusiastically used by one ancient graybeard who is constantly complaining about the kids these days.
Nobody has told him that the rest of the world uses 250MB of RAM for their text editor because they value petty things like "usability" over purity. He would have a heart attack - the last time he heard someone describe the concept of Emacs plugins he flew into a rage and tried to organize a death panel for anyone using syntax highlighting.
People dunk on VS Code but it’s pretty damn good. Surely the best Electron app? I’m sure if you are heavily into EMACS it’s great but most people don’t want to invest huge amounts of time into their tools, they would rather be spending that time producing.
For a feature-rich workhorse that you can use for developing almost anything straight out of the box, or within minutes after installing a few plugins, it’s very hard to beat. In my opinion a lot of the hate is pure cope from people who have probably never really used it.
I used Visual Studio Code across a number of machines including my extremely underpowered low-spec test laptop. Honestly it’s fine everywhere.
Day to day, I use an Apple Silicon laptop. These are all more than fast enough for a smooth experience in Visual Studio Code.
At this point the only people who think Electron is a problem for Visual Studio Code either don’t actually use it (and therefore don’t know what they’re talking about) or they’re obsessing over things like checking the memory usage of apps and being upset that it could be lower in their imaginary perfect world.
I think the ship sailed
In order to build a web app, you will first need a web app
They have a chance to compete fresh with Fleet, but they are not making progress on even the basic IDE there, let alone getting anywhere near Cursor when it comes to LLM integration.
Have you actually given them a real test yet - either Junie or even the baseline chat?
neovim won the IDE wars before it even started. Zed has potential. I don't know what IntelliJ is.
It started as a modernized Eclipse competitor (the Java IDE) but they've built a bunch of other IDEs based on it. Idk if it still runs on Java or not, but it had potential last I used it about a decade ago. But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
As a person paying for the jetbrains ultimate package (all ides), I think going with vscode is a very solid decision.
The jetbrains ides still have various features which I always miss whenever I need to use another IDE (like way better "import" suggestions as an easy to understand example)... But unless you're writing in specific languages like Java, vscode is way quicker and works just fine - and that applies even more to agentic development, where you're using these features less and less...
- This isn't a scientific approach.
Java's big strength is that it's a memory safe, compiled, and sandboxed low level platform with over a quarter century of development behind it. But it historically hasn't handled computer graphics well and can feel very slow and bloated when something needs that - like a GUI. That weakness is probably a big reason why Microsoft rewrote Minecraft after they bought it.
> But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
What would you recommend instead of Swing on the JVM? Since you have "1000 reasons", it should be easy to list a few here. As a friendly reminder, they would need to port (probably) millions of lines of Java source code to whatever framework/language you select. The only practical alternative I can think of would be C++ & Qt, but the development speed would be so much slower than Java & Swing.
Also, with the advent of modern JVMs (11+), the JIT process is so insanely good now. Why can't a GUI be written in Swing and run on the JVM?
“I never read The Economist” – Management Trainee, aged 42.
WebKit came from KDE's khtml
Every year is the year of Linux.
> if all that effort stayed inside the KDE ecosystem
Probably nowhere; people would rather not contribute to something that makes decisions they disagree with. Forking is beautiful, and I think it improves things more than it hurts. Think of all the things we wouldn't have if it weren't for forked projects :)
(Fixing IE6 issues was no fun)
Also, I do believe the main reason Chrome gained dominance is simply that it got better from a technical POV.
I started webdev on FF with Firebug. But at some point Chrome just got faster, with superior dev tools. And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns, and otherwise engaged in shady tracking as well.
Okay but that's not the tradeoff I was suggesting for consideration. Ideally nothing would have dominated, but if something was going to win I don't think it would have been IE retaking all of firefox's ground. And while I liked Opera at the time, that takeover is even less likely.
> Also I do believe, the main reason chrome got dominance is simply because it got better from a technical POV.
Partly it was technical prowess. But google pushing it on their web pages and paying to put an "install chrome" checkbox into the installers of unrelated programs was a big factor in chrome not just spreading but taking over.
Since when have you not touched Firefox or tried its dev tools?
(Wrote via FF)
I use FF for browsing, but every time I think of starting dev tools, maybe even just to have a look at some sites source code .. I quickly close them again and open chrome instead.
I wouldn't know where to start, to list all the things I miss in FF dev tools.
The only interesting thing they had for me, the 3D visualizer of the DOM tree, they dropped years ago.
Ah, yes. The famously sleazy "automatic security updates" and "performance."
It is amazing how people forget what the internet was like before Chrome. You could choose between IE, Firefox, or (shudder) Opera. IE was awful, Opera was weird, and the only thing that Firefox did better than customization was crash.
Now everyone uses Chrome/WebKit, because it just works. Mozilla abandoning Servo is awful, but considering that Servo was indirectly funded by Google in the first place... well, it's really hard to look at what Google has done to browsing and say that we're worse off than we were before.
How so?
Do you think thousands of googlers and apple engineers could be reasonably managed by some KDE opensource contributors? Or do you imagine google and apple would have taken over KDE? (Does anyone want that? Sounds horrible.)
Chromium is an upstream dependency (by way of Electron) for VSCode.
WebKit was an upstream dependency of Chromium, but is no more since the Blink/WebKit hard fork.
Meanwhile, JetBrains IDEs are still the best, but remain unpopular outside of Android Studio.
PyCharm’s lack of popularity surprises me. Maybe it’s not good enough at venvs
If there’s a workflow I’m missing please let me know because I want to love it!
> remain unpopular outside of Android Studio
What a strange claim. For enterprise Java, is there a serious alternative in 2025? And Rider is slowly eating the lunch of (classic) Visual Studio for C# development. I used it again recently to write an Excel XLL plug-in. I could not believe how far Rider has come in 10 years.
In my current company, only I am using IntelliJ IDEs. Other people have never even tried them, except for Android Studio.
Hence even the infamous Ballmer quote.
I wonder how much Google shareholders paid for that 20 minutes. And whether it's more or less than the corresponding extremely small stock price boost from this announcement.
With vendor lock-in to Google's AI ecosystem, likely scraping/training on all of your code (regardless of whatever their ToS/EULA says), and being blocked from using the main VS Code extensions library.
I expect huge improvements are still to be made.
I don't think it's connected in any way, though. Their pricing page doesn't mention it. https://antigravity.google/pricing
If it were true, it would be a big miss not to point that out when you run out of credits, on their pricing page, or anywhere in their app.
I should also mention that the first time I prompted it, I got a different 'overloaded'-type out-of-credits message. The one I got at the end was different.
I've rotated on paying the $200/month plans with Anthropic, Cursor, and OpenAI. But never Google's. They have maybe the best raw power in their models - smartest, and extremely fast for what they are. But they always drop the ball on usability. Both in terms of software surrounding the model and raw model attitude. These things matter.
It does not.
I didn't even get to try a single Gemini 3 prompt. I was out of credits before my first had completed. I guess I've burned through the free tier in some other app but the error message gave me no clues. As far as I can tell there's no link to give Google my money in the app. Maybe they think they have enough.
After switching to gpt-oss:120b it did some things quite well, and the annotation feature in the plan doc is really nice. It has potential but I suspect it's suffering from Google's typical problem that it's only really been tested on Googlers.
EDIT: Now it's stuck in a loop repeating the last thing it output. I've seen that a lot on gpt-oss models but you'd think a Google app would detect that and stop. :D
EDIT: I should know better than to beta test a FAANG app by now. I'm going back to Codex. :D
I complained to it that I had only made one image. It decided to make me one more! Then told me I was out of credits again.
What?! So was it only hallucinating that you were out of credits the first time?
Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.
You can't provide an API key for a project that has billing enabled?
Is there another world where $200/m is needed to run hundreds of agents or something?
am i behind and i dont even know it?
It’s very easy to run into limits if you choose more expensive models and aren’t grandfathered.
Yes, the auto model is good enough for me especially with well documented frameworks (rails, frontend madness).
Thanks for the response, looks like i'm in for a reckoning come New year's day
At no point in the future will these same companies offer the same rates for credits. Watch your generated code turn into a walking, talking ad for the companies who pay for product placement.
This is great fundamental business advice. We are in the AI age, but these companies seem to have forgotten basic business things.
Sounds like the modus operandi of most large tech companies these days. If you exclude Valve.
The state of Cursor "review" features make me convinced that the cursor devs themselves are not dogfooding their own product.
It drives me crazy when hundreds of changes build up, I've already reviewed and committed everything, but I still have all these "pending changes to review".
Ideally committing a change should treat it as accepted. At the very least, there needs to be a way to globally "accept all".
Cursor Settings -> Agents -> Applying Changes -> Auto-Accept on Commit
Interesting that a next-gen open-source-based agentic coding platform with superhuman coding models behind it can have UI glitches. Very interesting that even the website itself is kind of sluggish. Surely, someone, somewhere must have ever optimized something related to UI rendering, such that a model could learn from it.
The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."
On a separate note, I think the UX is excellent and the output I've been getting so far are really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much but that's really the biggest missing feature right now. That and, I don't like the fact that the "Review Changes" doesn't work if you're asking it to modify reports that are not in the current workspace that's open.
When I downloaded it, it already came with the proper "Failed due to model provider overload" message.
When it did work, the agent seemed great, achieving the intended changes in a React and python project. Particularly the web app looks much better than what Claude produced.
I did not see functionality to have it test the app in the browser yet.
And they say:
Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won’t have to worry about, and you feel unrestrained in your usage of Antigravity
You have to wonder what kind of modeling they ran for this.
I am fed up with VSCode clones, if I have to put up with Electron, at least I will use the original one.
I think that's the beauty of open source.
Oh ffs
They force the development team to have a huge number of meetings and email threads, which the team must steer itself, to check off a ridiculously large list of "must haves" that are usually well outside their domain expertise.
The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).
It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.
Buck passed, product launched.
There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.
The GPM team was hugely passionate about music and curating a good experience for users, but YT leadership just wanted us to "reuse existing video architecture" to the Nth degree when we merged into the YT org.
After literally years of negotiations you got... what YTM is. Many of the original GPM team members left before the transition was fully underway because they saw the writing on the wall and wanted no part of it. I really wish I had done the same.
That and being able to mix my own uploaded tracks with online music releases into a curated collection almost made it a viable contender to my local iTunes collection.
And then... they just removed it forever. Bastards.
I worked on a team that wrote software for Chromecast based devices. The YTM app didn't even support Chromecast, our own product, and their responses on bug tickets from Googlers reporting this as a problem was pretty arrogant. It was very disheartening to watch. Complete organizational dysfunction.
I think YTM has substantially improved since then, but it still has terrible recommendations, and it still bizarrely blurs between video and music content.
Google went from a company run by engineers to one run by empire-building product managers so fast, it all happened in a matter of 2-3 years.
I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in "leadership" roles than individual contributor roles. So many "leaders", but none with the awareness or care to notice the farcical volumes such announcements speak.
In response to your comment, yes, I would largely be in favor of moving forward only with whatever is said in the relevant meetings with the given attendees of a meeting. That assumes a reasonably healthy culture where these meetings are scheduled in good faith at reasonable times for all relevant stakeholders.
Okay, but is it configurable? Also, can you configure it to write DRY code?
Why would you not at least link it to the Pro and Ultra accounts?
At least you could upsell the Pro subs to Ultra. Millions of Claude Code and Codex users who are into agentic coding are your serviceable market paying attention today.
Now I'll delete antigravity and go back to codex / claude code / cursor ...
—No one, ever.
I assume that Copilot will have this model soon...
> You can verify your code quality at a glance, then ship with absolute confidence.
Proclaiming absolute confidence after a glance leaves me with scant confidence in the merit of the confidence.
> You can skip verifying your code quality, then ship with absolute chutzpah.
The people at Windsurf who worked on this must be laughing at us from their Lambos and Ferraris.
They glued slop together, shipped this and now are in Tahoe drinking Martinis watching the sunset from their private chalets.
⌘F only shows 1 result. And 0 in the comments here!!
Additionally, there are issues setting up accounts (Singapore VPN solved that for me), no support for Workspace users, only a free tier that requires data sharing, no additional rate limits for paying Pro or Ultra customers, etc. Even worse, Gemini CLI currently does NOT provide Gemini 3 Pro for Ultra Business customers despite paying over € 260,- per month, which is frankly ridiculous.
I'll be honest: I was speculating that the reason for the multi-month delay between the first A/B tests of Gemini 3 class models and the final release was so they'd have all their ducks in a row. Have some time to test everything, improve tooling, provide new paid subscriptions and/or ensure existing ones get access to everything on day one. But they didn't.
Gemini 3 Pro seems very interesting (too early to say), but compared to every other recent launch by OpenAI (5, 5.1, Codex variants), Anthropic (Sonnet and Haiku 4.5), even Kimi (K2 Thinking) and Z.AI (GLM-4.6), this is by far the least organized launch of any frontier lab.
A buggy IDE which is unusable for paying customers, no CLI access for Ultra business (and none at all for Pro of any kind), etc. is frankly embarrassing when considering what competitors manage to provide the day a model launches.
What have they been working on these last two months besides going on X and posting "3" every couple of days? Why is there no paid Antigravity tier, no way to use Workspace accounts, etc? Before launching in this state, I feel it'd have been better to delay a bit more if it was absolutely needed.
Also, correct me if I'm wrong but isn't this the fourth or fifth IDE built by Google for LLM assisted coding? What happened to IDX and Firebase Studio and aren't they also based on VSCode?
I remember a previous story months ago about Gemini that had Google PMs trying to hype their product, but it was all questions about how nobody knows how to get Gemini API keys with any number of paid subscriptions.
On top of that how long until it’s https://killedbygoogle.com/ ?
I have to close 4+ after just a few minutes of poking around
The browser extension is really cool and it provides a needed tool for the agent to use. It used the extension to show the page that it updated in the task document (the task doc is great too). However, it showed me a page and said it was done, when it was clearly not done and not what I asked for.
I was expecting weaker tooling and a better model. I got good tooling and a not very good model.
Maybe 3.1 will deliver?
It’s just google’s attempt at cursor. Nothing to see here.
opencode with its superior feature set and ability to use any model provider i want is....
superior
why would you even bother with google at this point?
"Novel agent-first form factor" feels very buzz-wordy. Does it refer to an actual feature?
I ran into a neat website and asked it to generate a similar UX with Astro, and it did a decent-ish job of seeing how the site handled scrolling visually and in code and replicating it in a tidy repo.
Anyway, not a great first impression. I guess I'll try again in a few months.
Then I installed it and it was a VSCode fork.
Pressing the "Submit" button on their "Google Antigravity for Organizations Interest Form" (https://antigravity.google/interest-form) doesn't actually do anything for me (tried Firefox and Chrome) -> their metrics will indicate that there's no interest from organizations -> the product will be killed in a year.
</snark>
I really miss the days of the professional casualness and naturalness of something like the "mother of all demos" [0]. Like, can you imagine the guy wearing a turtleneck and going, "but wait!" and acting surprised after every sentence? It would NOT have been the same demo.
Yes, the professional actor that doesn't seem like a paid actor is preferable to the autistic weirdo. That's why they get paid the big bucks and we get relegated to the basement.
Jokes aside though, it's much broader than that, it's just that the zeitgeist dictates that everyone shifts from work to meta-work: musicians must impress not with their music but the way they make music, researchers must entertain, developers must manage agents, children watch someone else play games instead of playing themselves.
That's yet another increment to the already dizzying level of simulation per Jean Baudrillard.
I believe it is aimed at investors. Thus it will be forgotten the minute it stops influencing stock price.
Thus there is no need to take it literally as a developer tool - it's not.
I think it’s more likely aimed at the (internal) promotion committee.
Nice that it's built-in, Claude Code needs an MCP for this at least.
> User Feedback: Intuitively integrate feedback across surfaces and artifacts to guide and refine the agent’s work.
I wish they'd just let me edit the implementation plan directly instead of me having to explain the corrections. Claude Code has the same weakness. Explaining the corrections is slower than editing the plan manually, and it still keeps the incorrect text in context as well.
> An Agent-First Experience: Manage multiple agents at the same time
Sounds nice in theory but I assume you can run multiple agents for 5 minutes or so and then you're out of credits.
As a claude code user I'm not really sold on this product.
For example a while back vscode-pets[1] plugin became popular and tried it and noticed that the pet can only live within a window, whether its the explorer section or in its own panel, I thought it'd be more of a desktop pet that could be anywhere within VSCode but apparently there are limitations (https://github.com/tonybaloney/vscode-pets/issues/4).
So my guess is that forking VSCode and customizing it that way is much easier to do things that you can't with a plugin while also not having to maintain an IDE/Text editor.
The name "Google Antigravity" was chosen to convey the idea of making the software development process more weightless.
That's good because my arms were getting tired pushing code around in emacs every day.
If I write "float exp(float base, float exp){"
Then that is the source code and the rest is generated. Mixing it all up is as dumb as uploading a compiled binary or bytecode to git.
Especially annoying when you are working with other people and you can't tell what they actually wrote and know about.
> “Google Antigravity's Editor view offers tab autocompletion, natural language code commands, and a configurable, and context-aware configurable agent.”
Is it a typo or was there a reason to add configurable twice?
What's most astonishing is that I can't seem to find what actual platforms it works for. I don't doubt the LLM's can write code in almost any language and for almost all frameworks, with varying success.
But which languages/platforms/framework will the IDE work for technically, having compilers etc built in? I don't care if an LLM can help me with the code, if I then can't compile it within the same IDE!
They have a "full stack" use case here, which doesn't even suggest what this stack consists of? https://antigravity.google/use-cases/fullstack
Am I going crazy or are they just handwaving the _actual_ development tasks in all this?
Really reflects how companies are prioritizing hype and adoption over product quality.
(now off to download it...)
Anthropic and OpenAI are investing a lot into this space and are now competing directly with companies like Cursor. Cursor's biggest moat at the moment is their tab completion model, which doesn't exist in the Anthropic's and OpenAI's current offerings and is leagues ahead of Github Copilot's.
Antigravity is a VSCode fork that adds both Google's own tab complete and an agent composer, similar to products like https://conductor.build/. Assuming that Google doesn't shoot themselves in the foot (which they seem to like doing), we'll see if wrappers like Cursor / Windsurf / Cognition can compete against the big labs. It's worth noting that the category seems to be blurring, since Cursor has trained not only their own tab complete model but also their own agent model.
Personally I felt having immediate access to the VSCode extension ecosystem to be a huge boon and I quickly got a setup to my liking.
It seems to streamline my existing Claude Code workflow with a much better UI. The tab complete seems the best I've experienced and the text/image selection, adding comments and iterating on a plan is genius.
Depressing to see everyone here unable to see the forest for the trees.
I would happily pay 20 or whatever for 4x limits. I'm very curious what they end up offering. My major reservation is side project vibes. I think it's hard to believe on this long term unless Google themselves adopt it.
The problem is the rate limiting is both aggressive and has no option to pay to bypass.
Also call it "antigrav". Less of a mouthful
That's a huge advantage; it means all the obvious stuff will just work: LSPs, debuggers, version control, customisation.
As much as I like Emacs, it's an insane pain to make all these things work.
If your value prop is agents on a codebase, there's no point in trying to reinvent those. They have basically been solved.
If you manage to even get the Pylance extension to show up (I had to change the "marketplace" settings) it will say: > This extension is not compatible with Antigravity
What the heck am I supposed to do with the Jedi fall back? Legitimate question: Can Jedi even highlight unused imports? Can it import symbols not found?
If Pylance doesn't work, fork it. But the LSP needs to just work out of the box.
I think you can use BasedPyright? Not sure if other Pyright variants work (e.g., * for Cursor/Windsurf etc.), but this one works for me.
In Windsurf, installing the Python plugin will install Windsurf Pyright: https://marketplace.windsurf.com/extension/Codeium/windsurfP...
It looks like that is a fork of basedpyright. I have just been using basedpyright now. Better than Jedi / Windsurf Pyright (at least in the import area which was frustrating me).
Idk what cursor does, my free credits ran out a while ago :P
But the user experience for basic things has just been difficult. Unresponsive Agent Manager window. Agents hitting file permission errors because I don't have a proper "workspace". Agents getting blocked waiting for me to confirm a command line's "y/n".