But apropos TFA, it's nice to see that telemetry is opt-in, not opt-out.
Subscribed to their paid plan just to keep the lights on and hoping it will get even better in the future.
This was a long time ago, but the way I did it was to use XcodeGen (1) and a simple Makefile. I have an example repo here (2) but it was before Swift Package Manager (using Carthage instead). If I remember correctly XcodeGen has support for Swift Package Manager now.
On top of that I was coding in VS Code at the time, and just ran `make run` in the terminal pane when I wanted to run the app.
Now, with SwiftUI, I'm not sure how it would be to not use Xcode. But personally, I've never really vibed with Xcode, and very much prefer using Zed...
1: https://github.com/yonaskolb/XcodeGen 2: https://github.com/LinusU/Soon
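For anyone curious, here's roughly what that kind of setup looks like — a minimal sketch, not the exact contents of my repo, and the target/device/bundle names are just placeholders:

```yaml
# project.yml — XcodeGen spec; `xcodegen generate` turns this into MyApp.xcodeproj
name: MyApp
targets:
  MyApp:
    type: application
    platform: iOS
    deploymentTarget: "15.0"
    sources: [Sources]
    settings:
      PRODUCT_BUNDLE_IDENTIFIER: com.example.MyApp
```

```make
# Makefile — recipe lines must start with a tab
project:
	xcodegen generate

run: project
	xcodebuild -project MyApp.xcodeproj -scheme MyApp \
		-destination 'platform=iOS Simulator,name=iPhone 15' \
		-derivedDataPath build build
	xcrun simctl boot 'iPhone 15' || true
	open -a Simulator
	xcrun simctl install booted build/Build/Products/Debug-iphonesimulator/MyApp.app
	xcrun simctl launch booted com.example.MyApp
```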
Huh?
Yes it's not the modern human but I think that's close enough.
I check back on the GitHub issue every few months and it just has more votes and more supportive comments, but no acknowledgement.
Hopefully someone can rescue us from the sluggish VS Code.
https://github.com/zed-industries/zed/issues/7992
I have a 1440p monitor and am seeing this issue.
(not parent commenter, but hold same opinion)
I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
Apparently that's something all the other editors bothered to do, except Zed.
From the Issue:
> Zed looks great on my MacBook screen, but looks bad when I dock to my 1080p monitor. No other editor has that problem for some reason.
If they're running everything on the GPU then their SDF text rendering needs more work to be resolution independent. I'm assuming they use SDFs, or some variant of that.
Really, the screen isn't the issue, given that the OP says other editors look fine on it.
Knuth would be angry reading this :)
It looks like the relevant work needs to be done upstream.
Apple has removed support for font rendering methods which make text on non-integer scaled screens look sharper. As a result, if you want to use your screen without blurry text, you have to use 1080p (1x), 4k (2x 1080p), 5k (2x 1440p) or 6k screens (or any other screens where integer scaling looks ok).
To see the difference, try connecting a Windows/Linux machine to your monitor and comparing how text looks compared to the same screen with a MacOS device.
using pixel fonts on any non-integer multiplier of the native resolution will always result in horrible font rendering, I don't care what OS you're on.
I use MacOS on all kinds of displays as I move throughout the day, some of them are 1x, some are 2x, and some are somewhere in between. using a vector font in Zed looks fine on all of them. It did not look fine when I used a pixel font that I created for myself, but that's how pixel fonts work, not the fault of MacOS.
Example Zed screenshot, using "Ayu Light": https://i.ibb.co/Nr8SjvR/Screenshot-from-2024-07-28-13-11-10...
Same code in VS Code: https://i.ibb.co/YZfPXvZ/Screenshot-from-2024-07-28-13-13-41...
try it and see. i bet that helps/fixes at least some of you suffering from this.
The restore checkpoint/redo is too linear for my lizard brain. Am I wrong to want a tree-based agentic IDE? Why has nobody built it?
They fixed that with the new agent panel, which now works more like the other AI sidebars.
I was (mildly) annoyed by that too. The new UI still has rough edges but I like the change.
Vote/read-up here for the feature on Zed: https://github.com/zed-industries/zed/issues/17455
And here on VSCode: https://github.com/microsoft/vscode/issues/20889
You will not catch me using the words "agentic IDE" to describe what I'm doing because its primary purpose isn't to be used by AI any more than the primary purpose of a car is to drive itself.
But yes, what I am doing is creating an IDE where the primary integration surface for humans, scripts, and AIs is not the 2D text buffer, but the embedded tree structure of the code. Zed almost gets there and it's maddening to me that they don't embrace it fully. I think once I show them what the stakes of the game are they have the engineering talent to catch up.
The main reason it hasn't been done is that we're still all basically writing code on paper. All of the most modern tools that people are using, they're still basically just digitizations of punchcard programming. If you dig down through all the layers of abstractions at the very bottom is line and column, that telltale hint of paper's two-dimensionality. And because line and column get baked into every integration surface, the limitations of IDEs are the limitations of paper. When you frame the task of programming as "write a huge amount of text out on paper" it's no wonder that people turn to LLMs to do it.
For the integration layer using the tree as the primary means you get to stop worrying about a valid tree layer blinking into and out of existence constantly, which is conceptually what happens when someone types code syntax in left to right. They put an opening brace in, then later a closing brace. In between a valid tree representation has ceased to exist.
How can I follow up on what you're building? Would you be open to having a chat? I've found your github, but let me know if there's a better way to contact you.
That's possible because the source of truth for the IDE's state is an immutable concrete syntax tree. It can be immutable without ruining our costs because it has btree amortization built into it. So basically you can always construct a new tree with some changes by reusing most of the nodes from an old tree. A version history would simply be a stack of these tree references.
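To make that concrete, here's a toy sketch of the structural-sharing idea (nothing to do with the actual codebase — just illustrating how an "edit" builds a new root while reusing every untouched subtree, and how a version history is then just a stack of cheap root references):

```rust
use std::sync::Arc;

// A toy immutable syntax node. A real CST also carries spans, node kinds,
// and B-tree-style chunking so wide nodes stay cheap to copy.
struct Node {
    text: String,
    children: Vec<Arc<Node>>,
}

// "Editing" never mutates: it builds a new root, Arc-cloning (pointer-copying)
// every child except the one that changed.
fn replace_child(root: &Arc<Node>, index: usize, new_child: Arc<Node>) -> Arc<Node> {
    let mut children = root.children.clone(); // clones Arc pointers, not subtrees
    children[index] = new_child;
    Arc::new(Node { text: root.text.clone(), children })
}

fn main() {
    let a = Arc::new(Node { text: "a".into(), children: vec![] });
    let b = Arc::new(Node { text: "b".into(), children: vec![] });
    let v1 = Arc::new(Node { text: "root".into(), children: vec![a, b] });

    let b2 = Arc::new(Node { text: "b'".into(), children: vec![] });
    let v2 = replace_child(&v1, 1, b2);

    // History is just a stack of roots; untouched subtrees are shared between versions.
    let history = vec![v1.clone(), v2.clone()];
    assert!(Arc::ptr_eq(&v1.children[0], &v2.children[0]));
    println!("{} versions, child 'a' stored once", history.len());
}
```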
I would recommend you check it out if you've been frustrated by the other options out there - I've been very happy with it. I'm fairly sure you can't have git-like dag trees, nor do I think that would be particularly useful for AI based workflow - you'd have to delegate rebasing and merge conflict resolution to the agent itself... lots of potential for disaster there, at least for now.
```
"openai": {
  "api_url": "https://openrouter.ai/api/v1",
  "version": "1",
  "available_models": [
    { "name": "anthropic/claude-3.7-sonnet:beta", "max_tokens": 200000 },
    ...
```
Just change api_url in the zed settings and add models you want manually.
> For the nth time, it's about enabling inline suggestions and letting anything, either LSP or Extensions use it, then you don't have to guess what the coolest LLM is, you just have a generic useful interface for LLM's or anything else to use.
An argument I would agree with is that it's unreasonable to expect Helix's maintainers to volunteer their time toward building and maintaining functionality they don't personally care about.
[1]: https://microsoft.github.io/language-server-protocol/specifi...
Also, the Helix way, thus far, has been to build an LSP for all the things, so I guess you'd make a copilot LSP (I bet there already is one).
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
I don't know the LSP spec well enough to know if these sort of complex interactions would work with it, but it seems super out of scope for it imo.
These last two months I've been trialing both Neovim and Zed alongside Helix. I know I should probably just use Neovim since, once set up properly, it can do anything and everything. But configuring it has brought little joy. And once set up to do the same as Helix out of the box, it's noticeably slower.
Zed is the first editor I've tried that actually feels as fast as Helix while also offering AI tooling. I like how integrated everything is. The inline assistant uses context from the chat assistant. Code blocks are easy to copy from the chat panel to a buffer. The changes made by the coding agent can be individually reviewed and accepted or rejected. It's a lot of small details done right that add up to a tool that I'm genuinely becoming confident about using.
Also, there's a Helix keymap, although it doesn't seem as complete as the Vim keymap, which is what I've been using.
Still, I hope there will come a time when Helix users can have more than just Helix + Aider, because I prefer my editor inside a terminal (Helix) rather than my terminal inside an editor (Zed).
And yet, it's hard to ignore the fact that coding practices are undergoing a once-in-a-generation shift, and experienced programmers are benefiting most from it. Many of us had to ditch the comfort of terminal editors and switch to Microsoft's VSCode clones just to have these new incredible powers and productivity boosts.
Having AI code assistants built into the fast terminal editor sounds like a dream. And editors like Helix could totally deliver here if the authors were a bit more open to the idea.
edit: they updated the AI panel! looking good!
Iced, being used by System76's COSMIC EPOCH, is not great in what regard? Serious question.
IMO Slint is milestones ahead and better. They've even built out solid extensions for using their UI DSL, and they have pages and pages of docs. Of course everything has tradeoffs, and their licensing is funky to me.
Calling iced not useful reads like an uninformed take
examples beyond tiny todo app/best practices would be a great start.
> Tutorials? That's for users to write.
sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
> sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
26.5k stars on github and a flourishing community of users, which grows noticeably larger every day. new features basically every week. bug fixes sometimes fixed in literal minutes.
it's not a matter of gatekeeping, but a matter of resources. iced is basically the brainchild of a single developer (plus core team members who tackle some bits and pieces of the codebase but not frequently), who already has a day job and is doing this for free. would you rather have him write documentation—which you and I could very well write—or keep adding features so the library can get to 1.0?
I encourage you to look for evidence that invalidates your biases, as I'm confident you'll find it. and you might just love the library and the community. I promise you a warm welcome when you join us on discord ;-)
here are a few examples of bigger apps you can reference:
https://github.com/squidowl/halloy
https://github.com/hecrj/icebreaker
https://github.com/hecrj/holodeck
and my smaller-scale examples (I'm afraid my own big app is proprietary):
https://github.com/airstrike/iced_receipts a simple app showing how to manage multiple screens for CRUD-like flows
https://github.com/airstrike/pathfinder/ a simple app showing how to draw on a canvas
https://github.com/airstrike/iced_openai a barebones app showing how to make async requests
https://github.com/airstrike/tabular a somewhat complex custom widget example
I'll be waiting for you on Discord ;-) my username is the same there so ping me if you need anything
and I forgot to link to a ridiculously cool new feature that dropped last week: time travel debugging for free
https://github.com/iced-rs/iced/pull/2910
check out the third and fourth videos!
This single-handedly convinced me not to rely on anything using Iced. I have no patience left for projects with that low a bus factor.
Have you had a chance to try the new panel? (The OP is announcing its launch today!)
The announcement is about it reaching prod release, but they emailed people to try it out in the preview version.
edit: yes i missed something. i see the new feature. hell yeah!
Check out the video in the blog post to see the new one in action!
Press the 3-dots menu in the upper right of the panel, and then choose "New Text Thread" instead of "New Thread".
Editing and deleting not only your messages but also the LLM's messages should be trivial.
One of the coolest things about LLM tech is that it's stateless, yet we leave that value on the floor when UIs act like it's not.
EDIT: just gave it a shot and I get "unsupported GPU" as an error, informing me that my GPU needs Vulkan support.
Their detection must be wrong because this is not true. And like I said, other applications don't have this problem.
For one, not all applications are GPU accelerated.
Two, their UX may need to be improved for a specific hardware configuration. I have used Zed with good performance on Intel dGPU, AMD dGPU, and Intel iGPU without issue — my guess is a missing dependency?
Putting together a high quality, actionable bug report is a much higher bar that can often feel like screaming at the clouds.
I’m genuinely curious what you are getting out of it
As a Linux user, I am sadly accustomed to some software working only in a just-so configuration. A data point that development is still Mac-first is useful to know. Zed might still be worth trying, but I have to temper my enthusiasm from the headline announcement that "everything is great".
I don't care about Zed fixing anything - they're Zed's issues, not mine. All I'm saying is that contrary to what someone else said about the software being "fast" I tried it and at startup, it was unusably slow. I'm what you would call a failed conversion.
> Also, how is whether the project is volunteer-run relevant? Would you file a support ticket for commercial software you use saying "it's slow" and then when they follow up asking for details about your setup, you say "sorry, you don't get free QA work from me"
So this is kind of needlessly antagonistic imo - the point between the lines is that FOSS projects run by volunteers get a lot more grace than venture backed companies that go on promotion blitzes talking about their performance.
seems like you needing a GPU would be your issue
Error message, hardware configuration, done.
From my perspective that is not something you do for zed, but something you do for your distro and hardware.
And ofc, your first comment was fine either way. But the attitude of the latter is just poor.
How about "I'm getting <1FPS perf on {specs}" instead of the snark.
The antagonistic part is assuming your specific Linux configuration is innately Zed’s issue. It’s possible simply mentioning it to them would lead you quickly and easily to a solution, no free labor needed. It’s possible Zed is prepared to spend their vast VC resources on fixing your setup, even—which seems to be what you expect. Point being there’s a middle ground where you telling Zed “hey it didn't work well for me” gives Zed the chance to resolve any issues on their end in order to properly convert you, if you truly are interested in trying their editor. You don’t need to respond to the suggestion with a lecture on how companies exploit free volunteer labor and anything short of software served up on a silver platter would make you complicit. It’s really a little absurd.
If I had to guess, your system globally or their rendering library specifically is probably stuck on llvmpipe.
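A quick way to check, assuming the vulkan-tools package is installed:

```
vulkaninfo | grep -i devicename
```

If that prints something like "llvmpipe (LLVM ...)", Vulkan is falling back to CPU rendering, which would explain the sluggishness.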
At least it did a month or so ago, and at that time I couldn't figure out a practical use for the LLM-integration either so I kind of just went back to dumb old vim and IDEA Ultimate.
When it's fast it's pretty snappy though. I recently put revisiting emacs on my todo list; should add taking Zed out for another round as well.
[1]: people experiencing sluggishness on Linux are almost certainly hit by a bug that makes the rendering fall back to llvmpipe (that is, CPU rendering) instead of Vulkan rendering, but macOS shouldn't have this kind of problem.
Edit: I just saw your edit to your reply here[1] and that's indeed what's happening. Now the question is “why does that happen?”.
[1]
Waiting for Robius / Makepad to mature a bit more. Looks very promising.
Man, so true. I tried this out a while back and it was pretty miserable to find docs, apis, etc.
IIRC they even practice a lot of bulk reexports and glob imports and so it was super difficult to find where the hell things come from, and thus find docs/source to understand how to use something or achieve something.
Super frustrating because the UI of Zed was so damn good. I wanted to replicate hah.
I wouldn’t hold my breath. GPUI is built specifically for Zed. It is in its monorepo without separate releases and lots of breaking changes all the time. It is pretty tailored to making a text editor rather than being a reusable GUI framework.
i think there's some desire from within zed to make this a real thing for others to reuse.
That kind of setup is fine for internal use, but it’s not how you'd structure a library meant for stable, external reuse. Until they split it out, version it properly, and stop breaking stuff all the time, it's hard to treat GPUI as a serious general-purpose option.
Other than that a beautiful editor.
It supports extensions for languages such as Java and seemingly that extension can build code, too.
Zed also contains Git-support out of the box, which sounds pretty much like a lightweight IDE.
Personally, I just use the terminal for my build tools and Zed talks to clangd just fine for autocomplete etc.
Tried using zed on Linux (pop os, Nvidia) several months ago, was terribly slow, ~1s to open right click context window.
I've spent some time debugging this, and turns out that my GPU drivers are not the best with my current pop os release, but I still don't understand how it might take so long and how GPU is related to right clicking.
Switched back to emacs, love every second. :)
I'm not sure if the title is referring to actual development speed or to editor performance.
p.s. I play top games on Linux, all is fine with my GPU & drivers.
Nvidia drivers in particular are terrible on Linux, so what OP is describing is likely some compatibility/version issue.
I will keep playing around with it to see if it's worth switching (from JetBrains WebStorm).
One way you could use LLMs w/o inducing brain mush would be for code or design reviews, testability, etc.
If you see codebases you like, stash them away for AI explanation later.
I’ve been using PyCharm Professional for over a decade (after an even longer time with emacs).
I keep trying to switch to vscode, Cursor, etc. as they seem to be well liked by their users.
Recently I’ve also tried Zed.
But the Jetbrains suite of tools for refactoring, debugging, and general “intelligence” keep me going back. I know I’m not the only one.
For those of you that love these vscode-like editors that have previously used more integrated IDEs, what does your setup look like?
I've learned to work around the loss of some functionality over the past 6 months since I've switched and it hasn't been too bad. The AI features in Zed have been great and I'm looking forward to the debugger release so I can finally run and debug tests in Zed.
I used to have one of these and recently got an M1 Max machine - the performance boost is seriously incredible.
The throttling on those late-game intel macs is hardcore - at one point I downloaded Hot[1], which is a menu bar app that shows you when you're being throttled. It was literally all the time that the system was slowing itself down due to heat. I eventually just uninstalled it because it was a constant source of frustration to know I was only ever getting 50% performance out of my expensive dev laptop.
This isn't a great solution, but in cases where I've wanted to try out Cursor on a Java code base, I just open the project in both IDEs. I'll do AI-based edits with Cursor, and if I need to go clean them up or, you know, write my own code, I'll just switch over to IntelliJ.
Again, that's not the smoothest solution, but the vast majority of my work lately has been in Javascript, so for the occasional dip into Java-land, "dual-wielding" IDEs has been workable enough.
Cursor/Code handle JS codebases just fine - Webstorm is a little better maybe, but not the "leaps and bounds" difference between Code and IntelliJ - so for JS, I just live in Cursor these days.
But Zed is a complete rewrite, which on one hand makes it super fast, but on the other it's still super lacking in integration with the existing vsix extensions, language servers, and what not. Many authors in this forum totally fail to see that Sublime Text 4 is also super ultra fast compared to Electron-based editors, but is not even close in terms of supported extensions.
The whole Cursor hysteria may abruptly end with Copilot/Cline/Continue advancing, and honestly, having used both, there isn't much difference in the final result, should you know what you are doing.
[0] https://plugins.jetbrains.com/plugin/20540-windsurf-plugin-f...
At the moment I’m using Claude Code in a dedicated terminal next to my Jetbrains IDE and am reasonably happy with the combination.
I've heard decent things about the Windsurf extension in PyCharm, but not being able to use a local LLM is an absolute non-starter for me.
That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]
Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?
I also laughed at the dig on VSCode at the start. For the unaware, the team behind Zed was originally working on Atom.
I work at Zed and I like using Rust daily for my job, but outside work I also like Elm, and Zig, and am working on https://www.roc-lang.org
These simple, composable tools can be utilized well enough by increasingly powerful LLMs, especially Gemini 2.5 Pro, to achieve most tasks in a consistent, understandable way.
More importantly - I can just switch off the 'ask' tool for the agent to go full turbo mode without frequent manual confirmation.
I just released it yesterday, have a look at https://github.com/aperoc/toolkami for the implementation if you think it is useful for you!
I’d love a nvim plugin that is more or less just a split chat window that makes it easy to paste code I’ve yanked (like yank to chat) add my commentary and maybe easily attach other files for context. That’s it really.
Then, connect it using this line: `client = MCPClient(server_url=server_url)` (https://github.com/aperoc/toolkami/blob/e49d3797e6122fb54ddd...)
Happy to help further if you run into issues.
MCP clients and servers can support both SSE and stdio.
Yours is the full agent, though... Nice.
[1] https://github.com/karthink/gptel
It's like lisp's original seven operators: quote, atom, eq, car, cdr, cons and cond.
And I still can't stop smiling just watching the agent go full turbo mode when I disable the `ask` tool.
you can choose which tools are used in zed by creating a new "tools profile" or editing an existing one (also you can add new tools using MCP protocol)
That feature + native Git support has fully replaced VSCode for me.
While the initial 400 error is a bummer, I am actually surprised by, and admire, its persistence in trying to create the file and in the end finding a way to do so. It forgot to define a couple of things in the code, which were trivial to fix; after that the code worked.
If you're okay sharing the conversation with us, would you mind pressing the thumbs-down button at the bottom of the thread so that we can see what input led to the 400?
(We can't see the contents of the thread unless you opt into sharing it with the thumbs-down button.)
I used github copilot's sonnet 3.7. I now tried copilot's sonnet 3.5 and it seems to work, so it was prob a 3.7 issue? It did not let me try zed's sonnets, so I don't know if there is a problem with zed's 3.7 (I thought I could still do 50 prompts with a free account, but maybe that's not for the agent?).
The pricing page was not linked on the homepage. Maybe it was, maybe it wasn't, but it surely was not obvious to me.
Regardless of how good a piece of software it is or pretends to be, I just do not care about landing pages anymore. The pricing page essentially tells me what I am actually dealing with. I knew about Zed back when it was riding the "written in Rust, which makes us better than everyone" trend everyone was doing. Now it is LLM-based.
Absolutely not complaining about them. Zed positioned themselves well to take the crown of the multi-billion-dollar industry that AI code editors have become. I had to write this wall of text because I just wanted to drop the pricing page link and help people make their own decision, but I have to reply to "what's your point" comments, and this should demonstrate I have no point aside from dropping a link.
I'm catching up on Zed architecture using deepwiki: https://deepwiki.com/zed-industries/zed
> ... 3. Baked into a closed-source fork of an open-source fork of a web browser
I laughed out loud at this one.
I might be missing the obvious, and I get no standard exists, but why aren't AI coding assistants just plugins?
Here's a nice recent post about it: https://felix-knorr.net/posts/2025-03-16-helix-review.html
VS Code forks (Cursor and Windsurf) were extremely slow and buggy for me (much more so than VS Code, despite using only the most vanilla extensions).
Basically, by default:
- You have the chat
- Inline edits you do use the chat as context
And that is extremely powerful. You can easily dump stuff into the chat, and talk about the design, and then implement it via surgical inline edits (quickly).
That said, I wasn't able to switch to Zed fully from Goland, so I was switching between the two, and recently used Claude Code to generate a plugin for Goland that does chat and inline edits similarly to how the old Zed AI assistant did it (not this newly launched one) - with a raw markdown editable chat, and inline edits using that as context.
Cline's an Agent, and you chat with it, based on which it makes edits to your files. I don't think it has manual inline edit support?
What I'm talking about is that you chat with it, you're done chatting, you select some text and say "rewrite this part as discussed" and only that part is edited. That's what I mean with inline edits.
For Agentic editing I'm happy with Claude Code.
I worked with Antonio on prototyping the extensions system[0]. In other words, Antonio got to stress test the pair programming collaboration tech while I ran around in a little corner of the zed codebase and asked a billion questions. While working on zed, Antonio taught me how to talk about code and make changes purposefully. I learned that the best solution is the one that shows the reader how it was derived. It was a great summer, as far as summers go!
I'm glad the editor is open source and that people are willing to pay for well-engineered AI integrations; I think originally, before AI had taken off, the business model for zed was something along the lines of a per-seat model for teams that used collaborative features. I still use zed daily and I hope the team can keep working on it for a long time.
[0]: Extensions were originally written in Lua, which didn't have the properties we wanted, so we moved to Wasm, which is fast + sandboxed + cross-language. After I left, it looks like Max and Marshall picked up the work and moved from the original serde+bincode ABI to Wasm interface types, which makes me happy: https://zed.dev/blog/zed-decoded-extensions. I have a blog post draft about the early history of Zed and how extensions with direct access to GPUI and CRDTs could turn Zed from a collaborative code editor into a full-blown collaborative application platform. The post needs a lot of work (and I should probably reach out to the team) before I publish it. And I have finals next week. Sigh. Some day!
I've been trying to be active, create issues, help in any way I can, but the focus on AI tells me Zed is no longer an editor for me.
Do you think GPL3 will serve as an impediment to their revenue or future venture fundraising? I assume not, since Cursor and Windsurf were forks of MIT-licensed VS Code. And both of them are entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
Tangentially, do you think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? Would an AI art tool with sculpting and drawing benefit from being open source? I've talked with VCs that love open developer tools and they hate on the idea of open creative tools for designers, illustrators, filmmakers, and other creatives. I don't quite get it, because Blender and Krita have millions of users. Comfy is kind of in that space, it's just not very user-friendly.
I learned something from that code, cool stuff!
One question: how do you handle cutting a new breaking change in wit? Does it take a lot of time to deal with all the boilerplate when you copy things around?
You can sign up for the beta here - https://zed.dev/debugger - or build from source right now.
https://zed.dev/blog/fastest-ai-code-editor
It's fast paced, yet it doesn't gloss over anything I'd find important. It shows clearly how to use it, shows a realistic use case, e.g. the model adding some nonsense, but catching something the author might have missed, etc. I don't think I've seen a better AI demo anywhere.
Maybe the bar is really low that I get excited about someone who demos an LLM integration for programmers to actually understand programming, but hey.
Apart from that, it's a hell of a lot better than alternatives, and my god is it fast. When I think about the perfect IDE (for my taste), this is getting pretty close.
Ah! So you can get that experience with the agent panel (despite "agent" being in the name).
If you click the dropdown next to the model (it will say "Write" by default) and change it from "Write" to "Minimal" then it disables all the agentic tool use and becomes an ordinary back-and-forth chat with an LLM where you can add context manually if you like.
Also, you can press the three-dots menu in the upper-right and choose New Text Thread if you want something more customizable but still not agentic.
Anyway, you can always write your prompts to do or not do certain actions. They are adding more features, and if you want you can ignore some of them - this is not contradictory.
vscode running a typescript extension (cline, gemini, cursor, etc) to achieve LLM-enhanced coding is probably the least efficient way to do it in terms of cpu usage, but the features they bring are what actually speeds up your development tasks - not the "responsiveness" of it all. It seems that we're making text editing and html rendering out to be a giant lift on the system when it's really not a huge part of the equation for most people using LLM tooling in their coding workflows.
Maybe I'm wrong but when I looked at zed last (about 2 months ago) the AI workflow was surprisingly clunky and while the editor was fast, the lack of tooling support and model selection/customization left me heading back to vscode/cline which has been getting nearly two updates per week since that time - each adding excellent new functionality.
Does responsiveness trump features and function?
I'm curious what you think of this launch! :D
We've overhauled the entire workflow - the OP link describes how it works now.
The free pricing is a bit confusing: it says 50 prompts/month, but also BYO API keys.
So even if I use my own API keys, the prompts will stop at 50 per month?
Also, since it’s open source, couldn’t just someone remove the limit? (I guess that wouldn’t work if the limit is of some service provided by Zed)
Two nitpicks:
1) the terminal is not picking up my regular terminal font, which messes up the symbols for my zsh prompt (is there a way to fix this?)
2) the model, even though it's suggesting very good edits, and gives very precise instructions with links to the exact place in the code where to make the changes, is not automatically editing the files (like in the video), even though it seems to have all the Write tools enabled, including editing - is this because of the model I'm using (qwen3:32b)? or something else?
Edit: a 3rd, big one: I had a JS file, made a small one-line change, and when I saved the file the editor automatically, and without warning, changed all the single quotes to double quotes. I didn't notice at first, committed, made other commits, then a PR - that's when I realized all the quote changes, which took me a while to figure out, until I started a new branch with the original file, made one change, saved, and then I saw it.
Can this behavior be changed? I find it very strange that the editor would just change a whole file like that
2. Not sure.
3. For most languages, the default is to use prettier for formatting. You can disable `format_on_save` globally, per-language, or per-project depending on your needs. If you ever need to save without triggering formatting, there's "workspace: save without formatting".
Prettier is /opinionated/ -- and its default is `singleQuote` = false which can be quite jarring if unexpected. Prettier will look for and respect various configuration files (.prettierrc, .editorconfig, via package.json, etc) so projects can set their own defaults (e.g. `singleQuote = true`). Zed can also be configured to further override prettier config zed settings, but I usually find that's more trouble than it's worth.
If you have another formatter you prefer (a language server or an external cli that will format files piped to stdin) you can easily have zed use those instead. Note, you can always manually reformat with `editor: format` and leave `format_on_save` off by default if that's more your code style.
- https://zed.dev/docs/configuring-zed#terminal-font-family
- https://zed.dev/docs/configuring-zed#format-on-save
- https://prettier.io/docs/configuration
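For reference, a minimal settings sketch covering both nitpicks — key names are from the docs above, and the font is just an example (use whatever your terminal uses):

```json
{
  "terminal": {
    "font_family": "MesloLGS NF"
  },
  "languages": {
    "JavaScript": {
      "format_on_save": "off"
    }
  }
}
```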
It would be nice for prettier to throw a user warning before making a ton of changes on save for the first time, and also let them know where they can configure it
If they had focused on
1. Feature parity with the top 10 VSCode extensions (for the most common beaten path — vim keybindings, popular LSPs, etc),
2. Implementing Cursor's Tab, and
3. A simple chat interface to which I can easily add context from the currently loaded repo,
I would switch in a heartbeat.
I _really_ want something better than VSCode and nvim. But this ain't it. While "agentic coding" is a nice feature, and especially so for "vibe coding" projects, I (and most of my peers) don't rely on it that much for daily driving our work. It's nice for having less critical things going on at once, but as long as I'm expected to produce code, both of the features highlighted are what _effectively_ makes me more productive.
1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using Zed's AI features in the past 2 weeks). Vim keybindings are better IMHO than every other non-vim editor and the LSP's I've used (typescript, clangd, gleam) have worked perfectly.
2. The edit prediction feature is almost there. I do still prefer Cursor for this, but it's not so far ahead that I feel like I want to use Cursor, and personally I find Zed to be a much more pleasant editor to use than vscode.
3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?
I'm not into vibe coding at all - I think AI code is still 90% trash - but I do find it useful for certain tasks, repetitive edits, and boilerplate, or just for generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well and I quite like the "follow mode" as a way to see what the AI is changing so I can build a better mental model of the changes I'm about to review.
I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).
I'm disappointed this wasn't the top comment on hn. Have we abandoned all culture?
I switched to cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It’s slow, memory hungry, and just doesn’t work as well (and in a keyboard centric way) as Zed.
Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn’t feel quite as polished as cursor (at least, edit predictions don’t feel as good or fast), but the agent mode works pretty well now. I still use cursor a little because anything that isn’t vscode or pycharm has imho a pretty bad Python LSP experience (those two do better because they use proprietary LSP’s), but I’m slowly migrating to full stack typescript (and some Gleam), so hope to fully ditch cursor in favour of Zed soon.
But I'm not sure how to get predictions working.
When the predictions on-ramp window popped up asking if I wanted to enable it, I clicked yes and then it prompted me to sign in to Github. Upon approving the request on Github, an error popover over the prediction menubar item at the bottom said "entity not found" or something.
Not sure if that's related (Zed shows that I'm signed in despite that) but I can't seem to get prediction working. e.g. "Predict edit at cursor" seems to no-op.
Anyways, the onboarding was pretty sweet aside from that. The "Enable Vim mode" on the launch screen was a nice touch.
I have run into some problems with it on both Linux and Mac where Zed hangs if the computer goes to sleep (meaning when the computer wakes back up, Zed is hung and has to be forcibly quit).
Haven't tried the AI agent much yet though. Was using CoPilot, now mostly Claude Code, and the Jetbrains AI agent (with Claude 3.7).
Does it not do incremental edits like Cursor? It seems like the LLM is typing out the whole file internally for every edit instead of diffs, and then re-generates the whole file again when it types it out into the editor.
We actually stream edits and apply them incrementally as the LLM produces them.
Sometimes we've observed the architect model (what drives the agentic loop) decide to rewrite a whole file when certain edits fail for various reasons.
It would be great if you could press the thumbs-down button at the end of the thread in Zed so we can investigate what might be happening here!
Went from Atom, to VSC, to Vim and finally to Zed. Never felt more at home. Highly recommend giving it a try.
AFAIK there is overlap between Atom's and Zed's developers. They built Electron to build Atom. For Zed they built gpui, which renders the UI on the GPU for better performance. In case you are looking for an interesting candidate for building multi-platform GUIs in Rust, you can try gpui yourself.
This is clearly a Markdown backend problem, but not really relevant in the editor arena, except maybe to realize that the editor "shell" latency is just a part of the overall latency problem.
I still keep it around as I do with other editors that I like, and sometimes use it for minor things, while waiting to get something good.
On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
So far the only editor I've found that does this is Typora.
If you like Zed's collaboration features, I wrote a plugin that makes Obsidian real-time collaborative too. We are very inspired by their work (pre agent panel...). The plugin is called Relay [0].
[0] https://relay.md
I'm also super interested in building this. OTOH Obsidian has a huge advantage for its plugin ecosystem because it is just so hackable.
One of the creators of Zed talked about their experience building Atom - at the time the plugin API was just wide open (which resulted in a ton of cool stuff, but also made it harder to keep building). They've taken a much stricter plugin API approach in Zed vs. Atom, but I think that earlier, wide-open approach is working out well for Obsidian's plugin ecosystem.
I followed the instructions and started with the latest versions (v143, VS 2022) for both build tools and spectre-mitigated libs. When that didn't work, I installed the last versions of v142-VS 2019, v141-VS2017, & v140-VS2015 and rebooted. The error persists.
I found a suggestion to change the locale for VS to US-en, but I checked and it's already set that way.
(I've yet to dive deep into AI coding tools and currently use Zed as an 'open source Sublime Text alternative' because I like the low latency editing.)