wsxiaoys•19h ago
OP here - I've talked in detail about how we rendered next edit suggestions (NES) using only VS Code's public APIs.
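For a concrete (and deliberately simplified) picture of what "public APIs only" looks like, here is a minimal sketch that renders a proposed insertion as ghost text through the InlineCompletionItemProvider API. fetchNextEdit is a hypothetical stand-in for the suggestion backend, not our actual code:

    import * as vscode from 'vscode';

    // Hypothetical stand-in for the call to the suggestion backend.
    declare function fetchNextEdit(
      doc: vscode.TextDocument,
      pos: vscode.Position,
      token: vscode.CancellationToken
    ): Promise<{ text: string } | undefined>;

    export function activate(context: vscode.ExtensionContext) {
      const provider: vscode.InlineCompletionItemProvider = {
        async provideInlineCompletionItems(document, position, _ctx, token) {
          const suggestion = await fetchNextEdit(document, position, token);
          if (!suggestion || token.isCancellationRequested) { return []; }
          // Ghost text suits insertions at the cursor; other edit shapes
          // need a different rendering path.
          return [
            new vscode.InlineCompletionItem(
              suggestion.text,
              new vscode.Range(position, position)
            ),
          ];
        },
      };
      context.subscriptions.push(
        vscode.languages.registerInlineCompletionItemProvider({ pattern: '**' }, provider)
      );
    }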
Most tools fork the editor or build a custom IDE so they can skip the hard interaction problems.
Our NES is a VS Code–native feature. That meant living inside strict performance budgets and interaction patterns that were never designed for LLMs proposing multi-line, structural edits in real time.
Within those constraints, surfacing enough context for an AI suggestion to be actionable, without stealing attention, is much harder.
That pushed us toward a dynamic rendering strategy instead of a single AI-suggestion UI. Each rendering path is deliberately scoped to the situations where it performs best, so every edit gets the least disruptive representation available.
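To make the dispatch idea concrete, here is a rough sketch of choosing a path from the shape of a proposed edit; the path names and thresholds are illustrative assumptions, not our production heuristics:

    import * as vscode from 'vscode';

    type ProposedEdit = { range: vscode.Range; newText: string };
    type RenderPath = 'ghost-text' | 'inline-decoration' | 'diff-view';

    // Illustrative dispatch only; real heuristics and thresholds differ.
    function chooseRenderPath(edit: ProposedEdit, doc: vscode.TextDocument): RenderPath {
      const replaced = doc.getText(edit.range);
      const lineSpan = edit.range.end.line - edit.range.start.line + 1;

      if (edit.range.isEmpty && !edit.newText.includes('\n')) {
        return 'ghost-text';        // cursor-local, single-line insertion
      }
      if (lineSpan <= 2 && replaced.length < 80) {
        return 'inline-decoration'; // small in-place replacement
      }
      return 'diff-view';           // multi-line structural edit
    }

Ghost text stays the cheapest to read for cursor-local insertions, while larger structural edits justify the heavier presentation.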
If AI is going to live inside real editors, I think this is the layer that actually matters.
Happy to hear your thoughts!