Zed editor AI: how to review code when the editor is built for AI collaboration
Zed is a Rust-built code editor designed from the ground up for two things: real-time multiplayer collaboration and AI-assisted development. It is not VS Code with an AI plugin attached to the side. The AI assistant is a structural component of the editor, accessible inline via Cmd+K for in-buffer edits and through a dedicated AI panel for longer exchanges. Zed positions itself as “the editor for the AI era” — not because it has better AI features than other editors, but because the editing model and the AI model were designed together rather than in sequence.
That design choice creates a set of review traps that are distinct from the ones you encounter with extension-based AI tools. The traps are not about which AI model Zed uses or whether the completions are accurate. They are about what happens to your evaluation posture when the editor itself is built around the assumption that AI is a collaborator rather than a foreign suggestion source.
The three traps
1. Editor-native AI endorsement
When an AI feature is a plugin installed into an existing editor, there is a visible seam in your mental model: here is the editor (proven, reliable), and here is an AI extension (to be evaluated separately on its own merits). The extension earns trust independently. This seam is visible in the UI — the AI panel is a panel, not a buffer; the inline suggestions have a distinct ghost-text color; the accept gesture is distinct from the edit gesture.
In Zed, the seam is absent by design. The inline assistant generates code directly in the editor buffer using the same font, the same color scheme, and the same cursor position as code you write yourself. The only visual distinction is a subtle highlight marking the generated range and a small accept/reject affordance. The editor’s overall credibility — its performance, its correctness on syntax highlighting, its reliable collaborative editing semantics — does not belong to a separate mental category from the AI-generated content inside it. When the AI and the editor share the same visual surface, the editor’s credibility bleeds into the AI output in a way that extension-based AI cannot replicate.
This is different from Cursor AI, where the AI-native IDE framing creates a similar trust transfer but the AI features remain visually distinct (ghost text, the Composer panel, the separate agent mode indicator). Cursor users generally know when they are receiving AI output. In Zed, the presentation is more uniform: the generated code looks like code that was there before you looked away. That uniformity is a design goal for fluent AI collaboration; it is also a design constraint that makes evaluation harder to trigger.
2. Inline buffer generation without a review-ritual trigger
Different locations in your development workflow carry different cognitive modes. When you are in the editor buffer, you are in writing mode. When you open a PR diff in GitHub, you are in review mode. When you switch to a terminal to run tests, you are in verification mode. These mode distinctions are not arbitrary — they reflect real differences in attention posture. Writing mode is forward-looking and generative; review mode is backward-looking and critical.
Zed’s inline assistant (Cmd+K) generates code at the cursor position, in-buffer, without a mode transition. Cursor Composer opens a separate panel; GitHub Copilot Workspace produces a PR diff view; Cline shows you a terminal-side diff you must approve. Each of these creates a context switch that can trigger review mode before you evaluate the output. Zed’s inline generation does not create a context switch. You prompted from the buffer; the output arrived in the buffer; your cursor is still at the generation point. The path of least resistance is to read from the cursor forward, which is reading in writing mode.
For short completions — a single method call, a one-line syntax transformation — this is manageable. The evaluation cost is low because the generated surface is small. For longer generations — a full function, a refactored block, a new data structure — the in-buffer presentation means the review happens without the mode shift that makes review effective. You are reading a change in the same place where you make changes, which activates the same posture as continuing your own work, not evaluating someone else’s.
3. Collaborative buffer creates social proof for AI output
Zed’s foundational use case is multiplayer real-time editing: multiple developers working in the same file simultaneously with live cursors and collaborative editing semantics. In this model, when your collaborator makes a buffer change, you see it appear in the buffer. You trust it at a higher baseline than you would trust an external PR from an unknown contributor, because your collaborator is known, present, and accountable. You review collaborator edits, but you are not in adversarial-review mode. You are in collaborative-review mode: open, constructive, expecting the edit to be broadly correct.
Zed extends this collaborative model to its AI assistant. The AI is framed as another collaborator working alongside you in the session. When AI-generated code appears in the buffer in this frame, it inherits some of the collaborative trust. The code appears as a collaborative contribution to the shared buffer, not as a foreign insertion from an external source. The visual framing is identical: text in the buffer, at a cursor position, in your editing session.
This is distinct from the authority bleed in tools like JetBrains AI (where the IDE’s inspection authority transfers) or CodeRabbit (where an AI PR reviewer’s analysis creates a completion signal). The Zed trap is social-proof transfer: the collaborative editing frame trains a higher-baseline trust for buffer changes, and AI output appears within that frame. The AI is not an oracle delivering an answer; it is a collaborator making an edit. Collaborator edits get higher default trust than oracle answers in most developers’ mental models.
Three fixes
Read from the function signature, not from the cursor line. After Zed’s inline assistant generates code, your cursor is at or near the generation point. The natural reading start is where the cursor is — but that is the lowest-scrutiny starting point because your cursor was there in writing mode. Instead, press Escape, scroll up to the function signature or the nearest logical boundary above the generated range, and read top-down as if you are reviewing a function you did not write. The content is identical to what you would read starting at the cursor. The cognitive mode activated by reading from the signature boundary is different: you are evaluating a function, not continuing your work. That mode difference is why the same code produces different evaluation quality depending on where you start reading.
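To make the posture difference concrete, here is a hypothetical generated function (the name and logic are invented for illustration). Reading cursor-forward from the end of the generation, the happy-path return and the loop body read as plausible; reading top-down from the signature, the loop bound is one of the first things you check against what the signature promises, and the off-by-one surfaces immediately.

```rust
// Hypothetical inline-assistant output, invented for illustration.
// Cursor-forward reading starts near the bottom; signature-first reading
// hits the loop bound early, which is where the bug actually lives.
fn window_max(values: &[i32], window: usize) -> Vec<i32> {
    let mut out = Vec::new();
    // BUG a signature-first read catches: `0..len - window` skips the
    // final window, so the returned Vec is one element short.
    for start in 0..values.len() - window {
        out.push(*values[start..start + window].iter().max().unwrap());
    }
    out
}

fn main() {
    // Four values, window of two: three windows exist, but only two come back.
    let maxes = window_max(&[1, 2, 3, 4], 2);
    assert_eq!(maxes, vec![2, 3]); // the [3, 4] window was silently dropped
}
```

The bug is not hidden; it is simply positioned where writing-mode reading rarely lands.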
Write the invariant before invoking the assistant. Before pressing Cmd+K, write one line as a comment above the cursor: what the function or the surrounding code must still guarantee after the edit. Not what you want the AI to do — that is the prompt. What must be preserved regardless of what it produces. This takes fifteen to thirty seconds and converts the post-generation review from a general quality assessment into a binary check: does the generated code preserve the stated invariant? A binary check is what works under time pressure and in the buffer’s writing-mode context; general quality assessments require review mode and dedicated attention. The comment can be deleted immediately after review; the invariant framing it creates does not disappear when the comment does. The same pre-specification habit transfers across AI coding tools: naming a concrete check before generation is the highest-leverage move in any inline AI workflow.
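As a sketch of the habit (the function name, the prompt, and the invariant are all hypothetical), the pre-generation comment turns the review into something close to an executable check:

```rust
// Written BEFORE invoking Cmd+K, as a plain comment above the cursor:
// INVARIANT: output stays strictly ascending, and every distinct input
// value survives.

// Hypothetical assistant output for a prompt like "collapse duplicate
// timestamps in this sorted list":
fn dedupe_sorted(mut timestamps: Vec<u64>) -> Vec<u64> {
    timestamps.dedup(); // removes consecutive duplicates; relies on sorted input
    timestamps
}

fn main() {
    let out = dedupe_sorted(vec![3, 3, 7, 7, 7, 12]);
    // The review is now a binary check against the stated invariant,
    // not a general quality assessment:
    assert!(out.windows(2).all(|w| w[0] < w[1])); // still strictly ascending
    assert_eq!(out, vec![3, 7, 12]);              // every distinct value survived
}
```

The asserts here only dramatize the point; in practice the binary check can stay in your head, because the comment already names exactly what to look for.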
Use git diff as the review surface, not the in-buffer markers. Zed’s accept/reject markers are the lowest-friction review surface: in-buffer, adjacent to the generated code, one keystroke away from acceptance. They are also inside the collaborative buffer frame, which is the source of the social-proof trap. After generating code but before accepting, open a terminal pane (Cmd+\) and run git diff. Read the diff there. The terminal is not the collaborative buffer. The diff format strips the buffer context that activates collaborative trust — you are reading a change set, not a buffer contribution. The accept decision still happens in the buffer, but the evaluation happens in the diff view. This is why code review in a dedicated diff interface produces different quality than reading a file directly in an editor: the format change carries a mode change. Zed makes it easy to have a terminal pane open alongside the editor; use it as a deliberate mode break between generation and acceptance.
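A minimal sketch of that mode break (the repo, file, and contents are throwaway stand-ins created so the commands run anywhere; in practice you would run only the two `git diff` lines in Zed's terminal pane). One caveat worth stating: `git diff` reads the working tree, so the buffer must be saved before the pending generation shows up in it.

```shell
# Throwaway demo repo so the sketch runs end to end anywhere.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo
printf 'fn main() {}\n' > main.rs
git add . && git commit -qm "baseline"

# ...the inline assistant generates into main.rs; save the buffer first,
# since git diff cannot see unsaved buffer state.
printf 'fn main() { println!("generated"); }\n' > main.rs

git diff --stat   # blast radius first: which files, how many lines
git diff          # then the change set itself, outside the buffer frame
```

The accept decision still happens back in the buffer; the point is only that the evaluation happened in a different frame first.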
What Zed AI gets right
Zed’s collaborative editing model is a genuine improvement for teams doing real-time pair programming with AI assistance. When two developers and an AI are working in the same file, the unified buffer model removes the context-switching cost that would otherwise come from switching between “editor” and “AI response panel” on every exchange. The AI writes where the humans write, which reduces cognitive overhead for tasks where the generation is short, the correctness bar is clear, and the review is fast.
Zed’s performance is also a real advantage for large files and complex codebases. The Rust-built rendering pipeline handles files that slow down Electron-based editors, and the native multi-buffer support (showing excerpts from multiple files in a single viewport) reduces the file-switching overhead that compounds during AI-assisted multi-file editing sessions.
The traps above appear specifically when the collaborative AI model is applied to larger or higher-stakes generation tasks where the evaluation burden is nontrivial — and where the buffer-native presentation makes it easy to stay in writing mode through a review that should have been done in review mode. Windsurf’s Cascade creates a related pattern: a generation model designed for long runs where watching the output in writing mode depletes the attention budget for the review that follows. The mode-switching problem is the shared root: AI tools that generate within the writing surface make it structurally harder to shift into evaluation mode before accepting.
ZenCode — stay in review mode during AI generation gaps
A VS Code extension that surfaces a 10-second breathing pause during AI generation gaps — keeping you in active review mode instead of passive waiting mode when the output lands.
Get ZenCode free