GitHub Copilot Chat: how to review code when the chat interface explains it for you
GitHub Copilot Chat is the conversational layer on top of inline Copilot — the panel where you type questions about your code, ask for fixes, or request explanations without leaving VS Code. Unlike inline Copilot, where accepting a suggestion is a tab-key reflex, Copilot Chat is deliberate: you asked the question, you waited for the response, you are now reading it. That makes the review problems it creates different, and in some ways harder to notice.
Copilot Chat’s three most-used commands — /explain, /fix, and @workspace — each create a specific judgment trap. None of them are bugs. They are features doing their jobs, and the traps come from how those jobs interact with the way developers evaluate code.
The three traps
1. The /explain trap
/explain gives you a fluent description of what the selected code does, rendered in the Chat panel adjacent to the editor. Reading the explanation while the code is still visible in the editor creates a particularly strong version of the “I just reviewed this” feeling. The code is right there. You just read a clear explanation of what it does. It feels complete.
It is not. The explanation describes what the code intends to do. A bug in the implementation is visible in the code; it is often invisible in a correct description of the intent. An off-by-one in a loop boundary, a null-check placed in the wrong branch, an error path that returns before updating state — these are implementation-level failures that a correct intent-level explanation does not catch.
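A minimal sketch of that gap, with a hypothetical function not taken from the article. A plausible /explain summary — "returns the final n elements of the array" — is accurate at the intent level, yet says nothing about the one input where the implementation goes wrong:

```typescript
// Intent: return the last n items of an array.
function lastN<T>(items: T[], n: number): T[] {
  // Implementation bug: when n === 0, slice(-0) is slice(0), which
  // returns a copy of the WHOLE array instead of an empty one.
  return items.slice(-n);
}

console.assert(JSON.stringify(lastN([1, 2, 3], 2)) === "[2,3]"); // the path the summary describes
console.assert(lastN([1, 2, 3], 0).length === 3); // the edge case it never mentions: should be 0
```

The summary and the bug coexist comfortably: verifying the description tells you nothing about the n = 0 case.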
The IDE proximity makes this more potent than the same trap in ChatGPT. When you paste code into ChatGPT, there is a context switch: you left the editor, you are in a new tab, reviewing feels like a separate deliberate activity. With Copilot Chat, the explanation is 200 pixels from the code in the same window. The frictionlessness that makes /explain useful is the same property that makes explanation-as-verification feel complete.
2. The /fix diff-trust trap
/fix proposes a code change for selected text. The proposal arrives as a diff — green additions, red deletions — inside the Chat panel. This is the same visual language as a suggested change in a GitHub pull request review. Every developer who has ever accepted a suggested change in a PR review has built muscle memory that says: a diff in this format has been through a review process.
It has not. The diff is a proposal, not a reviewed suggestion. It arrives looking identical to something that has been evaluated, and that visual correspondence is exactly the trap.
The compounding version: if you used /explain first and the explanation was accurate, your confidence going into /fix is higher than it would be cold. The explanation was right, so the fix is probably right. The fix may correctly address the specific pattern the model identified while missing the edge case in the context that /explain described accurately at the intent level. Two correctly framed outputs in sequence create more confidence than either creates alone, even when the second output has a gap the first did not catch.
3. The @workspace context confidence trap
@workspace instructs Copilot Chat to search your entire repository before answering. When it finds relevant files, it shows them: “Based on auth.ts, middleware.ts, and api-handler.ts, here’s what’s happening…” Seeing your own filenames in the response creates a strong sense that the answer is grounded in your actual codebase.
That feeling is partially correct and partially misleading. Copilot Chat found what its indexer found. Files that are too large, too new, or excluded from indexing are not in the search space. The indexer returns the most semantically similar results for the query — which may not be the most architecturally relevant files for the specific question you needed to answer. The auth module that sets the session expiry constraint may not surface in a query phrased around the token validation function that depends on it.
The problem is not that @workspace is unreliable. The problem is that seeing real filenames creates confidence that outstrips what the indexer can actually guarantee. An answer built from three relevant files is not the same as an answer built from all relevant files, and the response format makes those two things look the same.
Three fixes
After /explain, name one thing the explanation did not cover. Read the explanation. Then, before treating it as a review, identify one specific behavior of the code that the explanation did not address. The edge case on empty input. The behavior when the upstream call fails mid-stream. What happens on the second loop iteration after the first write. If you cannot name one gap, you are reading the explanation, not the code. The thing you name is what you check next in the actual code. This breaks the explanation-as-review loop at the moment it would otherwise close.
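A hypothetical illustration of the gap-naming habit. Suppose /explain summarized the function below as "parses the Retry-After header into a number of seconds" — accurate, but silent on empty input. Naming that gap gives you one concrete thing to probe in the actual code:

```typescript
// Hypothetical function, not from the article.
function parseRetryAfter(header: string): number {
  const n = Number(header);
  return Number.isFinite(n) ? n : 0;
}

console.assert(parseRetryAfter("120") === 120); // the path the explanation covered
// The gap check: Number("") is 0, so a missing header value silently
// reads as "retry immediately" — intended behavior, or an unnamed edge case?
console.assert(parseRetryAfter("") === 0);
```

The probe takes seconds, and it is the difference between reading about the code and reviewing it.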
Treat /fix diffs the way you would treat a junior developer’s PR suggestion — read the reasoning, not just the code. A /fix proposal tells you what to change. It usually explains why. Read the why carefully: is it correcting the actual fault mode, or addressing a symptom? A fix that removes a null check without explaining why the null case cannot occur is applying a pattern without establishing the constraint. The diff review should verify the reasoning, not just confirm the change looks clean. If the explanation does not mention the constraint that makes the change safe, the explanation is incomplete regardless of whether the change is correct.
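A hypothetical /fix scenario, not from the article, showing why the reasoning matters more than the diff. The proposal removes a null check, claiming "user is always set by the auth middleware" — whether that constraint actually holds for every caller is the thing to verify:

```typescript
interface Session { user?: { id: string } }

// Original: fails loudly with a domain error when the constraint is violated.
function getUserIdChecked(session: Session): string {
  if (!session.user) throw new Error("unauthenticated");
  return session.user.id;
}

// After the proposed fix: a cleaner diff, safe only if the middleware constraint holds.
function getUserIdFixed(session: Session): string {
  return session.user!.id; // non-null assertion stands in for the removed check
}

// If the constraint does not hold, the "fix" trades a clear error for a crash.
let checkedErr = "";
try { getUserIdChecked({}); } catch (e) { checkedErr = (e as Error).message; }
console.assert(checkedErr === "unauthenticated");

let fixedErr = "";
try { getUserIdFixed({}); } catch (e) { fixedErr = (e as Error).name; }
console.assert(fixedErr === "TypeError"); // reading .id of undefined
```

Both versions diff cleanly; only the stated constraint distinguishes a correct simplification from a latent crash.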
Before sending an @workspace query, name the file you expect to be most relevant. Identify which file or function should anchor the answer. After the response, check whether Copilot surfaced it. If it did not appear in the context list, the answer was built without your most important input. Ask again with the file explicit: "@workspace #file:auth.ts what is the session expiry logic?" This forces the specific context you need rather than relying on the indexer's semantic guess about what matters.
Copilot Chat versus Copilot Workspace
The review problems in Copilot Chat are distinct from the ones in Copilot Workspace. Workspace creates a plan-approval-as-code-review trap at the specification level — approving a natural-language plan creates a false “review done” milestone before any implementation is written. Chat creates explanation-as-review and diff-trust traps at the implementation level, while you are working through specific code.
They run in different moments of the workflow: Workspace when you are starting a task from a spec, Chat when you are debugging or verifying code you are actively editing. Both require the same underlying habit: staying in the reviewer seat rather than handing it to the model’s output.
The comparison with ChatGPT code review is also useful. ChatGPT’s chat interface creates the same explanation-as-verification trap, but requires a context switch out of the IDE. Copilot Chat creates it inside the IDE, with the code and explanation simultaneously visible — making the false sense of review more immediate and harder to interrupt.
The honest verdict
Copilot Chat’s commands are genuinely useful. The risks are specific and addressable. /explain creates an explanation-as-review trap that closes faster than the ChatGPT version because the code is right there. /fix creates a diff-trust shortcut that borrows visual authority from the PR review workflow. @workspace creates a false-complete-picture feeling that scales with how familiar your filenames look in the response. In each case, the fix is an active step you take before accepting the output: name what was not covered, verify the reasoning, confirm the right context was found. None of these require long pauses. They require staying in the reviewer seat instead of reading the model’s output as if the review already happened.
ZenCode — stay in review mode during AI generation gaps
A VS Code extension that surfaces a 10-second breathing pause during AI generation gaps — keeping you in active review mode instead of passive waiting mode when the output lands.
Get ZenCode free