GitHub Copilot for JetBrains IDEs: how to review code in IntelliJ, WebStorm, and PyCharm
GitHub Copilot in JetBrains IDEs — IntelliJ IDEA, WebStorm, PyCharm, GoLand, Rider, CLion — is a separate product from the JetBrains AI Assistant that ships bundled with recent IDE versions. Both appear as inline ghost-text completions and both have a chat panel, but they are built on different model stacks and use different context-gathering approaches. JetBrains AI Assistant is tightly integrated with the IDE’s own Program Structure Interface (PSI), the deep semantic model JetBrains builds by indexing every file in the project, while GitHub Copilot operates primarily from file text and open editor tabs, the same context model it uses in VS Code and Neovim. Many JetBrains developers have both plugins active simultaneously, switching between them or not realizing which one produced a given completion.
The JetBrains environment creates review traps that do not arise in other editors. The IDE’s own analysis infrastructure — live inspections, Smart Completion, call hierarchy views, data flow analysis — is so capable that it tends to absorb the review attention that Copilot-generated code actually requires. This post covers the three traps specific to using GitHub Copilot inside a JetBrains IDE.
The three GitHub Copilot JetBrains code review traps
1. Live inspection neutralization
JetBrains IDEs run continuous code analysis in the background and display results as colored gutter icons, underlines, and the inspection stripe on the right side of the editor. Red means error, yellow means warning, gray means weak warning or suggestion. This system has trained a deep reflex in JetBrains developers: a clean inspection stripe — no red, no yellow — signals that the code is correct. After years of working in IntelliJ or PyCharm, the absence of inspection markers feels like a review pass. It is not.
JetBrains inspections are powerful but bounded. They catch null pointer dereferences where the type system can prove a value is possibly null, unreachable code, unused variables, type mismatches, API deprecations, and a range of framework-specific pattern violations if the right inspection profile is configured. What they cannot catch: behavioral logic errors where the code is syntactically and semantically valid but does the wrong thing for the domain, missing edge case handling that does not produce a null or type error, security vulnerabilities in application logic, and violated business rules that exist only in team knowledge rather than in the type system.
When Copilot generates a ten-line method and the inspection stripe immediately shows clean, the reflex fires: reviewed. The method may implement a retry loop that retries on every exception including InterruptedException, or an authorization check that returns true when the user object is null rather than throwing, or a database query that fetches all rows before filtering in memory. None of these produce inspection warnings. The clean stripe is real — the review it implies is not.
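To make the first of those failures concrete, here is a minimal sketch of the shape such a completion often takes. The retry helper and its names are hypothetical, not from any real suggestion; the point is that it compiles cleanly and typically raises no warnings under a default inspection profile, yet it is behaviorally wrong.

```kotlin
// Hypothetical Copilot-style completion: clean inspection stripe,
// wrong behavior under interruption.
fun <T> retry(attempts: Int, block: () -> T): T {
    var lastError: Exception? = null
    repeat(attempts) {
        try {
            return block() // non-local return is fine: repeat is inline
        } catch (e: Exception) {
            // Bug: InterruptedException lands here too. The interrupt
            // flag is lost and the loop retries as if nothing happened.
            lastError = e
        }
    }
    throw lastError ?: IllegalStateException("retry exhausted after $attempts attempts")
}
```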
The fix is to decouple the two signals explicitly. After accepting any Copilot completion longer than a single expression, treat the clean inspection result as a compilation signal only — the code compiles and is syntactically valid — and perform a separate behavioral read of what the code actually does for the inputs it will receive in production. The inspection stripe answers “is this valid code.” You still need to answer “does this do the right thing.”
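Applied to the sketch above, that separate behavioral read might produce something like the following. This is still a sketch under the same assumptions, not a canonical retry implementation; the point is that interruption and retryability become explicit decisions instead of accidents.

```kotlin
// Reworked sketch: interruption aborts immediately and only failures
// the caller declares retryable are retried.
fun <T> retry(
    attempts: Int,
    retryable: (Exception) -> Boolean,
    block: () -> T,
): T {
    var lastError: Exception? = null
    repeat(attempts) {
        try {
            return block()
        } catch (e: InterruptedException) {
            Thread.currentThread().interrupt() // restore the flag
            throw e                            // never retry cancellation
        } catch (e: Exception) {
            if (!retryable(e)) throw e // fail fast on non-retryable errors
            lastError = e
        }
    }
    throw lastError ?: IllegalStateException("retry exhausted")
}
```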
2. Smart Completion reflex transfer
JetBrains’ Smart Completion — triggered by Ctrl+Shift+Space — is type-aware in a way that other editors’ completion systems are not. It uses the full PSI model to propose only completions that are type-correct for the current expression context: if you are assigning to a List<UserDto>, Smart Completion shows only expressions that produce a List<UserDto>. It understands interfaces, generics, inheritance hierarchies, and data flow. A Smart Completion suggestion in IntelliJ has a meaningful prior probability of being correct, because the IDE has already checked the type constraint.
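As a quick illustration with hypothetical types: at the assignment below, Smart Completion filters its candidates by the expected type, so only expressions producing a List<UserDto> are offered.

```kotlin
data class UserDto(val id: String)

fun activeUsers(): List<UserDto> = emptyList()
fun userCount(): Int = 0

fun demo() {
    // Ctrl+Shift+Space here offers activeUsers() but never userCount():
    // the expected type List<UserDto> filters the candidate list.
    val users: List<UserDto> = activeUsers()
    println(users.size)
}
```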
Copilot ghost-text completions appear inline in the editor at the same position as Smart Completion results and are accepted with the same Tab key. For developers whose hands have spent years accepting Smart Completion results via Tab, the acceptance of a Copilot ghost-text completion feels identical — the same visual position, the same keystroke, the same instantaneous insertion. The JetBrains Smart Completion reflex fires on Copilot completions because the two systems are perceptually indistinguishable at the point of acceptance.
The difference is significant. Smart Completion guarantees type correctness; Copilot ghost text guarantees only pattern plausibility. A Copilot completion that supplies a userId: Long where the method signature expects a userId: UUID will produce an inspection error immediately — but a completion that supplies the right type with the wrong semantics (passing a session ID where a user ID is expected, both typed as String) passes type checking silently. The reflex was trained on completions that could not be semantically wrong in that way. Copilot completions can be.
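Here is that failure mode in miniature, with hypothetical names throughout. Both calls type-check and leave the inspection stripe clean; both are wrong.

```kotlin
// Both identifiers are plain Strings, so the compiler cannot tell
// them apart. (loadProfile and audit are hypothetical helpers.)
fun loadProfile(userId: String) = "profile:$userId"
fun audit(sessionId: String) = println("audited $sessionId")

fun main() {
    val userId = "u-42"
    val sessionId = "sess-9f3a"
    audit(userId)                   // swapped argument: compiles cleanly
    println(loadProfile(sessionId)) // swapped argument: compiles cleanly
}
```

A structural defense worth knowing: wrapping identifiers in value classes, e.g. @JvmInline value class UserId(val value: String), turns this class of mistake back into a compile error and restores the guarantee the Smart Completion reflex was trained on.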
The practical fix is to rebind Copilot acceptance to a key other than Tab (in a JetBrains IDE, the Copilot plugin's accept action can be reassigned through the keymap settings), creating a deliberate motor discontinuity between the Smart Completion reflex and Copilot acceptance. The brief pause required to press a different key is enough to re-engage deliberate processing before insertion.
3. PSI context gap
JetBrains builds a Program Structure Interface (PSI) tree for every project: a full semantic model of every class, method, field, annotation, import, call site, data flow path, and reference across all source files and dependencies. This model powers the IDE features developers use constantly — Go to Definition, Find Usages, Call Hierarchy, Type Hierarchy, Rename refactoring, Extract Method. When a developer uses these features, they are navigating the PSI graph. The IDE’s understanding of the project is deep and structural.
GitHub Copilot does not have access to the PSI. It sees file text: the content of the open file and the content of a small set of recently opened files that fit its context window. When Copilot generates a service method that calls userRepository.findByEmail(email), it is pattern-matching on the string userRepository visible in the current class, not traversing the PSI to understand the full call graph, the transaction boundaries, or the caching contract on that repository method. The code looks architecturally integrated because it references real project entities by name. The PSI gap is that Copilot does not know what those entities actually do.
This matters most in projects with non-obvious behavioral contracts: repository methods that have cache-aside semantics with specific invalidation rules, service methods that must be called within a specific transaction scope, utility methods whose names suggest a pure query but that modify shared state as a side effect. Copilot generates calls that reference the correct names but may violate the contracts attached to those names in the actual PSI graph. The IDE’s own AI Assistant has better access to this structural context — one practical use of having both tools available is to use JetBrains AI Assistant for suggestions in files with complex cross-file dependencies, and to treat Copilot suggestions in those files as drafts requiring a Find Usages pass before acceptance.
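A minimal sketch of the kind of contract involved, with hypothetical names: nothing in the types says that updateEmail must be paired with evict, so a generated call that references the right entities can still violate the rule.

```kotlin
class UserRepository {
    private val emailCache = mutableMapOf<String, String>()

    // Cache-aside read: fills the cache from the database on a miss.
    fun findEmail(userId: String): String =
        emailCache.getOrPut(userId) { queryDatabase(userId) }

    // Contract (convention, not types): every call must be paired with
    // evict(userId), or readers keep seeing the stale cached value.
    fun updateEmail(userId: String, email: String) = writeDatabase(userId, email)

    fun evict(userId: String) { emailCache.remove(userId) }

    private fun queryDatabase(userId: String) = "stored-email-for-$userId"
    private fun writeDatabase(userId: String, email: String) =
        println("UPDATE users SET email=$email WHERE id=$userId")
}

// A plausible generated call: real project names, clean compile, clean
// stripe, and a violated invalidation contract (the evict call is missing).
fun changeEmail(repo: UserRepository, userId: String, email: String) {
    repo.updateEmail(userId, email)
}
```

A Find Usages pass on updateEmail would surface the existing call sites that pair it with evict, which is exactly the structural context the PSI has and Copilot lacks.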
What this means for your review process
The three traps share a common cause: the JetBrains environment is so analytically capable that its real-time feedback absorbs the attention that Copilot-generated code still requires. The inspection stripe is a genuine signal of code validity — not of behavioral correctness. Smart Completion is a genuine guarantee of type correctness — not of semantic correctness. The PSI is a genuine model of the project structure — one that is not available to the model generating the completions.
The review habit that works in this environment is explicit. After accepting any Copilot completion that involves a method body (not just a single expression), perform three steps before moving on: check the inspection stripe for errors (you already do this), read the body for behavioral logic given the full input space, and run Find Usages on any externally defined method you did not write to verify the calling convention. The second and third steps take thirty seconds each. They are the part the inspection stripe cannot do for you.
ZenCode builds the habit infrastructure for this kind of pause. The review moment after accepting a completion — the thirty seconds before you move to the next task — is exactly the gap ZenCode is designed for.
Build a calmer review habit
ZenCode helps you use the pauses between AI generations deliberately — turning each one into a focused review moment instead of a distraction.
Get ZenCode → VS Code extension · free to try
Related: JetBrains AI Assistant review · GitHub Copilot Chat review · GitHub Copilot Enterprise review · How to review AI-generated code