Cursor Notepads: how to review AI-generated code when your conventions live in a persistent document
Cursor Notepads is a feature that lets you write persistent context documents — architecture notes, coding conventions, project-specific guidelines — that Cursor’s AI reads automatically with every prompt in your project. Unlike .cursorrules (a single rules file that applies globally) or one-off chat context (which disappears after the session), Notepads are named documents you reference by @-mentioning them in your prompt: @Notepad/conventions or @Notepad/architecture. The AI sees the full Notepad content as context alongside your prompt and generates code that attempts to conform to whatever you have documented.
The workflow Notepads enables is different from any other Cursor feature covered in this series. You are not just asking Cursor to follow rules — you are giving it a document that describes how your team thinks about code, and trusting that document to shape what the AI produces. That persistent, authored context creates three review traps that are distinct from the inline-completion, multi-file editing, and agent-mode traps in other posts.
The three Cursor Notepads review traps
1. Convention-anchoring
Suppose your Notepad says: “Use the Repository pattern for all data access. Repositories take a database client in the constructor and expose methods that return domain objects, not raw rows.” You ask Cursor to add a new data access layer for a payments service. The suggestion implements a PaymentsRepository class with a constructor that accepts a database client and methods that return Payment domain objects. It matches the documented convention exactly.
The convention-anchoring trap fires here. Structural conformance with the documented pattern creates the feeling that the code has already passed a review: it’s doing what the Notepad says. Reviewers see the Repository shape and accept it as correct because it matches their documented expectation. But the Notepad describes a structural pattern, not the behavioral contract the new PaymentsRepository must satisfy. The correct Repository pattern shape can still fail to handle: the atomicity guarantee the payments service needs across two tables, the specific error types the calling code expects when a payment is not found versus when the database is unavailable, or the idempotency requirement for payment inserts that your Notepad never documented because it seemed obvious. The Notepad says how to structure the class. It does not say whether this class, structured this way, correctly implements what payments data access actually requires.
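To make that gap concrete, here is a minimal TypeScript sketch of a repository that conforms to the documented shape while missing all three behavioral contracts. The DbClient interface, the Payment type, and the SQL are illustrative assumptions, not actual Cursor output.

```typescript
// A repository that matches the documented convention exactly.
// DbClient, Payment, and the SQL below are illustrative assumptions.

interface DbClient {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

interface Payment {
  id: string;
  accountId: string;
  amountCents: number;
}

class PaymentsRepository {
  // Convention satisfied: database client in the constructor.
  constructor(private readonly db: DbClient) {}

  // Convention satisfied: returns a domain object, not a raw row.
  async findById(id: string): Promise<Payment | null> {
    const result = await this.db.query(
      "SELECT id, account_id, amount_cents FROM payments WHERE id = $1",
      [id]
    );
    // Not-found and database-unavailable surface however the driver
    // decides: null here, a thrown driver error there. The Notepad
    // never documented which error types callers expect for each case.
    if (result.rows.length === 0) return null;
    const row = result.rows[0];
    return { id: row.id, accountId: row.account_id, amountCents: row.amount_cents };
  }

  async create(payment: Payment): Promise<Payment> {
    // Two inserts, no transaction: if the second fails, the first
    // survives alone. The atomicity guarantee across two tables is
    // missing, and nothing in the structural pattern requires it.
    await this.db.query(
      "INSERT INTO payments (id, account_id, amount_cents) VALUES ($1, $2, $3)",
      [payment.id, payment.accountId, payment.amountCents]
    );
    await this.db.query(
      "INSERT INTO ledger_entries (payment_id, amount_cents) VALUES ($1, $2)",
      [payment.id, payment.amountCents]
    );
    // No idempotency key: a client that retries a timed-out create
    // inserts the payment twice.
    return payment;
  }
}
```

Every line of this class passes a check against the Notepad. None of the comments’ concerns would be caught by that check.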
The fix is explicit: evaluate what the code does in context, not whether it matches the Notepad. Convention-conforming code still needs to be read for correctness at the task level. Notepads tell the AI what structural patterns you use — they do not tell it what behavioral contracts your new service needs.
2. Stale-rule drift
Notepads persist across sessions and accumulate. An early architectural decision becomes a rule. A newer decision becomes a later rule. Months later, both rules exist in the same document. The AI applies them both, weighted by its internal sense of which one dominates in the current context, with no indication to you of which rule it applied or that a conflict existed.
A concrete example: your Notepad says “use Promise chains for async operations” near the top (written when the project started on Node 12) and “prefer async/await everywhere” lower down (written when you upgraded). Cursor sees both. For a new async function it generates async/await syntax; the later rule wins in that case. But for a more complex chaining operation, where the Promise chain pattern is better established in the Notepad’s examples, it generates a .then() chain. Both results compile. Both look like they follow what the Notepad says. The reviewer sees syntactically valid code in a pattern they recognize and accepts it without checking whether the pattern matches the current architectural direction.
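Here is a sketch of what that split can look like for the same Notepad; fetchUser and fetchOrders are hypothetical helpers, declared only so the example stands alone. Both functions type-check, and each matches a rule the Notepad genuinely contains.

```typescript
// Two outputs the same Notepad can produce. fetchUser and fetchOrders
// are hypothetical helpers, declared here only for illustration.

declare function fetchUser(id: string): Promise<{ id: string; name: string }>;
declare function fetchOrders(userId: string): Promise<string[]>;

// The later rule wins: a simple new function comes back as async/await.
async function getUserName(id: string): Promise<string> {
  const user = await fetchUser(id);
  return user.name;
}

// The earlier rule wins: a chaining-heavy case drifts back to .then(),
// pulled by the Promise-chain examples still in the Notepad.
function getOrderSummary(id: string): Promise<string> {
  return fetchUser(id)
    .then((user) => fetchOrders(user.id))
    .then((orders) => `${orders.length} orders`);
}
```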
The deeper version of this trap is when the stale rule is not just stylistic but structural. If your Notepad says “services communicate synchronously via direct function calls” from the early monolith phase and you have since moved to event-driven communication, a new service that Cursor generates using direct function calls will look correct to a reviewer who checks it against the Notepad. The Notepad says function calls are the convention. The Notepad is wrong about current architecture. The code will compile, pass review, and fail when it runs against a service that no longer has a synchronous interface.
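A sketch of that structural version, with every name invented for illustration (reserveStock, eventBus, the topic string): the stale-rule code type-checks against an export that still exists, while the current architecture expects an event.

```typescript
// All names here (reserveStock, eventBus, the topic string) are
// invented for illustration.

// A stale export from the monolith phase: it still type-checks, but
// the inventory service behind it no longer handles synchronous calls.
declare function reserveStock(sku: string, qty: number): Promise<void>;

declare const eventBus: {
  publish(topic: string, payload: unknown): Promise<void>;
};

// What the stale Notepad rule produces. It compiles, matches the
// documented convention, and fails at runtime.
async function placeOrderStale(sku: string): Promise<void> {
  await reserveStock(sku, 1);
}

// What the current event-driven architecture expects.
async function placeOrderCurrent(orderId: string, sku: string): Promise<void> {
  await eventBus.publish("orders.stock-reservation-requested", {
    orderId,
    sku,
    qty: 1,
  });
}
```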
The fix is to treat Notepad content as having a timestamp, even if it doesn’t. When reviewing AI output that references a Notepad rule, ask whether the rule is still current. Notepads that span more than a few weeks of active development accumulate stale rules faster than they accumulate new ones, and the AI cannot distinguish between the two.
3. Coverage-completeness illusion
A well-maintained Notepad is satisfying to look at: categories for authentication, data access, error handling, logging, testing conventions, naming rules. Fifteen rules. Examples for each. Clear prose. When Cursor generates code against a Notepad that comprehensive, there is a strong feeling that the AI has been given everything it needs. The context is complete. The conventions are documented. Review should be a spot-check.
This is the coverage-completeness illusion. Notepads document what you thought to write down, not what the AI needs to know to produce correct code for the specific task. Edge cases are systematically underrepresented: they are, by definition, the situations you did not anticipate when writing the general rules. Cross-cutting concerns that every developer knows but no one documented — that this service must emit audit events for every state change, that payment amounts should never be passed as floating-point, that this particular database table has a soft-delete pattern that requires every query to filter on deleted_at IS NULL — are absent from Notepads because they feel like shared knowledge that does not need to be written down.
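The soft-delete case is the easiest of these to show in code. In this sketch, the accounts table and the function names are assumptions; only the deleted_at IS NULL convention comes from the example above. Both queries are valid against the same schema, and nothing in a fifteen-rule Notepad flags the first one as wrong.

```typescript
// The accounts table and function names are assumptions; only the
// deleted_at IS NULL convention comes from the example in the text.

interface DbClient {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

// What a Notepad-conforming generation looks like: valid SQL, correct
// structural shape, and it happily returns soft-deleted rows because
// no rule documented the soft-delete pattern.
async function findAccount(db: DbClient, id: string) {
  const result = await db.query("SELECT * FROM accounts WHERE id = $1", [id]);
  return result.rows[0] ?? null;
}

// What the team's unwritten knowledge requires: every query against
// this table filters on deleted_at IS NULL.
async function findLiveAccount(db: DbClient, id: string) {
  const result = await db.query(
    "SELECT * FROM accounts WHERE id = $1 AND deleted_at IS NULL",
    [id]
  );
  return result.rows[0] ?? null;
}
```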
The completeness of a Notepad correlates poorly with the coverage of the review surface. A fifteen-rule Notepad covers fifteen things you thought to document. The specific task may have fifteen more implicit requirements that are not in any Notepad and that the AI has no way to infer. A reviewer who uses the Notepad as a proxy for “the AI had enough context” will apply lighter scrutiny to exactly the gaps that the Notepad leaves uncovered.
The fix is to treat Notepads as context accelerators, not correctness guarantors. They speed up the AI by giving it structural direction. They do not replace reading the code for domain-specific correctness. The implicit requirements — the things everyone on the team knows without having written them down — are still your review responsibility, regardless of how comprehensive the Notepad looks.
Using Notepads without letting them substitute for your review
Notepads are genuinely useful. Consistent structural conventions across a codebase reduce the cognitive overhead of reading AI-generated code: if every data access layer looks like a Repository, the unfamiliar parts of a new service stand out against the familiar structure. That contrast is useful for review. Notepads create it reliably in a way that prompting alone does not.
The traps above are not failures of the feature — they are failures of the reasoning pattern that the feature encourages. Convention-anchoring happens when conformance is treated as correctness. Stale-rule drift happens when Notepad content is treated as current. Coverage-completeness illusion happens when Notepad comprehensiveness is treated as context sufficiency. All three patterns share the same root: the Notepad becomes a stand-in for the review rather than a tool that makes the review faster.
Three habits preserve the benefit of Notepads while closing the traps they create: reading the specific task’s behavioral requirements before opening the Notepad, auditing Notepad content for stale rules when reviewing code that references a Notepad you haven’t updated recently, and keeping an explicit list of the implicit team knowledge your Notepads do not capture.
Related reading: Cursor Rules (.cursorrules) on the structurally similar traps in global rule files, which differ in session scope. Cursor AI IDE on the core inline-completion and Tab-rhythm traps that apply regardless of Notepad configuration. Cursor Background Agents on reviewing code produced while you were away, where Notepad context applies across a session you did not observe. How to review AI-generated code for the general five-check framework that applies across all AI coding tools.
Your Notepads say what the code should look like. ZenCode asks whether you checked what it does.
ZenCode surfaces one concrete review question before you accept — separate from what the Notepads document, what the convention says, or whether the code follows the pattern.
Try ZenCode free