Prisma: how to review AI-generated database code when a schema-first ORM generates typed queries and manages migrations

2026-05-04 · 5 min read · ZenCode

Prisma is a schema-first ORM for Node.js and TypeScript. You define your data model in a schema.prisma file using Prisma’s declarative language, and Prisma generates a fully typed client from that definition. Every query — findMany, create, update, upsert — is typed against your schema, with result types inferred from the fields and relations you defined. Schema changes are tracked as migration files through prisma migrate dev, applied to production with prisma migrate deploy, and the migration history is preserved in a _prisma_migrations table that tracks which migrations have been applied to which database.
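As a minimal illustration (the models and fields here are hypothetical, not taken from any particular project), a schema.prisma file looks like:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String?
  createdAt DateTime @default(now())
  posts     Post[]
}

model Post {
  id       Int    @id @default(autoincrement())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
```

From a definition like this, prisma generate produces a client where a call such as prisma.user.findMany({ include: { posts: true } }) is typed to return users together with a posts array, with no hand-written types involved.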

Because Prisma’s schema language is declarative and its client API is straightforward, AI coding tools produce Prisma code readily and fluently. The assistant writes schema files, migration invocations, and query sequences that compile without TypeScript errors and run against the database without exceptions. The code looks complete and correct at every static analysis level. Yet three specific review gaps appear consistently in AI-generated Prisma code — gaps that TypeScript types and successful test runs cannot surface because they involve migration history integrity, multi-step atomicity, and the difference between what the query returns and what the API should expose.


The three Prisma code review traps

1. prisma db push applied without migration history

Prisma provides two distinct tools for synchronizing the database schema with schema.prisma. The first is prisma migrate dev, which generates a timestamped SQL migration file, applies it to the development database, and records the migration in the _prisma_migrations table. The second is prisma db push, which introspects the current database schema, computes the diff against schema.prisma, and applies the necessary DDL directly — without generating a migration file and without recording anything in the migration history.

AI-generated setup scripts, Dockerfiles, and CI configurations frequently use prisma db push rather than prisma migrate deploy. The reason is pragmatic: prisma db push requires no prior migration files to exist, works correctly from a fresh schema, and produces no output beyond a success confirmation. It is the path of least resistance when the AI generates initial project scaffolding or when a developer asks the AI to update the database to match a changed schema. The command succeeds; the database matches the schema; everything works.

The review gap appears at the production boundary. A team using prisma db push throughout development has no migration files, no migration history, and no rollback path. Deploying a schema change to production means running prisma db push against a production database that holds live data — a command that applies computed DDL without a review step, without a transaction boundary guaranteed by the migration runner, and without any record of what was applied. If the push partially fails, the migration history is empty and there is no SQL file to examine or reverse. If the schema change needs to be rolled back, there is no migration file to revert to and no record of the previous state in the _prisma_migrations table.

The review check: search every AI-generated configuration file, CI workflow, and entrypoint script for prisma db push. In a development-only context, db push is acceptable for rapid iteration. In any context that touches a shared or production database — CI integration tests, staging deployment scripts, production entrypoints, Docker Compose files for shared environments — replace prisma db push with prisma migrate deploy and verify that the migration files committed to the repository match the expected schema state. If the repository contains no migration files at all, the team is operating without migration history and the problem is upstream of the deployment script.
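As a sketch of what the corrected deployment step looks like, a hypothetical CI fragment (workflow names and secret names are placeholders) runs prisma migrate deploy against the committed migration files rather than pushing a computed diff:

```yaml
# Fragment of a hypothetical GitHub Actions deploy job.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    # Applies pending migrations from prisma/migrations/ and records them
    # in _prisma_migrations; fails loudly if history and database disagree,
    # rather than silently applying computed DDL the way `db push` does.
    - run: npx prisma migrate deploy
      env:
        DATABASE_URL: ${{ secrets.PRODUCTION_DATABASE_URL }}
```

If this step fails because no migration files exist, that failure is the useful signal: the project has been running on db push and needs a baseline migration before it can deploy safely.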

2. Multi-step mutations without $transaction()

Prisma’s client API is promise-based: every database operation returns a promise and is written with await. AI-generated code that performs multi-step mutations writes them as sequential await calls: create a user, then create a profile linked to that user, then create an initial subscription record. Each statement reads naturally as a sequence and compiles without issues. TypeScript types ensure the result of the first call is available as input to the second. The code runs correctly in testing because the full sequence always completes on small, consistent test data.

The review gap is that each await prisma.[model].[operation]() call executes in its own auto-committed database transaction. There is no implicit transaction wrapping the sequential calls. If the second operation fails after the first has succeeded — due to a constraint violation, a network timeout, a concurrent write that creates a conflict, or a database error on the second statement — the first operation is already committed and will not be rolled back. The database is left in a partial state: a User record exists with no corresponding Profile, or a Profile exists with no Subscription. This partial state violates the application’s data integrity assumptions without the database raising an error, because each individual operation was valid in isolation.

AI-generated code is particularly prone to this pattern because the multi-step nature of an operation is determined by the application’s domain logic — the relationship between creating a user and creating the linked resources is not visible in the database schema alone. The AI writes correct operations for each step; it does not infer that they form an atomic unit unless the prompt explicitly mentions transactions. Prisma supports both a sequential transaction array (prisma.$transaction([op1, op2, op3])) and an interactive transaction (prisma.$transaction(async (tx) => { ... })). Neither is added by default.
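The failure mode can be demonstrated without a database. The stub below is not Prisma — FakeDb and its methods are stand-ins — but it mimics the relevant semantics: each un-wrapped write commits immediately, while the transaction wrapper applies all writes or none.

```typescript
// In-memory sketch of the partial-commit hazard. `insert` auto-commits like
// an un-wrapped `await prisma.x.create()`; `transaction` mimics
// `prisma.$transaction`, rolling back every write if the callback throws.
type Row = Record<string, unknown>;

class FakeDb {
  tables: Map<string, Row[]> = new Map();

  // Commits immediately, like an independent Prisma call.
  insert(table: string, row: Row): void {
    const rows = this.tables.get(table) ?? [];
    rows.push(row);
    this.tables.set(table, rows);
  }

  // All-or-nothing: snapshot state, restore it if the callback fails.
  transaction(fn: (tx: FakeDb) => void): void {
    const snapshot = new Map<string, Row[]>();
    for (const [t, rows] of this.tables) snapshot.set(t, [...rows]);
    try {
      fn(this);
    } catch (err) {
      this.tables = snapshot; // roll back to the pre-transaction state
      throw err;
    }
  }
}

// Sequential writes: the second fails, the first stays committed.
const db = new FakeDb();
try {
  db.insert("user", { id: 1 });
  throw new Error("profile insert failed"); // simulated constraint violation
} catch { /* swallowed for the demo */ }
console.log(db.tables.get("user")?.length); // 1 — an orphaned user remains

// Transactional writes: the same failure rolls back both.
const db2 = new FakeDb();
try {
  db2.transaction((tx) => {
    tx.insert("user", { id: 2 });
    throw new Error("profile insert failed");
  });
} catch { /* swallowed */ }
console.log(db2.tables.get("user")?.length ?? 0); // 0 — nothing committed
```

In real Prisma code the fix is the same shape: move the sequential awaits into prisma.$transaction([...]) or prisma.$transaction(async (tx) => { ... }), using tx in place of prisma inside the callback.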

The review check: for every AI-generated function that contains more than one Prisma write operation (create, update, upsert, delete), ask whether all operations must succeed or fail together. If yes, confirm that the operations are wrapped in prisma.$transaction() rather than executed as independent sequential await calls. Pay particular attention to operations that create linked records (user + profile, order + line items, payment + ledger entry), operations that transfer values between records (inventory decrement + order increment), and operations that delete a parent and its children. These are the patterns where partial failure creates the most damaging inconsistency and where AI-generated code most reliably omits the transaction wrapper.

3. Implicit full-row selection exposing sensitive columns in API responses

Prisma queries return all columns defined in the model by default. A prisma.user.findUnique({ where: { id } }) call returns every field on the User model: id, email, name, createdAt — and also passwordHash, resetToken, resetTokenExpiresAt, twoFactorSecret, isInternalAdmin, and any other sensitive column that happens to be on the model. AI-generated code writes these queries without select: options because the full return type is immediately useful: TypeScript tells the AI that user.email and user.name are available, and the AI uses them. The query compiles; the result is typed; the application works.

The review gap appears when this query result is passed directly or indirectly to an API response. AI-generated API route handlers that query Prisma and serialize the result to JSON do so with the full model object. The TypeScript type of the result correctly includes passwordHash: string and resetToken: string | null. TypeScript does not flag serializing these fields to JSON as an error because serialization is a runtime operation that TypeScript does not model. The reviewer who looks at the TypeScript types sees a correctly typed User object being returned. The API consumer who inspects the response body sees the hashed password, the active reset token, and the internal admin flag in the JSON payload.

This pattern is compounded by object spreading. AI-generated code frequently transforms Prisma results with spread operators: const response = { ...user, token: jwt }. The spread copies every field from the Prisma result into the response object, including all sensitive columns, without any explicit enumeration that would make the exposure visible during code review. The code that creates the exposure is a single character — the ... — and the sensitive fields are implicit in the model definition that the AI loaded when generating the code.

The review check: for every Prisma query in AI-generated code that feeds an HTTP response — directly in a route handler, or indirectly through a service function whose result is serialized — confirm whether the query includes an explicit select: block that enumerates only the fields the response should contain. A query without select: returns all model fields, including fields that were added to the model after the query was written and that were never reviewed for exposure. Add an explicit select: to every query in the API response path; do not rely on downstream filtering or TypeScript types to prevent sensitive field exposure. Pay special attention to any code that spreads a Prisma result into a response object — spreading a full model is a reliable indicator that sensitive columns are being exposed.
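The spread exposure and the explicit-enumeration fix can be sketched in plain TypeScript. No Prisma is involved here; the user object stands in for a full-row query result, and the pick helper plays the role that an explicit select: block plays inside the query itself (all field names are hypothetical):

```typescript
// Stand-in for a Prisma result fetched without `select:` — every column
// on the model comes back, sensitive ones included.
const user = {
  id: 1,
  email: "ada@example.com",
  name: "Ada",
  passwordHash: "fakehash", // sensitive
  resetToken: "tok_123",    // sensitive
};

// The spread copies every field into the response, sensitive ones included.
const leaky = { ...user, token: "jwt" };
console.log("passwordHash" in leaky); // true — exposed in the JSON payload

// Explicit enumeration, the discipline a `select:` block enforces at the
// query level: only named fields can reach the response.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  return Object.fromEntries(keys.map((k) => [k, obj[k]])) as Pick<T, K>;
}

const safe = { ...pick(user, ["id", "email", "name"]), token: "jwt" };
console.log("passwordHash" in safe); // false
```

The query-level fix is stronger than a response-level pick, because a select: block also protects against fields added to the model later — the query keeps returning only what it names, whereas a spread silently widens.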

Reviewing Prisma code without treating TypeScript correctness as API safety

Prisma’s type safety is a genuine engineering contribution: schema-derived types eliminate the gap between model definitions and query result types that plagues code written against raw database drivers. The review problem is that TypeScript correctness is not API safety. Code that compiles cleanly can still apply schema changes to production without migration history, commit partial database state when a multi-step mutation fails midway, and return sensitive columns in API responses because TypeScript does not model JSON serialization as a type boundary.

A practical review approach for AI-generated Prisma code: when you see prisma db push in any script that will run against a shared or production database, ask where the migration files are and what happens if this schema change needs to be rolled back. When you see multiple sequential Prisma write operations in a single function, ask whether they must all succeed or fail together and confirm a $transaction() wrapper is present. When you see a Prisma query result passed directly to a response or spread into a response object, open the schema and enumerate every field on that model — then confirm which of those fields should not appear in the API response. TypeScript validates the schema as declared; these three questions cover what it does not.


Related reading:

- Drizzle ORM: reviewing AI-generated database code where TypeScript-first schema inference and migration tooling create similar gaps between compile-time correctness and database runtime behavior.
- Supabase: reviewing AI-generated backend code where row-level security policies and database function boundaries create authorization gaps that compile correctly but enforce incorrectly at runtime.
- How to review AI-generated code: the general checklist that applies when AI generates database schema and query code for any TypeScript ORM.

The query is typed. ZenCode checks whether it’s safe.

ZenCode surfaces one concrete review question before you commit — including when AI-generated Prisma code passes all TypeScript checks but carries missing transaction boundaries, db push without migration history, or full-row selection that exposes sensitive columns in API responses.

