Supabase AI: how to review code when AI generates your SQL, RLS policies, and Edge Functions
Supabase is an open-source backend platform built on PostgreSQL that provides a hosted database, authentication, object storage, and serverless Edge Functions. Its integrated AI assistant surfaces across the SQL Editor, the Table Editor, and the Edge Functions interface — generating SQL queries, Row Level Security policies, table schemas, and Deno-based server functions from natural language prompts. For developers building applications on top of Supabase, this AI integration removes the friction of writing boilerplate schema definitions, learning RLS policy syntax, and scaffolding Edge Function handlers from scratch.
That convenience creates a specific set of review risks that differ from reviewing AI-generated application code. Supabase AI generates code that operates at the infrastructure layer: it modifies live database schemas, defines security policies that govern data access for every request, and creates server-side functions that run with elevated privileges. The review obligations are correspondingly higher than for a utility function or UI component, and the failure modes are less visible because they do not surface as compile errors or obvious runtime exceptions. They surface instead as data exposure, errors that appear only at runtime in production, or migration failures after a schema has already been partially applied.
The three Supabase AI review traps
1. SQL Editor RLS bypass silently invalidates generated queries
The Supabase SQL Editor executes queries as the privileged postgres role, which owns your tables. In PostgreSQL, Row Level Security policies are enforced for standard database roles — anon, authenticated, and custom roles — but are not applied to the table owner unless the table has FORCE ROW LEVEL SECURITY enabled, and Supabase tables do not enable it by default (superusers and roles with the BYPASSRLS attribute skip policies regardless). When you run an AI-generated query in the SQL Editor and it returns the expected rows, you have confirmed that the query is syntactically correct and logically produces the right output for the postgres role. You have confirmed nothing about what the same query returns when executed from the Supabase JavaScript client under an anon or authenticated JWT.
The concrete pattern: you ask Supabase AI to generate a query that retrieves all posts for the current user. The AI generates a query using WHERE user_id = auth.uid(). The query runs in the SQL Editor and returns the correct rows. But if your posts table has an RLS policy that restricts SELECT to rows where user_id = auth.uid(), and the AI-generated query does not match the exact policy conditions — different column aliasing, a join that changes the scope of auth.uid(), or a policy using current_setting instead of auth.uid() — the client-side query will silently return an empty result set: no error, just missing rows. The policy filters the output without complaint. The SQL Editor result was never a test of the RLS behavior at all.
The fix: for any AI-generated query that touches a table with RLS enabled, explicitly check the query against the table’s policies before treating the SQL Editor result as validation. Use SELECT * FROM pg_policies WHERE tablename = 'your_table' to list the active policies and verify that the generated query satisfies each policy condition for the role your application uses. If Supabase AI also generated the RLS policy, verify the policy and the query together — the AI may have written them consistently for the intended behavior, but the SQL Editor cannot confirm whether they interact correctly under a standard role.
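One way to perform this check directly in the SQL Editor is to list the policies and then re-run the generated query under role impersonation. A minimal sketch, assuming a public.posts table and a placeholder user UUID — substitute your own table and a real user's id:

```sql
-- List the active policies on the table the generated query touches.
select policyname, cmd, qual
from pg_policies
where schemaname = 'public' and tablename = 'posts';

-- Re-run the generated query under the role your application actually uses.
-- Wrap it in a transaction so the role and claim changes do not persist.
begin;
set local role authenticated;
-- Simulate a signed-in user: auth.uid() reads the 'sub' claim from
-- the request.jwt.claims setting.
select set_config(
  'request.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000000"}',
  true
);
select * from posts where user_id = auth.uid();
rollback;
```

If the impersonated query returns fewer rows than the same query did as postgres, the RLS policy is filtering output the AI-generated query did not account for.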
2. Edge Function generation imports Node.js patterns into a Deno runtime
Supabase Edge Functions run on Deno, not Node.js. Deno has a different module system, a different standard library, and different global objects — divergences from Node.js that are intentional by design. AI code generators have been trained overwhelmingly on Node.js code, since the vast majority of server-side JavaScript in public repositories uses Node.js idioms. When Supabase AI generates an Edge Function, or when you generate one with an external AI tool and paste it into the editor, the output frequently contains Node.js patterns that will fail in Deno's runtime.
The specific patterns to check:
- require() syntax instead of ES module import. Deno does not support CommonJS require() without a compatibility layer, and the error it produces at cold start is not always obvious.
- process.env.VARIABLE_NAME instead of Deno.env.get('VARIABLE_NAME'). process is not defined in Deno's global scope; this throws a ReferenceError at runtime that does not appear as a type error in the editor.
- Buffer.from() instead of new TextEncoder().encode() or Uint8Array. Buffer is a Node.js built-in that does not exist in Deno.
- Node-native module imports such as import fs from 'fs', import path from 'path', or import crypto from 'crypto'. Deno provides its own standard library for these operations at https://deno.land/std, with different API shapes.
- Bare npm specifiers such as import express from 'express'. These do not resolve in Deno without an explicit npm: prefix and compatibility configuration.
None of these errors are visible as syntax errors. The generated code is valid JavaScript. The runtime gap only surfaces when the function is deployed or invoked. The fix: for every Supabase Edge Function generated by AI, run a pass specifically checking for Node.js-specific globals, built-ins, and module patterns before deployment. Replace process.env calls with Deno.env.get(). Replace Buffer usage with the Deno equivalent. Replace require() with import. Verify that any third-party package imports use Deno-compatible specifiers from jsr:, npm:, or https://deno.land/x.
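Put together, a Deno-idiomatic Edge Function looks roughly like the sketch below. It runs only under the Supabase Edge Runtime (Deno), not under Node; MY_API_KEY is a hypothetical secret standing in for whatever your function reads:

```typescript
// Illustrates the npm: specifier shape for third-party packages.
import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (req: Request): Promise<Response> => {
  // Deno.env.get, not process.env — `process` is undefined in Deno.
  const apiKey = Deno.env.get("MY_API_KEY") ?? "";

  // TextEncoder, not Buffer.from — Buffer does not exist in Deno.
  const payload = new TextEncoder().encode(await req.text());

  // Web Crypto is a global in Deno — no `import crypto from 'crypto'`.
  const digest = await crypto.subtle.digest("SHA-256", payload);

  return new Response(
    JSON.stringify({ digestBytes: digest.byteLength, hasKey: apiKey.length > 0 }),
    { headers: { "Content-Type": "application/json" } },
  );
});
```

Comparing an AI-generated function against a skeleton like this makes the Node.js leakage easy to spot line by line.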
3. Generated schema migrations conflict with live state the AI cannot see
Supabase AI generates table definitions, foreign key constraints, indexes, and triggers based on the description you provide in the prompt. It has no access to your live database schema. The generated SQL is a best-effort schema construction against an inferred model of what your database might look like, not a verified migration against what your database actually contains. When the inferred model diverges from your live state — and it diverges whenever your schema has evolved since you last gave the AI full context — the generated migration will conflict with your actual schema in ways that are not visible in the generated SQL itself.
The concrete failure patterns:
- The AI generates a CREATE TABLE users statement, but your schema already has a users table with a different column set; CREATE TABLE fails with a duplicate table error.
- The AI generates a foreign key referencing projects.id, but your actual primary key column is projects.uuid with a different type, so the constraint fails on creation.
- The AI generates an index on a column that already has a unique constraint; the index creation succeeds, but the uniqueness semantics are now duplicated, adding write overhead.
- The AI's migration ignores a BEFORE INSERT trigger that currently runs on the table being modified; a new NOT NULL column added by the migration immediately conflicts with the trigger's logic on the first insert after migration.
The fix: treat every AI-generated schema migration as a draft that requires verification against live state before application. Use Supabase’s supabase db diff command or a schema inspection query to list your current tables, columns, constraints, indexes, and triggers. Before running the generated migration, check every CREATE TABLE against existing tables, every foreign key reference against the actual primary key column name and type, and every column addition against existing triggers and constraints on that table. For migrations that modify existing tables rather than creating new ones, run the migration against a copy of your local development database first and inspect the diff output before applying to production.
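Alongside supabase db diff, the live state can be inspected directly from the SQL Editor with standard PostgreSQL catalog queries. A sketch, assuming the generated migration touches a table named public.projects — substitute each table the migration modifies:

```sql
-- Existing columns and types: catches duplicate CREATE TABLE and
-- column-type conflicts before the migration runs.
select column_name, data_type, is_nullable
from information_schema.columns
where table_schema = 'public' and table_name = 'projects';

-- Existing constraints (primary keys, foreign keys, uniques): catches
-- foreign key references to the wrong column name or type.
select conname, pg_get_constraintdef(oid) as definition
from pg_constraint
where conrelid = 'public.projects'::regclass;

-- Existing triggers: catches interactions between new NOT NULL columns
-- and trigger logic that already runs on the table.
select tgname, pg_get_triggerdef(oid) as definition
from pg_trigger
where tgrelid = 'public.projects'::regclass and not tgisinternal;
```

Checking each CREATE, ALTER, and constraint in the generated migration against this output turns the AI's inferred schema model into a verified one.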
Using Supabase AI without inheriting its infrastructure blind spots
The three traps share a structural cause: Supabase AI generates code that operates at the intersection of your application layer and your database infrastructure, but it has no access to the actual state of either at generation time. The SQL Editor RLS bypass means the AI’s query output cannot be validated in the same environment where it will execute in production. The Deno runtime gap means the AI’s Edge Function output cannot be validated by syntax or type checks alone — only runtime behavior reveals the failure. The schema drift gap means the AI’s migration output cannot be validated without comparing against live state the AI has never seen.
Each trap requires a separate verification pass that the Supabase AI interface cannot perform. For SQL: check against active RLS policies using the role your application uses, not the superuser context of the SQL Editor. For Edge Functions: audit every import, global reference, and built-in usage against Deno’s runtime before deployment. For schema migrations: diff the generated SQL against your live schema state before applying. These passes are independent of whether the generated code looks correct in the editor. They address gaps that are invisible to any review that stops at the generated text.
Related reading: Firebase Studio on the similar infrastructure generation review challenges when AI generates backend config and rules for a competing platform. Pulumi AI on IAM over-permissioning and cost blindspots when AI generates cloud infrastructure configuration. Snyk Code on security-focused review passes for AI-generated server-side code. OpenAI Codex on the environment isolation gap when a cloud agent generates code in a sandbox that differs from your production runtime. How to review AI-generated code for the base checklist that applies after the tool-specific traps have been addressed.
Supabase AI generated the schema. ZenCode asks whether you checked it against your RLS model and live state.
ZenCode surfaces one concrete review question before you apply AI-generated SQL or deploy a generated Edge Function — the infrastructure verification pass that the Supabase editor cannot perform against your actual database state.
Try ZenCode free