Terraform: how to review AI-generated infrastructure code when plan output shows changes but not whether they’re correct

2026-05-03 · 5 min read · ZenCode

Terraform is the dominant infrastructure-as-code tool for provisioning cloud resources. Developers describe the desired state of AWS, GCP, Azure, or any other provider in HCL configuration files, and Terraform computes a plan that shows the diff between declared state and actual state before applying any changes. AI assistants — GitHub Copilot, Claude, ChatGPT, and purpose-built IaC tools like Infracost AI and Spacelift AI — can generate complete Terraform configurations from natural language prompts. A developer describes an architecture in a few sentences and receives syntactically valid HCL that passes terraform validate and produces a parseable plan.

The risk is not that AI generates invalid Terraform. The risk is that AI generates Terraform that is valid, plan-clean, and wrong. Terraform’s plan output is optimized for showing what will change; it is not optimized for showing whether those changes are correct for the intended system. The three review traps below each exploit this gap between plan-validity and architectural correctness, and each requires a separate verification pass that terraform plan cannot perform.

The three Terraform AI review traps

1. terraform plan correctness gap

terraform plan performs two functions: it validates that the HCL configuration parses correctly against the provider schema, and it computes a diff between the declared desired state and the current state tracked in the state file. Both functions are about consistency and parsability, not about whether the declared resources are the right ones for the intended architecture. A plan that exits with Plan: 12 to add, 0 to change, 0 to destroy tells you that 12 resources are declared and none of them conflict with the current state. It tells you nothing about whether those 12 resources form the correct architecture.

The concrete pattern: you ask an AI assistant to generate Terraform for a web application backend with a load balancer, an auto-scaling group, and a database. The AI generates syntactically valid HCL. The plan runs cleanly. The plan output shows security groups, launch templates, an ALB, target groups, and an RDS instance being created: all the resource types you expect. But the generated security group rules allow 0.0.0.0/0 ingress on port 5432, the RDS instance is configured as publicly accessible, the auto-scaling group pins both minimum and maximum capacity to 1 so it never actually scales, and the ALB listener serves plain HTTP with no HTTPS listener or redirect. None of these are plan errors. The configuration is valid. The plan is clean. The architecture is wrong.
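A minimal sketch of what such a plan-clean but architecturally wrong configuration can look like. Resource names, CIDR values, and instance sizes are illustrative, not taken from any real generation:

```hcl
# Valid HCL, clean plan, wrong architecture (all names illustrative).

resource "aws_security_group_rule" "db_ingress" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # database port open to the internet
  security_group_id = aws_security_group.db.id
}

resource "aws_db_instance" "app" {
  identifier          = "app-db"
  engine              = "postgres"
  instance_class      = "db.t3.medium"
  allocated_storage   = 50
  publicly_accessible = true # reachable from outside the VPC
  username            = "app"
  password            = var.db_password
}

resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP" # no HTTPS listener, no redirect
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
```

Every block above passes terraform validate and plans cleanly; only a review of the argument values reveals the problems.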

The fix: treat terraform plan output as a resource inventory, not a correctness verdict. For every AI-generated Terraform configuration, read the plan output as a list of resources and then audit each resource’s argument values independently. For networking resources, check every ingress and egress rule against the principle of least access. For database resources, check publicly_accessible, backup retention, and encryption settings explicitly. For compute resources, check instance types, scaling limits, and IAM role permissions. For load balancers, check whether HTTP listeners exist alongside HTTPS listeners and what the redirect behavior is. The plan confirms the resources will be created; you must confirm they should be created with those exact argument values.
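One way to make this audit systematic is to query the machine-readable plan rather than eyeballing the human-readable diff. This is a sketch of such a pass using terraform show -json and jq; the resource types and attribute names are AWS-provider examples, and the queries assume a plan file produced in the current working directory:

```shell
# Produce a machine-readable plan.
terraform plan -out=tfplan
terraform show -json tfplan > plan.json

# Any security group rule open to the world?
jq '.resource_changes[]
    | select(.type == "aws_security_group_rule")
    | select(.change.after.cidr_blocks // [] | index("0.0.0.0/0"))
    | .address' plan.json

# Any database instance publicly accessible?
jq '.resource_changes[]
    | select(.type == "aws_db_instance")
    | select(.change.after.publicly_accessible == true)
    | .address' plan.json
```

Queries like these do not replace the manual review; they surface the addresses whose argument values most need a human look.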

2. Community module trust by name

Terraform’s public registry hosts thousands of community modules that encapsulate common infrastructure patterns. AI assistants select and call these modules by matching the module’s source path and description to the intent expressed in the prompt. The module name is the primary signal: terraform-aws-modules/rds/aws is selected because the name implies it creates an RDS instance. What the module actually creates — all of its internal resources, the defaults for its optional variables, the implicit dependencies between its outputs and other resources — is invisible to a developer who reads only the AI-generated module call.

The terraform-aws-modules/rds/aws module, for example, creates not just an aws_db_instance but also a subnet group, parameter group, option group, security group, CloudWatch log group, and IAM role by default. Many of these are created with defaults that the AI-generated module call does not override, because the AI selects only the variables it deems necessary to satisfy the stated intent. The module’s deletion_protection variable defaults to false. The skip_final_snapshot variable defaults to false but is often set to true in AI-generated calls to avoid destroy-time snapshot failures in test environments, and left that way when the configuration is promoted to production. The storage_encrypted variable defaults to false on older module versions. None of these appear in the plan output unless you know to look for them in the resource arguments section.

The fix: for every AI-generated module call, open the module source on the Terraform registry and read the variable defaults. Do not assume that the AI has specified all security-relevant or cost-relevant variables. Specifically check: whether deletion_protection is set for stateful resources; whether skip_final_snapshot is set correctly for the environment; whether encryption is enabled; whether the module version is pinned and what changed between that version and the current one. The module name describes the primary resource; the module source describes everything else that gets created.
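In practice this means the module call should state security- and lifecycle-relevant variables explicitly rather than inheriting defaults. A sketch of what a reviewed call might look like, with illustrative values for a production database (variable names are from the terraform-aws-modules/rds/aws module; verify defaults against the pinned version’s registry page):

```hcl
module "db" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 6.0" # pin, and review the changelog on every bump

  identifier     = "app-db"
  engine         = "postgres"
  engine_version = "15"
  instance_class = "db.t3.medium"

  deletion_protection     = true  # module default is false
  skip_final_snapshot     = false # never promote a test-env `true`
  storage_encrypted       = true  # defaulted to false on older versions
  backup_retention_period = 7
}
```

Making these variables explicit turns invisible module defaults into reviewable lines in the diff.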

3. Moved and import block irreversibility

When AI-assisted refactoring involves renaming resources, extracting modules, or importing existing infrastructure into Terraform management, the AI generates moved blocks, import blocks, or shell commands using terraform state mv and terraform import. These operations manipulate the state file directly. Unlike resource changes that can be planned, reviewed, and left unapplied, state operations take effect the moment they are committed to the state file — either at apply time for moved and import blocks, or immediately for CLI state commands.
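For concreteness, this is the shape of the blocks in question: a moved block that renames a resource into a module, and an import block (Terraform 1.5+) that adopts an existing resource. The addresses and the bucket name are illustrative:

```hcl
# Rename: tells Terraform the existing state entry now lives at a new
# address, so apply performs a state move instead of destroy-and-create.
moved {
  from = aws_s3_bucket.assets
  to   = module.storage.aws_s3_bucket.assets
}

# Adopt: associates a real, already-existing resource with a Terraform
# address. For aws_s3_bucket, the import ID is the bucket name.
import {
  to = aws_s3_bucket.logs
  id = "my-app-logs-bucket"
}
```

A typo in either address, or a wrong import ID, produces exactly the state inconsistencies described below, which is why these blocks deserve more scrutiny than ordinary resource blocks, not less.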

The recovery path from a wrong state operation is significantly harder than reverting a wrong resource configuration. If the AI generates a moved block with an incorrect destination address — referencing the wrong module path, wrong resource index, or a resource that does not yet exist — Terraform applies the state operation and the state file now contains an inconsistency. The next plan may show unexpected destroys and creates for the affected resources, or may fail outright because an address recorded in state no longer matches any block in the configuration. Restoring the previous state requires either a backup from the state backend’s versioning system (if versioning was enabled) or a manual state edit that reintroduces all the risks of the original incorrect operation. The same applies to import blocks that reference the wrong resource ID: the state now associates the wrong real resource with the Terraform address, and every subsequent plan operates against incorrect ground truth.

The fix: treat AI-generated state operations with higher scrutiny than AI-generated resource definitions, not lower. Before committing any moved block, verify that both the from and to addresses exist in the configuration as it will be after apply — not as it exists now. Before running any import block or terraform import command, confirm the resource ID format for that specific resource type (IDs differ by provider, resource type, and region) and verify the ID against the live infrastructure using the provider’s console or CLI. Take a state backup before any state manipulation operation. For terraform state mv commands specifically, run with the -dry-run flag first and verify the output before executing without it.
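The steps above can be sketched as a shell workflow. The addresses are illustrative, and the backup command assumes the current backend supports terraform state pull (all standard remote backends do):

```shell
# 1. Back up state before touching it.
terraform state pull > "backup-$(date +%Y%m%d%H%M%S).tfstate"

# 2. Confirm the source address actually exists in state.
terraform state list | grep 'aws_s3_bucket.assets'

# 3. Preview the move without modifying state.
terraform state mv -dry-run \
  aws_s3_bucket.assets module.storage.aws_s3_bucket.assets

# 4. Only after the dry-run output matches expectations, run it for real.
terraform state mv \
  aws_s3_bucket.assets module.storage.aws_s3_bucket.assets
```

The backup in step 1 is the only reliable undo for steps 3 and 4; backend versioning, where enabled, is a second line of defense, not a substitute.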

Reviewing Terraform AI output without inheriting its correctness assumptions

The three traps share a structural cause: AI tools generate Terraform configurations by producing HCL that satisfies the stated intent according to their training data. They validate against provider schemas and produce configurations that plan cleanly. They do not validate against your actual security requirements, your cost envelope, the real state of resources you want to import, or the actual addresses of resources you want to move. terraform plan validates the configuration against the provider schema and the state file; it does not perform the validation the AI skipped.

Each trap requires a separate verification pass. For resource configurations: audit argument values independently of plan output, focusing on security-relevant settings that AI tends to set permissively. For module calls: read the module source defaults rather than assuming the AI specified all relevant variables. For state operations: verify addresses and resource IDs against live state before applying and take a backup first. These passes are complementary to the plan, not replaceable by it. A clean plan is a necessary condition for applying AI-generated Terraform; it is not a sufficient one.


Related reading: Pulumi AI on IAM over-permissioning and state drift when natural language generates cloud infrastructure — Terraform-adjacent review traps in an SDK-based IaC tool. GitHub Copilot for GitHub Actions on reviewing AI-generated CI/CD pipeline fixes, where YAML correctness and workflow correctness have the same gap as Terraform plan correctness. Semgrep on static analysis for security patterns in infrastructure code and how pattern-detection coverage differs from architectural review. Snyk Code on security scanning passes that complement the manual correctness review Terraform plan cannot perform. How to review AI-generated code for the base checklist that applies before the Terraform-specific traps are addressed.

Terraform plan ran clean. ZenCode asks whether you verified the argument values, module defaults, and state operations.

ZenCode surfaces one concrete review question before you apply AI-generated infrastructure — the correctness pass that terraform plan cannot perform against your security requirements and actual resource state.

Try ZenCode free

More posts on AI-assisted coding habits