Terraform Import Blocks: You're (Probably) Using Them Wrong
Import blocks seem simple but hide complexity that corrupts state, triggers replacements, and crashes pipelines. Here's what actually happens.
So you've got a bunch of resources sitting in AWS that someone created through the console. Maybe it was you six months ago. Maybe it was that contractor who left. Doesn't matter - now you need to get them into Terraform.
Import blocks landed in Terraform 1.5 and everyone thought "finally, no more sketchy bash scripts wrapping `terraform import` commands." But here's the thing - I've seen more production incidents from import blocks than from the old CLI approach. Not because the feature is bad. Because people use it wrong.
The Mechanics Nobody Explains
Import blocks run during plan, not apply. Sounds obvious, right? But that's exactly why they break in ways you don't expect.
Here's what everyone tries first:
```hcl
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web
}
```
And boom - "Cannot import to non-existent resource address."
You need the resource block first. Always. No exceptions:
```hcl
resource "aws_instance" "web" {
  # Yes, it can be empty during import
}

import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web
}
```
But wait, there's more. Got resources with `count` or `for_each`? Your addressing changes:
```hcl
# For count resources
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web[0] # Note the index
}

# For for_each resources
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web["prod"] # Note the key
}
```
Miss this and you'll stare at error messages for 20 minutes wondering what's wrong.
The for_each Trap That Gets Everyone
Terraform 1.7 added `for_each` support to import blocks. Seems great for bulk imports:
```hcl
locals {
  instances = {
    web    = "i-1234567890abcdef0"
    api    = "i-0987654321fedcba0"
    worker = "i-abcdef1234567890"
  }
}

import {
  for_each = local.instances
  id       = each.value
  to       = aws_instance.main[each.key]
}
```
Plot twist: combine this with `-generate-config-out` and it fails. Just... fails. No clear error. The feature combo isn't supported.
And here's what really gets people - you can't use dynamic values:
```hcl
# THIS DOESN'T WORK
data "aws_instances" "existing" {
  filter {
    name   = "tag:ManagedBy"
    values = ["manual"]
  }
}

import {
  for_each = toset(data.aws_instances.existing.ids) # Nope
  id       = each.value
  to       = aws_instance.discovered[each.key]
}
```
The IDs must be known at plan time. Period. No data sources. No resource attributes. Just static values.
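If you need discovery anyway, do it outside Terraform: dump the IDs once, then emit static import blocks into a file you check in. A minimal sketch - `ids.txt`, `imports.tf`, and the `aws_instance.managed` address are made-up names, and the AWS CLI query in the comment is just one way to produce the list:

```shell
# In practice ids.txt would come from a discovery query, e.g.:
#   aws ec2 describe-instances --filters "Name=tag:ManagedBy,Values=manual" \
#     --query "Reservations[].Instances[].InstanceId" --output text
# Here the discovery step is faked with a static list.
printf 'i-1234567890abcdef0\ni-0987654321fedcba0\n' > ids.txt

# Emit one static import block per ID - every value is known at plan time.
: > imports.tf
while read -r id; do
  printf 'import {\n  id = "%s"\n  to = aws_instance.managed["%s"]\n}\n\n' "$id" "$id" >> imports.tf
done < ids.txt
```

The generated file goes through code review like anything else, and `terraform plan` sees only literals.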
Provider-Specific Nightmares
Each cloud provider has its own import ID format. AWS mostly uses simple IDs, but it's inconsistent even within a service: an IAM role imports by name, while an IAM managed policy wants the full ARN:

```hcl
# EC2 instance - simple ID
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.main
}

# IAM managed policy - needs the full ARN
import {
  id = "arn:aws:iam::123456789012:policy/my-policy"
  to = aws_iam_policy.main
}
```
Azure? Everything needs the full ARM ID:
```hcl
import {
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm"
  to = azurerm_virtual_machine.main
}
```
One wrong character in that path and you're debugging for an hour.
GCP is flexible, which somehow makes it worse:
```hcl
# All of these might work, depending on your provider config
import {
  id = "projects/my-project/zones/us-central1-a/instances/my-instance"
  to = google_compute_instance.main
}

import {
  id = "my-project/us-central1-a/my-instance"
  to = google_compute_instance.main
}

import {
  id = "my-instance" # If project/zone are in provider block
  to = google_compute_instance.main
}
```
The Cascade Effect Nobody Warns About
Import an RDS instance? Hope you're ready to import everything it touches:
```hcl
# You start here...
import {
  id = "my-database"
  to = aws_db_instance.main
}

# But then you need...
import {
  id = "my-subnet-group"
  to = aws_db_subnet_group.main
}

import {
  id = "sg-1234567890abcdef0"
  to = aws_security_group.rds
}

import {
  id = "my-parameter-group"
  to = aws_db_parameter_group.main
}

# And it keeps going...
```
What started as "let me import this database" becomes importing 15+ resources. The dependency chains are real.
Configuration Generation: The False Prophet
Everyone gets excited about `-generate-config-out`:

```shell
terraform plan -generate-config-out=generated.tf
```
Here's what you get:
```hcl
resource "aws_instance" "web" {
  ami                         = "ami-0123456789abcdef0"
  instance_type               = "t3.micro"
  availability_zone           = "us-east-1a"
  subnet_id                   = "subnet-1234567890abcdef0"
  vpc_security_group_ids      = ["sg-1234567890abcdef0"]
  associate_public_ip_address = true

  hibernation_options {
    configured = false
  }

  credit_specification {
    cpu_credits = "standard"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_protocol_ipv6          = "disabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }

  # ... 50 more lines of stuff you didn't set
}
```
Every. Single. Attribute. Including computed ones that will change on next apply. Apply this without cleanup and watch your instance get replaced.
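What works better: treat the generated file as a cheat sheet, not as config. Copy over only the arguments you would have written by hand, then re-plan and keep trimming until the diff is empty. Roughly - which attributes to keep is a judgment call per resource type:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
  subnet_id     = "subnet-1234567890abcdef0"

  vpc_security_group_ids = ["sg-1234567890abcdef0"]

  # Computed or defaulted attributes (availability_zone, metadata
  # defaults, hibernation_options, ...) stay out. Run `terraform plan`
  # after trimming; done when it shows no changes.
}
```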
Modules Make Everything Worse
Importing into module resources requires full paths:
```hcl
# Wrong
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web
}

# Right
import {
  id = "i-1234567890abcdef0"
  to = module.web_tier.aws_instance.web
}
```
But here's the kicker - you can't specify providers in import blocks. Got multi-region modules? You're in for pain:
```hcl
module "us_east" {
  source = "./region"
  providers = {
    aws = aws.us_east_1
  }
}

module "us_west" {
  source = "./region"
  providers = {
    aws = aws.us_west_2
  }
}

# This uses whatever provider the module inherits
import {
  id = "i-1234567890abcdef0"
  to = module.us_west.aws_instance.web
  # provider = aws.us_west_2 # CAN'T DO THIS
}
```
Performance Cliffs
Import 10 resources? Fine. Import 100? Take a coffee break. Import 500? Hope your state backend doesn't timeout.
Here's what happens at scale:
- Plan time balloons (30+ minutes is common)
- Memory usage spikes (8GB+ for large imports)
- State files bloat (10MB+)
- API rate limits hit hard
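The fix is boring: batch. Split the ID list, generate one imports file per batch, and plan/apply them one at a time. A sketch - the file names, the batch size of 4, and the `aws_instance.main` address are all invented, and `seq` stands in for your real discovery output:

```shell
# Fake a list of instance IDs; in reality this is your discovery output.
seq -f 'i-%015g' 1 10 > all_ids.txt

# Split into batches of 4 so each plan/apply stays small.
split -l 4 all_ids.txt batch_

# One imports file per batch - run each through plan/apply separately.
for f in batch_*; do
  : > "imports_${f}.tf"
  while read -r id; do
    printf 'import {\n  id = "%s"\n  to = aws_instance.main["%s"]\n}\n\n' "$id" "$id" >> "imports_${f}.tf"
  done < "$f"
done
```

Smaller batches also mean a failed import only blocks one batch, not the whole migration.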
You know what handles this better? Platforms that understand infrastructure at scale. Just saying.
When Import Blocks Actually Save You
Lost your state file? Import blocks can reconstruct it systematically:
```hcl
# disaster-recovery.tf
import {
  id = "i-critical-web-server"
  to = aws_instance.web
}

import {
  id = "db-critical-database"
  to = aws_db_instance.main
}

# Run plan to verify, then apply
```
The audit trail alone makes this better than CLI imports. Your git history shows exactly what got recovered when.
Brownfield migrations - that's where import blocks shine. You can check hundreds of import definitions into version control, review them in PRs, and execute systematically. Way better than a bash script with 500 terraform import commands.
The Right Way™
Here's my battle-tested workflow:
1. Pre-import validation
```shell
# Verify the resource exists
aws ec2 describe-instances --instance-ids i-1234567890abcdef0

# Check your provider auth
terraform providers
```
2. Write minimal resource blocks
```hcl
resource "aws_instance" "web" {
  # Empty is fine for import
}
```
3. Add import blocks
```hcl
import {
  id = "i-1234567890abcdef0"
  to = aws_instance.web
}
```
4. Plan and verify
```shell
terraform plan
# Look for "Plan: 1 to import, 0 to add, 0 to change, 0 to destroy"
```
5. Apply and immediately remove import blocks
```shell
terraform apply
# Then delete the import blocks from your config
```
6. Fill in the resource configuration
```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```
Common Failure Modes
| Symptom | Cause | Fix |
|---|---|---|
| "Cannot import to non-existent resource" | Missing resource block | Add empty resource block first |
| "Invalid import id" | Wrong ID format for provider | Check provider docs for exact format |
| "Values not known until apply" | Using dynamic values in import | Use static IDs only |
| Plan shows resource replacement | Generated config includes computed attributes | Remove computed attributes from config |
| Import succeeds but state is empty | Wrong provider/region context | Verify provider configuration |
| "Error acquiring state lock" | Concurrent imports or large import timeout | Increase lock timeout or reduce batch size |
| Memory/performance issues | Too many simultaneous imports | Break into smaller batches |
The Scalr Advantage
Look, I'm not saying you need a platform to handle imports. But when you're dealing with hundreds of resources across multiple teams, having something that understands infrastructure lifecycle - including imports - at an organizational level? That's when raw Terraform starts showing its limits.
Platforms like Scalr provide controlled environments where import operations get the same policy checks, audit trails, and collaborative workflows as regular changes. No more "who imported what when" mysteries. No more state corruption from concurrent operations. Just saying.
What's Next
HashiCorp keeps improving import blocks. The `identity` attribute in 1.12 helps with complex IDs. Performance gets better each release. But the fundamental complexity remains - you're synchronizing external reality with Terraform's view of the world.
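On 1.12+, the rough shape looks like the sketch below - describing the resource by identity attributes instead of hand-assembling a provider-specific ID string. Which attributes the identity object accepts depends on the provider and resource type, so treat this as an illustration, not verified syntax for every provider:

```hcl
import {
  to = aws_instance.web

  # Identity attributes instead of an ID string
  # (requires provider support for resource identity).
  identity = {
    id = "i-1234567890abcdef0"
  }
}
```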
My advice? Start small. Import one resource. Verify it works. Then scale up. And maybe consider whether wrestling with import blocks at scale is the best use of your time, or if a platform that handles this complexity might free you up for more valuable work.
The import block feature is powerful. But with great power comes great ways to corrupt your state file. Use it wisely.