Terraform resources: from basic blocks to advanced lifecycle management
Learn Terraform fast: understand resource blocks, meta-arguments, and lifecycle management in one clear, practical guide.
Terraform resources form the backbone of infrastructure as code (IaC), allowing organizations to define, provision, and manage their infrastructure through declarative configuration files. This comprehensive guide examines both the fundamental structure of resource blocks and their complex behavior throughout the Terraform lifecycle.
Resource blocks form the foundation of Terraform configurations
Resource blocks are the primary building blocks in Terraform that define infrastructure objects to be created, updated, and managed. Each resource block represents a component of your infrastructure—from virtual machines and networks to DNS records and cloud services.
The basic syntax of a resource block follows a consistent pattern:
resource "resource_type" "resource_name" {
# Configuration arguments
argument1 = value1
argument2 = value2
}
Three essential components make up every resource block:
- The `resource` keyword that begins the declaration
- The resource type (like `aws_instance` or `azurerm_virtual_network`) indicating the infrastructure component
- A resource name that serves as a logical identifier within your Terraform code
The resource type typically follows the pattern `provider_resourcetype`, connecting resources to their respective providers. For example, `aws_instance` resources are managed by the AWS provider, while `google_compute_instance` resources are managed by the Google Cloud provider.
Within the block body, you'll define both required and optional arguments specific to the resource type. For instance, an AWS EC2 instance requires the `ami` and `instance_type` arguments at minimum:
resource "aws_instance" "web_server" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Name = "WebServer"
}
}
Resources can also include nested blocks for more complex configurations:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
# Nested block for boot disk configuration
boot_disk {
initialize_params {
image = "debian-cloud/debian-11"
size = 100
}
}
}
Resource behavior throughout the terraform workflow
Understanding how Terraform manages resources throughout their lifecycle is crucial for working effectively with the tool. The behavior of resources during different operations determines how your infrastructure evolves.
The state file acts as terraform's source of truth
At the core of Terraform's operation is the state file—a JSON document that maps configuration to real-world infrastructure objects. This file contains:
- Resource metadata and attributes
- Resource dependencies
- A record of the most recent resource version
When you run Terraform operations, the state file is used to determine what changes need to be made to align your infrastructure with your configuration.
Plan, apply, and destroy operations follow predictable patterns
During the `terraform plan` operation:
- Terraform reads the configuration and state file
- Terraform refreshes the state by querying providers about real infrastructure
- A dependency graph of all resources is created
- Each resource is evaluated to determine required changes
- An execution plan is displayed showing what will be created, updated, or destroyed
The `terraform apply` operation follows similar steps but actually executes the changes:
- Terraform performs the same steps as in plan (unless applying a saved plan, as shown after this list)
- Upon confirmation, changes are executed in dependency order
- Resources are processed in parallel where possible
- For each resource, appropriate provider methods are called
- The state file is updated with the new infrastructure state
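A reviewed plan can be saved to a file and applied exactly as written:

```bash
# Save the execution plan for review, then apply exactly that plan
# (applying a saved plan skips the interactive confirmation prompt)
terraform plan -out=tfplan
terraform apply tfplan
```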
When you run `terraform destroy`:
- Terraform generates a plan that will destroy all resources in the state
- Upon confirmation, resources are destroyed in reverse dependency order
- Destroyed resources are removed from the state file
Create, update, or destroy decisions are determined through reconciliation
Terraform determines whether to create, update, or destroy resources through a reconciliation process between:
- The desired state (from your configuration files)
- The current state (tracked in the state file)
- The actual infrastructure (determined by API calls)
Terraform prefers to update resources in-place whenever possible, but some changes cannot be applied to existing resources due to API limitations. In these cases, Terraform will destroy the existing resource and create a new one, a process known as replacement.
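When an in-place update is not possible, the plan output marks the resource for replacement. You can also force a replacement yourself; newer Terraform releases accept a `-replace` option on `plan` and `apply`:

```bash
# Force replacement of a single resource even if its configuration is unchanged
terraform apply -replace="aws_instance.example"
```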
Advanced resource configuration with meta-arguments
Meta-arguments are special arguments available across all resource types that modify how Terraform manages resources.
The depends_on meta-argument creates explicit dependencies
While Terraform automatically infers dependencies when one resource references attributes of another resource, sometimes you need to create explicit dependencies:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
depends_on = [
aws_iam_role_policy.example
]
}
This tells Terraform that the instance depends on the IAM role policy, even though it doesn't directly reference any of its attributes. Use `depends_on` when a resource relies on another resource's behavior but doesn't directly reference it in its arguments.
Count and for_each create multiple resource instances
The `count` meta-argument creates multiple instances of a resource based on a numeric value:
resource "aws_instance" "server" {
count = 4
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Name = "Server ${count.index}"
}
}
The `for_each` meta-argument creates multiple instances based on a map or set:
resource "aws_iam_user" "example" {
for_each = toset(["john", "mary", "bob"])
name = each.key
}
`for_each` is generally preferred over `count` for creating multiple resources with distinct configurations, as it handles changes to the collection better. With `count`, if an element is removed from the middle of a list, all subsequent instances will be affected, potentially causing unnecessary resource replacement.
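The difference is easiest to see side by side. A rough sketch, assuming a hypothetical `var.server_names` list:

```hcl
variable "server_names" {
  type    = list(string)
  default = ["alpha", "beta", "gamma"]
}

# With count, removing "beta" shifts "gamma" from index 2 to index 1, so
# Terraform wants to update or replace server[1] and destroy server[2].
resource "aws_instance" "server" {
  count         = length(var.server_names)
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = var.server_names[count.index]
  }
}

# With for_each, instances are keyed by name, so removing "beta" only
# destroys aws_instance.server_each["beta"] and leaves the rest untouched.
resource "aws_instance" "server_each" {
  for_each      = toset(var.server_names)
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = each.key
  }
}
```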
Provider selects which provider configuration to use
The `provider` meta-argument specifies which provider configuration to use for a resource:
resource "aws_instance" "example" {
provider = aws.west
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
This is particularly useful when managing resources across multiple regions or with different credentials.
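For the `aws.west` reference above to resolve, a matching provider configuration with that alias must exist. A minimal sketch (regions are illustrative):

```hcl
# Default AWS provider configuration
provider "aws" {
  region = "us-east-1"
}

# Additional configuration selected with provider = aws.west
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
```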
Customizing resource lifecycle behavior
The lifecycle block allows you to customize how Terraform manages resources during creation, updates, and deletion.
Create_before_destroy changes replacement order
By default, when Terraform needs to replace a resource, it destroys the existing resource first, then creates the replacement. The `create_before_destroy` option reverses this order:
resource "aws_instance" "example" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
This helps minimize downtime, especially for critical resources. However, not all resources can coexist with their replacements due to naming conflicts or other constraints.
Prevent_destroy protects critical resources
To protect critical resources like databases from accidental destruction:
resource "aws_db_instance" "database" {
engine = "mysql"
instance_class = "db.t3.micro"
lifecycle {
prevent_destroy = true
}
}
This causes Terraform to reject any plan that would destroy the resource. Note that this protection only applies as long as the resource exists in the configuration—if the resource block is removed, the protection is removed as well.
Ignore_changes prevents terraform from "fixing" external changes
The `ignore_changes` option tells Terraform to ignore changes to specific attributes:
resource "aws_instance" "example" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
tags = {
Name = "example-instance"
Environment = "production"
}
lifecycle {
ignore_changes = [
tags,
]
}
}
This is useful when:
- External processes modify certain settings
- You want to share management of a resource with another process
- Attributes are set only during creation but might change later
Replace_triggered_by forces replacement when dependencies change
Added in Terraform 1.2, this option creates an explicit dependency for replacement operations:
resource "aws_appautoscaling_target" "example" {
# ...configuration...
lifecycle {
replace_triggered_by = [
aws_ecs_service.example.id
]
}
}
This tells Terraform to replace the resource when the referenced resources change, even if the resource's own configuration hasn't changed.
Working with resource dependencies and references
Terraform's power comes from its ability to manage relationships between resources automatically.
Implicit dependencies are detected through attribute references
When one resource references attributes of another resource, Terraform automatically creates an implicit dependency:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "example" {
vpc_id = aws_vpc.main.id # Creates an implicit dependency
cidr_block = "10.0.1.0/24"
}
This ensures the VPC is created before the subnet, as the subnet needs the VPC's ID.
Resource attributes are referenced with consistent syntax
To reference a resource's attribute, use the syntax <RESOURCE_TYPE>.<RESOURCE_NAME>.<ATTRIBUTE>
. For example:
resource "aws_instance" "web" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
# Reference the instance ID in another resource
resource "aws_eip" "ip" {
instance = aws_instance.web.id
}
When a resource uses `count` or `for_each`, the reference syntax changes:
For `count`:
resource "aws_instance" "web" {
count = 3
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
# Reference a specific instance
resource "aws_eip" "ip" {
instance = aws_instance.web[0].id # References the first instance
}
# Reference all instances with splat expression
output "instance_ips" {
value = aws_instance.web[*].private_ip
}
For `for_each`:
resource "aws_instance" "web" {
for_each = toset(["prod", "staging", "dev"])
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Environment = each.key
}
}
# Reference a specific instance
resource "aws_eip" "prod_ip" {
instance = aws_instance.web["prod"].id
}
Dynamic resource configuration with provisioners
Provisioners allow executing actions on local or remote machines as part of resource creation or destruction.
File provisioners copy files to newly created resources
resource "aws_instance" "example" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
provisioner "file" {
source = "local/path/to/file.txt"
destination = "/remote/path/file.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
}
}
This copies a file from the machine running Terraform to the newly created instance.
Local-exec and remote-exec execute commands locally or on resources
The `local-exec` provisioner runs commands on the machine running Terraform:
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
The `remote-exec` provisioner runs commands on the resource:
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y nginx",
"sudo systemctl start nginx"
]
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
}
HashiCorp recommends using provisioners as a last resort due to their limitations. Alternatives include:
- Cloud-init or other initialization systems (see the sketch after this list)
- Configuration management tools (Ansible, Chef, Puppet)
- Immutable infrastructure with pre-configured images
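For instance, the nginx setup from the `remote-exec` example above could be moved into instance user data and handled by cloud-init on first boot; a minimal sketch, assuming an Ubuntu-based AMI:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # assumed to be an Ubuntu AMI
  instance_type = "t2.micro"

  # cloud-init runs this script on first boot, so Terraform never needs
  # an SSH connection to the instance
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl start nginx
  EOF
}
```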
Advanced resource manipulation techniques for more complex scenarios
For more sophisticated resource management, Terraform offers several advanced techniques.
Local values simplify complex expressions
Local values provide a way to assign names to expressions for reuse:
```hcl
locals {
  common_tags = {
    Environment = var.environment
    Project     = var.project_name
    Owner       = "DevOps Team"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"

  tags = merge(local.common_tags, {
    Name = "example-instance"
  })
}
```
This helps simplify complex expressions, calculate derived values once, and create a layer of abstraction.
Dynamic blocks generate repeatable nested configurations
Dynamic blocks generate repeatable nested blocks within resources:
resource "aws_security_group" "example" {
name = "example"
dynamic "ingress" {
for_each = var.service_ports
content {
from_port = ingress.value
to_port = ingress.value
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
}
This is particularly useful for resources that contain multiple similar nested blocks, like security group rules. However, use dynamic blocks sparingly, as they can make configuration harder to read.
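The `var.service_ports` collection referenced above is assumed to be declared elsewhere, for example:

```hcl
variable "service_ports" {
  description = "Ports to open for inbound TCP traffic"
  type        = list(number)
  default     = [80, 443, 8080]
}
```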
State management techniques for existing infrastructure
Terraform provides mechanisms to import existing infrastructure into its state:
Command line import:
```bash
terraform import aws_instance.example i-abcd1234
```
Configuration-driven import (Terraform 1.5+):
```hcl
import {
  to = aws_instance.example
  id = "i-abcd1234"
}
```
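Terraform 1.5 also introduced (initially as an experimental feature) plan-time generation of configuration for resources declared in `import` blocks:

```bash
# Generate HCL for imported resources into a file, review it, then apply
terraform plan -generate-config-out=generated.tf
terraform apply
```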
For managing state, Terraform offers several commands:
- `terraform state list`: List resources in the state
- `terraform state show`: Show detailed state of a specific resource
- `terraform state mv`: Move resources within state
- `terraform state rm`: Remove resources from state without destroying them (example invocations below)
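For example, with illustrative resource addresses:

```bash
terraform state list
terraform state show aws_instance.example
terraform state mv aws_instance.example aws_instance.web_server
terraform state rm aws_instance.example
```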
Best practices for terraform resource management in production
Effective resource management in production environments requires following established best practices.
Organize resources logically using files and modules
- Use modules to encapsulate related resources and create reusable components
- Separate infrastructure into logical layers (networking, security, compute, data)
- Follow a consistent naming convention for resources
- Organize files by functionality, not resource type
Use remote state with proper security and locking
- Always use a remote backend for state in production (see the sketch after this list)
- Enable state locking to prevent concurrent modifications
- Implement state encryption at rest
- Set up regular state backups
- Control access to state files with fine-grained permissions
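These practices depend on your backend of choice; as one common example, here is a minimal sketch of an S3 backend with locking and encryption, assuming a pre-existing bucket and DynamoDB table (all names here are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"             # hypothetical bucket
    key            = "prod/network/terraform.tfstate" # hypothetical state path
    region         = "us-east-1"
    encrypt        = true              # server-side encryption at rest
    dynamodb_table = "terraform-locks" # hypothetical table used for state locking
  }
}
```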
Apply proper version constraints to terraform and providers
```hcl
terraform {
  required_version = "~> 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
```
This ensures consistent behavior across different environments and team members.
Implement a robust CI/CD pipeline for terraform deployments
- Use a version control system for all Terraform code
- Always review plans before applying changes
- Use pull requests and code reviews for infrastructure changes
- Consider implementing automated testing for Terraform code (see the sketch after this list)
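A pipeline stage might run checks like these before any plan reaches a reviewer (an illustrative sketch, not a complete pipeline):

```bash
terraform fmt -check       # fail the build on unformatted code
terraform init
terraform validate         # catch syntax and basic configuration errors
terraform plan -out=tfplan # produce a plan artifact for human review
```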
Common anti-patterns to avoid
Watch out for these common anti-patterns when working with Terraform resources:
Hardcoding values instead of using variables
Anti-pattern:
resource "aws_instance" "example" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
tags = {
Environment = "production"
}
}
Better approach:
resource "aws_instance" "example" {
ami = var.ami_id
instance_type = var.instance_type
tags = var.tags
}
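The variables referenced above would be declared separately, for example (types and defaults are illustrative):

```hcl
variable "ami_id" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "tags" {
  type    = map(string)
  default = {}
}
```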
Overusing count instead of for_each
Use `for_each` instead of `count` when the collection might change to avoid unexpected resource recreations.
Neglecting resource lifecycle settings
Consider how resources will be updated or replaced and use lifecycle blocks appropriately:
- `create_before_destroy` for zero-downtime updates
- `prevent_destroy` for critical resources
- `ignore_changes` for attributes managed outside of Terraform
Overusing provisioners for configuration management
Instead of relying heavily on provisioners:
- Use cloud-init or user data for instance initialization
- Employ infrastructure immutability with pre-baked images
- Leverage configuration management tools for complex setups
Conclusion
Terraform resources form the foundation of infrastructure as code, allowing for declarative, version-controlled infrastructure management. Understanding both their basic structure and advanced behavior throughout the lifecycle is essential for effective Terraform usage. By following best practices and avoiding common anti-patterns, you can build robust, maintainable, and scalable infrastructure configurations that serve your organization's needs.
As you progress from basic to advanced Terraform usage, continue to refine your approach to resource management. Regular testing, continuous improvement, and staying updated with Terraform's evolving capabilities will ensure successful infrastructure management in production environments.