How to Set Up Scheduled Drift Detection for Terraform and OpenTofu

Infrastructure drift is inevitable. Someone makes a manual change in the AWS console during an incident, an auto-scaling event modifies a resource outside of Terraform’s control, or a teammate applies a hotfix directly to production. The question isn’t whether drift will happen — it’s whether you’ll catch it before it causes a compliance violation, a failed deployment, or a security incident.

Scheduled drift detection solves this by running automated checks at regular intervals, comparing your actual infrastructure state against what your Terraform or OpenTofu configuration declares. This guide walks through how to set it up across different approaches, from manual cron jobs to platform-native solutions.

For a broader overview of what drift is and why it matters, see our comprehensive guide to Terraform drift detection.

Why Scheduled Detection Beats Ad-Hoc Checks

Running terraform plan manually to check for drift works in theory, but it falls apart in practice. Teams forget. Engineers context-switch. And the longer drift goes undetected, the harder it becomes to untangle what changed, who changed it, and whether it was intentional.

Scheduled detection eliminates the human factor. It catches drift within hours instead of weeks, surfaces changes while context is still fresh, and creates an audit trail that compliance teams actually care about. The difference between “we check for drift sometimes” and “we detect drift within 24 hours” is often the difference between passing and failing a SOC 2 audit.

Approach 1: DIY with Cron and terraform plan

The simplest approach uses Terraform’s built-in exit codes. When you run terraform plan -detailed-exitcode, it returns exit code 0 for no changes, 1 for errors, and 2 for detected drift. You can wrap this in a script and schedule it with cron or a CI/CD pipeline.

Basic Shell Script

#!/bin/bash
# drift-check.sh
cd /path/to/terraform/config || exit 1
terraform init -backend-config=prod.hcl -no-color -input=false
terraform plan -detailed-exitcode -no-color -input=false -out=drift.plan 2>&1
EXIT_CODE=$?

if [ "$EXIT_CODE" -eq 2 ]; then
  echo "DRIFT DETECTED"
  terraform show -no-color drift.plan   # log the diff for the audit trail
  curl -X POST \
    -H 'Content-type: application/json' \
    --data '{"text": "Drift detected in production"}' \
    "$SLACK_WEBHOOK_URL"
elif [ "$EXIT_CODE" -eq 1 ]; then
  echo "ERROR running plan"
fi
rm -f drift.plan
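The static Slack payload above tells you drift happened but not what changed. Embedding the plan output in the message requires careful JSON escaping of quotes and newlines; a hedged sketch that delegates that to jq (the `build_payload` helper is ours, not a Slack or Terraform API, and assumes jq is installed):

```shell
#!/bin/bash
# Sketch: a richer Slack alert that embeds the plan diff in the message.

# Wrap arbitrary plan text in a valid Slack payload; jq handles the JSON
# escaping (quotes, newlines) that a hand-built string would get wrong.
build_payload() {
  jq -cn --arg text "Drift detected in production:
$1" '{text: $text}'
}

# Usage, replacing the static curl call in drift-check.sh:
#   build_payload "$(terraform show -no-color drift.plan)" |
#     curl -X POST -H 'Content-type: application/json' --data @- "$SLACK_WEBHOOK_URL"
```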

Schedule with Cron

# Run drift check daily at 6 AM UTC
0 6 * * * /opt/scripts/drift-check.sh >> /var/log/drift-check.log 2>&1
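If the schedule ever overlaps a real deployment, or a previous check that is still running, the two runs will fight over the state lock. One common mitigation, assuming util-linux flock is available on the host, is to serialize the job behind a lock file:

```shell
# Skip this run entirely if another drift check (or a deploy sharing the
# same lock file) is still holding the lock
0 6 * * * flock -n /var/run/drift-check.lock /opt/scripts/drift-check.sh >> /var/log/drift-check.log 2>&1
```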

Limitations of the DIY Approach

This works for a handful of workspaces, but it doesn’t scale. You need to manage credentials for every provider, maintain separate scripts per workspace, handle state locking conflicts with production runs, and build your own alerting and reporting. By the time you’ve solved all of these problems, you’ve built a rudimentary drift detection platform — which is exactly what dedicated tools are designed to do.
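A sketch of where this heads in practice: the single-workspace script grows into a loop over root modules. The directory names and both helper functions below are illustrative, not a prescribed layout:

```shell
#!/bin/bash
# Sketch: run one drift check across several root modules.
set -u

# Run terraform's drift check in one directory, propagating its exit code
# (0 clean, 1 error, 2 drift).
check_dir() {
  ( cd "$1" && terraform init -input=false >/dev/null \
      && terraform plan -detailed-exitcode -input=false >/dev/null )
}

# List every workspace whose checker exits with code 2.
drifted_workspaces() {   # $1 = checker command, remaining args = workspaces
  local checker=$1 rc ws; shift
  local drifted=()
  for ws in "$@"; do
    "$checker" "$ws" && rc=0 || rc=$?
    if [ "$rc" -eq 2 ]; then drifted+=("$ws"); fi
  done
  echo "${drifted[*]-}"
}

# Usage: drifted_workspaces check_dir workspaces/networking workspaces/compute
```

Each workspace added to the loop also adds provider credentials, backend config, and alert routing to maintain, which is the scaling problem described above.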

Approach 2: CI/CD Pipeline Scheduling

A step up from cron is running drift checks through your CI/CD system. GitHub Actions, GitLab CI, and Jenkins all support scheduled triggers.

GitHub Actions Example

name: Drift Detection
on:
  schedule:
    - cron: '0 6 * * 1-5'
  workflow_dispatch: {}

jobs:
  detect-drift:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        workspace: [networking, compute, database, monitoring]
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          # Skip the output wrapper so the shell sees terraform's real exit codes
          terraform_wrapper: false
      - name: Check for drift
        id: plan
        working-directory: workspaces/${{ matrix.workspace }}
        run: |
          terraform init -input=false
          # Exit code 2 means drift; record it as a step output instead of a
          # failure, so genuine plan errors (exit code 1) still fail the job
          terraform plan -detailed-exitcode -no-color -input=false \
            && echo "drift=false" >> "$GITHUB_OUTPUT" \
            || { ec=$?; [ "$ec" -eq 2 ] && echo "drift=true" >> "$GITHUB_OUTPUT" || exit "$ec"; }
      - name: Alert on drift
        if: steps.plan.outputs.drift == 'true'
        run: echo "Drift detected in ${{ matrix.workspace }}"

This is better than raw cron — you get logs, parallel execution across workspaces, and integration with your existing alerting. But you’re still managing credentials, paying for CI/CD minutes, and dealing with state lock contention when drift checks overlap with real deployments.
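One way to soften the lock contention specifically is worth noting: a drift check only reads state, so the plan can opt out of locking entirely. This trades a small risk (the plan may observe a state file mid-update and report spurious drift for one run) for zero contention with real deployments:

```shell
# Read-only drift check: never competes for the state lock
terraform plan -detailed-exitcode -no-color -lock=false
```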

Approach 3: Platform-Native Drift Detection with Scalr

Scalr includes built-in drift detection that eliminates the infrastructure overhead of DIY approaches. Instead of building and maintaining scripts, you configure drift detection at the environment level and Scalr handles the rest.

Setting It Up

Drift detection in Scalr is enabled per environment rather than per workspace. This is a deliberate design choice: it ensures consistent coverage across all workspaces in an environment without requiring individual configuration. To enable it:

  1. Navigate to your environment settings in the Scalr UI.
  2. Enable drift detection and set your preferred schedule — daily, weekly, or a custom interval.
  3. All workspaces within that environment will be checked on the defined schedule.

There’s no script to maintain, no credentials to manage separately, and no state lock conflicts — Scalr coordinates drift checks with regular runs automatically.

Monitoring and Notifications

When Scalr detects drift, the results appear in a dedicated Drift Detection tab, separate from your regular run history. This means drift findings don’t get buried in a stream of plan/apply runs. You can also build custom dashboards that aggregate drift status across all workspaces in your organization, giving platform teams a single view of infrastructure health.

For real-time alerting, Scalr integrates with Slack to notify your team when drift is detected. Rather than parsing CI/CD logs or monitoring email, your on-call engineer gets a Slack message with the affected workspace and a direct link to review the changes.

Acting on Detected Drift

Once drift is detected, Scalr presents three options directly in the UI: Ignore for expected changes, Sync State to update state to match reality, or Revert Infrastructure to roll back unauthorized changes. Scalr deliberately keeps a human in the loop for remediation — fully automated rollbacks sound appealing until they revert an emergency scaling event at 3 AM.

For a deeper look at when to use each remediation strategy, see our guide to drift remediation strategies.
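If you are on one of the DIY approaches instead, the same three choices map onto standard Terraform commands; a sketch, run from the affected root module:

```shell
# Ignore: take no action; the next scheduled check will report the same drift.

# Sync state: accept the real-world changes into state without modifying
# any infrastructure (Terraform 0.15.4+ and any OpenTofu release).
terraform apply -refresh-only

# Revert infrastructure: re-apply the configuration so resources match code.
terraform plan -out=revert.plan
terraform apply revert.plan
```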

Choosing Your Approach

Cron + shell scripts work for small teams with fewer than 10 workspaces and simple provider setups.

CI/CD pipelines are a good fit for teams that already have robust pipeline infrastructure and want drift detection without adding another tool.

Platform-native detection makes sense once you’re managing dozens of workspaces or need organizational visibility into drift across teams. The setup cost is near zero, and the ongoing maintenance cost is actually zero.

Whichever approach you choose, the most important thing is that drift detection runs automatically and consistently. A scheduled check that runs every day catches problems that ad-hoc manual checks never will.