Migrating from SST to OpenTofu

In this post, I cover why I moved my website’s infrastructure from SST v2 to OpenTofu, what that infrastructure looks like, and the issues I ran into along the way. I also used Claude Code to implement the changes (and it even helped write the post).

Why Replace SST?

When I first built this site, SST v2 was a great choice. The AstroSite construct abstracted away all of the CloudFront, Lambda, S3, and Route53 wiring, and I was up and running in minutes. But SST v2 is now effectively abandonware: the team has moved on to SST v3, a complete rewrite with a different architecture built on Pulumi. Rather than follow SST through that migration, I opted for OpenTofu. Terraform (and by extension OpenTofu) is the most widely used infrastructure-as-code tool in the industry, and adopting it here was a chance to demonstrate flexibility across tooling.

I chose OpenTofu specifically over Terraform because it is fully open-source under the MPL-2.0 license. Terraform changed its license to BSL in 2023, which restricts certain commercial use cases. OpenTofu is a community fork that maintains full compatibility while keeping the open-source commitment. For a personal project, the license difference doesn’t matter much in practice, but I prefer to build on open foundations when possible.

Keeping Lambda

I could have simplified things by switching to a fully static S3-hosted site. My site doesn’t have any dynamic routes at the moment, so it would work fine. But I intentionally kept the Lambda + CloudFront architecture. If I ever want to add server-rendered pages, API routes, or authenticated sections, a static setup would require a significant rearchitecture. Keeping SSR on Lambda means those features are one Astro component away.

I also swapped the Astro adapter from astro-sst to @astro-aws/adapter, which is purpose-built for deploying Astro to AWS Lambda without any SST dependency. It produces three output directories after astro build, including the static client assets under dist/client/ that later get synced to S3.
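Swapping the adapter is a small change in astro.config.mjs. A minimal sketch, assuming the adapter's default export and default options (the exact import name and option set should be checked against the @astro-aws/adapter docs):

```js
// astro.config.mjs — sketch only; adapter import/options are assumptions
import { defineConfig } from "astro/config";
import aws from "@astro-aws/adapter";

export default defineConfig({
  output: "server", // keep SSR so dynamic routes stay one component away
  adapter: aws(),
});
```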

The Infrastructure

The OpenTofu config lives in infra/ and provisions the CloudFront distribution, the Lambda function, the S3 asset bucket, and the Route53 DNS records.

The CloudFront distribution has three categories of cache behavior:

# Public folder assets (favicons, images, fonts) — long cache, served from S3
dynamic "ordered_cache_behavior" {
  for_each = ["/favicons/*", "/images/*", "/fonts/*"]
  content {
    path_pattern     = ordered_cache_behavior.value
    target_origin_id = "s3"
    cache_policy_id  = "658327ea-f89d-4fab-a63d-7e88639e58f6" # CachingOptimized
    ...
  }
}

# Built assets (/_astro/*) — content-hashed, long cache, served from S3
ordered_cache_behavior {
  path_pattern     = "/_astro/*"
  target_origin_id = "s3"
  cache_policy_id  = "658327ea-f89d-4fab-a63d-7e88639e58f6" # CachingOptimized
  ...
}

# Everything else — no cache, forwarded to Lambda
default_cache_behavior {
  target_origin_id         = "lambda"
  cache_policy_id          = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad" # CachingDisabled
  origin_request_policy_id = "b689b0a8-53d0-40ab-baf2-68738e2966ac" # AllViewerExceptHostHeader
  ...
}
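For context, those behaviors reference two origins. A sketch of how they might be wired, assuming resource names like aws_lambda_function_url.ssr and aws_s3_bucket.assets that are not from the original config:

```hcl
# Lambda Function URL origin — CloudFront needs a bare domain name,
# so the scheme and trailing slash are stripped from the URL
origin {
  origin_id   = "lambda"
  domain_name = replace(replace(aws_lambda_function_url.ssr.function_url, "https://", ""), "/", "")

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}

# S3 origin for static assets, accessed via Origin Access Control
origin {
  origin_id                = "s3"
  domain_name              = aws_s3_bucket.assets.bucket_regional_domain_name
  origin_access_control_id = aws_cloudfront_origin_access_control.assets.id
}
```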

State is stored remotely in S3 with a DynamoDB lock table, both in us-west-2:

backend "s3" {
  bucket         = "johnlien-me-tofu-state"
  key            = "www/terraform.tfstate"
  region         = "us-west-2"
  dynamodb_table = "johnlien-me-tofu-locks"
  encrypt        = true
}
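The state bucket and lock table have to exist before the first tofu init. One way to bootstrap them, sketched with the AWS CLI using the names from the backend block above (the string LockID hash key is what Terraform-compatible locking expects):

```sh
# Create the state bucket (us-west-2 needs an explicit LocationConstraint)
aws s3api create-bucket \
  --bucket johnlien-me-tofu-state \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Enable versioning so earlier state versions stay recoverable
aws s3api put-bucket-versioning \
  --bucket johnlien-me-tofu-state \
  --versioning-configuration Status=Enabled

# Create the lock table with the LockID string hash key
aws dynamodb create-table \
  --table-name johnlien-me-tofu-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-west-2
```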

CI/CD

The GitHub Actions workflows were updated to replace the pnpm sst deploy and pnpm sst diff steps with OpenTofu equivalents. The deploy pipeline now:

  1. Builds the Astro site
  2. Runs tofu init and tofu apply to provision infrastructure and deploy the Lambda
  3. Syncs dist/client/ to S3
  4. Invalidates the CloudFront cache

The corresponding workflow steps:

- name: Build
  run: pnpm build
- name: Initialize OpenTofu
  run: tofu -chdir=infra init
- name: Apply infrastructure
  run: tofu -chdir=infra apply -auto-approve -var="aws_region=${{ secrets.AWS_REGION }}"
- name: Sync static assets to S3
  run: aws s3 sync dist/client s3://$(tofu -chdir=infra output -raw s3_bucket) --delete
- name: Invalidate CloudFront cache
  run: aws cloudfront create-invalidation --distribution-id $(tofu -chdir=infra output -raw cloudfront_distribution_id) --paths "/*"

The review pipeline runs tofu plan on pull requests so infrastructure changes are visible before merging.
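A sketch of that review step, assuming the same secret names as the deploy workflow:

```yaml
- name: Initialize OpenTofu
  run: tofu -chdir=infra init
- name: Plan infrastructure changes
  run: tofu -chdir=infra plan -no-color -var="aws_region=${{ secrets.AWS_REGION }}"
```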

Using Claude Code

I used Claude Code to handle most of the migration. I gave it the existing sst.config.ts and stacks/Site.ts and asked it to convert them to OpenTofu. It generated all nine .tf files, updated package.json, and removed the SST-specific files in one pass.

It wasn’t perfect. The initial provider version constraint (~> 5.0) was carried over conservatively from what SST was using under the hood, which caused the nodejs24.x issue (the 5.x provider predates that Lambda runtime and rejects it). The AllViewer origin request policy was the wrong choice for Lambda Function URLs — a known gotcha that Claude corrected once I reported the 403. The forwarded_values deprecation was caught when I asked it to review the config for similar mistakes.

For the debugging steps — identifying that the 403 was a Host header issue, and that missing assets were a combination of an empty S3 bucket and missing CloudFront behaviors — Claude reasoned through the response headers and error codes correctly without me having to spell out the cause.

Overall it handled the mechanical parts of the migration well and saved a few hours of boilerplate. The areas where it needed correction were mostly versioning assumptions and AWS-specific gotchas that required real deployment feedback to surface.