Cost Optimization with S3 Storage Lens and Lifecycle Policies
Naveen Teja
2/27/2026

As data lakes and automated backups grow, Amazon S3 storage costs can quickly spiral out of control. Many organizations store terabytes of infrequently accessed logs or legacy backups in the S3 Standard tier, paying a premium for fast-retrieval performance they do not actually need.
The solution is to implement S3 lifecycle policies driven by actual access patterns. S3 Storage Lens can surface which buckets and prefixes are rarely read, and lifecycle rules then automate the tiering: you can configure rules to transition objects to cheaper storage classes, such as S3 Standard-Infrequent Access after 30 days and eventually S3 Glacier Deep Archive after a year. Because Deep Archive storage is priced at roughly a twentieth of S3 Standard, this automated tiering can reduce storage costs for cold data by up to 90%.
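Before writing any lifecycle rules, it helps to turn on Storage Lens metrics so you can see which data is actually cold. The sketch below (the resource names and the exact nesting of the metrics blocks are assumptions based on the AWS provider v4+ schema, so verify against the provider documentation) enables account-level and bucket-level activity metrics:

resource "aws_s3control_storage_lens_configuration" "cost_visibility" {
  config_id = "cost-visibility"

  storage_lens_configuration {
    enabled = true

    account_level {
      # Activity metrics report request counts per bucket,
      # which reveal buckets that are written but rarely read.
      activity_metrics {
        enabled = true
      }

      bucket_level {
        activity_metrics {
          enabled = true
        }
      }
    }
  }
}

The free metrics tier aggregates at the bucket level; prefix-level aggregation requires enabling the paid advanced metrics.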
Defining these lifecycle rules in Terraform is straightforward. You attach a lifecycle configuration resource directly to your S3 bucket. The following example transitions objects to Infrequent Access after 30 days, moves them to Glacier after 90 days, and permanently deletes them after 365 days.
resource "aws_s3_bucket_lifecycle_configuration" "archive_logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "archive-and-expire-logs"
    status = "Enabled"

    # An empty filter applies the rule to every object in the bucket.
    # (With AWS provider v4+, each rule needs a filter or prefix.)
    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }
}
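One caveat: on a versioned bucket, the rule above only acts on current object versions, so noncurrent versions keep accruing Standard-tier charges indefinitely. A sketch of a companion rule (assuming the AWS provider v4+ schema, where the attribute is noncurrent_days rather than days) that archives and then deletes noncurrent versions:

resource "aws_s3_bucket_lifecycle_configuration" "versioned_logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "tidy-noncurrent-versions"
    status = "Enabled"
    filter {}

    # Move noncurrent versions to Glacier 30 days after they
    # are superseded by a newer version...
    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "GLACier"
    }

    # ...and delete them entirely after 90 days.
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}

Also note that S3 will not transition objects smaller than 128 KB to the Infrequent Access tiers, so buckets full of tiny log objects may see smaller savings than the headline numbers suggest.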

