Containers · DevOps

Migrating from EC2 to AWS Fargate: A Step-by-Step Guide

Naveen Teja

3/2/2026

EC2-based container deployments require you to manage the underlying instances: patching the OS, right-sizing instance types, managing Auto Scaling Groups, and handling cluster capacity. As container workloads grow, this operational overhead becomes a significant burden on small engineering teams who would rather ship features than manage servers.

AWS Fargate removes the EC2 layer entirely. You define your task (CPU, memory, container image, environment variables) and Fargate provisions, scales, and retires the underlying compute invisibly. You pay per vCPU-second and per GB-second consumed by running tasks; idle capacity costs nothing. For most containerized microservices, migrating from the EC2 launch type to Fargate meaningfully reduces operational overhead, and for services with variable traffic patterns it often lowers costs as well.
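As a rough sketch of the pricing model, using the published us-east-1 Linux/x86 rates at the time of writing (verify against the current Fargate pricing page before budgeting), two always-on tasks sized at 0.5 vCPU / 1 GB cost roughly:

```hcl
locals {
  # Assumed example rates (us-east-1, Linux/x86) -- check current Fargate pricing
  fargate_vcpu_per_hour = 0.04048
  fargate_gb_per_hour   = 0.004445

  # 2 tasks x (0.5 vCPU + 1 GB), running 730 hours/month
  monthly_cost = 2 * (0.5 * local.fargate_vcpu_per_hour + 1.0 * local.fargate_gb_per_hour) * 730
  # ~ $36/month for the steady-state baseline; auto-scaled tasks add cost only while running
}
```

The point of the arithmetic: with the EC2 launch type you pay for the instances whether tasks are running or not, while here the bill tracks actual task-seconds.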

The migration path is: audit your current ECS services to establish task CPU/memory requirements, convert any tasks that use host networking or privileged mode (Fargate supports neither), ensure your images don't require root access, and update the launch_type on each ECS service to FARGATE. You also need to switch to awsvpc network mode (the only mode Fargate supports), which gives each task its own elastic network interface and private IP. The Terraform below shows a complete Fargate task definition and service configuration with ALB integration and auto-scaling.

fargate-migration.tf
resource "aws_ecs_task_definition" "app" {
  family                   = "my-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 512  # 0.5 vCPU; must be a valid Fargate CPU/memory pairing
  memory                   = 1024 # 1 GB
  execution_role_arn       = aws_iam_role.ecs_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([{
    name      = "app"
    image     = "${aws_ecr_repository.app.repository_url}:latest" # pin a digest or immutable tag in production
    essential = true
    portMappings = [{
      containerPort = 3000
      protocol      = "tcp"
    }]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        awslogs-group         = "/ecs/my-app"
        awslogs-region        = "us-east-1"
        awslogs-stream-prefix = "ecs"
      }
    }
  }])
}

resource "aws_ecs_service" "app" {
  name            = "my-app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.private[*].id
    security_groups  = [aws_security_group.app.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 3000
  }
}
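One easy-to-miss prerequisite: the awslogs driver does not create the log group by default, so the /ecs/my-app group referenced in the task definition must exist before the first task launches, or the task will fail to start. A minimal sketch (the retention period is an assumption; tune it to your policy):

```hcl
resource "aws_cloudwatch_log_group" "app" {
  name              = "/ecs/my-app"   # must match awslogs-group in the task definition
  retention_in_days = 30              # assumption: adjust to your retention requirements
}
```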

# Auto-scaling: a scalable target must be registered before the policy can attach
resource "aws_appautoscaling_target" "ecs" {
  min_capacity       = 2   # match the service's desired_count baseline
  max_capacity       = 10  # ceiling for scale-out; adjust to your workload
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "ecs-cpu-autoscaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace

  target_tracking_scaling_policy_configuration {
    target_value = 70.0  # keep average service CPU utilization near 70%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
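The service above references aws_security_group.app, which isn't shown. With awsvpc networking each task ENI gets its own security group, so it should admit traffic only from the ALB. A minimal sketch, assuming the ALB's security group is defined elsewhere as aws_security_group.alb and the VPC as aws_vpc.main (both names are assumptions):

```hcl
resource "aws_security_group" "app" {
  name   = "my-app-tasks"
  vpc_id = aws_vpc.main.id  # assumption: VPC defined elsewhere as aws_vpc.main

  ingress {
    description     = "App traffic from the ALB only"
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]  # assumption: ALB security group name
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]  # tasks in private subnets still need outbound for ECR pulls and logs
  }
}
```

Because assign_public_ip is false and the tasks live in private subnets, image pulls and log delivery go out through a NAT gateway or VPC endpoints, so don't lock down egress without providing one of those paths.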