After major EC2 instance launch failures in the Northern Virginia (US-EAST-1) region, AWS has confirmed that throttle limits have been restored to pre-event levels and services are operating normally. Here is what happened, who was impacted, and what teams should do next.
🔴 What Happened?
On December 18, 2025, AWS US-EAST-1 experienced widespread EC2 instance launch failures. Teams attempting to scale their infrastructure or launch new instances received capacity errors. The issue affected multiple instance types and availability zones within the region.
📍 Who Was Impacted?
Any team relying on Auto Scaling groups, Spot Instances, or On-Demand EC2 launches in US-EAST-1 was affected. Services that depend on dynamic scaling, such as web applications, data pipelines, and batch processing jobs, experienced degraded performance or complete outages.
🔧 What AWS Did
AWS engineering teams identified the root cause as an internal capacity management issue. They worked to restore throttle limits to pre-event levels, redistribute capacity across availability zones, and communicate status updates via the AWS Service Health Dashboard.
✅ What Teams Should Do Now
Review your Auto Scaling group configurations and ensure multi-AZ redundancy, with multi-region redundancy for critical workloads. Set up CloudWatch alarms for EC2 launch failures. Consider using AWS Resilience Hub to assess and improve your application's resilience, and document your incident response playbook for future regional outages. Sketches of the first two steps follow below.
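For teams that want a concrete starting point, here are two minimal sketches in Python with boto3. All resource names (the Auto Scaling group, launch template, subnet IDs, and SNS topic ARN) are hypothetical placeholders; substitute your own. The first sketch spreads an Auto Scaling group across three Availability Zones and diversifies instance types, so a capacity shortfall in one AZ or one instance pool is less likely to block every launch:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Spread the group across subnets in three AZs and allow several
# interchangeable instance types as fallbacks for constrained pools.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",                       # hypothetical
    VPCZoneIdentifier="subnet-aaa1,subnet-bbb2,subnet-ccc3",  # one subnet per AZ
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-app-template",     # hypothetical
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m6i.large"},
            ],
        },
        "InstancesDistribution": {
            # Steer Spot requests toward the pools with the most spare capacity.
            "SpotAllocationStrategy": "capacity-optimized",
            "OnDemandBaseCapacity": 2,
            "OnDemandPercentageAboveBaseCapacity": 50,
        },
    },
)
```

The second sketch is one way to implement the CloudWatch alarm: rather than watching individual launch errors, it uses metric math to fire when the group's desired capacity exceeds its in-service instances for several consecutive minutes, a sustained shortfall that is a reasonable proxy for failing or stalled launches:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def asg_metric(metric_name: str) -> dict:
    """Build a MetricStat query for an Auto Scaling group metric."""
    return {
        "Metric": {
            "Namespace": "AWS/AutoScaling",
            "MetricName": metric_name,
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "web-app-asg"},  # hypothetical
            ],
        },
        "Period": 60,
        "Stat": "Average",
    }

# Alarm when desired capacity exceeds in-service instances for 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-asg-launch-shortfall",  # hypothetical
    Metrics=[
        {"Id": "desired", "MetricStat": asg_metric("GroupDesiredCapacity"), "ReturnData": False},
        {"Id": "inservice", "MetricStat": asg_metric("GroupInServiceInstances"), "ReturnData": False},
        {"Id": "shortfall", "Expression": "desired - inservice", "ReturnData": True},
    ],
    EvaluationPeriods=5,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # hypothetical topic
)
```

One caveat: group metrics such as GroupDesiredCapacity are only published once metrics collection is enabled on the group (for example via autoscaling.enable_metrics_collection with Granularity="1Minute").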
🔮 Key Takeaway
No cloud provider is immune to outages. The teams that recover fastest are those with multi-region architectures, automated failover, and well-practiced incident response procedures. Use this event as a prompt to review your own resilience strategy.
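As one concrete illustration of automated failover, a common pattern is DNS-level failover with Amazon Route 53: a health check probes the primary region's endpoint, and a secondary record in another region answers when it goes unhealthy. A minimal sketch, again in Python with boto3; the hosted zone ID, domain names, and endpoints are all hypothetical:

```python
import boto3

route53 = boto3.client("route53")

# Health check that probes the primary region's endpoint.
check = route53.create_health_check(
    CallerReference="primary-healthz-check-001",  # any unique string
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app.us-east-1.example.com",  # hypothetical
        "Port": 443,
        "ResourcePath": "/healthz",
        "RequestInterval": 30,  # seconds between probes
        "FailureThreshold": 3,  # consecutive failures before "unhealthy"
    },
)

# PRIMARY answers while the health check passes; SECONDARY (another
# region) automatically takes over when it fails.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEF",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                "HealthCheckId": check["HealthCheck"]["Id"],
                "ResourceRecords": [{"Value": "app.us-east-1.example.com"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "app.us-west-2.example.com"}],
            },
        },
    ]},
)
```

DNS failover is only the routing half of the story: it assumes the secondary region already runs enough warm capacity to absorb the traffic it inherits.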
