Processing Service (Europe)
Hi all,
There is a major outage in the AWS North Virginia Region (us-east-1). Multiple services hosted in the region have been impacted. AWS has acknowledged the issue and is working on it. Please see the AWS Status page for reference.
AWS Status Page
Latest Update from the AWS side
Increased Error Rates and Latencies
Oct 20 2:27 AM PDT We are seeing significant signs of recovery. Most requests should now be succeeding. We continue to work through a backlog of queued requests. We will continue to provide additional information.
Latest Update from the AWS side
Oct 20 10:03 AM PDT We continue to apply mitigation steps for network load balancer health and recovering connectivity for most AWS services. Lambda is experiencing function invocation errors because an internal subsystem was impacted by the network load balancer health checks. We are taking steps to recover this internal Lambda system. For EC2 launch instance failures, we are in the process of validating a fix and will deploy to the first AZ as soon as we have confidence we can do so safely. We will provide an update by 10:45 AM PDT.
Summary for the AWS Outage:
Latest Update from the AWS side
Oct 20 10:38 AM PDT Our mitigations to resolve launch failures for new EC2 instances are progressing and the internal subsystems of EC2 are now showing early signs of recovering in a few Availability Zones (AZs) in the US-EAST-1 Region. We are applying mitigations to the remaining AZs, at which point we expect launch errors and network connectivity issues to subside.
Latest Update from the AWS side
Oct 20 11:22 AM PDT Our mitigations to resolve launch failures for new EC2 instances continue to progress and we are seeing increased launches of new EC2 instances and decreasing networking connectivity issues in the US-EAST-1 Region. We are also experiencing significant improvements to Lambda invocation errors, especially when creating new execution environments (including for Lambda@Edge invocations). We will provide an update by 12:00 PM PDT.
Latest Update from the AWS side
Oct 20 3:53 PM PDT Between 11:49 PM PDT on October 19 and 2:24 AM PDT on October 20, we experienced increased error rates and latencies for AWS Services in the US-EAST-1 Region. Additionally, services or features that rely on US-EAST-1 endpoints such as IAM and DynamoDB Global Tables also experienced issues during this time. At 12:26 AM on October 20, we identified the trigger of the event as DNS resolution issues for the regional DynamoDB service endpoints. After resolving the DynamoDB DNS issue at 2:24 AM, services began recovering but we had a subsequent impairment in the internal subsystem of EC2 that is responsible for launching EC2 instances due to its dependency on DynamoDB. As we continued to work through EC2 instance launch impairments, Network Load Balancer health checks also became impaired, resulting in network connectivity issues in multiple services such as Lambda, DynamoDB, and CloudWatch. We recovered the Network Load Balancer health checks at 9:38 AM. As part of the recovery effort, we temporarily throttled some operations such as EC2 instance launches, processing of SQS queues via Lambda Event Source Mappings, and asynchronous Lambda invocations. Over time we reduced throttling of operations and worked in parallel to resolve network connectivity issues until the services fully recovered. By 3:01 PM, all AWS services returned to normal operations. Some services such as AWS Config, Redshift, and Connect continue to have a backlog of messages that they will finish processing over the next few hours. We will share a detailed AWS post-event summary.
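For teams that want an early signal if a similar endpoint-level DNS failure recurs, a lightweight check of the regional DynamoDB endpoint can be run from a monitoring host. The sketch below is illustrative only and not official AWS or Trimble tooling; the hostname is the public regional endpoint named in the summary above, and the boto3 retry settings are assumptions about how a client might ride out the throttled recovery phase.

```python
# Illustrative sketch only; not official AWS or Trimble tooling.
# Checks DNS resolution for the regional DynamoDB endpoint (the trigger of
# this event) and configures a boto3 client with adaptive retries so calls
# back off gracefully while operations are being throttled during recovery.
import socket

import boto3
from botocore.config import Config

DYNAMODB_ENDPOINT = "dynamodb.us-east-1.amazonaws.com"  # regional endpoint


def endpoint_resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves via DNS."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False


# Adaptive retry mode adds client-side rate limiting on top of exponential
# backoff; max_attempts is an assumed value and should be tuned per workload.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
dynamodb = boto3.client("dynamodb", region_name="us-east-1", config=retry_config)

if __name__ == "__main__":
    if endpoint_resolves(DYNAMODB_ENDPOINT):
        print("DynamoDB regional endpoint resolves; calls will retry adaptively.")
    else:
        print("DNS resolution is failing for the DynamoDB regional endpoint.")
```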
Our cloud provider's outage has been resolved. Thanks for your patience.
Maintenance will be performed in all Trimble Connect regions on Sunday, August 24th, 2025, 02:00-03:00 AM UTC. During this time, we expect temporary slowness in operations related to Comments. Features that involve creating, listing, opening, and updating comments may experience longer response times than normal, but will otherwise operate normally. After the maintenance has completed, response times will return to normal. We apologize for the inconvenience.
The scheduled maintenance has been completed.
We are seeing performance degradation in our Processing Service (Europe) and are currently investigating.
We have identified the cause of the performance degradation and are working on a fix.
A fix has been implemented; we are closely monitoring system performance.
The issue causing the performance degradation has been resolved. Thanks for your patience.
We are seeing delays in the Trimble Connect Processing Service, which is responsible for converting uploaded 3D models, and are currently investigating the issue.
The issue causing the disruptions has been resolved. Thanks for your patience.