Historical record of incidents for Wasabi Technologies
Report: "System Errors in US-EAST-2"
Last update: On 23 May 2025 at 23:00 UTC, Wasabi’s us-east-2 (Manassas) vault began seeing a large increase in S3 API traffic, straining regional resources with excessive internal connections. This sustained increase in traffic to the region was outside normal operating conditions. At 08:12 UTC on 24 May 2025, the high resource usage began to impact the vault, causing an increase in 5XX errors for API calls to S3 buckets. Upon investigation, our Operations Team found that a small number of accounts were consuming a large portion of the available resources within the region, causing the increase in HTTP 5XX error responses for other customers. Our Operations Team limited the available connections for these accounts, which allowed the overall number of active connections to the region to drop substantially. By 08:32 UTC on 24 May 2025, the region was back to normal operation.
This incident has been resolved. Please contact support@wasabi.com if you continue to see errors related to this incident.
We have made changes, traffic is operating normally, and we are monitoring the systems.
The issue has been identified, and we are making appropriate changes to the system.
We are currently experiencing system errors in our US-EAST-2 region. Customers may experience elevated HTTP 5XX error responses when interacting with their Wasabi bucket(s). We will update this page as we have more information.
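Elevated HTTP 5XX responses like those described above are transient, so clients typically handle them with retries and exponential backoff rather than treating them as hard failures. A minimal sketch of that pattern (the helper name and response shape are illustrative, not Wasabi tooling):

```python
import random
import time

RETRYABLE = {500, 502, 503, 504}

def with_retries(request_fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call request_fn() until it returns a non-5XX status or attempts run out.

    request_fn must return an object with a .status attribute (e.g. an HTTP
    response). Delays grow exponentially with a little jitter, so many clients
    retrying at once do not hammer an already strained region in lockstep.
    """
    for attempt in range(max_attempts):
        response = request_fn()
        if response.status not in RETRYABLE:
            return response
        if attempt < max_attempts - 1:
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            sleep(delay)
    return response  # last response, still an error after all attempts
```

In practice, SDKs for S3-compatible storage (such as boto3) already provide configurable retry modes that implement this behavior.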
Report: "WACM / Wasabi Account Control API - System Issues"
Last update: On 28 May 2025 at 19:00 UTC, our Engineering and Operations Teams began investigating subaccount creation failures within our WACM/WAC API services. These failures resulted in HTTP 5XX error responses when clients requested the creation of a subaccount via the API or the WACM UI. Upon investigation, it was found that a recently deployed software update contained a bug that left the API unable to reach an internal service due to query timeouts. Once the bug was discovered, our Engineering Team adjusted an internal parameter, allowing API requests to succeed. On 29 May 2025 at 03:30 UTC, the WACM/WAC API services were returned to normal operation.
The WACM/WAC API service is back to normal and we are marking the event as resolved. Once the analysis is completed by our internal team, we will update the status page with the Post Mortem information.
We identified the cause and fixed the issue. The WACM and WAC API are back to normal and no errors should occur now. We will continue monitoring the System status.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating system errors when users attempt to create subaccounts through WACM or the Wasabi Account Control API, resulting in Internal Error responses.
Report: "System Errors in AP-NORTHEAST-1"
Last update: On 21 May 2025 at 02:45 UTC, Wasabi’s Tokyo vault was impacted by system errors, causing client API calls to S3 buckets to return 5XX errors. The issue was due to a hardware fault on a backend server in the Tokyo region, which impacted the cluster of which it was a member. Our Operations Team identified the failure and took corrective action by performing a cluster failover, which returned the service to an operational state at 10:55 UTC on 21 May 2025.
This incident has been resolved. Please contact support@wasabi.com with any outstanding questions.
A fix has been implemented and we are monitoring the results.
The issue has been identified and we are working towards a solution.
We are currently experiencing system errors in our AP-NORTHEAST-1 region. Customers may experience elevated HTTP 5XX error responses when interacting with their Wasabi bucket(s)/objects. We will update this page as we have more information.
Report: "Maintenance Activity Advisory for EU-WEST-3"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Wasabi Customers,

We want to inform you about an upcoming maintenance window in our London (eu-west-3) region. We will be conducting system work during the following period: Tuesday 27 May 2025, 12:00-16:00 UTC.

We expect no impact on service during this maintenance, but out of an abundance of caution, we wanted to inform customers of this activity. We understand the inconvenience this may cause and assure you that our team will work diligently to prevent disruptions. To stay informed about the progress of this maintenance activity, please subscribe to https://status.wasabi.com.

We apologize for any inconvenience this maintenance may cause and appreciate your patience and understanding as we strive to enhance our services. If you have any questions or concerns, please don't hesitate to reach out to our support team at support@wasabi.com. Thank you for your cooperation.

Wasabi Technical Support
Report: "Wasabi.com Try Free Sign-up Form Returning Server Error"
Last update: From approximately 13:45 to 17:15 UTC, customers using the Free Trial sign-up form on Wasabi.com may have been met with a “Server Error” when submitting their request. An error within one of our website modules impacted a portion of the service, preventing the forms from being submitted properly. That error has been resolved, and forms are now submitted and processed properly.
This issue has been resolved.
We have identified the issue with the sign-up form submission and have rolled back changes to restore functionality. This issue should now be resolved for all users; please reach out to support@wasabi.com with any remaining questions.
The Try Free Sign-up form on wasabi.com is currently returning Server Error responses upon submission. We are investigating this issue and will update the page here when we have more information.
Report: "Wasabi Storage Accounts Receiving 'StorageQuotaExceeded' Error Message"
Last update: From 2025-05-07 03:00 UTC to 2025-05-07 16:00 UTC, a subset of customers experienced data upload issues due to an incorrect storage-quota-exceeded flag applied to their subaccounts. Specifically, some subaccounts configured with no quota value were incorrectly marked as having exceeded their storage limit. As a result, these subaccounts were temporarily prevented from uploading data. This behavior was the result of a recent update to our billing system, which treated an empty storage quota value as a valid threshold of zero GB and triggered the quota-exceeded message. By 14:07 UTC, the problematic update was rolled back and the affected accounts were unflagged. By 16:00 UTC, all impacted subaccounts were verified as having been reset correctly and the service was returned to normal operational status.
The issue affecting some RCS and PayGo accounts with an incorrect 'StorageQuotaExceeded' error message has been resolved. Please reach out to support@wasabi.com if you require further assistance. NOTE: This entry has been updated to reflect that the incident affected not only RCS accounts but some PayGo accounts as well.
A fix has been implemented and we expect no new instances of this issue for any Wasabi accounts. If you continue to experience issues with your Wasabi account, please reach out to our Support Team at support@wasabi.com. NOTE: This entry has been updated to reflect that the incident affected not only RCS accounts but some PayGo accounts as well.
We have isolated this issue and are working to ensure all impacted accounts are adjusted appropriately.
We are investigating 'Storage Quota Exceeded' error messages for RCS accounts; our teams are working to resolve this issue.
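The root cause described in this report, an empty quota value being treated as a zero-GB limit, is a classic null-versus-zero confusion. A hypothetical sketch of the distinction (function and parameter names are illustrative, not Wasabi's billing code):

```python
def quota_exceeded(used_gb, quota_gb):
    """Return True only when a quota is actually configured and exceeded.

    quota_gb is None when no quota is set on the subaccount. Treating that
    None as 0 GB is the kind of bug described in the incident above: it
    flags every unlimited account as over its (nonexistent) limit.
    """
    if quota_gb is None:  # no quota configured: uploads are never blocked
        return False
    return used_gb > quota_gb
```

The key design point is that "no limit" and "a limit of zero" must remain distinct values all the way through the billing pipeline.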
Report: "System Errors in EU-CENTRAL-1 Region"
Last update: On 04 April 2025 from 06:20 UTC to 12:20 UTC, we experienced system errors in our eu-central-1 (Amsterdam) region. The issue affected the S3 service and impacted customers’ ability to read from and write to their Wasabi buckets within the region. Starting at 05:11 UTC, our Operations Team was notified of a spike in internal system read operations in the eu-central-1 region across multiple storage zones. Approximately 1 hour and 9 minutes after this spike, at 06:20 UTC, our storage sub-systems began returning errors for read and write requests, impacting customers. Wasabi’s Operations Team identified a recent upgrade to an internal system as responsible for the spike in storage read operations that eventually led to the storage system errors. Operations rolled back the upgrade to the previous stable release, which restored the storage system and, in turn, service for customers. By 12:20 UTC, all S3 services in the eu-central-1 region were operational.
This incident has been resolved.
We have identified and resolved the issue. All affected resources are back in service. We are currently monitoring the status. If you see any error, please reach out to support@wasabi.com for assistance.
We are continuing to investigate this issue.
We are currently investigating system errors in our EU-CENTRAL-1 region. Our team is working to isolate the issue and provide a resolution.
Report: "System Errors in EU-WEST-3 Region"
Last update: On 08 April 2025 from 20:05 UTC to approximately 20:16 UTC, Wasabi services in our London (eu-west-3) region were unavailable due to an internal routing issue which impacted the region's ability to process incoming customer requests. At 20:05 UTC, our Operations Team was notified by our internal monitoring service that customer traffic was being impacted within the region. Upon investigation, it was found that internal DNS routing within the eu-west-3 region was failing to resolve internal hosts. Wasabi's Operations Team re-deployed the DNS stack, and once re-deployment was completed at 20:16 UTC, customer connections to the region were restored.
Report: "System Errors in AP-NORTHEAST-2 Region"
Last update: On 08 April 2025 from 14:13 UTC to approximately 14:39 UTC, Wasabi services in our Osaka (ap-northeast-2) region were unavailable due to an internal routing issue which impacted the region's ability to process incoming customer requests. At 14:13 UTC, our Operations Team was notified by our internal monitoring service that customer traffic was being impacted within the region. Upon investigation, it was found that internal DNS routing within the ap-northeast-2 region was failing to resolve internal hosts. Wasabi's Operations Team re-deployed the DNS stack, and once re-deployment was completed at 14:39 UTC, customer connections to the region were restored.
This incident has been resolved.
We have identified and resolved the issue. All affected resources are back in service. We are currently monitoring the status. If you see any error, please reach out to support@wasabi.com for assistance.
We are currently investigating system errors in our AP-NORTHEAST-2 region. Our team is working to isolate the issue and provide a resolution.
Report: "System Errors to access Wasabi services"
Last update: On 27 March 2025 from 10:09 UTC to approximately 10:14 UTC, Wasabi services in our Ashburn (us-east-1), Manassas (us-east-2), Hillsboro (us-west-1), Plano (us-central-1), Amsterdam (eu-central-1), Tokyo (ap-northeast-1), Osaka (ap-northeast-2), London (eu-west-1), and Frankfurt (eu-central-2) regions were unavailable, impacting customer traffic and Web Console connectivity. Wasabi Engineering and Operations teams were notified by our automated monitoring systems of a spike in connections between internal service worker nodes. This spike led to an inability to open new connections within the services. Wasabi’s Operations Team manually restarted the services, which corrected the issue and allowed client connections to resume successfully. By 10:14 UTC, all internal services had been restarted and all services returned to normal operating conditions.
Between 10:09 and 10:14 UTC, we experienced errors connecting to the Wasabi Web Console, STS, and IAM services. The issue was temporary and self-resolved, and our Operations Team has validated the recovery. There is no further impact, and all services are now running without issue.
Report: "System Errors when connecting to Wasabi services"
Last update: On 17 March 2025 from 10:34 UTC to approximately 10:50 UTC, Wasabi services in our Ashburn (us-east-1), Manassas (us-east-2), Hillsboro (us-west-1), Plano (us-central-1), Amsterdam (eu-central-1), Tokyo (ap-northeast-1), Osaka (ap-northeast-2), London (eu-west-1), and Frankfurt (eu-central-2) regions were unavailable due to an issue which impacted the ability of internal services to communicate effectively across all Wasabi services. Wasabi Engineering and Operations teams were notified by our automated monitoring systems at 10:34 UTC of a spike in connections between internal service worker nodes. This spike led to an inability to open new connections within the services. Wasabi’s Operations Team noted that automatic restarts of the services did not clear the issue, prompting a manual restart of internal service nodes, which corrected the issue and allowed client connections to resume successfully. By 10:50 UTC, all internal services had been restarted and all services returned to normal operating conditions.
Between 10:40 and 10:50 UTC, we experienced errors connecting to the Wasabi Web Console, STS, and IAM services. Our Operations Team has taken corrective action; there is no further impact, and all services are now running without issue.
Report: "Wasabi.com experiencing HTTP 5XX errors"
Last update: From 09:00 to 11:00 UTC on 19 February 2025, visitors experienced HTTP 5XX errors when attempting to view our Wasabi.com website. These errors were caused by an update to the site, which resulted in multiple modules not loading properly. Once the issue was identified, the changes were reverted, and the website was fully operational around 11:00 UTC.
Report: "Wasabi Account Control Manager Console page not loading"
Last update: On 05 February 2025 from 05:06 UTC to approximately 06:04 UTC, the Wasabi Account Control Manager Console (WACM) was unavailable, returning 404 or 502 errors when clients tried to access the WACM URL. The issue was caused by an internal component restart that was delayed by a database lock held by a backup routine; as a consequence, it took additional time for the service to be re-established. Once the internal team stopped the backup routine, the restart completed as expected, bringing the service back up and running normally.
This incident has been resolved.
The fix for the 404 and 502 errors when accessing the Wasabi Account Control Manager Console has been completed, and the service has returned to normal. We are currently monitoring it.
Our teams have identified the issue and are working on a fix.
We are currently investigating an issue accessing the Wasabi Account Control Manager Console page. Our customers might experience 404 and 502 errors when accessing the Wasabi Account Control Manager Console.
Report: "Issues Accessing WACM Console"
Last update: On 06 January 2025 from 05:54 UTC to 08:02 UTC, the Wasabi Account Control Manager Console (WACM) was unavailable, returning HTTP 4XX or 5XX errors when clients tried to access the WACM URL. The issue was caused by a failure in restarting the internal services responsible for maintaining web service availability, worsened by delays from heavy load on the database the service relies on. Once the team identified a database lock as the cause of the slowness and the extended time required to re-establish the service, the long-running query causing the delay was terminated. The services were then restarted successfully, restoring the web service to its normal state.
This incident has been resolved.
We are currently investigating issues with accessing the WACM Console. Users may experience HTTP 404 or 502 error responses.
Report: "System Errors in EU-SOUTH-1 Region"
Last update: On 10 January 2024 from 15:57 UTC to 16:45 UTC, we experienced an issue where clients received 4XX level errors while accessing S3 buckets in the eu-south-1 (Milan) region. During an internal configuration change, a software bug caused the invalidation of TLS certificates used by internal servers to establish mutual TLS handshakes with load balancers. Consequently, the load balancers could not successfully connect to the internal servers, leading to 4XX errors returned to client requests. Wasabi’s Engineering and Operations teams identified the issue and manually reconfigured the TLS certificates to restore functionality. A fix was deployed to the eu-south-1 region to address the underlying bug and to prevent this problem from happening again.
This incident has been resolved.
We have identified and corrected the issue, and our EU-SOUTH-1 (Milan) region is now back to a fully operational state. We will continue to monitor the region.
We are currently investigating issues in our EU-SOUTH-1 (Milan) region.
Report: "WACM Utilization Data updates may be delayed"
Last update: All systems have been performing as expected throughout the Holiday period and have shown no issues. Returning the system to normal operational status.
As a result of a recent database update, Wasabi’s invoice generation was paused and then reactivated on December 23, 2024. The system is being managed and monitored to ensure that data continues to be properly reported through the remainder of the month (and thus the year-end). If you use the WAC API to obtain your Account Utilization data daily in order to prepare your own invoices, we recommend scheduling data collection after 05:00 UTC to allow the process to complete. Please report any problems you encounter with the display of Account Utilization data in WACM to support@wasabi.com.
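Scheduling a daily collection job to always land on or after the 05:00 UTC cutoff can be sketched with the standard library; the cutoff hour is the only fact taken from the advisory above, and the function name is illustrative:

```python
from datetime import datetime, time, timedelta, timezone

CUTOFF = time(5, 0)  # 05:00 UTC, per the advisory above

def next_collection(now):
    """Return the next UTC datetime at which a daily WAC API collection
    should run, so utilization data for the previous day is fully populated.

    If today's 05:00 UTC cutoff has not yet passed, collect today at the
    cutoff; otherwise the next run is tomorrow at the cutoff.
    """
    today_cutoff = datetime.combine(now.date(), CUTOFF, tzinfo=timezone.utc)
    if now >= today_cutoff:
        return today_cutoff + timedelta(days=1)
    return today_cutoff
```

A cron-style scheduler achieves the same thing with an entry pinned to 05:00 UTC or later.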
Report: "System Errors in US-EAST-1 & US-EAST-2 Regions"
Last update: On 7 December 2024 from 15:22 UTC to 21:30 UTC, Wasabi experienced a loss-of-power event in our US-EAST-1 and US-EAST-2 data centers. At 15:22 UTC, our Operations Team noticed that infrastructure within the US-EAST-1 and US-EAST-2 regions failed to respond to standard smoke tests and monitoring tools, and a review of activity in the regions indicated a full loss of power to all server racks and infrastructure within the building. At 16:00 UTC, Wasabi received confirmation from Iron Mountain that power had been lost to the entire building. By 16:15 UTC, Iron Mountain Operations began restoring power to an incremental number of racks for Wasabi’s infrastructure, allowing our Operations Team to run systematic health checks across all server nodes, and by 21:30 UTC we had confirmation that all systems were running optimally and both the US-EAST-1 and US-EAST-2 regions were fully operational.
Services in both regions have been restored. Please reach out to support@wasabi.com if you see any issues related to this incident.
All systems are now back online and fully operational. We are continuing to monitor the regions and will update this page as we have more information.
Power has been fully restored to the us-east-1 region and we are continuing to work on bringing all systems back online. We will update this page as we have more information.
Power has been fully restored to the us-east-2 region. We are continuing the process of restoration to the us-east-1 region and will update this page as we have more information.
We are continuing the process of restoring power and bringing systems back up. We will update this page as we have more information.
We have identified a power issue at the data center which is in the process of being restored. We will update this page as we have more information.
We are continuing to investigate the system errors in the us-east-1 and us-east-2 regions. We will update this page as we have more information.
We are currently investigating an increase in 500 level HTTP responses on customer traffic to the us-east-1 and us-east-2 regions.
Report: "Degraded performance in the EU-West-2 (Paris) region"
Last update: On Sunday 17 November 2024 at approximately 00:14 UTC, an incident occurred in the eu-west-2 (Paris) region, resulting in elevated HTTP 5XX error responses and slow response times for buckets, impacting some customers. During an upgrade to the region, an error caused the service to allocate more connections than needed, overloading one of the service components. This caused an increase in errors returned for the subset of customer buckets served by this component. To address the situation, the Operations Team reverted the update to restore service. By 15:11 UTC on 17 November 2024, service in the eu-west-2 region was fully restored to normal operational status. The error was corrected and the update was subsequently re-deployed successfully.
This incident is now resolved.
We have identified the issue and are making changes to address the traffic. We are continuing to monitor the system as well.
Our Operations Team continues to work with the network traffic team to isolate the problem. We will update this page with more information soon.
We are currently investigating degraded performance in the EU-West-2 (Paris) region
Report: "System Errors affecting some Wasabi Regions"
Last update: After monitoring, we have confirmed the issue is resolved and there is no further impact on our services.
We have isolated the cause of the "Account Connection Limited" errors, and Engineering has made changes to restore the service. We will update this page with any further information.
We are currently investigating an increase in "Account Connection Limited" responses from the S3 system on a number of Wasabi Storage Regions. We will update this page as we have more information.
Report: "Network issues affecting all regions"
Last update: On 30 November 2024 from 00:17 UTC to approximately 03:00 UTC, Wasabi experienced an issue where client connection attempts to Wasabi Cloud Storage and the Web Console were impacted across all storage regions, causing all API calls to return HTTP 5XX errors to clients. The cause of this service degradation was an error within the internal messaging queue service responsible for taking client requests and routing them to our global database cluster; the service failed to route these client requests appropriately across all nodes. Wasabi’s Engineering and Operations teams mitigated the issue by manually configuring internal servers to route requests across multiple database instances, allowing the system to recover and respond to requests appropriately. Once service was restored, the teams corrected the root cause by recovering the internal messaging queue and resuming the automated handling of client requests.
This incident has been resolved.
The system has been restored to fully operational in all regions. We will populate the Postmortem section of this incident with more complete details as soon as possible.
We have isolated the issue and resolution is underway. We expect recovery to complete in all regions except us-east-1. Some level of error responses is still being seen in Ashburn, and we continue to work on that. We will continue to monitor all regions and update their status, as well as that of us-east-1, as we make progress.
We continue to investigate the issue. Some traffic continues to receive errors.
We are currently investigating reported network errors across all regions. Access to both Console and S3 services may return errors. We will update this page as we have more information.
Report: "System Errors in EU-CENTRAL-2 Region"
Last update: From 19:30 UTC on 2 December 2024 to 15:11 UTC on 3 December 2024, customers may have experienced elevated HTTP 5XX error responses and slow response times for buckets in our eu-central-2 (Frankfurt) region. Retries to the region were successful and the region is now fully operational.
Report: "Network issues affecting all regions"
Last update: We experienced an issue that interrupted services for a brief period at 23:41 UTC on 1 December 2024. The system configuration was adjusted and service resumed at approximately 23:51 UTC.
Report: "Wasabi Management Console login issue"
Last update: This issue was resolved by 13:55 UTC.
We have identified and resolved the issue. The access to our Management Console is back to normal and operational again. We will continue to monitor our services.
We are currently investigating intermittent issues affecting Wasabi Management Console login.
Report: "Power Issues in the EU-CENTRAL-1 region"
Last update: From 2024-10-31 at 21:10 UTC to 2024-11-01 at 04:00 UTC, a subset of customer buckets was unavailable due to an unexpected power distribution issue during a planned power maintenance window in the EU-CENTRAL-1 region. The maintenance required some servers to run on a single power source, and when the switch occurred, the power infrastructure could not keep up with the power draw from some of the infrastructure racks, resulting in those racks powering down. The impact included failures of read/write operations and billing for the affected buckets. Our Operations Team worked with the Equinix data center team to restore power, and service availability was re-established once the maintenance was over and power was completely restored.
The incident has been resolved. Please reach out to support@wasabi.com if you see any issues related to this incident.
The power distribution issue has been resolved and all affected resources are back in service. We are currently monitoring the status. If you see any error, please reach out to support@wasabi.com for assistance.
We have identified the issue and the impacted resources. Teams continue to work to restore service.
A facility power distribution issue is impacting several services in the eu-central-1 region. Direct Connect links are impacted as well as a subset of buckets. We continue to work with the facility to restore service.
We have reports of power issues in our EU-CENTRAL-1 region and some users may be experiencing errors when interfacing with their Wasabi buckets hosted there. We will update our status page with more information as it becomes available.
Report: "Wasabi Account Control Manager Console views for Control Accounts, Sub-Accounts and Invoices not loading"
Last update: From 2024-10-31 00:42 UTC to 2024-10-31 10:03 UTC, we experienced an issue where Control Account, Sub-Account, and Invoice data in WACM did not load properly when clients attempted to view it. Upon investigation, our Engineering Team found that the nodes responsible for fetching and delivering this data were in a bad state. Once the nodes were restored to a functional state, our team began re-indexing the data so it could again be served via the WACM portal. At 10:03 UTC on 31 October 2024, the WACM data was fully available to customers.
This incident has been resolved.
The issue has been identified and a fix is being implemented.
While some of the views have been restored, we continue to work to resolve the issue. This page will be updated as more progress is made.
We are currently investigating this issue and expect to have these views restored shortly.
Report: "System Errors in US-CENTRAL-1"
Last update: In Wasabi’s us-central-1 region, an incident occurred where a storage system I/O module became inoperable and prevented access to some disks that it served. Simultaneously, the software managing these disks improperly took multiple other disks offline. As a result of this incident, some objects on these disks are no longer accessible. The software problem that triggered this event has now been addressed and Wasabi is working on the appropriate remedy with affected customers and partners.
Report: "System Errors in US-WEST-1 Region"
Last update: From 2024-09-25 13:49 UTC to 2024-09-25 13:54 UTC, an incident occurred in the Hillsboro region, resulting in performance degradation for customers and impacting their ability to make S3 API calls against buckets in the region. During an upgrade to the region, the Operations team encountered an issue where a misconfiguration caused a service to falsely report as ready for operation. This resulted in service readiness failures and triggered failures within the region. To address the situation, the Operations Team corrected the misconfiguration and restarted the affected service. By 13:54 UTC, the S3 service in the US-WEST-1 region was fully restored, and normal operations resumed.
This incident is now resolved. Please reach out to support@wasabi.com if you continue to see any errors related to this incident.
We are continuing to investigate this issue.
Our Operations Team has corrected the issue, and the US-WEST-1 region is back in an operational state. We will continue to monitor the region.
Our Operations Team has identified the issue and is working to correct the system errors.
We are continuing to investigate this issue.
We are currently investigating system errors occurring in our US-WEST-1 region.
Report: "System Errors in CA-CENTRAL-1"
Last update: From 2024-09-18 17:55 UTC to 2024-09-18 18:08 UTC, we experienced an issue where client connection attempts to Wasabi Cloud Storage were impacted during an upgrade to our services in the CA-CENTRAL-1 region. The activity, which began on 2024-09-18 at 13:00 UTC, was not expected to cause any service impact. During the maintenance task, two internal software components had a networking port conflict, delaying the restart of our load balancer network component. Once our Operations Team identified the condition and the cause of the failure, immediate corrective action was taken and service availability was re-established.
This incident is now resolved. Please reach out to support@wasabi.com if you continue to see any errors related to this incident.
Our Operations Team identified and corrected the cause of the system errors. Performance to the region is restored and we will continue to monitor the region.
We are currently experiencing system errors in our CA-CENTRAL-1 region.
Report: "System Errors in US-EAST-1 Region"
Last update: From 2024-09-16 04:30 UTC to 2024-09-16 12:30 UTC, we experienced an issue within our US-EAST-1 region causing customers to receive 5XX errors and a reduced ability to ingest data into their buckets in the region. The user-servers reached capacity with logs, resulting in a failure of our streaming service. Because the user-servers were busy writing logs, they had reduced capacity to handle requests. Additionally, messages that could not be published were written to disk, further increasing I/O operations on the system. By 12:30 UTC, our Operations team had taken corrective action by emptying the streaming queue and restarting the user-server services. After these actions were performed, ingest to our US-EAST-1 region was fully restored.
This issue is resolved. Customers may have seen elevated levels of HTTP 500 responses between 04:25 and 12:40 UTC on 16 September 2024.
We have identified and resolved the issue. We are continuing to monitor services.
We are currently investigating an increase in 500-level HTTP responses on customer traffic to the us-east-1 region.
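Transient 5XX responses like the ones in this incident are typically handled on the client side with retries and exponential backoff, so that requests succeed once the region recovers without hammering it in the meantime. A minimal, library-agnostic sketch (the `send` callable and its `status` attribute are assumptions for illustration, not a Wasabi API):

```python
import random
import time

def with_backoff(send, max_attempts=5, base_delay=0.5):
    """Call `send()` until it returns a non-5XX status or attempts run out.

    `send` is any zero-argument callable returning an object with a
    `status` attribute. Delays grow exponentially with full jitter so
    many clients retrying at once do not synchronize into a retry storm
    against a recovering region.
    """
    for attempt in range(max_attempts):
        response = send()
        if response.status < 500:
            return response
        if attempt < max_attempts - 1:
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return response  # last response, still 5XX after all attempts
```

Most S3 SDKs (for example, botocore's retry modes) implement this pattern already; the sketch only shows the shape of the behavior.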
Report: "System Errors in EU-SOUTH-1 Region"
Last update: This incident has been resolved.
We are currently investigating issues in our EU-SOUTH-1 region.
Report: "System Errors Affecting All Regions"
Last update: Between 10:32 UTC 2024-08-06 and 20:40 UTC 2024-08-07, we experienced three outages affecting both S3 and user services in all regions. Starting at 10:32 UTC 2024-08-06, our queueing service reached full capacity, which impacted our database cache and caused it to become unresponsive. The Wasabi Operations team initiated a restart of the primary database in an attempt to clear out all stale connections while simultaneously clearing the queueing service queue. When this action failed to bring the database into a fully operational state, the secondary database instance was promoted to primary. At 11:20 UTC the S3 service was fully operational again. Between 13:17 UTC and 13:23 UTC, the database was restarted once more by Operations in order to fully incorporate our queueing service library. Between 02:55 UTC and 03:35 UTC on 2024-08-07, a second event occurred when our Operations team identified a configuration issue within the queueing service and the previously promoted secondary database instance. This configuration issue was causing timeouts on user services such as our Web Console, WAC API, and WACM interface. Our Operations team then promoted the original primary database back to production, alleviating these issues. There was no impact to S3 services during this event. Between 20:30 UTC and 20:44 UTC on 2024-08-07, a third event occurred when an automation cluster could no longer be seen by our automation service, causing a small decrease in accepted traffic to our S3 vaults. Our Operations team then recreated and redeployed this cluster, fully restoring the S3 service.
The operations team has resolved this issue and restored service to normal levels. We will post a postmortem shortly.
We are currently investigating issues with logging into our Web Console and errors in Wasabi regions.
Report: "Intermittent System Errors Affecting Console Access"
Last update: Between 10:32 UTC 2024-08-06 and 20:40 UTC 2024-08-07, we experienced three outages affecting both S3 and user services in all regions. Starting at 10:32 UTC 2024-08-06, our queueing service reached full capacity, which impacted our database cache and caused it to become unresponsive. The Wasabi Operations team initiated a restart of the primary database in an attempt to clear out all stale connections while simultaneously clearing the queueing service queue. When this action failed to bring the database into a fully operational state, the secondary database instance was promoted to primary. At 11:20 UTC the S3 service was fully operational again. Between 13:17 UTC and 13:23 UTC, the database was restarted once more by Operations in order to fully incorporate our queueing service library. Between 02:55 UTC and 03:35 UTC on 2024-08-07, a second event occurred when our Operations team identified a configuration issue within the queueing service and the previously promoted secondary database instance. This configuration issue was causing timeouts on user services such as our Web Console, WAC API, and WACM interface. Our Operations team then promoted the original primary database back to production, alleviating these issues. There was no impact to S3 services during this event. Between 20:30 UTC and 20:44 UTC on 2024-08-07, a third event occurred when an automation cluster could no longer be seen by our automation service, causing a small decrease in accepted traffic to our S3 vaults. Our Operations team then recreated and redeployed this cluster, fully restoring the S3 service.
This incident has been resolved.
We have identified and resolved the issue. Access to our Management Console is back to normal and operational again. We will continue to monitor our services.
We are currently investigating intermittent issues affecting Wasabi Management Console access. Our team is working to resolve this issue. There is no impact on S3 API calls.
Report: "Wasabi Management Console MFA authentication issue seen"
Last update: Between 10:32 UTC 2024-08-06 and 20:40 UTC 2024-08-07, we experienced three outages affecting both S3 and user services in all regions. Starting at 10:32 UTC 2024-08-06, our queueing service reached full capacity, which impacted our database cache and caused it to become unresponsive. The Wasabi Operations team initiated a restart of the primary database in an attempt to clear out all stale connections while simultaneously clearing the queueing service queue. When this action failed to bring the database into a fully operational state, the secondary database instance was promoted to primary. At 11:20 UTC the S3 service was fully operational again. Between 13:17 UTC and 13:23 UTC, the database was restarted once more by Operations in order to fully incorporate our queueing service library. Between 02:55 UTC and 03:35 UTC on 2024-08-07, a second event occurred when our Operations team identified a configuration issue within the queueing service and the previously promoted secondary database instance. This configuration issue was causing timeouts on user services such as our Web Console, WAC API, and WACM interface. Our Operations team then promoted the original primary database back to production, alleviating these issues. There was no impact to S3 services during this event. Between 20:30 UTC and 20:44 UTC on 2024-08-07, a third event occurred when an automation cluster could no longer be seen by our automation service, causing a small decrease in accepted traffic to our S3 vaults. Our Operations team then recreated and redeployed this cluster, fully restoring the S3 service.
This incident has been resolved.
We have identified and resolved the issue affecting the console and bucket/sub-user creation in the system. All systems are operational and we will continue to monitor the services.
We continue to investigate this issue. This is also impacting the ability to create buckets and sub-users in the system. Access to current buckets is unaffected.
Customers attempting to authenticate to the Wasabi Management Console using MFA are still impacted following this morning's Service Incident. We are working to restore this access. S3 connectivity is not impacted by this.
Report: "System Errors Affecting All Regions"
Last update: From 2024-06-19 00:57 UTC to 2024-06-19 04:36 UTC, we experienced issues in all Wasabi regions affecting S3, Wasabi Account Control (WAC) API, and Console services. Our WAC API and Console services were affected between 00:57 UTC - 02:45 UTC and 03:55 UTC - 04:36 UTC, while our S3 service was affected between 01:55 UTC - 02:45 UTC. At 00:57 UTC our Operations Team noticed connection issues between our Console and WAC API services and our global database, with no impact to other (S3) services. While debugging these connection issues, it was noted at 01:55 UTC that S3 services were also affected. Further investigation by the Operations Team showed that our queueing service within the global database had failed, leaving these services unable to communicate with the database. To correct this issue, Operations restarted both the global database instance and the queueing service to restore connections for the affected services. Once both the database and queueing service had re-initialized at 02:45 UTC, services were restored. At 03:55 UTC, the Console and WAC API began to exhibit the same symptoms as before, with a loss of connectivity to the database. Upon further investigation, the Operations Team noted that the queueing service had run low on resources. Operations increased the resources available to the service, and at 04:10 UTC the services were brought back online. At 04:36 UTC it was confirmed that all services were operational.
This incident is now resolved. Please reach out to support@wasabi.com if you continue to see any errors related to this incident.
Our Operations team has taken action to resolve the issues. Currently all services are up and we are continuing to monitor them.
We are currently facing issues with logging into the Wasabi Management Console and Wasabi Account Control API. Our teams are working to resolve them.
Our Operations team has identified the issue and actions have been taken to address it. Currently all services are up and we are continuing to monitor them.
We are experiencing system errors in all regions.
This is also affecting our Wasabi Account Control API. Our Operations Team is investigating this issue.
We are currently investigating issues with logging into our Web Console.
Report: "System Errors in US-EAST-1"
Last update: From 2024-06-10 18:25 UTC to 2024-06-10 18:43 UTC, we experienced an issue in our US-EAST-1 (Ashburn) vault which resulted in degraded performance in the region. This impacted customers’ ability to send and retrieve data to/from their buckets in the region and to access services such as the Wasabi Web Console and Wasabi Account Control API. At around 17:48 UTC, the Wasabi Operations team began deploying updates in the US-EAST-1 region. After the update was completed and the process of restarting internal services had started, one service failed, causing communication between our client servers and database to fail. Our Operations Team removed the applied configuration of the failed service and redeployed it. After this action, the internal services came up successfully and communication between our client servers and database was restored, restoring service in the region. At 18:43 UTC all services in the US-EAST-1 region were operational.
This incident is now resolved. Please reach out to support@wasabi.com if you continue to see any errors related to this incident.
Our Operations Team identified and corrected the cause of the system errors. Performance to the region is restored and we will continue to monitor the region.
We are currently experiencing system errors in our US-EAST-1 region.
Report: "Degraded performance in US-EAST-1 and US-EAST-2 regions"
Last update: From 2024-05-30 15:36 UTC to 2024-05-31 05:00 UTC, we experienced elevated temperatures in our us-east-1 and us-east-2 regions. The issue was due to a cooling system failure in one of the Iron Mountain Datacenter (IMDC) buildings which hosts our storage sub-system and database servers. This failure caused temperatures to pass a safe operating threshold, causing systems to shut down involuntarily in order to prevent any damage. Between 2024-05-30 17:00 UTC and 2024-05-31 05:00 UTC, our Operations Team worked on each server rack to bring each individual component back up safely, ran integrity checks on the hardware, and replaced faulty equipment. At 2024-05-31 05:00 UTC, our services were returned to a fully operational status.
Services in our us-east-1 and us-east-2 regions have been restored and all faults related to the data center cooling issues have been mitigated. For any issues, please reach out to our Support Team at support@wasabi.com
Services to our us-east-1 and us-east-2 regions have been restored. Out of an abundance of caution we will continue to monitor the regions throughout the weekend. For any issues, please reach out to our Support Team at support@wasabi.com
Systems have been restored to operational status. We continue to monitor these services. If you experience any issue please reach out to our support team.
We are continuing the restoration of the impacted components. A large number of servers have been rebooted and restored, and we are working on the remaining servers in both regions. We anticipate the full process will complete within 4 to 8 hours. We will continue to update our status page.
A recovery operation is currently underway to bring back the impacted systems. This will take between 6 and 12 hours to complete. We will continue to post updates here as progress is made.
We are seeing a large number of HTTP 500-level errors being returned to client requests due to this incident. Please check the status page regularly to receive the latest updates.
Wasabi's Operations Team has been informed that Iron Mountain Datacenters in the us-east-1 and us-east-2 regions are experiencing a cooling issue, impacting the operating temperatures of Wasabi server hardware in those regions.
Report: "Login issues with Wasabi Management Console"
Last update: On 25 March 2024, from 15:02 UTC to 15:53 UTC, Wasabi experienced issues with logging into the Wasabi Web Console, which returned the error message "10000ms limit exceeded" to users. During this timeframe, customers may have experienced issues logging into the Wasabi Web Console and performing actions that rely on this application, such as viewing billing information and SSO authentication. The Wasabi Operations Team noted that the cause of this issue was a large influx of incoming client connections. Action was taken by increasing the number of servers available to handle these requests, which allowed our Operations Team to restore connections to the affected services.
This incident has been resolved.
We are currently investigating issues with logging into the Wasabi Management Console.
Report: "Issues in US-EAST-1"
Last update: From 12 May 2024 at 23:56 UTC until 13 May 2024 at 00:22 UTC, Wasabi had an issue in the US-EAST-1 (Ashburn) vault which resulted in degraded performance in the region. This impacted customers’ ability to send and retrieve data to/from their buckets in the region. The queueing service in the US-EAST-1 region failed to accept new connections due to a memory overload. This caused issues with communication to our global database cache, which resulted in an inability to serve customer requests. Our Operations Team restarted the global database cache and queueing service, which restored connections for the vault. At 00:22 UTC on 13 May 2024, all services were restored to the US-EAST-1 region.
Between 23:56 and 00:22 UTC we saw issues with accepting incoming client requests in our US-EAST-1 region. Our Operations Team has resolved this issue and the region is now operating normally.
Report: "Performance issues with US-EAST-1 region"
Last update: On 07 May 2024 at 15:40 UTC, Wasabi had an issue in the US-EAST-1 (Ashburn) vault which resulted in severely degraded performance in the region. This impacted customers’ ability to send and retrieve data to/from their buckets in the region. At around 15:40 UTC, the Wasabi Operations team was deploying updates in the US-EAST-1 region, which caused our queueing service to fail to accept new connections due to a memory overload. The queueing service failing to accept new connections caused issues with communication to our global database cache, which resulted in an inability to serve customer requests. To mitigate this issue, our Operations Team found it necessary to restart the global database cache and queueing service. Once the queueing service was restarted, it was allotted additional memory resources to avoid another overload scenario while the deployment was completed. The restart of both services restored connections for the vault. At 16:02 UTC on 07 May 2024, all services were restored to the US-EAST-1 region.
This incident has been resolved.
The issue affecting US-EAST-1 performance has been resolved. We will continue to monitor the region.
We are currently investigating performance issues with our US-EAST-1 region.
Report: "Issues Accessing Wasabi Services"
Last update: From 2024-04-03 11:55 UTC to 2024-04-03 12:25 UTC, we experienced a global outage of all Wasabi services, including S3, IAM, STS, WACM, WAC API, and Console, with our us-west-1 region having an extended outage lasting until 13:06 UTC. At 11:55 UTC our Operations Team was notified by our alerting system that our global database was beginning to experience memory-related issues outside of its normal operating range. Ten minutes later, at 12:05 UTC, the database crashed, causing all services and APIs to stop accepting incoming requests from clients and resulting in a global outage of all services. At 12:05 UTC, our Operations Team began the manual process of rebooting this database server instance to restore database operation. Once it was rebooted and all safety checks were completed, service was restored at 12:25 UTC to 12 of 13 regions, with our us-west-1 region being the outlier. The regional servers in our us-west-1 region had difficulty restoring their connection to our global database, so our Operations Team manually restarted these servers to restore the connection. By 13:06 UTC, services were restored in our us-west-1 region.
This issue has been resolved.
Services have been restored. We will continue to monitor all regions and services.
Our Operations Team has identified the cause of the issue and is working to correct it.
We are currently investigating issues with our Wasabi service.
Report: "Issues logging in with Wasabi Web Console"
Last update: On Tuesday, February 27, 2024 at approximately 20:30 UTC, users began seeing consistent timeout errors when attempting to log into the Wasabi Management Console. The Wasabi Operations Team noted that some services were not running properly due to a memory issue. These services were restarted, and normal Wasabi Management Console logins resumed on February 27, 2024 at approximately 21:30 UTC.
This incident has been resolved.
The Wasabi Web Console has been restored back to an operational status.
We are currently investigating issues with logging into the Wasabi Web Console.
Report: "Reserved Capacity Storage Accounts Receiving 'StorageQuotaExceeded' Error Message"
Last update: From 2024-01-30 00:30 UTC to 2024-01-30 16:10 UTC, Reserved Capacity Storage (RCS) customers who had exceeded their purchased storage quota received the error ‘StorageQuotaExceeded’ when attempting to upload data to their Wasabi bucket(s). Any quota imposed on an RCS account is a soft limit and should not have affected the account’s ability to upload data to its bucket(s). However, due to a bug in a recently deployed update, our billing subsystem at 00:30 UTC on 2024-01-30 flagged all RCS accounts that had exceeded their purchased capacity and imposed a hard limit cap on those accounts, preventing any PUT API requests. At 15:20 UTC on 2024-01-30, our Billing team isolated the issue and began developing a fix. At 15:50 UTC, the fix was deployed to our billing subsystem, and by 16:10 UTC all affected RCS accounts were back to fully operational status.
The issue affecting some RCS accounts with an incorrect 'StorageQuotaExceeded' error message has been resolved. The actual incident window was 3:00 AM to 11:00 AM EST.
We are currently investigating an issue where our Reserved Capacity Storage (RCS) customers are receiving an incorrect "StorageQuotaExceeded" error message.
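Because a quota error like `StorageQuotaExceeded` reflects account state rather than a transient fault, retrying the upload is futile until the quota condition clears; clients benefit from separating such errors from genuinely retryable ones. A hedged sketch below uses the error-payload shape botocore exposes (`{"Error": {"Code": ...}}`); the set of retryable codes is an illustrative assumption, not an authoritative list of Wasabi responses:

```python
# Illustrative assumption: a small set of transient S3-style error codes.
RETRYABLE_CODES = {"InternalError", "ServiceUnavailable", "SlowDown"}

def classify_s3_error(error_response):
    """Classify an S3-style error payload as 'retry' or 'fail'.

    `error_response` mimics botocore's ClientError.response shape:
    {"Error": {"Code": ..., "Message": ...}}. Quota errors such as
    StorageQuotaExceeded indicate an account-level condition, so
    retrying will not succeed until the quota state changes.
    """
    code = error_response.get("Error", {}).get("Code", "")
    return "retry" if code in RETRYABLE_CODES else "fail"
```

Surfacing quota errors to the caller immediately, instead of retrying them, also avoids adding load while an incident like this one is being resolved.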
Report: "We are currently investigating slow responses to IAM & WAC API"
Last update: From 2024-01-29 15:00 UTC to 2024-02-01 06:00 UTC, we experienced an issue with our IAM and WAC API operations that could result in slow response times to client requests. The root cause was a high number of duplicate requests to our system, which required multiple services to communicate and process the requests in the order in which they were received. This high rate of duplicate requests caused a backlog in our billing subsystem, which was unable to respond to requests at the rate at which they were being sent. Due to this bottleneck, all requests to the IAM and WAC APIs were delayed until they could be processed in order. While the root cause of this issue began at 15:00 UTC on 2024-01-29, our system was able to keep up with the request rate until approximately 12:30 UTC on 2024-01-31, when we were notified of an increasing delay in IAM and WAC API requests. At 16:00 UTC on 2024-01-31, our team identified and blocked the source of the duplicate requests. At 17:00 UTC on 2024-01-31, our Operations and Engineering Teams began the recovery process to complete all requests in the queue and streamline the acceptance of new requests to our systems. By 06:00 UTC on 2024-02-01, the recovery process was completed and all systems were fully operational, allowing normal response times for our IAM and WAC APIs.
We have resolved the issue as of 6:00 UTC. IAM and WAC API operation performance has been restored.
Our team has successfully isolated the root cause of the issue, and we are actively working on implementing a solution.
Our Engineering team continues to isolate the problem, we will update this page with more information soon.
We are currently investigating slow responses to IAM & WAC API operations.
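Since the backlog above was driven by a flood of duplicate requests, clients can reduce this kind of load by collapsing identical read requests within a short window into a single call. A minimal client-side sketch (the `fetch` callable, TTL value, and injectable clock are assumptions for illustration; this is not a Wasabi-provided mechanism):

```python
import time

def make_deduplicator(fetch, ttl=30.0, clock=time.monotonic):
    """Wrap `fetch(key)` so identical requests within `ttl` seconds
    reuse the cached result instead of hitting the API again.

    `fetch` is any single-argument callable; `clock` is injectable
    for testing. Suitable for idempotent reads (e.g. repeated status
    or listing calls), not for mutating requests.
    """
    cache = {}

    def cached_fetch(key):
        now = clock()
        hit = cache.get(key)
        if hit is not None and now - hit[0] < ttl:
            return hit[1]  # fresh cached result, skip the API call
        result = fetch(key)
        cache[key] = (now, result)
        return result

    return cached_fetch
```

Even a short TTL turns N identical calls per window into one, which is exactly the pressure this incident describes.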
Report: "Errors with Wasabi Account Control API"
Last update: On 16 January 2024 at 10:38 UTC, an internal firewall configuration issue prevented client connection requests to the Wasabi Account Control API, preventing users from logging into their Wasabi accounts. The internal connection failure prevented necessary Wasabi services from communicating successfully across all regional subnets. Our Operations team isolated the problem, made changes to the firewall ACLs, and re-advertised client connections. At 13:18 UTC, the configuration issue was resolved, and internal connections were able to communicate successfully.
Resolved: This incident was resolved by 12:50 UTC.
Update: We are continuing to monitor for any further issues.
Monitoring: Access/requests to the Wasabi Account Control API are now working. The cause was a network connection to one of the databases that has now been restored.
Investigating: We are experiencing issues with the Wasabi Account Control API. Any requests to this API service may experience delays and/or failures.
Report: "Error Logging into the Wasabi Console"
Last update: On 16 January 2024 at 10:38 UTC, an internal firewall configuration issue prevented client connection requests to the Wasabi Management Console, preventing users from logging into their Wasabi accounts. The internal connection failure prevented necessary Wasabi services from communicating successfully across all regional subnets. Our Operations team isolated the problem, made changes to the firewall ACLs, and re-advertised client connections. At 13:18 UTC, the configuration issue was resolved, and internal connections were able to communicate successfully.
This incident was resolved by 12:50 UTC
We are continuing to monitor for any further issues.
Access to the Wasabi Console is working. The cause was a network connection to one of the databases that has now been restored.
We are investigating reports of timeouts when attempting to access the Wasabi Management Console (console.wasabisys.com).
Report: "We are currently investigating system errors in US-EAST-1"
Last update: On Sunday, January 21st, 2024, at approximately 12:00 UTC, a hardware failure in our US-EAST-1 datacenter caused two database nodes within the region to reconnect to our cluster in a non-functional state. The introduction of these nodes to the database cluster caused the database to become unhealthy, leaving the cluster unable to service incoming client requests due to an exponential increase in unhealthy connections. By 16:30 UTC our Engineering Team had identified the cause of these unhealthy connections, and by 19:00 UTC had prepared a fix for this fault. By 20:15 UTC the fix was fully deployed across the cluster, bringing the connection state back to healthy and available for incoming client requests.
This incident is now resolved. Please reach out to support@wasabi.com if you continue to see any errors related to this incident.
We have identified the issue and made changes to our Database to mitigate the system errors. We are continuing to monitor the system.
Our Operations team continues to work with the S3 team to isolate the problem, we will update this page with more information soon.
We are currently investigating system errors in the US-EAST-1 region.
Report: "Error Accessing Wasabi systems"
Last update: Console services are restored and all services are now fully operational.
S3 services are restored and we continue to work on the full restoration of console services.
Service is restored and we are monitoring the system at this time.
A system error is impacting access to all sites. We'll continue to update as we isolate the issue.
We are investigating reports of a Network error when attempting to access the Wasabi Management Console (console.wasabisys.com).
Report: "Degraded Performance in CA-Central-1 Region"
Last update: We experienced a sudden increase in the number of connections to the system, which caused some customer traffic to fail during this time. We restored the system to normal levels of operation shortly thereafter, at around 21:30 UTC on 2023-11-10.
Report: "Investigating System Outage on US-West-1"
Last update: On 2 November 2023, a major power issue occurred in the us-west-1 region infrastructure. Multiple utility power sources failed and power was reverted to generators to maintain services. Several hours later, prior to the restoration of local power, the redundant generators failed and Wasabi services (and services from other SaaS and IaaS tenants at this location) were impacted. Our facility service providers worked with local power companies to first recover generator power and then restore local power. Once power was restored, we were able to restore services. The providers continue their investigations to isolate the cause of the initial power failures; faults have been identified in the generator system and repairs are scheduled to address that issue.
This incident has been resolved.
Our us-west-1 storage region is fully operational. We will continue to monitor the region.
Power to Wasabi’s us-west-1 storage region has been restored. Most components have been returned to service which has helped remediate the impact. We are currently working to restore the remaining affected services and return to fully operational status.
16:30 UTC 2023-11-02 - We are continuing to work with the US-West-1 data center to ensure power stability, while simultaneously continuing efforts to restore the S3 service to be fully operational. We expect some impact to customer traffic during this time.
We have restored power to the US-West-1 data center and are working on restoring the S3 service.
We are continuing to investigate this issue.
We have noticed an issue with the electrical feed into the us-west-1 region and are working to have it restored.
Report: "Issues With Billing Process"
Last update: From 2023-08-30 17:00 UTC to 2023-08-31 15:10 UTC, we experienced an issue where the billing process that runs each night failed to complete. The issue was the result of an upgrade to the database billing services path, which left the previously running billing job halted. A manual intervention was performed to properly stop the old billing job and start a new one. The billing job was restarted at 2023-08-31 13:00 UTC and monitored until completion. At 15:10 UTC on 31 August 2023, the billing job completed successfully.
17:00 UTC 2023-08-30 - We are currently experiencing issues with our Billing process.
15:10 UTC 2023-08-31 - This issue has been resolved.
Report: "WAC API returning 500 InternalError for some commands in ap-northeast-1 region"
Last update: At 12:59 GMT on 24 August 2023 we deployed an upgrade to our services in the ap-northeast-1 region to improve request latency there. Upon deployment, a key component that allows the WAC API to talk to our back-end billing system failed, breaking communication between the two services. Due to this communication failure, requests made through the ap-northeast-1 WAC API endpoint were impacted. At 03:49 GMT on 25 August 2023 the issue was identified and corrected, allowing WAC API services to resume in the region.
12:59 GMT 2023-08-24 - We are experiencing issues in the AP-NORTHEAST-1 region for some calls to our Wasabi Account Control (WAC) API with a 500 InternalError response.
03:49 GMT 2023-08-25 - This issue has been resolved.