Historical record of incidents for Linode
Report: "DBaaS - Degraded provisioning"
Last update: Provisioning is degraded due to an ongoing GCP issue. Google notes that some services are recovering. We are still monitoring.
Report: "Scheduled Network Maintenance - US-Central (Dallas)"
Last update: We will be performing an emergency network maintenance in our US-Central (Dallas) data center from 05:00 (UTC) until 09:00 (UTC) on Wednesday, May 28th. While we do not expect any downtime, a brief period of increased latency or packet loss may occur.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Scheduled Network Maintenance - US-East (Newark)"
Last update: We will be performing an emergency network maintenance in our US-East (Newark) data center beginning 28 May 2025 02:00 (UTC) until 28 May 2025 06:00 (UTC). During this time, users with resources in the US-East (Newark) region may experience intermittent issues when attempting to take actions like creating, updating, and deleting resources within this region.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Emerging Service Issue - [US-SEA]"
Last update: We haven't observed any additional issues with Linode job processing and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
A fix has been implemented and we are monitoring the results.
Our team is investigating an emerging service issue affecting Linode job processing (power on, reboot, shutdown, Cloud Firewall apply, etc.) in the US-SEA region. We will share additional updates as we have more information.
Report: "Emerging Service Issue - [US-SEA]"
Last update: Our team is investigating an emerging service issue affecting Linode job processing (power on, reboot, shutdown, Cloud Firewall apply, etc.) in the US-SEA region. We will share additional updates as we have more information.
Report: "Connectivity Issue - BR-GRU (São Paulo)"
Last update: On May 13, 2025, at approximately 16:10 UTC, our monitoring systems detected connectivity issues in our São Paulo region. As a result, customers experienced intermittent connection timeouts and errors across all services deployed in this region during the impact window. Investigations revealed that an uplink was accidentally disconnected during a maintenance event performed in this data center, which caused the disruptions observed by our customers. We restored the link at around 16:47 UTC, and after monitoring our systems for a period of time, we confirmed that the issue was fully resolved. To prevent this issue from occurring in the future, Akamai will review and enhance the processes followed during network maintenance. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our BR-GRU (São Paulo) data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to monitor for any further issues.
At this time we have been able to correct the issues affecting connectivity in our BR-GRU (São Paulo) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an issue affecting connectivity in our BR-GRU (São Paulo) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Report: "Scheduled Emergency Network Maintenance - US-Central (Dallas)"
Last update: We will be performing an emergency network maintenance in our US-Central (Dallas) data center from 05:00 (UTC) until 09:00 (UTC) on Wednesday, May 20th. While we do not expect any downtime, a brief period of increased latency or packet loss may occur.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Maintenance for Cloud Manager, API, and CLI"
Last update: This scheduled maintenance has been rescheduled to occur on May 21st.
The Linode Cloud Manager, API, and CLI will be offline for scheduled maintenance between 02:00 UTC and 05:00 UTC on May 14, 2025. During this window, running Linodes and related services will not be disrupted, but account management access and Support tickets will be unavailable. Please ensure that you complete critical or important jobs in the Cloud Manager or API before the maintenance window. We will update this status page once this event is complete and Linode customers have full access to all Linode services.
Customers who need assistance from Linode Support during this time will need to call 855-454-6633 (+1-609-380-7100 outside of the United States) to contact our Support team. Please note that our Support team will not be able to assist with issues related to the Cloud Manager or API, authenticate users to their accounts, or respond to Support tickets for the duration of the maintenance window. As soon as our Support team regains access, we will answer tickets in the order they are received.
Impacts on Current Linode Customers: Current Linode customers will not be able to log in to the Cloud Manager, interact with the API, or perform any administrative or management functions. This includes Create, Remove, Boot, Migrate, Back Up, Shut Down, etc. The scheduled maintenance will also impact the Kubernetes API. Dynamic aspects of LKE that rely on the Linode API will be impacted as well, including autoscaling, recycling, rebooting, attaching/detaching PVCs, NodeBalancer provisioning, and the ability to create new clusters. Cluster nodes and running workloads will not be affected.
Impacts on Users Trying to Create Linode Accounts / Awaiting Account Authentication: While the Linode Cloud Manager is offline during the maintenance period, we are unable to accept requests for new accounts or authenticate accounts for users awaiting full account access.
Thank you for your patience and understanding.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Connectivity Issue - IN-MAA (Chennai) and IN-BOM-2"
Last update: The networking issue caused by an upstream problem with one of our peering partners has been resolved. The impacted regions were IN-MAA (Chennai) and IN-BOM-2 (Mumbai), occurring between 19:00 and 21:00 UTC. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an issue that affected connectivity in our IN-MAA (Chennai) and IN-BOM-2 data centers, related to a suspected outage with one of our peering partners. During this time, users may have experienced packet loss and intermittent connection errors for services deployed in these data centers. We will share additional updates as we have more information.
Report: "Connectivity Issue - IN-MAA (Chennai) and IN-BOM-2"
Last update: Our team is investigating an issue that affected connectivity in our IN-MAA (Chennai) and IN-BOM-2 data centers, related to a suspected outage with one of our peering partners. During this time, users may have experienced packet loss and intermittent connection errors for services deployed in these data centers. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Mumbai"
Last update: The networking issue caused by an upstream problem with one of our peering partners has been resolved. The impacted regions were Mumbai (ap-west) and Mumbai 2 (in-bom-2), occurring between 18:45 and 20:00 UTC. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an emerging upstream networking issue affecting connectivity in Mumbai. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Mumbai"
Last update: Our team is investigating an emerging upstream networking issue affecting connectivity in Mumbai. We will share additional updates as we have more information.
Report: "Service Issue - LAX"
Last update: On April 24, 2025 between 17:32 - 17:55 UTC, we identified a brief period of impact to network traffic connectivity in our US-LAX (Los Angeles) data center. During the period of impact, customers experienced intermittent connection timeouts and errors for all services deployed in this data center. Akamai found that one of the border routers had shown increased memory usage since the impact occurred. As a preventative measure, we rebooted this border router to return it to a stable state. We determined that the router's kernel behavior could be reliably reproduced using a traffic pattern that overwhelms the memory buffers, leading to self-initiated reboots. A configuration fix that limits the rate of subnet route probes was identified, and we began applying it. The network became and remained stable until a brief period of impact between 19:55 - 19:58 UTC on April 25, 2025. Following the application of a configuration change on one of the border routers at approximately 19:47 UTC, we experienced a recurrence of BGP-related instability, with symptoms observed starting at 19:54 UTC. The resulting investigation revealed that the performance degradation was due to a misconfiguration that allowed a high rate of failed requests to occupy CPU resources. In order to fully mitigate the impact, we developed and implemented a rate-limiting configuration fix. This process was undertaken in phases, commencing with the most adversely affected border routers, and concluded at approximately 20:40 UTC on April 30, 2025. Following an extensive monitoring period of our systems, we verified that the issue had been resolved. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional issues with our US-LAX (Los Angeles) Data Center, and will now consider this incident resolved. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We haven’t observed any additional connectivity issues since 19:58 UTC on April 25, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We identified a brief period of impact to network traffic connectivity in our US-LAX (Los Angeles) data center between 19:55 - 19:58 UTC on April 25, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We haven’t observed any additional connectivity issues since 17:55 UTC on April 24, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We identified a brief period of impact to network traffic connectivity in our US-LAX (Los Angeles) data center between 17:32 - 17:55 UTC on April 24, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to investigate this issue.
Our team is investigating an emerging service issue affecting all services in the LAX region. We will share additional updates as we have more information.
Report: "Cloud Manager and API performance degradation"
Last update: Akamai became aware of performance degradation impacting APIv4 and Cloud Manager, including requests to the Object Storage and LKE endpoints. The issue started at approximately 15:55 UTC on February 2nd, 2025 and was mitigated at 16:25 UTC. During that time, some users may have experienced performance degradation and an elevated error rate when attempting actions via APIv4 or Cloud Manager. The issue was caused by a misconfiguration of interactions between backend services, leading to resource exhaustion when a backend service unexpectedly failed. The issue was initially mitigated when the backend service came back online, and we've subsequently taken steps to adjust configurations where needed and add monitoring in order to help prevent a recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
At this time we have been able to correct the issues, and haven't observed any additional issues with the Cloud Manager and API service since. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
After additional investigation, we have confirmed that the impact caused by this issue during the impact window was not limited to Object Storage and LKE requests via the Cloud Manager and API; it also included degradation for a portion of all Cloud Manager and API requests. We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
Our team became aware of a service issue that affects the Linode Cloud Manager and API. The issue started at approximately 15:55 UTC on February 2nd, 2025 and was mitigated at 16:25 UTC on February 2nd, 2025. During that time, some users may have experienced performance degradation when attempting Object Storage and LKE requests. Our team has identified the issue affecting the Cloud Manager and API. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Report: "Connectivity Issue - BR-GRU (São Paulo)"
Last update: We are continuing to monitor for any further issues.
At this time we have been able to correct the issues affecting connectivity in our BR-GRU (São Paulo) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
Our team is investigating an issue affecting connectivity in our BR-GRU (São Paulo) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Report: "Connectivity Issue - BR-GRU (São Paulo) (INVESTIGATING)"
Last update: Our team is investigating an issue affecting connectivity in our BR-GRU (São Paulo) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Report: "Service Issue - Compute - AP-Northeast-2 and JP-TYO-3"
Last update: Starting around 23:18 UTC on May 11th, 2025, Compute customers were unable to log in to or access Compute instances hosted in the AP-Northeast-2 (Tokyo 2) and JP-TYO-3 (Tokyo 3) data centers. Akamai's investigation revealed that the impact was caused by network connectivity issues on the internet exchange provider's side. Akamai's network team worked with the internet exchange provider and confirmed that the issue started following a network change at their end. For immediate mitigation, Akamai's network team applied a deny-all rule to the affected links with the internet exchange. The impact was mitigated at around 00:40 UTC on May 12, 2025. We will continue to work with the internet exchange provider to understand the root cause and will take appropriate preventive actions. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our AP-Northeast-2 (Tokyo 2) and JP-TYO-3 (Tokyo 3) data centers, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity in our AP-Northeast-2 (Tokyo 2) and JP-TYO-3 (Tokyo 3) data centers. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our subject matter experts are continuing to investigate this issue. The next update will be provided as we make progress.
Our team is investigating an emerging service issue affecting connectivity to Compute Instances in our Tokyo 2 data center. We will share additional updates as we have more information.
Report: "Service Issue - Compute - AP-Northeast-2 and JP-TYO-3"
Last update: At this time we have been able to correct the issues affecting connectivity in our AP-Northeast-2 (Tokyo 2) and JP-TYO-3 (Tokyo 3) data centers. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
Our subject matter experts are continuing to investigate this issue. The next update will be provided as we make progress.
Our team is investigating an emerging service issue affecting connectivity to Compute Instances in our Tokyo 2 data center. We will share additional updates as we have more information.
Report: "Service Issue - Compute - Tokyo 2"
Last update: Our team is investigating an emerging service issue affecting connectivity to Compute Instances in our Tokyo 2 data center. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Compute - Tokyo 2"
Last update: Our team is investigating an emerging service issue affecting connectivity to Compute Instances in our Tokyo 2 data center. We will share additional updates as we have more information.
Report: "Maintenance for Linode Support Ticketing System"
Last update: On Thursday, May 8th, 2025, at 9:00 AM EDT (May 8th, 2025, at 13:00 UTC), scheduled maintenance will be performed on the Linode Support Ticketing System. Expected downtime for this maintenance will be about 20-30 minutes. This should not impact the ability to access services or utilize Cloud Manager. Customers who need assistance from Linode Support during this time will need to call 855-454-6633 (+1-609-380-7100 outside of the United States) to contact our Support team. Please note that our Support team will not be able to assist with issues related to the Cloud Manager or API, authenticate users to their accounts, or respond to Support tickets for the duration of the maintenance window. As soon as our Support team regains access, we will answer tickets in the order they are received.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Service Issue - Red Hat Enterprise Linux boot issues"
Last update: Starting around 20:29 UTC on May 5, 2025, Compute customers were unable to boot servers utilizing Red Hat Enterprise Linux (RHEL)-based distributions (Alma Linux, Rocky, CentOS, etc.). Investigation into the issue revealed that this started following the rollout of the new GRUB2 image (software release). The release was intended to support the latest version of Fedora and to fix compatibility issues with newer RHEL-based distributions. However, a software defect inadvertently broke compatibility with older RHEL-based distributions. The image failed to boot and was sitting at the GRUB prompt following this update. Upon discovering the root cause of this situation, we rolled back this release. This action was completed and the immediate impact was mitigated at 02:56 UTC on May 6, 2025. Customers running a Linode-supplied kernel were not impacted by this incident, regardless of distribution. Our teams are continuing to investigate this event and will take appropriate actions to prevent any recurrence. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
The issue started at 20:29 UTC on May 5, 2025. The investigation revealed that the issue started following the rollout of the software release to support the latest version of Fedora. We rolled back the release to mitigate the impact. We can confirm that the issue is now mitigated as of 02:56 UTC on May 6, 2025 and no longer occurring. We apologize for the impact and thank you for your patience and continued support. Our subject matter experts are continuing to investigate the root cause and will take appropriate preventive actions. We are committed to making continuous improvements to make our systems better and prevent recurrence.
Our subject matter experts are actively investigating this issue. We will provide the next update as we make progress.
The current workaround for the issue is booting your Linode with the Latest 64-bit kernel. You can change the kernel your Linode is using to boot by following the instructions here: https://techdocs.akamai.com/cloud-computing/docs/manage-the-kernel-on-a-compute-instance
Our team is investigating a service issue affecting the boot process of Red Hat Enterprise Linux (RHEL)-based distributions. Systems running non-64-bit kernels may fail to boot, while distributions using the 64-bit kernel remain unaffected.
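The workaround above can also be applied programmatically: a configuration profile's kernel is changed with a PUT to the Linode API's config endpoint (`/v4/linode/instances/{linodeId}/configs/{configId}`), setting `kernel` to `linode/latest-64bit`. The sketch below only constructs the request rather than sending it; the token, Linode ID, and config ID are placeholders you would replace with your own values.

```python
import json

# Placeholders -- substitute your own account values.
API_TOKEN = "YOUR_API_TOKEN"   # hypothetical personal access token
LINODE_ID = 12345              # hypothetical Linode ID
CONFIG_ID = 67890              # hypothetical configuration profile ID


def build_kernel_update(linode_id: int, config_id: int,
                        kernel: str = "linode/latest-64bit"):
    """Build the PUT request that switches a config profile's kernel.

    Returns (url, headers, body) for
    PUT /v4/linode/instances/{linode_id}/configs/{config_id}.
    """
    url = (f"https://api.linode.com/v4/linode/instances/"
           f"{linode_id}/configs/{config_id}")
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"kernel": kernel})
    return url, headers, body


url, headers, body = build_kernel_update(LINODE_ID, CONFIG_ID)
print(url)
print(body)
```

After the configuration profile is updated, the Linode must be rebooted for the new kernel to take effect, per the linked kernel-management guide.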
Report: "Service Issue - Red Hat Enterprise Linux boot issues"
Last update: Our team is investigating a service issue that affects booting of Red Hat Enterprise Linux (RHEL)-based systems. During this time, some users may experience issues when attempting to access these systems. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Connectivity EU-West (London) and EU-Central (Frankfurt)"
Last update: We have not observed any further issues related to this and will now consider the situation resolved. Please open a Support ticket if you have any questions or concerns.
After investigation, we have confirmed that the connectivity issues in these locations were due to a problem outside of our network with an upstream provider. Connectivity has been restored, and we are monitoring for any subsequent problems.
Our team is investigating an emerging service issue affecting connectivity in EU-West (London) and EU-Central (Frankfurt). We will share additional updates as we have more information.
Report: "Latency issues with Block Storage in Osaka"
Last update: We haven't observed any additional issues with the Block Storage service in Osaka, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the issue affecting the Block Storage service in our Osaka data center. At this time we have been able to correct the issues affecting the service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an emerging service issue affecting block storage hosts in Osaka. We will share additional updates as we have more information.
Report: "Latency issues with Block Storage in Osaka"
Last update: Our team is investigating an emerging service issue affecting block storage hosts in Osaka. We will share additional updates as we have more information.
Report: "Service Issue - LAX"
Last update: We haven't observed any additional connectivity issues since 19:58 UTC on April 25, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
We identified a brief period of impact to network traffic connectivity in our US-LAX (Los Angeles) data center between 19:55 - 19:58 UTC on April 25, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
We haven’t observed any additional connectivity issues since 17:55 UTC on April 24, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
We identified a brief period of impact to network traffic connectivity in our US-LAX (Los Angeles) data center between 17:32 - 17:55 UTC on April 24, 2025. We are continuing to monitor this to ensure that it remains stable. If you are still experiencing issues, please open a Support ticket for assistance.
We are continuing to investigate this issue.
Our team is investigating an emerging service issue affecting all services in the LAX region. We will share additional updates as we have more information.
Report: "Service Issue - Miami (us-mia)"
Last update: On April 8, 2025, at 18:14 UTC, routine maintenance at the Miami (MIA3) data center, specifically involving network ingress hosts, led to a significant disruption to both inbound and outbound network traffic. The immediate impact was mitigated by 19:45 UTC through the disabling of the affected ingress hosts. This maintenance was not expected to cause any impact to production traffic. Initial pre-maintenance testing and the early phases of the maintenance procedure were completed successfully, without any indication of risk or instability. However, some investigation has since pointed to misrouting and overloading of specific ingress hosts as a contributing factor to the disruption. A more thorough root cause analysis is ongoing to fully understand the underlying conditions and contributing factors. We are actively investigating the behavior of the impacted networking infrastructure and working to reproduce the issue in a controlled development environment. This will help us identify the root cause and refine our maintenance procedures to prevent similar incidents in the future. Additionally, we are expanding ingress capacity at MIA3 and planning the deployment of additional spare ingress hosts to support future growth and improve resilience. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our Miami data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity in our Miami data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are investigating an emerging service issue affecting traffic in our Miami data center. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Connectivity EU-West (London) and EU-Central (Frankfurt)"
Last update: After investigation, we have confirmed that the connectivity issues in these locations were due to a problem outside of our network with an upstream provider. Connectivity has been restored, and we are monitoring for any subsequent problems.
Our team is investigating an emerging service issue affecting connectivity in EU-West (London) and EU-Central (Frankfurt). We will share additional updates as we have more information.
Report: "Scheduled Network Maintenance - JP-TYO-3 (Tokyo 3)"
Last update: We are canceling the maintenance scheduled in our JP-TYO-3 (Tokyo 3) data center for 1 May 2025, 23:00 UTC that was expected to be completed on 2 May 2025, 03:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our JP-TYO-3 (Tokyo 3) data center from 23:00 UTC on May 1, 2025 to 03:00 UTC on May 2, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - JP-OSA (Osaka)"
Last update: We are canceling the maintenance scheduled in our JP-OSA (Osaka) data center for 29 April 2025, 22:00 UTC that was expected to be completed on 30 April 2025, 02:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our JP-OSA (Osaka) data center from 22:00 UTC on April 29, 2025 to 02:00 UTC on April 30, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - ES-MAD (Madrid)"
Last update: We are canceling the maintenance scheduled in our ES-MAD (Madrid) data center for 30 April 2025, 05:00 UTC that was expected to be completed on 30 April 2025, 09:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our ES-MAD (Madrid) data center between 05:00 UTC and 09:00 UTC on April 30, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Emergency Network Maintenance - AP-West (Mumbai)"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
We will be performing an emergency network maintenance in our AP-West (Mumbai) data center beginning 24 April 2025 17:00 (UTC) until 24 April 2025 21:00 (UTC). During this time, users with resources in the AP-West (Mumbai) region may experience intermittent issues when attempting to take actions like creating, updating, and deleting resources within this region.
Report: "Service Issue - Linode Kubernetes Engine"
Last update: On April 1, 2025, users of the Linode Kubernetes Engine (LKE) began experiencing issues connecting to their clusters. While internal cluster services continued to function, the LKE Dashboard and external access were impacted due to a DNS resolution problem. The issue was traced back to an internal dependency within our LKE DNS system: an earlier update to a database caused the DNS service to hang, which led to the DNS servers associated with the LKE Dashboard becoming unresponsive. Once the root cause was identified, we restarted the affected DNS services. This action restored full functionality, and access to LKE services returned to normal. We monitored the system to ensure stability and officially resolved the incident on April 3, 2025. To prevent this issue from happening again, we are improving how our DNS services recover from transient connectivity failures to their dependencies. We are also enhancing our monitoring to detect similar problems sooner and updating our deployment process to better coordinate changes between dependent systems. We sincerely apologize for the disruption and thank you for your patience during this incident. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional issues with the LKE service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the LKE service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the underlying cause of the issue affecting DNS functionality within the Linode Kubernetes Engine (LKE) service. We are in the process of implementing a fix. Users may have experienced difficulty connecting to their cluster's control plane, but internal cluster services remain unaffected at this time. We will continue to provide updates as we work to fully resolve the issue. Thank you for your patience.
Our team is actively investigating an issue impacting DNS functionality within the Linode Kubernetes Engine (LKE) service. We will provide further updates as more information becomes available.
Our team is investigating an issue affecting the Linode Kubernetes Engine (LKE). During this time users may have difficulty connecting to their cluster control plane. Internal cluster services are not impacted at this time. We will share additional updates as we have more information.
Report: "Emergency Network Maintenance - AP-South (Singapore)"
Last update: The scheduled maintenance has been completed.
Verification is currently underway for the maintenance items.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
We will be performing an emergency network maintenance in our AP-South (Singapore) data center beginning 24 April 2025 17:00 (UTC) until 24 April 2025 21:00 (UTC). During this time, users with resources in the AP-South (Singapore) region may experience intermittent issues when attempting to take actions like creating, updating, and deleting resources within this region.
Report: "Connectivity Issue - Asia Data Centers"
Last update: We haven’t observed any additional connectivity issues in our Asia region data centers and will now consider this incident resolved. The third party has addressed the underlying issues, resulting in improved performance and stability. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
The third-party continues working on a fix for this issue, and due to the complexity of this problem, it will take longer to mitigate it completely. In the meantime, we continue monitoring our systems. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity from our Asia to Europe Data Centers. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified a third-party issue affecting connectivity from our Asia to Europe Data Centers between ~02:10 and ~14:58 UTC on January 25, 2025. During this time, users may have experienced latency and potential congestion on some providers between Asia and Europe. If you are still experiencing issues and unable to <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a>, please call us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com.
Report: "Emerging Service Issue - Network Connectivity - US - Central (Dallas)"
Last update: The root cause was traced back to congestion on a third-party transit provider. We haven’t observed any additional connectivity issues towards our US - Central (Dallas) region, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Starting approximately 23:00 UTC on 22 April 2025, we observed intermittent networking issues towards our US - Central (Dallas) location from various geolocations. We have traced these issues to a third-party transit provider and re-routed the traffic to Dallas through alternate transit providers. We have not observed any more packet loss since 02:30 UTC on 23 April 2025 and are continuing to monitor these routes. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an emerging service issue affecting network connectivity in US Central (Dallas). We will share additional updates as we have more information.
Report: "Scheduled Network Maintenance - SE-STO (Stockholm)"
Last update: We are canceling the maintenance scheduled in our SE-STO (Stockholm) data center for 29 April 2025, 01:00 UTC that was expected to be completed on 29 April 2025, 05:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our SE-STO (Stockholm) data center between 01:00 UTC and 05:00 UTC on April 29, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - IT-MIL (Milan)"
Last update: We are canceling the maintenance scheduled in our IT-MIL (Milan) data center for 24 April 2025, 04:00 UTC that was expected to be completed on 24 April 2025, 08:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our IT-MIL (Milan) data center between 04:00 UTC and 08:00 UTC on April 24, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - IN-MAA (Chennai)"
Last update: We are canceling the maintenance scheduled in our IN-MAA (Chennai) data center for 23 April 2025, 18:30 UTC that was expected to be completed on 23 April 2025, 22:30 UTC. It will be rescheduled to a later date that will be communicated in advance.
We are updating the schedule for the maintenance to a slightly earlier timeframe. We will now be performing network maintenance in our IN-MAA (Chennai) data center between 18:30 UTC and 22:30 UTC on April 23, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Updated Schedule: We will be performing network maintenance in our IN-MAA (Chennai) data center between 18:30 UTC and 22:30 UTC on April 23, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - US-LAX (Los Angeles)"
Last update: We are canceling the maintenance scheduled in our US-LAX (Los Angeles) data center for 23 April 2025, 00:00 UTC that was expected to be completed on 23 April 2025, 04:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our US-LAX (Los Angeles) data center between 00:00 UTC and 04:00 UTC on April 23, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Scheduled Network Maintenance - US-SEA (Seattle)"
Last update: We are canceling the maintenance scheduled in our US-SEA (Seattle) data center for 21 April 2025, 10:00 UTC that was expected to be completed on 21 April 2025, 14:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our US-SEA (Seattle) data center between 10:00 UTC and 14:00 UTC on April 21, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Emerging Service Issue - Connectivity Across Multiple Regions"
Last update: On April 16, 2025, at approximately 20:10 UTC, numerous internal alerts fired across multiple data centers, indicating that various servers had been unreachable for an extended period. At the same time, customers started reporting issues with multiple products. During the impact window, customers may have experienced issues with all Compute products, which prevented them from deploying new Linodes, booting Linodes, and performing other host-level jobs. There was intermittent and/or total loss of network connectivity, as well as an inability to access services such as Object Storage or Linode Kubernetes Engine (LKE). The investigation indicated that the issue can be attributed to a configuration anomaly on the BGP route reflector infrastructure providing routes for Compute services; only those route servers that had undergone a restart at some point experienced this impact. Akamai implemented the correct configuration across all affected route servers at approximately 22:15 UTC to alleviate the residual impact. Nevertheless, a degree of impact reemerged in specific locations during the mitigation efforts. This process was undertaken in phases, commencing with the regions most adversely affected, and concluded at approximately 03:45 UTC on April 17, 2025. Following an extensive monitoring period, we verified that the issue has been resolved. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional connectivity issues across the affected data centers, and will now consider this incident resolved. During the impact window, we did not experience issues with Cloud Manager and API. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issue affecting the Cloud Manager and API. We will be monitoring this to ensure that the service remains stable. If you are still experiencing issues and unable to <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a>, please call us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com.
We are continuing to work on a resolution for this issue. Subsequent updates around mitigation status will be posted as progress is made.
After performing mitigation actions, we became aware that the issue was still occurring. The issue has been identified and a fix is being implemented.
At this time we have been able to correct the issues affecting connectivity. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We've identified a root cause and are applying a mitigation as quickly as possible. We'll continue to post updates as this incident develops.
We are working quickly on mitigating this issue. We will share additional updates as we have more information.
Our team is investigating an emerging service issue affecting connectivity in multiple regions. We will share additional updates as we have more information.
Report: "Scheduled Network Maintenance - AU-MEL (Melbourne)"
Last update: We are canceling the maintenance scheduled in our AU-MEL (Melbourne) data center for 18 April 2025, 02:00 UTC that was expected to be completed on 18 April 2025, 06:00 UTC. It will be rescheduled to a later date that will be communicated in advance.
We will be performing network maintenance in our AU-MEL (Melbourne) data center between 02:00 UTC and 06:00 UTC on April 18, 2025. Although we do not anticipate any impact, during this time there may be brief network connectivity disruptions.
Report: "Emerging Service Issue - Job Processing in Distributed Compute regions"
Last update: We have not observed any additional issues following the move to monitoring on this event, and this can be considered resolved.
We are continuing to monitor for any further issues.
A fix has been implemented, and jobs in Distributed Compute regions are processing again. We will continue to monitor for further impact. Please contact us at 855-454-6633 (+1-609-380-7100 Intl.) or <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> if you are having issues.
Our team is investigating an emerging service issue affecting the ability to process jobs in all Distributed Compute regions. We will share additional updates as we have more information.
Report: "Service Issue - Intermittent connection drops on LKE pod to pod traffic"
Last update: We haven't observed any additional issues with the Linode Kubernetes Engine (LKE), and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com for assistance.
At this time we have been able to correct the issue affecting the Linode Kubernetes Engine (LKE). We will be monitoring this to ensure that the service remains stable. If you are still experiencing issues and unable to <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a>, please call us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com.
We are continuing to investigate this issue and will provide additional updates as progress is made.
We are continuing to investigate this issue and will provide additional updates as progress is made.
We are continuing to investigate this issue and will provide additional updates as progress is made.
We are continuing to investigate this issue and will provide additional updates as progress is made.
We are continuing to investigate this issue and will provide additional updates as progress is made.
The investigation continues. At this time, we have been able to confirm that the issue is causing intermittent 1-2 minute connection timeouts in pod-to-pod traffic. These timeouts have been identified in only two of our data centers, with only a few occurrences, and we continue to work to identify the cause. Due to the intermittent nature of the issue, the investigation will take additional time, and we will provide additional updates as progress is made. Should you notice symptoms that align with this, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), send an email to support@linode.com, or <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to investigate this issue and will provide additional updates as progress is made.
We are continuing to investigate this issue.
Our team is investigating an issue affecting the Linode Kubernetes Engine (LKE). We will share additional updates as we have more information.
Report: "Emerging Service Issue - Object Storage - SEA"
Last update: On March 31, 2025, between 01:20 and 04:20 UTC, customers on the Object Storage sea1 cluster experienced latency issues. Initial investigation identified slow operations on specific OSDs, and attempts to restart them did not resolve the problem. Further analysis linked the issue to a large bucket that was misconfigured on the backend, creating concentrated high load that overloaded some cluster components. The issue was mitigated at 12:54 UTC on March 31, 2025, after changing the settings of the bucket identified as the cause. Akamai will continue to investigate the root cause and take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
At this time, we have been able to correct the issues affecting the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team continues to investigate the issue affecting the Object Storage in Seattle. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team has identified the issue affecting the Object Storage in Seattle. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is investigating an emerging service issue affecting Object Storage in SEA. We will share additional updates as we have more information.
Report: "Emerging Service Issue - DNS Resolution - Italy and Switzerland"
Last update: On March 12th at 22:43 UTC, we began receiving customer reports of DNS resolution timeouts affecting multiple domains in northern Italy and Switzerland. These issues prevented users from accessing various websites and disrupted normal operations for the impacted domains. Following an initial assessment by Akamai, it was determined that the issue was limited to domains utilizing Linode authoritative nameservers and the Akamai Shield NS53 product. Specifically, the affected nameservers were ns1.linode.com, ns2.linode.com, ns3.linode.com, ns4.linode.com, and ns5.linode.com. Further investigation revealed that DNS requests to Linode origin nameservers were timing out when routed through the Shield NS53 data center in Rome, which was recently brought online and started receiving traffic on March 11th at 13:32 UTC. Log analysis from this region indicated that Linode’s authoritative nameservers were rejecting requests originating from the Rome Shield NS53 infrastructure. It was subsequently discovered that the backend IPs of the Shield NS53 region in Rome were not present in Linode’s authoritative server ACLs, leading to the service disruption. To resolve the issue, the missing backend IPs were added to Linode’s authoritative nameserver ACLs on March 13th at approximately 22:30 UTC, restoring normal DNS resolution and mitigating the incident. To prevent this issue from happening in the future, Akamai will create additional alerting, review current procedures for checking connectivity from new Shield NS53 data centers to Linode’s authoritative nameservers, and enhance the process for adding new Shield NS53 backend IPs to the Linode origin nameservers’ ACLs.
We haven’t observed any additional issues with our Hosted DNS Service, and will now consider this incident resolved. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We have been able to correct the issue affecting our Hosted DNS Service at 20:25 UTC, on March 13th, 2025. We will be monitoring this to ensure that connectivity remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a ticket</a> with our Support Team.
Our team has identified the issue affecting our Hosted DNS Service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
After our initial investigation, we have been able to confirm that the connectivity issues with our DNS Services are only impacting users located in Italy and Switzerland. During this time, users may experience timeouts or errors when making DNS lookups against domains configured to use ns1-5.linode.com. We are continuing to investigate the issue and will share additional updates as more information is available.
Our team is investigating an emerging service issue impacting connectivity to our DNS Services for a subset of users located in Europe. We will share additional updates as we have more information.
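The preventive action described in this report amounts to verifying, before a new Shield NS53 data center receives traffic, that all of its backend IPs are covered by the origin nameservers' ACLs. As a hypothetical illustration only (this is not Akamai's actual tooling, and the addresses below are RFC 5737 documentation examples, not real infrastructure), such a pre-flight check could look like:

```python
# Hypothetical pre-flight ACL check: flag any backend IP of a new data
# center that no ACL network covers, before the site is put into service.
import ipaddress

def missing_from_acl(backend_ips, acl_networks):
    """Return the backend IPs not covered by any ACL network."""
    nets = [ipaddress.ip_network(n) for n in acl_networks]
    return [ip for ip in backend_ips
            if not any(ipaddress.ip_address(ip) in net for net in nets)]

# Example ACL and backend IPs (documentation ranges, purely illustrative).
acl = ["192.0.2.0/24", "198.51.100.0/25"]
new_dc_backends = ["192.0.2.17", "203.0.113.9"]

gaps = missing_from_acl(new_dc_backends, acl)
if gaps:
    print("ACL update required before enabling traffic:", gaps)
```

In the incident above, a check of this kind would have flagged the Rome backend IPs as absent from the ACLs before the site began answering queries.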
Report: "Compute Support - Degraded Services"
Last update: We have identified and resolved an issue with several data center endpoints which caused the aforementioned situation. Please resume normal communications with our Support team through <a href="https://cloud.linode.com/support/tickets?type=closed&drawerOpen=true">opening a Support ticket from the Linode Manager</a>, contacting support@linode.com, or giving us a call.
We are currently experiencing issues with Customer Support tooling, including responding to tickets, receiving emails to support@linode.com, and accessing service records. Our team is investigating this issue and we are working as quickly as possible to have services restored. If you need immediate assistance, please give our team a call at 855-454-6633 (US) / +1-609-380-7100 (global) to make us aware of your situation.
Report: "Emerging Service Issue - London (EU-West)"
Last update: We haven’t observed any additional connectivity issues in our London (EU-West) data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity in our London (EU-West) data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an emerging service issue affecting our London (EU-West) data center. We will share additional updates as we have more information.
Report: "Service Issue - Object Storage (LON)"
Last update: Between ~15:16 UTC and 20:55 UTC on March 24, 2025, customers may have experienced errors and timeouts when accessing Object Storage resources, or an inability to create buckets, in the London data center. This issue stemmed from a memory usage spike on our infrastructure platform, which led to continued degradation until we safely updated the memory allocation across leader nodes, which resolved the problem. We have since identified the source of this memory spike and are formulating a long-term mitigation strategy. In the meantime, we increased memory resources globally to reduce the risk of impact on other clusters. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team continues to investigate the issue affecting the Object Storage service in London. During this time, users may experience connection timeouts and errors with this service.
Our team is investigating an issue affecting the Object Storage service in London. During this time, users may experience connection timeouts and errors with this service.
Report: "Service Issue - Object Storage (LAX)"
Last update: Starting March 15, 2025, multiple customers began seeing 502 and 503 response codes for Object Storage from the Los Angeles ("LAX") data center. After investigating the issue, we established that it was caused by a software defect. To address it, we installed a new software version and monitored storage performance in the LAX data center. After monitoring the LAX data center for an extended period of time, we concluded that the issue has been addressed and normal operations could resume. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the issue affecting the Object Storage service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is investigating an issue affecting the Object Storage service in the Los Angeles region. During this time, users may experience connection timeouts and errors with this service.
Report: "Emerging Service Issue - Linode Kubernetes Engine - Multiple Regions"
Last updateWe haven’t observed any additional issues with the LKE Control Plane pods, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the LKE control plane pod issues across all regions. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the issue affecting the LKE Control Plane Pods. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
We are continuing to investigate this issue.
Our team is investigating an emerging service issue affecting the Linode Kubernetes Engine in multiple regions. We will share additional updates as we have more information.
Report: "Emerging Service Issue - NodeBalancers - All Regions"
Last updateWe haven’t observed any additional issues with the NodeBalancer service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the NodeBalancer service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a ticket</a> with our Support Team.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
Our team is investigating an emerging service issue affecting NodeBalancers in all regions. We will share additional updates as we have more information.
Report: "Cloud Manager and API performance degradation"
Last updateStarting on February 6, 2025 at approximately 07:19 UTC, we began to observe intermittent slowness, timeouts, and request failures when attempting to access the Cloud Manager and Linode API. Our investigation revealed that this issue was caused by ongoing scheduled OS upgrades that had the unforeseen effect of causing an increase in API request failures. We mitigated the issue at 10:45 UTC on February 6, 2025, once the OS software upgrades concluded. We have identified process and tooling gaps that could have helped prevent this issue. We are actively working on preventative measures to ensure such incidents are avoided in the future. We apologize for the impact. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven't observed any additional issues with the Cloud Manager or API, and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com for assistance.
Our team is aware of an issue that affected the Cloud Manager and API service between 07:19 UTC and 10:45 UTC on February 6, 2025. During this time, users may have experienced connection timeouts and errors with this service. Our team identified and corrected the underlying issue, and we will be monitoring to ensure the service remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Service Issue - Network Connectivity - Washington, DC"
Last updateOn March 10, 2025, between 17:50 and 18:10 UTC, we experienced intermittent connectivity disruptions and origin forward errors due to a power supply unit (PSU) mishap at our fabric site in Washington, DC (US-IAD), which impacted multiple customers. The issue was caused by human error during data center maintenance, where a working PSU was mistakenly unseated instead of the faulty one. This confusion arose due to the similar appearance of two different PSU models and their distinct indicator lights. While the working PSU was re-seated at 17:52 UTC, the impact persisted until 18:10 UTC. We take full responsibility for the error and are updating our procedures to prevent similar issues in the future. We sincerely apologize for the disruption and appreciate your patience as we work to ensure the continued reliability of our services. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our Washington, DC data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity in our Washington, DC data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an emerging service issue affecting network connectivity in our Washington, DC data center. We will share additional updates as we have more information.
Report: "Emerging Service Issue - Compute Instance Creation - Distributed Locations"
Last updateAt approximately 21:00 UTC on February 27, 2025, we became aware of an issue affecting the deployment of Linode compute instances in distributed regions. The problem arose because the `disk_encryption` parameter was included in API requests to distributed sites, which caused an API error and resulted in failed deployments of compute instances in distributed regions. The issue was traced to a bug in Cloud Manager, where the response for distributed regions incorrectly included the disk encryption capability. This was passed along in the payload, leading to failures during instance creation. Since disk encryption cannot be disabled in distributed regions (our entire compute infrastructure there is encrypted), including the parameter caused the deployment errors. A Cloud Manager software fix was deployed at 00:30 UTC on February 28, 2025, which corrected the issue. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We have not observed any additional issues following the move to monitoring on this event, and this can be considered resolved.
As of 00:30 UTC on February 28th, 2025, we have been able to correct the issue affecting compute instance creation in distributed region locations in Cloud Manager. We will be monitoring this to ensure that the service remains stable. If you are still experiencing issues and unable to <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a>, please call us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com.
Our team has identified the issue affecting compute instance creation in distributed region locations in Cloud Manager; the API is not affected. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is investigating an emerging service issue affecting compute instance creation in distributed region locations. We will share additional updates as we have more information.
Report: "Service Issue - Object Storage"
Last updateBeginning on February 19, 2025, at 23:00 UTC, we observed 403 errors when customers tried to access E2 and E3 endpoint buckets with "Public Read" permissions in Object Storage. The issue was traced to a recent software update that changed the default behavior for "Public Read" access, now requiring explicit bucket policies. Previously, the legacy infrastructure allowed “Public Read” access without these policies. Customers transitioning from the legacy infrastructure to the newer, enhanced Object Storage infrastructure were not expecting this change, causing the errors. We rolled back the enhanced version at 12:42 UTC on February 20, 2025, and confirmed resolution by 13:16 UTC. To prevent future issues, we are enhancing communication around default behavior changes, improving testing for public access scenarios, and updating documentation to clarify the need for bucket policies in the enhanced version compared to the legacy version. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to investigate this issue. The list of impacted regions below has been updated.
We are continuing to investigate the issue. The impacted Object Storage regions are limited to US-SEA (Seattle), EU-WEST (London), AU-MEL (Melbourne), EU-CENTRAL (Frankfurt), AP-SOUTH (Singapore), AP-WEST (Mumbai).
We are currently investigating an issue impacting the Object Storage service. During this time, customers may experience 403 access errors when attempting to access E2 and E3 endpoint buckets that are configured with "Public Read" permissions.
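For reference, the kind of explicit policy the enhanced infrastructure expects can be sketched as below. This is a generic S3-style public-read bucket policy under the assumption that the enhanced endpoints follow standard S3 policy semantics; the bucket name and helper are hypothetical.

```python
import json

# Build a minimal S3-style bucket policy granting anonymous read access
# to every object in the bucket. Apply it with your S3 client of choice.
def public_read_policy(bucket: str) -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    })
```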
Report: "Connectivity Issue - Distributed Locations"
Last updateBeginning at 20:17 UTC on February 27, 2025, we identified an issue impacting DNS functionality for newly provisioned distributed Linodes across all distributed regions. This resulted in DNS loss for some newly deployed Linodes in distributed regions. After investigating, we determined that the root cause was a network configuration error that failed to specify DNS resolvers for newly deployed customer Linodes. This caused the affected Linodes to be provisioned without DNS functionality, leading to connectivity issues for some customers. To resolve the issue, we deployed a fix that ensured the network configuration file correctly included the required DNS resolvers. After deploying the fix, we conducted extensive testing to ensure that newly provisioned Linodes were correctly configured with DNS resolvers, eliminating the connectivity issue. We also verified that rebooting affected Linodes would restore DNS functionality for them. We monitored the situation closely and confirmed that no additional issues were observed across all distributed regions. The issue was fully resolved on February 28 at 06:43 UTC, after the fix was successfully rolled out. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
Following a fix being put in place, we haven’t observed any additional connectivity issues across all distributed regions, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are still in the process of rolling out the fix for the DNS issue impacting newly provisioned distributed Linodes across all distributed regions. Our team is working diligently to resolve the problem, and we will provide another update once the fix is fully implemented.
We are still in the process of rolling out the fix for the DNS issue impacting newly provisioned distributed Linodes across all distributed regions. Our team is working diligently to resolve the problem, and we will provide another update once the fix is fully implemented.
Our team has identified the issue impacting DNS functionality for newly provisioned distributed Linodes across all distributed regions. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
We are currently investigating an issue impacting DNS functionality for newly provisioned distributed region Linodes across all distributed regions. As a result, users may experience DNS loss on newly deployed distributed Linodes. We will continue to provide updates as we gather more information. Thank you for your patience.
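The failure mode described above can be illustrated in miniature: if a provisioning template renders a new Linode's resolver configuration from an empty list, the instance boots without working DNS. This sketch is hypothetical (the helper name and the documentation-range resolver address are ours, not from the actual provisioning system), but it shows the kind of guard the fix introduced.

```python
# Hypothetical sketch: render a resolv.conf fragment for a new instance,
# refusing to produce a configuration with no DNS resolvers at all.
def render_resolv_conf(resolvers: list[str]) -> str:
    if not resolvers:
        raise ValueError("network configuration must specify at least one DNS resolver")
    return "".join(f"nameserver {ip}\n" for ip in resolvers)
```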
Report: "Service Issue - Connectivity Issues - All Regions"
Last updateOn March 17, 2025, at approximately 16:38 UTC, we observed numerous internal alerts fired across multiple data centers, indicating that various servers were unreachable for an extended period. At the same time, customers started reporting issues with multiple products. During the impact window, customers may have experienced networking issues across all data centers, with those in Amsterdam (NL-AMS) being more significantly affected. In Amsterdam, customers faced difficulties such as an inability to deploy new Linodes, boot existing Linodes, and other host-level jobs, along with a near-total data center outage. Additionally, issues accessing services like Object Storage and Linode Kubernetes Engine (LKE) may have occurred. The investigation revealed that the issue was caused by an ongoing release of Akamai Compute’s internal backend API component. The release caused route servers to build incomplete BGP route tables due to inconsistencies in the API response during the rollout, resulting in connectivity issues in all data centers. When the release concluded at 17:10 UTC, data became consistent between endpoints again, and most data centers recovered independently. To mitigate the remaining impact, we proceeded to restart route servers in all data centers at approximately 18:30 UTC. This process was carried out in phases, beginning with the most impacted regions, and was completed at approximately 21:34 UTC on March 17, 2025. After monitoring our systems for some time, we confirmed that the issue was resolved. For the near term, we have placed a hold on backend API releases until we have a better way of ensuring route data for route servers is consistent. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our data centers, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We have successfully mitigated the network connectivity issues affecting all regions. At this time, we are observing a return to normal service, and disruptions, including difficulties with deploying, booting, or using Linodes, are no longer occurring. We are continuing to monitor the fix closely to ensure overall systems stability. Thank you for your patience as we ensure the ongoing reliability of our services. We will provide further updates if necessary.
We are continuing to work on a fix for this issue.
We are actively working to resolve the networking issues affecting all regions. Users in Amsterdam (NL-AMS) may experience more significant disruptions, including difficulties deploying, booting, or using Linodes. Our team is focused on resolving these issues as quickly as possible, and we will provide further updates as they become available.
We are still actively working to resolve the network connectivity issue impacting all regions. We believe we have identified the root cause and are currently implementing a fix. Our team continues to monitor the situation closely, and we will provide additional updates as more information becomes available. Thank you for your continued patience.
Our team is investigating an emerging service issue affecting network connectivity in multiple regions. We will share additional updates as we have more information.
Report: "Service Issue - Connectivity Issues - Multiple Regions"
Last updateWe became aware of a connectivity issue involving intermittent performance degradation and latency affecting workloads in multiple data centers. The issue was observed intermittently between 08:26 UTC and 10:54 UTC on March 18, 2025. We can confirm that the issue is now resolved, and the service has resumed normal operation. Customers and partners can view additional details about the incident by logging in to: https://community.akamai.com/customers/s/feed/0D5a700000KtyFoCAJ or reaching out to Akamai Support. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent a recurrence of this issue.
Report: "Emerging Service Issue - Cloud Manager Payment Processing"
Last updateAt this time we have resolved the issue affecting payments being processed successfully. We will continue to monitor for additional issues. If you continue to experience problems please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a>.
We are continuing to work on a fix for this issue.
Our team is investigating an emerging service issue preventing payments from processing successfully. This includes users attempting to sign up for a new account as well as adding or updating payment methods to existing accounts. We will share additional updates as we have more information.
Report: "Service Issue - Block Storage (US-IAD)"
Last updateWe have resolved the issue affecting Block Storage in US-IAD, which occurred between 10:19 and 12:10 UTC on 17 March 2025. During this time, customers may have experienced read/write timeouts or errors when accessing Block Storage resources in the US-IAD data center. We believe the issue has been fully resolved, and we are actively monitoring the service to ensure stability. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Connectivity Issue - Dallas"
Last updateOn January 23, 2025 at approximately 21:12 UTC, one of the IEN transit uplinks in our Dallas, TX region was disrupted by an external event, impacting roughly 40-50% of both ingress and egress traffic to the data center. At approximately 21:17 UTC, traffic was rerouted to a healthy circuit, restoring service. We are working with our network partner to identify the root cause, as well as investigating why the internal failover of routes took as long as it did. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our Dallas data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We have identified a brief period of impact to network traffic connectivity in our Dallas data center between ~21:12 to ~21:17 UTC. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Service Issue - Linode Kubernetes Engine"
Last updateStarting around 01:00 UTC on March 5, 2025, some customers began experiencing issues with multiple LKE (Linode Kubernetes Engine) clusters failing to autoscale when IP ACL was enabled, resulting in service disruptions. Approximately 20-25 clusters across various regions were affected globally. Akamai’s investigation traced the issue to the latest software release deployment. New nodes were unable to join LKE clusters with IP ACL enabled, preventing the autoscaler feature from functioning properly. As a temporary mitigation, Akamai restarted the gateway pods, but this provided only short-term relief. Further analysis identified the root cause as a software defect. To address the issue, Akamai rolled back the release. The rollback was completed around 15:00 UTC on all affected clusters, effectively resolving the problem. Akamai will continue to investigate the root cause and take appropriate preventive actions. We apologize for the impact and thank you for your patience and continued support. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional issues with the LKE service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the LKE service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to investigate this issue and will provide the next update as we make progress.
Our team is investigating an issue affecting the Linode Kubernetes Engine (LKE) related to auto-scaling when IP ACL is turned on. We will share additional updates as we have more information.
Report: "Connectivity Issue - Santiago"
Last updateAt approximately 15:14 UTC on March 5th, 2025, customers began reporting intermittent connection timeouts and errors for all services deployed in our Santiago data center. After initial investigations, these issues were linked to a scheduled maintenance taking place in this location. During this necessary event, two machines essential to the workload in this region unexpectedly started experiencing connectivity issues. We restored the connectivity of these servers at 17:25 UTC, which mitigated the immediate impact of this situation. After monitoring our systems for some time, we confirmed that the issue was resolved. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our Santiago data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting connectivity in our Santiago data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the issue affecting connectivity in our Santiago data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
We are continuing to investigate the issue. If you are experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating an issue affecting connectivity in our Santiago data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Report: "Service Issue - Linode Kubernetes Engine"
Last updateWe haven’t observed any additional issues with the LKE service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the LKE service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to work on a fix for this issue and will provide an update as soon as the solution is in place.
This issue affects DockerHub over IPv6, limiting the rate between certain Linode data centers and DockerHub. This issue impacts products like LKE, LKE-E, and other services that rely on image pulls from DockerHub repositories. We are continuing to work on a fix for this issue and will provide an update as soon as the solution is in place.
Our team has identified the issue affecting the LKE service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is investigating an issue affecting the Linode Kubernetes Engine (LKE). We will share additional updates as we have more information.
Report: "Ingress Capacity Issues in Singapore and Frankfurt"
Last updateAfter further investigation, it was determined that this issue was not causing any customer impact or service disruption. We apologize for the confusion.
Our team is investigating an issue affecting connectivity in our Singapore and Frankfurt data centers. During this time, users may experience intermittent connection timeouts and errors for all services deployed in these data centers. We will share additional updates as we have more information.
Report: "Generalized Edge Compute (Gecko) Issues in Santiago"
Last updateFor further updates regarding this issue please check https://status.linode.com/incidents/z84gg3pcdh1c.
Our team is investigating an issue affecting connectivity in our Santiago Gecko data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Report: "Service Issue - Linode Kubernetes Engine"
Last updateWe have resolved the issue affecting the LKE service, which occurred between 20:50 and 22:50 UTC on 4 March 2025. During this time, customers may have observed potential networking issues, including disruptions related to CoreDNS inside some LKE clusters. We believe the issue has been fully resolved, and we are actively monitoring the service to ensure stability. If you continue to experience any issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Service Issue - Linode Kubernetes Engine"
Last updateOn February 5, 2025, at approximately 21:38 UTC, DockerHub imposed rate limits on image pulls from our IPv6 IP space across multiple regions. This resulted in 429 Too Many Requests errors when making both unauthenticated and authenticated image pull requests over IPv6. As a consequence, several products, including LKE and LKE-E, as well as other services relying on DockerHub image pulls, were impacted. The issue was traced back to DockerHub’s abuse prevention measures, which rate-limited traffic based on IPv6 address checks. DockerHub clarified that the failure occurred due to exceeding their abuse limit, which was triggered because their checks key on the first 64 bits of an IPv6 address. As an interim fix to work around this block, we’ve modified our DNS resolvers to return an 'A' record instead of an 'AAAA' record, ensuring image pull requests to DockerHub use IPv4 instead. This action successfully mitigated the issue at approximately 04:30 UTC on February 6, 2025. To prevent a recurrence of this issue, we are actively exploring long-term solutions that help ensure that our services are not impacted by DockerHub’s abuse prevention measures going forward. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional issues with the LKE and LKE-E services, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the LKE service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to work on a fix for this issue. We will provide the next update as we make progress.
We have identified the issue affecting LKE, LKE-E, and other services, where image pull requests to DockerHub over IPv6 are being rate-limited. We are working on implementing a fix to resolve the rate limiting issue, and we will provide an update once the solution is in place. Thank you for your patience.
We are aware of an issue affecting DockerHub over IPv6, causing rate limiting between certain Linode data centers and DockerHub. This is impacting products like LKE, LKE-E, and other services that rely on image pulls from DockerHub repositories. We are actively investigating the issue, and will provide additional updates as more information becomes available.
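The effect of the interim DNS mitigation described above can be illustrated client-side: restricting name resolution to IPv4 forces connections onto the 'A'-record path, which is what resolvers that suppress 'AAAA' answers achieve fleet-wide. This is a standard-library sketch, not part of the actual fix; the helper name is ours.

```python
import socket

# Resolve a hostname to IPv4 addresses only, mirroring the effect of
# resolvers that return 'A' records instead of 'AAAA' records.
def resolve_ipv4_only(host: str, port: int = 443) -> list[str]:
    infos = socket.getaddrinfo(host, port,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each getaddrinfo entry's last element is the (address, port) sockaddr.
    return sorted({sockaddr[0] for *_, sockaddr in infos})
```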
Report: "Generalized Edge Compute (Gecko) Clusters Unreachable in Johannesburg"
Last updateWe haven't observed any additional issues with the Generalized Edge Compute (Gecko) service in our Johannesburg data center and will now consider this incident resolved. If you continue to experience issues, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com for assistance.
At this time we have been able to correct the issues affecting the Generalized Edge Compute (Gecko) service in our Johannesburg data center as of 19:54 UTC on February 25th, 2025. We will be monitoring this to ensure that connectivity remains stable. If you are still experiencing issues reaching your services in Johannesburg, please contact us at 855-454-6633 (+1-609-380-7100 Intl.), or send an email to support@linode.com for assistance.
Our team has identified the issue affecting Generalized Edge Compute (Gecko) service in our Johannesburg data center. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is investigating an issue affecting the Generalized Edge Compute (Gecko) service in our Johannesburg data center. Currently, there is no impact; however, from 11:50 UTC to 12:23 UTC, customers using Gecko in Johannesburg may have been unable to reach their services. We will share additional updates as we have more information.
Report: "Service Issue - Object Storage"
Last updateOn February 4, 2025, at 18:30 UTC, customers started experiencing latency, intermittent connection timeouts, and errors when attempting to access the Object Storage service in the Washington, Chicago, Seattle, Paris, and Amsterdam data centers. Investigation revealed that the issue was being caused by a misconfiguration in an ongoing firewall change. To mitigate the impact, we rolled back the change at 19:07 UTC, and it fully propagated at 19:25 UTC. During the monitoring period, the Chicago and Washington data centers were still experiencing problems, so additional actions were taken to address the ongoing impact. We mitigated the issue for Chicago at 20:16 UTC and for Washington at 20:29 UTC. After monitoring our services for an extended period, we confirmed that the incident was fully resolved. In addition, the firewall change that initiated this situation was successfully completed at 13:20 UTC on February 11, 2025. This summary provides an overview of our current understanding of the incident, given the information available. Our investigation is ongoing, and any information herein is subject to change.
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to work on a fix for this issue. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to work on a fix for this issue. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team has identified the issue affecting the Object Storage service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Our team is aware of an issue that affected the Object Storage service between 18:30 UTC and 19:25 UTC. During this time, users may have experienced connection timeouts and errors with this service. Our team identified the issue and implemented a fix, and at this time we have been able to correct the issues affecting the Object Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Emerging Service Issue - Connectivity - Seattle, WA"
Last updateWe have not observed any additional issues following the move to monitoring on this event, and this can be considered resolved.
A fix has been implemented and we are monitoring the results.
Our team is investigating an emerging service issue affecting general connectivity in US-SEA. We will share additional updates as we have more information.
Report: "Service Issue - Object Storage (ORD2)"
Last updateThe situation involving Object Storage in our Chicago (ORD) location continues to be internal-only, and no customer impact has been seen. We are resolving this incident accordingly. Please <a href="https://cloud.linode.com/support/tickets?type=closed&drawerOpen=true">open a Support ticket from the Linode Cloud Manager</a> if you require assistance from our team.
The issue encountered for Object Storage in our Chicago location has been determined to be internal-only and is not expected to impact customers. We will monitor for any developments to the contrary for a period of time to ensure this remains true. If you experience any issues with this service and require immediate assistance, please <a href="https://cloud.linode.com/support/tickets?type=closed&drawerOpen=true">open a Support ticket from the Linode Cloud Manager.</a>
Our team is investigating an issue affecting the Object Storage service in our Chicago data center. During this time, users may experience connection timeouts and errors with this service.
Report: "Connectivity Issue - DE-FRA-2 (Frankfurt 2)"
Last updateStarting around 03:22 UTC on February 5, 2025, end-user traffic that traversed one of the data centers in Frankfurt experienced availability issues. Akamai's investigation revealed that the issue was caused by human error during a planned maintenance activity to upgrade network routers to the latest firmware version in the affected data center. The maintenance was scheduled for a specific set of routers, where only a subset was supposed to be taken out of service for the code upgrade. However, the engineer inadvertently took the entire set of routers out of service, leading to a service interruption for customer traffic. The existing internal alerts were triggered as expected, and the appropriate actions were taken. The change was rolled back to mitigate the impact, and the issue was mitigated at around 03:52 UTC on February 5, 2025. We have identified process and tooling gaps that could have helped prevent this issue, and we will take appropriate preventive actions once the root cause and contributing factors are determined to ensure such incidents are avoided in the future. We apologize for the impact. This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.
We haven’t observed any additional connectivity issues in our DE-FRA-2 (Frankfurt 2) data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Between 03:22 UTC and 03:52 UTC on February 5, 2025, our DE-FRA-2 (Frankfurt 2) data center experienced a network disruption that may have affected connectivity for some customers. Instances remained powered on and running locally during this period. Our team identified the issue and implemented a resolution, restoring normal network operations at 03:57 UTC. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Connectivity Issue - Toronto"
Last updateWe haven’t observed any additional connectivity issues in our service provider's Toronto data center, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to monitor for any further issues.
Our team became aware of an issue affecting connectivity in our service provider's Toronto data center caused by a power outage. The issue started at approximately 18:21 UTC on February 05, 2025 and was mitigated at approximately 19:13 UTC on February 05, 2025. During this time, users may have experienced intermittent connection timeouts and errors for all services deployed in this data center. We will be monitoring this to ensure that it remains stable. If you are still experiencing issues, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Service Issue - Object Storage - US-MIA (Miami)"
Last updateOur team has identified an issue affecting the Object Storage service between approximately 21:30 UTC on January 30, 2025, and 01:30 UTC on January 31, 2025. During this time, users may have experienced 403 errors. We implemented a fix as of 01:30 UTC on January 31, 2025, and haven't observed any additional issues with the Object Storage service since. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Report: "Service Issue - Block Storage Osaka"
Last updateWe haven’t observed any additional issues with the Block Storage service in Osaka, and will now consider this incident resolved. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
At this time we have been able to correct the issues affecting the Block Storage service in Osaka. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
Our team is investigating a recurrence of the issue affecting the Block Storage service in our Osaka data center. During this time, users may experience connection timeouts and errors with this service. We will share additional updates as we have more information.
At this time we have been able to correct the issues affecting the Block Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please <a href="https://cloud.linode.com/support/tickets">open a Support ticket</a> for assistance.
We are continuing to investigate this issue.
Our team is investigating an issue affecting the Block Storage service in our Osaka data center. During this time, users may experience connection timeouts and errors with this service. We will share additional updates as we have more information.