Historical record of incidents for Path
Report: "GTT Maintenance in Sofia"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
GTT will be performing maintenance in Sofia which may cause an interruption in service.
Report: "Upcoming Route Server Activation in Phoenix, USA"
Last update: The scheduled maintenance has been completed.
Verification is currently underway for the maintenance items.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
We are pleased to announce the activation of a new route server in Phoenix, USA, to enhance connectivity and performance for our customers across the Western US region. Please note, this activation will only affect customers using GRE tunnels within the Western US region, who will be moved to the new route server. This may result in a temporary BGP flap as routes converge and stabilize. The disruption is expected to be brief, with any flapping lasting only a short period. We appreciate your understanding as we work to improve your network experience. For any questions or further assistance, please reach out to our support team.
Report: "GTT Emergency Maintenance in Sofia"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
GTT will be performing emergency maintenance in Sofia, which may cause an interruption in service.
Report: "Madrid Outage"
Last update: We are currently investigating an outage affecting our Madrid point of presence.
Report: "Madrid Outage"
Last update: We are currently investigating an outage affecting our Madrid point of presence.
Report: "Phoenix Outage"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are currently investigating an outage with our Phoenix PoP.
Report: "New York Outage"
Last update: We're investigating an outage with our New York PoP.
Report: "New York Outage"
Last update: We're investigating an outage with our New York PoP.
Report: "New York Outage"
Last update: This incident has been resolved.
We have identified the issue as an L2 connectivity problem between our appliances and our upstream, and are working towards a resolution.
We have received confirmation of a service interruption from our upstream and are working with them to resolve the problem.
We're currently investigating a problem with one of our upstream carrier ports in our New York location.
Report: "Singapore PoP Outage"
Last update: This incident has been resolved.
We are currently waiting for Telstra to run a new cross connect, after which Singapore will return to full performance.
We are currently investigating an outage with our Singapore point of presence.
Report: "Tokyo PoP Outage"
Last update: This incident has been resolved.
We estimate completion in the next few days.
We are currently waiting for Telstra to run a new cross connect, after which Tokyo will return to full performance.
The datacenter has been rejecting the shipments. We've contacted them multiple times to confirm it's a legitimate package, and the matter has been escalated to our account manager; we are awaiting further instructions from them.
We have pre-emptively ordered a replacement part as we continue to look into this issue.
We are currently investigating an issue with our Tokyo Point of Presence. We've already reached out to an on-site technician and will update as we learn more.
Report: "Frankfurt PoP Incident"
Last update: The issue in the Frankfurt PoP has been successfully mitigated. All systems are operating normally. We will continue to monitor for stability.
We are currently looking into an issue at our Frankfurt Point of Presence and will provide updates as we make progress.
Report: "Miami PoP Outage"
Last update: This incident has been resolved.
We are awaiting a new cross connect from our transit provider and Miami will return to full capacity.
Awaiting implementation progress from Equinix.
Working together with Equinix technicians, we've found the issue. We are waiting for the on-site electrician to return to work, after which Miami will be restored to full capacity. We will be bringing on additional capacity soon to mitigate further issues with the Miami location.
We are in contact with a Tier 2 technician to resolve this issue. We will update this post as we learn more.
We are currently investigating an outage with our Miami point of presence.
Report: "Packet Loss in London"
Last update: This incident has been resolved.
Our upstream provider has determined the cause, and is working to reroute traffic.
We've identified an issue causing packet loss with an upstream provider in London, and have engaged the provider to investigate. An update will be provided once available.
Report: "Los Angeles Migration"
Last update: The location is now operational following its migration to a new facility.
We are currently migrating our Los Angeles PoP.
Report: "Dallas Migration"
Last update: The location is now operational following its migration to a new facility.
The Dallas migration is almost complete; we are now only waiting for our transit providers to run their ports.
Report: "API Problems"
Last update: This incident has been resolved.
We have identified and resolved the API issue and are currently working on determining the root cause.
We've identified a glitch within our API system that is concurrently impacting our portal. We acknowledge the issue and are actively investigating to pinpoint the root cause and implement the necessary fixes.
Report: "Silicon Valley Outage"
Last update: This incident has been resolved.
Traffic has been routed back to the site. We are monitoring for any issues.
The issue with GTT has been resolved, and we will be rerouting traffic back to the site shortly.
GTT has estimated a resolution of January 7th.
GTT has identified the root cause for the outage, and is working with facility technicians and their engineers on a resolution.
GTT has confirmed an outage in Silicon Valley and is currently investigating the root cause. Updates will be provided as they become available.
Report: "Miami Outage"
Last update: This incident has been resolved.
A fix has been implemented by GTT and we are monitoring the results.
GTT has classified the issue as a major outage due to the isolation of one of their routers within Miami. We are awaiting further details.
Path has restored connectivity for local prefixes with a workaround while we wait for a response from our upstream. In the meantime, WAF traffic and remote protection customers are still diverted to other PoPs.
Our monitoring has detected a reachability problem between our Miami PoP and other North American sites over our primary upstream. We believe this to be related to the previous outage, which was the result of our carrier's router becoming isolated from the rest of their network. For the time being, Path's team has escalated the problem to said carrier and diverted traffic away from Miami.
Report: "Singapore Outage"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating an outage at our Singapore point of presence.
Report: "Miami Outage"
Last update: Now that we have confirmed everything is stable again, we have reinstated the PoP into production.
We have collaborated with our upstream provider to resolve the issue. Currently, we are monitoring the system's stability before reintroducing the PoP into production.
One of our transit providers is having issues in our Miami, Florida region. We have already reached out and are attempting to route around it on our end.
Report: "Sofia Outage"
Last update: The issue we observed turned out to be a false positive and had no impact on our PoP. Therefore, we will proceed to close this incident.
We are currently experiencing an issue at our Sofia PoP and are investigating the root cause.
Report: "Singapore Outage"
Last update: Technicians have replaced broken components and Singapore has returned to full capacity.
Singapore should be coming back online.
Shipments have been accepted; we are awaiting further updates from the datacenter technician.
We are in contact with the datacenter and awaiting inbound shipments.
We are currently investigating outages to specific clients in Singapore.
Report: "FL-IX Outage"
Last update: FL-IX traffic has returned to normal and monitoring has determined the connection to be stable. This incident is now resolved.
FL-IX technicians have implemented a fix and are monitoring the status. We will be moving traffic back to FL-IX shortly and will continue to monitor on our end.
We've identified the cause of the issue on FL-IX's end, and we have diverted traffic away from FL-IX to other carriers. We are awaiting an update from FL-IX for further information.
We are currently investigating an outage with FL-IX in Miami.
Report: "Newyork PoP degradation"
Last update: After investigation, we identified the root cause as a failure in one of our routers responsible for terminating a GRE session. This triggered an automatic traffic rerouting to a nearby PoP, which led to cascading issues. We promptly redistributed the traffic across other PoPs and repaired the affected hardware before returning it to production.
We have identified an issue at our New York PoP and are currently investigating it.
Report: "Miami Outage"
Last update: This incident has been resolved.
We've received a response from our transit provider and connectivity has been restored. We are monitoring before returning full traffic to the PoP.
One of our transit providers is having issues in our Miami, Florida region. We have already reached out and are attempting to route around it on our end.
Report: "Chicago PoP degradation"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified an issue with one of our upstream connections in Chicago, which is affecting our inter-site tunnels at this PoP. We are currently collaborating with the upstream provider to determine the root cause.
Report: "London Partial Outage"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
We are currently investigating issues with outbound connections in our London PoP.
Report: "Miami outage"
Last update: This incident has been resolved.
We've identified the problem as a brief loss of connectivity with one of our upstreams and are reaching out to them for further details on what occurred.
Miami has come back online, but we are still investigating the cause.
We are currently investigating an issue affecting the entirety of our Miami location.
Report: "Dallas Outage"
Last update: This incident has been resolved.
An onsite technician has misidentified an important connection to the switch. We've reached out to resolve the issue.
We are currently investigating an issue affecting the entirety of our Dallas location.
Report: "Singapore Partial Outage"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Singapore PoP Incident"
Last update: This incident has been resolved.
We have brought the Singapore PoP online with a new upstream. We are currently monitoring the situation to ensure stability before concluding the incident.
We are currently working on bringing the Singapore PoP online and adding a new carrier to the blend at this PoP within the next few hours. As a result, there will be expected route changes. We will also strive to minimize downtime during this maintenance.
All equipment has arrived and is on-site. Our on-site team is working remotely with our vendor to provision the equipment.
Our vendor has identified a potential spare part, and we are working closely with them at this time. They have provided a potential ETA of Monday, May 20th. As further information becomes available, we will provide updates.
We have identified the issue and determined that the cause was related to the switch hardware. While this normally would be handled with redundant equipment, we are upgrading the network. We are working with our local vendors to source the replacement equipment to restore full services. In the meantime, all services have been shifted to other locations, and remote protection services have been restored. We will post updates on the replacement hardware here.
We have learned from Equinix that there was a cooling and power issue at this PoP around the same time. We are currently working with Equinix to identify the source of the problem.
We have identified an issue with our Singapore PoP and are currently investigating. Appropriate actions will be taken based on our findings.
Report: "Chicago PoP Incident"
Last update: The MTU issue was resolved.
We have identified an issue with one of our upstream connections in Chicago, which is affecting our intersite tunnel at this PoP. We are currently collaborating with the upstream provider to determine the root cause.
Report: "New York PoP Incident"
Last update: After thorough monitoring, we can confirm that everything is functioning normally and remains stable at this point of presence (PoP). Therefore, we will proceed to close this incident ticket.
Unfortunately, a recently introduced software bug in our system was creating a black hole for newly announced prefixes. Upon noticing the issue in New York, we redirected traffic to the Chicago PoP, but encountered further complications due to the same bug. Promptly identifying the root cause within our software, we swiftly applied a hotfix. Following thorough confirmation of stability, we returned traffic to New York and can affirm its continued smooth operation. We'll maintain monitoring to ensure seamless progress before concluding this incident.
To minimize downtime, we have temporarily redirected traffic away from the New York PoP while we work on resolving the issue.
We have identified a routing issue at our New York PoP and are currently investigating it. We will provide updates as soon as we have more information.
Report: "Amsterdam PoP Incident"
Last update: This incident has been resolved.
Unfortunately, a human error caused an issue on our router in Amsterdam. We immediately noticed and reverted the changes. However, some of the BGP sessions were restarted. We can now confirm that everything is stable and back to production.
We have identified an issue at our Amsterdam PoP and are currently investigating it.
Report: "Chicago Packet Loss"
Last update: Stability in Chicago has been restored and traffic should continue to balance.
We are investigating an issue in Chicago that is causing packet loss.
Report: "Lumen/Level3 Congestion in Europe"
Last update: This incident has been resolved.
Lumen/Level3 appears to have adjusted routing to alleviate the congestion.
Lumen/Level3 (AS3356) appears to be experiencing heavy congestion and packet loss in Europe, specifically in London and Frankfurt. We are monitoring the situation and will update once there is a change in the situation.
Report: "Google Singapore routing traffic to the US"
Last update: This incident has been resolved.
Google has made routing changes that appear to have resolved the issue. We will continue to monitor.
We've identified a Google Singapore issue where all traffic sent to Google across peering or transit paths in Singapore is being carried to the west coast of the US. This is creating additional latency for anycast-based Google services like DNS. We have reached out to Google and are awaiting resolution or additional information.
Report: "Frankfurt PoP Incident"
Last update: This incident has been resolved.
The maintenance appears to be complete; we are ensuring the network remains stable and comes back online safely.
GTT is currently performing maintenance on their core infrastructure in Frankfurt.
We've noticed a problem at our Frankfurt Point of Presence (PoP) and are currently in the process of restoring the service.
Report: "ERA-IX Incident"
Last update: Heavy BUM traffic from ERA-IX led to a cascading issue for us while we were in the process of rerouting. Our team promptly began investigating the root cause and successfully restored the PoP. At present, we will keep ERA-IX offline until we ensure its stability.
We've identified a problem with traffic flowing through ERA-IX and have opted to deactivate the port and divert traffic elsewhere. We're collaborating with ERA-IX to ascertain the estimated service recovery time and will keep you informed. Ref: https://www.era-ix.com/statistics
Report: "FL-IX Incident"
Last update: This incident has been resolved.
Path has identified the issue, and at this time our FL-IX sessions have been established. We are monitoring the situation.
The Path NOC has been made aware of an outage with our FL-IX sessions. We are investigating this issue and will update when more information is available.
Report: "London PoP incident"
Last update: This incident has been resolved.
After conducting our investigation, we can confirm that the traffic has returned to the same level as before. We will continue to monitor the situation to ensure smooth operation.
While we were in the process of installing a new cross-connect in our London Point of Presence (PoP), unfortunately, the port was activated inadvertently, resulting in a brief loop on our core switch and temporarily blocking traffic for a few seconds. We are currently evaluating the situation to ensure that everything returns to normalcy.
Report: "Amsterdam PoP Outage"
Last update: Upon monitoring, we can verify that all systems are stable, and thus this incident can be marked as closed.
We have replaced the faulty server and redirected the traffic to the Amsterdam PoP. We will continue to monitor the situation and take appropriate action as needed.
We've identified a problem with one of our appliances responsible for BGP termination. Consequently, we've rerouted GRE traffic for our customers away from this Point of Presence (PoP) as we address the issue.
Report: "Degraded performance - Global"
Last update: This incident has been resolved.
The traffic has already been redirected and evenly distributed across our other upstream sources. We'll collaborate with the mentioned service provider to investigate the underlying cause of the issue.
We've observed an issue with one of our carriers and are presently investigating to pinpoint the underlying cause.
Report: "Chicago Outage"
Last update: Upon verifying the power feed status with Equinix, we can affirm that no additional disruptions are anticipated. Consequently, we will restore GRE traffic to Chicago as it was previously.
After conducting a more in-depth examination, we observed that maintenance work by Equinix on a Power Distribution Unit (PDU) triggered a kernel panic in one of our devices. To prevent potential disruptions, we have relocated GRE customers away from this Point of Presence (PoP).
We are continuing to work on a fix for this issue.
We observed a failure in Chicago with one of our servers. To address this issue, we promptly removed it from production and redistributed the traffic across other operational servers. However, it's worth noting that there was a noticeable decrease in traffic during the failover period.
Report: "Frankfurt Outage"
Last update: After locating what we believe to have been the fault, our team has implemented a fix, in addition to making the necessary changes to proactively identify the catalyst for this incident before it has the chance to recur.
Our team has restored the remaining services in Frankfurt and is monitoring for stability while we continue to work toward a full understanding of what occurred.
While the site appears to have come back online, we are still attempting to identify the root cause.
Our team has been alerted to and is investigating a connectivity problem relating to our Frankfurt PoP. We will provide updates when we have more information.
Report: "Silicon valley PoP Incident"
Last update: We've successfully eliminated the problematic server affecting TCP traffic and are now focused on identifying its precise root cause. Normalcy should be restored in the Silicon Valley Point of Presence.
During the maintenance at https://status.path.net/incidents/jrqx8yb6tyhk, following the installation of a new scrubbing server, a problem was identified with its handling of TCP traffic. We are presently redirecting traffic away from this server and preparing to remove it from the production environment.
Report: "London PoP Incident"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We observed a hardware failure in one of our appliances, leading to automatic redirection of traffic to alternative scrubbing servers. However, this issue was evident in both TCP applications and GRE traffic. We are actively investigating to determine the underlying cause of the hardware malfunction.
Report: "Chicago PoP Incident"
Last update: This incident has been resolved.
We have reconfigured the new load balancer under a different Linux kernel and introduced it back into production. Our team is closely monitoring the health of this machine to verify that it continues to serve traffic properly.
We attempted to introduce additional capacity in CHI. While this was expected to happen seamlessly, we noticed a problem shortly after reintroducing it, which required us to immediately pull it. We're working on rectifying the problem and bringing it back into rotation.
Report: "FRA & AMS PoP outage"
Last update: This incident has been resolved.
We've identified a problem with one of our upstreams at certain points of presence (PoPs). In response, we automatically redirected the traffic through alternative transit providers. However, this led to disruptions in GRE connections as well. We are actively collaborating with the upstream provider to investigate and address the root cause of the issue.
Report: "London PoP Outage"
Last update: We have pinpointed a malfunctioning QSFP that was causing disruptions on the switchboard, impacting the entire operation of the PoP. We have since replaced the defective module, and after monitoring, everything appears to be stable. Consequently, we are closing this incident.
We are continuing to monitor for any further issues.
We identified a hardware issue that led to a chain reaction in our dataplane. Our redundancy mechanisms swiftly transferred the traffic to Switch B, resulting in a BGP flap across all sessions. Currently, the traffic is at its usual levels, and real-time convergence is underway. We will monitor and investigate the root cause over the coming minutes.
We've detected a problem with our London Point of Presence (PoP) and are presently looking into the underlying cause.
Report: "Arelion congestion between London and Europe"
Last update: This incident has been resolved.
Arelion is currently not experiencing significant traffic congestion. We are closely monitoring the situation to ensure that this improvement remains consistent.
We are awaiting an update from Arelion regarding this situation. Several diversions have been added already to steer traffic around this.
Arelion has identified heightened traffic congestion on specific network routes as a result of fiber interruptions in their core network. We've implemented rerouting measures to circumvent these issues. Nevertheless, some ISPs that solely rely on Arelion may still experience increased latency and packet loss.
Report: "API Problems"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We've identified a glitch within our API system that is concurrently impacting our portal. We acknowledge the issue and are actively investigating to pinpoint the root cause and implement the necessary fixes.
Report: "Degraded performance in New York"
Last update: This incident has been resolved.
We collaborated with Hurricane Electric to address the issue, and since then, the traffic at this Point of Presence (PoP) has remained stable. Consequently, we can now consider this incident resolved.
It seems that the disconnections are linked to a problem within Hurricane Electric at the DE-CIX, where it ceased announcing prefixes on two occasions for a brief period. We are actively engaged in monitoring the situation and endeavoring to identify a potential workaround to mitigate this issue.
We are currently in the process of investigating multiple reported issues that have arisen for the New York Point of Presence (NYC PoP).
Report: "Degraded performance in Chicago"
Last update: We can confirm our link with GTT has been stable for a few hours, and the Equinix cooling issue has also been resolved. ------------------ Equinix IBX Site Staff reports that colocation temperatures continue to remain stable, with a current average temperature remaining at 95°F. Four portable coolers are now online and eleven more are being installed. For customers who powered down equipment, we strongly recommend waiting to restore operations until the temperatures reach a lower overall range.
We have successfully redirected a significant portion of the traffic away from GTT in this Point of Presence (PoP), resulting in the traffic levels at our PoP returning to normal. We will continue to monitor the situation and collaborate with GTT to restore the missing uplink.
Message from our Transit (GTT): Following the ongoing outage at Equinix CH1 that caused multiple devices to overheat, we can confirm that there is a slight decrease in the temperature but it is still above threshold. As per Equinix's last update, they were able to restore two chillers, with a third functioning at redundant capacity. The engineers continue to work to restore all 6 chillers but there is no ETR. As mentioned, the onsite engineers have deployed all available floor fans, opened all available doors, and engaged a local rental company to source additional fans and portable spot coolers. We also have an onsite tech waiting on standby just in case troubleshooting is needed when the temperature drops. We will keep you posted on the progress.
Due to a persistent problem at Equinix Chicago, we are experiencing issues affecting our transit services. We are actively addressing the issue by rerouting customer traffic through alternative Points of Presence (PoPs) to contain and resolve the issue.