Historical record of incidents for Servebolt
Report: "Maintenance window to upgrade software packages in Europe & Singapore"
Last update: The scheduled maintenance has been completed.
Verification is currently underway for the maintenance items.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
We want to inform you about an upcoming upgrade on our hosting servers that will improve the overall performance, security, and reliability of your hosted websites and applications.

What's Changing: During this upgrade, we will be making enhancements to the underlying server infrastructure and updating software packages. These improvements are designed to provide you with a better hosting experience.

What You Need to Know:
- There is no action required on your part. Our technical team will handle the entire upgrade process seamlessly.
- While we do not anticipate any service interruptions, it's possible there may be brief moments of reduced performance during the maintenance window. Our team will be monitoring the process closely to minimize any impact.
- Your data and websites will remain safe and secure throughout the upgrade. Our robust backup and security measures are in place to ensure data integrity.

We appreciate your trust in Servebolt as your web hosting provider. Our goal is to continuously improve our services to offer you the best possible hosting experience.

If you have any questions or concerns regarding this upgrade, please do not hesitate to contact our support team. We are here to assist you and address any inquiries you may have.
Report: "Issues with search functionality in our Help Center"
Last update: Cookiebot was adjusting the load order of scripts that do not contain cookies. Although these scripts are 'essential' to the site, Cookiebot was not loading them on page initialization, but later. We adapted our search scripts to check whether the algoliasearch() function is available before initializing the search front-end control scripts (see the sketch below). Search is now available.
We are currently investigating an issue with the search functionality on our Servebolt Help Center.
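The resolution above describes deferring the Help Center search initialization until the algoliasearch() function is actually available. The sketch below illustrates that general pattern in TypeScript; it is a minimal example under our own assumptions (a browser global named algoliasearch, placeholder credentials, and an illustrative polling interval), not Servebolt's actual implementation.

```typescript
// Minimal sketch: wait for a late-loaded global before initializing the search UI.
// Assumes the Algolia <script> tag defines window.algoliasearch when it finally runs,
// which may be after page initialization if a consent manager defers the script.
declare global {
  interface Window {
    algoliasearch?: (appId: string, apiKey: string) => unknown;
  }
}

function whenAlgoliaReady(initSearch: () => void, retries = 50, intervalMs = 200): void {
  if (typeof window.algoliasearch === "function") {
    // The library is present; it is now safe to build the search front-end controls.
    initSearch();
    return;
  }
  if (retries <= 0) {
    console.warn("algoliasearch() never became available; search UI not initialized.");
    return;
  }
  // Poll briefly instead of assuming the script exists at DOMContentLoaded.
  window.setTimeout(() => whenAlgoliaReady(initSearch, retries - 1, intervalMs), intervalMs);
}

// Hypothetical usage: the app ID, API key, and client wiring are placeholders.
whenAlgoliaReady(() => {
  const client = window.algoliasearch!("APP_ID", "SEARCH_ONLY_API_KEY");
  void client; // ...initialize the help-center search controls with this client here.
});

export {};
```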
Report: "Routing issues via Cloudflare"
Last update: On March 25th, 2025, Servebolt experienced a network routing issue that resulted in degraded performance and, for some users, temporary unavailability. The issue primarily affected traffic routed through Cloudflare in Amsterdam and was due to routing irregularities between Cloudflare and ERA-IX (Eranium Internet Exchange), which lies outside Servebolt's infrastructure. While the root of the problem was external, we treat any service disruption with the utmost seriousness. Our team responded quickly and worked actively with our upstream partners to identify and resolve the issue.

## **Impact**

* **Who was affected:** Clients with traffic routed through Cloudflare's node in Amsterdam, primarily clients in the Netherlands and nearby regions. This includes traffic to websites running Accelerated Domains and Servebolt CDN.
* **What happened:** Intermittent or complete unavailability of services.
* **Who was not affected:** Clients served via other regions of the Cloudflare network, or clients not using Cloudflare, Accelerated Domains, or Servebolt CDN.

## **Preliminary Root Cause**

Current information indicates that the operator of ERA-IX, a major internet exchange in Amsterdam, [identified a routing issue between their systems and Cloudflare](https://status.eranium.io/cm8ob9lqs004czae34bwj2it3) at **10:46 CET**. To mitigate the issue, they temporarily disabled their connection to Cloudflare at **11:16 CET**. However, the change did not fully take effect across all upstream networks. This caused inconsistencies in how traffic was routed, which continued to disrupt connectivity for some users.

Although our upstream provider was not the source of the issue, a manual refresh of their network configuration at **15:30 CET** successfully restored normal traffic flow between Cloudflare, ERA-IX, and affected networks.

This incident occurred within one of the foundational layers of the internet, completely outside of Servebolt's infrastructure, but it still impacted how some users were able to reach our platform, primarily in and around the Netherlands. Identifying the root cause of incidents at this level of the internet is often time-consuming. The infrastructure involved spans multiple independent networks, each with limited visibility into the others. This is why resolution and confirmation took several hours, despite active collaboration between the involved parties.

## **Next Steps**

We are currently awaiting further clarification from Cloudflare, Blix, and Eranium. A full and final report will be published once this information is available; it is expected **this week**.
This incident has now been resolved. We will continue to monitor the impacted network, and work with our partners to fully understand the root cause.
We believe the issue has been found, and are currently monitoring the network.
Working with our ISP and Cloudflare, we believe we are close to a solution for the issue. We will give more updates as we move forward.
We are noticing other platforms with the same issues as us. We are currently trying to manually re-route all network traffic outside of the affected area.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
Ongoing investigations indicate that the issue is likely within the Cloudflare network. We are actively collaborating with Cloudflare to resolve the matter.
While continuing to investigate the issue, we have identified that our clients are still receiving traffic to their sites, although not via the same routes or at the expected volume. We will continue to investigate the root cause.
We are continuing to investigate this issue alongside our ISP.
We are currently investigating a routing issue in Amsterdam. More information will be shared when it becomes available.
Report: "DDoS attack on Amsterdam and Dallas locations"
Last update: This incident has been resolved.
In the following time periods we saw intermittent reduced availability for some customers hosted in these two locations: 04:45 to 05:07 UTC for Dallas, and 06:36 to 06:51 UTC for Amsterdam. We believe the attack to be fully mitigated as of 05:07 UTC for Dallas and 06:51 UTC for Amsterdam.
There are ongoing DDoS attacks against our Dallas and Amsterdam locations.
Report: "Incident detected, new crons fails to sync to server"
Last update: All environments have now been re-synced from our Admin Panel, and this incident is resolved.
We've identified an issue on our Servebolt Linux 8 platform where cron jobs created in our Admin Panel do not sync to the environments. We have implemented a fix for this issue and will re-sync all cron jobs not currently synced to the environments. If you have any questions or notice an issue linked to this, please contact our Support.
Report: "Hardware issue - dag-ams.servebolt.cloud"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing a hardware issue with this web server. Our team is actively investigating the problem to resolve it as quickly as possible. We apologize for any inconvenience this may cause and appreciate your patience during this time. We will provide updates as soon as we have more information.
Report: "karsten-dfw under heavy load/possible DDoS"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are currently monitoring.
We are continuing to work on a fix for this issue
We are currently able to manage the load on the server, while still investigating a permanent solution.
We are still looking for mitigations to get the current problem under control.
We are continuing to investigate the issue, while implementing mitigating measures.
The server karsten-dfw is currently under unsustainable load, possibly due to an ongoing DDoS attack.
Report: "Outage"
Last update: This incident has been resolved.
Our technicians continue to work on-site. Most servers in the region are now back at full health.
A hardware failure involving a network switch was identified. Our on-site technicians are bringing servers online now. Some servers in the region are already live; others will follow shortly.
Our operations crew has reached the location and is now doing an on-site investigation. An update is expected to soon follow.
Our servers are currently experiencing a network event in this region. Our technicians are on the way to investigate. An update will follow shortly.
Report: "Auke-ams under suspected DDoS"
Last update: The issue has been found to be related to a configuration change that was put in place to enhance the caching engine for static resources. The change triggered invalidation at certain traffic levels, causing repeated invalidation and cascading queries to the source server. This has now been reverted, and the methodology will be reevaluated.
We are currently investigating the issue and are working on mitigations.
Report: "Connection lost to Dallas datacenter"
Last update: This incident has been resolved.
Problem has now cleared. We are awaiting a report from our network provider on the issue.
We are continuing to investigate this issue
We are currently investigating this issue.
Report: "Karsten-dfw experiencing slow response times"
Last update: Karsten-DFW continues to operate normally. The problem has been resolved.
The abnormal traffic and server activity on Karsten-DFW have been resolved. We will continue monitoring the server.
We are currently investigating the issue and will report when we have more information.
Report: "Database failure on jitze-sgp.servebolt.com"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Problem has been identified and resolved. All services are functioning and operating normally.
We've had a database crash and failure on our jitze-sgp node and are working on recovering the service.
Report: "Elinor-osl sent old emails after a stopped queue had gone unnoticed"
Last update: At 20:34 a stale email queue containing cron log output was restarted when routine maintenance was conducted. The queue wasn't correctly configured in our monitoring systems and had therefore gone unnoticed. Upon restarting it, a batch of emails was sent erroneously before the queue was stopped. The problem has been identified and fixed; we apologize for the confusion this caused. We're implementing further checks and monitoring rules to prevent anything like this from occurring again.
Report: "database lockups"
Last update: All services are back to normal operation.
We're investigating an issue where several MariaDB instances have locked up, and we are mitigating the problems.
Report: "Bolt55 down"
Last update: This incident has been resolved.
Further issues were identified; the system was downgraded to an older kernel.
A fix has been implemented and we are monitoring the results.
bolt55 is experiencing issues and is currently being investigated.
Report: "Outage for "reidun-osl" in Oslo data center"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We've identified an issue with the currently active kernel and are working on implementing a fix. The services on reidun are currently partially down.
Report: "Partial outage in London."
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We're seeing degraded performance and some sites showing blank pages in London. Currently investigating the issue.
Report: "Excess traffic load on stine-ams"
Last update: The incident has been resolved. Traffic levels have returned to normal on the server.
We are continuing to monitor server stability. We will continue to provide updates until we are confident that the situation has been fully resolved.
The server appears to have stabilized and sites now appear to be active.
We're experiencing an elevated level of inbound traffic on this server. Some sites are reporting degraded performance. We are currently looking into the issue and attempting to mitigate.
Report: "Outage for "elinor-osl" in Oslo data center"
Last update: Systems have stabilized and are functioning normally. We will keep monitoring for anomalies.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are experiencing unscheduled reboots of the server, currently investigating the issue.
Report: "Network issues in Amsterdam and Oslo"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified as an issue with one of the IP transit providers. Both locations have very good peering, so the impact should be minimal.
We are currently investigating this issue.
Report: "Performance issue on odd-nyc"
Last update: This incident has been resolved.
The operations team have identified and resolved an issue and are currently monitoring to ensure system stability.
We are currently investigating degraded performance on the odd-nyc server. Updates to follow shortly.
Report: ""arnt-nyc" unavailable in data center New york"
Last update:

# Recent Service Disruption — Our Apologies

On Thursday, March 10th at approximately 05:10 EST we experienced a specific server outage at our New York data center. This resulted in service disruption for all websites hosted on this server.

With any significant event that affects our customers, we conduct an extensive examination to understand the root cause and develop a course of action to improve our systems and procedures. To that end, we wanted to provide a synopsis of the situation that occurred and our reassurance that we are working diligently to proactively mitigate and prevent future outages.

## Here's what happened

At 05:10 EST we performed urgent security updates to the Linux kernel. We perform these kinds of updates quite frequently, and they usually don't last any longer than 10 to 30 seconds. In order to prevent a recurrence of a related outage we had the previous day, we had prepared and tested a firmware update separately. Unfortunately, something went wrong and the BIOS chip didn't process the update as expected.

We attempted a new firmware upgrade through the management controller, but it started failing with strange errors, such as a size mismatch. The management controller had lost contact with the BIOS chip or had otherwise become confused. After various attempts to get the server back online by power cycling through the management controller, we unfortunately had to ask our data center provider for on-site assistance to physically power cycle the server and reset all hardware state so it could reboot.

At 06:20 EST the server started coming back online, and at 06:37 EST it was fully operational again.

## Here's what we're doing

Going forward, we will be adding additional steps to ensure firmware updates are handled separately from emergency security updates. We'll be adding new spare servers to our New York data center to have extra capacity available when we need it in such cases. We'll also consider the use of remote-controlled power distribution units (PDUs) where possible.

Outages disrupt your life and your business. We understand, and we take our responsibility to you very seriously. We sincerely apologize for the disruption and the inconvenience this likely has caused you. Please allow me to take this opportunity to thank you for your business and provide my personal assurance that we are dedicated to meeting our commitment to you.

Sincerely,
Erlend Eide
CEO
[Servebolt.com](http://Servebolt.com)
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "outage for "sterre-lon" in London data center"
Last update:

# Recent Service Disruption — Our Apologies

On Wednesday, March 9th at approximately 08:15 CET we experienced a specific server outage at our London data center. This resulted in service disruption for all websites hosted on this server.

With any significant event that affects our customers, we conduct an extensive examination to understand the root cause and develop a course of action to improve our systems and procedures. To that end, we wanted to provide a synopsis of the situation that occurred and our reassurance that we are working diligently to proactively mitigate and prevent future outages.

## Here's what happened

Earlier that day, at 04:10 CET, we performed urgent security updates to the Linux kernel. We perform these kinds of kernel security updates quite frequently, and they usually don't last any longer than 10 to 30 seconds. We were actively monitoring the server's performance, and it was performing as expected. We had already performed the same update on about 40 servers without any problems, but after about four hours, at approximately 08:15 CET, MariaDB started crashing, ramping up to a full outage at 09:10 CET.

Our Operations team started working on the problem, but it quickly became evident that the MariaDB logs had been corrupted. In the time that followed we initiated a full restore from the backup server to a spare server in case the data turned out to be permanently damaged. In the meantime, Operations continued working on recovering the corrupted databases. At 11:20 CET we were able to confirm the full recovery of 97% of the affected databases. The remaining 3% unfortunately had to be restored from the backup server. At 13:00 CET all databases were restored and recovered, and the incident was closed.

## Here's what we're doing

Our research into the root cause of the issue has identified incompatible firmware versions as the cause. Going forward, we will be adding additional steps to ensure such incompatibilities are mitigated and handled separately from emergency security updates.

Outages disrupt your life and your business. We understand, and we take our responsibility to you very seriously. We sincerely apologize for the disruption and the inconvenience this likely has caused you. Please allow me to take this opportunity to thank you for your business and provide my personal assurance that we are dedicated to meeting our commitment to you.

Sincerely,
Erlend Eide
CEO
[Servebolt.com](http://Servebolt.com)
This incident has been resolved. The majority of databases were recovered without data loss. Only a few needed to be restored from backup.
We’ve been able to restore most of the databases and are currently monitoring the server.
We are continuing to work on fixing the issue. Please reach out to support if you have any urgent questions.
We are continuing to work on a fix for this issue.
We are currently experiencing a database crash on "sterre-lon". Unfortunately, the databases seem to have been corrupted beyond repair and are now being restored from backups.
Report: "Partial network outage"
Last update: The network incident affecting one server in Amsterdam has been resolved.
We are currently experiencing some network issues with respect to a server in AMS. We are looking into it and will update as new information comes in.
Report: "Support chat and billing service provider currently experiencing issues. We will update as more information comes in."
Last update: Services now appear stable.
We are currently investigating this issue.
Report: "Singapore Connecitivy issues"
Last update: This incident has been resolved.
A fix has been implemented
The issue has not yet been identified.
Our ISP has acknowledged the routing issue and is investigating
Some customers are currently experiencing difficulty connecting to our Singapore data center.
Report: "Sterre-LON has problems with the database"
Last update: This incident has been resolved.
We have implemented a fix and are monitoring the situation closely.
The issue has been identified and a fix is being worked on.
We are still investigating this issue. It has the highest priority.
Sterre-lon is experiencing database issues causing 500 errors. We are investigating.
Report: "potential network instability for site London"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating a networking issue in London. Services are mostly OK, but there are some hiccups from time to time.
Report: "Host egil-osl is currently unavailable"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "Problem with transit traffic in Oslo location"
Last update: This incident has been resolved.
A fix was implemented about 20 minutes ago. We did not observe any major loss in actual traffic, so we expect most users did not notice it and that the impact was minimal.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Connectivity issues"
Last update: We've written a postmortem on our main site about the Telia related connectivity issues: [https://servebolt.com/articles/telia-went-down-and-so-did-we/](https://servebolt.com/articles/telia-went-down-and-so-did-we/)
This incident has been resolved.
We are now seeing some more recovery for New York with more traffic reaching the site.
The problem has been identified as an issue with a major internet backbone provider. We are currently observing mostly normal operation in Europe, but New York is still limited. However, it appears that if your site is behind Accelerated Domains, Cloudflare, or Fastly, it should be mostly operational in New York as well.
We are observing some connectivity issues globally. It seems to affect the internet in general.
Report: "Control Panel momentarily unavailable"
Last update: The Control Panel is fully operational again.
The issue has been identified and a fix is being implemented.
Our Control Panel is currently unavailable and is expected to be up and running in approximately 30 minutes.
Report: "Connectivity issue in New York"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The problem has been identified as an upstream routing/BGP-related issue affecting a large portion of our transit connectivity at this site.
We are currently investigating this issue.
Report: "Database problem on bolt54 in Oslo"
Last update: bolt54 in Oslo experienced a database problem at 13:15 CET. Service was recovered at 13:37.
Report: "Server configuration issues"
Last update: This incident has been resolved.
A temporary fix has been implemented, making all services run again. A permanent fix will be rolled out within a few hours.
We have identified the issue and are working on implementing a fix.
We are currently investigating a problem with our internal server configuration system. As a result, configuration changes to sites/environments, git checkout, and other services in our control panel do not work. All changes made in the control panel that are affected by this issue will be rolled out once the system is up and running again.
Report: "Database stability problem in New York"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are investigating a database stability issue affecting the node "bolt70" in our New York data center.
Report: "Problems with services in Johannesburg data center"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Some databases are logging errors in the Johannesburg location. To fix this we need to deactivate some websites temporarily. We expect to have service restored in around 15 minutes.
Report: "Some databases unavailable for customers in Oslo data center"
Last update: A fix was implemented at 09:55 CET, and databases have been available since then.
Some MariaDB/MySQL databases are unavailable in our Oslo location. We have identified the problem and are working on a fix.
Report: "Performance problem in Johannesburg location"
Last update: This incident has been resolved.
Our Johannesburg location is currently suffering from a severe performance degradation issue. To attempt a fix for this problem we need to take some sites offline for a few minutes. We expect this to last less than 5 minutes.
Report: "Problems with stine-ams in Amsterdam datacenter"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently experiencing performance problems with stine-ams in Amsterdam.
Report: "Stability problems with databases"
Last update: This incident has been resolved.
All databases in all locations should now be fully stable. We are monitoring the results.
Databases located in the Amsterdam and Johannesburg data centers are now also back to being fully stable. Oslo remains in progress.
Databases located in the New York, London, and Singapore data centers are now back to being fully stable. We are continuing to work on the remaining sites. We are not aware of any significant impact to customer traffic; however, a small number of requests have failed, as well as some cron jobs.
The issue has been identified and a fix is being implemented.
We are investigating some intermittent stability problems with databases on our platform.
Report: "London servers unavailable"
Last update:

## Recent Service Disruption — Our Apologies

On Friday, February 26th at approximately 11:14 CET we experienced a server outage at our London data center. This resulted in service disruption for all websites hosted in this data center.

With any significant event that affects our customers, we conduct an extensive examination to understand the root cause and develop a course of action to improve our systems and procedures. To that end, we wanted to provide a synopsis of the situation that occurred and our reassurance that we are working diligently to proactively mitigate and prevent future outages.

## Here's what happened

The neighboring server rack in our data center was scheduled to be decommissioned. Remote hands personnel at the data center unfortunately unplugged equipment in the wrong rack, directly affecting our servers. They pulled out networking and power cables, power supplies, and other hardware, taking all our servers down.

This was noticed at 11:14 CET by our monitoring services, and this is when our investigation started. At 11:33 CET it was clear that human error was at play and that everything had to be reconnected. Rewiring the servers in a rack takes a lot of time and is very difficult work, because the right cables need to be plugged into the correct computers and network ports for everything to work as intended. At 13:28 CET the first server was fully restored, and work continued to get the rest online as well. At 13:48 CET all cables and hardware for the remaining servers had been configured correctly again and traffic was restored.

## Here's what we're doing

We are working with our upstream data center providers on practices that will reduce the possibility of events like this happening through human error again.

Outages disrupt your life and your business. We understand, and we take our responsibility to you very seriously. We sincerely apologize for the disruption and the inconvenience this likely has caused you. Please allow me to take this opportunity to thank you for your business and provide my personal assurance that we are dedicated to meeting our commitment to you.

Sincerely,
Erlend Eide
CEO
[Servebolt.com](http://Servebolt.com)
This incident has been resolved.
All the London traffic seems to have recovered as of 13:48 CET; we are actively validating and monitoring the situation.
Approximately 50% of our London traffic has been back online since 1:27 CET. Our provider is still working on getting the rest up and running as soon as practically possible.
We are currently investigating this issue.
Report: "Server catrine-ams unavailable"
Last update:

# Recent Maintenance Failure

We experienced a hardware failure on one server during our announced service window in the early morning of Sunday, February 14th, at approximately 04:29 CET in our Amsterdam data center. These technical issues forced us to move the sites to a backup server. No customer data was lost during this process.

## Here's what happened

During the planned update and upgrade procedures, the server stopped responding and was diagnosed as dead after approximately 2 hours of attempted recovery by the Servebolt operations team and on-site personnel. After the decision was made to move the sites to backup infrastructure, it took about an hour for all services to be recovered, at 09:08 CET.

## Here's what we're doing

We've successfully moved all Bolts and their sites over to a backup server without any data loss. We have started the process of replacing the affected server and will move customer Bolts back during a nightly service window once that process is finished.

Sincerely,
Erlend Eide
CEO
[Servebolt.com](http://Servebolt.com)
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently moving the data to a new system.
During maintenance this server lost all connectivity. We are working on getting it back up.
Report: "Servebolt.com"
Last update: We had an incident when migrating our internal DNS zones, affecting servebolt.com. We recovered from the incident within 15 minutes.
Report: "Some connectivity issues with Johannesburg location"
Last update: This incident has been resolved.
The issue has cleared
There is currently significant packet loss on some transit paths for the Johannesburg location. We are investigating.
Report: "Some connectivity issues with Johannesburg location"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are seeing some connectivity issues with our Johannesburg location from some networks in the USA. As traffic levels are looking normal, we believe this is a minor problem. This is the same issue as the previous incident; unfortunately, it was closed prematurely.
Report: "Some connectivity issues with Johannesburg location"
Last update: This incident has been resolved.
We are seeing some connectivity issues with our Johannesburg location from some networks in the USA. As traffic levels are looking normal we believe this is a minor problem.
Report: "Slow connectivity to certain networks from Oslo data center"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are seeing slow connectivity to certain networks from the Oslo data center. This seems to impact services on AWS, among other things, resulting in reduced performance for sites using some external services. We are working to remedy this.
Report: "Connectivity issues in Singapore"
Last update: This incident has been resolved.
There was an issue with traffic disappearing within one of our upstream transit providers. The traffic has been re-routed to restore connectivity.
Connectivity is currently very limited for our location in Singapore. We are investigating.
Report: "Connectivity issue in New York"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Our main transit link is having problems due to an outage at Telia Carrier. Unfortunately, the backup provider link is also down for a different reason. Our networking provider is working hard on resolving the issue.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Control panel maintenance"
Last update: This incident has been resolved.
The maintenance is finished, and we're monitoring that everything is working as expected.
The maintenance is still in progress. We are sorry for the inconvenience.
We need to do an unscheduled maintenance of our control panel.