LUMASERV

Is LUMASERV Down Right Now? Check whether there is an ongoing outage.

LUMASERV is currently Operational

Last checked from LUMASERV's official status page

Historical record of incidents for LUMASERV

Report: "Network disturbance at FRA01"

Last update
postmortem

At approximately 15:00 CET on April 2nd, 2025, we experienced a brief network disruption affecting our Frankfurt (FRA) infrastructure. The incident occurred during a scheduled maintenance window for one of our upstream providers. The network disruption lasted for approximately seven minutes, during which some of our services experienced reduced accessibility. Our network engineering team identified the issue immediately and implemented necessary measures to restore service stability. While we typically handle such maintenance windows without any customer impact, this particular incident resulted in a brief service interruption. We have conducted a thorough review of our maintenance procedures and will be implementing additional safeguards to prevent similar incidents in the future. We understand the impact this may have had on your operations and apologize for any inconvenience caused. Our team remains committed to providing reliable service and continuous improvement of our infrastructure management processes. If you have any questions or concerns, please don't hesitate to reach out to us.

resolved

This incident has been resolved.

monitoring

The network issue has been fixed and our network is stable again. We will keep monitoring and provide updates here.

investigating

We are currently experiencing network issues at our Frankfurt locations.

Report: "Major outage of our compute infrastructure in FRA-2"

Last update
resolved

All systems and services have been stable since the recovery and normal operational state has been fully restored. The root cause was a power outage in one of our locations. We don't have information about the exact cause yet but are in contact with the datacenter provider to get more details about the incident. To ensure that a similar incident can be preempted or mitigated, we will conduct an internal review and inform you of our next steps as soon as it has been completed.

monitoring

All services are running as expected again. We will keep monitoring and leave the incident open until we publish an official statement tomorrow.

monitoring

We brought all services back up and are closely monitoring them.

identified

We are currently booting all affected servers again. The services will be back online within the next few minutes.

identified

We've identified the issue and are working on recovering all services as quickly as possible.

investigating

We're currently experiencing a major outage on our compute infrastructure in our FRA-2 location. We're investigating the issue and will provide additional information soon.

Report: "Single Hypervisor Outage"

Last update
resolved

This incident has been resolved.

monitoring

The hypervisor has been taken out of service and all services have been distributed to different hypervisors. We're closely monitoring the situation and will post a postmortem once we have all information on what happened.

investigating

One of our hypervisors became unavailable; we're currently investigating the issue.

Report: "Network outage in FRA"

Last update
resolved

This incident has been resolved.

monitoring

The issue has been mitigated. We will continue to monitor and post an update once we have full information on what exactly happened.

investigating

We are currently investigating.

Report: "Temporary packet loss due to outage of an uplink provider"

Last update
resolved

One of our uplink providers experienced a hardware failure resulting in an outage of that uplink. Our redundancy took over automatically, but temporary packet loss was noticeable during the switch-over phase.

Report: "Hardware failure on virtualisation node ls-ds-46"

Last update
resolved

This incident has been resolved.

investigating

Today at 18:47 CEST we had a hardware failure on our virtualization node ls-ds-46. After the problem could not be solved via IPMI, we migrated all VMs to other hosts and restarted them. By 19:24, all VMs had been fully restarted. The affected host was deactivated immediately and will be investigated and tested before being reactivated.

Report: "Hardware failure on ls-ds-32"

Last update
resolved

This incident has been resolved.

investigating

At 8:42pm CEST we experienced a hardware failure on our cloud server hypervisor "ls-ds-32". We have taken this node out of service for now and rebooted all VMs on another hypervisor. By 9:06pm, all VMs were online again.

Report: "Kernel errors with supervisor ls-ds-33"

Last update
resolved

This incident has been resolved.

identified

A few minutes ago we encountered kernel errors on our hypervisor ls-ds-33. This caused high CPU usage and we had to reboot a few VMs on this host. The VMs are back online and we are investigating the cause.

Report: "Cloud storage performance issues"

Last update
resolved

This incident has been resolved.

identified

We've identified the problem and the systems have been stable since 9:56am.

investigating

Since 9:42am we have been experiencing a performance issue with our cloud storage, which is based on CEPH. Our technicians are currently investigating the cause and we will keep you updated here.

Report: "Minor network disturbance"

Last update
resolved

Today at 00:21 local time (CEST) our monitoring systems detected a network outage in our core network in Frankfurt. We quickly investigated the issue and found that a session to one of our upstream providers had been lost. Thanks to multi-redundancy, the traffic was automatically distributed to other upstream providers, so the impact was barely noticeable. We got in contact with the upstream provider and were able to re-establish the session and restore full redundancy.

Report: "Network disturbance"

Last update
resolved

This incident has been resolved.

monitoring

Today at 8:30pm our monitoring systems reported packet loss on our cloud infrastructure at FRA-1. At 8:52pm the problem was identified and fully resolved. The error occurred due to a misconfigured interface on one of our hypervisors, which caused high load within our network. The configuration has been corrected and the network is stable again. We apologize for the inconvenience.

Report: "Kernel errors with supervisor ls-ds-42"

Last update
resolved

We have identified the cause and resolved it for the future.

investigating

We are continuing to investigate this issue.

investigating

All servers on this host have been successfully live-migrated to other hosts. We will now investigate the host further.

investigating

We're currently experiencing kernel errors related to the boot disks used in our hypervisor ls-ds-42. We are investigating the problem further and will migrate all virtual servers to other nodes. If online migration isn't possible, we will have to reboot the VMs running on this host. We will keep you updated here.

Report: "Partial packet loss"

Last update
resolved

Due to a software issue on our upstream provider's router, an ARP conflict occurred which caused a BGP session flap. We apologize for any inconvenience.

monitoring

All connections have been stable again since 1:13pm. We're awaiting a reply from our upstream provider and will keep monitoring.

investigating

We are continuing to investigate this issue.

investigating

A minute ago we experienced packet loss via one of our upstream providers. We've opened a ticket for a quick investigation.

Report: "Packet loss due to huge incoming ddos attacks"

Last update
resolved

This incident has been resolved.

monitoring

Today at 3:09pm we saw some packet loss for about a minute due to a larger DDoS attack. We have resolved this issue and will keep monitoring.

Report: "Network disturbance at kleyrex"

Last update
resolved

Since 3:37pm CEST we have been experiencing issues with our peering sessions at KleyReX. For now we have shut down all sessions and are waiting for an official statement from their network operators before we reactivate the uplink. We're currently routing all traffic via our transit providers, so there is no further impact on our customers' services.

Report: "Outage of our Core-Network"

Last update
resolved

The incident has been resolved.

monitoring

We were able to fix the error successfully and all networks are back online. We're now monitoring the network to ensure that there are no further errors.

identified

Some of our IP routes are back online. We're still working to bring all routes back online.

identified

We have found the problem with our core routers. We are already in the process of replacing the defective network hardware and installing our configurations on the new devices.

investigating

We currently have an outage in our Core-Network. We're already investigating the issue.

Report: "Partial Outage of our Core-Network"

Last update
resolved

After monitoring for 48 hours we can close this incident. This incident has been resolved.

monitoring

We implemented a fix and will keep monitoring. The network has been stable again since 7:30pm.

identified

We've identified the reason for this incident and are currently implementing a fix.

investigating

Since 7:26 PM we have had various connection problems in our core network. We are currently investigating the issue.

Report: "Degraded performance at our cloud server cluster"

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

We are currently experiencing some performance impact on some of our cloud server hosts. We are looking deeper into this and will keep you up to date in this incident.

Report: "Outage at DENIC registration system"

Last update
resolved

DENIC has resolved the incident.

investigating

DENIC (the registry for the .de TLD) is currently experiencing an outage in their production environment which affects all domain actions related to the .de TLD. Domain name resolution is not affected by this.

Report: "Partial packet loss"

Last update
resolved

Today between 1:02pm and 1:04pm CET we experienced packet loss over certain routes. We immediately contacted the upstream provider, who resolved it for the moment and is currently implementing measures to prevent further incidents.

Report: "Issue at KleyReX, Frankfurt"

Last update
resolved

Since yesterday at 7:48pm CET the connections have been stable again. The operator of KleyReX has confirmed to us that the network is stable again and that no further interruptions are expected. We will reactivate the peering sessions by the end of the day.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The KleyReX NOC has responded that the issue has been resolved and that they are now looking into the reason for the service interruption. We will keep the BGP sessions via KleyReX down until the problem has been identified and the cause is clear.

identified

Today at 7:36pm CET some sessions at KleyReX Frankfurt timed out, with the effect that some routes produced packet loss for a few seconds. We have temporarily deactivated our BGP session and are now routing the traffic through other routes. A support ticket at KleyReX was created a few minutes ago and we are waiting for a response.

Report: "Degraded management of VMs running on ls-ds-34 hypervisor"

Last update
resolved

The hypervisor was rebooted between 2:25am and 2:27am CET. All VMs were up and running again after about three minutes. Furthermore, we have implemented the announced additional redundancy to prevent issues like this in the future.

identified

In order to regain full manageability of all VMs running on this host, we will perform a scheduled reboot tomorrow (Sunday, 15 November) between 2:00 and 2:30am CET. To solve this problem permanently, we will replace the disk that caused the problem and add further redundancy to avoid potential future problems. You can check the VM detail page in our customer interface to see whether you are affected by this maintenance. For questions, please feel free to contact our support. We apologize for the inconvenience.

identified

Since 6am CET today, management of VMs running on hypervisor ls-ds-34 has been degraded. This incident only affects management via our customer interface and our API, not the processes running in the VMs or their reachability. If you need to control your VM via our interface, please feel free to contact our customer support; we can find an individual solution for you until the general problem with the hypervisor is solved. For a permanent solution we will schedule one further maintenance during which we will have to perform a controlled reboot.

Report: "Network issues"

Last update
resolved

The issue has been identified and resolved. We will provide more detailed information later this evening.

investigating

We are continuing to investigate this issue.

investigating

At 5:41pm our monitoring reported network issues affecting a few routes at our location at Interxion, Frankfurt. We've contacted the upstream provider for further details.

Report: "Network issues"

Last update
resolved

For now, main traffic going to end customers will be sent over RETN (AS9002) to prevent potential connection aborts resulting from the Core Backbone (AS33891) network. We will stay in contact with the NOC for further information.

monitoring

Our upstream provider has disabled import and export to Core-Backbone until there is official information about this incident. We will keep you up to date here.

monitoring

The network is stable again. The problem lasted for about one minute. From our side it looks like a problem with the upstream provider Core-Backbone. We've contacted our provider and will inform you once we have a reply.

investigating

At 5:40pm our monitoring reported network issues affecting a few routes at our location at Interxion, Frankfurt. We've contacted the upstream provider for further details.

Report: "Partial packet loss via some routes"

Last update
resolved

According to the statement from Core Backbone, changes made via JTAG at their router fra30 led to a short service disruption on a few routes. We will stay in contact with them for further information.

investigating

Between 1:20pm and 1:32pm CEST our monitoring reported some packet loss on a few incoming routes via our upstream Core Backbone. The network is stable again and we've opened an incident with their NOC for further information.

Report: "Virtualisation cluster performance degraded"

Last update
resolved

Starting today at 7:15pm CEST, our monitoring reported performance degradation on two virtualisation nodes. This was influenced by planned hardware maintenance in combination with massive CPU usage by a few customers. We immediately started live-balancing the VMs and brought up an additional hardware node at 7:40pm. Since around 7:45pm all monitoring values have been back to normal. We will keep monitoring intensively over the next hours.

Report: "Core Backbone network outage"

Last update
resolved

Today between 2:07pm and 2:14pm some routes didn't work as expected due to an error at Core Backbone. The official outage is documented on their status page: https://status.core-backbone.com/index.php?id=2829

Report: "Network outage at our network in Interxion"

Last update
resolved

This incident has been resolved.

monitoring

The network is stable again. During unannounced maintenance at our upstream provider combahton, a network outage of about three minutes occurred. We contacted our supplier immediately to get information about the outage. An incident has been created and we will inform you as soon as there is new information.

identified

Today at 8:06am our monitoring detected packet loss in our core network at Interxion. The network is now stable again and we're currently digging deeper to determine the reason for the outage.

Report: "Network issues"

Last update
resolved

Outgoing traffic has been switched back to normal. We were able to solve this problem with our upstream provider this evening.

monitoring

Outgoing traffic is still being carried via alternative routes using our backup upstream because our upstream provider hasn't fixed the BGP session for outgoing traffic yet. We will stay in contact with them and update you as soon as it is resolved. These changes do not affect the latency or availability of your services.

monitoring

We are continuing to monitor for any further issues.

monitoring

We've deactivated our outgoing BGP session to combahton for the moment. The network is stable and we will keep monitoring.

investigating

We are currently experiencing some network issues related to BGP sessions to our upstream provider combahton. We are investigating the problem and will keep you up to date here.

Report: "BGP session flap for outgoing routes"

Last update
resolved

At 7:35pm a BGP session flap occurred between our edge and core routers. As a result, there was a short period of about a minute during which outgoing traffic could not be sent.

Report: "Degraded performance in our cloud environment"

Last update
resolved

On Monday this week we detected degraded performance in our cloud environment between 2pm and 9pm. The performance impact was related to our network storage (CEPH), where limitations in our network hardware caused IO delays in IO-heavy applications. As soon as we noticed the performance impact, we began investigating the cause and working on a solution. To address the impact in the longer term, we raised the existing limits by upgrading the network hardware and some existing software limits. Unfortunately, before the scheduled maintenance for the hardware and software upgrade took place on Tuesday at 4pm, we noticed another impact on Tuesday between 2pm and 3:30pm. After our technicians applied the upgrade, all measured values remained stable over the following days. We apologize for the inconvenience. For any questions, please feel free to contact us via phone or email.

Report: "Packet loss due to a massive incoming ddos attack"

Last update
resolved

Starting at 11:50pm CEST, a massive DDoS attack was launched against an internally used subnet which is not permanently mitigated by our DDoS filters, leading to a service disruption. We activated the DDoS filter for the whole subnet to mitigate the attack. We are now filtering the attack completely and will leave the filters activated to prevent future attacks.

monitoring

For now all routes are looking stable again. We are currently monitoring the results and will come back with an explanation shortly.

investigating

We're currently experiencing a network outage at FRA2, Interxion. We're looking into this and will come back to you as soon as we have further information.

Report: "Partial packet loss via some routes"

Last update
resolved

The incident has been resolved. Transit via Core-Backbone is enabled again and routing has been switched back to normal.

monitoring

There is still an issue with Core Backbone's router. We're keeping the session deactivated until the issue has been closed on their side. For now the traffic is being routed via alternative upstreams.

monitoring

Today at 3pm CEST an outage occurred at our transit carrier Core Backbone on their router fra10.core-backbone.com. For now the BGP session to this device has been deactivated. We will continue monitoring and will reactivate the session once the problem has been solved.

Report: "Network performance issues"

Last update
resolved

This incident has been resolved.

monitoring

The network has been stable and up since 19:01 CEST.

investigating

We are currently seeing some packet loss. We are investigating this issue.

Report: "BGP Session flap"

Last update
resolved

Today at 5:32pm a BGP session flap to our edge router occurred. We're currently looking into this. While the BGP session was down, a network disruption of about a minute could be seen on outgoing connections.

Report: "Outgoing packet loss"

Last update
resolved

Between 7pm and 9pm today we experienced about 5% packet loss via some routes, especially routes via DE-CIX Frankfurt. Our technicians were able to resolve the incident by rerouting traffic via other upstreams until we receive feedback from the affected upstream provider.

Report: "Degraded network availability"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

At 15:40 CET we again saw packet loss via some routes for a few seconds. We're making some optimizations and will keep monitoring.

monitoring

We are continuing to monitor for any further issues.

monitoring

We are continuing to monitor for any further issues.

monitoring

For now the network looks stable again. We will keep monitoring and are currently reviewing configuration changes to optimize for the future.

investigating

Currently we are experiencing packet loss via some routes. We are investigating the problem.

Report: "Packet loss via some routes"

Last update
resolved

Today between 1:25am and 1:27am we experienced some packet loss via some routes due to large DDoS attacks from public cloud networks. We are now mitigating the attacks completely and our network looks stable. We will keep monitoring.

Report: "EURid API errors"

Last update
resolved

This incident has been resolved.

monitoring

The issue has been identified and a fix was deployed on our production system. We are monitoring the results.

investigating

We're experiencing a few errors in communicating with the EURid registry and are currently looking into the issue.

Report: "Outage at NicAPI"

Last update
resolved

We've deployed a hotfix to solve this. The API is now up again.

investigating

We're currently experiencing an error at our NicAPI due to a failed deployment.

Report: "Partial network outage"

Last update
resolved

Today at 8:37pm a partial network outage occurred at our location at Interxion Frankfurt. After an initial analysis it looks like one of our core routers crashed. For the moment, all network connections are stable and using our second virtual chassis member. Our technicians are currently on their way to Frankfurt to check for issues onsite and restore full redundancy.