Historical record of incidents for INVERS GmbH
Report: "Cellular connectivity issues"
Last update: We are currently experiencing cellular connectivity issues. We are investigating.
Report: "Cellular connectivity issues in several European countries"
Last update: This incident has been resolved.
Our metrics returned to normal and we are monitoring for further occurrences.
Our partner has identified the issue and is working on a fix.
We are currently experiencing cellular connectivity issues in several European countries. We are investigating with our partners and will provide more information as soon as possible.
Report: "Cellular connectivity issues in Spain and Portugal"
Last update: The power outage in Spain and Portugal has been resolved, which our metrics also confirm.
The connectivity issues are caused by a widespread power outage in Spain and Portugal. Other countries are not affected.
Update: At least customers operating in Spain are impacted; we can see ongoing issues with cellular connectivity in local networks.
We are currently experiencing cellular connectivity issues as reported by our connectivity partners. We are investigating.
Report: "Issues Cloudboxx (REST API), OneAPI (REST API), Fleetcontrol"
Last update: A postmortem is now available at [support@invers.com](mailto:support@invers.com).
This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are seeing our services come back online. We are monitoring closely.
OneAPI is affected by this outage as well. We identified the cause of the issue and are working on a fix with highest priority.
We are currently experiencing issues with our CloudBoxx REST API (including Installer Apps). Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Cellular connectivity issues with Telekom LTE-M network in Germany"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Our monitoring systems detected cellular connectivity issues with Telekom LTE-M network in Germany. We are investigating with our partners.
Report: "Cellular connectivity issues with Vodafone DE in Germany"
Last update: After detailed monitoring, we consider this incident resolved, as no new cases have occurred.
The issue only affects a small subset of devices which are using LTE-M of Vodafone DE network. We forced a switch to other networks for the affected devices and are monitoring the results.
We applied a workaround for the affected CloudBoxxes (around 50 in total), after which a significant number of them receive commands again. Unfortunately, some are still affected; we are working on a solution.
We are continuing to work on a fix for this issue.
We are currently experiencing cellular connectivity issues. We are investigating.
Report: "Short delay of event message queue ~18:00 UTC to approx. ~18:03 UTC"
Last update: We experienced a very short delay of 2-3 minutes due to a frozen broker VM, which was resolved by rebooting the affected VM. Root-cause analysis is ongoing to prevent this from happening in the future.
Report: "Cellular connectivity issues Telekom in Germany"
Last update: This incident has been resolved.
The values are back to normal, connectivity at Deutsche Telekom is restored. Nevertheless, we will continue to monitor it explicitly over the next few days.
The situation at the provider Deutsche Telekom is stabilizing with a positive trend, but until there is a lasting improvement, we will continue to use alternative providers. This continues to affect only Germany and the Deutsche Telekom network.
The error could be localized further while we keep the devices on alternative providers. We will post updates here.
The provider Deutsche Telekom continues to be affected by massive disruptions. Since most connections were moved to other providers overnight (Europe), our customers are not affected as much; however, due to the slightly poorer performance and network coverage of the other networks, error rates are higher, in the low single-digit percentage range. We have a sync meeting with the specialists shortly and will provide further updates here.
Our partners are working with Deutsche Telekom on a solution; until then, we will use alternative providers.
Almost all connections have now been switched from Deutsche Telekom to alternative providers (where available).
The problems with the provider Telekom in Germany continue. According to the analyses so far, the problem lies outside of INVERS and our direct partners. Other countries and networks are not affected. Telekom's error rate is about 2% higher than usual. We will start redirecting connections to other providers.
The issue only affects Telekom DE network in Germany. We are investigating this issue with our partners.
We are currently experiencing cellular connectivity issues. We are investigating.
Report: "Temporary interruptions in hosting ~23:21 UTC to approx. ~23:45 UTC"
Last update: During this time, we experienced interruptions in the hosted services, meaning the systems were not continuously available. The systems have been stable again since the interruption; the detailed analysis is still ongoing.
Report: "Monitoring cellular connectivity issues - Vodafone in Germany"
Last update: This incident has been resolved.
The troubleshooting is still ongoing, but the system is running reliably on the alternative routes. We will close the case here if it remains stable.
All affected devices are now on alternative networks and communicating without problems. We are in the process of checking and repairing the route to Vodafone with our partners.
We are seeing an increased error rate with Vodafone Germany. The devices connect to alternative providers; we are working with our partners to stabilize the situation.
Report: "Connectivity issues Orange Belgium, affecting provider Orange in Belgium only"
Last update: This incident has been resolved.
The values are back to a normal level. We continue to monitor and have also passed the case on to our connectivity providers for further research.
The values in our graphs are stabilizing somewhat. We will close the incident once it has stabilized sustainably.
There are problems in Belgium; devices with the latest firmware will switch networks.
Report: "General issues"
Last update: Our hosting provider notified us that the incident has been resolved.
Our hosting provider has identified routing problems and is in contact with their providers.
We are currently experiencing issues with our services. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Telekom Germany connectivity degradation"
Last update: This incident has been resolved.
Our partner has implemented a fix. We are already seeing improvements and are continuing to monitor the results.
Our partner has identified the issue and will investigate with the network operator. Affected devices switch to other networks.
Our monitoring systems detected a connectivity degradation of Telekom Germany network. Devices using this network might face data communication issues. We are investigating with our partners.
Report: "TDC Denmark connectivity degradation"
Last update: This incident has been resolved.
Our partner has forced all affected devices to other networks. We are already seeing improvements and are continuing to monitor the results.
Our partner applied a temporary block of TDC Denmark network. All devices currently using TDC network are now manually forced to another network.
Our partner has identified the issue and will apply a temporary block of TDC Denmark network soon.
Our monitoring systems detected a connectivity degradation of TDC Denmark network. Devices using this network might face data communication issues. We are investigating with our partners.
Report: "OneAPI (REST API) issues affecting the login also to FleetControl"
Last update: A postmortem is now available at [support@invers.com](mailto:support@invers.com).
This incident has been resolved.
The systems remain stable. We will close the case if it remains so over a longer period of time.
The login function is available again, and we are continuing to work on final stabilization. The classic CloudBoxx API, and thus customer operation, was not affected. The OneAPI itself also worked the whole time, but new authentication was not possible, just as in FleetControl.
We are continuing to work on a fix for this issue.
We are currently restarting some services and will continue to monitor the values.
We are currently experiencing issues with our OneAPI REST API. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "General issues for the CloudBoxx systems in hosting"
Last update: A postmortem is now available at [support@invers.com](mailto:support@invers.com).
This incident has been resolved.
The systems remain stable. We will monitor it over the next few hours and close the case here if it remains stable. When we have completed the detailed analysis, we will post information here that the post mortem / incident report is available.
We are continuing to monitor for any further issues.
The systems have been stable again for several minutes and have now processed most of the data again.
We currently see that various systems are recovering. We keep working on this with highest priority to find the root cause and ensure that everything is back to normal as soon as possible.
There seems to be an issue with the uplinks of our hosting provider for CloudBoxx, OneAPI and FleetControl; our legacy systems aren't affected. We are in contact with the hosting provider to get this solved as soon as possible.
We are currently working on this issue with all available technical personnel and will provide additional information as soon as we gain more insight.
We are currently experiencing issues with our services. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Telefonica o2 Germany LTE service degradation"
Last update: This incident has been resolved.
After adjustments made by the connectivity provider in the O2 Germany area, the values are now consistently better again. We'll set the case to 'resolved' if it stays that way.
A large proportion of the devices have been transferred to alternative network operators in Germany. We are keeping an eye on the situation with our partners.
The values are not yet optimal, but since the error rate is below 1% and the devices can use other network operators, we will leave the incident open until there is a complete normalization and regularly check the values for fluctuations in one direction or the other.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
Our monitoring systems detected an LTE service degradation of Telefonica o2 Germany network. Devices using this network might face short disconnects. We are investigating with our partners.
Report: "Possible mobile connectivity delay"
Last update: This incident has been resolved.
Nothing conspicuous to see; if it stays that way, we will close the incident.
One of our SIM providers is currently experiencing delays for some of their connections. We are closely monitoring but it seems that our devices aren't affected by this.
Report: "Problems with putting vehicles into service"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Setups that were carried out during the installation of vehicles are not being processed correctly.
Report: "Delay in configuration update in SmartControl and FleetControl"
Last update: This incident has been resolved.
Processing went much faster than expected; the issue should be resolved. We are still monitoring to ensure everything works as intended.
We are currently experiencing a delay in updating setups in our database. Applying the configuration to a CloudBoxx itself isn't affected, only the status and model shown in SmartControl and FleetControl. A fix has already been deployed. We are seeing a steady decrease in the accumulated tasks, but it will take around two hours to process all of them.
Report: "Delay in trip events"
Last update: This incident has been resolved.
We experienced a short delay in trip events for a few minutes due to high CPU load. The issue was fixed; we are monitoring closely.
Report: "CloudBoxx Events (RabbitMQ) issues (Rare cases of increased delays)"
Last update: All maintenance work was completed on schedule with plenty of buffer. The median values were below the alarm threshold the whole time, and there are no outliers now either.
The first maintenance of the fiber-optic link has been successfully completed, and the latency values are back to a very low level. We will continue to monitor the situation, but so far the median delay values have not exceeded the limits. We will keep this status entry open until the end of the maintenance window tomorrow morning at 6 UTC.
The summary of the current situation: Maintenance is currently being carried out on the fiber optic line between our data centers. This maintenance was announced and is leading to slightly longer transmission times in the low double-digit millisecond range. However, our transmission times for the systems are still within the normal range, with rare cases of increased delays. The maintenance is continuing as planned, and we are keeping an eye on the situation.
To clarify the current situation: there is currently no increase in the median delay of events, etc., but there are some outliers in transmission time. We are in contact with the data center teams to improve this.
The restrictions that we are currently measuring in processing are due to maintenance work that is currently being carried out on the interconnection between the data centers. According to initial information, this should not have any impact. Commands to the vehicles are not affected.
We are currently experiencing issues with our CloudBoxx Events Message Queue. We noticed a delay for some messages. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Cellular connectivity issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Only CloudBoxx 1020 units in some countries are affected.
One global roaming hub had problems, mainly affecting O2 UK, Orange Belgium and some others. The situation has since improved; we keep monitoring.
We are currently experiencing cellular connectivity issues. We are investigating.
Report: "FleetControl issues"
Last update: A postmortem is now available at [support@invers.com](mailto:support@invers.com).
The issue is resolved and FleetControl is back to normal. Again for clarification: Sharing and rental business was not affected by this incident.
We have implemented a fix. We are monitoring the results.
We have identified an issue with FleetControl. This should not affect business operations like sharing and rentals. We are working on a fix.
We are currently experiencing issues with our FleetControl service. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "General issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently experiencing issues with our services. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Cellular connectivity delays on some units, not affecting CloudBoxx 1020"
Last update: A postmortem is now available at [support@invers.com](mailto:support@invers.com).
Metrics have stayed at normal levels since the last update. We're closing the incident and will follow up with our partners on measures to prevent this from happening again.
We are back to normal levels. We will continue to monitor overnight and close the incident tomorrow morning (German time) if it remains stable.
We continue to monitor; we are close to normal values.
The emergency maintenance that was started by our connectivity provider's partner at around 19 UTC has improved the situation according to our internal graphs. The missing connections have dropped to a tenth of their maximum. However, they are still above the normal level; the trend is now clearly and continuously positive. The INVERS teams will continue to monitor the issue and keep you up to date. We are downgrading the impact classification by one level.
The error rates are slowly improving, but because hundreds of thousands of devices are currently sending data, the incoming traffic has to be throttled so it doesn't all arrive at once.
We are seeing further improvements in error rates, but we are not out of the woods yet.
The restoration of the server system is unfortunately delayed, so we do not yet see any improvement. We will keep you posted.
The planned maintenance window is over. We will check with our partners whether the emergency maintenance was successful and post further updates.
The partner carrier is still engaged in their emergency maintenance operation. Updates will be provided once they are available.
Emergency maintenance has been started and should be completed by around 19:00 UTC. All teams on our side that can ensure a proper restart are ready. We have also activated additional resources to be able to process the traffic as quickly as possible.
This is a Europe-wide outage of one of the main roaming platforms used to connect users in other countries to their home operator. It is currently also affecting a massive number of users of internet services abroad, beyond car sharing or M2M services. That is why all available resources are working on the topic. We have been informed that the partner is now planning a major emergency maintenance which should bring operations back to normal. We are seeing a slight easing, but are waiting for our monitoring teams to confirm the improvement.
We are still in close contact with our partners, but there is still no reliable information on when the problem will be finally resolved. We are still active with all teams and will remain so.
The work on the specific network component is still ongoing, unfortunately there is no estimate of how long it will take. All teams will continue to work on the issue until it is finally resolved. We sincerely apologize for any inconvenience caused. CloudBoxx 1020 units are still not affected.
C-level has been involved for some time, all the necessary teams are working on the issue, but there is still no ETA. We will continue to post updates here.
The partner continues to work on the specific solution, which has not yet been fully implemented.
The partner of the connectivity provider has found and just confirmed the specific issue in the network, and they are working on a targeted fix.
We are continuing to investigate this issue.
Additional teams have been brought in. There are no visible changes yet; we are working hard on the issue.
The increase in faulty connections has leveled off, but the number of errors is not yet decreasing. The teams are continuing to work on the cause and we will keep you up to date.
There is an outage at a partner of our connectivity service provider. This has been confirmed by the partner. The emergency teams are working on a quick solution, and INVERS teams are monitoring it from our side. We will post regular updates here.
We see an increase in missing heartbeats. We are in contact with the connectivity partner to solve this as soon as possible.
Some CloudBoxx units have a slightly longer response time. We are working with our partners on this topic and will let you know if there is anything new.
Report: "[CloudBoxx 1020] Cellular Connectivity issues in United Kingdom"
Last update: Fewer devices were affected than initially assumed. All switched to alternative connectivity.
We are currently experiencing cellular connectivity issues with CloudBoxx 1020 in the United Kingdom. The devices will switch to the other SIM within minutes.
Report: "OneAPI and Insights Events (RabbitMQ) issues"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
The incident has been resolved.
A fix to minimize latencies of the speeding-events has been implemented and we are monitoring the results. Further measures are discussed with the external provider.
We have seen an improvement but the processing is not yet back to normal levels. We are working on the issue with the relevant teams and will provide further updates here.
The error has been fixed, pending data is still being processed, which will take about 30 minutes.
We are in contact with the external provider of the service that is causing the delays and are working together to find a solution.
It only affects speeding events, the normal customer journey is NOT affected.
We are currently experiencing issues with our OneAPI and Insights Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Cellular connectivity issues in UK"
Last update: This incident has been resolved.
O2 UK has a problem; devices will switch to other networks.
Report: "Cellular connectivity issues"
Last update: The incident report is ready to be requested from our support or your TAM.
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
We are currently experiencing cellular connectivity issues. We are investigating.
Report: "Processing interruptions today, 15:07 UTC to approx. 15:21 UTC"
Last update: The incident report is ready to be requested from our support or your TAM.
Today, from 15:07 UTC to approx. 15:21 UTC, there were problems with routing in an INVERS data center according to the metrics. This caused interruptions in sending and receiving data to and from the vehicles; CloudBoxx-based systems were affected. The error was quickly localized and fixed, which has since been confirmed by our own log files in addition to those of the data center. We will now carry out an in-depth analysis to ensure that this problem does not occur again. Our emergency teams will continue to keep a particularly close eye on all metrics throughout the evening. We apologize for the delays in the process and in the customer-facing systems.
We experienced a short outage of some services. Most of them are back to normal, OneAPI events might be delayed but are also recovering. We are monitoring and analyzing to find the root cause and prevent this from happening in the future.
Report: "Cellular connectivity issues"
Last update: This incident has been resolved.
We are seeing some minor problems establishing connections on the LTE-M network. Our partners confirmed this and the values are now back within the normal range. We are continuing to monitor this.
Report: "Cellular connectivity issues"
Last update: This affected Telekom Germany only and has been resolved since ~7 UTC. Devices switched over to alternative networks directly.
We are currently experiencing cellular connectivity issues with some networks. Most of our devices changed networks, so there should be little to no impact on our customers' businesses. We are investigating with our partners.
Report: "Cellular connectivity in Germany"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Telekom Germany has a problem; devices will switch to other networks.
Report: "Mobile Connectivity"
Last update: One of our SIM providers encountered a network issue that started at 14:55 UTC, with an impact on LTE data session creation for a group of users. A fix was implemented at 16:02 UTC, and we are now monitoring the situation closely. Measures to prevent this from happening in the future have already been taken.
Report: "Degraded performance"
Last update: This incident has been resolved.
Our connectivity providers informed us about a service degradation caused by the roaming network. This mainly affected CloudBoxxes in UK with 2G network. This issue has already been solved. We will set the incident to resolved if it remains stable.
Our connectivity providers informed us about a service degradation caused by the roaming network. Our metrics only show a minor impact on CloudBoxx operation but we would like to notify about this.
Report: "Maintenance Connectivity (subsequent entry)"
Last update: On April 8th there was a 3.5-hour emergency maintenance, which led to a short (<5 min) interruption of connectivity for certain CloudBoxxes (type 1020) during this time window. For the sake of transparency, we list this here again on the status page.
Report: "CloudBoxx Events (RabbitMQ) issues"
Last update: Running smoothly for an hour now. Marking the incident as resolved.
We identified the issue and managed to restore publishing of all messages. We are now monitoring to make sure everything is back to normal. Only one customer was affected and we are in contact already.
We are currently experiencing issues with our CloudBoxx Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "CloudBoxx Events (RabbitMQ) issues"
Last update: This incident has been resolved.
The problem has already been identified. It only concerns one customer, with whom we are already in contact.
We are currently experiencing issues with our CloudBoxx Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Monitoring cellular connectivity issues not affecting CloudBoxx units"
Last update: This incident has been resolved.
Devices are coming back. We will close the incident if it remains stable for 30 minutes.
A fix has been implemented and we are monitoring the results.
The fix is a bit delayed; we keep working on the issue.
The issue has been identified and a fix is being implemented.
The global network operations center will make further attempts to fix it until 12:45 UTC. We will keep you posted.
The case has been escalated; we keep working on it.
We received reports of cellular connectivity issues. No CloudBoxx units are affected; the issue concerns legacy iBoxx devices. We are working on a solution with high priority.
Report: "Cloudboxx (REST API) issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently experiencing issues with our CloudBoxx REST API (including Installer Apps). Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Disturbance of the CloudBoxx setup process"
Last update: The root cause is shared with this incident: https://status.invers.com/incidents/7y8bw4fcn28y, so we will close this one.
We are investigating the issue. Vehicle information is not saved during the setup process with the Installer Apps or in the FleetControl vehicle information section.
Report: "OneAPI and Insights Events (RabbitMQ) issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently experiencing issues with our OneAPI and Insights Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "FleetControl issues"
Last update: The issue has been resolved.
The critical infrastructure experienced a short outage again. The problem has been identified and resolved. We are monitoring our system.
The issue has been resolved again. We will continue to investigate the root cause of these hiccups.
The problem appears to have resurfaced.
We experienced a short outage of critical infrastructure. The problem has been identified and resolved. We are monitoring our system.
Everything seems to be working fine again. We continue to investigate the cause of the issue.
We are currently experiencing issues with our FleetControl service. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "CloudBoxx Events (RabbitMQ) issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently experiencing issues with our CloudBoxx Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "CloudBoxx Events (RabbitMQ) issues"
Last update: The incident has been resolved; all queues are caught up.
We are currently experiencing issues with our CloudBoxx Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Cloudboxx (REST API) issues"
Last update: A post mortem is now available at [support@invers.com](mailto:support@invers.com).
It is possible again to log in to FleetControl and the OneAPI with a newly requested password. You might need to clear your browser's cache first. If you have any further problems, please contact our support team.
The API and Installer Apps have been back for a while.
In order to restore access to FleetControl and the OneAPI at short notice, it is necessary to reset the user's password. To do this, use the "Forgot Password?" function. We will prepare it and provide information here when it is possible.
We are still working on restoring the backup. The current disruption should have no impact on the customer rental process. Configuration changes, in-fleeting via SmartControl and the use of FleetControl are still affected.
We are continuing to work on a fix for this issue.
Rollback in progress.
We are continuing to work on a fix for this issue.
Installer Apps and some commands to CloudBoxxes are affected. We are working on a fix to the affected services.
We are currently experiencing issues with our CloudBoxx REST API (including Installer Apps). Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "Monitoring of reported network delays in Austria, Hungary and Slovenia and Croatia"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
It mainly affects Austria, Hungary, Slovenia, and Croatia.
We received reports about minor network delays below the critical warning level. We are currently investigating this and have created an "identified" incident. The status will be escalated if the delays turn out to be critical; otherwise it will be changed to "resolved" if there are no significant delays.
Report: "Cellular connectivity issues in Austria, Slovenia and Croatia"
Last update: This incident has been resolved.
All units are back. We keep monitoring and will close this incident if it remains stable.
A fix has been implemented and we are monitoring the results.
Slovenia and Croatia are affected too, but the devices are coming back now.
We are currently experiencing cellular connectivity issues in Austria. We are investigating.
Report: "CloudBoxx Events (RabbitMQ) issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing issues with our CloudBoxx Events Message Queue. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.
Report: "FleetControl issues"
Last update: We are currently experiencing issues with our FleetControl service. Our emergency standby staff is investigating. We’ll post an update with details and scope of the incident as soon as possible.