Historical record of incidents for Universign
Report: "Incident report"
Last update: This incident has been resolved.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Incident report"
Last update: **Incident Report – December 18, 2024**

On December 18, 2024, between 3:20 PM and 4:45 PM (CET), we experienced an issue affecting our transaction service, with some periods of unavailability. Other services on the platform, such as the Seal and Timestamp APIs, were not impacted.

Our system relies on several storage servers (SAN) to store customer data and ensure redundancy across physical devices. One of these servers, dedicated to temporarily storing transaction-related documents (from creation to archival 30 days later), encountered an internal issue and automatically switched to read-only mode as a protective measure.

After multiple unsuccessful attempts to resolve the issue with this server, we decided to redirect transaction document storage to another server. This operation triggered a data transfer from the affected server to the new one. Unfortunately, this process extended the duration of the incident, as it was essential to ensure that signers could still access transactions created before the incident.

We sincerely apologize for any inconvenience this may have caused and thank you for your understanding. We remain committed to providing you with the best possible service experience. If you have any questions, please do not hesitate to contact our support team.
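For illustration, here is a minimal sketch of the kind of write-path fallback described above: try the volume that has gone read-only and redirect the write to a secondary volume when the filesystem refuses it. The paths and function are hypothetical; the report does not describe the actual implementation.

```python
import errno
import os

# Hypothetical paths; the report does not name the actual volumes.
PRIMARY = "/mnt/san-primary/transactions"
SECONDARY = "/mnt/san-secondary/transactions"

def store_document(name: str, data: bytes) -> str:
    """Write a transaction document, falling back to the secondary
    volume if the primary has switched to read-only mode (EROFS)."""
    for root in (PRIMARY, SECONDARY):
        path = os.path.join(root, name)
        try:
            with open(path, "wb") as fh:
                fh.write(data)
            return path
        except OSError as exc:
            # Only fall through on "read-only file system"; any other
            # error is unexpected and should surface immediately.
            if exc.errno != errno.EROFS:
                raise
    raise RuntimeError("no writable storage volume available")
```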
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Identified major outage"
Last update: We observed a degradation/interruption of our services coinciding with a new pattern of usage from a newly registered customer. To avoid further issues, we have disabled that client's access and deactivated the ability to register for the service from our website. We are analyzing how this type of access overloads our platform, and measures will be taken soon to prevent such impacts on our services.
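The update above does not say which measures were chosen; as a hedged illustration, one common safeguard against a single client overloading a platform is a per-client token bucket that caps the request rate. The rates below are illustrative assumptions.

```python
import time

class TokenBucket:
    """Per-client rate limiter: refuse requests once a client exceeds
    its sustained rate plus an allowed burst."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = burst         # maximum stored tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                  # reject or queue this request

# Example: at most 10 requests/second sustained, bursts of up to 50.
bucket = TokenBucket(rate=10.0, burst=50)
```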
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified major outage"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified major outage"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified major outage"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified Major outage"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Incident report"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We encountered difficulties on our platform between 13/03/2024 6:30 p.m. and 14/03/2024 1:06 a.m. More information will be made available as we receive it.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 24/01/2024 between 11:38 AM and 11:43 AM. After a routine data center intervention, a network cable was connected to the wrong port, causing a network outage of less than 5 minutes due to Spanning Tree protocol reconfigurations. To avoid such problems in the future, this type of intervention will be added to the list of interventions requiring a shutdown of the platform.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Identified major outage"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Incident report"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 18/10/2023 between 3:05 PM and 5:08 PM.

**Timeline:**
* 18/10/2023 3:05 PM: Latency alarm on WS service
* 18/10/2023 3:09 PM: Intermittent alarms on unavailability of WS service

**Resolution of the incident:**
* 18/10/2023 3:09-4:15 PM: Backoffice stopped
* 18/10/2023 3:09-4:25 PM: Reboot of overloaded servers
* 18/10/2023 3:20 PM: Server taken out of and returned to the pool
* 18/10/2023 4:25 PM: Server taken out of and returned to the pool
* 18/10/2023 4:31 PM: IP address blocked

**End of incident:**
* 18/10/2023 5:08 PM: All services are operational
* 19/10/2023 12:27 PM: IP address unblocked

**Identified root cause:**
We are investigating high contention at the database level, which increased query latency to the point of making the service unavailable.

**Preventive actions:**
We are investigating changing our database management service. We are also investigating integrating a query mitigation service.
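The "query mitigation service" mentioned above is not specified in the report. One minimal sketch of the idea, assuming a Python service layer (an assumption), is to bound concurrent database queries so that contention degrades into fast failures instead of platform-wide unavailability:

```python
import threading

# Assumed cap; the right value depends on the database's actual capacity.
_MAX_CONCURRENT_QUERIES = 32
_slots = threading.BoundedSemaphore(_MAX_CONCURRENT_QUERIES)

def run_with_mitigation(run_query, timeout: float = 5.0):
    """Execute a query only if a slot frees up within `timeout` seconds;
    otherwise shed the request rather than pile onto a contended database."""
    if not _slots.acquire(timeout=timeout):
        raise TimeoutError("database saturated; request shed")
    try:
        return run_query()
    finally:
        _slots.release()
```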
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident which occurred on 06/10/2023 between 2:43 p.m. and 5:43 p.m.

Detection time:
* 06/10/2023 2:43 p.m. - 5:43 p.m.: latency alerts on WS service
* 06/10/2023 4:43 p.m. - 5:40 p.m.: intermittent alerts on WS service unavailability

Resolution:
* 06/10/2023 2:48 p.m.: backoffice shutdown
* 06/10/2023 2:54 p.m. - 4:50 p.m.: removal of a server from the pool
* 06/10/2023 4:18 p.m. - 4:32 p.m.: restart and switchover of load balancers
* 06/10/2023 4:38 p.m.: increase in per-server limits on load balancers
* 06/10/2023 4:40 p.m. - 6:00 p.m.: monitoring and analysis

Incident end time:
* 06/10/2023 5:43 p.m.: all U1 services are operational

Cause of the incident:
We are investigating high contention at the database level, which increased query latency to the point of making the service unavailable. We will get back to you as soon as we have more information.

Remediation:
We are investigating changing our database management service.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 28/09/2023 between 11:27 AM and 12:28 PM.

Detection time:
• 28/09/2023 11:27 a.m.: alert on unavailable services
• 28/09/2023 12:21 p.m.: application alerts, degraded service

Resolution:
• 28/09/2023 11:32 a.m.: restart of impacted network equipment
• 28/09/2023 12:21 p.m.: implementation of new network routes
• 28/09/2023 12:28 p.m. - 12:41 p.m.: restart of application services

Incident end time:
• 28/09/2023 12:21 p.m.: U1 services degraded (<5% of requests impacted)
• 28/09/2023 12:52 p.m.: all U1 services are operational

Cause of the incident:
Updating network equipment caused a redundancy fault, which is being investigated with the manufacturer.

Remediation:
Ports on other network devices have been reset, returning to the original network routes. An investigation with the equipment manufacturer is underway to understand why the switchover was not automatic and transparent. In parallel, we will study the replacement of this network equipment.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident between 29/08/2023 5:56 PM and 30/08/2023 10:15 AM.

**Timeline:**
* 29/08/2023 5:56-6:03 PM: First alarm on intermittent unavailability of services
* 29/08/2023 6:06-6:22 PM: Alarm on intermittent latency of services
* 29/08/2023 9 PM - 30/08/2023 10:15 AM: Alarms on application memory for part of the services

**Resolution of the incident:**
* 29/08/2023 6:02 PM: Reduction of the limit on load balancers
* 29/08/2023 6:11 PM: Exclusion of a server from the load balancer
* 29/08/2023 6:38 PM - 30/08/2023 10:14 AM: Log analysis and successive restarts of application services

**End of incident:** 30/08/2023 10:15 AM

**Identified root cause:**
Several instances experienced slowness, which overloaded those instances and caused partial instability across all services of the platform.

**Preventive actions:**
To prevent the platform from overloading, we will be working on improving the settings of the load balancers. In addition, we will audit the application configuration of each server to prevent slowness.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 07/07/2023 between 11:03 AM and 12:37 PM. An issue with one of our application's infrastructure components caused significant instability on the platform. The necessary steps were taken to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 29/06/2023 between 4:06 PM and 5:20 PM. An issue with one of our application's infrastructure components caused significant instability on the platform. The necessary steps were taken to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We apologize for the inconvenience caused by the incident on 05/06/2023 between 10:56 AM and 3:24 PM. An issue with one of our application's infrastructure components caused significant instability on the platform. The necessary steps were taken to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are continuing to investigate this issue.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident on 30/03/2023 between 5:01 p.m. and 5:18 p.m. An unusual load was detected on our services. The necessary steps were taken to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Identified minor outage"
Last update: We are sorry for the inconvenience caused by the incident on 06/03/2023 between 10:20 a.m. and 12:20 p.m. A problem with an infrastructure-side application component caused severe instability on the platform. Everything necessary was done to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Identified minor outage"
Last update: We are sorry for the inconvenience caused by the incident on 06/02/2023 between 2:01 p.m. and 4:01 p.m. A load problem was detected on our infrastructure. We intervened to stabilize performance as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident which took place between 16/12/2022 9:05 p.m. and 17/12/2022 4:44 p.m. A problem on one of our servers was not detected in time. Once the problem was identified, the service was fully restored.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident on 15/12/2022 between 3:31 p.m. and 3:55 p.m. A temporary, exceptionally large increase in demand on our application led to significant performance degradation. The necessary steps were taken to restore the situation as quickly as possible.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident on 18/11/2022 between 11:01 a.m. and 11:52 a.m. Following the failure of a component of the architecture, performance deteriorated. Everything necessary was done to restore the situation.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 27/09/2022 between 10:40 am and 11:30 am. A sudden increase in load on Universign's servers led to slowdowns and even short service interruptions. Everything necessary was done as quickly as possible to restore optimal use of the Universign applications.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 10/08/2022 between 2:42 pm and 4:38 pm. An unplanned reboot (bug) of a firewall had a side effect on communication between some servers, causing a disruption of services. Everything necessary was done to correct the problem.
We encountered difficulties on our platform between 2:42 p.m. and 4:38 p.m. More information will be available later.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 16/12/2021 between 2:45 pm and 3:30 pm. Following the failure of two network components in our infrastructure, the webapp and the API service were no longer accessible. An intervention in the impacted datacenter restored both services to working order.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 18/10/2021 between 3:27 pm and 3:55 pm. Following a malfunction of an application component, we encountered performance problems leading to service interruptions. As soon as it was detected, the necessary measures were taken to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Identified minor outage"
Last update: We are sorry for the inconvenience caused by the incident of 21/06/2021 between 4:27 pm and 4:30 pm. During the preparation of a new component aimed at improving the general performance of our infrastructure, a configuration error caused the unavailability of Universign services. The problem was quickly corrected.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 01/06/2021 between 9:04 am and 9:55 am. Following the failure of services on a server, connection requests to the portal were not handled normally. Withdrawing the server hosting these services restored normal access to the portal.
We encountered performance degradation on our web portal between 9:04 am and 9:55 am. More information will be made available later.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 24/05/2021 between 7:40 am and 9:00 am. Due to the intermittent unavailability of a server, the connection service no longer responded satisfactorily to requests. Once the source of the problem was identified, the situation was restored.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 30/03/2021 between 12:47 pm and 12:49 pm. An infrastructure component stopped functioning correctly, but not severely enough to trigger an automatic failover to the backup system. The failover was performed manually in order to restore the services.
We encountered difficulties on our platform for 2 minutes. More information will be made available later.
Report: "Identified minor outage"
Last update: We are sorry for the inconvenience caused by the incident of 12/11/2020 between 10:46 am and 12:06 pm. Following a degradation of performance leading to the partial unavailability of certain services, investigation showed that a component of our infrastructure was no longer fulfilling its role while at no time reporting errors, which prevented the switch to the backup system. This component has been restarted and will be replaced.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing some performance degradation on our platform, but our services are operational. We are continuing to work on a fix for this issue.
We are facing a minor outage. The problem is identified and we are working on a fix right now.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 25/06/2020 between 10:21 am and 10:26 am. Following an unusual extraction (a substantial CSV export), the performance of the platform was impacted. Everything was done to restore the situation as quickly as possible.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 02/02/2021 between 7:00 am and 7:40 am. A maintenance operation took place on 02/02/2021 from 7:00 am on a low-level component of our infrastructure. This operation should have been transparent; instead, it had a significant impact on the availability of our services, and we apologize for that. The operation, which should have lasted only a few minutes, took longer than expected, and it could not be stopped without making the situation worse. The decision was taken to complete the operation and restore the affected services as quickly as possible.
We encountered difficulties on our platform between 7:00 am and 7:40 am. More information will be available later. We deeply apologize for the inconvenience.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 25/11/2020 between 9:54 and 10:38. An incorrect parameter in the release material deployed on servers caused memory crashes for a specific service. The parameter has been corrected, and a process will be set up to verify parameters before deployment.
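The report does not describe the verification process that will be set up. A minimal sketch, assuming release parameters can be expressed as a dictionary with known sane bounds per key (both assumptions, with hypothetical key names), could look like this:

```python
# Hypothetical keys and bounds, chosen only to illustrate the check.
REQUIRED_BOUNDS = {
    "max_heap_mb": (256, 8192),   # guard against memory-crash settings
    "worker_count": (1, 64),
}

def validate_release_params(params: dict) -> list:
    """Return a list of problems; an empty list means the release
    parameters pass the pre-deployment check."""
    errors = []
    for key, (low, high) in REQUIRED_BOUNDS.items():
        value = params.get(key)
        if value is None:
            errors.append(f"missing parameter: {key}")
        elif not low <= value <= high:
            errors.append(f"{key}={value!r} outside [{low}, {high}]")
    return errors

assert validate_release_params({"max_heap_mb": 2048, "worker_count": 8}) == []
```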
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We encountered difficulties on our platform between 11:29 and 11:34. More information will be made available as we receive it.
Report: "Information"
Last update: We have traced this afternoon's slowdowns, as well as the evening's incident, to a defective component in our global file storage system. Our engineers are working hard to normalize the situation. Although we will do our best to avoid them, slowdowns may still occur tomorrow. We will post an update if slowdowns are observed on our side. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Most connections are still being handled by the platform. We currently observe a timeout rate of about 2%. We will keep you updated.
We are still experiencing a very high number of connections on our platform, and you may experience slower responses than usual. Response times of up to several seconds have been observed. However, most connections are still being handled by the platform. We currently observe a high timeout rate of about 7%. We will keep you updated.
Report: "Information"
Last update: We have been experiencing slowdowns on our platform since 15:36. Further information will be provided later.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 27/05/2020 between 20:33 and 20:40. Following unusual activity on a component of the infrastructure, the services were unavailable for 7 minutes. Actions have been taken to restore the situation and stabilize performance over the long term.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 26/05/2020 between 9:30 and 10:32. After the deployment of a new version to the production environment, the platform experienced severe slowdowns until 10:00. A rollback was therefore performed to restore the situation. During this operation, a configuration error made the platform partially unavailable for approximately 15 minutes. The causes have been analyzed and will be corrected soon to avoid this type of incident.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are currently experiencing difficulties on our platform. More information will be made available as it is conveyed to us.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 04/05/2020 between 9:34 and 9:42. Due to exceptional activity on our platform, an administration task that is normally harmless caused blockages on the file system, resulting in the unavailability of applications. The task was stopped in order to restore the services, and will be performed once this exceptional activity has passed.
This incident started at 9:34 and was resolved at 9:42. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 19/04/2020 between 16:10 and 16:30. Due to an unusual and momentary load, many requests were executed on one of our databases, which led to the rapid filling of the disk space reserved for keeping the logs. As soon as the incident was reported, we took the necessary steps to resolve the problem and restore access to our applications. We have also implemented a solution to ensure the proper functioning of the database in the future.
This incident started at 16:10 and was resolved at 16:30. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 18/04/2020 between 11:50 and 12:15. Due to an unusual and momentary load, many requests were executed on one of our databases, which led to the rapid filling of the disk space reserved for keeping the logs. As soon as the incident was reported, we took the necessary steps to resolve the problem and restore access to our applications. We have also implemented a solution to ensure the proper functioning of the database in the future.
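Neither this report nor the similar one above details the preventive solution. One minimal sketch of such a safeguard, with an assumed path and threshold, is a free-space check on the volume holding the database logs so that rotation or purging happens before the disk fills:

```python
import shutil

LOG_VOLUME = "/var/lib/db/logs"   # assumed location of the database logs
MIN_FREE_RATIO = 0.20             # assumed threshold: alert below 20% free

def check_log_volume(path: str = LOG_VOLUME) -> float:
    """Return the free-space ratio, raising once it drops below the
    threshold so logs can be rotated or purged before the disk fills."""
    usage = shutil.disk_usage(path)
    free_ratio = usage.free / usage.total
    if free_ratio < MIN_FREE_RATIO:
        raise RuntimeError(
            f"only {free_ratio:.0%} free on {path}; rotate or purge logs"
        )
    return free_ratio
```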
This incident started at 11:50 a.m. and was resolved at 12:15 p.m. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
Report: "Incident report"
Last update: We are sorry for the inconvenience caused by the incident of 10/03/2020 between 10:20 and 12:08. Service was degraded on our platform for the following actions:
* resetting a forgotten password;
* adding a user to an organization;
* creating a digital identity.

Cause: submitted email addresses are validated by a service provider, and that provider returned "invalid email" responses for the duration of the incident.

Correction: the service linked to this provider has been disconnected; only the format of email addresses is now validated (the domain name is no longer checked, and we no longer check whether the address is blacklisted).

This error was not technical, so it was not reported by our monitoring system, hence the duration of the incident. Action to be taken: monitor this type of functional error from service providers.
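As a sketch of the fallback described in the correction (format-only validation, with the provider's domain and blacklist checks disconnected), assuming a simple regex is acceptable for the format check:

```python
import re

# Deliberately permissive: one "@", no whitespace, a dotted domain part.
_EMAIL_FORMAT = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_acceptable_email(address: str) -> bool:
    """Format-only check; domain existence and blacklist lookups are
    skipped while the provider-side validation is disconnected."""
    return bool(_EMAIL_FORMAT.match(address))

assert is_acceptable_email("signer@example.com")
assert not is_acceptable_email("invalid email")
```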
We are facing degraded service on our platform. The problem has been identified and we are currently working on a solution.
Report: "Identified major outage"
Last update: We are sorry for the inconvenience caused by the incident of 13/02/2020 at 16:23. The service was degraded on the Web Services platform ([ws.universign.eu](http://ws.universign.eu)), with HTTP 502 and 503 responses for part of the requests, between 16:23 and 17:13. No data has been lost or corrupted. Transactions could be continued as soon as the incident ended. After a period of increased surveillance, the incident was closed at 17:33.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified major outage"
Last update: We are sorry for the inconvenience caused by the incident of 29/01/2020 at 13:00. Our application was not available between 13:00 and 13:40. This unavailability was caused by the expiration of a certificate. No data has been lost or corrupted. Transactions could be continued as soon as the incident ended. After a period of increased surveillance, the incident was closed at 14:29.
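A standard safeguard against this root cause is to alert well before a certificate expires. A minimal sketch using Python's standard library follows; the 30-day threshold is an assumption, and ws.universign.eu is used only because it appears elsewhere on this page:

```python
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Fetch the server certificate over TLS and return the number of
    whole days until its notAfter date."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    remaining = days_until_cert_expiry("ws.universign.eu")
    if remaining < 30:  # assumed alerting threshold
        print(f"WARNING: certificate expires in {remaining} days")
```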
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are still facing an issue on our platform. We hope to resolve this outage before 1:40 pm.
We are continuing to work on a fix for this issue.
We are facing a major outage. The problem is identified and we are working on a fix right now.
Report: "Identified minor outage"
Last update: We are sorry for the inconvenience caused by the incident of 09/12/2019 at 9:50. Our web application was not available between 9:50 and 13:56. This unavailability was a consequence of the previous day's incident, and we had to intervene again on our front servers. No data has been lost or corrupted. Transactions could be continued as soon as the incident ended. Given the sequence of the two incidents, we decided to exercise increased surveillance over a 24-hour period, and therefore waited until the next day, 10/12/2019 at 9:50, to close this incident.
This incident has been resolved. We are preparing a global report about this incident. We deeply apologize for the inconvenience.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for the web application.
We are facing a minor outage. The problem is identified and we are working on a fix right now.