Historical record of incidents for TSG Global
Report: "Emergency Maintenance Notice - Messaging Stack - SMS & MMS"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
🚨 Emergency Maintenance Notification
We are informing you of an emergency maintenance operation required due to unexpected hardware issues on a critical server instance. This instance supports core components of our messaging stack (SMS, MMS, 10DLC, Toll-Free, and Short Code). A brief interruption may occur, potentially causing minor delays in message delivery for less than five minutes.

🔧 Maintenance Details
Date: Thursday, May 29, 2025
Time: 02:00 AM to 04:00 AM EDT
Duration: 2 hours

📌 Impact Overview
Some messages may experience a short delivery delay (under 5 minutes). Normal service is expected to resume immediately after the system reboot.

✅ What’s Not Affected
All other services, including voice, web portal access, and customer support, will remain operational and unaffected.

📞 Communication & Support
If you have any questions or experience issues, please reach out to our support team at support@tsgglobal.com.

Thank you for your continued trust in TSG Global.
Report: "Voice termination degraded service"
Last update: The issue affecting our voice termination services has been fully resolved, and service remains stable. After 24 hours of monitoring, we have confirmed full functionality. We appreciate your patience and understanding during this time. Please reach out if you experience any further issues.
The issue affecting our voice termination services has been resolved, and service has been fully restored. We are closely monitoring the system to ensure stability. We apologize for any inconvenience this may have caused and appreciate your patience and understanding.
Our downstream provider has identified the potential cause of the degraded performance with voice termination and is implementing corrective actions. We are monitoring the situation closely and will confirm once service is fully restored. Thank you for your continued patience.
Our team remains in contact with our downstream provider, who is actively investigating the root cause of the degraded performance affecting voice termination. We will share further updates as soon as additional details become available. Thank you for your patience.
Currently, our voice stack is experiencing degraded performance with terminating calls. We are investigating this issue with our downstream provider and will provide updates as they are made available.
Report: "Upstream Provider Service Degradation"
Last update: This incident has been resolved.
We are continuing to work on a fix for this issue.
Customers may be experiencing some latency with SMS messages due to an Upstream Provider issue. Our Upstream Provider is addressing the root cause of these latencies, and their engineers are actively working on implementing a fix.
Report: "Delayed MMS Delivery"
Last update: This incident is now considered resolved.
MMS Delays have been resolved. We are actively monitoring.
We are actively investigating reports of delayed MMS delivery.
Report: "Long Code Service Degradation - T-Mobile"
Last update: The 10DLC delivery hub has confirmed that this issue has been resolved.
The 10DLC delivery hub received notification that T-Mobile is experiencing intermittent latency issues impacting SMS messaging services. We will provide more information as it becomes available.
Report: "UPSTREAM - Delayed SMS Delivery to Verizon and T-mobile"
Last update: Messaging continues to deliver as expected to all carriers. We will continue to monitor with our peering partner, but as of this notice, it is resolved. We will close this incident. Any messages that were not received during this outage should be resent.
Our peering partner identified the cause for messages not being delivered to some Verizon and T-Mobile end users and has implemented a fix. Currently, messages are being delivered to all destination carriers. We are continuing to monitor service levels.
Our peering partner is still investigating the issue.
We are continuing to investigate this issue.
We are currently investigating delayed SMS delivery to both Verizon and T-Mobile carriers. We will update you as soon as we have more information. This is not an issue with TSG's network but something we are investigating with our peering partner. This seems to be affecting only 10DLC and is not affecting Toll-Free or Short Code.
Report: "Toll-Free SMS/MMS/DLR Outage"
Last update: This incident has been resolved. A post-mortem will be provided once we receive one from our vendor.
At this time, our toll-free messaging vendor's messaging services have been restored. Queued transactions have begun to send. We are monitoring the system and will provide more information soon.
We have identified that the current outage is due to an issue with our toll-free messaging vendor's AWS RDS instance. This is under escalated investigation with AWS. We will provide more information as it becomes available from our vendor.
Our toll-free messaging vendor is experiencing an outage related to all toll-free messaging and DLRs. We are awaiting an update from our vendor.
Report: "Inbound SMS Messaging Not Being Delivered"
Last update: We apologize for the issues you may have encountered with inbound SMS delivery yesterday morning. Please find the RFO/postmortem below, and if you have any additional questions, please email us at: support@tsgglobal.com

# SMS Inbound Partial Outage

## Overview
In the early AM, inbound SMS traffic to customers was reduced due to a message queuing system configuration error. Once the issue was identified around 9:45 AM PST, it was resolved by deleting and recreating the queues by 11:05 AM PST.

## What Happened
Due to a known bug in the previous version of our message queuing engine, we had to perform an emergency update early in the AM to avoid a potential full-disk issue (which we were alerted to over the weekend). In short, some queue snapshot cleanups did not complete correctly, which was causing disk usage to rapidly increase. The version update was performed according to the official manual and was supposed to cause ZERO impact or downtime due to built-in redundancy. After the update was performed, one “corrupt” queue had to be recreated to force the deletion of old snapshots and restore disk space. Once the queue was recreated, everything seemed normal, traffic was flowing, and no alarms were triggered. We were notified later in the morning by some TSG Global clients that some (but not all) inbound messages were being delayed or not received at all, while others were unaffected. The TSG Global Response Team hopped on a call within 5 minutes of the first report and began to diagnose the partial outage issue.

## Resolution
The TSG Global team investigated both our aggregator partner's systems (who also had an unplanned extended maintenance window) as well as our internal systems. After some diagnosis, we determined that it was our message queuing system that was dropping/misrouting messages. To fix the issue, we reset the entire configuration by deleting and recreating the routing rules for our queues, which restored expected functionality, and normal traffic resumed as expected.

## Root Causes
There are two possible root causes:
1. deleting the “corrupt” queue, which is part of a "queues cluster", caused routing issues
2. the queuing engine version upgrade itself had undocumented issues

We are leaning towards the first root cause as the culprit due to the behaviors exhibited. There is no related documentation about this issue in the upgrade documentation, and it never happened when we performed upgrades in the past, nor did it affect other queues with similar or identical configurations.

## Impact
Some customers experienced delayed inbound deliveries or no inbound delivery at all for some part of their traffic.

## What Went Well
* Quick response: the team responded promptly upon reporting from customers, confirmed that more DLRs were being routed to customers than SMS messages, and identified the queue causing the issue and recreated it to restore normal operation.

## What Didn’t Go So Well
* We had no specific alerts in place to notice this partial outage (e.g. that only some vs. all queues were not receiving traffic, that some unroutable errors were being received, or that our overall outbound traffic processing had deviated by a large percentage vs. a typical weekday morning).

## Action Items For Our Team
* PagerDuty alerts should be raised when a significant number of messages are not being routed properly, or when some thresholds of normal business operations are not being met unexpectedly.
* When using consistent hash exchange routing, we should investigate/build a “dead letter queue” to catch any messages that cannot be routed to our round-robin queues (it can be any of the existing queues, probably the one with the lowest index). We will add this to our roadmap; a minimal sketch of the idea follows below.
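For illustration only, here is a minimal sketch of that fallback idea, assuming a RabbitMQ broker with the rabbitmq_consistent_hash_exchange plugin enabled and the Python pika client. All exchange, queue, and broker names are hypothetical, and it uses RabbitMQ's alternate-exchange mechanism to catch unroutable messages; it is not TSG's actual configuration.

```python
# Hypothetical sketch: a consistent-hash exchange with an alternate exchange so
# that anything the hash ring cannot route falls back to the lowest-index
# worker queue instead of being dropped. All names are illustrative.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Fallback ("dead letter") path for unroutable messages.
ch.exchange_declare(exchange="sms.unroutable", exchange_type="fanout", durable=True)

# Hash exchange that normally spreads inbound SMS across the worker queues.
ch.exchange_declare(
    exchange="sms.inbound",
    exchange_type="x-consistent-hash",
    durable=True,
    arguments={"alternate-exchange": "sms.unroutable"},
)

# Round-robin worker queues; with a consistent-hash exchange the binding key
# is the hash weight, not a routing pattern.
for i in range(4):
    q = f"sms.inbound.{i}"
    ch.queue_declare(queue=q, durable=True)
    ch.queue_bind(queue=q, exchange="sms.inbound", routing_key="1")

# Route anything unroutable to the lowest-index queue, per the action item.
ch.queue_bind(queue="sms.inbound.0", exchange="sms.unroutable")

conn.close()
```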
This incident has been resolved.
The root cause has been identified, and working with our peering partner, a fix has been implemented. Some messages may have been corrupted and not delivered to your endpoint. An official post-mortem will be available by COB Tuesday, 08/22/2023. We will continue to monitor to ensure message delivery continues as expected.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are currently investigating an issue with inbound messages not being delivered. We will update once we have more information.
Report: "Inbound SMS to webhook customers stopped"
Last update:
Overview
Inbound SMS traffic towards webhook customers partially stopped due to database read replica lag.

What happened
Due to an increase in our database read replica lag, inbound traffic towards webhook customers stopped. Our SMS application was unable to fetch messages from the database since those messages were not yet available in the read replica due to the lag spike.

Resolution
As soon as the issue was identified, the quickest resolution was to deploy a hotfix reconfiguring all applications to read from the writer instance as a temporary solution. Later, a hotfix was implemented to read from the writer instance as a fallback whenever a record is not found in the reader instance, in case the lag ever increases again.

Root Causes
The root cause was an increase in database read replica lag. Applications were processing messages faster than records were propagated to the read replica. Applications tried to fetch messages, and since those were not yet available, they went into the retry queue and were delivered with a long delay.

Impact
Some HTTP webhook inbound traffic was delayed in the evening/early AM hours PST between 6/15/23 and 6/16/23.

What did we learn?
Since the outage was only partial, our existing metrics/alarms did not catch the issue and escalate it appropriately. We have added additional metrics and new alarms to alert for this kind of issue and prevent it from occurring again. We will also be performing some database maintenance in the near future to address the root cause.
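As an illustration of the reader-then-writer fallback described above, here is a minimal sketch assuming two PostgreSQL endpoints accessed via psycopg2. The hostnames, table, and column names are hypothetical, not TSG's actual schema.

```python
# Hypothetical sketch of the reader-with-writer-fallback pattern: try the read
# replica first, and only if the row is missing (e.g. due to replica lag) read
# from the writer instead of pushing the job to a retry queue.
import psycopg2

# Credentials omitted; endpoints are illustrative.
reader = psycopg2.connect(host="sms-db-reader.internal", dbname="sms", user="app")
writer = psycopg2.connect(host="sms-db-writer.internal", dbname="sms", user="app")

def fetch_inbound_message(message_id: str):
    """Return the inbound message row, falling back to the writer on a miss."""
    for conn in (reader, writer):
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, payload FROM inbound_messages WHERE id = %s",
                (message_id,),
            )
            row = cur.fetchone()
        if row is not None:
            return row
    return None  # genuinely missing, not just lagging behind the writer
```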
Report: "Local Inbound Voice Partial Disruption"
Last update: Dear Customers, We are pleased to inform you that the incident affecting Local Voice Origination has been resolved. All services have been fully restored, and monitoring has shown normal operations. We apologize for any inconvenience caused and appreciate your patience throughout this process. If you have any questions or concerns, please don't hesitate to contact our support team. Thank you for your understanding and continued support. Sincerely, TSG Global Incident Management Team
Dear Customers, We are pleased to inform you that the service interruption affecting certain Local Voice Origination numbers has been resolved. All affected services have been fully restored, and you can now resume normal operations. Our peering partner has successfully implemented the necessary fix, allowing for the complete restoration of the affected services. We appreciate your patience and understanding throughout the duration of this incident. While the issue has been resolved, we will continue to closely monitor the situation to ensure the stability and reliability of our services. Our team remains vigilant to address any potential concerns promptly. If you encounter any further issues or have any questions, please do not hesitate to contact our support team at support@tsgglobal.com, who can assist you. Thank you for your cooperation and understanding. Sincerely, TSG Global Incident Management Team
Dear Customers, We would like to provide you with an update regarding the ongoing partial outage impacting certain Local Voice Origination numbers. Please note that this incident does not affect all Local Voice Origination services. Our peering partner has successfully identified the root cause of the issue and is actively working towards a resolution and complete restoration of all affected Local Voice Origination numbers. While progress has been made, there is still work to be done. We are closely collaborating with our peering partner to expedite the resolution process. They are diligently working on applying the necessary fix to restore full functionality to the affected services. We understand the inconvenience this may have caused and appreciate your patience and understanding. Our priority is to ensure a swift resolution and minimize any disruption to your business operations. We will continue to closely monitor the situation and provide timely updates as we receive them from our peering partner. Please refer to our status page for the latest information on the incident and its progress toward resolution. Status Page: status.tsgglobal.com Thank you for your understanding and cooperation. Sincerely, TSG Global Incident Management Team
Our peering partner has identified the root cause for the partial service interruption. They are working to apply the fix and restore all affected services. We will update once more information is provided.
We are investigating a partial service disruption on some local inbound voice services. We will update as we have more information.
Report: "Degraded Performance with Industry OSR Provider"
Last update: TSG has been monitoring the OSR; requests are completing now, and most backlogged requests should have been processed. If you notice any submissions not enabled, please resubmit the requests. Thank you for your patience as the OSR worked through this incident.
The OSR is continuing to work on the outage. We apologize for any inconvenience this is causing. We will update once the status is provided.
The issue has been identified and a fix is being implemented.
TSG Global has been informed of an industry-wide degraded performance event with the OSR (messaging enablement for the ecosystem). This event impacts all carriers and all providers. It does impact SMS/MMS enablement as well as Campaign association. We will post updates as they are provided to us by the OSR.
Report: "Duplicate Text Messages Being Received"
Last update: This incident has been resolved. DLRs are available to all clients who have been requesting them.
We are excited to share an important update regarding Delivery Receipt (DLR) reports. As promised in our last communication, we have continuously worked with our aggregator to resume DLR delivery. We are now pleased to announce that we have made significant progress and are gradually enabling DLR delivery requests for clients who request them. This process will be incremental as we closely monitor the status to ensure optimal performance and reliability. Our team is dedicated to maintaining the high quality of our messaging services, and we appreciate your patience and understanding throughout this transition. Stay tuned for more updates as we work towards fully restoring DLR delivery for all clients. Should you have any questions or concerns, please don't hesitate to contact our support team. Thank you for your continued trust in our services!
Messaging continues to operate at expected levels. Delivery Receipt (DLR) reports, however, continue to be suspended. We continue to work with our aggregator to resume DLR delivery, and we will provide updates on DLR reports as they become available. Thank you for your patience and understanding throughout this process.
Messaging continues to operate at expected levels. Delivery Receipt (DLR) reports, however, continue to be suspended. We'll keep monitoring the system to ensure optimal performance and provide updates on DLR reports as they become available. Thank you for your patience and understanding throughout this process.
We're pleased to confirm that the situation remains stable; messaging continues to operate at expected levels. Delivery Receipt (DLR) reports, however, continue to be suspended. We'll keep monitoring the system to ensure optimal performance and provide updates on DLR reports as they become available. Thank you for your patience and understanding throughout this process.
Service levels for messaging continue to operate at expected levels. Please note that Delivery Receipt (DLR) reports remain suspended at this time. We appreciate your ongoing patience and will provide another update in 12 hours.
Our team continues to closely monitor normal service levels for SMS messages following the clearing of the backlog queue. Please note that Delivery Receipt (DLR) reports remain suspended at this time. We appreciate your ongoing patience and will provide another update in 12 hours.
We are pleased to report that the backlog queue has been processed, and there are no longer any delays with inbound messages. Delivery Receipt (DLR) reports remain suspended, and we'll provide further updates as the situation evolves. We appreciate your continued patience and understanding.
We are delighted to announce that the aggregator's patch has been effective, with no errors reported thus far. As of 14:00 UTC (10:00 EST), instances of duplicate messages have significantly decreased, although the backlog queue is still being processed. New outgoing messages might experience temporary delays while we continue clearing the queue. Please be aware that Delivery Receipt (DLR) reports remain suspended for now, and we will share an update once they are back online. Thank you for your patience and understanding during this time.
The aggregator gateway has implemented a solution, and we will be closely monitoring its performance with them over the next 72 hours. During this time, their system is processing a backlog of inbound messages, and you may experience some duplicate message deliveries. Please note that new incoming messages may be temporarily delayed as we clear the queue. Delivery Receipt (DLR) reports are currently suspended, and we will provide an update once they resume. We appreciate your patience and understanding.
12:10 PM PST Update: there was a temporary outbound message delay while we troubleshot the issue with our aggregator. All messages were queued during that period, and the queue has now drained. We are now going to be disabling delivery receipts (DLRs) to continue testing, as we are seeing positive momentum towards getting this resolved. Please be advised that you may not receive DLRs for a period of time while we continue working to fix this issue.
10:53 AM PST Update: we are still working with our aggregator to correct this issue, and have been actively on a call with them all morning. We are migrating traffic between binds and performing other tests to find the root cause and get it fixed. Unfortunately, due to the complexity of this issue, implementing a temporary fix at the software layer to prevent duplicate messages is a last-resort option. We will provide an update again later this morning.
1:30 PM PST update: we have been on calls continually with our aggregator today attempting to resolve this ongoing issue. We will continue working through the night and the early AM to get this issue resolved for you. Unfortunately, we don't have an explicit ETA at this time. We can share that the issue is transient in nature and is randomly affecting a subset of our overall traffic for all customers. We are performing some deep network analysis (PCAPs, etc.) and queue analysis in tandem with our aggregator to get this fixed as soon as possible. We will provide another update soon.
Firstly, we apologize for the continued inconvenience. We continue working with our aggregator to fix this issue. We have been in constant contact with their technical team, and the root issue stems from our aggregator's side - not TSG Global. We will provide a full postmortem once the issue is resolved. If you have any questions, please reach out to support@tsgglobal.com
A fix has been implemented and we are monitoring throughout the evening for any new issues. We will resolve this incident in the AM on 4/6/2023 if no further issues are encountered. Again, we appreciate your patience as we worked with our aggregator to resolve the issue, and will provide a full postmortem once we collect all the necessary details.
Firstly, we apologize for the inconvenience. We are continuing to work with our aggregator to fix this issue via Zoom. We have been in constant contact with their technical team, and the root issue is stemming from our aggregator's side - not TSG Global. We will provide a full postmortem once the issue is resolved. If you have any questions, please reach out to support@tsgglobal.com
We have reports from customers that duplicate text messages are being received. We are aware of the issue and are working with our aggregator to resolve the issue/implement a fix. It is intermittent, and we will have an update soon.
Report: "Inbound SMS not delivered via SMPP"
Last update: Again, we sincerely apologize for the recent outage you may have experienced, in which inbound SMS traffic towards SMPP customers was stopped due to a faulty application version deploy. Below is the post-mortem:

Overview
Inbound SMS traffic towards SMPP customers stopped due to a faulty application version deploy.

What Happened?
Due to the faulty application version deploy, all inbound SMS traffic to SMPP customers stopped between ~5 AM EST on 1/30/23 and ~2 AM EST on 1/31/23. This newest release was tested on staging with new unit testing/data, and due to errors with these new tests, the issue was not caught.

Resolution
As soon as the issue was identified, the previous version was restored and traffic resumed.

Root Causes
This issue was caused by a faulty application release deploy whose issues were not caught on staging. There were multiple reasons why this issue was not noticed earlier:
1. an issue with one large client having SMPP connection issues over the weekend prior to the faulty release being deployed to staging
2. an issue with one of our upstream vendor binds at the same time
3. joint metrics monitoring both HTTP API and SMPP clients that did not show a traffic drop to 0, since HTTP API traffic resumed/was unaffected, and for that reason alarms were not triggered

Impact
All SMPP customers' inbound SMS traffic was delayed a few hours, queued, and delivered in a large batch once connections resumed.

What went well?
* As soon as the issue was correctly detected, reverting applications to the previous version quickly resolved the issue.

What didn't go so well?
* Alarms did not go off, and the issue was not immediately noticed by our team via Slack or PagerDuty.
* Staging testing did not catch the issue (again, this was newer test data used for traffic mocking on staging).

Action items
* Additional metrics will be added to SMSC and API endpoints to monitor those separately, and alarms should be added accordingly (a minimal sketch of this idea follows below).
* Additional metrics will report client binds being restarted, and an alarm should follow if that rate exceeds some reasonable threshold.
* Staging mocking apps should be improved (already a work in progress, partially done) to catch errors like these.
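To illustrate the first two action items, here is a minimal sketch using the Python prometheus_client library. The metric names, labels, and port are assumptions for illustration, not TSG's actual instrumentation.

```python
# Hypothetical sketch: count inbound deliveries per transport so SMPP and HTTP
# API traffic can alarm independently, and count SMPP bind restarts so a spike
# can trigger an alert. Metric names, labels, and the port are assumptions.
import time

from prometheus_client import Counter, start_http_server

INBOUND_DELIVERED = Counter(
    "inbound_sms_delivered_total",
    "Inbound SMS handed off to customers",
    ["transport"],  # "smpp" or "http_api"
)
BIND_RESTARTS = Counter(
    "smpp_client_bind_restarts_total",
    "Number of times a customer SMPP bind was restarted",
)

def on_inbound_delivered(transport: str) -> None:
    INBOUND_DELIVERED.labels(transport=transport).inc()

def on_bind_restart() -> None:
    BIND_RESTARTS.inc()

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for Prometheus to scrape
    # Alert rules (e.g. a per-transport delivery rate dropping to zero, or a
    # bind-restart rate above a threshold) would live in Prometheus/Alertmanager.
    while True:
        time.sleep(60)
```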
We sincerely apologize for the recent outage you may have experienced, in which inbound SMS traffic towards SMPP customers was stopped due to a faulty application version deploy. Please review the post-mortem of what happened and how we learned from the situation.
Report: "SMS Failing or Latency Issues"
Last update: This incident has been resolved.
We are currently investigating issues with SMS failing for HTTP users, as well as latency issues for SMPP users. Updates to come.
Report: "SMS/MMS Deliverability Latency"
Last update: This incident has been resolved.
We are currently investigating some latency related to inbound/outbound SMS/MMS messaging.
Report: "Inbound/Outbound SMS & DLR Deliverability Outage"
Last update: Hi there,

First and foremost, we apologize for the message deliverability issues experienced by SMS customers on September 29th, 2022 beginning around 3:15 PM PST. We understand message deliverability is an important aspect of many of our customers' applications, and that every lost message, or minute of unavailability, impacts your business. Below is our postmortem from the Event, and our Remediation Plan / Action Items that we plan to take in order to prevent this particular issue from happening again. If you have any questions about the postmortem below, or would like more details, please email the team at: support@tsgglobal.com

# Overview
From about 3:10 PM PST to 5:51 PM PST on Thursday, September 29th, 2022, there was an outage related to our SMS stack that impacted the transporting of both inbound and outbound SMS messages, as well as delivery receipts (DLRs). The root cause was the crashing (and unfortunate subsequent crash/reboot/crash cycles) of what we dub our “Nimbus” pipe that handles all SMS messages. MMS messages and voice services are handled via a separate pipe/system and were unaffected. We use a Kubernetes-based architecture, and usually a crashed pod/application is able to recover without any human intervention, so we need to investigate why that was not the case in this instance. After several attempts to get the service to reboot manually, and attempting to disable several other applications that may have been a possible catalyst for the Event, we discovered that this particular instance had some legacy code relating to how this particular pod was mounted and configured. We implemented a quick fix by stripping these legacy boot requirements and deploying a new version, and the pod was able to boot normally and resume traffic.

# What Exactly Happened
At 3:10 PM PST several automated alarms (Prometheus/Grafana feeding into PagerDuty/Slack) were triggered notifying TSG Global staff that there was an issue with our SMS service (specifically, that a K8s pod was not healthy, followed by alarms that queues were getting larger and not emptying appropriately). TSG Global technical staff quickly began investigating the issue. We attempted to reboot the pod / make it healthy, but the pod continued to crash and not be available. We attempted to disable several other applications associated with the Nimbus pipe that we suspected might be playing a role in our errors, ruled out any external/vendor issues, attempted to roll back to several older versions of the application, and rebooted the queue, and we were still experiencing crashes. We then needed to dig into some legacy code associated with the application. Our “Nimbus” pipe is an application that is deployed as a singleton stateful set (STS) in Kubernetes (K8s). Again, the core issue is that it was crashing and failed to boot continuously. After additional review, we found that this particular STS was configured to mount the disk in 'ReadWriteOnce' mode, meaning only one K8s pod can access the disk at the same time. As a result, when the old pod (which crashed) did not release the disk correctly (which is still being investigated), the new pod that spawned was continuously unable to mount it and access the necessary data from the disk.

# Resolution
Since the data read from the disk is not critical for normal operation, a quick fix was published/deployed that omitted reading data from the disk on boot, which enabled the application to boot successfully without accessing the disk (a simplified sketch of this kind of change follows at the end of this post-mortem). Once the “Nimbus” pipe was booted properly, SMS messaging and DLRs resumed flowing as normal (and some queued messages were delivered in a “burst” of traffic immediately upon revival of the service).

# Root Cause
The “Nimbus” application was unable to read data from the mounted disk because the disk was not properly released by the previous instance that crashed. Crashing is something that can always happen, but resources should be released and not left attached to a “zombie” instance that did not shut down completely, thereby preventing a new healthy instance from spawning and booting appropriately.

# Impact
* Sending outbound SMS messages was delayed for all clients during the outage period.
* Receiving inbound SMS messages and DLRs was delayed for all clients during the outage period.
* While attempting to troubleshoot and resolve issues, one group of inbound (MO) messages was intentionally dropped from the queues and never delivered. However, clients can access those messages in the TSG system via API, since they were stored in our databases.

# What Went Well
* Pre-programmed alerts in Slack/via PagerDuty went off immediately as a result of the Event, and on-call staff quickly notified relevant parties of the severity level so that we could work towards resolution.
* Investigation and work towards resolving the issue started within minutes of the Event first being triggered.

# What Did Not Go So Well
* Determining the root cause took much longer than we wanted it to.
* There was some misleading information in some logs, and a human factor in misreading some of the logs. It was very, very early in the AM for the individuals handling this particular issue.
* Once the root cause was identified, there was a period of time misspent by the team believing the instance would eventually release the resource and that it would be able to boot.
* The final resolution of disabling the problematic part of the code could have been done much earlier, since that particular function of the boot sequence was not critical to the overall health of the pod or the function of our services.

# Our Remediation Plan / Action Items
* Review all existing settings and current architecture to prevent similar issues in the future.
* We commit to immediately reviewing any other applications deployed to K8s and notating any inconsistencies in how those applications boot/deploy. We also commit to making some long-term architectural changes that will first be rigorously tested on our staging environment, including possible fail-over applications to provide better redundancy.
* We will also investigate/scope what is needed to replay messages back to clients in the event a queue does need to be emptied for any reason, so that customer applications can continue to function as expected without manual intervention by customers.
* Clearer logging and less noise.
* We have employed AWS CloudWatch, Bugsnag, and Prometheus/Grafana in various states to alert our team about issues and allow us to investigate them. We need to consolidate and build around one solution in both our staging and production environments to allow for better quality logging and simplified investigation processes.
* Consolidate alerts to our developer-managed email inbox.
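For illustration only, here is a minimal sketch of the kind of change described in the Resolution above: treating the cached state on the mounted volume as optional so that a stuck ReadWriteOnce mount cannot block startup. The path and function names are hypothetical; this is not TSG's actual code.

```python
# Hypothetical sketch: make the boot-time snapshot read best-effort so the
# application can still start when the volume is unavailable (e.g. still held
# by a crashed predecessor pod). Paths and names are illustrative.
import logging
from typing import Optional

SNAPSHOT_PATH = "/mnt/nimbus/state.snapshot"
log = logging.getLogger("nimbus.boot")

def load_optional_snapshot(path: str = SNAPSHOT_PATH) -> Optional[bytes]:
    """Best-effort read of cached boot state; never fatal."""
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except OSError as exc:
        # Volume missing or not yet released: boot with empty state instead of
        # crash-looping on the mount.
        log.warning("Skipping boot snapshot (%s); starting with empty state", exc)
        return None

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    snapshot = load_optional_snapshot()
    log.info("Booting with %s", "cached state" if snapshot else "empty state")
```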
A fix has been implemented and messages should be flowing normally. We will continue monitoring for any additional issues.
We have identified the core services affected and we are continuing to work on a fix. No ETA at this time. Stay tuned for updates.
We are currently investigating this issue.
Report: "Outbound Messaging Latency"
Last update: This incident has been resolved.
We have identified an issue with outbound messaging and some latency (several seconds or minutes) impacting customers. We are escalating with our upstream and hope to have an update soon.
Report: "Inbound Webhook Latency"
Last update: This incident has been resolved.
The issue has been identified and we are investigating a longer term fix.
We are currently investigating this issue.
Report: "Inbound Message Delay"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the systems for any additional issues.
The issue has been identified and the inbound message queue is draining, and inbound messages should be resuming.
We are currently investigating delays related to inbound messages (SMS/MMS and DLRs). We will have an update shortly.
Report: "Partial Voice Outage"
Last update: This incident has been resolved. Our underlying LRN provider had an internal issue that has since been reverted/fixed, and services should be fully functional now.
We are currently investigating issues with our voice services.
Report: "TSG Voice Congestion Issues"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the system for any other issues.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are currently investigating congestion issues with the TSG Global voice services. We hope to have an update soon.
Report: "This is an example incident"
Last updateWhen your product or service isn’t functioning as expected, let your customers know by creating an incident. Communicate early, even if you don’t know exactly what’s going on.
Empathize with those affected and let them know everything is operating as normal.
As you continue to work through the incident, update your customers frequently.
Let your users know once a fix is in place, and keep communication clear and precise.