Historical record of incidents for Workspot
Report: "GCP Service Alert"
Last update: GCP update 12:09 PDT: "Our engineers are continuing to mitigate the issue and we have confirmation that the issue is recovered in some locations."
GCP is reporting at 11:46 PDT: "Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue". https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW
Report: "Workspot Control Affected Due to Downstream Platform Issue"
Last update: We are currently investigating this issue. We have escalated it by raising a priority ticket with the platform vendor and are working with them to resolve it as quickly as possible. We will provide updates as we learn more.
Report: "Workspot Europe Control Upgrade to R19.1 on June 3rd at 0530 UTC"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This release will require an update window of up to 10 minutes. During this window, administrators will not be able to log in to Control, and end users will not be able to launch new non-persistent desktop sessions. Existing end-user sessions will not be impacted, and end users can launch persistent desktops.
Report: "Workspot US Watch Upgrade to R6.1.4 on May 31st, at 0230 UTC"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This release will require an update window of up to 10 minutes. During this window, administrators cannot log in to Workspot Watch. All other services (Control, Trends) and end-user sessions will not be impacted. To learn more about Workspot Watch release updates, please refer to https://docs.workspot.com/docs/using-workspot-watch
Report: "Workspot Trends Upgrade to R2.3.2 on May 31st at 0300 UTC"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This release will require an update window of up to 10 minutes. During this window, Trends users cannot log in to Workspot Trends. All other services (Control, Watch) and end-user sessions will not be impacted. To learn more about Workspot Trends, please refer to https://docs.workspot.com/docs/using-workspot-trends
Report: "Workspot Control Upgrade to R19.1 on May 31st at 0200 UTC"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This release will require an update window of up to 10 minutes. During this window, administrators will not be able to log in to Control, and end users will not be able to launch new non-persistent desktop sessions. Existing end-user sessions will not be impacted, and end users can launch persistent desktops.
Report: "Azure Service Alert – East US Region VMs might be impacted"
Last update: As per Microsoft, this issue has been resolved. Here is the incident summary, shared by Microsoft:
Issue Summary: Between 09:07 UTC and 16:25 UTC on 29 May 2025, a platform issue resulted in an impact to the following services in the East US region:
- Virtual Machines & Virtual Machine Scale Sets: Error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for resources hosted in this region. This impact was restricted to a single Availability Zone (AZ01), Physical AZ01. Retries may have been successful.
- Azure Synapse Analytics: Issues while executing Spark jobs through Synapse Pipelines or Notebooks, encountering the error code "CLUSTER_CREATION_TIMED_OUT". Retries may have been successful.
- Azure Data Factory: Activity or Pipeline run failures and delays due to dataflow activity failures.
MS Response Timeline:
- 09:07 UTC: Customer impact began.
- 09:12 UTC: Auto-recovery attempts started, including load-shedding and failover.
- 09:15 UTC: Service monitoring detected spikes in VM failures; investigation began.
- 11:45 UTC: Platform engineers terminated problematic service instances to free compute resources.
- 12:53 UTC: Services started processing backlogged VM requests, with some customers still seeing timeouts and throttling.
- 13:15 UTC: Engineers redirected VM deployment traffic to alternate management services to speed recovery.
- 13:48 UTC: Failover progress noted, backlog began draining.
- 13:58 UTC: Azure Data Factory service restored.
- 14:09 UTC: Azure Synapse Analytics service restored.
- 16:25 UTC: All services fully restored; customer impact mitigated.
Issue Summary:
Start Time: 09:15 UTC on 29 May 2025
Impact: Errors may occur during service management operations (create, delete, update, scale, start, stop) for VMs.
Cause: A sudden spike in usage has caused backend VM components to hit operational limits, resulting in delays and failures.
Current Status: Microsoft is mitigating the issue by failing over to a healthy backend instance. Please monitor the updates in your Azure subscription under Service Health if your resources are in the East US region.
Report: "Azure Service Alert – East US Region VMs might be impacted"
Last updateAs per Microsoft, this issue has been resoved. Here is the incident summary, shared by Microsoft:Issue Summary:Between 09:07 UTC and 16:25 UTC on 29 May 2025, a platform issue resulted in an impact to the following services in the East US region:- Virtual Machines & Virtual Machine Scale Sets: Error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for resources hosted in this region. This impact was restricted to a single Availability Zone (AZ01), Physical AZ01. Retries may have been successful.- Azure Synapse Analytics: Issues while executing Spark jobs through Synapse Pipelines or Notebooks, encountering the error code "CLUSTER_CREATION_TIMED_OUT". Retries may have been successful.- Azure Data Factory: Activity or Pipeline run failures and delays due to dataflow activity failures.MS Response Timeline:- 09:07 UTC: Customer impact began.- 09:12 UTC: Auto-recovery attempts started, including load-shedding and failover.- 09:15 UTC: Service monitoring detected spikes in VM failures; investigation began.- 11:45 UTC: Platform engineers terminated problematic service instances to free compute resources.- 12:53 UTC: Services started processing backlogged VM requests, with some customers still seeing timeouts and throttling.- 13:15 UTC: Engineers redirected VM deployment traffic to alternate management services to speed recovery.- 13:48 UTC: Failover progress noted, backlog began draining.- 13:58 UTC: Azure Data Factory service restored.- 14:09 UTC: Azure Synapse Analytics service restored.- 16:25 UTC: All services fully restored; customer impact mitigated.
Issue Summary:Start Time: 09:15 UTC on 29 May 2025Impact: Errors may occur during service management operations (create, delete, update, scale, start, stop) for VMs.Cause: A sudden spike in usage has caused backend VM components to hit operational limits, resulting in delays and failures.Current Status: Microsoft is mitigating the issue by failing over to a healthy backend instance. Please monitor the updates in Azure subscription under Service Health, if your resources are in East US region.
Report: "Workspot API service issue"
Last update: This incident has been resolved. No issues have been observed since the rollback was performed.
The Engineering team has identified the issue and reverted the Control changes implemented over the weekend at approximately 16:00 UTC. Since the rollback, no further issues have been observed. We continue to actively monitor the environment.
Workspot Service is in maintenance mode
Report: "Azure Network Infrastructure - Issues accessing a subset of Microsoft services"
Last update: The issue has been mitigated. Microsoft has shared the below preliminary post-incident review regarding this incident (Tracking ID: KTY1-HW8): This is our Preliminary PIR that we endeavor to publish within 3 days of incident mitigation to share what we know so far. After our internal retrospective is completed (generally within 14 days) we will publish a "Final" PIR with additional details/learnings.
What happened? Between 11:45 and 13:58 UTC on 30 July 2024, a subset of customers experienced intermittent connection errors, timeouts, or latency spikes while connecting to Microsoft services that leverage Azure Front Door (AFD) and Azure Content Delivery Network (CDN). The two main impacted services were Azure Front Door (AFD) and Azure Content Delivery Network (CDN), and downstream services that rely on these – including the Azure portal, and a subset of Microsoft 365 and Microsoft Purview services. From 13:58 to 19:43 UTC, a smaller set of customers continued to observe a low rate of connection timeouts.
What went wrong and why? Azure Front Door (AFD) is Microsoft's scalable platform for web acceleration, global load balancing, and content delivery, operating in nearly 200 locations worldwide – including datacenters within Azure regions, and edge sites. AFD and Azure CDN are built with platform defenses against network and application layer Distributed Denial-of-Service (DDoS) attacks. In addition to this, these services rely on the Azure network DDoS protection service for attacks at the network layer. You can read more about the protection mechanisms at https://learn.microsoft.com/azure/ddos-protection/ddos-protection-overview and https://learn.microsoft.com/azure/frontdoor/front-door-ddos. Between 10:15 and 10:45 UTC, a volumetric distributed TCP SYN flood DDoS attack occurred at multiple Azure Front Door and CDN sites. This attack was automatically mitigated by the Azure Network DDoS protection service and had minimal customer impact. At 11:45 UTC, as the Network DDoS protection service was disengaging and resuming default traffic routing to the Azure Front Door service, the network routes could not be updated within one specific site in Europe. This happened because of Network DDoS control plane failures to that specific site, due to a local power outage. Consequently, traffic inside Europe continued to be forwarded to AFD through our DDoS protection services, instead of returning directly to AFD. This event in isolation would not have caused any impact. However, an unrelated latent network configuration issue caused traffic from outside Europe to be routed to the DDoS protection system within Europe. This led to localized congestion, which caused customers to experience high latency and connectivity failures across multiple regions. The vast majority of the impact was mitigated by 13:58 UTC, around two hours later, when we resolved the routing issue. A small subset of customers without retry logic in their application may have experienced residual effects until 19:43 UTC.
How did we respond? Our internal monitors detected impact on our Europe edge sites at 11:47 UTC, immediately prompting a series of investigations. Once we identified that the network routes could not be updated within that one specific site, we updated the DDoS protection configuration system to avoid traffic congestion. These changes successfully mitigated most of the impact by 13:58 UTC. Availability returned to pre-incident levels by 19:43 UTC once the default network policies were fully restored.
How we are making incidents like this less likely or less impactful:
- We have already added the missing configuration on network devices to ensure a DDoS mitigation issue in one geography cannot spread to other geographies in the Europe region which resulted in traffic redirection. (Completed)
- We are enhancing our existing validation and monitoring in the Azure network, to detect invalid configurations. (Estimated completion: November 2024)
- We are improving our monitoring where our DDoS protection service is unreachable from the control plane, but is still serving traffic. (Estimated completion: November 2024)
- This is our Preliminary PIR that we endeavor to publish within 3 days of incident mitigation to share what we know so far. After our internal retrospective is completed (generally within 14 days) we will publish a "Final" PIR with additional details/learnings.
How can customers make incidents like this less impactful:
- For customers of Azure Front Door/Azure CDN products, implementing retry logic in your client-side applications can help handle temporary failures when connecting to a service or network resource during mitigations of network layer DDoS attacks. For more information, refer to our recommended error-handling design patterns: https://learn.microsoft.com/azure/well-architected/resiliency/app-design-error-handling#implement-retry-logic.
- Applications that use exponential-backoff in their retry strategy may have seen success, as an immediate retry during intervals of high packet loss may have also seen high packet loss. A retry conducted during periods of lower loss would likely have succeeded. For more details on retry patterns, refer to https://learn.microsoft.com/azure/architecture/patterns/retry.
- More generally, consider evaluating the reliability of your applications using guidance from the Azure Well-Architected Framework and its interactive Well-Architected Review: https://docs.microsoft.com/azure/architecture/framework/resiliency.
- Finally, ensure that the right people in your organization will be notified about any future service issues by configuring Azure Service Health alerts. These alerts can trigger emails, SMS, push notifications, webhooks, and more: https://aka.ms/ash-alerts
How can we make our incident communications more useful? You can rate this PIR and provide any feedback using our quick 3-question survey: https://aka.ms/AzPIR/KTY1-HW8
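As context for Microsoft's retry guidance above, here is a minimal illustrative sketch in Python of a client-side retry loop with exponential backoff and jitter. It is not Workspot or Microsoft code; the URL, attempt count, and delay values are placeholder assumptions.

    import random
    import time
    import urllib.error
    import urllib.request

    def fetch_with_backoff(url, max_attempts=5, base_delay=1.0):
        """Retry transient connection failures with exponential backoff plus jitter."""
        for attempt in range(max_attempts):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == max_attempts - 1:
                    raise  # give up after the final attempt
                # Wait 1s, 2s, 4s, ... plus random jitter before retrying
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))

    # Example with a placeholder endpoint:
    # fetch_with_backoff("https://example.com/health")

An immediate, fixed-interval retry tends to fail again while packet loss is still high; spacing retries out exponentially, as above, gives the network time to recover between attempts.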
As per Microsoft, they have implemented networking configuration changes, and telemetry shows improvement in service availability.
Microsoft updated the status page with the below details: We have implemented networking configuration changes and have performed failovers to alternate networking paths to provide relief. Monitoring telemetry shows improvement in service availability from approximately 14:10 UTC onwards, and we are continuing to monitor to ensure full recovery.
Microsoft updated its status page that they are investigating reports of issues connecting to Microsoft services globally. Customers may experience timeouts connecting to Azure services. Please refer to https://azure.status.microsoft/en-us/status for the latest update.
Report: "Crowdstrike Issue - General Update."
Last update: The issue has been resolved by fixing the affected Virtual Desktops or providing steps to our customers to recover the affected desktops.
Workspot has identified specific customers with virtual machines impacted by the Crowdstrike and Microsoft issue. The Workspot service is 100% operational, not impacted, and automatically detected the issues affecting customer VMs. We are actively working with customers affected by this outage to restore their service. If you are impacted by this issue, and we have contacted you, please respond to the communication so that we can schedule time with you to resolve the issue. If you have not received an email and are affected by this outage, please open a support ticket through Control so that we can help. Please allow approximately 30 minutes to restore each VM. This is the average time we are currently taking. While this global outage is not a Workspot issue, we take pride in maintaining our high level of service for our customers. We will continue to monitor this issue this week. Thank you for your understanding and patience.
Report: "PaaS Provider issue impacting Agent communications with Workspot Cloud"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating the following issues: 1. Control and Watch may not show telemetry data (i.e. CPU, Memory, RTT) 2. Gateways may show offline status in Control and Watch. Users can still connect as normal and are not impacted.
Report: "Workspot Agents and Enterprise Connectors failing to connect via the new URL after Control 18.2 update"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
Workspot Control was updated to 18.2 on April 20, 2024. Post update, Workspot observed that some Workspot Agents and Workspot Enterprise Connectors failed to connect to Control via the new https://control.us.workspot.com URL. This was presumably caused by firewalls blocking *.us.workspot.com. If this was happening to you, you might have seen the following symptoms: • Enterprise Connectors were showing as “Offline” in Workspot Control. • Non-persistent desktops were showing as “Offline” and could not be assigned to new users (ongoing sessions were unaffected). • Persistent desktops were showing as “Offline” but users could still connect to them. Workspot has remediated the issue observed after the 18.2 update and is monitoring the environment. Please contact Workspot Support with any questions.
Report: "Workspot Watch System Reboot"
Last update: This incident has been resolved.
Workspot Watch may be loading slowly. Resolution in progress. All other systems are fully operational.
Report: "Multiple Microsoft Services are down affecting Workspot Customers"
Last update: Microsoft confirmed that the issues have been mitigated. From Microsoft:
Summary of Impact: Between 07:05 UTC and 09:45 UTC on 25 January 2023, customers experienced issues with networking connectivity, manifesting as network latency and/or timeouts when attempting to connect to Azure resources in Public Azure regions, as well as other Microsoft services including M365 and PowerBI.
Preliminary Root Cause: We determined that a change made to the Microsoft Wide Area Network (WAN) impacted connectivity between clients on the internet and Azure, as well as connectivity between services in different regions, as well as ExpressRoute connections.
Mitigation: We identified a recent change to the WAN as the underlying cause and have rolled back this change. Networking telemetry shows recovery from 09:00 UTC onwards across all regions and services, with the final networking equipment recovering at 09:35 UTC. Most impacted Microsoft services automatically recovered once network connectivity was restored, and we worked to recover the remaining impacted services.
Next Steps: We will follow up in 3 days with a preliminary Post Incident Report (PIR), which will cover the initial root cause and repair items. We'll follow that up 14 days later with a final PIR where we will share a deep dive into the incident. You can track the Post Incident Report (PIR) on the Microsoft Azure link - https://status.azure.com/
We are closing this incident from our end. Please reach out to the Workspot Support team in case you need any assistance related to Workspot Cloud Desktops.
As per the latest Microsoft update on the status, they have determined the network connectivity issue is occurring with devices across the Microsoft Wide Area Network (WAN). This impacts connectivity between clients on the internet and Azure, as well as connectivity between services in data centers, as well as ExpressRoute connections. The issue is causing impact in waves, peaking approximately every 30 minutes. They have identified a recent WAN update as the likely underlying cause, and they have taken steps to roll back this update. They are seeing signs of recovery across multiple regions and services, and are continuing to actively monitor the situation. You can monitor the status of the ongoing issue on the below Microsoft links: - https://status.azure.com/ - https://status.office365.com/ We will keep you posted on the status periodically.
As per the latest update from Microsoft, starting at 07:30 UTC, they identified a networking issue impacting connectivity to Azure for a subset of users. They are actively investigating. You can monitor the status of the ongoing issue on the below Microsoft links: - https://status.azure.com/ - https://status.office365.com/ We will keep you posted on the status periodically.
Microsoft reported a global outage for Office 365 & Azure Portal services. Customers using Azure AD for authentication may see problems while logging into the Workspot clients or launching their desktops. You can monitor the status of the ongoing issue on the below Microsoft links: - https://status.azure.com/ - https://status.office365.com/ We will keep you posted on the status periodically.
Report: "Incident Detected - Azure West US 2 Region Connectivity Issues - December 14, 2022 20:16 UTC"
Last update: Azure has resolved the incident. Their last update: They have identified that multiple top-of-rack (ToR) network devices, connecting a single rack of servers, experienced an inadvertent power failure. A top-of-rack (ToR) network device, connecting a single rack of servers, experienced a fault. They have restored power to the top-of-rack (ToR) network devices to mitigate the issue. With traffic routing normally, no further connectivity issues are expected. If you continue to notice connectivity issues or VMs rebooting unexpectedly in West US 2, please open a support ticket through the Workspot Control Support Portal.
This incident is now mitigated. Between 20:16 UTC on December 14, 2022 and 00:39 UTC on December 15, 2022, Virtual Machines in West US 2 experienced connectivity issues. Affected Virtual Machines may have restarted unexpectedly during this incident. Root cause: A network device experienced a fault, resulting in network connectivity loss to downstream Virtual Machines. Mitigation: Service healing was triggered to automatically redeploy affected Virtual Machines to healthy infrastructure. We are continuing to monitor the situation and will close the incident as resolved if no further issues are reported by our customers or Azure.
We are still waiting on an update from Azure. Last update was: Azure have identified that multiple top-of-rack (ToR) network devices, connecting a single rack of servers, experienced an inadvertent power failure. The Azure resources connected to these ToRs were shut down. Customers may have been unable to connect to their resources and Virtual Machines (VMs) that were deployed onto the affected nodes may have experienced a restart. They are actively working to return the ToRs to a normal state. An update will be provided in 60 minutes or as events warrant.
Azure have identified that multiple top-of-rack (ToR) network devices, connecting a single rack of servers, experienced an inadvertent power failure. The Azure resources connected to these ToRs were shut down. Customers may have been unable to connect to their resources and Virtual Machines (VMs) that were deployed onto the affected nodes may have experienced a restart. They are actively working to return the ToRs to a normal state. An update will be provided in 60 minutes or as events warrant.
Azure is continuing to work to reroute traffic around the affected network device. We are still seeing issues connecting to some VMs in West US 2 region of Azure. We will update this incident every 30 minutes or as warranted.
Azure reports a network device has experienced a fault, resulting in network connectivity loss to downstream resources. The unhealthy network device is being isolated from the network and traffic is being rerouted to healthy infrastructure. We will update this incident every 30 minutes or as warranted.
Azure has reported that starting at 19:11 UTC on 14 Dec 2022, Virtual Machines in West US 2 may be experiencing connectivity issues. These Virtual Machines may have also restarted unexpectedly since the start of this incident. A network device has experienced a fault, resulting in network connectivity loss to downstream resources. The unhealthy network device is being isolated from the network and traffic is being rerouted to healthy infrastructure. We will update this incident every 30 minutes or as warranted.
Report: "Control Service Incident Detected"
Last update: Our PaaS provider engineers have confirmed that the upstream provider resolved the incident at 23:40 UTC on 11/30/2022. There is no more information available at this time.
Workspot's PaaS provider experienced a DNS propagation issue with their upstream provider. Their metrics have returned to normal service levels. We will update this resolved incident with a post mortem once we have received it.
The Workspot Control Service is reachable once again. We are continuing to monitor the incident and will provide an update as soon as we have a proper resolution to the issue. All services are currently reachable and working as expected as of 12/01/2022 12:01am UTC.
As of 11/30/2022 10:52:53 PM UTC Workspot detected a Control Service incident. We are currently investigating and working to get the service up as soon as possible.
Report: "Control Is Currently Unreachable"
Last update: We have been provided an RCA from our PaaS provider: "Between August 23, 2022 17:54 UTC and August 24, 2022 03:33 UTC, our customers experienced DNS resolution failures in all regions of Private Spaces and Common Runtime. We sincerely apologize for the negative effects our customers experienced. A failure originating with our upstream DNS provider began at 17:54 UTC. This failure initially caused DNS propagation delays for new apps and newly added custom domains, deteriorating at 23:30 UTC, when existing applications also could not complete DNS resolution. The failure was detected automatically by our monitoring systems and engineers were paged in, immediately engaging the provider. During this time, we learned that a migration that was performed on our account was directly responsible for the system degradation. The DNS provider's engineers started working on mitigation at 20:18 UTC, resolving the issue at 00:56 UTC." Workspot Control service was impacted by this DNS resolution failure/outage from 23:15:44 UTC on 8/23 to 00:54:44 AM on 8/24.
Workspot Control service is reachable, and we have not observed the issue in the last couple of hours. Our Service provider has confirmed that the issue with the upstream DNS provider is resolved now. We will update the incident with an RCA once we have it. Thank you for your patience.
Workspot Control Service is once again reachable. We are waiting on an update/RCA from our PaaS provider that experienced the DNS issue. We are continuing to monitor the situation. We will update the incident with a post mortem/RCA from our PaaS provider once we have it. Thank you for your patience.
We are receiving reports that some customers are able to access Control; it appears to be location dependent. All end users should be able to access their VMs; the only impact currently is that Control is unreachable for some locations. Our PaaS provider is experiencing DNS issues, is aware of the issue, and is investigating. We will continue to update status as warranted.
As of 4:45PM we are aware that Control is currently unreachable. Our service provider is experiencing DNS issues and is aware of the issue and investigating it. We will keep you updated as we get more information.
Report: "Network connectivity issues - Azure Multiple Regions - Identified"
Last update: Azure is reporting the issue is now completely resolved. Closing out this incident as resolved.
MSFT/Azure has updated Workspot with the following: Traffic across the Azure network and affected regions remains normal and they continue to actively monitor. They are finalizing the preventative workstream they have validated, which they expect to prevent recurrence of this particular issue that affected the Azure network and customer workloads. We will continue to update Workspot status as warranted.
MSFT/Azure has notified Workspot that even though network connectivity is healthy, they are continuing multiple workstreams to ensure preventative action has been completed prior to calling this issue resolved. We will update status as needed once Azure reports the issue is resolved on their end.
MSFT is now reporting that multiple regions are experiencing latency or intermittent network connectivity issues. They have confirmed the following regions have seen intermittent impact: South Africa North, Canada Central, Korea Central, Japan East, Brazil South, US East, US East 2, US South Central. Workspot is continuing to monitor the situation with Azure and will update this page as warranted.
The Microsoft Azure team updated their status page "https://status.azure.com/en-us/status" to report that they are seeing a network issue in multiple Azure regions.
Azure has notified Workspot that starting as early as 08:00 UTC on 29 July 2022, some customers may experience latency or intermittent network connectivity issues when trying to access Azure resources in East US. Retries may work for some customers. Azure is actively investigating the issue. Workspot will continue to update our status page with updates as we receive them.
Report: "Possible Issue Connecting to VMs in Azure West US 2 region Started 17:37 UTC"
Last update: Azure has informed Workspot that the issue is resolved. If you continue to experience any connectivity issues or find your VMs rebooting unexpectedly in the West US 2 Azure region, please open a ticket with Workspot Support. This issue stands as resolved and is being closed. Thank you.
Azure has notified Workspot that customers in the West US 2 region may experience connection failures or have their VMs unexpectedly reboot. The issue was reported at 17:37 UTC. Azure is aware of the issue and is actively investigating. Workspot will continue to update our status page with updates as we receive them. Thank you.
Report: "Possible Issue Connecting to VMs in Azure EAST-US region - 19:00UTC"
Last update: This incident has been mitigated. MSFT-Azure is continuing to investigate the root cause and prevent further occurrences. Workspot did not receive any notifications from customers of any connection failures or unexpected reboots of VMs between 19:02 UTC on 02 Jun 2022 and 22:54 UTC. Workspot considers this incident resolved and closed. If you have any questions, please reach out to your Customer Success Team, and if you are experiencing any issues, please open a Support ticket through your Control Support Portal.
MSFT-AZURE is still continuing to investigate this issue. We have had no reports from customers in the EAST-US region of any issues connecting to their VMs at this time. We will continue to monitor and update you hourly unless otherwise warranted.
MSFT-Azure has notified Workspot that they are experiencing an issue in the EAST-US region ONLY where starting at 19:02 UTC a customer may experience connection failures when trying to access some Virtual Machines in the EAST US Region. These Virtual Machines may have also restarted unexpectedly. They are aware of the issue and actively exploring mitigation strategies. We will update you within 60 minutes or as we receive more information from MSFT-Azure.
Report: "Incident Detected"
Last update: This incident has been resolved.
Workspot has detected an incident. We are currently investigating and working to get the service up as soon as possible.
Azure Service Disruption Summary of impact: The impact to customers is that Control will time out when trying to manage/add/change resources in East US 2.
Impact Statement: Starting at approximately 12:25 UTC on 08 Apr 2022, customers running services in the East US 2 region may be experiencing service management errors, delays, and/or timeouts. Microsoft is investigating an underlying issue causing GET and PUT errors impacting the Azure portal itself, as well as services including Azure Virtual Machines (VMs), Virtual Machine Scale Sets (VMSS), and additional downstream services. Customers may see errors including “The network connectivity issue encountered for Microsoft.Compute cannot fulfill the request.” Finally, for some downstream services that have auto-scale enabled, this service management issue may cause data plane impact.
Current Status: Microsoft is investigating what is causing service management failures in this region. The Compute Resource Provider (CRP), which handles underlying compute resources for Virtual Machines, is currently overloaded and unable to handle the volume of requests – causing the delays, timeouts, and errors described above. Microsoft has several parallel workstreams that aim to mitigate the issue – including scaling out the Service Fabric cluster from which the CRP gateway operates, rebooting Service Fabric nodes, and working with first-party services to reduce CRP traffic from the largest internal requestors. Microsoft is prioritizing workstreams that will mitigate customer failures, but is also focused on understanding what is causing the CRP volume – including analyzing relevant memory dumps to understand the issues more deeply. The next update will be provided by 21:30 UTC, or as soon as there is an update to share.
Report: "Workspot Control is currently not accessible"
Last update: The RCA from our PaaS provider: On February 24th, 2022 between 16:25 UTC and 17:35 UTC, our customers experienced a network outage as a result of an invalid DNS configuration change to our infrastructure. A configuration update inadvertently changed a crucial set of DNS records, causing an outage for the Common Runtime's ingress path, which manifested for customer applications in the form of timeouts. Our team of on-call engineers quickly identified the root cause and applied a configuration update which quickly resolved the problem. After correcting the DNS configuration, the platform took some time to recover capacity as previously scaled-down resources scaled back up to meet demand, and eventually fully recovered. What will we do to mitigate problems like this in the future? Engineering updated the infrastructure-managing code so these unintended DNS changes can't happen again.
There have been no further issues with our PaaS provider. We are resolving this issue and will provide an RCA when we receive it.
Workspot Control Service is back online. Our PaaS provider has restored access to their Apps. We are continuing to monitor, but all services are currently restored within Control. We will provide a full RCA when we receive it. Thank you.
Workspot's PaaS provider has acknowledged it as a network connectivity issue to the Apps, and their engineers are investigating. We will continue to work with them and update as necessary. Thank you for your patience.
Workspot is currently aware there is an issue accessing Control. We are actively investigating the issue. Administrators will not be able to log in to Control, and end users will not be able to launch new non-persistent desktop sessions. Existing end-user sessions will not be impacted, and end users will be able to launch persistent desktops.
Report: "Microsoft Azure CSP subscription maintenance January 26th - January 28th"
Last update: Workspot has finished its maintenance on the Microsoft Azure CSP subscriptions.
A fix has been implemented and we are monitoring the results.
Microsoft Azure CSP subscription maintenance is scheduled between January 26 - January 28, 2022. No impact to your Workspot or Azure environment is expected; however, please contact Workspot Support if you notice any issues.
Report: "Customer Advisory: Azure Active Directory for Outdated Workspot Clients Will Fail After Jan 31, 2022"
Last update: Obsolete Workspot Clients will cease working with Azure Active Directory after January 31, 2022. Microsoft will no longer allow TLS 1.0 or TLS 1.1 connections after this date, so Clients that do not support TLS 1.2 will not allow users to sign in. Please read more here: https://workspot.zendesk.com/hc/en-us/articles/4422952319501-Customer-Advisory-Azure-Active-Directory-for-Outdated-Workspot-Clients-Will-Fail-After-January-31-2022- This only impacts customers who are currently using Azure Active Directory. If you have any questions, please reach out to your Workspot CS team members.
Report: "Azure Resource Manager - Issues with management and resource operations"
Last update: Microsoft has closed the incident as resolved. We are continuing to monitor but are closing our incident as well.
Microsoft has reported that their rollback is 75% complete. Workspot is continuing to monitor the situation and will update this incident as events warrant.
Microsoft has determined that a subset of customers may be experiencing issues, timeouts, or failures for some service management operations for services leveraging Azure Resource Manager. This could also include issues with operations attempted to manage resources or resource groups. This could result in downstream impact to other Azure services that rely on Azure Resource Manager, and Microsoft is sending notifications for these downstream services via Azure Service Health. Microsoft has identified the issue with backend role instances leveraged by Azure Resource Manager. They are currently rolling back a recent deployment as a mitigation strategy. As the rollback continues to progress, some customers and downstream services will continue to see improvements through to full recovery. Workspot is actively monitoring the situation and will update this incident as events warrant.
Report: "Azure Active Directory - experience authentication issues when attempting to access Azure, Dynamics 365, and/or MIcrosoft 365."
Last update: Summary of impact: Between 00:11 UTC and 02:25 UTC on 16 Dec 2021, customers may have experienced issues signing into Azure services. Preliminary root cause: A shared component of the Microsoft account (MSA) and Azure AD sign-in services stopped responding when a combination of a configuration error and a routine update caused multiple redundant endpoints to become unreachable. This caused sign-in failures in Microsoft services for both Microsoft personal accounts and Azure AD accounts. Mitigation: Microsoft performed a failover for authentication services to redundant, healthy infrastructure. Microsoft is analyzing performance data and trends on the affected systems to ensure that the mitigation remains in full effect. A Post Incident Report will be published within the next 48-72 hours. Please refer to https://status.azure.com/en-us/status/history/ for the report.
We are continuing to monitor for any further issues.
Microsoft performed a failover for authentication services to redundant, healthy infrastructure to mitigate the issue. Preliminary root cause: A shared component of the Microsoft account (MSA) and Azure AD sign-in services stopped responding when a combination of a configuration error and a routine update caused multiple redundant endpoints to become unreachable. This caused sign-in failures in Microsoft services for both Microsoft personal accounts and Azure AD accounts. Next steps: Microsoft is analyzing performance data and trends on the affected systems to ensure that mitigation remains in full effect. A Post Incident Report will be published within the next 48-72 hours.
Impact Statement: Starting at 12:11 UTC on Dec 16 2021, customers using Azure Active Directory or Microsoft account may experience authentication issues when attempting to access Azure, Dynamics 365, and/or Microsoft 365. Current Status: Microsoft is aware of this issue and is actively investigating. The next update will be provided in 60 minutes, or as events warrant. Please refer to https://status.azure.com/en-us/status. This message was last updated at 02:30 UTC on 16 December 2021. Please note the time in Microsoft's notification above might be incorrect.
Report: "Possible Incident at 08 DEC 2021, 02:00 AM UTC"
Last update: This incident has been resolved.
The service is operating within norms and we are monitoring the service.
AWS has mitigated the underlying issue that caused some network devices in the US-East-1 Region to be impaired. They are seeing improvement in availability across most AWS services. We continue to monitor the situation and will update once we have confirmation that there will be no negative impact to Workspot Services.
This is a proactive notification about a possible future outage. We have been notified by AWS and its service providers that some systems are degraded. https://status.aws.amazon.com/ AWS and service providers are working to resolve this issue before 08 DEC 2021, 02:00 AM UTC. If this issue is not resolved before 02:00 AM UTC, administrators will not be able to log in to Control and end users will not be able to launch new non-persistent desktop sessions. Existing end user sessions will not be impacted, and end users will be able to launch persistent desktops. At 2 PM PT, AWS reported that they have begun mitigating the issue and are seeing significant recovery, but we wanted to give our customers as much of a heads up as soon as we possibly could so they could prepare accordingly. We will continue to update this status over the next few hours until resolution.
Report: "Latency within Azure noticed - Connectivity to VMs may be impacted"
Last update: Microsoft 365 has reported they have corrected a networking configuration issue which caused latency for users accessing MSFT 365 and Azure Services in North America. They confirm the issue is resolved. Please refer to MO298793 in the Microsoft 365 admin center for more details from MSFT. Thank you.
We are seeing reports and can confirm severe latency within Azure at this time. We have opened a SEV A ticket with MSFT and are working with them to resolve the issue. We are seeing very high RTT between regions currently. All Workspot Control functionality is normal. The impact being felt is in connecting to VMs, staying connected to VMs and slow responsiveness from VMs once in them. We will update you periodically as we receive more information from Azure.
Report: "Azure Windows Virtual Machine Provisioning Issue"
Last update: Azure announced that beginning at 07:00 UTC on 13 Oct 2021, a subset of customers using Windows Virtual Machines may experience failure notifications when performing service management operations - such as create, update, delete. Deployments of new VMs and any updates to extensions may fail. Non-Windows Virtual Machines and existing running Windows Virtual Machines should not be impacted by this issue. What this means is that you won't be able to provision any new Virtual Machine through Workspot Control. Also, customers with non-persistent pools that have the refresh-periodically option enabled may see VMs fail during provisioning. This issue was resolved at approximately 12:00 UTC on 13 OCT 2021. We have worked with any impacted customers and redeployed any failed VM Desktops they had. If you find yourself with a failed VM, please redeploy it and, if it does not come back up in a healthy 'ready' state, open a ticket with Workspot Support.
Report: "Azure West US 2 Unhealthy Node Issue"
Last update: We believe the issue with the unhealthy node in Azure WEST US 2 to be resolved at this point. If you have VM Desktops in the Azure WEST US 2 region and end users are experiencing issues with them, please open a Workspot Support Ticket through the Control Support Portal. Thank you.
Azure has notified us of a hardware issue in the West US 2 region that may be impacting you. Starting at 20:54 UTC on 05 Oct 2021, you have been identified as a customer using Virtual Machines in West US 2 who may experience connection failures and increased network latency when attempting to access some Virtual Machines hosted in the region. Our monitoring has identified packet corruption between a network switch and server where your Virtual Machine(s) are deployed. We are removing the server in a degraded state from production to prevent additional deployments and further impact. Your Virtual Machine(s) remain operational, but potentially in a degraded state on the faulty hardware for 24 hours after this notification. Customers that stop then start their Virtual Machine within 24 hours of this notification will restart on healthy hardware. Virtual machines remaining on the node after 24 hours will be live migrated to a healthy node. What this means for you is that if your end users are experiencing a lot of latency in their connection, or they are being disconnected or are unable to connect to their VM Desktop as usual, then please use the redeploy action in the Pools page in Control. This will move the VM Desktop to a new 'healthy' host/node. End users who are on the identified unhealthy host may not experience the issues described. Azure will automatically move those desktops to a healthy host at approximately 20:00 UTC (1 PM PT) today, Wednesday, 06 OCT 2021. An exact time cannot be provided. If you have any questions, please feel free to reach out to your Customer Success Team member. Thanks, Workspot Support
Report: "Azure Connectivity Issue in North American Regions"
Last update: Between 1345 UTC - 1405 UTC and 1615 UTC - 1635 UTC on August 2nd, customers who were attempting to connect to Azure services may have experienced latency, timeouts, and/or connectivity issues. This may have impacted customers in East US, East US 2, West Central US, Central US, Canada East, Canada Central, North Central US, West US, West US 2, and Brazil South. Azure has reported that a network configuration change caused connection traffic to flow into a single edge location; due to traffic moving into the single edge location, requests to connect to Azure services experienced latency and timeouts. Azure has reported that they have reverted the network configuration change, which restored successful connections to Azure services. The issue is mitigated and no longer impacting customers.
Report: "Workspot Control Email Services Incident"
Last update: At 4:34 PM PDT, the Workspot Control Service was restarted to enable the security update from our email provider. At 4:35 PM PDT, the Workspot Control Service was online, and email from Control was tested and proven to be sending as expected. This issue is now considered resolved.
Starting at 8:10AM PDT on May 19th, it was discovered that emails were not being delivered from the Workspot service due to a security update at our email provider. This impacted all emails from our service: device activations, First Time User onboarding, administrator activation, alerts and scheduled reports. We have tested a fix and will be updating our service at 4:30PM PDT today. At that time, there might be a slight delay in response to service requests, however we expect the service to be fully operational by 4:35PM PDT. Further updates will be forthcoming. If you have any questions or have issues other than what is described above, please open a Support Ticket through Workspot Control.
Report: "Incident Detected"
Last update: DNS issues have been fully resolved, all services are normal, and all Workspot resources are reachable and have been resolving as usual for the past 120 minutes.
Our External DNS Provider has confirmed that the underlying DNS outage has been mitigated. They are validating recovery now. All services should be resolving and you should be able to connect to workspot.com services as usual. We will update once completely resolved.
External DNS provider has rerouted and we haven't seen any service alerts for the past 60 minutes. We are still monitoring. We will update if there are any changes or issue is resolved.
Our external DNS provider has rerouted their DNS and is noticing improvements in resolution. We will continue to monitor and update in an hour unless it is resolved earlier.
The issue resides with an external DNS service provider, and we will update the status in one hour.
Workspot has detected an incident with DNS resolving *.workspot.com services. You may see intermittent ability to connect to Control and API service. We are currently investigating and working to get the service up as soon as possible.
Report: "Control DNS Incident Detected"
Last update: This incident has been resolved.
Our DNS service provider confirmed the issue is resolved and we have not noticed any issues since 00:40 UTC. We continue to monitor.
A fix has been implemented and we are monitoring the results.
We are experiencing intermittent DNS issues at our 3rd party DNS service provider. Admins may not be able to log in to Control, and users may not be able to launch non-persistent Desktops or RD Pool Apps. We are working with our service provider. The next update will be at 1:00 AM UTC.
Report: "Azure North America"
Last update: Microsoft reports issues are resolved.
Microsoft reports the issues are mitigated and they are monitoring.
Microsoft has identified the issue and is applying mitigation.
Microsoft continues to investigate the Azure issue.
Customers using Azure in North America may experience issues connecting to resources. See https://status.azure.com/en-us/status for more.
Report: "Zendesk Ticketing"
Last update: This incident has been resolved. Tickets may be opened via Control again.
We are continuing to investigate this issue.
We are experiencing issues with our 3rd party support ticketing provider Zendesk. You may receive a "page doesn't exist" error when trying to use the support link from within Control. All Workspot services are up and fully functional at this time, this issue only affects using the ticketing system to open a ticket with Workspot. If you need support during this time, please reach out to your CSPM or CSE.
Report: "Possible Incident at 26 Nov 2020, 02:00 AM UTC"
Last update: AWS has fully mitigated the impact to their affected subsystems and expects full recovery over the next few hours. https://status.aws.amazon.com/ Workspot has been testing and monitoring our internal redundant systems since the beginning of this notification, and our service was never degraded during this period. Workspot has restarted multiple redundant systems inside Workspot Control and all systems are performing as expected. We will continue to monitor AWS's progress and will provide updates if any new information develops. Guidance: At this time, there are no restrictions to normal usage.
AWS is showing signs of recovery, but at this time we expect some level of service degradation starting at 02:00 AM UTC.
Guidance: Until the issue is resolved, please do not log in to Workspot Control or the API Service to make any changes - add, delete, change configurations - until the AWS issue is resolved.
What to expect in the next few hours if AWS has not completed recovery by 02:00 AM UTC:
02:00 - 03:30 AM UTC: Admins will see longer than normal login times or timeouts when accessing Workspot Control and API services. Users accessing non-persistent desktops, applications, HTML5 clients, and kiosk mode Clients will see longer than normal login times and possible login timeouts. Scheduled tasks such as backups and refreshing non-persistent pools on a schedule will not complete. Persistent desktop users accessing desktops will not be impacted.
03:30 - recovery: Admins will not be able to access Workspot Control and API services. Users will not be able to access non-persistent desktops, applications, HTML5 clients, and Clients set to kiosk mode. Persistent desktop users accessing desktops will not be impacted.
We are continuing to monitor this impact on AWS - https://status.aws.amazon.com/ - and will update as new information develops. Next update is in 30 minutes.
We are continuing to monitor for any further issues.
We are continuing to monitor this impact on AWS- https://status.aws.amazon.com/ - and will update as new information develops.
This is a proactive notification about a possible future outage. We have been notified by AWS and its service providers that some systems are degraded. https://status.aws.amazon.com/ AWS and service providers are working to resolve this issue before 26 Nov 2020, 02:00 AM UTC. If this issue is not resolved before 02:00 AM UTC, administrators will not be able to log in to Control and end users will not be able to launch new non-persistent desktop sessions. Existing end user sessions will not be impacted, and end users will be able to launch persistent desktops. We will continue to update this status over the next few hours until resolution.
Report: "Incident Detected"
Last update: This incident has been resolved. You may see some individual user issues; please follow the steps previously provided to ensure they are able to connect.
We have identified the issue. Yesterday, there was an inadvertent loss of DNS records, which was recovered immediately. If you have users still impacted, the cause may be DNS propagation time for their individual ISP, a record cached in the user's home router, or the user may be using an old Workspot Windows Client (pre-3.5.x). Remedies: If the user has an old Workspot Client, upgrade them to the latest version (v3.6.x). Router cache - have the user reboot their home router. ISP DNS propagation - use a public DNS. If you have a standard process to help your end users flush their DNS cache, please follow it. If not, please open a ticket and we will assist with step-by-step instructions. If you are experiencing other issues, please open a ticket.
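As a purely illustrative diagnostic aid (not part of Workspot's documented remediation steps; the hostname below is a placeholder to be replaced with the actual *.workspot.com host you are troubleshooting), a quick Python check of whether a name currently resolves from a given machine might look like this:

    import socket

    def resolves(hostname):
        """Return True if the hostname currently resolves via the system DNS resolver."""
        try:
            socket.getaddrinfo(hostname, 443)
            return True
        except socket.gaierror:
            return False

    # Placeholder hostname for illustration only
    print(resolves("control.workspot.com"))

Running a check like this before and after flushing the local DNS cache or switching to a public resolver can help narrow down whether the problem is local caching or upstream propagation.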
Workspot has detected an incident affecting some customers where end users may not be able to log in to their desktops due to a DNS resolution issue. We are currently investigating and working to get the service up as soon as possible.
Report: "Incident Detected"
Last update: This incident has been resolved (0104 UTC). The DNS records have been restored. Users will be able to log in over the next 5 mins as DNS is updated globally.
The incident has been identified and mitigated. Users will be able to log in to their desktops as replication occurs. We expect approximately 30 minutes for full global replication.
Workspot has detected an incident. We are currently investigating and working to get the service up as soon as possible. Some End Users may not be able to log into their desktops.
Report: "Azure EastUS Latency"
Last update: This incident has been resolved.
Workspot is aware of latency and connectivity issues in the EASTUS region. We are working with Azure to mitigate this and will update with more information as needed.
Report: "Courtesy notification ~ Azure Active Directory"
Last update: This incident has been resolved.
Workspot has confirmed with Microsoft that all AAD authentication may be inconsistent. This includes: AAD onboarding, AAD client, AAD HTML5, and AAD Control authentication, as well as your Office 365 services. We will be updating as Microsoft provides us updates. See Azure Status (https://status.azure.com/en-us/status) for more.
Report: "User may see "Generic Activation Error"."
Last update: This issue is resolved.
This issue affects customers using AAD authentication with Workspot. Workspot has confirmed with Microsoft that all AAD authentication may be inconsistent. This includes: AAD onboarding, AAD client, AAD HTML5, and AAD Control authentication. We will update as Microsoft provides us updates.
We are currently investigating this issue.
Report: "MS Azure SouthUK and SouthCentralUS"
Last update: Microsoft has fully resolved the issues in SouthUK and SouthCentralUS.
Microsoft reports the issues in SouthUK are resolved and services fully restored. In SouthCentralUS, MS reports hardware remediation has been completed. We are continuing to monitor.
Microsoft continues to work the issues in both Azure Regions. In SouthUK, the majority of service restoration is completed. See status.azure.com for the latest updates. In SouthCentralUS, hardware remediation is continuing to bring nodes back online.
Microsoft continues to work the issues in both Azure Regions. SouthCentralUS issue has been identified as a hardware failure and MS is continuing to work to remediate the affected hardware. A cooling failure was identified as the cause for the issues in SouthUK. Remediation work is ongoing. Additional information available at status.azure.com.
Microsoft continues to work the issues in both Azure regions. Remediation in SouthCentralUS continues. SouthUK customers are beginning to see restoration of normal performance.
Please be advised that Microsoft Azure is currently experiencing two issues: 1) a power issue in SouthUK Region, and 2) a Capacity Issue in SouthCentralUS that may be impacting our customers ability to access or refresh VMs in these two regions. See https://status.azure.com/en-us/status for more.
Report: "Incident Detected"
Last update: This incident has been resolved.
The team has been monitoring the stability of the network routing for 24 hours. We feel confident that this issue is resolved.
An update for the outage detected at 15:02 UTC: this was due to a critical security upgrade to the DB, impacting users and admins logging in during those two minutes. We are continuing to monitor the stability of the network routing from our platform network provider. This notice will be updated as we gain new information.
There was a momentary outage detected at 15:02 UTC. We are continuing to monitor the stability of the network routing from our platform network provider. This notice will be updated as we gain new information.
Our Engineering team has detected that routing has now been stabilized. We're currently monitoring the situation and digging deeper into the root cause of this disruption. We will continue to leave this notice up until we're certain the routing is stable.
Report: "Incident Detected"
Last update: This incident has been resolved.
Our Engineering team has detected that routing has now stabilized. We're currently monitoring the situation and digging deeper into the root cause of this disruption. We will continue to leave this notice up until we're certain the routing is stable.
The service is up. Logins to non-persistent resources and Control admin console may be slower than normal. We are continuing to investigate this and will update shortly.
Workspot has detected an incident. We are currently investigating and working to get the service up as soon as possible.
Report: "Incident Detected"
Last update: Our investigation discovered performance issues with the IaaS provider that resulted in intermittent production service downtime from around 11:00 to 13:30 UTC on December 16, 2019. The root cause was addressed during the maintenance window on December 21, 2019 (UTC).
Workspot detected an incident at 1100 UTC. The service was restored at 1330 UTC. We are currently investigating.
Report: "Investigating"
Last update: This incident has been resolved.
Workspot has detected an incident. We are currently investigating and working to get the service up as soon as possible.
Report: "Workspot has detected an incident. We are currently investigating and working to get the service up as soon as possible."
Last update: Workspot detected an incident between 4:15 AM UTC and 7:04 AM UTC on September 2, 2019. The incident prevented admins from logging in to Workspot Control and end users from connecting to new non-persistent sessions or logging in to clients configured in kiosk mode. Customers in Asia Pacific time zones were impacted. Existing end user sessions and connections to persistent desktops were not impacted.
We are continuing to monitor for any further issues.
Workspot has resolved the incident, and the service has been up and running since 12:04 AM Pacific.
A fix has been implemented and we are monitoring the results.
Admins may not be able to log in to Workspot Control, and end users may not be able to launch new connections to non-persistent desktops. There is no impact on any existing sessions or end users connecting to persistent desktops.
Report: "DB Failover - PaaS provider detected hardware failure and initiated DB failover Workspot Service was down for 41 minutes"
Last update: Resolved - DB failover successful. During the DB failover, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops.
Report: "Maintenance Window - Workspot Service was down for 5 minutes"
Last update: Maintenance Window - Workspot Service was down for 5 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops.
Report: "Maintenance Window - Workspot Service was down for 2 minutes"
Last update: Maintenance Window - Workspot Service was down for 2 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops.
Report: "Maintenance Window - Workspot Service was down for 8 minutes"
Last update: Maintenance Window - Workspot Service was down for 8 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops. CM-506
Report: "Maintenance Window - Workspot Service was down for 3 minutes"
Last update: Maintenance Window - Workspot Service was down for 3 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops. CM-404
Report: "Maintenance Window - Workspot Service was down for 6 minutes"
Last update: Maintenance Window - Workspot Service was down for 6 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops. CM-455
Report: "Maintenance Window - Workspot Service was down for 12 minutes"
Last update: Maintenance Window - Workspot Service was down for 12 minutes. During this maintenance window, admins were not able to log in to Workspot Control and end users could not launch new connections to non-persistent desktops. There was no impact on any existing sessions or end users connecting to persistent desktops. CM-496
Report: "Control not reporting events"
Last update: This incident has been resolved.
The issue has been resolved.
We have isolated the issue and are exploring solutions.
Issue with Control not reporting all events. Some customers are experiencing issues logging in to Control.