Splunk Observability Cloud US0

Is Splunk Observability Cloud US0 Down Right Now? Check whether an outage is currently ongoing.

Splunk Observability Cloud US0 is currently Operational

Last checked from Splunk Observability Cloud US0's official status page
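
For a quick programmatic check, the status can also be polled over HTTP. The sketch below assumes the status page is hosted on Statuspage and that its JSON summary lives at the URL shown; both the host and the endpoint are assumptions to adjust against the real status page.

```python
# Minimal sketch: poll a Statuspage-style JSON endpoint for the overall
# status indicator. The URL is an assumption about where the Splunk
# Observability Cloud status page is hosted; substitute the real host.
import requests

STATUS_URL = "https://status.signalfx.com/api/v2/status.json"  # assumed host

def current_status() -> str:
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    status = resp.json()["status"]
    # Statuspage summaries report an indicator ("none", "minor", "major",
    # "critical") alongside a human-readable description.
    return f"{status['indicator']}: {status['description']}"

if __name__ == "__main__":
    print(current_status())  # e.g. "none: All Systems Operational"
```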

Historical record of incidents for Splunk Observability Cloud US0

Report: "Splunk APM Trace Data Ingestion Delayed"

Last update
resolved

This incident has been resolved.

identified

A degradation in the performance of the Splunk APM data ingestion pipeline is causing the processing and storage of raw trace data to be delayed by more than five minutes. No data is being lost at this time, and MetricSets are not impacted, but the most recent data may not be available in trace search results.

Report: "Degraded performance accessing the Splunk APM Interface"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are investigating a degradation in performance while using the Splunk APM Interface. Parts of the Splunk APM Troubleshooting experience, such as the service map, map breakdowns, Tag Spotlight, and charts within Splunk APM, may be impacted. Trace data ingest is not impacted.

Report: "Intermittent login failures for customers using Unified identity"

Last update
resolved

This incident has been resolved.

investigating

Customers using Unified Identity may experience intermittent failures while logging into the Splunk Observability Cloud web interface. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Elevated Error Rate from the Splunk APM API"

Last update
resolved

We investigated and resolved an issue with an elevated error rate from the Splunk APM API between 11:54 AM PT and 12:24 PM PT. Trace data ingest was not impacted.

Report: "Elevated Error Rate from the Splunk APM API"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are investigating an elevated rate of errors occurring while interacting with the Splunk APM API. Trace data ingest is not impacted.

Report: "Elevated Error Rate from the Splunk APM API"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are investigating an elevated rate of errors occurring while interacting with the Splunk APM API. Trace data ingest is not impacted.

Report: "Some metric datapoints not being accepted at ingest"

Last update
resolved

This incident has been resolved.

monitoring

The fixes have taken effect as of 10 minutes ago; we are continuing to monitor for any further issues.

monitoring

We have implemented fixes and now observe that fewer than 0.01% of metric datapoints are not being accepted at ingest, with the rate trending down. We are continuing to monitor the fixes as they take effect.

identified

We've identified an issue that is preventing approximately 0.1% of metric datapoints from being accepted at ingest, starting at approximately 3:15pm PT. This may result in false alerts and missing metric data in charts and detectors. We are currently implementing fixes to remediate this issue.
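
Incidents like this are a reason for ingest clients to treat rejections as transient. The following is a minimal sketch, assuming the standard /v2/datapoint ingest API for the US0 realm; the token, metric name, and retry policy are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: send a gauge datapoint to the US0 ingest endpoint and
# retry with backoff, so a transient ingest-side rejection (as in this
# incident) is not silently lost. Token and metric name are placeholders.
import time
import requests

INGEST_URL = "https://ingest.us0.signalfx.com/v2/datapoint"
ACCESS_TOKEN = "YOUR_ORG_ACCESS_TOKEN"  # placeholder

def send_gauge(metric: str, value: float, retries: int = 3) -> None:
    payload = {"gauge": [{"metric": metric, "value": value}]}
    headers = {"X-SF-Token": ACCESS_TOKEN}
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(INGEST_URL, json=payload,
                                 headers=headers, timeout=10)
            resp.raise_for_status()
            return  # datapoint accepted
        except requests.RequestException:
            if attempt == retries:
                raise  # surface the failure after the last attempt
            time.sleep(2 ** attempt)  # simple exponential backoff

send_gauge("example.queue_depth", 42.0)
```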

Report: "Splunk Observability AI Assistant Partially Unavailable"

Last update
resolved

Some chat messages in the Splunk Observability AI Assistant may have been dropped or slow to answer between 14:54 PT and 16:42 PT.

Report: "Splunk APM Monitoring, Troubleshooting MetricSets and Trace Data are Delayed"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

A degradation in the performance of a key backend component of Splunk APM is causing Troubleshooting MetricSets, Monitoring MetricSets, and Trace Data to be delayed by more than five minutes. No data is being dropped at this time, but data for the APM Troubleshooting page, the Tag Spotlight experience, Trace Analyzer and other APM pages, as well as metrics created from traces and APM detectors, are all delayed.

Report: "Splunk Synthetics Google Chrome Upgrade"

Last update
Scheduled

Splunk Synthetic Monitoring will update Google Chrome and Chromium to version 135.0.7049.84-1 for Browser tests on 4/22 at 8 am PST. We periodically auto-update to newer versions of Google Chrome/Chromium when available. Due to differences between browser versions, Synthetics test behavior or timings can sometimes change and may require updates to your configured steps.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Report: "Log Observer Connect Search Failures"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We're experiencing failures for searches related to Log Observer. We will provide an update as soon as possible.

Report: "UI unavailable"

Last update
resolved

The UI was unavailable from 11:27 to 11:44 PT. Ingest and processing were not impacted. The issue has been resolved.

Report: "Unable to fetch Splunk RUM sessions"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "Splunk APM Trace Data Ingestion Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the performance of the Splunk APM data ingestion pipeline is causing the processing and storage of raw trace data to be delayed by more than five minutes. No data is being lost at this time, and MetricSets are not impacted, but the most recent data may not be available in trace search results.

Report: "Elevated Error Rate accessing Metrics Usage Analytics"

Last update
resolved

This incident has been resolved.

monitoring

The Usage Analytics interface has now recovered and we are continuing to monitor.

identified

We are experiencing an elevated rate of errors occurring while using the Metrics Usage Analytics interface. All other product interfaces are working as expected. The issue has been identified and is being actively worked on.

Report: "Delays in Sending Alerts"

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

We’ve identified that some detectors are sending delayed alerts. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Splunk Observability AI Assistant may be slow to respond."

Last update
resolved

This incident has been resolved.

investigating

Splunk Observability AI Assistant may be slow to respond. We are currently investigating the issue.

Report: "Splunk Synthetics runners are unavailable in the Stockholm region"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

We are seeing signs of recovery and are continuing to monitor.

identified

Splunk Synthetics runners are unavailable in the Stockholm region due to an AWS EC2 instance outage in the eu-north-1 region.

Report: "Metric Finder and Chart Creation Auto-completion Are Not Functioning"

Last update
resolved

This incident has been resolved.

investigating

Datapoint ingest and other functions are not affected. We are investigating this issue and will provide an update shortly.

Report: "Charts and Detectors are not working"

Last update
resolved

This incident is resolved and all systems are operational again.

monitoring

Charts, APM, and Synthetics monitoring have recovered; we are continuing to monitor alerting.

monitoring

A fix has been implemented and we are monitoring the results. Please expect delayed alerts from the last 2-hour period and exercise caution when handling alerts. We will post an update when the alerts are fully caught up.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

Customers may be experiencing issues in charts and detectors. Detectors are not firing and charts are not showing data. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Events datapoints being dropped"

Last update
resolved

Events datapoint ingest was affected and we dropped datapoints for 20 minutes. A fix has been deployed and this issue is now resolved.

Report: "New charts and detectors failing to start"

Last update
resolved

New charts and detectors failed to start in us0 from 12:49 PM PST to 1:02 PM PST.

Report: "Splunk APM Troubleshooting MetricSets were corrupted between 12:50 PM to 3:50 PM Pacific Time"

Last update
resolved

A degradation in the performance of the Splunk APM trace processing pipeline caused Troubleshooting MetricSets to be corrupted between 12:50 PM and 3:50 PM Pacific Time; the corrupted MetricSets represented only a portion of the traffic and activity they are meant to measure on our customers’ services, endpoints, traces and workflows. Trace data ingest was not impacted and Monitoring MetricSets remained accurate, but the APM Troubleshooting experience, service maps and Tag Spotlight may have shown incomplete and inaccurate data.

Report: "RUM processing lag"

Last update
resolved

RUM experienced a lag in processing data. This has been resolved.

Report: "Splunk RUM Sessions Are Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate this issue.

investigating

A degradation in the performance of the Splunk RUM trace data ingestion pipeline is causing the processing and storage of sessions to be delayed by more than five minutes. No data is being lost at this time, but the most recent data may not be available.

Report: "We have issues ingesting internal organization metrics"

Last update
resolved

This incident has been resolved.

investigating

We are investigating an issue where we have dropped some internal organization metrics. This affects only a small number of metrics, and only internal organization metrics.

Report: "Unified Identity SSO is broken on eu0."

Last update
resolved

This incident has been resolved.

investigating

Unified Identity SSO has been broken for all customers since 2:01 PM PT on October 6th. This only impacts customers using Splunk Cloud SSO (Unified Identity); it does not impact customers using other login methods, and does not impact ingest. We have identified a fix and are monitoring the issue.

Report: "Splunk APM Trace Monitoring, Troubleshooting and Trace Ingestion MetricSets Delayed"

Last update
resolved

This incident has been resolved.

investigating

A degradation in the performance of the Splunk APM trace processing pipeline is causing Troubleshooting MetricSets to be delayed by more than five minutes. As a result, the APM Troubleshooting experience, service maps and Tag Spotlight do not have access to the most recent data.

A degradation in the performance of the Splunk APM metrics processing pipeline is causing Monitoring MetricSets to be delayed by more than five minutes. Service, endpoint, and workflow dashboards, and other charts and detectors built from Monitoring MetricSets, are impacted.

A degradation in the performance of the Splunk APM data ingestion pipeline is causing the processing and storage of raw trace data to be delayed by more than five minutes. The processing of metrics for Business Workflows, which also depends on this pipeline, is equally delayed.

Trace data ingest is not impacted at this time; service-level and endpoint-level Monitoring MetricSets and the detectors built from them are also not impacted.

Report: "Datapoints are Being Dropped intermittently"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

Datapoint ingest is affected and we are dropping approximately 5% of the datapoints. We are investigating and will provide regular updates.

Report: "Charts and Detectors Delayed for Aggregated Metrics"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

Aggregated metrics are delayed. Customers may be experiencing delays in some charts and detectors running off of aggregated metrics. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Datapoints Being Dropped"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate the issue.

investigating

We are continuing to investigate the issue.

investigating

Datapoint ingest is affected and we are dropping datapoints. We are investigating and will provide an update every 15 mins.

Report: "MetricTimeSeries Creation Delayed"

Last update
resolved

Starting at 1:40pm PT, a small number of customers may have experienced a delay of approximately 5 minutes in seeing newly created MetricTimeSeries. Datapoint ingest was not affected.

Report: "MetricTimeSeries Creation Delayed"

Last update
resolved

This incident has been resolved.

identified

Customers may experience delays in seeing newly created MetricTimeSeries. Datapoint ingest for existing time series is not affected. We are investigating and will provide an update shortly.

Report: "Splunk Synthetic Monitoring updated Google Chrome to version 125"

Last update
resolved

Splunk Synthetic Monitoring updated Google Chrome to version 125 for Browser tests on July 18 at 12:30 PM EDT. We periodically auto-update to newer versions of Google Chrome when available. Due to differences between browser versions, test behavior or timings can sometimes change and may require updates to your test steps.

Report: "Splunk Log Observer Interface Unavailable"

Last update
resolved

This incident has been resolved.

monitoring

Splunk Log Observer Interface is now operational.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate the major outage of the Splunk Log Observer web application.

investigating

We are continuing to investigate the major outage of the Splunk Log Observer web application.

investigating

We are continuing to investigate the major outage of the Splunk Log Observer web application. Correction from previous message: Log data ingest is not impacted at this time.

investigating

We are continuing to investigate the major outage of log data ingest.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are investigating a major outage of the Splunk Log Observer web application. Log data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Delayed metrics from GCP"

Last update
resolved

The issue with Cloud Monitoring metrics has been resolved for all affected users as of Monday, 2024-07-15 10:52 US/Pacific.

identified

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped. This is due to a GCP issue. See status page here: https://status.cloud.google.com/incidents/ERzzrJqeGR2GCW51XKFv

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped.

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed.

Report: "Some Splunk Observability IMM customers may be seeing some out of date information."

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Some Splunk Observability IMM customers may see out of date information for the process table rendered under the host view. We are currently investigating the issue.

Report: "Splunk RUM Session data Being Dropped"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

A degradation of the ingest path for Splunk RUM is causing session data to be dropped and lost. We are investigating the issue and will provide an update as soon as possible.

Report: "Charts not Loading for Some Customers"

Last update
resolved

Between 3:34 AM and 3:44 AM PDT, charts for a subset of customers might not have been loading. Datapoint ingest was not affected. The issue has been resolved.

Report: "Login is broken for Unified Identity Customers"

Last update
resolved

This incident has been resolved. Login is working as expected.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The root cause has been identified and a fix is being implemented.

investigating

Unified Identity SSO has been broken for all customers since 9:45 PM PT on April 18th. This only impacts customers using Splunk Cloud SSO (Unified Identity); it does not impact customers using other login methods, and does not impact ingest. We are currently investigating the issue.

Report: "Charts not Loading for Some Customers"

Last update
resolved

Some charts for a subset of customers may not have been loading between 1:02am and 1:09am, and again between 2:25am and 2:29am PDT. Datapoint ingest was not affected. This incident is now resolved.

Report: "Splunk APM Exemplar Search errors out"

Last update
resolved

We experienced a degradation in the availability of trace exemplars. When clicking on charts to search for trace exemplars, results may not have been available, and it may not have been possible to view individual traces. This has been resolved.

Report: "MetricTimeSeries Creation Delayed"

Last update
resolved

Customers may have experienced delays in seeing newly created MetricTimeSeries between 1:46pm and 2:06pm UTC. Datapoint ingest was not affected.

Report: "Splunk Observability Cloud Web Interface Unavailable"

Last update
resolved

The Splunk Observability Cloud web application was unavailable between 1:16pm PST and 1:37pm PST. Data ingest and alerting were not impacted.

Report: "The Observability Cloud web interface and API access are not available"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating a major outage for Splunk Observability applications.

Report: "Alerts being sent for some detector rules previously disabled via API"

Last update
resolved

This incident has been resolved.

identified

We have identified a bug that is causing alerts to be sent for detector rules that have been previously disabled via the API. This issue does not impact detectors that have rules disabled via the UI. Starting January 18, some detectors that had rules disabled via the API (such as through Terraform) continued to send alerts. We are currently rolling out a fix to production systems and expect to resolve the issue within the next 3 hours.
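
For context, a rule disabled "via the API" means the detector model was updated programmatically rather than through the UI. Below is a minimal sketch of such an update, assuming the public v2 detector endpoint for the US0 realm; the detector ID, token, and rule label are placeholders, and field names should be verified against the current API reference.

```python
# Minimal sketch: fetch a detector, mark one rule disabled, and write the
# model back. This mirrors the kind of API-driven change the incident
# refers to; ID, token, and rule label below are placeholders.
import requests

API_BASE = "https://api.us0.signalfx.com/v2/detector"
HEADERS = {"X-SF-Token": "YOUR_API_TOKEN"}  # placeholder token

def disable_rule(detector_id: str, rule_label: str) -> None:
    resp = requests.get(f"{API_BASE}/{detector_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    detector = resp.json()
    for rule in detector.get("rules", []):
        if rule.get("detectLabel") == rule_label:
            rule["disabled"] = True  # the flag at issue in this incident
    put = requests.put(f"{API_BASE}/{detector_id}", headers=HEADERS,
                       json=detector, timeout=10)
    put.raise_for_status()

disable_rule("YOUR_DETECTOR_ID", "High error rate")
```

Terraform-managed detectors (mentioned in the update above) drive the same API underneath, which is why rules disabled that way were affected.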

Report: "Alert Modal not displaying plot lines"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We’re investigating an issue with the Alert Modal chart not displaying plots. Only charts within Alert Modals are impacted. Alerting, notification, and general charting is not impacted.

Report: "IMM data points & metadata is down for Azure"

Last update
resolved

This incident has been fully resolved.

identified

Azure has been experiencing an outage since 5:57 pm PST, which is impacting IMM data points & metadata. There is no impact on traces and sessions.

Report: "Synthetics - Degraded processing rate with run count."

Last update
resolved

Certain scheduled synthetics runs may not have been processed during this time. This has now been resolved and the system should be back to normal.

investigating

We are continuing to investigate this issue.

investigating

We are currently experiencing reduced run counts and are working to remediate the issue.

Report: "Splunk APM Interface is experiencing partial degradation"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

Users may notice degraded performance when using the Trace details page. There is no impact on ingest. We will provide an update as soon as possible.

Report: "MetricTimeSeries Creation Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and remediation is in progress.

investigating

We are continuing to investigate this issue.

investigating

Customers may experience delays in seeing newly created MetricTimeSeries. The datapoint ingest API is also experiencing high error rates. We are investigating and will provide an update shortly.