Splunk Observability Cloud AU0

Is Splunk Observability Cloud AU0 Down Right Now? Check whether an outage is currently ongoing.

Splunk Observability Cloud AU0 is currently Operational

Last checked from Splunk Observability Cloud AU0's official status page

Historical record of incidents for Splunk Observability Cloud AU0

Report: "Splunk Synthetics Google Chrome Upgrade"

Last update
Scheduled

Splunk Synthetic Monitoring will update Google Chrome and Chromium to version 135.0.7049.84-1 for Browser tests on 4/22 at 8 am PST. We periodically auto-update to newer versions of Google Chrome/Chromium when available. Due to differences between browser versions, Synthetics test behavior or timings can sometimes change and may require updates to your configured steps.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Report: "Log Observer Connect Search Failures"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We're experiencing failures for searches related to Log Observer. We will provide an update as soon as possible.

Report: "Log Observer Connect Search Failures"

Last update
Identified

The issue has been identified and a fix is being implemented.

Investigating

We're experiencing failures for searches related to Log observer. We will provide an update as soon as possible.

Report: "Unable to fetch Splunk RUM sessions"

Last update
resolved

The issue has been resolved.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "Unable to fetch Splunk RUM sessions"

Last update
Update

We are continuing to investigate this issue.

Update

We are continuing to investigate this issue.

Investigating

We are currently investigating this issue.

Report: "Charts Not Loading"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Charts for some customers may not be loading. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

Charts for a subset of customers may be loading slowly. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Splunk Synthetics runners are unavailable in the Stockholm region"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

We are seeing signs of recovery and are continuing to monitor.

investigating

Splunk Synthetics runners are unavailable in the Stockholm region due to an AWS EC2 instance outage in the eu-north-1 region.

Report: "Splunk APM Monitoring MetricSets delayed"

Last update
resolved

A degradation in the performance of the Splunk APM metrics processing pipeline caused Monitoring MetricSets to be delayed by more than five minutes. Trace data ingest was not impacted. Dashboards and detectors built from Monitoring MetricSets for services and endpoints were impacted. This incident occurred between 14:24 and 14:34 PST on Feb 10.

Report: "Splunk APM Trace Data Ingestion Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

A degradation in the performance of the Splunk APM data ingestion pipeline is causing the processing and storage of raw trace data to be delayed by more than five minutes. No data is being lost at this time and MetricSets are not impacted, but the most recent data may not be available in trace search results.

Report: "Splunk RUM Sessions Delayed"

Last update
resolved

A degradation in the performance of the Splunk RUM trace data ingestion pipeline caused the processing and storage of sessions to be delayed by more than five minutes. No data was lost. The issue was resolved at 5:38 PM PT.

Report: "Splunk Observability Cloud API and Web Interface is slow"

Last update
resolved

This incident has been resolved.

monitoring

Performance is back to normal. We are monitoring the issue.

investigating

Splunk Observability Cloud API and Web Interface have degraded performance. We are currently investigating the root cause.

Report: "Splunk APM Monitoring MetricSets delayed"

Last update
resolved

This incident has been resolved.

investigating

A degradation in the performance of the Splunk APM metrics processing pipeline is causing Monitoring MetricSets to be delayed by more than five minutes. Trace data ingest is not impacted, but service, endpoint and workflow dashboards, and other charts and detectors built from Monitoring MetricSets are impacted.

Report: "Splunk Synthetic Monitoring updated Google Chrome to version 125"

Last update
resolved

Splunk Synthetic Monitoring updated Google Chrome to version 125 for Browser tests on July 18 at 12:30 PM EDT. We periodically auto-update to newer versions of Google Chrome when available. Due to differences between browser versions, test behavior or timings can sometimes change and may require updates to your test steps.

Report: "Delayed metrics from GCP"

Last update
resolved

The issue with Cloud Monitoring metrics has been resolved for all affected users as of Monday, 2024-07-15 10:52 US/Pacific.

identified

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped. This is due to a GCP issue. See status page here: https://status.cloud.google.com/incidents/ERzzrJqeGR2GCW51XKFv

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped.

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed.

Report: "Splunk Observability Cloud Web Interface Unavailable"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are investigating a major outage of the Splunk Observability Cloud web application. Data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Log Observer Connect connections to Splunk Enterprise Cloud stacks may be failing intermittently."

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

Log Observer Connect connections to Splunk Enterprise Cloud stacks may be failing intermittently.

Report: "Email alert notifications not available to all customers"

Last update
resolved

Email alert notifications were not available to all customers. The issue was resolved by 1:50 AM Pacific (8:50 AM UTC).

Report: "Splunk Log Observer Interface Unavailable"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are investigating a major outage of the Splunk Log Observer web application. Log data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Splunk APM Exemplar Search errors out"

Last update
resolved

We experienced a degradation in the availability of trace exemplars. When clicking on charts to search for trace exemplars, results may not have been available, and it may not have been possible to view individual traces.

Report: "Intermittent Ingestion Failures"

Last update
resolved

This incident has been resolved.

monitoring

We are no longer seeing the errors and are catching up on datapoints.

identified

We have identified the issue and are working to resolve it.

investigating

We are continuing to investigate the issue and are close to identifying the root cause.

investigating

We are currently unable to ingest data from AWS, GCP, and Azure and are actively working on the issue.

Report: "Splunk Observability Cloud Web Interface Unavailable"

Last update
resolved

The Splunk Observability Cloud web application was unavailable between 21:14 UTC and 21:37 UTC. Data ingest and alerting were not impacted.

Report: "Splunk APM Troubleshooting and Monitoring MetricSets Delayed"

Last update
resolved

This incident has been resolved.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the performance of the Splunk APM trace processing pipeline is causing Troubleshooting and Monitoring MetricSets to be delayed by more than five minutes. As a result, the APM Troubleshooting experience, service maps, and Tag Spotlight do not have access to the most recent data. The processing of metrics for Business Workflows, which also depends on this pipeline, is equally delayed. Service, endpoint, and workflow dashboards, and other charts and detectors built from Monitoring MetricSets are impacted.

Report: "Configuration analysis of Splunk APM MetricSets was failing"

Last update
resolved

A bug was preventing the creation of new MetricSets configurations and the disabling or modification of existing MetricSets configurations. The computation of existing MetricSets was not affected.

Report: "Splunk Observability Cloud Web Interface Unavailable"

Last update
resolved

The Splunk Observability Cloud web application was not available from 2:12 AM UTC to 2:21 AM UTC on February 16, 2024. Data ingest was not impacted. The issue has been resolved.

Report: "Alerts being sent for some detector rules previously disabled via API"

Last update
resolved

This incident has been resolved.

identified

We have identified a bug that is causing alerts to be sent for detector rules that have been previously disabled via the API. This issue does not impact detectors that have rules disabled via the UI. Starting January 18, some detectors that had rules disabled via the API (such as through Terraform) continued to send alerts. We are currently rolling out a fix to production systems and expect to resolve the issue within the next 3 hours.

Report: "Splunk APM Troubleshooting MetricSets Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the performance of the Splunk APM trace processing pipeline is causing Troubleshooting MetricSets and trace monitoring metrics to be delayed by more than five minutes. As a result, the APM Troubleshooting experience, service maps, and Tag Spotlight do not have access to the most recent data.

Report: "Alert Modal not displaying plot lines"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We’re investigating an issue with the Alert Modal chart not displaying plots. Only charts within Alert Modals are impacted. Alerting, notification, and general charting are not impacted.

Report: "Org metrics delayed"

Last update
resolved

Org metrics may have been delayed in processing from 12/5 at 10:30 PM PT until 12/6 at 11:00 AM PT. Any delays may have resulted in incorrect billing data being displayed. No other metrics were impacted. The issue is now resolved and billing data should be reported correctly.

Report: "Synthetics - Degraded processing rate with run count."

Last update
resolved

Certain scheduled synthetics runs may not have been processed during this time. This has now been resolved and the system should be back to normal.

investigating

We are currently experiencing reduced run counts and are working to remediate the issue.

Report: "Splunk APM Trace Data Being Dropped"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

A degradation of the trace data ingest path for Splunk APM is causing trace spans to be dropped and lost. We are investigating the issue and will provide an update as soon as possible.

Report: "Splunk Enterprise Cloud customers not able to sign up for Splunk Observability Cloud"

Last update
resolved

This incident has been resolved.

identified

Customers using Splunk Enterprise Cloud SSO are currently not able to sign up for Splunk Observability Cloud. Splunk Cloud SSO customers who are already signed up are facing intermittent issues accessing Log Observer Connect. The problem has been identified and a fix is in progress.

Report: "Chars and Detectors delayed"

Last update
resolved

Charts and detectors were delayed between 1:01 p.m. and 1:22 p.m. PDT. No data was lost. We have recovered, and all charts and detectors should now be back to normal.

Report: "Splunk Log Observer Charts Interface Unavailable"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are investigating a partial outage of the Splunk Log Observer charts. Log data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Unable to create/manage Splunk LO Connect connections"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

We are investigating an issue wherein users are unable to create and manage Splunk LO Connect connections. There is no impact on the functioning of existing LO Connect connections.

Report: "Intermittent failures for some Log Observer Connect searches"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor Log Observer Connect searches to confirm they are fully operational.

identified

The issue has been identified and a fix has been implemented.

investigating

We are investigating intermittent failures for some Log Observer Connect searches.

Report: "Splunk AlwaysOn Profiling Interface Unavailable"

Last update
resolved

This incident has been resolved.

identified

The deployment is continuing as planned and the performance degradation is subsiding further as service instances are upgraded.

identified

We are continuing to deploy a fix for this issue and are seeing some early signs of recovery. We will provide another update on the recovery by 6pm PT.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are investigating a major outage of the AlwaysOn Profiling web application. Profiling data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Charts and Detectors Delayed"

Last update
resolved

Customers may have experienced delays in some charts and detectors between 12:24 and 12:51 PM PT. Datapoint ingest was not affected.

Report: "Metrics and tags collection is being dropped"

Last update
resolved

This incident has been resolved.

identified

A degradation of the ingest path for Splunk Cloud Integration has been causing data points to be dropped and lost since Thursday, May 25 at 5 PM PT. We have identified the issue and are implementing a fix.

Report: "Intermittent Query Slowness for Splunk Log Observer Connect"

Last update
resolved

This incident has been resolved.

investigating

We are investigating intermittent query slowness for some Log Observer Connect customers.

Report: "Some sf.org metrics are incorrectly reporting drops - no data is being dropped"

Last update
resolved

A recent code push resulted in some sf.org metrics incorrectly reporting that data was being dropped. No data drops actually occurred. The code change has been rolled back and the metrics are now being reported correctly. This issue started around 8:10 AM PT and was resolved by 10:15 AM PT. Again, no production outage occurred; only the sf.org metrics were reporting incorrect information. We apologize for the inconvenience.

Impacted metrics:
- sf.org.numDatapointsDroppedBatchSizeByToken
- sf.org.numDatapointsDroppedBatchSize

Report: "MetricTimeSeries Creation Delayed"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

Customers may experience delays in seeing newly created MetricTimeSeries. Datapoint ingest is not affected for existing MetricTimeSeries.

Report: "Partial UI outage for Data management view"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are currently monitoring.

identified

The issue has been identified and the fix is being implemented.

investigating

We are continuing to investigate the issue.

investigating

The Data Management view's integrations list is affected.

Report: "Issues with Splunk APM usage and throttling metrics"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and Splunk APM usage and throttling metrics are now being generated as expected. We are continuing to monitor for any further issues.

investigating

We are investigating some issues with usage and throttling metrics generated for Splunk APM that might lead to inconsistencies in any charts that use them. Trace data is not impacted and continues to be processed as expected.

Report: "Datapoints Being Dropped"

Last update
resolved

This incident has been resolved.

monitoring

Azure metrics and tags collection may be delayed or dropped due to an Azure outage (https://status.azure.com/en-us/status). Datapoint ingest is affected and we are dropping datapoints for the Azure cloud integration. We are monitoring and will provide an update once the issue is resolved.