Splunk Observability Cloud JP0

Is Splunk Observability Cloud JP0 Down Right Now? Check whether an outage is currently in progress.

Splunk Observability Cloud JP0 is currently Operational

Last checked from Splunk Observability Cloud JP0's official status page
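
For a quick programmatic check, status pages hosted on Atlassian Statuspage expose a small JSON API. The sketch below polls it using only the Python standard library; the host status.signalfx.com is an assumption (historically the SignalFx status page), so verify it against the official status page link before relying on it.

    # Minimal status probe for a Statuspage-hosted status page.
    # Assumption: the Splunk Observability Cloud status page lives at
    # status.signalfx.com; adjust the host if the official page differs.
    import json
    import urllib.request

    STATUS_URL = "https://status.signalfx.com/api/v2/status.json"

    def current_status() -> str:
        """Return the overall indicator and description reported by Statuspage."""
        with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
            payload = json.load(resp)
        # Statuspage returns e.g. {'indicator': 'none', 'description': 'All Systems Operational'}
        status = payload["status"]
        return f"{status['indicator']}: {status['description']}"

    if __name__ == "__main__":
        print(current_status())

An indicator of "none" corresponds to the Operational state shown above; "minor", "major", and "critical" indicate increasingly severe incidents.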

Historical record of incidents for Splunk Observability Cloud JP0

Report: "Splunk APM MetricSets Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the performance of a key backend component of Splunk APM is causing both Troubleshooting and Monitoring MetricSets to be delayed by more than five minutes. No data is being dropped at this time, but the APM Troubleshooting page, the Tag Spotlight experience, metrics created from traces, and APM detectors are all delayed.

Report: "Splunk Observability Synthetic Monitoring tests unavailable"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

Synthetic Monitoring tests may experience issues, and runs may be missing or delayed until the issue is resolved. We will update the incident as we receive updated information.

Report: "Splunk Observability Synthetic Monitoring tests unavailable"

Last update
Update

We are continuing to investigate this issue.

Investigating

Synthetic Monitoring tests may experience issues, and runs may be missing or delayed until the issue is resolved. We will update the incident as we receive updated information.

Report: "Intermittent login failures for customers using Unified identity"

Last update
resolved

This incident has been resolved.

investigating

Customers using Unified Identity may experience intermittent failures while logging into the Splunk Observability Cloud web interface. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Intermittent login failures for customers using Unified identity"

Last update
Investigating

Customers using unified identity may experience intermittent failures while logging into Splunk Observability cloud web interface. Datapoint Ingest is not affected. We are investigating and will provide an update shortly.

Report: "Splunk Synthetics Google Chrome Upgrade"

Last update
Scheduled

Splunk Synthetic Monitoring will update Google Chrome and Chromium to version 135.0.7049.84-1 for Browser tests on 4/22 at 8 am PST. We periodically auto-update to newer versions of Google Chrome/Chromium when available. Due to differences between browser versions, Synthetics test behavior or timings can sometimes change and may require updates to your configured steps.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Report: "Log Observer Connect Search Failures"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We're experiencing failures for searches related to Log Observer. We will provide an update as soon as possible.

Report: "Log Observer Connect Search Failures"

Last update
Investigating

We're experiencing failures for searches related to Log observer. We will provide an update as soon as possible.

Report: "Splunk APM Troubleshooting MetricSets Delayed"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the performance of the Splunk APM trace processing pipeline is causing Troubleshooting MetricSets to be delayed by more than five minutes. As a result, the APM Troubleshooting experience, service maps, and Tag Spotlight do not have access to the most recent data. The processing of metrics for Business Workflows, which also depends on this pipeline, is equally delayed. Trace data ingest is not impacted at this time; service-level and endpoint-level Monitoring MetricSets and the detectors built from them are also not impacted.

Report: "Splunk APM Troubleshooting MetricSets Delayed"

Last update
Identified

The issue has been identified and a fix is being implemented.

Investigating

A degradation in the performance of the Splunk APM trace processing pipeline is causing Troubleshooting MetricSets to be delayed by more than five minutes. As a result, the APM Troubleshooting experience, service maps and Tag Spotlight do not have access to the most recent data.The processing of metrics for Business Workflows, which also depends on this pipeline, are equally delayed. Trace data ingest is not impacted at this time; service-level and endpoint-level Monitoring MetricSets and the detectors built from them are also not impacted.

Report: "Unable to fetch Splunk RUM sessions"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are currently investigating this issue.

Report: "Unable to fetch Splunk RUM sessions"

Last update
Resolved

This incident has been resolved.

Identified

The issue has been identified and a fix is being implemented.

Investigating

We are currently investigating this issue.

Report: "Splunk Synthetics runners are unavailable in the Stockholm region"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

We are seeing signs of recovery and are continuing to monitor.

identified

Splunk Synthetics runners are unavailable in the Stockholm region due to an AWS EC2 instance outage in the eu-north-1 region.

Report: "Splunk APM Trace Data Ingestion Delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

A degradation in the Splunk APM data ingestion pipeline is causing delays in the processing and storage of trace data, leading to issues with loading Trace Analysis and Exemplars.

Report: "Splunk Synthetic Monitoring updated Google Chrome to version 125"

Last update
resolved

Splunk Synthetic Monitoring updated Google Chrome to version 125 for Browser tests on July 18 at 12:30 PM EDT. We periodically auto-update to newer versions of Google Chrome when available. Due to differences between browser versions, test behavior or timings can sometimes change and may require updates to your test steps.

Report: "Delayed metrics from GCP"

Last update
resolved

The issue with Cloud Monitoring metrics has been resolved for all affected users as of Monday, 2024-07-15 10:52 US/Pacific.

identified

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped. This is due to a GCP issue. See status page here: https://status.cloud.google.com/incidents/ERzzrJqeGR2GCW51XKFv

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed or dropped.

investigating

We are experiencing an issue syncing cloud metrics from GCP. Some metrics from GCP could be delayed.

Report: "Charts not Loading for APM Workflows"

Last update
resolved

This incident has been resolved.

investigating

The issue is limited to some customers. We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

APM Workflow charts may not be loading for some customers. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Splunk APM Interface Unavailable"

Last update
resolved

The impact has been remediated and all APM services are working as expected. All API requests should be served without issues at this point.

monitoring

There was a major outage of the Splunk APM web application. Trace data ingest was not impacted. Degraded performance was detected and a fix has been implemented. We are monitoring the results and will provide an update upon resolution.

investigating

We continue to investigate a major outage of the Splunk APM web application. Trace data ingest is not impacted at this time, though degraded performance has been detected. We will provide additional updates as soon as possible.

investigating

We continue to investigate a major outage of the Splunk APM web application. Trace data ingest is not impacted at this time. We will provide additional updates as soon as possible.

investigating

We are investigating a major outage of the Splunk APM web application. Trace data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Splunk APM Exemplar Search errors out"

Last update
resolved

We experienced a degradation in the availability of trace exemplars. When clicking on charts to search for trace exemplars, results may not have been available, and it may not have been possible to view individual traces.

Report: "Splunk Observability Cloud Web Interface Unavailable"

Last update
resolved

The Splunk Observability Cloud web application was unavailable between 21:17 UTC and 21:37 UTC. Data ingest and alerting was not impacted.

Report: "Configuration analysis of Splunk APM MetricSets was failing"

Last update
resolved

A bug was preventing the creation of new MetricSets configurations, as well as the disabling or modification of existing ones. The computation of existing MetricSets was not affected.

Report: "Splunk APM Exemplar Search"

Last update
resolved

This incident has been resolved.

investigating

A degradation in the availability of trace exemplars occurred from February 14 3PM PST to February 15 10AM PST (February 14 11PM UTC to February 15 6PM UTC). When clicking on charts to search for trace exemplars, results may not have been available. Viewing individual traces may not have been possible.

Report: "Alerts being sent for some detector rules previously disabled via API"

Last update
resolved

This incident has been resolved.

identified

We have identified a bug that is causing alerts to be sent for detector rules that have been previously disabled via the API. This issue does not impact detectors that have rules disabled via the UI. Starting January 18, some detectors that had rules disabled via the API (such as through Terraform) continued to send alerts. We are currently rolling out a fix to production systems and expect to resolve the issue within the next 3 hours.
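
For context, a detector's rules live on the detector object itself, so disabling a rule via the API means fetching the detector, flipping that rule's disabled flag, and writing the object back. Below is a minimal sketch against the v2 REST API; the realm host, the X-SF-TOKEN header, and the rules[].disabled field follow the public SignalFx detector model, but treat the exact shape as an assumption to verify against the current API reference.

    # Sketch: disable a single rule on a detector via the Splunk Observability
    # (SignalFx) REST API. Assumptions: the v2 detector endpoint, the
    # X-SF-TOKEN auth header, and a boolean 'disabled' field on each rule.
    import requests

    REALM = "jp0"             # deployment realm, e.g. jp0
    TOKEN = "YOUR_API_TOKEN"  # token with API access
    BASE = f"https://api.{REALM}.signalfx.com/v2/detector"
    HEADERS = {"X-SF-TOKEN": TOKEN, "Content-Type": "application/json"}

    def disable_rule(detector_id: str, detect_label: str) -> None:
        """Fetch a detector, mark the matching rule disabled, and write it back."""
        resp = requests.get(f"{BASE}/{detector_id}", headers=HEADERS)
        resp.raise_for_status()
        detector = resp.json()
        for rule in detector["rules"]:
            if rule["detectLabel"] == detect_label:
                rule["disabled"] = True
        requests.put(
            f"{BASE}/{detector_id}", headers=HEADERS, json=detector
        ).raise_for_status()

Per the incident above, only rules disabled through this API path (including via Terraform) were affected; rules disabled through the UI were not.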

Report: "Elevated Error Rate Accessing the Splunk APM Interface"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are investigating an elevated rate of errors occurring while using the Splunk APM Interface. Parts of the Splunk APM Troubleshooting experience, like the trace details interface, may be impacted and return errors. Trace data ingest is not impacted.

Report: "Charts and Detectors not Loading"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

Charts and Detectors for a subset of customers may not be loading. Datapoint ingest is not affected. We are investigating and will provide an update shortly.

Report: "Alert Modal not displaying plot lines"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We’re investigating an issue with the Alert Modal chart not displaying plots. Only charts within Alert Modals are impacted. Alerting, notification, and general charting are not impacted.

Report: "Synthetics - Degraded processing rate with run count."

Last update
resolved

Certain scheduled synthetics runs may not have been processed during this time. This has now been resolved and the system should be back to normal.

investigating

We are continuing to investigate this issue.

investigating

We are currently experiencing reduced run counts and are working to remediate the issue.

Report: "Splunk APM Monitoring MetricSets delayed"

Last update
resolved

A degradation in the performance of the Splunk APM metrics processing pipeline caused Monitoring MetricSets to be delayed by more than five minutes. Trace data ingest is not impacted, but service, endpoint and workflow dashboards, and other charts and detectors built from Monitoring MetricSets are impacted.

Report: "Splunk APM Monitoring MetricSets delayed"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

We are continuing to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate the issue.

investigating

A degradation in the performance of the Splunk APM metrics processing pipeline is causing Monitoring MetricSets to be delayed by more than five minutes. Trace data ingest is not impacted, but service, endpoint and workflow dashboards, and other charts and detectors built from Monitoring MetricSets are impacted.

Report: "Splunk APM Trace Data Being Dropped"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

A degradation of the trace data ingest path for Splunk APM is causing trace spans to be dropped and lost. We are investigating the issue and will provide an update as soon as possible.
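
Spans dropped on the ingest side cannot be recovered from the client, but applications can ride out short periods of back-pressure by giving their span exporter more buffering headroom. Here is a minimal sketch using the OpenTelemetry Python SDK; the endpoint is a placeholder for your own OpenTelemetry Collector or ingest URL, and the queue sizes are illustrative rather than recommended values.

    # Sketch: enlarge the span export buffer so transient ingest degradation
    # is buffered client-side instead of dropped immediately.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    # Placeholder endpoint: point this at your Collector or ingest URL.
    exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")

    processor = BatchSpanProcessor(
        exporter,
        max_queue_size=8192,         # SDK default is 2048; extra headroom
        schedule_delay_millis=5000,  # how often batches are flushed
        max_export_batch_size=512,
    )

    provider = TracerProvider()
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)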

Report: "Splunk Enterprise Cloud customers not able to sign up for Splunk Observability Cloud"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

Customers using Splunk Enterprise Cloud SSO are currently not able to sign up for Splunk Observability Cloud. Splunk Cloud SSO customers who are already signed up are facing intermittent issues accessing Log Observer Connect. The problem has been identified and a fix is in progress.

Report: "Splunk Log Observer Charts Interface Unavailable"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

We are investigating a partial outage of the Splunk Log Observer charts. Log data ingest is not impacted at this time. We will provide an update as soon as possible.

Report: "Unable to create/manage Splunk LO Connect connections"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

We are investigating an issue wherein users are unable to create and manage Splunk LO Connect connections. There is no impact on the functioning of existing LO Connect connections.

Report: "Intermittent failures for some Log Observer Connect searches"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor to confirm that Log Observer Connect searches are fully operational.

identified

The issue has been identified and a fix has been implemented.

investigating

We are investigating intermittent failures for some Log Observer Connect searches.

Report: "Metrics and tags collection is being dropped"

Last update
resolved

This incident has been resolved.

identified

A degradation of the ingest path for Splunk Cloud Integration has been causing data points to be dropped and lost since Thursday, May 25 at 5:00 PM PT. We have identified the issue and are implementing a fix.

Report: "Intermittent Query Slowness for Splunk Log Observer Connect"

Last update
resolved

This incident has been resolved.

investigating

We are investigating intermittent query slowness for some Log Observer Connect customers.

Report: "Some sf.org metrics are incorrectly reporting drops - no data is being dropped"

Last update
resolved

A recent code push caused some sf.org metrics to incorrectly report that data was being dropped. No data drops actually occurred. The code change has been rolled back and the metrics are now being reported correctly. This issue started around 8:10 AM PT and was resolved by 10:15 AM PT. To reiterate, no production outage occurred; only the sf.org metrics reported incorrect information. We apologize for the inconvenience.

Impacted metrics:

- sf.org.numDatapointsDroppedBatchSizeByToken
- sf.org.numDatapointsDroppedBatchSize
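
Because the two metrics above are ordinary org-level metrics, they can be charted or alerted on to catch reporting anomalies like this one. A minimal sketch using the signalfx Python client follows; the JP0 stream endpoint and the client usage shown are assumptions to verify against that library's documentation.

    # Sketch: stream the two org metrics that misreported drops, using the
    # signalfx-python SignalFlow client. Endpoint and token are placeholders.
    import signalfx
    from signalfx.signalflow import messages

    PROGRAM = """
    data('sf.org.numDatapointsDroppedBatchSizeByToken').publish(label='by_token')
    data('sf.org.numDatapointsDroppedBatchSize').publish(label='total')
    """

    # Assumption: the JP0 realm streams from stream.jp0.signalfx.com.
    flow = signalfx.SignalFx(
        stream_endpoint="https://stream.jp0.signalfx.com",
    ).signalflow("YOUR_ACCESS_TOKEN")
    try:
        computation = flow.execute(PROGRAM)
        for msg in computation.stream():
            if isinstance(msg, messages.DataMessage):
                print(msg.logical_timestamp_ms, msg.data)
    finally:
        flow.close()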

Report: "Partial UI outage for Data management view"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are currently monitoring the results.

identified

The issue has been identified and the fix is being implemented.

investigating

We are continuing to investigate the issue.

investigating

The integrations list in the Data Management view is affected.

Report: "Issues with Splunk APM usage and throttling metrics"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and Splunk APM usage and throttling metrics are now being generated as expected. We are continuing to monitor for any further issues.

identified

We are investigating some issues with usage and throttling metrics generated for Splunk APM that might lead to inconsistencies in any charts that use them. Trace data is not impacted and continues to be processed as expected.

Report: "Splunk observability Cloud Web Interface is experiencing issues"

Last update
resolved

This incident is resolved and everything is working as expected in the Splunk Observability Cloud UI.

investigating

We are continuing to investigate this issue.

investigating

The Splunk Observability Cloud web interface is currently broken in the JP0 realm. We are investigating the issue.

Report: "Splunk Observability APM Trace Ingestion Delay"

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

Starting at 11:55 AM PT, a degradation in the performance of the Splunk APM data ingestion pipeline is causing the processing and storage of raw trace data to be delayed by more than five minutes. No data is being lost at this time and MetricSets are not impacted, but the most recent data may not be available in trace search results.