Sentry

Is Sentry Down Right Now? Check whether an outage is currently ongoing.

Sentry is currently Operational

Last checked from Sentry's official status page
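
Since this page only mirrors Sentry's official status page, the same check can be scripted against that page's public JSON feed. Below is a minimal sketch in Python, assuming the status page at status.sentry.io is hosted on Atlassian Statuspage and exposes its standard /api/v2/ endpoints; the URLs and response fields are conventions of that service, not something confirmed by this page.

```python
import json
import urllib.request

# Assumption: Sentry's official status page (status.sentry.io) is hosted on
# Atlassian Statuspage, which publishes a read-only JSON API under /api/v2/.
# The endpoint paths and field names below follow standard Statuspage
# conventions and are not taken from this page itself.
STATUS_URL = "https://status.sentry.io/api/v2/status.json"
UNRESOLVED_URL = "https://status.sentry.io/api/v2/incidents/unresolved.json"


def fetch_json(url: str) -> dict:
    """Download a URL and decode its body as JSON."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)


def main() -> None:
    # Overall indicator, e.g. "All Systems Operational" when nothing is open.
    status = fetch_json(STATUS_URL)
    print("Overall status:", status["status"]["description"])

    # Any incidents that have not yet reached the "resolved" state.
    unresolved = fetch_json(UNRESOLVED_URL)
    for incident in unresolved.get("incidents", []):
        print(f'- {incident["name"]} ({incident["status"]})')


if __name__ == "__main__":
    main()
```

Polling the JSON endpoints rather than scraping the rendered page keeps such a check stable even if the page layout changes.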

Historical record of incidents for Sentry

Report: "Generalized latency issues throughout infrastructure"

Last update
investigating

We are currently investigating this issue, which is impacting both the US and EU regions.

Report: "Replay Recording Consumer Backlogging on 4 Partitions"

Last update
resolved

This incident has been resolved.

monitoring

We have scaled up Replay processing to account for throughput changes and are monitoring the situation.

investigating

We are currently investigating this issue.

Report: "Replay Recording Consumer Backlogging on 4 Partitions"

Last update
Investigating

We are currently investigating this issue.

Report: "Delays in error ingestion in us region"

Last update
resolved

Between 14:30 - 17:18 UTC, we experienced delays in error ingestion due to some components of our ingestion pipeline adding significant load to our primary database. Average error ingestion delay reached a maximum of 22.5 minutes at around 15:23 UTC. The issue was resolved and error ingestion is operating as expected.

monitoring

Our ingestion backlog has recovered and everything looks good currently. We're continuing to investigate improvements to our ingestion pipeline to prevent similar bottlenecks in the future. Average event ingestion latency is normal at around 15 seconds.

identified

We've identified an issue causing bottlenecks within our event ingestion pipeline. We are working on optimizing a few areas of our pipeline and are catching up on our ingestion backlog. Average event ingestion delay is now under 8 minutes and continuing to drop. We apologize for any inconvenience caused by this and expect it to be resolved within the next hour. If anything changes, we'll provide another update.

investigating

We're continuing to investigate this issue and will provide another update as soon as we've identified the root cause. Maximum error delays peaked at around 22.5 minutes and are currently just under 19 minutes.

investigating

We're currently investigating reports of a delay in errors ingestion and will provide further updates as soon as we have more information.

Report: "Delays in error ingestion in us region"

Last update
Investigating

We're currently investigating reports of a delay in errors ingestion and will provide further updates as soon as we have more information.

Report: "Integrations Issue"

Last update
resolved

Between 2025-05-26 07:00 UTC and 2025-05-26 11:00 UTC, we experienced an issue with Integrations related to degraded performance, which may have resulted in delayed processing. The issue was resolved and Integrations are operating as expected.

investigating

We're continuing to investigate this issue and will provide another update as soon as we've identified the root cause.

investigating

We're currently investigating reports of a potential issue with Integrations and will provide further updates as soon as we have more information.

Report: "Integrations Issue"

Last update
Investigating

We're currently investigating reports of a potential issue with Integrations and will provide further updates as soon as we have more information.

Report: "Span alerting in us delayed"

Last update
resolved

The incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating an issue where span-based alerts are delayed by up to 15 minutes.

Report: "Span alerting in us delayed"

Last update
Investigating

We are currently investigating an issue where span-based alerts are delayed by up to 15 minutes

Report: "Span Alert Delay"

Last update
resolved

Between 15:17 UTC and 23:51 UTC, we experienced an issue with Span Alert processing in US and DE. The issue was resolved and Span Alerting is operating as expected.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

Report: "Database maintenance in the EU region"

Last update
Scheduled

We will be doing routine database upgrades in EU during this window. No interruption is expected.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Report: "Span Alert Delay"

Last update
Identified

The issue has been identified and a fix is being implemented.

Report: "Slack API errors"

Last update
resolved

This incident has been resolved.

identified

Some Slack notifications will fail to send due to an ongoing Slack incident: https://slack-status.com/2025-05/7b32241eb41a54aa

Report: "Slack API errors"

Last update
Identified

Some slack notifications will fail to send, due to an ongoing slack incident. https://slack-status.com/2025-05/7b32241eb41a54aa

Report: "Delays in event processing"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

Report: "Delays in event processing"

Last update
Identified

The issue has been identified and a fix is being implemented.

Report: "US spans ingestion delayed"

Last update
resolved

This incident has been resolved.

monitoring

We're continuing to process our backlog and monitor.

monitoring

We've addressed the issue and are currently processing our backlog.

investigating

We're currently experiencing a delay of about 11 minutes in the ingestion of spans in our US region and we're investigating.

Report: "US spans ingestion delayed"

Last update
Investigating

We're currently experiencing a delay of about 11 minutes in the ingestion of spans in our US region and we're investigating.

Report: "Database maintenance in EU region"

Last update
Scheduled

We will be performing routine database maintenance in our European region. During this time, ingestion may be delayed.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Report: "Front end 502s in US region"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented, and we're continuing to monitor for recovery.

investigating

We are continuing to investigate errors with our load-balancing infrastructure.

investigating

We are currently investigating an increase in 502 status codes in our US-hosted region.

Report: "Front end 502s in US region"

Last update
Investigating

We are currently investigating an increase in status 502 in our US-hosted region.

Report: "Elevated error rates when accessing newly-created organizations in US region"

Last update
resolved

This incident has been resolved.

monitoring

We've identified the issue, and pushed a mitigation which should restore access while we continue to work on the underlying problem.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are still investigating this issue.

investigating

We are still investigating this issue.

investigating

We are currently investigating an increase in error rates when accessing newly-created organizations in the US region.

Report: "Elevated error rates when accessing newly-created organizations in US region"

Last update
Investigating

We are currently investigating an increase in error rates when accessing newly-created organizations in the US region.

Report: "Elevated 500s in US region"

Last update
Resolved

Sentry's front end returned a higher-than-normal number of 500-series statuses, starting at roughly 23:55 UTC on Saturday 2025-05-03, and ending at roughly 00:06 UTC on Sunday 2025-05-04. Ingestion during this time was delayed by a few minutes, but otherwise unaffected.

Report: "Elevated 500s in US region"

Last update
resolved

Sentry's front end returned a higher-than-normal number of 500-series statuses, starting at roughly 23:55 UTC on Saturday 2025-05-03, and ending at roughly 00:06 UTC on Sunday 2025-05-04. Ingestion during this time was delayed by a few minutes, but otherwise unaffected.

Report: "API latency"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to test potential workarounds.

investigating

We are testing some potential workarounds for this incident.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating increased latency on the Sentry API.

Report: "API latency"

Last update
Investigating

We are currently investigating increased latency on the sentry API.

Report: "Ingestion issues for Sentry US"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

A spike in traffic caused a temporary degradation for one of our regional edge clusters in North America. After scaling the system, we are back to normal operation.

investigating

We are currently investigating this issue.

Report: "Ingestion issues for Sentry US"

Last update
Investigating

We are currently investigating this issue.

Report: "Elevated UI errors for EU customers"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating a user interface issue affecting a subset of our customers. Some users may experience display inconsistencies or missing elements within the application. Our engineering team is actively working to identify the root cause and implement a fix. We will provide an update as soon as more information is available.

Report: "Elevated UI errors for EU customers"

Last update
Investigating

We are currently investigating a user interface issue affecting a subset of our customers. Some users may experience display inconsistencies or missing elements within the application.Our engineering team is actively working to identify the root cause and implement a fix. We will provide an update as soon as more information is available.

Report: "Delayed profile ingestion"

Last update
resolved

The incident has been resolved. Due to scaling issues in our internal systems, we were unable to process ingested profiles between 13:26 UTC and 13:55 UTC.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating reports of delayed ingestion of profiles for all customers on sentry.io (US).

Report: "Delayed profile ingestion"

Last update
Investigating

We are currently investigating reports of delayed ingestion of profiles for all customers on sentry.io (US).

Report: "Degraded API response times"

Last update
resolved

This incident has been resolved.

investigating

Fixes to improve latency have been applied, and additional improvements are being deployed.

investigating

We're investigating an issue causing increased latency for some users and working on a fix.

Report: "Degraded API response times"

Last update
Investigating

We're investigating an issue causing increased latency for some users and working on a fix.

Report: "Errors ingestion delayed"

Last update
resolved

The backlog has been processed and all delayed data has been backfilled.

monitoring

Errors ingestion has resumed normal operation. New events will show up in realtime while the remaining backlog of events is being processed and backfilled.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating reports of delayed ingestion of errors for all customers on sentry.io (US).

Report: "Errors ingestion delayed"

Last update
Investigating

We are currently investigating reports of delayed ingestion of errors for all customers on sentry.io (US).

Report: "Elevated error rates for EU site users"

Last update
resolved

From 19:15 until 20:00 UTC, users of the Sentry.io site in the European region may have experienced elevated error rates due to an unexpected fault in an infrastructure update. Event ingestion continued to operate as expected during this period.

Report: "Elevated error rates for EU site users"

Last update
Resolved

From 19:15 until 20:00 UTC, users using the Sentry.io site in the European region may have experienced elevated error rates due to an unexpected fault in an infrastructure update. Event ingestion continued to operate as expected during this period.

Report: "False positive uptime alerts for Vercel-hosted projects"

Last update
resolved

This incident has been resolved.

identified

We've identified the problem and are working with Vercel to mitigate it.

investigating

We are still working with Vercel to identify the issue.

investigating

We are still working on this issue.

investigating

We are continuing to work on this issue.

investigating

We are continuing to investigate this issue.

investigating

We're investigating false-positive Uptime alerts for Vercel-hosted projects, and have deactivated alerts for all Vercel-related uptime checks while we resolve this issue.

Report: "False positive uptime alerts for Vercel-hosted projects"

Last update
Resolved

This incident has been resolved.

Identified

We've identified the problem and are working with Vercel to mitigate it.

Update

We are still working with Vercel to identify the issue.

Update

We are still working on this issue.

Update

We are continuing to work on this issue.

Update

We are continuing to investigate this issue.

Investigating

We're investigating false-positive Uptime alerts for Vercel-hosted projects, and have deactivated alerts for all Vercel-related uptime checks while we resolve this issue.

Report: "EAP clickhouse cluster in DE overwhelmed"

Last update
resolved

The EAP cluster is functioning normally again.

investigating

We are currently investigating this issue.

Report: "EAP clickhouse cluster in DE overwhelmed"

Last update
Resolved

EAP cluster is functioning normally again.

Investigating

We are currently investigating this issue

Report: "Mail delivery delays in eu region."

Last update
resolved

Email delivery in the EU region is back to normal.

monitoring

A mitigation has been put in place. We are continuing to monitor email delivery in the EU region for any further issues.

monitoring

We are continuing to monitor for any further issues.

monitoring

Email delivery in the EU region is recovering.

identified

Our email provider in the EU region is having a partial outage, so email delivery will be temporarily inconsistent.

Report: "Mail delivery delays in eu region."

Last update
Resolved

Email delivery in the EU region is back to normal.

Update

A mitigation has been put in place. We are continuing to monitor email delivery in the EU region for any further issues.

Update

We are continuing to monitor for any further issues.

Monitoring

Email delivery in the EU region is recovering.

Identified

Our email provider in the EU region is having a partial outage, so email delivery will be temporarily inconsistent.

Report: "Intermittent errors in EU region"

Last update
resolved

Between March 12 01:06am - 12:52pm UTC, we experienced a networking issue with several of our API canary deployments. Approximately 0.2% of API requests were failing during this time. The issue was resolved and our API is operating as expected.

monitoring

We've implemented a fix and the errors have stopped, but we’re monitoring just in case.

investigating

We're currently investigating reports of an issue that is causing intermittent errors for a small number of customers across different parts of our product and will provide further updates as soon as we have more information.

Report: "Delay in profile and errors ingestion"

Last update
resolved

This issue has been resolved and old events will be backfilled.

identified

Realtime processing is back to normal. We are continuing to work on backfilling the delayed data.

identified

Some data was dropped between 21:43 and 21:53 UTC.

identified

We've mitigated the root cause and are working on processing the backlog.

identified

We've made infrastructure changes to mitigate the problem and we're working with our vendor. We will update again soon.

identified

We are continuing to work on a fix for this issue.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

We are currently investigating a delay in profile ingestion.

Report: "Delayed errors ingestion."

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Some customers may experience delayed errors ingestion. We are investigating the issue.

Report: "Sentry.io is unavailable"

Last update
resolved

Between 15:54 UTC and 16:09 UTC, we experienced issues with the availability of our dashboard and several APIs; ingestion was not impacted. The issue was resolved and all components are operating as expected.

monitoring

We have recovered from the earlier issues in our system. We are continuing to monitor the situation.

investigating

We are currently investigating this issue.

Report: "Issues with Azure DevOps social sign-in"

Last update
resolved

Azure DevOps social sign-ins should work properly now.

investigating

We are investigating issues with Azure DevOps social sign-in.

Report: "Slack notification failures"

Last update
resolved

The issue was resolved and Slack notifications are operating as expected.

identified

We are currently experiencing issues with sending notifications to Slack due to an ongoing incident: https://slack-status.com/2025-02/1b757d1d0f444c34

Report: "Uptime Detection Issues"

Last update
resolved

This incident has been resolved.

monitoring

The increase in false positive timeouts has recovered. We’ve returned the failure threshold to its normal level and are continuing to monitor.

investigating

We’re currently experiencing an increase in false positive timeouts. We’ve temporarily raised the failure threshold to 6 checks while we investigate.

Report: "Database issue in US region"

Last update
resolved

Between 14:11 UTC and 14:17 UTC, we experienced problems with our main database, which caused short periods of dashboard unavailability, web API unavailability, and a small number of processing errors.

Report: "Uptime monitoring feature delays in EU"

Last update
resolved

This incident has been resolved.

investigating

We're experiencing high ingestion latency for the uptime monitoring feature in the EU region. We're currently investigating.

Report: "Email Notifications in EU not being sent"

Last update
resolved

This incident has been resolved.

monitoring

Notifications in the EU region are being sent once more. We are monitoring the system to make sure it's stable.

investigating

We are continuing to work with our email provider to resolve access, and we are investigating alternatives.

investigating

We are currently working with our provider to resolve the issue.

Report: "US Performance alerts delay between 15:30 and 16:05 UTC"

Last update
resolved

This incident has been resolved.

monitoring

Performance alert delays have been resolved; we're continuing to monitor.

monitoring

Between 15:30 and 16:05 UTC we experienced delays of up to 7 minutes in our performance alerting in our US region.

Report: "EU region uptime checks delayed"

Last update
resolved

This incident has been resolved.

monitoring

Uptime checks have recovered and we are now monitoring the situation.

identified

We have identified the issue and issued a fix.

Report: "EU region profiles ingestion delay"

Last update
resolved

The delayed ingestion of profiles in our EU region has been resolved.

identified

We have identified an issue causing profile ingestion to be delayed by up to 30 min in our EU region and are working on a fix.

Report: "Delayed ingestion in US region"

Last update
resolved

All backlogs have been consumed. This incident has been resolved.

monitoring

We've finished consuming the backlog on errors ingestion, but we're going to continue monitoring the situation.

monitoring

We've finished consuming the profiles backlog, but we're still working through the errors backlog. (The remaining backlog only consists of events received before roughly 17:30 UTC on 2025-02-03; events received since then should be processed normally.)

monitoring

We're still processing backlogged errors and profiles.

monitoring

We've split ingestion so that new errors will be handled right away, while we continue to burn our backlog. Attachment ingestion is no longer backlogged, but we are seeing new delays in profile ingestion that we're working to correct.

monitoring

We are still consuming backlogged events.

monitoring

We've implemented a fix, and we're now consuming our backlog.

identified

We've identified a mitigation for the delay, and are currently working to implement it.

identified

We have identified a potential cause, and are investigating further.

investigating

We are currently investigating delayed ingestion in the US region.

Report: "Authentication issues affecting SSO and 2FA"

Last update
resolved

The issue preventing 2FA codes from being accepted has been resolved.

identified

We have identified an issue preventing the acceptance of 2FA codes and are working on a fix.

investigating

We are currently experiencing authentication issues, affecting SSO and 2FA features. Our team is actively investigating the issue and working to resolve it as quickly as possible.

Report: "EU region alert emails not being sent"

Last update
resolved

The issue causing alert emails to not send in our EU region has been resolved.

identified

We are continuing to work on a fix for issue alert emails not being sent in our EU region.

identified

We have identified an issue causing alert emails to not be sent in our EU region as of 3:19pm UTC and are working on a fix. US region alert emails are not affected.

Report: "Delay in transaction ingestion"

Last update
resolved

Transaction ingestion is back to normal.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We have implemented a fix, and transaction ingestion is beginning to catch up.

investigating

We are investigating a delay in transaction ingestion.

Report: "Some EU Performance aggregate data delayed up to 6 hours"

Last update
resolved

This incident has been resolved.

identified

Between January 9th 08:00 and 16:00 UTC our EU region experienced a delay of up to 6 hrs in the ingestion of some aggregate data in our performance product, such as measurements for transactions-per-minute.

Report: "Attachment delays"

Last update
resolved

This incident has been resolved.

monitoring

Attachment rates are back to normal, monitoring the situation.

identified

Attachment processing latency is back under 1 minute. We are still dealing with elevated numbers of attachments and are working to return to our normal processing rates.

identified

We've caught up with backlogs on about 50% of our attachment partitions. If anything changes, we'll provide another update.

investigating

We are investigating a backlog in attachment processing. Attachments are currently delayed by up to an hour. We are processing the backlog.

Report: "Errors, transactions and attachments ingestion issues in US region"

Last update
resolved

Between 10:30 and 12:15 Pacific Time, we experienced ingestion delays and dropped messages in the errors, transactions, and attachments ingestion pipelines due to a component failure in underlying infrastructure from our cloud provider. Roughly 1 in 1000 messages were not processed, and dropped events did not count against customer quotas. The issue is now resolved.

monitoring

Ingest latencies have recovered, and we are no longer dropping events. We estimate that 1 in 1000 events were dropped between 10:30 and 12:00 Pacific Time. The maximum ingestion delay was around 35 minutes. Dropped events were not counted against customer quotas.

monitoring

We are continuing to monitor for any further issues.

monitoring

Ingest latencies have recovered, and we are no longer dropping events. We estimate that 1 in 1000 events were dropped between 10:30 and 12:00 Pacific Time. The maximum ingestion delay was around 35 minutes. Dropped events were not counted against customer quotas.

investigating

We are currently experiencing delays and dropped messages on the events and transactions ingestion pipelines in the US region. We are currently working with our cloud provider to investigate the issue.

Report: "Native stack traces processing failures"

Last update
resolved

This incident has been resolved.

identified

We're experiencing problems with native stack trace processing. Some crashes might fail to symbolicate. We have identified the issue and are working on a solution.

Report: "Delays in spike protection enforcement"

Last update
resolved

Between 10am and 12pm Pacific Time, we experienced delays in spike protection enforcement.

Report: "Inconsistencies in metrics data and alerts"

Last update
resolved

Between Dec 13 ~4PM PT and Dec 18 ~12PM PT, we experienced an issue with metric data inconsistencies in the UI. The issue is now resolved.

monitoring

Data for the missing time window (Friday ~4PM PT to Tuesday ~4PM PT in the US region) has been restored, and timeseries data in the UI should be displaying normally. We are continuing to monitor just in case.

identified

A percentage of metrics alerts (~17% of timeseries affected) fired inconsistently or incorrectly between Friday ~4PM PT and Tuesday ~4PM PT, but alerts should now be working properly. Data from those timeseries in the UI is still incorrect for that time window and we’re working on restoring it (new data from Tuesday ~4PM PT and on, is correct).

Report: "US region performance alerts & ingestion delayed"

Last update
resolved

The issue with US region performance alerts and ingestion has been resolved.

investigating

US region performance alerts and ingestion are currently delayed by up to 7 minutes; we're continuing to investigate.

investigating

Starting at 15:36 UTC, US region performance alerts and ingestion have been delayed by up to 20 minutes; we're currently investigating.

Report: "US performance alerts delayed 10 minutes"

Last update
resolved

This incident has been resolved.

monitoring

The issue with delayed performance alerts has been resolved; we're continuing to monitor.

investigating

Performance alerts are delayed by up to 12 minutes; we're continuing to investigate the issue.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

Starting at 14:00 UTC, performance alerts in our US region have been delayed by up to 10 minutes; we're investigating the issue.

Report: "US performance metrics delayed ingestion & alerts"

Last update
resolved

This incident has been resolved.

monitoring

We have fixed the issue with US performance metrics ingestion & alerting and are continuing to monitor them.

investigating

We are currently investigating an issue with US performance metrics. US region ingestion and alerts are currently delayed by up to 15 minutes.

Report: "EU profile ingestion issue"

Last update
resolved

Between December 5 21:04 UTC and December 6 00:42 UTC we experienced an interruption in the processing of profiles in our EU region which led to some profile events being lost. We have corrected the issue and profiles are now being processed properly.

Report: "US and EU region delayed ingestion starting at 20:51 UTC"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

We experienced a delay of ingestion for all event types in our US and EU regions starting at 20:51 UTC. Our US region experienced a delay of 8 minutes while our EU region experienced a delay of 20 minutes. We have resolved the issue and are monitoring our ingestion.

Report: "Sentry Denial of Service"

Last update
resolved

On November 27 between 21:46 and 21:57 UTC, there was a denial-of-service attack on Sentry that sent up to 10x the normal volume of traffic. Our systems scaled up within 6 minutes to accommodate this traffic. Between 21:46 and 21:52 UTC, some customers may have experienced traffic loss of up to 80%. We are implementing changes to our ingestion layer to reduce the impact of similar attacks in the future.