Historical record of incidents for Sentry
Report: "Generalized latency issues throughout infrastructure"
Last update: We are currently investigating this issue, which is impacting both the US and EU regions.
Report: "Replay Recording Consumer Backlogging on 4 Partitions"
Last update: This incident has been resolved.
We have scaled up Replay processing to account for throughput changes and are monitoring the situation.
We are currently investigating this issue.
Report: "Replay Recording Consumer Backlogging on 4 Partitions"
Last updateWe are currently investigating this issue.
Report: "Delays in error ingestion in us region"
Last updateBetween 14:30 - 17:18 UTC, we experienced delays in error ingestion due to some components of our ingestion pipeline adding significant load to our primary database. Average error ingestion delay reached a maximum of 22.5 minutes at around 15:23 UTC. The issue was resolved and error ingestion is operating as expected.
Our ingestion backlog has recovered and everything looks good currently. We're continuing to investigate improvements to our ingestion pipeline to prevent similar bottlenecks in the future. Average event ingestion latency is normal at around 15 seconds.
We've identified an issue causing bottlenecks within our event ingestion pipeline. We are working on optimizing a few areas of our pipeline and are catching up on our ingestion backlog. Average event ingestion delay is now under 8 minutes and continuing to drop. We apologize for any inconvenience caused by this and expect it to be resolved within the next hour. If anything changes, we'll provide another update.
We're continuing to investigate this issue and will provide another update as soon as we've identified the root cause. Maximum error delays peaked at around 22.5 minutes and are currently just under 19 minutes.
We're currently investigating reports of a delay in errors ingestion and will provide further updates as soon as we have more information.
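For context on the ingestion-delay figures quoted in the updates above, here is a minimal Python sketch of how such a metric could be computed from event receive timestamps. This is an illustrative assumption, not Sentry's actual implementation; all names are hypothetical.

    import time

    def ingestion_delay_seconds(received_at: float) -> float:
        # Delay between when an event reached the edge and when it is processed.
        return time.time() - received_at

    class DelayTracker:
        # Running average of ingestion delay across recently processed events.
        def __init__(self) -> None:
            self.total_seconds = 0.0
            self.count = 0

        def record(self, received_at: float) -> None:
            self.total_seconds += ingestion_delay_seconds(received_at)
            self.count += 1

        def average_minutes(self) -> float:
            # e.g. the "22.5 minutes" peak cited above is this value at its maximum.
            return (self.total_seconds / self.count) / 60 if self.count else 0.0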
Report: "Delays in error ingestion in us region"
Last updateWe're currently investigating reports of a delay in errors ingestion and will provide further updates as soon as we have more information.
Report: "Integrations Issue"
Last update: Between 2025-05-26 07:00 UTC and 2025-05-26 11:00 UTC, we experienced an issue with Integrations related to degraded performance, which may have resulted in delayed processing. The issue was resolved and Integrations are operating as expected.
We're continuing to investigate this issue and will provide another update as soon as we've identified the root cause.
We're currently investigating reports of a potential issue with Integrations and will provide further updates as soon as we have more information.
Report: "Integrations Issue"
Last updateWe're currently investigating reports of a potential issue with Integrations and will provide further updates as soon as we have more information.
Report: "Span alerting in us delayed"
Last updateThe incident has been resolved
A fix has been implemented and we are monitoring the results.
We are currently investigating an issue where span-based alerts are delayed by up to 15 minutes.
Report: "Span alerting in us delayed"
Last updateWe are currently investigating an issue where span-based alerts are delayed by up to 15 minutes
Report: "Span Alert Delay"
Last update: Between 15:17 UTC and 23:51 UTC, we experienced an issue with Span Alert processing in US and DE. The issue was resolved and Span Alerting is operating as expected.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
Report: "Database maintenance in the EU region"
Last update: We will be doing routine database upgrades in the EU region during this window. No interruption is expected.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Span Alert Delay"
Last updateThe issue has been identified and a fix is being implemented.
Report: "Slack API errors"
Last update: This incident has been resolved.
Some Slack notifications will fail to send due to an ongoing Slack incident: https://slack-status.com/2025-05/7b32241eb41a54aa
Report: "Slack API errors"
Last updateSome slack notifications will fail to send, due to an ongoing slack incident. https://slack-status.com/2025-05/7b32241eb41a54aa
Report: "Delays in event processing"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
Report: "Delays in event processing"
Last updateThe issue has been identified and a fix is being implemented.
Report: "US spans ingestion delayed"
Last update: This incident has been resolved.
We're continuing to process our backlog and monitor.
We've addressed the issue and are currently processing our backlog.
We're currently experiencing a delay of about 11 minutes in the ingestion of spans in our US region and we're investigating.
Report: "US spans ingestion delayed"
Last updateWe're currently experiencing a delay of about 11 minutes in the ingestion of spans in our US region and we're investigating.
Report: "Database maintenance in EU region"
Last update: We will be performing routine database maintenance in our European region. During this time, ingestion may be delayed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Front end 502s in US region"
Last update: This incident has been resolved.
A fix has been implemented, and we're continuing to monitor for recovery.
We are continuing to investigate errors with our load-balancing infrastructure.
We are currently investigating an increase in 502 status responses in our US-hosted region.
Report: "Front end 502s in US region"
Last updateWe are currently investigating an increase in status 502 in our US-hosted region.
Report: "Elevated error rates when accessing newly-created organizations in US region"
Last update: This incident has been resolved.
We've identified the issue, and pushed a mitigation which should restore access while we continue to work on the underlying problem.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are still investigating this issue.
We are still investigating this issue.
We are currently investigating an increase in error rates when accessing newly-created organizations in the US region.
Report: "Elevated error rates when accessing newly-created organizations in US region"
Last updateWe are currently investigating an increase in error rates when accessing newly-created organizations in the US region.
Report: "Elevated 500s in US region"
Last update: Sentry's front end returned a higher-than-normal number of 500-series statuses, starting at roughly 23:55 UTC on Saturday 2025-05-03, and ending at roughly 00:06 UTC on Sunday 2025-05-04. Ingestion during this time was delayed by a few minutes, but otherwise unaffected.
Report: "Elevated 500s in US region"
Last updateSentry's front end returned a higher-than-normal number of 500-series statuses, starting at roughly 23:55 UTC on Saturday 2025-05-03, and ending at roughly 00:06 UTC on Sunday 2025-05-04. Ingestion during this time was delayed by a few minutes, but otherwise unaffected.
Report: "API latency"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to test potential workarounds.
We are testing some potential workarounds for this incident.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
We are currently investigating increased latency on the Sentry API.
Report: "API latency"
Last updateWe are currently investigating increased latency on the sentry API.
Report: "Ingestion issues for Sentry US"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A spike in traffic caused a temporary degradation for one of our regional edge clusters in North America. After scaling the affected systems, we are back to normal operation.
We are currently investigating this issue.
Report: "Ingestion issues for Sentry US"
Last updateWe are currently investigating this issue.
Report: "Elevated UI errors for EU customers"
Last update: This incident has been resolved.
We are currently investigating a user interface issue affecting a subset of our customers. Some users may experience display inconsistencies or missing elements within the application. Our engineering team is actively working to identify the root cause and implement a fix. We will provide an update as soon as more information is available.
Report: "Elevated UI errors for EU customers"
Last updateWe are currently investigating a user interface issue affecting a subset of our customers. Some users may experience display inconsistencies or missing elements within the application.Our engineering team is actively working to identify the root cause and implement a fix. We will provide an update as soon as more information is available.
Report: "Delayed profile ingestion"
Last update: The incident has been resolved. Due to scaling issues in our internal systems, we were unable to process ingested profiles between 13:26 UTC and 13:55 UTC.
A fix has been implemented and we are monitoring the results.
We are currently investigating reports of delayed ingestion of profiles for all customers on sentry.io (US).
Report: "Delayed profile ingestion"
Last updateWe are currently investigating reports of delayed ingestion of profiles for all customers on sentry.io (US).
Report: "Degraded API response times"
Last update: This incident has been resolved.
Fixes to improve latency have been applied, and additional improvements are being deployed.
We're investigating an issue causing increased latency for some users and working on a fix.
Report: "Degraded API response times"
Last updateWe're investigating an issue causing increased latency for some users and working on a fix.
Report: "Errors ingestion delayed"
Last update: The backlog has been processed and all delayed data has been backfilled.
Errors ingestion has resumed normal operation. New events will show up in realtime while the remaining backlog of events is being processed and backfilled.
A fix has been implemented and we are monitoring the results.
We are currently investigating reports of delayed ingestion of errors for all customers on sentry.io (US).
Report: "Errors ingestion delayed"
Last updateWe are currently investigating reports of delayed ingestion of errors for all customers on sentry.io (US).
Report: "Elevated error rates for EU site users"
Last update: From 19:15 until 20:00 UTC, users of the Sentry.io site in the European region may have experienced elevated error rates due to an unexpected fault in an infrastructure update. Event ingestion continued to operate as expected during this period.
Report: "Elevated error rates for EU site users"
Last updateFrom 19:15 until 20:00 UTC, users using the Sentry.io site in the European region may have experienced elevated error rates due to an unexpected fault in an infrastructure update. Event ingestion continued to operate as expected during this period.
Report: "False positive uptime alerts for Vercel-hosted projects"
Last update: This incident has been resolved.
We've identified the problem and are working with Vercel to mitigate it.
We are still working with Vercel to identify the issue.
We are still working on this issue.
We are continuing to work on this issue.
We are continuing to investigate this issue.
We're investigating false-positive Uptime alerts for Vercel-hosted projects, and have deactivated alerts for all Vercel-related uptime checks while we resolve this issue.
Report: "False positive uptime alerts for Vercel-hosted projects"
Last updateThis incident has been resolved.
We've identified the problem and are working with Vercel to mitigate it.
We are still working with Vercel to identify the issue.
We are still working on this issue.
We are continuing to work on this issue.
We are continuing to investigate this issue.
We're investigating false-positive Uptime alerts for Vercel-hosted projects, and have deactivated alerts for all Vercel-related uptime checks while we resolve this issue.
Report: "EAP clickhouse cluster in DE overwhelmed"
Last updateEAP cluster is functioning normally again.
We are currently investigating this issue
Report: "EAP clickhouse cluster in DE overwhelmed"
Last updateEAP cluster is functioning normally again.
We are currently investigating this issue
Report: "Mail delivery delays in eu region."
Last updateEmail delivery in the EU region is back to normal.
A mitigation has been put in place. We are continuing to monitor email delivery in the EU region for any further issues.
We are continuing to monitor for any further issues.
Email delivery in the EU region is recovering.
Our email provider in the EU region is having a partial outage, so email delivery will be temporarily inconsistent.
Report: "Mail delivery delays in eu region."
Last updateEmail delivery in the EU region is back to normal.
A mitigation has been put in place. We are continuing to monitor email delivery in the EU region for any further issues.
We are continuing to monitor for any further issues.
Email delivery in the EU region is recovering.
Our email provider in the EU region is having a partial outage, so email delivery will be temporarily inconsistent.
Report: "Intermittent errors in EU region"
Last update: Between 01:06 and 12:52 UTC on March 12, we experienced a networking issue with several of our API canary deployments. Approximately 0.2% of API requests were failing during this time. The issue was resolved and our API is operating as expected.
We've implemented a fix and the errors have stopped, but we’re monitoring just in case.
We're currently investigating reports of an issue that is causing intermittent errors for a small number of customers across different parts of our product and will provide further updates as soon as we have more information.
Report: "Delay in profile and errors ingestion"
Last update: This issue has been resolved and old events will be backfilled.
Realtime processing is back to normal. We are continuing to work on backfilling the delayed data.
Some data was dropped between 21:43 and 21:53 UTC.
We've mitigated the root cause and are working on processing the backlog.
We've made infrastructure changes to mitigate the problem and we're working with our vendor. We will update again soon.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
We are currently investigating a delay in profile ingestion.
Report: "Delayed errors ingestion."
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Some customers may experience delayed errors ingestion. We are investigating the issue.
Report: "Sentry.io is unavailable"
Last update: Between 15:54 UTC and 16:09 UTC, we experienced issues with the availability of our dashboard and several APIs; ingestion was not impacted. The issue was resolved and all components are operating as expected.
We have recovered from the earlier issues in our system. We are continuing to monitor the situation.
We are currently investigating this issue.
Report: "Issues with Azure DevOps social sign-in"
Last update: Azure DevOps social sign-ins should work properly now.
We are investigating issues with Azure DevOps social sign-in.
Report: "Slack notification failures"
Last update: The issue was resolved and Slack notifications are operating as expected.
We are currently experiencing issues with sending notifications to Slack due to an ongoing incident: https://slack-status.com/2025-02/1b757d1d0f444c34
Report: "Uptime Detection Issues"
Last update: This incident has been resolved.
The increase in false positive timeouts has recovered. We’ve returned the failure threshold to its normal level and are continuing to monitor.
We’re currently experiencing an increase in false positive timeouts. We’ve temporarily raised the failure threshold to 6 checks while we investigate.
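The failure threshold mentioned above is the number of consecutive failed checks required before an uptime alert fires; raising it to 6 trades slower detection for fewer false positives. A minimal Python sketch of that logic, with hypothetical names rather than Sentry's actual implementation:

    class UptimeMonitor:
        def __init__(self, failure_threshold: int = 6) -> None:
            # 6 mirrors the temporarily raised threshold described above.
            self.failure_threshold = failure_threshold
            self.consecutive_failures = 0

        def record_check(self, succeeded: bool) -> bool:
            # Returns True when the alert should fire.
            if succeeded:
                self.consecutive_failures = 0
                return False
            self.consecutive_failures += 1
            return self.consecutive_failures >= self.failure_threshold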
Report: "Database issue in US region"
Last update: Between 14:11 UTC and 14:17 UTC, we experienced problems with our main database, which caused short periods of dashboard unavailability, web API unavailability, and a small number of processing errors.
Report: "Uptime monitoring feature delays in EU"
Last update: This incident has been resolved.
We're experiencing high ingestion latency for the uptime monitoring feature in the EU region. We're currently investigating.
Report: "Email Notifications in EU not being sent"
Last update: This incident has been resolved.
Notifications in the EU region are being sent once more. We are monitoring the system to make sure it's stable.
We are continuing to work with our email provider to resolve access, as well as investigating alternatives.
We are currently working with our provider to resolve the issue.
Report: "US Performance alerts delay between 15:30 and 16:05 UTC"
Last update: This incident has been resolved.
Performance alert delays have been resolved; we're continuing to monitor.
Between 15:30 and 16:05 UTC we experienced delays of up to 7 minutes in our performance alerting in our US region.
Report: "EU region uptime checks delayed"
Last update: This incident has been resolved.
Uptime checks have recovered and we are now monitoring the situation.
We have identified the issue and issued a fix.
Report: "EU region profiles ingestion delay"
Last update: The delayed ingestion of profiles in our EU region has been resolved.
We have identified an issue causing profile ingestion to be delayed by up to 30 min in our EU region and are working on a fix.
Report: "Delayed ingestion in US region"
Last update: All backlogs have been consumed. This incident has been resolved.
We've finished consuming the backlog on errors ingestion, but we're going to continue monitoring the situation.
We've finished consuming the profiles backlog, but we're still working through the errors backlog. (The remaining backlog only consists of events received before roughly 17:30 UTC on 2025-02-03; events received since then should be processed normally.)
We're still processing backlogged errors and profiles.
We've split ingestion so that new errors are handled right away while we continue to work through our backlog. Attachment ingestion is no longer backlogged, but we are seeing new delays in profile ingestion that we're working to correct.
We are still consuming backlogged events.
We've implemented a fix, and we're now consuming our backlog.
We've identified a mitigation for the delay, and are currently working to implement it.
We have identified a potential cause, and are investigating further.
We are currently investigating delayed ingestion in the US region.
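The split-ingestion approach described in the updates above keeps fresh events flowing in real time while older, backlogged events drain in the background. A minimal Python sketch of that pattern, assuming a simple two-queue setup; this is illustrative only, not Sentry's actual pipeline:

    import queue

    realtime: queue.Queue = queue.Queue()  # newly received events
    backlog: queue.Queue = queue.Queue()   # events delayed by the incident

    def next_event():
        # Always serve fresh events first; drain the backlog only when idle.
        try:
            return realtime.get_nowait()
        except queue.Empty:
            pass
        try:
            return backlog.get_nowait()
        except queue.Empty:
            return None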
Report: "Authentication issues affecting SSO and 2FA"
Last update: The issue preventing 2FA codes from being accepted has been resolved.
We have identified an issue preventing the acceptance of 2FA codes and are working on a fix.
We are currently experiencing authentication issues, affecting SSO and 2FA features. Our team is actively investigating the issue and working to resolve it as quickly as possible.
Report: "EU region alert emails not being sent"
Last update: The issue causing alert emails to not send in our EU region has been resolved.
We are continuing to work on a fix for issue alert emails not being sent in our EU region.
We have identified an issue causing alert emails to not be sent in our EU region as of 15:19 UTC and are working on a fix. US region alert emails are not affected.
Report: "Delay in transaction ingestion"
Last update: Transaction ingestion is back to normal.
A fix has been implemented and we are monitoring the results.
We have implemented a fix, and transaction ingestion is beginning to catch up.
We are investigating a delay in transaction ingestion.
Report: "Some EU Performance aggregate data delayed up to 6 hours"
Last update: This incident has been resolved.
Between January 9th 08:00 and 16:00 UTC, our EU region experienced a delay of up to 6 hours in the ingestion of some aggregate data in our performance product, such as measurements for transactions-per-minute.
Report: "Attachment delays"
Last update: This incident has been resolved.
Attachment processing rates are back to normal; we are monitoring the situation.
Attachment processing latency is back under 1 minute. We are still dealing with elevated numbers of attachments and are working to return to our normal processing rates.
We've caught up with backlogs on about 50% of our attachment partitions. If anything changes, we'll provide another update.
We are investigating a backlog in attachment processing. Attachments are currently delayed by up to an hour. We are processing the backlog.
Report: "Errors, transactions and attachments ingestion issues in US region"
Last update: Between 10:30 and 12:15 Pacific Time, we experienced ingestion delays and dropped messages in the errors, transactions, and attachments ingestion pipelines, due to a component failure related to underlying infrastructure from our cloud provider. Roughly 1 in 1000 messages were not processed, and dropped events did not count against customer quotas. The issue is now resolved.
We are continuing to monitor for any further issues.
Ingest latencies have recovered, and we are no longer dropping events. We estimate that 1 in 1000 events were dropped between 10:30 and 12:00 Pacific Time. The maximum ingestion delay was around 35 minutes. Dropped events were not counted against customer quotas.
We are currently experiencing delays and dropped messages on the events and transactions ingestion pipelines in the US region. We are currently working with our cloud provider to investigate the issue.
Report: "Native stack traces processing failures"
Last update: This incident has been resolved.
We're experiencing problems with native stack trace processing. Some crashes might fail to symbolicate. We have identified the issue and are working on a solution.
Report: "Delays in spike protection enforcement"
Last update: Between 10:00 and 12:00 Pacific Time, we experienced delays in spike protection enforcement.
Report: "Inconsistencies in metrics data and alerts"
Last update: Between Dec 13 ~4PM PT and Dec 18 ~12PM PT, we experienced an issue with metric data inconsistencies in the UI. The issue is now resolved.
Data for the missing time window (Friday ~4PM PT to Tuesday ~4PM PT in the US region) has been restored, and timeseries data in the UI should be displaying normally. We are continuing to monitor just in case.
A percentage of metrics alerts (~17% of timeseries affected) fired inconsistently or incorrectly between Friday ~4PM PT and Tuesday ~4PM PT, but alerts should now be working properly. Data from those timeseries in the UI is still incorrect for that time window and we’re working on restoring it (new data from Tuesday ~4PM PT onward is correct).
Report: "US region performance alerts & ingestion delayed"
Last update: The issue with US region performance alerts and ingestion has been resolved.
US region performance alerts and ingestion are currently delayed by up to 7 minutes; we're continuing to investigate.
Starting at 15:36 UTC, US region performance alerts and ingestion are delayed by up to 20 minutes; we're currently investigating.
Report: "US performance alerts delayed 10 minutes"
Last update: This incident has been resolved.
The issue with delayed performance alerts has been resolved; we're continuing to monitor.
Performance alerts are delayed by up to 12 minutes; we're continuing to investigate the issue.
We are continuing to investigate this issue.
We are continuing to investigate this issue.
Starting at 14:00 UTC, performance alerts in our US region have been delayed by up to 10 minutes; we're investigating the issue.
Report: "US performance metrics delayed ingestion & alerts"
Last update: This incident has been resolved.
We have fixed the issue with US performance metrics ingestion & alerting and are continuing to monitor them.
We are currently investigating an issue with US performance metrics. US region ingestion and alerts are currently delayed by up to 15 minutes.
Report: "EU profile ingestion issue"
Last update: Between December 5 21:04 UTC and December 6 00:42 UTC, we experienced an interruption in the processing of profiles in our EU region which led to some profile events being lost. We have corrected the issue and profiles are now being processed properly.
Report: "US and EU region delayed ingestion starting at 20:51 UTC"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
We experienced a delay of ingestion for all event types in our US and EU regions starting at 20:51 UTC. Our US region experienced a delay of 8 minutes while our EU region experienced a delay of 20 minutes. We have resolved the issue and are monitoring our ingestion.
Report: "Sentry Denial of Service"
Last update: On November 27 between 21:46 and 21:57 UTC, there was a denial of service attack on Sentry that sent up to 10x the normal volume of traffic. Our systems scaled up within 6 minutes to accommodate this traffic. Between 21:46 and 21:52 UTC, some customers may have experienced traffic loss of up to 80%. We are implementing changes to our ingestion layer to reduce the impact of similar attacks in the future.