Historical record of incidents for LogRocket
Report: "Platform wide issues"
Last update: We are investigating a widespread issue with our cloud provider causing ingestion issues and limited access to the dashboard.
Report: "Degraded dashboard performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Ingestion delays"
Last update: This incident has been resolved.
We've applied a fix and are monitoring the results.
We are continuing to work with a vendor on identifying the root cause of our ingestion delays.
We are investigating issues causing ingestion delays. We're working with a vendor to identify the root cause of the problem.
Report: "Ingestion delays"
Last update: This incident has been resolved.
We have identified a potential cause and are observing significant improvements to ingestion delays. We are continuing to monitor.
Dashboard performance should be improved. We are continuing to work with a vendor on identifying the root cause of our ingestion delays.
We are investigating issues causing ingestion delays and performance issues loading the dashboard and metrics.
Report: "Delayed Streaming Data Export"
Last update: This incident has been resolved.
We are continuing to work on a fix for this issue.
We've resolved the issue for most Streaming Data Export destinations. A few customers have been temporarily disabled until another fix goes out in the morning, at which time they will be caught up and we will resolve the incident.
We are continuing to investigate this issue.
The Streaming Data Export service is delayed. We've identified the issue and are working to remediate.
Report: "Infrastructure instability"
Last update: The system has stabilized and our processing backlog is recovering.
We are seeing significant infrastructure instability from our hosting provider.
Report: "Ingestion Instability"
Last update: The backlog has fully recovered.
We are continuing to work down a backlog of ingestion from the initial outage, but have otherwise recovered from the outage.
Ingestion has stabilized; we are continuing to monitor the recovery.
We are investigating instability in our ingestion endpoints.
Report: "Degraded ingestion and alerting performance"
Last update: This incident has been resolved.
A fix has been implemented and ingestion is recovering.
We've identified an issue with our analytics database resulting in delayed ingestion and degraded alerting performance, and we're working on a solution.
Report: "Application sporadically unavailable"
Last update: This incident has been resolved.
We're seeing elevated error rates loading the UI.
Report: "Streaming Data Export paused"
Last update: This incident has been resolved.
We've monitored several exports and believe this is resolved.
The most recent export window ran successfully but we're continuing to monitor it for issues.
Our Streaming Data Export system is not currently exporting data; we are working to resolve this. When it is re-enabled, it will catch up on data that would have been exported earlier.
Report: "Ingestion and slow search results"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
System performance has been restored. We are continuing to monitor as we work through the remaining backlog of events to process.
Our vendor has identified a root cause and we are working to restore the search database to full performance.
We are continuing to investigate this issue.
We are investigating an issue with one of our search databases. Ingestion into this system is currently delayed, and loading data on the dashboard may be slow or time out. Session data collection is not impacted.
Report: "Increased App Latency"
Last update: This incident has been resolved.
We've identified the source of the instability and are working through recovery.
We are investigating an issue with slow and/or unresponsive application load times.
Report: "Degraded Search and Analytics Performance"
Last update: This incident has been resolved.
We've stabilized our infrastructure and are beginning to burn down our queues.
We are investigating an issue with our analytics data stores. Dashboard performance and data freshness may be degraded.
Report: "LogRocket dashboard degraded performance"
Last update: This incident has been resolved.
The issue has been identified and a fix is being implemented. Users may experience missing sourcemap uploads as we resolve the issue.
We are currently investigating degraded performance of the LogRocket dashboard. Data collection is unaffected.
Report: "Secondary Search Indexing Delays"
Last update: This incident has been resolved.
We've worked with our vendor to implement a fix; we're starting to catch up and are monitoring systems as we do.
We are currently investigating an issue with secondary search indexing. Data collection and session replay are not impacted.
Report: "Secondary Search Delays"
Last update: This incident has been resolved.
We have addressed the issue and are continuing to monitor the situation. Some secondary search indexing remains delayed at this time.
We are currently investigating an issue with secondary search indexing. Data collection and session replay are not impacted.
Report: "Secondary search indexing delays"
Last update: This incident has been resolved.
Processing is caught up; we're monitoring to make sure the datastore remains stable.
We are investigating an issue with secondary search indexing. Data collection and session replay are not impacted.
Report: "Secondary search indexing delays"
Last update: The backlog has been completely processed and all systems are now up to date.
We have resolved the issue and anticipate the backlog of events will take 1 to 2 hours to complete.
We have identified a cause and are working towards a resolution.
We are investigating an issue with secondary search indexing. Data collection and session replay are not impacted.
Report: "Processing delays impacting a subset of metrics and filters"
Last update: After working with our vendor we believe things stabilized overnight. We will be monitoring throughout the day on Thursday.
We are working with our vendor to implement a resolution to our secondary search databases stability issues and expect to have the issue resolved completely in the next few hours.
We are continuing efforts with our vendor to resolve a stability issue with a secondary search database.
Processing systems are now up-to-date, but we are continuing to investigate and monitor the service causing the original delay.
We have identified an issue causing the performance degradation and are working with a vendor to resolve the issue.
We are investigating processing delays that impact a small subset of metrics and filters. Session collection is not impacted, and the majority of metrics and filters are not affected by the processing delay. We will update as we identify and work to resolve the processing delay.
Report: "Ingestion Delays"
Last update: This incident has been resolved.
We've identified the problem and are working on a resolution.
We're investigating ingestion delays in our metrics pipeline.
Report: "Performance issues with primary search database"
Last update: We've seen no additional performance degradation after our mitigation work.
We have completed processing of our primary indexing backlog; session filtering and metrics will be mostly up-to-date. Our secondary processing system, which includes some metrics, issues and alerting, is currently processing.
We have identified the cause of the issue, applied a fix and have begun processing the backlog of events.
We have restored some indexing to the database but are continuing to fully resolve the situation.
We are continuing to work with our vendor to remedy the situation. Session data collection continues to be unaffected, but secondary indexing that powers search and metrics continues to be delayed.
We are continuing to work with our vendor to resolve the issue.
We are continuing to work with our vendor to resolve the issue with ingesting new data.
We have identified the issue with our database vendor and are working with them to remedy the situation.
We are currently investigating an issue impacting our primary search database. Searching for sessions, loading metric charts, issues and alerting are all impacted. Session data collection has not been impacted.
Report: "Search and metrics data performance degraded"
Last update: All systems are functioning normally again.
We have completed recovery work on the data store and are beginning to backfill search, metrics, and issues data. This process will take several hours to complete and we will continue to monitor the system during that time. No data will be lost. While the backfill is in progress, metrics alerts will not trigger or resolve.
We are investigating performance and availability issues with our primary search database. This impacts searching for sessions, loading metrics, and ingestion of event data into the search database. Session recording ingestion is not impacted by this performance issue.
Report: "We are currently investigating an issue with session ingestion."
Last update: This incident has been resolved.
A fix has been deployed and we're monitoring as session collection catches up.
We have identified the issue with session ingestion and are working to deploy a solution.
Report: "Sporadic performance and stability issues"
Last update: We believe the provider issues have been resolved and will be working with them to ensure this does not happen again.
The largest piece of a remediation has been rolled out by our provider. We expect further updates to take time, but believe the most disruptive issue has been resolved.
We're experiencing constrained compute availability from our hosting provider. This may affect performance and stability of the user-facing dashboards as our infrastructure prioritizes data collection and integrity. Our team is working to increase capacity.
Report: "Dashboard instability"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating an issue where the LogRocket dashboard is not serving requests.
Report: "Search Performance Degradation"
Last update: We've worked around the issue and will be performing larger maintenance overnight, but at this time the issue appears to be resolved.
We are working with our vendor to resolve performance issues with our primary search database. The issue is impacting performance of searching for sessions, loading metrics, and ingestion of event data into the search database. Session recording ingestion is not impacted by this performance issue.
Report: "Search Performance Degredation"
Last update: The problematic node has been replaced in our search cluster and performance should be returning to normal levels.
We have identified a misbehaving node in our search cluster; our hosting provider is working to repair the node.
We are continuing to investigate this performance issue with our hosting provider.
We are investigating performance issues with our primary search database. The issue is impacting performance of searching for sessions, loading metrics, and ingestion of event data into the search database. Session recording ingestion is not impacted by this performance issue.
Report: "Analytics Database Performance Issues"
Last update: This incident has been resolved.
We're watching the data store as processing is catching up.
The issue has been identified and we're working through a processing backlog.
We are currently investigating this issue.
Report: "Alerting and Search delays"
Last update: The incident has been resolved.
We've implemented some fixes and are waiting for a backlog to process while monitoring.
We are currently investigating an issue with our primary search cluster that is causing delays in processing some search data. Alerting is also impacted at this time. Session ingestion is not impacted and we continue to record new sessions.
Report: "Ingestion outage"
Last update: We are continuing to monitor but systems have returned to normal at this time.
We are continuing to monitor for any further issues. Alerting is impacted while we work towards a resolution.
We are continuing to monitor for any further issues.
Ingestion of new data has recovered. We are continuing to work with our vendor to better understand the failure and monitor for continued success throughout the day.
We are working with our vendor to investigate an issue with our primary database. Ingestion of new data is currently delayed.
Report: "Search and Alerting Performance Issues"
Last update: This incident has been resolved.
We've identified the problem and are working to monitor and stabilize the system before we resolve the issue.
We are currently investigating an issue with our primary search cluster that is causing severe delays in loading the dashboard and searching over sessions. Alerting is also impacted at this time. Session ingestion is not impacted and we continue to record new sessions.
Report: "Data Processing Delay"
Last update: This incident has been resolved.
We are investigating an issue with one of our data stores. This is a high-priority issue as we work to resolve the delays impacting Issues, Metrics and Alerts.
Report: "Data Processing Delays"
Last update: This incident has been resolved.
We are investigating an issue with one of our processing pipelines with our hosting provider. This is a high-priority issue as we work to resolve the delays impacting Issues, Metrics and Alerts.
Report: "Data processing delays"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
The previously identified issue was resolved, but our data processing systems have not yet fully recovered. We have been in contact and continue to work with our upstream provider on this issue.
We have identified an issue that is causing delays in our data processing. We will update here when the backlog is cleared.
Report: "Data Processing Delays"
Last update: Delayed data has been reprocessed.
Updating affected components
Updating affected components
We have identified an autoscaling issue that is causing delays in our data processing. We are adding more resources manually, and will update this status when the backlog is cleared.
Report: "Search performance degraded"
Last update: This incident has been resolved.
We are currently investigating degraded search performance on our dashboards.
Report: "Search performance degraded"
Last update: This incident has been resolved.
We are currently investigating degraded search performance on our dashboards.
Report: "SDK breaking lodash on some sites"
Last update: A bug in our build pipeline caused our SDK to expose an internal library globally. As a result, our subset of the lodash library was bound to `window._`, overwriting the existing `_` global for some customer sites. This meant that some expected library functions were not defined after our script loaded.
#### Technical Details:
* A code change imported all of lodash and accessed a method on the import instead of importing just the method. (A minimal sketch of the import difference follows this report.)
* Webpack bundled lodash, including its `window._` assignment, despite the library being used in a module import context.
#### Remediations:
* We've added additional tests to catch unwanted pollution of the window object by our SDK
* We've added an internal linting rule to avoid full lodash imports
We received customer reports that our SDK interfered with the popular lodash library, resulting in broken customer sites. We've rolled back to a known functional version of our SDK. The incident lasted for about an hour and 40 minutes.
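For readers unfamiliar with the failure mode in the postmortem above, here is a minimal TypeScript sketch of the import difference it describes. This is illustrative only, not LogRocket's actual SDK source; the module and names (`flush`, `debouncedFlush`) are hypothetical, and it assumes lodash is installed as a dependency.

```typescript
// Hypothetical SDK module; illustrative only, not LogRocket's actual code.

// Problematic pattern (roughly what the postmortem describes): importing the
// whole lodash package and calling a method on it pulls the full library
// build into the webpack bundle.
//
//   import _ from "lodash";
//   const debouncedFlush = _.debounce(flush, 500);

// Narrower pattern: import only the single method that is needed, so the
// full lodash build never ends up in the bundle.
import debounce from "lodash/debounce";

// Placeholder for whatever work the SDK batches; hypothetical.
function flush(): void {
  // ...send buffered events
}

// Debounced wrapper, identical behavior to the commented-out version above.
export const debouncedFlush = debounce(flush, 500);
```

The postmortem only says an internal linting rule was added; one plausible way to express such a rule is ESLint's built-in `no-restricted-imports` rule configured to forbid the bare `"lodash"` import path, though that specific tooling choice is an assumption here.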
Report: "Session Ingestion Delays"
Last update: Systems have remained stable while continuing to process the backlog of events. We estimate secondary processing will be live in the next 6 hours.
Primary session processing is now real-time. Secondary processing of session data continues, and will complete over the next few hours.
Operations have been restored with our primary search cluster. We have resumed processing of delayed sessions.
A new issue has been identified and we're working with our vendor to resolve the issue so we can resume processing of delayed sessions.
We are continuing to monitor as we process pending sessions. We expect session ingestion to be real-time within the hour, followed by secondary processing and alerting.
A fix has been implemented. We are monitoring closely as we begin to process pending sessions.
The issue impacting ingestion has been identified. Session ingestion is currently delayed.
We are currently investigating an issue with session ingestion.
Report: "Data Processing Delays"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are experiencing delays in our data ingestion pipeline. We have identified the issue and are working towards a resolution. No data has been lost and we expect to catch up shortly. Search, metrics, and error reporting are currently delayed.
Report: "Authentication Service Outage"
Last update: Dashboard users appear to be able to log in successfully again. We will continue monitoring the incident until our authentication provider reports that all functionality has been restored, and we will provide any relevant updates.
Our authentication provider is experiencing an outage, preventing users from authenticating and logging in to our dashboard.
Report: "Reports of errors loading the session list page"
Last update: This incident has been resolved.
We have deployed a fix and are monitoring the results.
The issue has been identified and a fix is being implemented.
We are investigating reports of intermittent errors loading the session list in the LogRocket dashboard.
Report: "Reports of dashboard not loading"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Data Processing Delays"
Last update: We have completed processing this afternoon's data.
Our data ingestion system is back at full capacity, and is in the process of catching up. We will resolve this incident when all data has been processed.
We are experiencing delays in our data ingestion pipeline. We have identified the issue and are working towards a resolution. No data has been lost and we expect to catch up shortly. Search, metrics, and error reporting are currently delayed.
Report: "Reports of dashboard not loading"
Last update: This incident has been resolved.
Dashboard service has been restored. We are continuing to investigate problems with Error Reporting and Alerts from Errors.
We are continuing to work on a fix for this issue.
An issue with a management database has been identified. We are working to restore dashboard services for impacted users.
We are continuing to work on a fix for this issue.
We are investigating reports of the dashboard not loading for customers.
Report: "Data Processing Delays"
Last update: Primary processing is now current; session listing and basic search are now up-to-date. Secondary processing (which includes errors and alerts) will catch up over the next few hours.
The issue has been resolved. We are monitoring the ingestion pipeline as delayed messages are processed. We will update again as the system catches up.
We are experiencing delays in our data ingestion pipeline. We have identified the issue and are working towards a resolution. No data has been lost and we expect to catch up shortly. Search, metrics, and error reporting are currently delayed.
Report: "Data Processing Delays"
Last update: The fix has been verified. There remains a minor delay in processing, which should be fully resolved in the next 30 minutes.
A fix has been deployed; we are monitoring closely as we process all pending events.
We are experiencing delays in our data ingestion pipeline. We have identified the issue and are working towards a resolution. No data has been lost and we expect to catch up shortly. Search, metrics, and error reporting are currently delayed.
Report: "Data Processing Delays"
Last update: Our processing completed overnight and usage has returned to normal.
Core session processing is now current. Secondary processing, which includes errors, is continuing to work through a larger backlog.
Our processing performance has been restored and we're working through our processing backlog.
We've completed some storage maintenance and are working to catch up on processing of sessions.
We are continuing to work on a fix for this issue.
We have identified the issue and are working towards a resolution. Data collection is not impacted. Search, metrics and error reporting are currently delayed.
We are continuing to investigate this issue.
We are experiencing delays in our data ingestion pipeline. No data has been lost and we expect to catch up shortly.
Report: "Session Ingestion Instability"
Last update: This incident has been resolved.
Primary service has been restored. We are continuing to monitor as the data backlog enters the system.
We've identified a problem with our CDN leading to degraded performance in our session ingestion system. New sessions and associated data may be delayed in appearing on the dashboard, and new errors/issues and alerts may also be delayed.
Report: "Session processing delays"
Last update: The search database upgrade is now in progress. Updates will be posted at https://status.logrocket.com/incidents/137k6mv80lyx.
Processing of session data may be delayed while we stabilize and prepare to upgrade our search database. Ingestion of the data is working as expected - no data is being lost - but features that rely on processing of session data (e.g., session filtering on the dashboard, metrics charts and alerting) may be impacted.
Report: "Session Ingestion Instability"
Last update: This incident has been resolved.
Session ingestion and processing have stabilized. We're continuing to monitor closely.
We're investigating degraded performance in our session ingestion and processing system. New sessions may be delayed in appearing on the dashboard.