Historical record of incidents for Pipedream
Report: "Experiencing degraded performance across authentication and deployments"
Last update: We are currently investigating this issue.
Report: "Delayed Connect Webhook Trigger Emits"
Last update: Our data processing infrastructure is running behind, which is causing delays for some Connect webhook triggers. No data has been lost, and the system should catch up shortly.
Report: "Some Connect Webhooks Experiencing Delays with Execution"
Last update: We are no longer seeing issues with Connect Webhooks. Triggers are executing normally once again.
Connect Webhooks have now resumed their normal execution.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Some Workspaces Experiencing Delays with Trigger Execution"
Last update: We are no longer seeing issues with the impacted workflows, and all workflows are running properly again.
The issue impacting workflow triggers appears to be resolved. Events are processing again and we are monitoring our systems to ensure that triggers continue working as expected.
Some customers are reporting an issue with workflow triggers.
Report: "Elevated timeouts and error rates"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Elevated Error Rates"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Gateway timeouts in HTTP workflow executions"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Dropbox Source Events Delayed"
Last update: This incident has been resolved.
The issue has been identified and a fix is being implemented.
Report: "Experiencing internal errors with workflow executions"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Google Drive API Outage: HTTP 429 Errors"
Last update: The issue with the Google Drive API (HTTP 429 errors) has been resolved. Google has restored normal service, and all file uploads through the API are now functioning correctly. We appreciate your patience during this outage.
We are currently experiencing issues due to an outage of the Google Drive API. File uploads are being rejected, and all requests are receiving HTTP 429 errors. We are monitoring the situation and will update once normal service is restored.
Report: "Experiencing significant delay with HTTP event sources"
Last update: Services are back up and events have caught up processing.
Services are back online. We'll continue monitoring.
The backend service powering HTTP events encountered an issue during a maintenance failover. We're waiting for services to come back up.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "Clicking into any workflow yields 404 error"
Last update: This incident has been resolved.
We've identified a bug that causes workflows to throw a 404 error. We're addressing it and will update here when it's resolved.
Report: "Event sources processing delay"
Last update: We've disabled the traffic source causing the spike, and delays are back to normal.
We're seeing a spike in volume on event sources, leading to a delay in event processing. We're investigating and will post updates ASAP.
Report: "Event History inaccessible"
Last update: The event history UI is back, and events have caught up processing.
The backend service powering the event history UI (https://pipedream.com/@/event-history) is down; we're working to recover it.
Report: "Degraded functionality for git-synced projects"
Last update: This incident has been resolved.
Syncing changes with GitHub is currently unavailable due to a downstream outage. See https://www.githubstatus.com/ for details. It is still possible to view and monitor events in GitHub-synced projects. Workflow execution for synced projects is generally unaffected, but workflows that depend on access to GitHub webhooks or GitHub APIs may also be affected by this outage.
Report: "Pipedream.com 502 error"
Last update: This incident has been resolved.
We are currently investigating the pipedream.com frontend returning a 502 error.
Report: "Google Drive and Google Sheets triggers may be delayed"
Last update: This incident has been resolved.
Google Drive sources seem to be recovering, and we are continuing to monitor Google Sheets sources.
We are currently investigating an issue with Pipedream's integration with Google Drive and Google Sheets. We are receiving fewer webhooks from Google Drive and Google Sheets than expected. As a result, some users may experience delays with their workflows which use Google's (Instant) triggers.
Report: "502 Bad Gateway on pipedream.com"
Last update: This incident has been resolved.
We're seeing elevated CPU usage on pipedream.com services, causing the service to return a 502 error.
Report: "Xfinity outage affecting access to Pipedream services"
Last update: The Xfinity outage has been resolved.
We've seen reports of some users failing to access Pipedream HTTP endpoints and the Pipedream API. We've isolated this to an issue with the Xfinity ISP and will post updates here as services become available.
Report: "Pipedream API down, affecting https://pipedream.com"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "v1 email sources not emitting events"
Last update: This incident has been resolved.
Email sources tied to v1 workflows are failing to emit events. We're investigating.
Report: "HTTP Sources Outage"
Last update: This incident has been resolved.
At this moment, timer-based sources and workflows are degraded due to a backlog of events that still need to be processed. We will continue monitoring the situation until it comes back to normal.
The issue has been identified and we were able to mitigate its impact to a good extent. We will continue monitoring the issue until its full resolution.
We identified a potential root cause for the downtime, and are working towards mitigating its impact. This will temporarily affect our website, public API, and platform, likely for a short period.
We are experiencing timeouts from HTTP sources, and we're investigating the causes of them.
Report: "AI Code Generation outage"
Last update: This incident has been resolved.
An upstream AI provider is having a major outage. AI code generation in Node.js code steps is affected.
Report: "Timer-based triggers not running"
Last update: This incident has been resolved.
We've reverted the code that introduced the bug. Note that this only affects v1 workflow triggers, not those using v2 (the latest version of workflows).
Timer-based triggers — like the Scheduler trigger, and other app-based triggers that run on a schedule — stopped running around 21:30 UTC. Our team is investigating and will post updates here.
Report: "GitHub incident affecting Pipedream GitHub sync"
Last update: This incident has been resolved.
A GitHub incident (https://www.githubstatus.com/) is affecting our GitHub integration. When GitHub resolves this, we'll update you here.
Report: "Package installation through npm yielding "Failed to fetch" errors"
Last update: This incident has been resolved.
We're seeing 500 errors reported from npm trying to install packages. You may see a "Failed to fetch" error when trying to test / deploy code steps that use third-party npm packages. We'll update this when the incident is resolved.
Report: "S3 destination ($.send.s3) writes delayed"
Last update: This incident has been resolved.
S3 object delivery is delayed because of a bad configuration change. We've corrected the change. All objects sent to $.send.s3 have been retained, but delivery to the destination bucket will be delayed while we're recovering from the change.
Report: "Unable to scroll in workflow builder"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Slowness in the UI"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "New workflow creation not working"
Last update: This incident has been resolved.
Building a new workflow results in a 404. We've found the cause and are addressing it.
Report: "Deploying and testing code fails in Pipedream UI"
Last update: We've shipped a fix. This should be resolved.
We're seeing reports of users not able to deploy new workflows / sources, or test new code. We've identified the commit that introduced the bug and are reverting now.
Report: "Slow builder loading times"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Pipedream.com outage"
Last update: The incident should be resolved. We updated a node group in our Kubernetes production cluster and errantly removed a key policy in that update, so pods could not be scheduled on the new nodes. We identified the issue, and the service was back by 19:53 UTC.
Services are beginning to come back up. We're continuing to monitor.
We are currently investigating this issue.
Report: "Workflows failing to trigger"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating an issue that's causing workflows to fail to trigger executions.
Report: "Spike in Pipedream Internal Errors due to AWS incident"
Last update: Incoming workflow events should be processing correctly again. Please let us know if you continue to see issues.
Workflows continue to fail due to an ongoing AWS incident. You can follow progress at https://health.aws.amazon.com/health/status
AWS is encountering an incident with multiple services in the us-east-1 region. We're escalating with our AWS team, and will provide an update ASAP.
Report: "Pipedream API error causing login failures"
Last update: This incident has been resolved.
Redis has deployed new TLS certificates, and our services are successfully connecting to these databases again. Logins are working, and we're monitoring for any unresolved issues.
The certificate on one of the Redis clusters we connect to expired, causing connection issues from our API. The Redis team is working to resolve.
We've identified the source of the issue with our Redis cluster, and are actively working with the Redis team to troubleshoot.
We're investigating an issue with the Pipedream API. This is causing login failures and issues with the Pipedream REST API. Workflows appear to be running as normal. We'll update with status ASAP.
Report: "Triggers failing with 502 response"
Last update: This incident has been resolved.
A fix has been implemented, and we are monitoring the results.
We are currently investigating an issue that is causing intermittent 502 errors for triggers.
Report: "New user signups temporarily disabled"
Last update: This incident has been resolved.
We've re-enabled Google and GitHub signup, but have kept new user creation via username / password temporarily disabled while we mitigate the attack. We've continued to see no impact to existing users.
We're mitigating a DDoS attack against Pipedream, and are temporarily disabling new user signups to limit the impact. There should be no impact to existing customers. We'll provide another update here ASAP.
Report: "High latency, intermittent failure on https://pipedream.com UI"
Last update: This should be resolved. A deploy of our frontend code caused a small service interruption on one pod in our cluster. Since only one pod was failing, requests to https://pipedream.com would fail intermittently.
We're investigating issues reported from the https://pipedream.com UI.
Report: "HTTP 5XX errors"
Last update: A large traffic spike caused a specific system to be temporarily overloaded. We scaled up that system to handle the additional load. The incident lasted roughly 8 minutes.
We're seeing 5XX errors reported from HTTP requests to Pipedream workflows, and are investigating.
Report: "Errors loading events, building workflows in the pipedream.com UI"
Last update: This incident has been resolved.
We've seen reports of errors in the https://pipedream.com frontend. Production workflows should be processing events correctly. This is the result of an issue migrating data between two Redis clusters (which power some event data in the UI). We're addressing the core issues and working on a fix. We'll let you know when the service is operational.
Report: "Workflow Builder tests not working"
Last update: Services are back up. Both https://pipedream.com and all HTTP endpoints should be functioning as normal again.
We've resolved the issue with workflow builder tests. We're also seeing 5XX errors returned for some HTTP requests, and are investigating that now.
We've identified an issue with builder tests, and are working on a fix.
Report: "pipedream.com UI down"
Last update: This has been resolved. https://pipedream.com should now work fine.
We deployed bad code and immediately noticed the issue. We're deploying a fix.
Report: "Spike in Pipedream Internal Errors for workflows"
Last update: All systems are operational. Incoming events should be processed successfully. We've also processed the backlog of events that arrived during the first part of the incident, from ~1:00 UTC to 3:51 UTC. From 3:51 UTC to 6:04 UTC, we had to disable incoming events due to some of the load issues we were experiencing. Events sent during this time may be retried by the source services. At 6:04 UTC, we re-enabled the collection of incoming events. We'll follow up with a detailed retrospective of this incident as soon as possible.
AWS has shipped a fix for the issue. We're restarting our services and bringing workflows back online. We'll send another update as soon as that's done.
AWS is still working on a patch for the issue. We're still in active communication with them.
The AWS Lambda team confirmed the issue was due to the scale of volume Pipedream is running on Lambda. They're working on a fix now. We'll update this incident again soon.
We are continuing to work on a fix for this issue.
AWS has escalated to more teams internally. The issue is still ongoing.
We're continuing to discuss this with the AWS team. The issue is still ongoing.
AWS Lambda — part of the service we use to run workflows — has identified a service issue. They're working on it, and we'll communicate updates here.
We're seeing a spike in Pipedream Internal Errors across workflows, and we're investigating.
Report: "Downgraded performance of some event sources"
Last update: This issue should be resolved. We're processing a small backlog of events from the incident, but those should be processed soon, and new events should trigger as normal.
We are currently investigating this issue.
Report: "Builder errors"
Last update: This incident has been resolved.
We are investigating possible errors in the builder.
Report: "https://pipedream.com UI fails to load workflows"
Last update: https://pipedream.com should be loading workflows again. The issue was introduced by a bad deploy of new code. We shipped a new version of our API that migrated our database schema to support a new feature. The API deploy failed, but the frontend was expecting the new code, so it failed to render the workflow UI correctly. We rolled back the deploy, and the site should be working.
https://pipedream.com fails to return the list of existing workflows from the UI. Workflows are still running in production. We're looking into it and will update this incident as soon as possible.
Report: "Event sources failing intermittently"
Last update: This incident has been resolved.
We've addressed the root issue. Event sources should be running and emitting events to workflows. We're monitoring to ensure no errors happen over the next few minutes.
We're investigating an issue with event sources failing to communicate with an internal database, and are looking into it.
Report: "404 errors when creating new workflows"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "HTTP Destination ($.send.http) failing to send"
Last update: This incident has been resolved.
We've identified an issue with the $.send.http service due to a Redis cluster migration. We'll update this incident as soon as a fix is out.
Report: "HTTP endpoints returning 500 errors"
Last update: This incident has been resolved.
We've pushed out a fix, and HTTP endpoints are recovering. We're monitoring and will resolve this incident once traffic is stable.
Our core HTTP service is returning 500 errors. HTTP-triggered event sources or workflows may be failing to process events. We're investigating the issue and will provide an update shortly.