Render

Is Render Down Right Now? Check whether there is an ongoing outage.

Render is currently Operational

Last checked from Render's official status page

Historical record of incidents for Render

Report: "Degraded performance for some users in Oregon"

Last update
investigating

Engineers have been alerted to and are investigating an issue causing performance degradation for some users in Oregon.

Report: "Connectivity issues for free Key Value services in Frankfurt"

Last update
resolved

This incident has been resolved.

monitoring

We've implemented a fix and are monitoring.

investigating

We are currently investigating this issue.

Report: "Connectivity issues for free Key Value services in Frankfurt"

Last update
Investigating

We are currently investigating this issue.

Report: "Increased restarts in all regions"

Last update
resolved

Between 2025-05-07 and 2025-05-13, services may have experienced an increase in instances being evicted and restarting. This was due to a failure of a routine cleanup task, which failed in a way that did not trigger our monitoring. We have fixed that task as well as improved monitoring and alerting to prevent this from recurring.
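The lesson in this report is a routine task that failed without tripping any alert. Purely as an illustrative pattern (not Render's internals), a periodic job can be wrapped so that both failures and a missing success heartbeat reach monitoring; `emit_metric` below is a hypothetical stand-in for whatever metrics/alerting backend is in use.

```python
import logging
from typing import Callable

log = logging.getLogger("cleanup")

def emit_metric(name: str) -> None:
    # Hypothetical placeholder: push a counter to a metrics system that alerts
    # on failure counts and on the absence of the success heartbeat.
    log.info("metric incremented: %s", name)

def run_cleanup(task: Callable[[], None]) -> None:
    try:
        task()
    except Exception:
        log.exception("cleanup task failed")
        emit_metric("cleanup.failure")
        raise
    else:
        emit_metric("cleanup.success")  # heartbeat; if it stops arriving, alert too
```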

Report: "Service unavailability in the Oregon region"

Last update
resolved

Due to an issue with our infrastructure provider, some services may have experienced downtime between 21:58 and 22:04 on 2025-05-04. Builds and deploys may have been impacted during this time.

Report: "Issues accessing the Render dashboard"

Last update
resolved

This incident has been resolved.

monitoring

Our team has mitigated this issue and is monitoring the situation. The impact time for this incident is 2025-05-09 02:35 to 02:51 UTC. During this time the following will have been impacted:
- Dashboard
- REST API
- Builds
- Deployments

investigating

We are currently investigating issues accessing the Render dashboard. This should only be impacting access to the dashboard; Render services should not be impacted.

Report: "Issues accessing the Render dashboard"

Last update
Investigating

We are currently investigating issues accessing the Render dashboard. Our team is investigating. This should only be impacting access to the dashboard. Render services should not be impacted.

Report: "Logging instability and slow builds in the Oregon region"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We have identified an infrastructure issue that may result in logs being delayed in the Oregon region. We are working on a resolution.

Report: "Logging instability and slow builds in the Oregon region"

Last update
Resolved

This incident has been resolved.

Monitoring

A fix has been implemented and we are monitoring the results.

Identified

We have identified an infrastructure issue that may result in logs being delayed in the Oregon region. We are working on a resolution.

Report: "Build failures in Oregon and Virginia"

Last update
resolved

This incident has been resolved. Impact to any services was mitigated as of 20:36.

monitoring

We've identified that builds for services other than static sites were also affected. We continue to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are investigating static sites failing to build in the Oregon and Virginia regions.

Report: "Build failures in Oregon and Virginia"

Last update
Resolved

This incident has been resolved. Impact to any services was mitigated as of 20:36.

Update

We've identified that builds for services other than static sites were also affected. We continue to monitor for any further issues.

Monitoring

A fix has been implemented and we are monitoring the results.

Investigating

We are investigating static sites failing to build in the Oregon and Virginia regions.

Report: "Intermittent 502s have been reported on some services"

Last update
resolved

This incident has been resolved.

monitoring

We’ve rolled out a mitigation that’s successfully cleared up all the 502s tied to this specific incident. Things are looking stable so far, but we’re still monitoring and working on a more permanent fix.

identified

We’re still working on a fix and starting to see some improvements from the mitigation steps we’ve taken. We're now focusing on putting a more solid, permanent solution in place and will update this status page as soon as we’ve got more to share.

identified

We have identified the issue and are currently working on mitigating and fixing it.

investigating

We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.

investigating

We’re still actively looking into it. So far, it seems like this is primarily affecting some newly created services in the Frankfurt region.

investigating

Some services — mostly in the Frankfurt region — have been reported to return 502s on certain requests. It’s not hitting all services, and we can’t confirm yet if it’s limited to just one region. We’re on it and investigating.

Report: "Intermittent 502s have been reported on some services"

Last update
Resolved

This incident has been resolved.

Monitoring

We’ve rolled out a mitigation that’s successfully cleared up all the 502s tied to this specific incident. Things are looking stable so far, but we’re still monitoring and working on a more permanent fix.

Update

We’re still working on a fix and starting to see some improvements from the mitigation steps we’ve taken. We're now focusing on putting a more solid, permanent solution in place and will update this status page as soon as we’ve got more to share.

Identified

We have identified the issue and are currently working on mitigating and fixing it.

Update

We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.

Update

We’re still actively looking into it. So far, it seems like this is primarily affecting some newly created services in the Frankfurt region.

Investigating

Some services — mostly in the Frankfurt region — have been reported to return 502s on certain requests. It’s not hitting all services, and we can’t confirm yet if it’s limited to just one region.We’re on it and investigating.

Report: "Increased 404s on services in Oregon"

Last update
resolved

This incident has been resolved.

monitoring

This issue has been mitigated.

identified

We are seeing increased rates of 404s for valid URLs on services hosted in Oregon, including Static Sites. We are working on resolving this issue.

Report: "Increased 404s on services in Oregon"

Last update
Resolved

This incident has been resolved.

Monitoring

This issue has been mitigated.

Identified

We are seeing increased rates of 404s for valid URLs on services hosted in Oregon, including Static Sites, we are working on resolving this issue.

Report: "Intermittent latency spikes and 520 errors for web services in Ohio"

Last update
resolved

We've seen no further symptoms of network congestion in this region.

monitoring

Our upstream provider has allocated more network capacity in the congested region.

identified

We've received confirmation from an upstream provider that they have been experiencing networking congestion in this region during these periods of impact. They are now working on a remediation.

investigating

Despite increased networking resource allocation, we've seen another instance of this issue. We're now collaborating with upstream providers to identify the source of these transient networking errors.

monitoring

We have not seen a reoccurrence of the issue after making changes yesterday afternoon, but are continuing to monitor as the problem seems to be intermittent.

investigating

We are continuing to investigate while working on provisioning more networking resources and improving observability into the issue.

investigating

We are currently investigating this issue.

Report: "Intermittent latency spikes and 520 errors for web services in Ohio"

Last update
Resolved

We've seen no further symptoms of network congestion in this region.

Monitoring

Our upstream provider has allocated more network capacity in the congested region.

Identified

We've received confirmation from an upstream provider that they have been experiencing networking congestion in this region during these periods of impact. They are now working on a remediation.

Investigating

Despite increased networking resource allocation, we've see another instance of this issue. We're now collaborating with upstream providers to identify the source of these transient networking errors

Monitoring

We have not seen a reoccurrence of the issue after making changes yesterday afternoon, but are continuing to monitor as the problem seems to be intermittent.

Update

We are continuing to investigate while working on provisioning more networking resources and improving observability into the issue.

Investigating

We are currently investigating this issue.

Report: "Elevated rates of 404s in Ohio and Oregon regions"

Last update
resolved

This incident has been resolved.

identified

Engineers are fixing an issue causing elevated rates of 404s for services in the Ohio and Oregon regions.

Report: "Logs may be slow to load for some services in Oregon"

Last update
resolved

This incident has been resolved.

monitoring

We have implemented a fix and continue to monitor the situation.

investigating

We are currently investigating this issue.

Report: "Slow deploys for some users in Frankfurt"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Dashboard logs unavailable"

Last update
resolved

We have resolved the issue and logs are now working for all customers.

investigating

We are currently investigating reports of customers unable to view logs in our dashboard. The message displayed will be "Something went wrong while loading your logs. Try searching again. Internal Server Error"

Report: "Slow builds in Frankfurt"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We have made initial improvements but are continuing to work on a complete fix.

identified

The previous mitigation did not fully address the issue, so degraded performance is still being observed. We are working on a fix to fully resolve the issue.

monitoring

We will continue to monitor builds & deploys in Frankfurt for the next 16-18 hours.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently seeing degraded (slow) builds in the Frankfurt region.

Report: "Dashboard and Redis not responsive, builds and deploys are delayed"

Last update
resolved

This incident has been resolved.

identified

Dashboard is mostly recovered, but Shell access is still impacted. Redis/KeyVal was also impacted, but is now recovered.

identified

Builds and deploys are now fully operational.

identified

We have identified the issue and are working on resolution.

monitoring

We've identified an issue that caused Dashboard to be non-responsive for ~10 minutes (between 17:19 and 17:30 UTC). A fix has been put in place and we are monitoring results.

Report: "Free web services partially unavailable in Frankfurt"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Free web services are unavailable for some customers in Frankfurt. We are investigating the issue.

Report: "Render Maintenance Period"

Last update
resolved

This maintenance has been canceled. We were able to work around the need for a maintenance period.

investigating

We will be upgrading critical infrastructure on March 19th at 4:00 pm PDT (March 19th 11:00 pm UTC). For up to 30 minutes, you will be unable to view, edit, create or deploy services and databases. There will be no interruptions to deployed services and databases. If you need help, please get in touch at support@render.com or talk to us on our community forum, https://community.render.com

Report: "Builds and deploys affected in Frankfurt"

Last update
resolved

This incident has been resolved.

investigating

Between approximately 14:00 and 14:40 UTC, builds and deploys may have failed for some services located in the Frankfurt region. Services are no longer affected and engineers are investigating.

Report: "Logins requiring 2FA"

Last update
resolved

This incident has been resolved.

investigating

If 2FA is enabled, users may be unable to enter their one-time password. We are investigating.

Report: "SSH for services in all regions"

Last update
resolved

Between 19:57 and 21:08 UTC, users would have been unable to SSH into hosts. This has now been resolved.

Report: "Free Tier Services disrupted in all Regions"

Last update
resolved

All services have recovered. Resolving.

identified

Render engineers have rolled out the fix and free tier has recovered in all regions except for Frankfurt.

identified

Engineers are rolling out a fix now to the free tier in all regions.

investigating

We are continuing to investigate this issue.

investigating

Render engineers are fixing an issue disrupting Free Tier Services.

Report: "Degraded deploys in all regions"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We've implemented a mitigation for Builds and Deploys. We are continuing to investigate Free Tier scale-ups.

identified

Builds and deploys are degraded for all services. Free tier services are also impacted when spinning up from idle.

investigating

We are currently investigating this issue.

Report: "Deploys failing with "Internal server error""

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Slow deploys for some Oregon services"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Increased HTTP 404 and 5xx errors"

Last update
resolved

We have rolled out our mitigation as of 4:25 PM PST. Resolving.

monitoring

Engineers have identified a mitigation to prevent this from occurring in the future and we will leave this incident in a state of Monitoring until it has been fully rolled out. This is expected to be complete within a few hours.

monitoring

We're investigating an increase in errors in our HTTP routing layer from 10:40 to 11:00 PST. The impact is over and we're working on a mitigation.

Report: "Dashboard logins failing"

Last update
resolved

As of 18:25 Pacific (Jan 31 02:25 GMT), this issue has been resolved.

identified

Logins to dashboard.render.com are currently failing; attempting to log in returns you to the main login page. Engineering has already begun diagnosing the issue.

Report: "Dashboard GitHub Login Failures"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Some users are reporting an issue logging into the Render Dashboard with their GitHub login credentials.

Report: "Services relying on GitHub may fail to build"

Last update
resolved

This incident has been resolved.

investigating

Due to an ongoing outage on GitHub, services may fail to build and consequently deploy.

Report: "Missing metrics in Oregon region"

Last update
resolved

This incident has been resolved.

monitoring

We've identified and resolved the issue and metrics should be appearing now as expected.

investigating

We are investigating an issue with metrics affecting some users in our Oregon region.

Report: "Deployments in Frankfurt not completing"

Last update
resolved

This is now resolved.

monitoring

Deployments are now succeeding. Please manually trigger any stuck builds.

investigating

We are continuing to investigate the cause of deployments not succeeding in Frankfurt.

investigating

We are continuing to investigate the cause of deployments not succeeding in Frankfurt.

investigating

We're investigating reports of deployments in our Frankfurt region not completing and getting stuck at "Build Successful".

Report: "Partial service disruption for web services and static sites"

Last update
postmortem

# Summary

Beginning at 10:03 PST on December 3, 2024, Render's routing service was unable to reach newly deployed user services, resulting in 404 errors for end users. Some routing service instances also restarted automatically, which abruptly terminated HTTP connections and reduced capacity for all web traffic. The root cause was expiring TLS certificates on internal Render components, which created inconsistent internal state for Render's routing service. The affected certificates were refreshed and the routing service was restarted beginning at 10:24 PST and was fully recovered by 10:37 PST.

# Impact

Impact 1. Starting at 10:03 PST, many services that deployed in this time period experienced full downtime. Clients of those services received 404 errors with the header no-server.

Impact 2. Starting at 10:08 PST, the routing service started abruptly terminating connections, but was otherwise able to continue serving traffic normally.

Recovery. By 10:37 PST, all routing service instances were reconnected to the metadata service and full service was restored.

# Timeline (PST)

* 10:03 - Certificates in some clusters begin to expire, resulting in Impact 1.
* 10:08 - Some routing service instances begin to restart, resulting in Impact 2.
* 10:15 - An internal Render web service becomes unavailable after a deploy and an internal incident is opened.
* 10:18 - Render engineers are paged because the routing service has stopped getting updates in some clusters.
* 10:20 - Render engineers identify that the routing service is failing to connect to the metadata service.
* 10:24 - Render engineers restart the metadata service to refresh the mTLS certificate; the routing service begins to recover.
* 10:37 - Restarts are completed and routing services in all clusters are recovered.

# Root Cause

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXef1PHnbrcmO9Zk0hXPewPE3suRSs5oo2s6rgl1tj1ef3TAjMTIAvjQ6Lj1YgGKd-C-O_fgxps2JRUKbqpS35JLhZKGjdKucnuTSb-egpCbJM1qb2csJfhkj1-0m0v9bTgkzk4cTw?key=9lzD6B69oZMukipDXCVY5A)

The Render HTTP routing service uses an in-memory metadata cache to route traffic to user services. It relies on the Render metadata service for updates to this cache when changes are made to user services. This incident was triggered when certificates for this metadata service expired. The certificates were previously refreshed on restarts, but as the metadata service has stabilized, we have been redeploying it less frequently.

Although the system is designed to continue serving traffic when the metadata service is unavailable, it failed to account for partial connectivity failure. The expiring certificates caused a partial connectivity failure where updates for newly deployed services were only partially processed, reconciling to an inconsistent state that was unable to route traffic.

In an attempt to fail fast, the routing service is designed to crash and restart to resolve any client-side connectivity issues after several minutes of stale data. These restarts did not solve the issue, and long-lived connections or in-flight requests to those instances were abruptly terminated.

# Mitigations

## Completed

* Restart all metadata services to refresh certificates.

## Planned

* Automatically refresh metadata service TLS certificates.
* Update our alert on missing metadata updates to fire sooner.
* Add an alert monitoring reachability of the metadata service.
* Increase the threshold to tolerate stale metadata before intentionally restarting the HTTP routing service.
* Update the routing service metadata cache logic to handle this mixed connectivity state correctly.
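To make the failure mode concrete, here is a minimal sketch of the two behaviors the postmortem describes: a routing instance serving from an in-memory metadata cache, and a fail-fast watchdog that restarts the instance once that cache has been stale for too long. All names and the threshold value are assumptions for illustration, not Render's actual implementation.

```python
import time
from typing import Callable, Dict, Optional

STALENESS_RESTART_THRESHOLD = 5 * 60  # seconds; one planned mitigation is to raise this

class MetadataCache:
    """In-memory routing table, kept fresh by updates from a metadata service."""

    def __init__(self) -> None:
        self.routes: Dict[str, str] = {}      # service -> backend address
        self.last_update = time.monotonic()

    def apply_update(self, service: str, backend: str) -> None:
        # Updates arrive over mTLS; expired certificates meant updates for newly
        # deployed services were only partially applied, leaving no usable route.
        self.routes[service] = backend
        self.last_update = time.monotonic()

    def lookup(self, service: str) -> Optional[str]:
        # A missing entry is what clients saw as a 404 ("no-server").
        return self.routes.get(service)

def staleness_watchdog(cache: MetadataCache, restart: Callable[[], None]) -> None:
    # Fail fast: restart the routing instance if no updates arrive for too long.
    # In this incident the restarts did not help and only dropped long-lived
    # connections and in-flight requests, hence the planned threshold increase.
    if time.monotonic() - cache.last_update > STALENESS_RESTART_THRESHOLD:
        restart()
```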

resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "Services using the latest Node 18 release fail to deploy"

Last update
resolved

In order to prevent further failures, engineering has temporarily published Node 18.20.5 using 18.20.4's resources for Render-hosted services. This will be undone as soon as 18.20.5 is normally available. This issue has been resolved.

identified

Node v18.20.5 was released a bit over an hour ago, but its download directories do not contain the necessary data. Services that request this Node version, typically by specifying the latest release in the v18 series, will fail to deploy or, in the case of Cron Jobs, will fail to execute even without a new deployment. Engineering is investigating alternatives to prevent services from failing in this manner.

identified

Cron Job execution logs indicate the environment being set up, but the cron job's command is never executed. Engineering is actively investigating.
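The failure described in this incident comes from floating version selection: a service that asks for "the latest 18.x" is moved to 18.20.5 as soon as it is published, even though its download artifacts are missing. The sketch below uses hypothetical version lists (not Render's build system) to show why pinning an exact, known-good version sidesteps the problem.

```python
# Illustrative only: hypothetical version lists, not Render's build system.
AVAILABLE = ["18.20.3", "18.20.4", "18.20.5"]   # versions published upstream
DOWNLOAD_OK = {"18.20.3", "18.20.4"}            # 18.20.5's artifacts are missing

def resolve_latest(major: str) -> str:
    """Pick the highest published version in the requested major series."""
    candidates = [v for v in AVAILABLE if v.split(".")[0] == major]
    return max(candidates, key=lambda v: tuple(map(int, v.split("."))))

floating = resolve_latest("18")   # -> "18.20.5": chosen as soon as it is published
pinned = "18.20.4"                # an exact, known-good version keeps building

print(floating in DOWNLOAD_OK)    # False: the floating choice fails to download
print(pinned in DOWNLOAD_OK)      # True
```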

Report: "DockerHub Image Deploys Failing"

Last update
resolved

Our mitigation is working and the errors have stopped.

monitoring

We have applied another mitigation and are no longer seeing errors. We will continue monitoring error rates.

identified

The issue has recurred. We are working on implementing a more permanent fix.

monitoring

We have mitigated the issue and are monitoring failures to ensure it doesn't reoccur.

investigating

There is an issue pulling public images from DockerHub. This means that deploying a public DockerHub image that doesn't specify a registry credential may fail. We are working on a fix. In the meantime, specifying your own credentials should avoid the current disruption.
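As a general illustration of the suggested workaround, the snippet below contrasts an anonymous pull of a public Docker Hub image with a credentialed one using the Docker SDK for Python against a local daemon. This is not Render's deploy pipeline (on Render itself you would attach a registry credential to the affected service), and the username and token shown are placeholders.

```python
import docker  # Docker SDK for Python ("pip install docker"); needs a local Docker daemon

client = docker.from_env()

# Anonymous pull of a public Docker Hub image: the path affected by this incident.
client.images.pull("library/nginx", tag="1.27")

# The same pull with an explicit registry credential (placeholder values).
client.images.pull(
    "library/nginx",
    tag="1.27",
    auth_config={"username": "my-dockerhub-user", "password": "my-access-token"},
)
```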

Report: "Networking outage, Render services unavailable in Ohio region"

Last update
resolved

There was a network outage during maintenance on internal networking infrastructure in the Ohio region. The outage lasted from 14:55 to 14:59 PT.

Report: "Outage for some freetier web traffic in Oregon region"

Last update
resolved

Traffic was disrupted for some free tier services in the Oregon region for approximately 8 minutes.

Report: "Builds and deploys degraded in Ohio"

Last update
resolved

Between 12:27 PM and 12:52 PM PDT we saw elevated errors for builds and deploys in Ohio due to an incident with an upstream provider. The upstream incident has been resolved and we are no longer experiencing errors.

monitoring

We have seen recovery from the upstream provider. We are continuing to monitor builds and deploys in Ohio.

investigating

Builds and deploys may be degraded due to an upstream provider outage. We are currently investigating.

Report: "Some databases are Unavailable in Ohio"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are currently investigating this issue.

Report: "Increased response times for Virginia services"

Last update
resolved

There has been no further impact to response times. As previously stated, the increased response times occurred from approximately 12:45 PM EDT to 1:20 PM EDT and have been stable ever since. This incident has been resolved.

identified

Services hosted in Virginia began encountering increased response times as of approximately 12:45 PM (Eastern Daylight Time). While response times seemed to return to normal around 1:20 PM, our upstream routing provider has opened a status incident regarding routing performance in a Virginia facility, so this incident remains active as well.

Report: "Degraded Auto Deploys and Preview Deploys for some users"

Last update
resolved

Between 14:15 UTC and 17:52 UTC, some autodeploys and preview deploys were delayed.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate this issue.

investigating

Auto deploys are delayed or not working for some users. We're currently investigating.

Report: "Some services using Node unable to deploy"

Last update
resolved

Services that used Node but did not specify an explicit version (e.g. a range was used) were unable to deploy due to an issue downloading Node. Issues with Node stemmed from an outage with an upstream provider. The issue has been resolved and services may deploy again.

Report: "Request logs and network metrics missing for public services in Singapore"

Last update
resolved

From 21:26 to 22:09 UTC, there was a configuration issue that prevented the system from processing a subset of logs in Singapore. This resulted in a gap in request logs and network metrics for affected public services. The underlying issue has been resolved.

Report: "Image-based services inaccessible in Dashboard"

Last update
resolved

For around an hour, image-based services in Dashboard were showing up as not existing. These services still existed and remained operational, but could not be accessed in Dashboard during this time. The issue is now resolved.

Report: "Free Tier Services disrupted in Singapore region"

Last update
resolved

Engineers were alerted and responded to an issue disrupting all services in our Free Tier in Singapore. Services were disrupted for approximately 12 minutes, from 22:50 UTC to 23:02 UTC.

Report: "Degraded network performance for 28 minutes"

Last update
resolved

Web services across all regions experienced intermittent request failures from 22:15-22:43 UTC.

Report: "Render Dashboard intermittently failing to load, some Oregon services affected for 5 minutes"

Last update
resolved

Between 1:05 and 1:10 PM PDT, the Render Dashboard was intermittently failing to load and some Oregon services were affected. Render engineers have identified and fixed the issue.

Report: "Missing requests logs/metrics"

Last update
resolved

This incident has been resolved.

investigating

We are aware of the issue and are currently investigating.

Report: "Some services unavailable in Virginia"

Last update
resolved

For 6 minutes, some services were unavailable in the Virginia region. Engineers responded and services were restored at 3:56 PM PDT.

Report: "Slow or Erroring Dashboard and APIs"

Last update
resolved

Normal Dashboard and REST API performance has resumed.

monitoring

A change to the Render API system resulted in slow performance for our Dashboard and REST API. We have reverted this change and are monitoring performance, which appears to have returned to normal. Customer services hosted on Render should not have been affected by this incident unless they are also active users of our REST API.

monitoring

We are continuing to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The Render Dashboard (https://dashboard.render.com/) is encountering performance issues causing slow page loads, or timeouts and failures to load pages. Engineering is actively addressing the issue. Customer services hosted on Render are not affected by this incident.

Report: "Some services not starting in Oregon"

Last update
resolved

The incident has been resolved.

monitoring

We have identified the issue and have applied a fix. We are seeing services successfully starting.

investigating

We are investigating some services that are not starting in Oregon.

Report: "The Render dashboard intermittently failing to load and and some services in Oregon are affected"

Last update
resolved

This incident has been resolved.

monitoring

We've implemented a fix and are continuing to monitor for elevated failure rates.

identified

We've identified the issue and have started work on a mitigation.

investigating

We're still investigating and looking at ways to best mitigate this.

investigating

Follow-up from https://status.render.com/incidents/jw8wp2ss1566