Historical record of incidents for Supabase
Report: "Investigating issues with our API Gateway"
Last update: We are seeing widespread reports of connectivity issues across all regions. Given the scope, we suspect an issue with an upstream provider; however, we are still investigating. We will post updates here as they become available.
Report: "Signin with Apple issues due to JWT issuer change"
Last update: We are investigating reports of Sign in with Apple not working due to an unexpected JWT issuer URL change on Apple's side.
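For context, a minimal sketch of the kind of client call affected by this incident, assuming the supabase-js v2 `signInWithIdToken` API; the project URL, key, and token value are placeholders, not taken from the report:

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholders for illustration only.
const supabase = createClient(
  "https://YOUR_PROJECT_REF.supabase.co",
  "YOUR_ANON_KEY",
);

// appleIdToken stands in for the identity token returned by Apple's
// native sign-in flow (placeholder value).
const appleIdToken = "eyJ...";

// Calls like this would fail while the JWT issuer URL on Apple's side
// did not match the issuer the auth service expected.
const { data, error } = await supabase.auth.signInWithIdToken({
  provider: "apple",
  token: appleIdToken,
});
if (error) console.error("Sign in with Apple failed:", error.message);
```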
Report: "Dashboard Support Form Erroring on ticket submission"
Last update: Some users are seeing failures when attempting to submit a support ticket via the dashboard or via https://supabase.help. We are currently investigating. In the meantime, you can also email us at support@supabase.com; please include the relevant project ID for the project or projects you are seeing issues with.
Report: "Regression prevents ALTER ROLE usage in projects running PostgreSQL 17"
Last update: We have identified a regression affecting projects running PostgreSQL 17 that prevents ALTER ROLE statements from executing successfully against database roles. This issue may impact workflows that rely on role configuration changes.
We are currently investigating this issue.
Report: "Logs and Reports unavailable in the Supabase Dashboard"
Last update: We have observed failures in our logging pipeline; customers' projects may not be receiving logs from our API Gateway provider for requests originating from some regions. Our engineers are investigating and working with our partners to resolve this issue.
Report: "Increased Realtime Latency"
Last update: There was increased latency and elevated error rates across the Realtime service between 16:30 and 17:00 UTC on June 4th. This issue has been resolved.
Report: "supabase-js client lib JSR issue in Edge Functions"
Last update: We are continuing to monitor for any further issues with Edge Function deployments.
A fix has been deployed and we are monitoring Edge Function deploys.
We are continuing to work on a fix for this issue.
The team has identified an issue with the latest version of supabase-js from JSR in Edge Functions. A fix has been identified and is being worked on. In the meantime, you can apply the following mitigations to maintain normal functionality (see the sketch after this list):
* use version 2.49.8 of the library
* use `import { createClient } from "npm:@supabase/supabase-js@2"`
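A minimal sketch of the second mitigation inside a Deno-based Edge Function, assuming the SUPABASE_URL and SUPABASE_ANON_KEY environment variables the platform injects by default; the "todos" table is a hypothetical example, not from the incident report:

```ts
// Mitigation: import supabase-js via the npm: specifier instead of JSR.
import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (_req: Request) => {
  // SUPABASE_URL and SUPABASE_ANON_KEY are injected into Edge Functions.
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_ANON_KEY")!,
  );

  // "todos" is a placeholder table name for illustration.
  const { data, error } = await supabase.from("todos").select("*");
  if (error) return new Response(error.message, { status: 500 });

  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
});
```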
Report: "supabase-js client lib JSR issue in Edge Functions"
Last updateWe are continuing to monitor for any further issues with Edge Function deployments
A fix has been deployed and we are monitoring Edge Function deploys.
We are continuing to work on a fix for this issue.
The team has identified an issue with the latest version of supabase-js from JSR in Edge Functions.A fix has been identified and is being worked on.In the meantime, you can apply the following mitigations to maintain normal functionality:* use version 2.49.8 of the library* use import { createClient } from "npm:@supabase/supabase-js@2"
Report: "Management API – Increased Error Rates and Response Times"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are currently investigating elevated error rates and increased response times affecting our Management API. Project-level services remain fully operational and are not impacted by this issue.
Report: "Management API – Increased Error Rates and Response Times"
Last updateWe are currently investigating elevated error rates and increased response times affecting our Management API. Project-level services remain fully operational and are not impacted by this issue.
Report: "Edge Function Increased Error rate in US-East-1"
Last update: We observed higher than usual error rates on edge function invocations in the us-east-1 region for approximately 1 hour as the result of a large influx of queries from a single source. We have rate limited that source, and error rates returned to normal levels.
Report: "Edge Function Increased Error rate in US-East-1"
Last updateWe observed higher than usual error rates on edge function invocations in the us-east-1 region for about approximately 1 hour as the result of a large influx of queries from a single source. We have rate limited that source and error rates returned to normal levels.
Report: "Dashboard and Management API maintenance"
Last update: The Dashboard and Management API will be briefly unavailable while we carry out needed upgrades. The interruption is expected to last less than 15 minutes. During the interruption, the Management API and Dashboard will be unavailable. The HTTP APIs and Postgres endpoints for existing projects will NOT be affected, and will continue serving traffic normally.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Degradation of Logs Service"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are adding capacity to our logs service cluster.
We are adding capacity to our logs service cluster and monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Degradation of Logs Service"
Last updateWe are currently investigating this issue.
Report: "Intermittent Pooler Connectivity Issues in US-West-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
Report: "Intermittent Pooler Connectivity Issues in US-West-1"
Last updateThe issue has been identified and a fix is being implemented.
Report: "High rate of 500 errors for Storage requests in eu-central-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "High rate of 500 errors for Storage requests in eu-central-1"
Last updateThe issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "High rate of 500 errors for Storage requests in us-east-1"
Last update: The incident has been resolved.
We have rolled out a fix and we are monitoring the results.
This issue has been identified and we're rolling out the fix now.
We've observed a spike in 500 errors for Storage requests in the us-east-1 region starting at 22:36 UTC.
Report: "High rate of 500 errors for Storage requests in us-east-1"
Last updateThis issue has been identified and we're rolling out the fix now.
We've observed a spike in 500 errors for Storage requests in the us-east-1 region starting at 22:36 UTC.
Report: "High rate of 500 errors in us-east-1"
Last updateWe've observed a spike in 500 errors for Storage requests in the us-east-1 region starting at 22:36 UTC.
Report: "Project creation disabled in East US (North Virginia)"
Last update: This incident has been resolved.
Project creation in East US (North Virginia) (us-east-1) has now been re-enabled.
We have temporarily disabled project creation in East US (North Virginia) (us-east-1). We are working with our cloud provider, AWS, to re-enable the region soon. Existing projects in the region are unaffected.
Report: "Project creation disabled in East US (North Virginia)"
Last updateWe have temporarily disabled project creation in East US (North Virginia) (us-east-1). We are working with our cloud provider AWS to re enable the region soon.Existing projects in the region are unaffected.
Report: "Elevated error rates for Supabase Management API"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Dashboard and Management API are seeing elevated error rates. Existing projects are unaffected, and are serving traffic normally.
Report: "Elevated error rates for Supabase Management API"
Last updateDashboard and Management API are seeing elevated error rates.Existing projects are unaffected, and are serving traffic normally.
Report: "Project Creation taking longer than normal"
Last update: This incident has been resolved.
The queue has been worked through and these events are now completing as expected again. We'll keep an eye on things to make sure they stay in good shape.
We are seeing longer-than-usual wait times for project creation or configuration change events.
Report: "Project Creation taking longer than normal"
Last updateWe are seeing longer-than-usual wait times for project creation or configuration change events.
Report: "Supavisor connectivity issues in ap-northeast-1"
Last update: We observed intermittent connection issues to our Supavisor cluster in ap-northeast-1 from 16:17 to 16:42 UTC. This issue has been resolved.
Report: "Supavisor connectivity issues in ap-northeast-1"
Last updateWe observed intermittent connection issues to our Supavisor cluster in ap-northeast-1 from 16:17 - 16:42 UTC.This issue has been resolved.
Report: "Storage not accepting third-party signed JWTs"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Users with third-party auth configured may see auth failures when making storage API requests. The team has identified the issue and is working on a solution.
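For context, a rough sketch of the kind of request that was failing, assuming the supabase-js v2 `global.headers` option for attaching a third-party token; the project URL, key, token value, and bucket name are all placeholders:

```ts
import { createClient } from "@supabase/supabase-js";

// thirdPartyJwt stands in for a token signed by an external auth provider
// configured for the project (placeholder value).
const thirdPartyJwt = "eyJ...";

const supabase = createClient(
  "https://YOUR_PROJECT_REF.supabase.co",
  "YOUR_ANON_KEY",
  { global: { headers: { Authorization: `Bearer ${thirdPartyJwt}` } } },
);

// Storage API requests carry that JWT; during the incident, requests like
// this one could fail auth even with a valid third-party token.
const { data, error } = await supabase.storage.from("avatars").list();
if (error) console.error("storage request failed:", error.message);
```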
Report: "Storage not accepting third-party signed JWTs"
Last updateUsers with third-party auth configured may see auth failures when making storage API requests. The team has identified the issue and is working on a solution.
Report: "Compute capacity issues observed in multiple availability zones"
Last update: We have rolled out improvements to the restart process that let us pre-reserve capacity before a restart or compute resize operation. With this improvement we are able to abort the restart process if there isn't sufficient capacity available, thereby avoiding downtime. This does not resolve the ongoing capacity issue present in eu-west-3b, but it will negate any restart or compute resize-related outage which might be encountered, and inform users if the operation didn't succeed due to a capacity-related issue. This additionally allows the platform to act more dynamically, allowing these operations to either execute successfully or fail gracefully based on how AWS capacity fluctuates, and reducing the need for us to institute blanket guards at an availability zone or regional level. All project operations have been restored in previously affected availability zones:
- ap-south-1a
- eu-central-1a
- eu-west-3b
- eu-west-3c
- eu-west-2b
We have rolled out improvements to the restart process that let us pre-reserve capacity before a restart or compute resize operation. With this improvement we are able to abort the restart process if there isn't sufficient capacity available, thereby avoiding downtime. This does not resolve the ongoing capacity issue present in eu-west-3b, but it will negate any restart or compute resize-related outage which might be encountered, and inform users if the operation didn't succeed due to a capacity-related issue. This additionally allows the platform to act more dynamically, allowing these operations to either execute successfully or fail gracefully based on how AWS capacity fluctuates, and reducing the need for us to institute blanket guards at an availability zone or regional level. All project operations have been restored in previously affected availability zones:
- ap-south-1a
- eu-central-1a
- eu-west-3c
- eu-west-2b
All project operations except database version upgrades have been restored in the following availability zone:
- eu-west-3b
We have rolled out improvements to the restart process that let us pre-reserve capacity before a restart or compute resize operation. With this improvement we are able to abort the restart process if there isn't sufficient capacity available, thereby avoiding downtime. This does not resolve the ongoing capacity issue present in eu-west-3b, but it will negate any restart or compute resize-related outage which might be encountered, and inform users if the operation didn't succeed due to a capacity-related issue. This additionally allows the platform to act more dynamically, allowing these operations to either execute successfully or fail gracefully based on how AWS capacity fluctuates, and reducing the need for us to institute blanket guards at an availability zone or regional level. All project operations have been restored in previously affected availability zones:
- ap-south-1a
- eu-central-1a
- eu-west-3c
- eu-west-2b
- eu-west-3b
Project restarts, as well as compute and database version upgrade operations, have been disabled in the following availability zones:
- eu-west-3b
We are continuing to work with our cloud provider to address compute capacity issues. All project operations have been restored in previously affected availability zones:
- ap-south-1a
- eu-central-1a
- eu-west-3c
- eu-west-2b
Project restarts, compute operations and database version upgrades have been disabled in eu-west-2b and eu-west-3b availability zones. Project restarts, as well as compute and database version upgrade operations have been re-enabled in the previously impacted ap-south-1a, eu-central-1a, and eu-west-3c availability zones. These operations are currently disabled in the still-impacted eu-west-3b and eu-west-2b availability zones.
We are continuing to work with our cloud provider to address compute capacity issues in the eu-west-3b availability zone. Project restarts, as well as compute and database version upgrade operations, have been re-enabled in the previously impacted ap-south-1a, eu-central-1a, and eu-west-3c availability zones. These operations are currently disabled in the still-impacted eu-west-3b availability zone. Project creation has been re-enabled in the eu-west-3 region.
We are continuing to work on a fix for this issue.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the following availability zones:
- ap-south-1a
- eu-central-1a
- eu-west-3b
- eu-west-3c
We have disabled the ability to restart projects or perform compute or database version upgrades within these availability zones. Project creation has been disabled for the eu-west-3 region.
We are currently investigating this issue.
Report: "Some users experiencing issue restoring backups"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The fix is still making its way out to the fleet. We estimate it will be deployed to all affected projects in the next 2-3 hours.
We have fixed this issue for any newly created projects, so no additional users are affected. The team is currently rolling out the fix to any remaining projects which would have seen issues attempting to restore. As a reminder all backups are safe - the issue is in the restore process itself, not the backup process.
Some users attempting to restore backups may see a restore to a more recent point than they selected. All backup data is safe and accounted for, and we are working on a fix for the restore process.
Report: "Some users with"
Last updateSome users attempting to restore backups may see failures in the restore process. All backup data is safe and accounted for, and we are working on a fix for the restore process.
Report: "Some users experiencing issue restoring backups"
Last updateSome users attempting to restore backups may see a restore to a more recent point than they selected. All backup data is safe and accounted for, and we are working on a fix for the restore process.
Report: "Database Health Report in the Supabase Dashboard will be briefly unavailable due to upgrades"
Last update: The Database Health Report, and any other custom reports that refer to resource utilization (e.g. CPU, memory) on the database, will be briefly unavailable. The maintenance is expected to take less than 30 minutes.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Logging Infrastructure Degraded Performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The additional capacity has helped, and while error rates remain elevated, the impact has lessened. The team is working on additional mitigations to bring error rates back to normal levels.
We are currently experiencing some issues in our logging layer, which will result in delayed logs for some features. The team is currently investigating the cause and is adding additional capacity to compensate.
Report: "Logging Infrastructure Degraded Performance"
Last updateWe are currently experiencing some issues in our logging layer, which will result in delayed logs for some features. The team is currently investigating the cause and is adding additional capacity to compensate.
Report: "Elevated Realtime error rates for the Europe region."
Last update: This incident has been resolved.
We saw elevated error rates due to increased requests and added extra nodes in eu-west-2, us-west-1, and ap-southeast-2 to cope with this increase. We are monitoring to ensure this is adequate.
We are currently investigating this issue.
Report: "Elevated Realtime error rates for the Europe region."
Last updateThis incident has been resolved.
We seen elevated error rates due to increased requests and added extra nodes in eu-west-2, us-west-1 and ap-southeast-2 to cope with this increase, we are monitoring to ensure this is adequate.
We are currently investigating this issue.
Report: "API Requests to projects being blocked at the API Gateway layer"
Last update: Between 16:02 UTC and 17:15 UTC on March 22nd 2025, customers who were accessing Supabase services from Next.js middleware had their Supabase requests blocked by our upstream CDN provider. The impacted requests received a “Sorry, you have been blocked” error response. The primary service impacted was Supabase Auth, as performing auth in Next.js middleware is a common and recommended usage pattern. A smaller number of requests to other Supabase services were also impacted. We sincerely apologise for the negative effects our customers experienced. Here is some additional detail about what happened and what we will do to mitigate future outages of this nature.

### Who was affected?

Customers who access Supabase services from Next.js middleware, or otherwise use a `x-middleware-subrequest` header, were affected. In total, 9.59M requests were blocked across our customers' endpoints.

### What happened?

On March 21st 2025, Vercel/Next.js published a security advisory for [CVE-2025-29927](https://github.com/advisories/GHSA-f82v-jwr5-mffw), which allowed for Authorization Bypass in Next.js Middleware. In addition to advising customers to upgrade to a patched version of Next.js, the security advisory contained a workaround that noted “If patching to a safe version is infeasible, it is recommend that you prevent external user requests which contain the `x-middleware-subrequest` header from reaching your Next.js application.”

In an effort to protect their customers from this CVE, our CDN provider [implemented a new managed WAF rule](https://developers.cloudflare.com/changelog/2025-03-22-next-js-vulnerability-waf/) to block requests including the `x-middleware-subrequest` header, effectively implementing the suggested workaround for the CVE, and rolled it out to their customers, which includes Supabase.

The incident timeline was as follows:

* SAT 22 MAR 14:53 UTC: Our CDN provider posted in our shared Slack channel informing us they were rolling out a patch for [CVE-2025-29927](https://github.com/advisories/GHSA-f82v-jwr5-mffw) and that we would be the customer most impacted by this change. Slack is not monitored or part of our internal escalation mechanisms, and our on-call engineers were not aware of the impending change.
* SAT 22 MAR 16:02 UTC: The WAF rule was applied to the Supabase CDN.
* SAT 22 MAR 16:09 UTC: We began to receive a high volume of customer reports indicating problems with the CDN blocking requests.
* SAT 22 MAR 16:17 UTC: We declared an incident and response teams were paged and assembled.
* SAT 22 MAR 17:07 UTC: We identified that the issue was related to the new WAF rule.
* SAT 22 MAR 17:15 UTC: We worked with our CDN provider to disable the WAF rule.

Once the WAF rule was disabled, customers were once again able to use the `x-middleware-subrequest` header and the incident was resolved.

### What will we do to mitigate problems like this in the future?

1. During the incident, we discovered gaps with vendor relationships and we found it difficult to engage directly with our CDN provider. We have taken action to ensure that we have clearer methods of communication in the future for incidents that may impact our mutual customers.
2. We have added alerting which will help us identify anomalies in the volume of customer requests that are blocked by our CDN provider, which will help us identify and resolve issues with harmful WAF rules faster in the future.
3. We are investigating fallback CDN providers and other options for our CDN.

### What actions do you need to take?
All customers using Next.js are encouraged to patch to the latest version if you are on a version that is affected by [CVE-2025-29927](https://github.com/advisories/GHSA-f82v-jwr5-mffw). If you are hosting your Next.js application with Vercel or Netlify, you do not need to take any action as these platforms have built-in protection from this CVE; however, it is still a good idea to upgrade to a patched version of Next.js. If you are hosting your Next.js application with a provider that is not Vercel or Netlify, we encourage you to urgently upgrade your application to a safe Next.js version as noted in the [Security Advisory](https://github.com/advisories/GHSA-f82v-jwr5-mffw).
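For self-hosted deployments that cannot patch immediately, the advisory's workaround amounts to rejecting external requests that carry the header before they reach Next.js. A minimal, hypothetical sketch of that idea as a small TypeScript (Deno-style) reverse proxy in front of the app; the upstream address is a placeholder, and this is not Supabase's, Vercel's, or Cloudflare's implementation:

```ts
// Reject external requests carrying x-middleware-subrequest, then proxy
// everything else to the Next.js server (placeholder address below).
const UPSTREAM = "http://localhost:3000";

Deno.serve(async (req: Request) => {
  if (req.headers.has("x-middleware-subrequest")) {
    // External clients have no legitimate reason to send this internal header.
    return new Response("Forbidden", { status: 403 });
  }
  const url = new URL(req.url);
  const body =
    req.method === "GET" || req.method === "HEAD" ? undefined : await req.blob();
  return fetch(new URL(url.pathname + url.search, UPSTREAM), {
    method: req.method,
    headers: req.headers,
    body,
  });
});
```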
This incident has been resolved.
The fix has been rolled out. Block rates at the API layer have returned to normal levels, and things are looking stable now. API requests to your Supabase projects should now be working as expected.
With the assistance of our API gateway partner, we have identified the configurations resulting in the blocking and are working to update them.
Our team has reached out to our API gateway partners and is continuing to investigate the periodic block errors that folks are receiving across all API endpoints.
We are receiving reports of user API requests receiving API gateway errors when making Auth and other API calls. The team is currently looking into the cause and into mitigating steps.
Report: "API Requests to projects being blocked at the API Gateway layer"
Last updateThis incident has been resolved.
The fix has been rolled out. Block rates at the API layer have returned to normal levels, and things are looking stable now. API requests to your supabase projects should now be working as expected.
With the assistance of our API gateway partner, we have identified the configurations resulting in the Blocking and are working to update them.
Our team has reached out to our API gateway partners and are continuing to investigate the periodic Block errors that folks are receiving across all API endpoints.
We are receiving reports of user API requests receiving API gateway errors when making Auth and other API calls. The team is currently looking into the reason and for mitigating steps.
Report: "Elevated Error rates for Edge Functions globally in all regions."
Last update: This incident has been resolved.
We increased available resources in us-east-1, which has reduced error rates to normal levels. All regions currently looking stable. Due to the volatility here, we'll be keeping an eye on things for a while to be sure.
US-East-1 has begun showing elevated error rates again. The team is looking into these errors now.
The fix for the secondary issues has been deployed, and things are looking stable again. We will continue to monitor for any further issues.
The fix is still being deployed across the fleet. We hope to have edge functions behaving normally again soon.
We have identified a second issue, and the team is rolling out a fix shortly. If you are seeing 503s and "worker boot error" messages in your edge function logs, this fix will address those situations.
Unfortunately, while it immediately appeared that the fix we deployed was working, error rates across all regions have begun to climb again. The team is continuing to investigate.
The fix has been deployed. Error rates across all regions have returned to normal levels. We'll keep an eye on things to ensure they remain stable.
The issue has been identified and a fix is being implemented.
Elevated error rates have expanded to all regions, globally.
Users with functions firing in eu-central-1, eu-west-1, and us-east-1 may see increased error rates. If you like, you can pass a header to choose a different region to execute functions in and avoid these regions; a sketch follows below. More info on that here: https://supabase.com/docs/guides/functions/regional-invocation
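A quick sketch of that workaround, assuming the `x-region` header described in the regional-invocation docs linked above; the project URL, key, function name, and payload are placeholders:

```ts
const SUPABASE_URL = "https://YOUR_PROJECT_REF.supabase.co"; // placeholder
const ANON_KEY = "YOUR_ANON_KEY";                            // placeholder

// Pin execution to an unaffected region via the x-region header.
const res = await fetch(`${SUPABASE_URL}/functions/v1/hello-world`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${ANON_KEY}`,
    "Content-Type": "application/json",
    "x-region": "ap-southeast-1", // any region not listed above
  },
  body: JSON.stringify({ name: "world" }),
});
console.log(res.status, await res.text());
```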
Report: "Elevated Error rates for Edge Functions globally in all regions."
Last updateThis incident has been resolved.
We increased available resources in us-east-1, which has reduced error rates to normal levels. All regions currently looking stable. Due to the volatility here, we'll be keeping an eye on things for a while to be sure.
US-East-1 has begun showing elevated error rates again. The team is currently looking into these errors now.
The fix for the secondary issues has been deployed, and things are looking stable again. We will continue to monitor for any further issues.
The fix is still being deployed across the fleet. We hope to have edge functions behaving normally again soon.
We have identified a second issue, and the team is rolling out a fix shortly.If you are seeing 503s and "worker boot error" messages in your edge function logs, this fix will address those situations.
Unfortunately, while it immediately appeared that the fix we deployed was working, error rates across all regions have begun to climb again. The team is continuing to investigate.
The fix has been deployed. Error rates across all regions have returned to normal levels. We'll keep an eye on things to ensure they remain stable.
The issue has been identified and a fix is being implemented.
Elevated error rates have expanded to all regions, globally.
Users with functions firing in eu-central-1 and eu-west-1, and us-east-1 may see increased error rates. You can, if you like, choose a region to execute functions in to avoid these regions by passing a header. More info on that here: https://supabase.com/docs/guides/functions/regional-invocation
Report: "Increased error rates with the Management API"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
The team has successfully deployed a fix and we are monitoring.
The team has identified that a recent deploy is affecting a small subset of requests to our Management API, and we are working on a fix. Data APIs and Postgres APIs for existing projects remain unaffected. Your projects should continue to work as normal.
Data APIs and Postgres APIs for existing projects are unaffected.
Report: "Increased error rates with the Management API"
Last updateThis incident has been resolved.
We are continuing to monitor for any further issues.
The team have successfully deployed a fix and we are monitoring.
The team have identified that a recent deploy is affecting a small subset of requests to our Management API, we are working on a fix.Data APIs and Postgres APIs for existing projects remain unaffected. Your projects should continue to work as normal.
Data APIs and Postgres APIs for existing projects are unaffected.
Report: "Some users across all regions are seeing projects in a "Restore" state for longer than usual"
Last update: This incident has been resolved.
We have deployed the fix to our production platform and are monitoring the situation. Previously impacted projects' statuses have been updated in order to unblock dashboard operations.
The issue has been identified and a fix is being implemented.
For the projects in question, this status is only affecting usage from the Supabase Dashboard. The affected projects remain up and responsive to Postgres queries, API requests, and other functions.
Report: "Supavisor connection issues in eu-central-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
Starting at 17:34 UTC and ending at 17:53 UTC:
- The Supavisor cluster was restarted in eu-central-1 due to intermittent connection issues
- Storage error rates increased in eu-central-1
Report: "Intermittent Connectivity issues with Supavisor and Storage timeouts in us-west-1"
Last update: This incident has been resolved.
We have deployed a fix, and error rates have returned to normal. We will continue to monitor.
The issue has been identified and a fix is being implemented.
Report: "Unhealthy Host in Connection Pooler Load Balancer Affecting Projects in us-east-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are still seeing elevated connection error rates for some customers.
Some users are still seeing intermittent connectivity issues with Supavisor in us-east-1. The team is continuing to work on a fix.
We are still seeing some connection timeouts for a few customers.
- Our AWS health check failed and rebooted a Supavisor node
- This reboot caused some timeouts for new connections for some customer connection pools
- We have added resources to the cluster to prevent further node reboots
- Timeouts have subsided after ~45 minutes
We are performing some emergency maintenance on our connection pooler cluster in us-east-1. New connections may timeout during this period. Expected maintenance period is roughly 30 minutes.
Report: "Intermittent Pooler Connectivity issues in US-East-1"
Last update: Between 17:21 and 17:37 UTC, some connections to the pooler in us-east-1 were failing. This could result in postgres protocol requests failing, or storage API requests returning 544s. We discovered a failure in the pooler infrastructure, which auto-healed. We are looking more deeply into the nature of the failure and how we can prevent similar instances of this in the future.
Report: "New projects and existing projects seeing auth failures after configuration changes"
Last update: This incident has been resolved.
All affected projects have been updated, and should be working as expected. The team is continuing to monitor for any increase in issues related to this deployment.
We are continuing to work on a fix for this issue.
We have identified the deployment that caused this, and it has been fixed. Newly created projects should now work as expected. The team is currently working on deploying the fix to affected already-existing projects now.
We are currently investigating the root cause. At this time, we would advise against making changes to postgrest or auth settings. Out of caution, we would also suggest holding off on restarts, compute add-on changes, or other configuration changes. The team is still assessing the full scope of changes, and we will update here as we have more information.
Report: ""Failed to Retrieve Tables" in the dashboard in US-West-1"
Last update: For approximately 5 minutes, 22:59-23:04 UTC, requests between the Supabase Dashboard and project databases were failing, resulting in "Failed to Retrieve Tables" or similar errors in the dashboard. All projects were up and receiving regular postgres and API traffic; this only affected requests originating from the Supabase Dashboard to the project databases. The issue resolved itself, as designed, when the node in the infrastructure that experienced issues was replaced. The team will also be looking into making this process more resilient.
Report: "pg_net disruption on some projects using version 0.8.0 of the extension"
Last update: This incident has been resolved.
All known affected projects have been fixed at this point, and we are not seeing new errors related to this pg_net issue at this time. We will continue to monitor for any changes. If you are continuing to see permissions errors related to pg_net tables or objects, please open a ticket at https://supabase.help
Nearly all affected projects have been fixed. There are a very small number for which the team is still doing some additional mitigation, but all projects should be back to normal again soon.
The fix has been validated, and is currently being deployed to all affected projects.
We've identified the issue as a corner case of a recent security fix, which impacts projects using pg_net 0.8.0 and causes pg_net requests to fail when executed by a non-superuser. A fix is being worked on and validated, and will be rolled out to impacted projects.
We are seeing increased reports of permissions issues when using pg_net, and the engineering team is investigating.
Report: "Compute capacity issues observed in multiple availability zones"
Last updateWe have rolled out improvements to the restart process which lets us pre-reserve capacity before a restart/compute resize operation, with this improvement we are able to abort the restart process if there isn't sufficient capacity available, hence avoiding downtime.This does not resolve the ongoing capacity issue present in eu-west-3b, but it will negate any type of restart or compute resize-related outage which might be encountered and inform users if this didn't succeed due to a capacity-related issue.This additionally allows the platform to act more dynamically, allowing these operations to either execute successfully or fail gracefully, based on how AWS capacity fluctuates, and reducing the need for us to institute blanket guards at an availability zone or regional level.All project operations have been restored in previously affected availability zones:- ap-south-1a- eu-central-1a- eu-west-3c- eu-west-2bAll project operations except database version upgrades have been restored in the following availability-zone:- eu-west-3b
We have rolled out improvements to the restart process which lets us pre-reserve capacity before a restart/compute resize operation, with this improvement we are able to abort the restart process if there isn't sufficient capacity available, hence avoiding downtime.This does not resolve the ongoing capacity issue present in eu-west-3b, but it will negate any type of restart or compute resize-related outage which might be encountered and inform users if this didn't succeed due to a capacity-related issue.This additionally allows the platform to act more dynamically, allowing these operations to either execute successfully or fail gracefully, based on how AWS capacity fluctuates, and reducing the need for us to institute blanket guards at an availability zone or regional level.All project operations have been restored in previously affected availability zones:- ap-south-1a- eu-central-1a- eu-west-3c- eu-west-2b- eu-west-3b
Project restarts, as well as compute and database version upgrade operations have been disabled in the following availability zones:- eu-west-3bWe are continuing to work with our cloud provider to in order to address compute capacity issues.All project operations have been restored in previously affected availability zones:- ap-south-1a- eu-central-1a- eu-west-3c- eu-west-2b
Project restarts, compute operations and database version upgrades have been disabled in eu-west-2b and eu-west-3b availability zones.Project restarts, as well as compute and database version upgrade operations have been re-enabled in the previously impacted ap-south-1a, eu-central-1a, and eu-west-3c availability zones. These operations are currently disabled in the still-impacted eu-west-3b and eu-west-2b availability zones.
We are continuing to work with our cloud provider to in order to address compute capacity issues in the eu-west-3b availability zone.Project restarts, as well as compute and database version upgrade operations have been re-enabled in the previously impacted ap-south-1a, eu-central-1a, and eu-west-3c availability zones. These operations are currently disabled in the still-impacted eu-west-3b availability zone.Project creation has been re-enabled in the eu-west-3 region.
We are continuing to work on a fix for this issue.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the following availability zones:- ap-south-1a- eu-central-1a- eu-west-3b- eu-west-3cWe have disabled the ability to restart projects, compute or database version upgrades within these availability zones.Project creation has been disabled for the eu-west-3 region.
We are currently investigating this issue.
Report: "Connectivity Issues to Pooler in US-East-1"
Last update: Some users are reporting failing connections to the pooler in us-east-1. We reset the affected pools, which restored connectivity.
Report: "Small segment of users seeing failed restores or project unpauses"
Last update: This incident has been resolved.
All known affected projects have been successfully restored. If you still have a project stuck restoring or with a failed restore, please reach out via our support form at https://supabase.help
We have identified an issue preventing approximately 570 projects from restoring successfully. These are predominantly free plan projects, and the team has identified the issue and has begun working to get these projects running again. We have also put a fix in place to prevent more projects from encountering this same issue. We will update when all known projects are up and running again.
Report: "Issues with Supavisor in us-east-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating reports of issues with Supavisor connections in the us-east-1 region.
Report: "Auth service down for some free plan projects"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
We've updated the affected projects, and things are now looking stable. We're continuing to monitor for any spikes in error rates.
The issue has been identified and a fix is being implemented.
A small subset of free plan projects may be experiencing failures on auth endpoints, receiving errant 401s on API requests. The team is currently looking into the cause and total scope.
Report: "Assets not loading on supabase.com"
Last update: This incident has been resolved.
Assets were not loading on supabase.com due to an issue with an upstream provider. We have implemented a workaround to avoid this impact and are monitoring the situation.
Report: "Supavisor pooler and Storage connectivity issues in US-East-1"
Last update:

## Incident Summary

**Date:** January 29, 2025
**Service Impacted:** Supavisor – Postgres connection pooler
**Affected Region:** us-east-1
**Duration:** ~1 hour

## Background

Supavisor is responsible for managing Postgres connections. The team scheduled a major upgrade to Supavisor 2.0 across 10 regions. The new version had already been successfully deployed in 9 smaller regions over the past few weeks, offering improvements in query latency and availability zone awareness.

## Timeline of Events

* **13:00 UTC** – Maintenance began, and Supavisor 2.0 was successfully deployed to 9 regions with minimal downtime, requiring only client connection restarts.
* **14:46 UTC** – Deployment to `us-east-1` was completed successfully with clients reconnected and throughput stable.
* **16:05 UTC** – A majority of clients disconnected and failed to reconnect reliably.
* **16:11 UTC** – Incident was declared, and rollback initiated.
* **16:53 UTC** – Rollback completed, restoring stability.

## Root Cause Analysis

### Primary Cause

The failure stemmed from an ETS (Erlang Term Storage) table corruption, affecting critical components:

* **16:05:50 UTC** – An `:ets.select_delete` operation failed, causing corruption.
* **16:05:56 UTC** – Process registry lookups, backed by ETS, in the `Syn` and `Cachex` libraries began failing and prevented tenants from reconnecting.
* **16:06:00 UTC** – Processes managed by the affected process registry crashed, dropping connections for ~80% of tenants.

## Resolution

* Rollback to the previous Supavisor version was initiated.
* The previous stable version restored normal operation.

## Action Items

* **Faster rollbacks** – Reduce rollback time from 40 minutes to under 90 seconds.
* **Improved alerting** – Detect client disconnections within 2 minutes.

## Conclusion

The incident highlighted weaknesses in rollback speed and resiliency to unforeseen failures under load. Future improvements will focus on rollback efficiency to reduce the impact of similar deployment failures.
This incident has been resolved.
A failed Supavisor deploy has been rolled back.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue. For database connections, you can fall back to direct connections for the time being. Storage connectivity will be restored once the connection pooler issues are resolved.
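As a sketch of the suggested fallback, here is what switching from the pooler to a direct database connection might look like using the postgres.js client; both connection strings below are placeholders, and the exact hostnames and credentials vary per project:

```ts
import postgres from "postgres";

// Pooled (Supavisor) connection - the path affected by this incident:
//   postgresql://USER:PASSWORD@REGION.pooler.supabase.com:6543/postgres
// Direct connection to the database - the suggested temporary fallback:
const sql = postgres(
  "postgresql://postgres:YOUR_PASSWORD@db.YOUR_PROJECT_REF.supabase.co:5432/postgres",
);

// Simple connectivity check.
const [{ now }] = await sql`select now()`;
console.log("connected, server time:", now);
await sql.end();
```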
Report: "Signup and Login Errors for some users on the Supabase Dashboard"
Last update: All known affected users have been emailed with instructions. If you signed up on January 23 and are unable to log in, please try to sign up again. If you are still having issues, please let our support team know by emailing support@supabase.com.
Affected users who signed up for Supabase today may need to sign up for Supabase again. If you attempt to sign up again and receive any errors, please reach out to our support team at support@supabase.com and mention this incident. We will be emailing known affected users later this evening with instructions as well.
We have deployed a fix to prevent any further users from being affected; however, we are working to identify and restore access for users already affected. We will be reaching out to these users directly, and will update again here once that has been done.
The issue has been identified and a fix is being implemented.
Some users are unable to log into the Supabase dashboard, and some new signups are getting errors when attempting to log in. The team is currently investigating these issues, and we hope to have them resolved as soon as possible.
Report: "Realtime causing high number of connections on some projects"
Last update: An error in connection handling logic for realtime replication slots resulted in some projects experiencing higher than normal realtime connections, which in some instances exhausted all available postgres connections, impacting connectivity to those services.
Report: "Compute capacity issues observed in the eu-west-3b availability zone"
Last update: This incident has been resolved.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the eu-west-3b availability zone. We have disabled the ability to restart projects or perform compute or database version upgrades within eu-west-3b. Project provisioning in eu-west-3, as well as all operations for projects located in the eu-west-3a and eu-west-3c availability zones, remains available.
We are currently investigating this issue.
Report: "Compute capacity issues observed in the eu-central-1a availability zone"
Last update: This incident has been resolved.
Project operations in eu-central-1a have been enabled. We are continuing to monitor the situation.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the eu-central-1a availability zone. We have disabled the ability to restart projects or perform compute or database version upgrades within eu-central-1a. Project provisioning in eu-central-1, as well as all operations for projects located in the eu-central-1b and eu-central-1c availability zones, remains available.
We are currently investigating this issue.
Report: "Compute capacity issues observed in the ap-south-1b availability zone"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Logs and Reports unavailable in the Supabase Dashboard"
Last update: Logs and Reports unavailable in the Supabase Dashboard.
Report: "Sporadic issues logging into the Supabase Dashboard"
Last update: This incident has been resolved.
All error rates continue to be nominal, and things are looking stable. We'll continue monitoring for any reoccurring issues.
We've identified the issue and have pushed a fix. Error rates have returned to nominal levels, but we are continuing to monitor just in case.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "Supavisor and Storage connectivity issues in ap-southeast-1 (Singapore)"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Our engineers have identified the root cause of the issue and some connectivity has improved. We are now working on resolving the issue fully.
We have identified a Supavisor connectivity issue in ap-southeast-1. This issue is affecting Supavisor and our Storage functionality. Engineers are working on resolving the issue.
We are currently investigating this issue.
Report: "storage and pooler connection issues for some projects in ap-northeast-2"
Last update: This incident has been resolved.
Work continues on rolling out the fixes to the remaining affected projects.
The issue has been identified and a fix has been rolled out to most projects in the region. The fix for the remaining projects is in progress.
We are currently investigating this issue.
Report: "storage logs ingestion degraded in us-east-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Increased Management API Errors in US West"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Degraded Storage logs ingestion in us-east-1"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Realtime Postgres Changes degraded"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have determined that Postgres Changes was using an incorrect number of connections to check RLS policies. This caused the system to reuse the same replication slot, leading to failed connections against Postgres Changes (a client-side sketch of the affected subscription path is shown after this report). The issue has been identified and we're deploying corrective measures.
We are currently investigating this issue.
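For reference, a minimal sketch of a Postgres Changes subscription as seen from supabase-js v2, the path that was degraded in this incident; the project URL, key, and "todos" table are placeholders:

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://YOUR_PROJECT_REF.supabase.co",
  "YOUR_ANON_KEY",
);

// Subscribe to changes on a hypothetical "todos" table.
const channel = supabase
  .channel("table-changes")
  .on(
    "postgres_changes",
    { event: "*", schema: "public", table: "todos" },
    (payload) => console.log("change received:", payload),
  )
  .subscribe((status) => {
    // During the incident, subscriptions could fail at this point because
    // replication slots were being reused incorrectly on the server side.
    if (status === "CHANNEL_ERROR") console.error("subscription failed");
  });
```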
Report: "Branching unavailable"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
A fix has been implemented and we are monitoring the results.
The branching service is unavailable due to an upstream issue with fly.io (https://status.flyio.net/incidents/7lfnb87h43hf) impacting internal service connectivity and workload provisioning.
Branching service was unavailable due to an upstream issue with fly.io (https://status.flyio.net/). The upstream issue has now been resolved and branching should be available once again.
We are currently investigating this issue.
Report: "Compute capacity issues observed in the eu-west-3b availability zone"
Last update: This incident has been resolved.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the eu-west-3b availability zone. We have disabled the ability to restart projects or perform compute or database version upgrades within eu-west-3b. Project provisioning in eu-west-3, as well as all operations for projects located in the eu-west-3a and eu-west-3c availability zones, remains available.
We are currently investigating this issue.
Report: "Compute capacity issues observed in the eu-west-3b availability zone"
Last updateThis incident has been resolved.
We have identified an issue with our cloud provider regarding insufficient compute capacity in the eu-west-3b availability zone. We have disabled the ability to restart projects, compute or database version upgrades within eu-west-3b. Project provisioning in eu-west-3, as well as all operations for projects located in the eu-west-3a and eu-west-3c availability zones are available.
We are currently investigating this issue.
Report: "Compute capacity issues encountered in eu-west-3"
Last update: This incident has been resolved.
We have observed an increase in compute capacity for eu-west-3b. Project restarts, compute upgrades, and database version upgrades have been re-enabled. We are monitoring available capacity.
We have isolated the issue to the eu-west-3b availability zone. The ability to restart projects or perform compute or database version upgrades within eu-west-3b is disabled at this time. Project creation and unpausing have been re-enabled for eu-west-3, as well as restart and upgrade operations for projects located in the eu-west-3a and eu-west-3c availability zones. We are monitoring compute capacity availability for the eu-west-3b availability zone.
We have identified an issue with our cloud provider reporting insufficient compute capacity in the eu-west-3 region. We have disabled the ability to create new projects, restart projects, or make compute upgrades in the eu-west-3 region until the observed capacity issues are resolved.
We are currently investigating this issue.