Historical record of incidents for Atlassian Bitbucket
Report: "Intermittent issues affecting Bitbucket cloud"
Last update: We’re experiencing a recurrence of the intermittent issue affecting Bitbucket cloud. Our team is working diligently to resolve this issue, and we’ll keep you posted with further updates.
Report: "Intermittent issues affecting Bitbucket"
Last update: We are currently experiencing intermittent issues with Bitbucket. Our team is working diligently to resolve this issue, and we'll keep you posted with further updates.
Report: "Customers may experience delays or failures receiving emails"
Last update: We were experiencing cases of degraded performance for outgoing emails from Confluence, Jira Work Management, Jira Service Management, Jira, Opsgenie, Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas, Compass, and Loom Cloud customers. The system is recovering and mail is being processed normally as of 16:45 UTC. We will continue to monitor system performance and will provide more details within the next hour.
Report: "Bitbucket - Steps are queued and delayed from starting"
Last update: On May 30th, there was an issue where Bitbucket pipeline steps were queued and delayed from starting. This problem has now been resolved, and the service operates normally for all customers.
The Bitbucket issues, where steps were queued and delayed, have been mitigated. Services are now functioning normally for all affected customers. We will monitor it closely to ensure stability.
The issue has been identified and a fix is being implemented.
We are investigating an issue affecting Bitbucket, where steps are queued and delayed from starting. Our team is working diligently to resolve this issue and restore services as quickly as possible. We'll keep you posted with further updates.
Report: "Bitbucket - Steps are queued and delayed from starting"
Last update: We are investigating an issue affecting Bitbucket, where steps are queued and delayed from starting. Our team is working diligently to resolve this issue and restore services as quickly as possible. We'll keep you posted with further updates.
Report: "Users receiving errors when attempting to load BitBucket"
Last update: Our team has mitigated the issue that caused error messages when loading Bitbucket. Bitbucket should now function correctly for all previously impacted users.
Our team has mitigated the issue that caused error messages when loading Bitbucket. We are continuing to monitor at this time to ensure performance has been restored and to resolve the root cause.
We are aware that some users are experiencing issues loading Bitbucket and may be receiving errors such as '500 internal server error'. Our team is investigating this issue with urgency and will provide an update as soon as possible.
Report: "Users receiving errors when attempting to load BitBucket"
Last update: We are aware that some users are experiencing issues loading BitBucket and may be receiving errors such as '500 internal server error'. Our team is investigating this issue with urgency and will provide an update as soon as possible.
Report: "Users receiving errors when attempting to load Bitbucket"
Last update: We are aware that some users are experiencing issues loading Bitbucket and may be receiving errors such as '500 internal server error'. Our team is investigating this issue with urgency and will provide an update as soon as possible.
Report: "Bitbucket has degraded performance"
Last update:
### Summary
On May 8, 2025, at 3:26 PM UTC, Bitbucket Cloud experienced website and API latency due to an overloaded primary database. The event was caused by a backfill job running from an internal Atlassian service, which triggered an excessive call volume of expensive queries and pressure on database resources. As a result, the primary database automatically failed over, and Bitbucket services recovered in 15 minutes. Our real-time monitoring detected the incident immediately, and the high-intensity backfill job was stopped. However, following the failover, a backlog of retries from downstream services continued to impact overall database performance. Customers may have seen intermittent errors or website latency during this time. During this period following the failover, the engineering team implemented several strategies to further shed database load, successfully alleviating pressure on resources and improving performance. On May 9th at 11:19 AM UTC, Bitbucket Cloud systems were fully operational.
### **Impact**
The overall impact occurred between May 8th, 2025, at 3:26 PM and May 9th at 11:19 AM UTC on Bitbucket Cloud. The incident resulted in increased latency and intermittent failures across Bitbucket Cloud services, including the website, API, and Bitbucket Pipelines.
### **Root cause**
The issue was caused by an internal high-scale backfill job that triggered excessive load on certain API endpoints, which eventually impacted the database through resource-intensive queries and operations. This led to additional load from retries by dependent services, increasing the total recovery time.
### **Remedial action plan and next steps**
We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified during our testing, as it was related to a specific high-scale backfill job run by an internal component that triggered highly resource-intensive database queries. To prevent this type of incident from recurring, we are prioritizing the following improvement actions:
* Improve database request routing so that more reads go to read replicas instead of the write-primary database.
* Adjust rate limits for internal API endpoints with resource-intensive database operations.
* Optimize database queries so that they can run more efficiently.
* Tune retry policies from downstream services. (An illustrative retry-policy sketch follows this report's updates.)
We apologize to customers whose services were interrupted by this incident, and we are taking immediate steps to improve the platform’s reliability.
Thanks,
Atlassian Customer Support
This incident has been resolved.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
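The post-incident review above lists tuning retry policies from downstream services as one remediation. As an illustration only (not Bitbucket's actual implementation; the function names, delays, and attempt counts are assumptions), here is a minimal sketch of capped exponential backoff with full jitter, which keeps a fleet of retrying clients from piling load back onto a recovering database:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a flaky downstream call with capped exponential backoff and full jitter.

    Illustrative only: the names, delays, and attempt counts are assumptions,
    not values used by Bitbucket Cloud.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Full jitter: sleep a random amount up to the capped exponential delay,
            # so many clients retrying at once do not synchronize into a retry storm.
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, delay))

# Example usage with a hypothetical API client:
# call_with_backoff(lambda: api_client.get("/2.0/repositories"))
```

Full jitter spreads retries uniformly over the backoff window, so clients that failed at the same moment do not retry in lockstep against the recovering primary.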
Report: "Bitbucket Cloud scheduled maintenance for Code Search"
Last update: Bitbucket Cloud will undergo maintenance on Saturday May 17, 2025, from 01:00 to 04:00 UTC. During this maintenance window, customers may experience degraded performance with Code Search. If your request times out, please re-issue your request once the maintenance has concluded. There will be no downtime for any other product services, and this maintenance will not affect users' daily activities.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Some pipeline builds are not triggering"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are investigating an issue with pipeline builds not triggering that is impacting some Bitbucket customers. We will provide more details within the next hour.
Report: "Some pipeline builds are not triggering"
Last update: We are investigating an issue with pipeline builds not triggering that is impacting some Bitbucket customers. We will provide more details within the next hour.
Report: "Bitbucket Cloud - Database Maintenance"
Last update: Bitbucket Cloud will conduct mandatory maintenance on Saturday April 12, 2025, between 16:30 and 19:30 UTC. During the maintenance window, users will experience intermittent degradation, and any events that rely on webhooks may need to be manually re-triggered post-maintenance. Users should be able to interact with the Bitbucket UI throughout the maintenance window. We apologize for the short notice and appreciate your understanding.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "Bitbucket has degraded performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are still investigating reports of performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are still investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Report: "Bitbucket has degraded performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are still investigating reports of performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are still investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Report: "Bitbucket pipelines showing error when running a build."
Last update: The issue where users running builds would receive an error message should now be resolved. This issue was impacting only those using self-hosted runners. Builds are expected to have still run successfully despite this error message in the meantime.
We are aware of an issue where users are receiving an error message when running a build. Although we believe builds are still running successfully, our team is investigating with urgency and an update will be provided when available.
Report: "Bitbucket pipelines showing error when running a build."
Last update: The issue where users running builds would receive an error message should now be resolved. This issue was impacting only those using self-hosted runners. Builds are expected to have still run successfully despite this error message in the meantime.
We are aware of an issue where users are receiving an error message when running a build. Although we believe builds are still running successfully, our team is investigating with urgency and an update will be provided when available.
Report: "Bitbucket has degraded performance"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
We have mitigated the problem and are currently monitoring the results.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Report: "Bitbucket website and git operations down for some customers"
Last update: Between 11/Mar/25 23:02 UTC and 11/Mar/25 23:59 UTC, some Bitbucket Cloud customers experienced issues accessing our website, API, and git services. We have mitigated the problem and will take steps to avoid this issue in the future.
We are monitoring an identified fix and seeing signs of recovery.
We are continuing to investigate this issue.
Some customers are experiencing issues with our web and git operations. We are investigating.
Report: "Bitbucket has degraded performance"
Last update:
### SUMMARY
On February 11, 2025, between 15:41 and 16:26 UTC, Atlassian customers using Bitbucket Cloud experienced workspace access errors (HTTP 404) when attempting to access the website, API, and Git over HTTPS/SSH. The event was triggered by a failure in our feature flagging service, which inadvertently blocked some users from core services. The incident was detected within eight minutes by automated monitoring and was resolved 45 minutes later once a change to a feature flag configuration had been fully deployed.
### **IMPACT**
A subset of Bitbucket Cloud users were unable to access their workspace. When trying to access their Bitbucket cloud repository through the website, API, or CLI, these users would have seen a 404 error message.
### **ROOT CAUSE**
An upstream failure in a feature flagging service resulted in Bitbucket’s application logic not working correctly. This resulted in access errors for a subset of customers.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. While we have a number of testing and preventative processes in place, this specific failure scenario wasn’t identified during testing. We are prioritizing the following improvement actions designed to avoid repeating this type of incident:
* Fixing the root cause of the bug in our feature flag service
* Improving Bitbucket’s fallback mechanisms and handling of errors in feature flags (an illustrative fallback sketch follows this report's updates)
* Improving test coverage and war gaming failures with core dependencies.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
Between 11/Feb/25 3:41 PM UTC and 11/Feb/25 4:26 PM UTC, some Bitbucket Cloud customers experienced degraded performance and 400 errors. We have mitigated the problem and will take steps to avoid this issue in the future. We will be publishing a public PIR on this incident on this status page once it becomes available. The issue has been resolved and all services are operating normally.
We are investigating reports of intermittent errors and performance degradation for some Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
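The remediation items above include improving Bitbucket's fallback mechanisms and handling of errors in feature flags. Below is a minimal sketch of that general idea, assuming a hypothetical flag client (this is not Atlassian's feature-flag service or API): evaluation falls back to a safe default whenever the flag service fails, so an upstream flag outage degrades gracefully instead of blocking core services.

```python
def evaluate_flag(flag_client, flag_key, default):
    """Return a feature flag value, falling back to a safe default on any failure.

    `flag_client` and its `get_value` method are hypothetical; the point is that
    a flag-service outage should degrade to known-good behaviour rather than
    blocking access to core services.
    """
    try:
        return flag_client.get_value(flag_key)
    except Exception:
        # Flag service unreachable or returned something unusable: behave as if
        # the flag were set to its safe default instead of failing the request.
        return default

# Example: keep workspace access enabled if the flag service is down.
# allow_access = evaluate_flag(flags, "workspace-access-check", default=True)
```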
Report: "Degraded website performance for some customers."
Last update: After monitoring, this incident is now resolved. If you continue to see issues, please ensure you are not using a stale/old tab and are loading a fresh version of Bitbucket in your browser.
We have rolled out a fix for this issue and are seeing recovery for Bitbucket users. Please refresh the page and it should fix the issue. We are currently monitoring the fix and further investigating the root cause.
We've identified an issue with some customers' DNS configuration not resolving Atlassian domains correctly. If you are experiencing errors accessing Bitbucket, we advise allowlisting the following domains in your DNS configuration: https://support.atlassian.com/organization-administration/docs/ip-addresses-and-domains-for-atlassian-cloud-products/ (a small diagnostic sketch follows this report's updates). We are still investigating the issue with vendors and will update the statuspage as we learn more.
We've identified an issue with some customers' DNS configuration that can be blocking some Bitbucket assets. We are still investigating and will update the statuspage with more information shortly.
We are currently investigating an issue impacting some customers who are seeing intermittent errors accessing the Bitbucket.org website.
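The update above recommends allowlisting Atlassian domains in DNS configuration. As a small diagnostic sketch (the two hostnames below are examples only; the authoritative list is on the linked Atlassian support page), this checks whether the locally configured resolver can resolve the relevant hosts:

```python
import socket

# Example hosts only; consult the Atlassian support page linked above for the full list.
HOSTS = ["bitbucket.org", "atlassian.com"]

def check_resolution(hosts):
    """Report whether each host resolves with the locally configured DNS resolver."""
    for host in hosts:
        try:
            addrs = {info[4][0] for info in socket.getaddrinfo(host, 443)}
            print(f"{host}: resolves to {sorted(addrs)}")
        except socket.gaierror as exc:
            print(f"{host}: DNS resolution FAILED ({exc})")

if __name__ == "__main__":
    check_resolution(HOSTS)
```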
Report: "Bitbucket Cloud web, api, and Pipelines service outage"
Last update:
### Summary
On January 21, 2025, between 14:02 and 17:49 UTC, Atlassian customers using Bitbucket Cloud were unable to use the website, API, or Pipelines. The event was triggered by write contention in a high traffic database table. The incident was detected within eight minutes. We then worked to both throttle traffic and improve query performance, which allowed services to recover. The total time to resolution was about three hours and 47 minutes.
### **IMPACT**
The overall impact was between 14:02 and 17:49 UTC, affecting Bitbucket Cloud. This impacted customers globally, and they were unable to use the website, APIs, or Pipelines services. Git hosting (SSH) was unaffected.
### **ROOT CAUSE**
The issue was caused by an increase in API traffic triggering write contention on a high-traffic table, resulting in increased CPU usage and degraded database performance. This ultimately impacted the availability of core services (web, API, and Pipelines).
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified because the code path being triggered does not commonly experience this type of traffic. We are prioritizing the following improvement actions to avoid repeating this type of incident:
* Running additional maintenance on core database tables
* Added throttling on write-heavy operations (an illustrative throttling sketch follows this report's updates)
To improve service resilience and recovery time for our environments, we will implement additional preventative measures such as:
* Improving database observability to isolate failures
* Continuing to shard data to better distribute traffic load
We apologize to customers whose services were impacted by this incident and are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
Earlier we experienced database contention on high traffic tables, which resulted in website, API, and Pipelines outages. All Bitbucket services are now operational. A full post mortem will be published.
All Git, Web, API and Pipelines services are now operational. We are continuing to monitor database and Pipelines reliability.
We have identified the root cause of the database issue that impacted Bitbucket website and Git operations; this has been mitigated now. We are experiencing Pipelines degradation that we are working to resolve.
We have identified the root cause of the database issue and have mitigated the problem. We are now monitoring closely.
We are investigating an issue with a saturated Bitbucket database that impacts all Bitbucket operations. We will provide more details within the next 30 minutes.
We are investigating an issue with a saturated Bitbucket database that impacts all Bitbucket operations. We will provide more details within the next hour.
We are still investigating an issue with Bitbucket Web and Git operations that is impacting Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
We are investigating an issue with Bitbucket Web and Git operations that is impacting Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
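The post-incident review above names added throttling on write-heavy operations as a remediation for write contention on a high-traffic table. Below is a generic token-bucket sketch of that technique; it is illustrative only, the rate and burst values are assumptions, and a production service would usually enforce limits in shared infrastructure (for example at an API gateway) rather than in-process.

```python
import time

class TokenBucket:
    """Simple token-bucket throttle for write-heavy operations (illustrative only)."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow roughly 50 write operations per second with bursts of 100.
# bucket = TokenBucket(rate_per_sec=50, burst=100)
# if not bucket.allow():
#     raise RuntimeError("429: write operation throttled")
```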
Report: "Increased error rate in Bitbucket cloud APIs"
Last update: This incident has been resolved.
A solution has been implemented, and we're monitoring the fix to confirm resolution.
Bitbucket Cloud support has observed a small increase in errors with all Bitbucket cloud APIs. We are investigating the issue and looking into the root cause.
Report: "Issues with attachments, including viewing previews, downloading and uploading"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We have identified the issue and are working on a fix.
Report: "Unable to invite new users due to missing recaptcha token"
Last update: Between 08:00 UTC and 11:54 UTC, we experienced problems with inviting new users for Cloud customers on admin.atlassian.com. The issue has been resolved and the service is operating normally.
We continue to work on resolving the invitation workflow in admin.atlassian.com. We have identified the root cause and performed changes in the environment to mitigate the issue.
We are investigating reports of intermittent errors for some Atlassian customers when they are trying to invite users using their admin panels (admin.atlassian.com). We will provide more details once we identify the root cause.
Report: "Unable to connect to Bitbucket Cloud via SSH"
Last update:
### **SUMMARY**
On September 10, 2024, between 6:34 PM UTC and 7:19 PM UTC, some Atlassian customers experienced an issue preventing users from connecting to Bitbucket Cloud via SSH. The issue arose from a change in how we determined IP allow lists, which inadvertently blocked access for customers with these controls enabled. The incident was promptly identified through our monitoring systems, and our teams initiated response protocols to mitigate the issue.
### **IMPACT**
The incident only affected customers who had IP whitelisting enabled on their Bitbucket Cloud accounts. These customers experienced difficulties connecting via SSH due to the unintended blocking caused by a change in the IP allow list computation. The service interruption lasted approximately 45 minutes, during which time affected users were unable to access their repositories through SSH.
### **ROOT CAUSE**
A change to IP allow list evaluation was incompatible with a new Bitbucket Cloud networking configuration. This inadvertently blocked SSH access for customers with specific allow list restrictions enabled. (An illustrative allow-list check follows this report's updates.)
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
To restore the SSH service, the team quickly rolled back the release responsible for the IP allow list issue. We know that outages impact your productivity. We are prioritizing the following improvement actions to avoid repeating this type of incident:
* Improve monitoring coverage of IP allowlisting;
* Add additional tests and deployment validation checks for changes to IP allowlist configurations.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Atlassian Customer Support
We experienced connectivity issues with Bitbucket Cloud via SSH, which only affected customers using IP allow listing. The service was unavailable for approximately 40 minutes. However, the issue was identified and resolved, and service was restored around 19:23 UTC.
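The post-incident review above describes a regression in how IP allow lists were evaluated for SSH access. Below is a minimal sketch of the general technique using Python's ipaddress module; the CIDR ranges are documentation placeholders, not real customer configuration, and this is not Bitbucket Cloud's implementation.

```python
import ipaddress

def ip_is_allowed(client_ip, allowed_cidrs):
    """Return True if client_ip falls inside any configured allow-list network.

    Placeholder logic for illustration only.
    """
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# Hypothetical allow list covering an office range and a single bastion host.
allowed = ["203.0.113.0/24", "198.51.100.7/32"]
print(ip_is_allowed("203.0.113.42", allowed))   # True
print(ip_is_allowed("192.0.2.10", allowed))     # False
```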
Report: "Users are experiencing reCaptcha errors while signing up"
Last update: This issue has been resolved.
We have identified the root cause and the issue appears to be resolved.
Users attempting to sign up are encountering reCaptcha errors that are preventing a successful signup.
Report: "Bitbucket Cloud website performance degradation"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing performance degradation with Bitbucket Cloud. Users may encounter slower than expected response times when accessing the Bitbucket Cloud website.
Report: "Bitbucket Webhooks and Pipelines on push and pull request not being triggered"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating an issue related to Connect Webhooks not being delivered. This had a downstream impact on Bitbucket Pipelines on push not being triggered. Manually triggered and scheduled pipelines are still working.
Report: "Bitbucket website is slow to load"
Last update: Between 2024-08-07 5:20 AM UTC and 2024-08-07 5:40 AM UTC, we experienced degraded performance with the Bitbucket website. The issue has been resolved and the service is operating normally.
We are currently experiencing an issue where Bitbucket website is slow to load. Our engineering team is actively investigating the root cause and working to resolve the issue as quickly as possible.
Report: "Bitbucket Pipelines Failing to Start"
Last update: The issue has been resolved and the service is operating normally.
We have identified the cause and have mitigated the problem. We are now monitoring this closely.
We are currently experiencing an issue where Bitbucket Pipelines are failing to start. Our engineering team is actively investigating the root cause and working to resolve the issue as quickly as possible.
Report: "Pipelines failing to start."
Last update: Between 2024-07-24 3:30 AM UTC and 2024-07-24 6:40 AM UTC, we experienced degraded performance with API and Pipelines for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
Both Bitbucket API and Pipelines services have recovered. We'll continue to monitor and provide another update in 20min.
A fix has been implemented and API performance has begun to recover. Some Pipelines are still delayed but we are seeing recovery. We'll continue to monitor and provide an update within the hour.
We've identified an issue causing increased API error rate and delayed Pipelines. We are working on implementing a fix to restore service and will post a follow up within one hour.
We are currently investigating an issue affecting pipelines creation due to an increased API error rate, we're investigating and will provide a follow-up soon.
Report: "Bitbucket Cloud services degraded"
Last update: This incident has been resolved.
The impact to most Bitbucket Cloud operations has been resolved. We are continuing to monitor the impact to Bitbucket Pipelines.
We are currently investigating an issue that is impacting Bitbucket Cloud.
Report: "Some users may experience delays in receiving email notifications"
Last update: Between 12:00am on 9th July and 08:00am on 10th July, we experienced email deliverability issues for some recipient domains for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, and Jira Product Discovery. The issue has been resolved and future emails will flow normally.
We continue to work on resolving the email notification issues for Confluence, Jira Work Management, Jira Service Management, Jira, Trello, Atlassian Bitbucket, and Jira Product Discovery. We have identified the root cause.
Report: "Some products are hard down"
Last update: Between 03-07-2024 20:08 UTC and 03-07-2024 20:31 UTC, we experienced downtime for Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We have mitigated the problem and continue looking into the root cause. The outage was between 8:08pm UTC and 8:31pm UTC on 03/07. We are now monitoring closely.
We are investigating an issue with <FUNCTIONALITY IMPACTED> that is impacting <SOME/ALL> Atlassian, Atlassian Partners, Atlassian Support, Confluence, Jira Work Management, Jira Service Management, Jira, Opsgenie, Atlassian Developer, Atlassian (deprecated), Trello, Atlassian Bitbucket, Guard, Jira Align, Jira Product Discovery, Atlas, Atlassian Analytics, and Rovo Cloud customers. We will provide more details within the next hour.
Report: "Intermittent error accessing content"
Last update: Between 2024-06-20 22:04 UTC and 2024-06-20 22:28 UTC, we experienced an intermittent issue with users accessing services for some Atlassian Cloud customers. The issue has been resolved and the service is operating normally.
We have identified the root cause of the intermittent errors and have mitigated the problem. We are now monitoring closely.
We are investigating an intermittent issue with accessing Atlassian Cloud services that is impacting some Atlassian Cloud customers. We will provide more details once we identify the root cause.
Report: "Error responses across multiple Cloud products"
Last update:
### Summary
On June 3rd, between 09:43pm and 10:58pm UTC, Atlassian customers using multiple products were unable to access their services. The event was triggered by a change to the infrastructure API Gateway, which is responsible for routing the traffic to the correct application backends. The incident was detected by the automated monitoring system within five minutes and mitigated by correcting a faulty release feature flag, which put Atlassian systems into a known good state. The first communications were published on the Statuspage at 11:11pm UTC. The total time to resolution was about 75 minutes.
### **IMPACT**
The overall impact was between 09:43pm and 10:17pm UTC, with the system initially in a degraded state, followed by a total outage between 10:17pm and 10:58pm UTC.
_The Incident caused service disruption to customers in all regions and affected the following products:_
* Jira Software
* Jira Service Management
* Jira Work Management
* Jira Product Discovery
* Jira Align
* Confluence
* Trello
* Bitbucket
* Opsgenie
* Compass
### **ROOT CAUSE**
A policy used in the infrastructure API gateway was being updated in production via a feature flag. The combination of an erroneous value entered in a feature flag, and a bug in the code resulted in the API Gateway not processing any traffic. This created a total outage, where all users started receiving 5XX errors for most Atlassian products. Once the problem was identified and the feature flag updated to the correct values, all services started seeing recovery immediately.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue wasn’t identified because the change did not go through our regular release process and instead was incorrectly applied through a feature flag. We are prioritizing the following improvement actions to avoid repeating this type of incident:
* Prevent high-risk feature flags from being used in production
* Improve the policy changes testing
* Enforcing longer soak time for policy changes
* Any feature flags should go through progressive rollouts to minimize broad impact
* Review the infrastructure feature flags to ensure they all have appropriate defaults
* Improve our processes and internal tooling to provide faster communications to our customers
We apologize to customers whose services were affected by this incident and are taking immediate steps to address the above gaps.
Thanks,
Atlassian Customer Support
Between 22:18 UTC and 22:56 UTC, we experienced errors for multiple Cloud products. The issue has been resolved and services are operating normally.
We are investigating an issue with error responses for some Cloud customers across multiple products. We have identified the root cause and expect recovery shortly.
Report: "Bitbucket Pipelines degraded experience"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
The team continues to investigate an issue impacting Bitbucket Pipelines, but believes impact is limited to self-hosted runners.
We are currently investigating an issue impacting Bitbucket Pipelines, including self-hosted and cloud runners.
Report: "Partially Degraded Experience Running Pipelines"
Last update: This incident has been resolved and Pipelines are back to running normally.
Backlog of Pipelines has been successfully processed and we are running normally again. The team is continuing to monitor the situation.
As a result of this incident, we are now processing a backlog of Pipelines which is causing slowness. The team is working on mitigating this to process remaining Pipelines from the initial incident.
An issue was identified with the ability to parse yml impacting some customers' ability to start or complete Pipelines. It has since been resolved and the team is monitoring for further issues.
Report: "Degraded Performance of Bitbucket Website and Pipelines"
Last update: This incident has been resolved.
A fix has been applied and performance restored. The team is monitoring to ensure no further recurrence.
We are currently investigating an issue impacting our database that is slowing most functionality across Bitbucket and Pipelines.
Report: "Git LFS operations aren't working."
Last update: This incident has been resolved.
A fix has been implemented and deployed. Operations over SSH should be working as expected. We will continue to monitor this situation.
We are continuing to investigate the issue. As a workaround, we recommend users attempt to use HTTP, as this seems to be impacting SSH.
The Bitbucket Cloud team is investigating an issue with Git LFS operations. We're working on identifying the root cause and will provide an update soon.
Report: "Delay in starting pipelines"
Last update: The incident affecting pipelines has been resolved.
We have identified a bottleneck in a service and scaled up the underlying infrastructure; we are monitoring as the backlog clears.
- We've observed a delay in pipelines starting once triggered.
- We're isolating a root cause and will implement a fix as soon as possible.
Report: "Bitbucket Cloud service degradation"
Last update:
### Summary
On March 11, 2024, between 20:29 UTC and 21:41 UTC, Atlassian customers using Bitbucket Cloud faced degradation to its website and APIs. This impact was caused by an issue with Bitbucket’s database, resulting in connection pools becoming saturated, increasing response times, and a ramp-up of requests timing out completely.
### **IMPACT**
Customers who were impacted experienced increased latency when accessing the [bitbucket.org](http://bitbucket.org/) website and APIs during the duration of the incident. Git requests over HTTPS and SSH were also affected.
### **ROOT CAUSE**
The incident was caused by a bug in the version of database software being used. With Bitbucket’s query patterns, if certain processes do not run frequently enough, eventually issues can arise that can result in poor query planner performance. Due to this bug, our process configuration, which has been tuned to our specific workload previously, is no longer proving to be effective. While the appropriate tuning is determined, we have implemented a system to trigger that process as soon as any issues are detected. We are confident this will prevent a repeat incident while we determine an appropriate threshold and cadence.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. We are prioritizing the following improvement actions to reduce recovery time, limit impact, and avoid repeating these types of incidents in the future:
* Vacuuming immediately when the defect is detected.
* Appropriately tuning autovacuum settings to meet the requirements of our workload. (An illustrative tuning sketch follows this report's updates.)
* Upgrading our database version as soon as the fix becomes available.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified degraded performance and an increased error rate; the team is currently working to mitigate.
We're investigating an issue with high resource utilisation on one of our databases. Customers may experience a degradation in performance while we work on a resolution.
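The post-incident review above attributes the incident to infrequent vacuuming degrading the query planner, and lists immediate vacuuming and autovacuum tuning as remediations. Below is a hedged sketch of per-table PostgreSQL autovacuum tuning and an on-demand VACUUM via psycopg2; the connection string, table name, and thresholds are placeholders, not recommended values or Bitbucket's configuration.

```python
import psycopg2

# Placeholder connection string and table name.
conn = psycopg2.connect("dbname=bitbucket_example")
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Make autovacuum fire after ~1% of the table changes instead of the
    # default 20%; the values here are illustrative, not recommendations.
    cur.execute("""
        ALTER TABLE example_high_traffic_table
        SET (autovacuum_vacuum_scale_factor = 0.01,
             autovacuum_analyze_scale_factor = 0.01)
    """)
    # Run a manual vacuum/analyze immediately, e.g. when monitoring detects
    # degraded query-planner behaviour.
    cur.execute("VACUUM (ANALYZE) example_high_traffic_table")

conn.close()
```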
Report: "Bitbucket Pipelines have a delay in triggering builds"
Last update: This incident has been resolved.
The issue is resolved and pipelines triggers are working as expected. We are monitoring to make sure it is fully functional.
The issue still remains unresolved; we are working on a new approach to fix the issue.
A rollback is in-progress, resolution expected in approximately 30 minutes. A workaround option is to manually trigger a build in the Bitbucket Cloud UI for the repositories that have not triggered automatically.
We are continuing to work on a fix for this issue.
Bitbucket Cloud pipeline triggers are not working as expected. A root cause has been identified and a recent change is being rolled back. The impact will be a delay in pipelines starting until resolved; we will provide a follow-up shortly.
Report: "Admin Portal Feature Access Issue"
Last update: Between 6:30 AM UTC and 9:50 AM UTC, we experienced failures in accessing some features from the Admin Portal. The issue has been resolved and the service is operating normally.
We are investigating an issue causing failures in accessing some features from the Admin Portal, which is impacting some of our Cloud customers. We have identified the root cause and anticipate recovery shortly.
Report: "Some paginated queries in Forge hosted storage kept repeating the last page"
Last update:
We have identified and resolved a problem with Forge hosted storage, where some paginated queries kept repeating the last page. The incident was detected by our internal monitoring and was resolved quickly after detection by reverting the deployment. Activating changes recently made to the query cursors for paginated queries introduced a bug that impacted some apps. A small number of requests were impacted over a 16-minute window, while the incident lasted. (A generic pagination sketch follows this report.)
Timeline:
- 25/Mar/24 10:08 p.m. UTC - Impact started, when the changes were deployed to production
- 25/Mar/24 10:09 p.m. UTC - Incident was detected
- 25/Mar/24 10:24 p.m. UTC - Incident was resolved and impact ended
The impact of this incident has been completely mitigated and our monitoring tools confirm that query operations are back to the pre-incident behaviour. We have also resolved the underlying bug and deployed the fix to production, completely eliminating the cause of this incident. We apologise for any inconvenience this may have caused to our customers and our developer community.
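The update above describes paginated queries that kept repeating the last page after a cursor change. Below is a generic sketch of cursor-based pagination (the `query_page` callable is hypothetical, not the Forge storage API), showing the termination condition such a bug would violate:

```python
def fetch_all(query_page):
    """Drain a cursor-paginated result set.

    `query_page(cursor)` is a hypothetical callable returning (items, next_cursor);
    iteration must stop when `next_cursor` is falsy, otherwise a backend that
    keeps echoing the final cursor would repeat the last page forever.
    """
    items, cursor = [], None
    while True:
        page, cursor = query_page(cursor)
        items.extend(page)
        if not cursor:          # no further pages
            return items

# Example with an in-memory fake backend:
# data = [list(range(i, i + 3)) for i in range(0, 9, 3)]
# def fake(cursor):
#     i = cursor or 0
#     return data[i], (i + 1 if i + 1 < len(data) else None)
# print(fetch_all(fake))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```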
Report: "Bitbucket Cloud website performance degraded"
Last update:
### Summary
On February 22, 2024, between 7:22 UTC and 13:30 UTC, Atlassian customers using Bitbucket Cloud faced degradation to its website and APIs. This was caused by the vacuum process not being run frequently enough on our high-traffic database tables, which impaired the database’s ability to handle requests. This resulted in connection pools becoming saturated, response times increasing, and a ramp-up of requests timing out completely. After the database recovered at 13:30 UTC, Bitbucket Pipelines experienced build scheduling delays as it processed the backlog of jobs. Additional resources were added to Bitbucket Pipelines and the backlog was cleared in full by 17:30 UTC.
### **IMPACT**
Customers who were impacted experienced significant delays with running Bitbucket Pipelines and increased latency when accessing the [bitbucket.org](http://bitbucket.org/) website and APIs during the duration of the incident. Git requests over HTTPS and SSH were unaffected.
### **ROOT CAUSE**
The incident was caused by an issue during the routine autovacuuming of our active database tables, which impaired its ability to serve requests. This led to slowdowns that impacted a variety of Bitbucket services, including the queuing of a large backlog of unscheduled pipelines.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. We are prioritizing the following improvement actions to reduce recovery time, limit impact, and avoid repeating these types of incidents in the future:
* Reconfigure vacuuming threshold for high write activity database tables.
* Adjust alert thresholds to proactively catch this behavior earlier and reduce potential impact.
* Tuning autoscaling and load shedding behavior for Pipelines services and increasing build runner capacity.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate the cause of the degraded performance.
We are continuing to investigate this issue.
We are continuing to investigate the cause of the degraded performance.
We are aware of an incident impacting the performance of the Bitbucket Cloud website. An update will be provided soon.
Report: "Pipelines stuck in pending state"
Last update:
### Summary
On February 22, 2024, between 7:22 UTC and 13:30 UTC, Atlassian customers using Bitbucket Cloud faced degradation to its website and APIs. This was caused by the vacuum process not being run frequently enough on our high-traffic database tables, which impaired the database’s ability to handle requests. This resulted in connection pools becoming saturated, response times increasing, and a ramp-up of requests timing out completely. After the database recovered at 13:30 UTC, Bitbucket Pipelines experienced build scheduling delays as it processed the backlog of jobs. Additional resources were added to Bitbucket Pipelines and the backlog was cleared in full by 17:30 UTC.
### **IMPACT**
Customers who were impacted experienced significant delays with running Bitbucket Pipelines and increased latency when accessing the [bitbucket.org](http://bitbucket.org/) website and APIs during the duration of the incident. Git requests over HTTPS and SSH were unaffected.
### **ROOT CAUSE**
The incident was caused by an issue during the routine autovacuuming of our active database tables, which impaired its ability to serve requests. This led to slowdowns that impacted a variety of Bitbucket services, including the queuing of a large backlog of unscheduled pipelines.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. We are prioritizing the following improvement actions to reduce recovery time, limit impact, and avoid repeating these types of incidents in the future:
* Reconfigure vacuuming threshold for high write activity database tables.
* Adjust alert thresholds to proactively catch this behavior earlier and reduce potential impact.
* Tuning autoscaling and load shedding behavior for Pipelines services and increasing build runner capacity.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
Services have recovered and are operational
Services are in the process of recovering while we continue to monitor
The issue has been identified. We are working towards resolution.
We are still receiving some reports of Pipelines queueing or delays, requiring further investigation
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating an issue preventing Bitbucket Pipelines from starting for some customers.
Report: "Investigating new product purchasing"
Last update: Between 28th Feb 2024 23:15 UTC and 29th Feb 2024 02:41 UTC, we experienced an issue with new product purchasing for all products. All new sign-up products have been successfully provisioned, and we have confirmed the issue has been resolved and the service is operating normally.
We are investigating an issue with new product purchasing that is impacting all products. Customers adding new cloud products may have experienced a long waiting page or an error page after attempting to add a product. We have mitigated the root cause and are working to resolve impact for customers who attempted to add a product during the impact period. We will provide more details within the next hour.
Report: "Service Disruptions Affecting Atlassian Products"
Last update:
### **Summary**
On February 14, 2024, between 20:05 UTC and 23:03 UTC, Atlassian customers on the following cloud products encountered a service disruption: Access, Atlas, Atlassian Analytics, Bitbucket, Compass, Confluence, Ecosystem apps, Jira Service Management, Jira Software, Jira Work Management, Jira Product Discovery, Opsgenie, StatusPage, and Trello. As part of a security and compliance uplift, we had scheduled the deletion of unused and legacy domain names used for internal service-to-service connections. Active domain names were incorrectly deleted during this event. This impacted all cloud customers across all regions. The issue was identified and resolved through the rollback of the faulty deployment to restore the domain names and Atlassian systems to a stable state. The time to resolution was two hours and 58 minutes.
### **IMPACT**
External customers started reporting issues with Atlassian cloud products at 20:52 UTC. The impact of the failed change led to performance degradation or in some cases, complete service disruption. Symptoms experienced by end-users were unsuccessful page loads and/or failed interactions with our cloud products.
### **ROOT CAUSE**
As part of a security and compliance uplift, we had scheduled the deletion of unused and legacy domain names that were being used for internal service-to-service connections. Active domain names were incorrectly deleted during this operation.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
We know that outages impact your productivity. The detection was delayed because existing testing & monitoring focused on service health rather than the entire system’s availability. To prevent a recurrence of this type of incident, we are implementing the following improvement measures:
* Canary checks to monitor the entire system availability. (An illustrative canary sketch follows this report's updates.)
* Faster rollback procedures for this type of service impact.
* Stricter change control procedures for infrastructure modifications.
* Migration of all DNS records to centralised management and stricter access controls on modification to DNS records.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.
Thanks,
Atlassian Customer Support
We experienced increased errors on Confluence, Jira Work Management, Jira Service Management, Jira Software, Opsgenie, Trello, Atlassian Bitbucket, Atlassian Access, Jira Align, Jira Product Discovery, Atlas, Compass, and Atlassian Analytics. The issue has been resolved and the services are operating normally.
We have identified the root cause of the Service Disruptions affecting all Atlassian products and have mitigated the problem. We are now monitoring this closely.
We have identified the root cause of the increased errors and have mitigated the problem. We continue to work on resolving the issue and monitoring this closely.
We are investigating reports of intermittent errors for all Cloud Customers across all Atlassian products. We will provide more details once we identify the root cause.
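The post-incident review above lists canary checks that monitor the entire system's availability among the remediations. Below is a minimal sketch of an end-to-end canary probe using the requests library; the URLs and threshold are placeholders, and a real canary would exercise authenticated, user-visible flows rather than simple GETs.

```python
import requests

# Placeholder endpoints for illustration only.
ENDPOINTS = [
    "https://bitbucket.org/status",
    "https://www.atlassian.com/",
]

def run_canary(endpoints, timeout=5):
    """Probe each endpoint and report failures, mimicking a whole-system canary."""
    failures = []
    for url in endpoints:
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code >= 500:
                failures.append((url, f"HTTP {resp.status_code}"))
        except requests.RequestException as exc:
            failures.append((url, str(exc)))
    return failures

if __name__ == "__main__":
    for url, reason in run_canary(ENDPOINTS):
        print(f"CANARY FAILURE: {url} -> {reason}")
```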
Report: "Increased authentication errors across multiple products"
Last update: Between 2:30 UTC and 4:26 UTC, we experienced increased authentication errors for Confluence, Jira Work Management, Jira Service Management, Jira Software, and Atlassian Bitbucket. The issue has been resolved and the service is operating normally.
We have identified the root cause of the authentication errors and have mitigated the problem. We are now monitoring this closely and will provide further updates within the hour.
We are investigating authentication issues impacting some Confluence, Jira Work Management, Jira Service Management, Jira Software, and Atlassian Bitbucket Cloud customers. We will provide more details within the next hour.
Report: "Bitbucket Pipelines - Unable to pull image errors"
Last update: The issue has been resolved and the service is operating normally. Bitbucket Cloud customers experienced failures when running Pipelines builds on self-hosted runners. Customers received error messages noting "Unable to pull image".
The cause of the issue has been identified and remediated. We are continuing to monitor the issue to ensure that impact has been resolved.
We are investigating an issue with Bitbucket Pipelines builds, some customers may see errors and failed builds when using self-hosted runners. We will update with more details within the next hour.
Report: "User searches failing"
Last update: Between 15:40 UTC and 15:57 UTC, customers experienced intermittent failures when searching for users in Atlassian cloud services: Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, and Compass. The issue has been resolved and the service is operating normally.
We have mitigated the problem. We are now monitoring closely.
We are investigating an issue with our user search service that is impacting the following Atlassian cloud services: Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, and Compass. We will provide more details within the next hour.
Report: "Drop in the success rate of Forge hosted storage API calls"
Last update:
We identified a problem with the Forge hosted storage API calls, which resulted in a drop in invocation success rates in the developer console. The impact of this incident has been mitigated and our monitoring tools confirm that the success rate is back to the pre-incident behaviour. According to our logs, 16 apps were impacted; these apps saw a reduced success rate of storage.get API calls, as listed in https://developer.atlassian.com/platform/forge/runtime-reference/storage-api-basic.
As part of Forge's preparation to support Data Residency, Forge hosted storage has been undergoing a platform and data migration for storing app data. As part of this migration we do comparison checks for data consistency between the old and new platform. The previous incident earlier, https://developer.status.atlassian.com/incidents/9q71ytpjhbtl, had put the data on the new platform out of sync, so comparisons of the data from the old and new platform started showing failures, and the migration logic retries on failures to test for consistency issues. This retry behaviour increased the latency of these requests, which led to 16 apps receiving an increased number of 504 timeout errors. Checking synchronously was identified by the team as a bug and should have been async.
Once the root cause was identified, we moved our backing platform rollout to a previous stage. The rollout is split into several stages. The issues we were having were on our blocking stage, where we make calls to both the old and new platform and wait for both to complete so we can test any performance issues in the new platform before using it as our source of truth. It was in this blocking stage where we had a bug that included waiting on comparisons when it should have been async. To recover, we reverted back to our shadow mode stage. In this stage, all operations to the new platform are asynchronous, including the comparisons that were blocking in the other stage and resulted in timeout issues and 504 errors being sent to apps. This is the state that Forge hosted storage has been in for several months without any problems.
Here is the timeline of the impact:
- On 2024-02-05 at 06:42 PM UTC, impact started when comparisons began happening on out-of-sync data in blocking mode
- On 2024-02-05 at 08:57 PM UTC, impact to the API was detected by our monitoring systems
- On 2024-02-05 at 11:34 PM UTC, the rollout to the new platform was reverted to a known stable state and impact ended
We will release a public incident review (PIR) here in the upcoming weeks for this and the incident that happened earlier, https://developer.status.atlassian.com/incidents/9q71ytpjhbtl. We will detail all that we can about what caused the issue, and what we are doing to prevent it from happening again.
We apologise for any inconvenience this may have caused our customers and our developer community, and we are committed to preventing further issues with our hosted storage capability.
Report: "Not able to access Bitbucket Pipelines UI"
Last update: We experienced problems accessing the Bitbucket Pipelines UI. The issue has been resolved and the service is operating normally.
We are investigating reports of the Bitbucket Pipelines page not being accessible. We will provide more details within the next hour.
Report: "Performance degradation for Forge app invocations"
Last update:
At around 4am UTC, about 40 percent of Forge app invocations experienced high latency, with a portion of the requests failing, during a 15-minute time window. The scaling of the instances was misconfigured following a new deployment of the service; this required manual intervention, which took a few minutes to resolve the issue.
Timeline:
- 2024-01-31 04:00 UTC: impact started
- 2024-01-31 04:03 UTC: incident detected
- 2024-01-31 04:15 UTC: the incident was resolved and the impact ended
This issue is now resolved and Forge is fully operational. We apologize for any inconvenience this may have caused to our customers, partners, and our developer community.
Report: "Outage in Atlassian Intelligence functionality in multiple products"
Last update: Between 23:45 UTC and 00:30 UTC, we experienced an outage in some Atlassian Intelligence features for Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, Atlas, and Compass. The issue has been resolved and the service is operating normally.
We have identified the root cause of the increased errors and have mitigated the problem. We are now monitoring closely.
We are investigating an issue with Atlassian Intelligence that is impacting some Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, Atlas, and Compass Cloud customers. We will provide more details within the next hour.
Report: "Atlassian's cross product user search service is currently degraded."
Last update:
### **SUMMARY**
On Dec 18, 2023, between 12:29 p.m. and 3:35 p.m. UTC, Atlassian's cloud customers using Atlas, Bitbucket Cloud, Compass, Confluence Cloud, Jira Service Management, Jira Software, Jira Work Management, and Jira Product Discovery products were unable to search for users or use the "@mention" functionality. Customers' user search results failed or were delayed as Atlassian's service returning user search results was degraded in several regions. The incident originated from a computationally intensive operation that was triggered multiple times in rapid succession, resulting in degraded performance of Atlassian's user search service across several regions. Notably, customers in the EU west region were most affected. The incident was detected within 2 minutes by automated monitoring, and our team promptly took action by recovering unhealthy systems and scaling up the service's infrastructure temporarily. The resolution process concluded in 3 hours and 06 minutes.
### **IMPACT**
The overall impact was between 12:29 p.m. and 3:35 p.m. UTC on Dec 18, 2023. The incident caused service disruption to cloud customers worldwide. Customers experienced delayed or failed user searches when using the following Atlassian cloud products:
* Atlas
* Bitbucket Cloud
* Compass
* Confluence Cloud
* Jira Service Management
* Jira Software
* Jira Work Management
* Jira Product Discovery
### **ROOT CAUSE**
The incident stemmed from Atlassian's user search service receiving commands to process multiple computationally intensive operations in rapid succession. These operations were directed at the same customer data set, and therefore overloaded resources within a clustered database system, leading to memory exhaustion and subsequent unresponsiveness to user search requests.
### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
To prevent a recurrence of such incidents, we are implementing the following measures:
* Implement a mechanism to queue computationally intensive operations in order to avoid overloading the resources within the systems and process them without impact on customer experience.
* Fine-tune our clustered database settings to mitigate the impact of resource exhaustion on the overall system.
We apologize to customers whose services were affected during this incident; we are taking immediate steps to improve the service's resiliency.
Thanks,
Atlassian Customer Support
It has been resolved. Atlassian's cross product user search is working.
Atlassian's cross product user search service is currently healthy. Searches for users within Atlassian products are working as expected. We are in the process of investigating the root cause of this incident.
Atlassian's cross product user search service is recovering. Searches for users within Atlassian products are returning to normal.
Atlassian's cross product user search service is recovering. Searches for users within Atlassian products are returning to normal.
Atlassian's cross product user search service is recovering. Searches for users within Atlassian products are returning to normal.
We are investigating reports of intermittent errors for <SOME/ALL> Atlassian, Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Align, Jira Product Discovery, Atlas, and Compass Cloud customers. We will provide more details once we identify the root cause.
Report: "Egress connectivity timing out"
Last update: The systems are stable after the fix, and we are monitoring for a specified duration.
The issue was identified and a fix implemented. We are currently monitoring.
We are currently investigating an incident that results in outbound connections from Atlassian cloud in us-east-1 intermittently timing out. This affects Jira, Trello, Confluence, and Ecosystem products. The features affected for these products are those that require opening a connection from Atlassian Cloud to public endpoints on the Internet.
Including Atlassian Developer
We are currently investigating an incident that results in connection timeouts on the service egress proxy. This affects Jira, JSM, Confluence, BitBucket, Trello, and Ecosystem products. The features affected for these products are those that require a connection to service egress.
Report: "Forge Function Invocations outage impacting Smartlinks"
Last update: Forge Invocations had an 8-minute outage between 2023-11-29 03:05:13 UTC and 2023-11-29 03:13:27 UTC, resulting in Smart Links failing. The service has recovered since this time period.