Historical record of incidents for GitHub
Report: "Incident with Actions"
Last update: Customers are currently unable to generate attestations from public repositories due to a broader outage with our partners.
We are investigating reports of degraded performance for Actions
Report: "Some Copilot chat models are failing requests"
Last update: We are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 3.7, Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are currently investigating this issue.
Report: "Incident With Copilot"
Last update: On June 6, 2025, an update to mitigate a previous incident led to automated scaling of the database infrastructure used by Copilot Coding Agent. The service's clients had not been built to handle an extra partition automatically, so they were unable to retrieve data across partitions, resulting in unexpected 404 errors. As a result, approximately 17% of coding sessions displayed an incorrect final state, such as sessions appearing in-progress when they were actually completed. Additionally, some Copilot-authored pull requests were missing timeline events indicating task completion. Importantly, this did not affect Copilot Coding Agent’s ability to finish code tasks and submit pull requests. To prevent similar issues in the future we are taking steps to improve our systems and monitoring.
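A minimal sketch of the client-side gap described above, using a hypothetical session store keyed by session ID (the actual Copilot Coding Agent data store and client names are not public): a client pinned to the partition list it saw at startup returns spurious "not found" results once autoscaling adds a partition, while a client that re-discovers partitions at lookup time keeps working.

```python
# Hypothetical illustration; names such as Partition and SessionStoreClient are invented.
from typing import Dict, List, Optional


class Partition:
    """A single database partition holding a shard of session records."""

    def __init__(self, name: str):
        self.name = name
        self.rows: Dict[str, dict] = {}


class SessionStoreClient:
    def __init__(self, partitions: List[Partition], discover=None):
        # A client that snapshots the partition list at startup breaks when
        # autoscaling adds a partition behind its back.
        self._static_partitions = partitions
        # Optional callable that re-discovers the current partition set.
        self._discover = discover

    def get_session(self, session_id: str) -> Optional[dict]:
        partitions = self._discover() if self._discover else self._static_partitions
        for partition in partitions:  # fan out across the *current* partitions
            if session_id in partition.rows:
                return partition.rows[session_id]
        return None  # surfaces to callers as a 404


# Usage: autoscaling adds partition "p2" after the stale client was constructed.
cluster = [Partition("p0"), Partition("p1")]
stale_client = SessionStoreClient(list(cluster))            # snapshot at startup
fresh_client = SessionStoreClient([], discover=lambda: cluster)

cluster.append(Partition("p2"))
cluster[2].rows["session-42"] = {"state": "completed"}

print(stale_client.get_session("session-42"))   # None -> incorrect 404 / stale state
print(fresh_client.get_session("session-42"))   # {'state': 'completed'}
```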
Report: "Disruption with some GitHub services"
Last update: We are currently investigating this issue.
Report: "Codespaces billing is delayed"
Last update: We are currently investigating this issue.
Report: "Incident with Pull Requests"
Last update: We are investigating reports of degraded performance for Pull Requests
Report: "Incident with Copilot"
Last update: We are investigating reports of degraded performance for Copilot
Report: "Incident with Actions"
Last update: Pages is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Actions
Report: "Incident with Actions"
Last update: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We have applied mitigations and are monitoring for recovery.
We are currently investigating delays with Actions triggering for some users.
We are investigating reports of degraded performance for Actions
Report: "Disruption with some GitHub services"
Last update: On May 30, 2025, between 08:10 UTC and 16:00 UTC, the Microsoft Teams GitHub integration service experienced a complete service outage. During this period, the service was unable to deliver notifications or process user requests, resulting in a 100% error rate for all integration functionality except link previews. This outage was due to an authentication issue with our downstream provider. We mitigated the incident by working with our provider to restore service functionality and are working to migrate to more durable authentication methods to reduce the risk of similar issues in the future.
Our team is continuing to work to mitigate the source of the disruption affecting a small set of customers using the GitHub Microsoft Teams integration.
We are experiencing a disruption with our Microsoft Teams integration. Investigations are underway and we will provide further updates as we progress.
We are currently investigating this issue.
Report: "Disruption with Gemini 2.5 Pro"
Last update: Between May 15, 2025 10:10 UTC and May 15, 2025 22:58 UTC the Copilot service was degraded and returned a high volume of internal server errors for requests targeting Gemini 2.5 Pro, a public preview model. This was due to a high volume of rate limiting by the upstream model provider, similar in volume to the internal server errors during the previous day. We mitigated the incident by temporarily disabling Gemini 2.5 Pro for all Copilot Chat experiences, and then worked with the model provider to ensure model health was sufficiently improved before re-enabling. We are working with the model provider to move to more resilient infrastructure to mitigate issues like this one in the future.
The issues with our upstream model provider have been resolved, and Gemini 2.5 Pro is available again in Copilot Chat, VS Code, and other Copilot products. We will continue monitoring to ensure stability, but mitigation is complete.
We have started to gradually re-enable the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products.
We have disabled the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products due to an issue with an upstream model provider. Users may still see these models as available for a brief period but we recommend switching to a different model. Other models are not impacted and are available. Once our model provider has resolved the issues impacting Gemini 2.5 Pro, we will re-enable it.
We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last update: On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu-24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments and reduced available capacity. Approximately 19.7% of Ubuntu-24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected. By 12:45 UTC, we mitigated the issue by redeploying backend components to reset state and scaling up available resources to more quickly work through the backlog of queued jobs. We are working to improve our deployment and failover resiliency and validation to reduce the likelihood of similar issues in the future.
We are continuing to monitor the affected Actions runners to ensure a smooth recovery.
We are observing indications of recovery with the affected Actions runners. The team will continue monitoring systems to ensure a return to normal service.
We're continuing to investigate delays in Actions runners for hosted Ubuntu 24. We will provide further updates as more information becomes available.
Actions is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing high wait times for obtaining standard hosted runners for Ubuntu 24. Other hosted labels and self-hosted runners are not impacted.
We are currently investigating this issue.
Report: "We're experiencing errors"
Last update: On May 26, 2025, between 06:20 UTC and 09:45 UTC GitHub experienced broad failures across a variety of services (API, Issues, Git, etc). These were degraded at times, but peaked at 100% failure rates for some operations during this time. On May 23, a new feature was added to Copilot APIs and monitored during rollout but it was not tested at peak load. At 6:20 UTC on May 26, load increased on the code path in question and started to degrade a Copilot API because the caching for this endpoint and circuit breakers for high load were misconfigured. In addition, the traffic limiting meant to protect wider swaths of the GitHub API from queuing was not yet covering this endpoint, meaning it was able to overwhelm the capacity to serve traffic and cause request queuing. We were able to mitigate the incident by turning off the endpoint until the behavior could be reverted. We are already working on a quality of service strategy for API endpoints like this that will limit the impact of a broad incident and are rolling it out. We are also addressing the specific caching and circuit breaker misconfigurations for this endpoint, which would have reduced the time to mitigate this particular incident and the blast radius. (A sketch of the circuit-breaker pattern follows this report's updates.)
We continue to see signs of recovery.
Issues is operating normally.
Git Operations is operating normally.
API Requests is operating normally.
Copilot is operating normally.
Packages is operating normally.
Actions is operating normally.
Packages is experiencing degraded performance. We are continuing to investigate.
Copilot is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing degraded performance. We are continuing to investigate.
We are continuing to investigate degraded performance.
Issues is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for API Requests and Git Operations
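The postmortem above for "We're experiencing errors" attributes the blast radius to a misconfigured cache and circuit breaker on one Copilot API endpoint. As a rough, hedged illustration of the general pattern rather than GitHub's internal implementation, a minimal circuit breaker trips after consecutive failures and fails fast for a cooldown period instead of queuing more work behind a sick endpoint:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, reject calls for `reset_timeout` seconds, then allow a probe."""

    def __init__(self, max_failures: int = 5, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the breaker opened

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of queuing requests behind an overloaded endpoint.
                raise RuntimeError("circuit open: request rejected")
            self.opened_at = None  # half-open: let one probe through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

If the breaker's thresholds are effectively disabled (for example, an unreachable failure count), every request still reaches the degraded endpoint and queues, which matches the failure mode described above.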
Report: "Incident with Actions"
Last update: On May 27, 2025, between 09:31 UTC and 13:31 UTC, some Actions jobs experienced failures uploading to and downloading from the Actions Cache service. During the incident, 6% of all workflow runs couldn’t upload or download cache entries from the service, resulting in a non-blocking warning message in the logs and performance degradation. The disruption was caused by an infrastructure update related to the retirement of a legacy service, which unintentionally impacted Cache service availability. We resolved the incident by reverting the change and have since implemented a permanent fix to prevent recurrence. We are improving our configuration change processes by introducing additional end-to-end tests to cover the identified gaps, and implementing deployment pipeline improvements to reduce mitigation time for similar issues in the future. (A sketch of the non-blocking cache fallback follows this report's updates.)
Mitigation is applied and we’re seeing signs of recovery. We’re monitoring the situation until the mitigation is applied to all affected repositories.
We are experiencing degradation with the GitHub Actions cache service and are working on applying the appropriate mitigations.
We are investigating reports of degraded performance for Actions
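The postmortem above notes that cache failures surfaced only as non-blocking warnings while runs continued. A hedged sketch of that pattern, with a hypothetical `cache_client` object standing in for the real Actions Cache client: degrade to a cache miss and log a warning rather than failing the job.

```python
import logging

logger = logging.getLogger("actions-cache")


def restore_cache(cache_client, key: str):
    """Return the cached entry, or None (a plain miss) if the cache
    service is unavailable, so the workflow run can still proceed."""
    try:
        return cache_client.get(key)
    except Exception as exc:  # outage, timeout, transport error, etc.
        logger.warning("Cache restore failed for %s, continuing without cache: %s", key, exc)
        return None


def save_cache(cache_client, key: str, value) -> bool:
    """Best-effort save; a failure is logged but never fails the job."""
    try:
        cache_client.put(key, value)
        return True
    except Exception as exc:
        logger.warning("Cache save failed for %s, continuing: %s", key, exc)
        return False
```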
Report: "Disruption with some GitHub services"
Last update: On May 23, 2025, between 17:40 UTC and 18:30 UTC public API and UI requests to read and write Git repository content were degraded and triggered user-facing 500 responses. On average, the error rate was 61% and peaked at 88% of requests to the service. This was due to the introduction of an uncaught fatal error in an internal service. A manual rollback was required, which increased the time to remediate the incident. We are working to automatically detect and revert a change based on alerting to reduce our time to detection and mitigation. In addition, we are adding relevant test coverage to prevent errors of this type getting to production.
API Requests is operating normally.
API Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Report: "Elevated error rates for Claude Sonnet 3.7"
Last update: On May 20, 2025, between 12:09 PM UTC and 4:07 PM UTC, the GitHub Copilot service experienced degraded availability, specifically for the Claude Sonnet 3.7 model. During this period, the success rate for Claude Sonnet 3.7 requests was highly variable, down to approximately 94% during the most severe spikes. Other models remained available and working as expected throughout the incident. The issue was caused by capacity constraints in our model processing infrastructure that affected our ability to handle the large volume of Claude Sonnet 3.7 requests. We mitigated the incident by rebalancing traffic across our infrastructure, adjusting rate limits, and working with our infrastructure teams to resolve the underlying capacity issues. We are working to improve our infrastructure redundancy and implementing more robust monitoring to reduce detection and mitigation time for similar incidents in the future.
Copilot is operating normally.
The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 is once again available in Copilot Chat, VS Code and other Copilot products. We will continue monitoring to ensure stability, but mitigation is complete.
We are continuing to work with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
We’re still working with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
We are experiencing degraded availability for the Claude Sonnet 3.7 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are investigating reports of degraded performance for Copilot
Report: "Delayed GitHub Actions Jobs"
Last update: On May 22, 2025, between 07:06 UTC and 09:10 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 11% of all workflow runs were delayed by an average of 44 minutes. A recently deployed change contained a defect that caused improper request routing between internal services, resulting in security rejections at the receiving endpoint. We resolved this by reverting the problematic change and are implementing enhanced testing procedures to catch similar issues before they reach production environments.
We've applied a mitigation which has resolved these delays.
Our investigation continues. At this stage GitHub Actions Jobs are being executed, albeit with delays to the start of execution in some cases.
We are continuing to investigate these delays.
We're investigating delays with the execution of queued GitHub Actions jobs.
We are investigating reports of degraded performance for Actions
Report: "[Retroactive] Incident with Git Operations"
Last update: Between 10:00 and 20:00 UTC on May 27, a change to our git proxy service resulted in some git client implementations not being able to consistently push to GitHub. Reverting the change resulted in an immediate resolution of the problem for all customers. The inflated time to detect this failure was due to the relatively few impacted clients. We are re-evaluating the proposed change to understand how we can prevent and detect such failures in the future.
Report: "Disruption with some GitHub services"
Last update: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We are currently investigating this issue.
Report: "Incident with Actions"
Last update: We are investigating reports of degraded performance for Actions
Report: "[Retroactive] Incident with Git Operations"
Last update: Between 10:00 and 20:00 UTC on May 27, a change to our git proxy service resulted in some git client implementations not being able to consistently push to GitHub. Reverting the change resulted in an immediate resolution of the problem for all customers. The inflated time to detect this failure was due to the relatively few impacted clients. We are re-evaluating the proposed change to understand how we can prevent and detect such failures in the future.
Report: "We're experiencing errors"
Last update: We are investigating reports of degraded performance for API Requests and Git Operations
Report: "Incident with Copilot"
Last update: On May 20, 2025, between 18:18 UTC and 19:53 UTC, Copilot Code Completions were degraded in the Americas. On average the error rate was 50% of requests to the service in the affected region. This was due to a misconfiguration in load distribution parameters after a scale down operation. We mitigated the incident by addressing the misconfiguration. We are working to improve our automated failover and load balancing mechanisms to reduce our time to detection and mitigation of issues like this one in the future. (A sketch of renormalizing routing weights after a scale-down follows this report's updates.)
Copilot is operating normally.
We are experiencing degraded availability for Copilot Code Completions in the Americas. We are working on resolving the issue.
We are investigating reports of degraded performance for Copilot
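The Code Completions postmortem above points to load-distribution parameters that were not updated after a scale-down. As a hedged sketch of the general hazard, with made-up region names and weights (GitHub's actual routing configuration is not public): if a backend is removed but the routing weights are not renormalized over the remaining capacity, the surviving backends receive a skewed share of traffic.

```python
import random
from typing import Dict


def pick_backend(weights: Dict[str, float]) -> str:
    """Weighted random choice across the configured backends."""
    backends = list(weights)
    return random.choices(backends, weights=[weights[b] for b in backends], k=1)[0]


def renormalize(weights: Dict[str, float], removed: str) -> Dict[str, float]:
    """Drop a scaled-down backend and rescale the rest to sum to 1.0."""
    remaining = {b: w for b, w in weights.items() if b != removed}
    total = sum(remaining.values())
    return {b: w / total for b, w in remaining.items()}


# Hypothetical weights; "us-east" is scaled down and must be removed from routing.
weights = {"us-east": 0.5, "us-west": 0.3, "us-central": 0.2}
weights = renormalize(weights, "us-east")
print(weights)                 # {'us-west': 0.6, 'us-central': 0.4}
print(pick_backend(weights))   # routes only to the remaining capacity
```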
Report: "Disruption with Gemini 2.5 Pro model"
Last update: Between May 14, 2025 14:16 UTC and May 15, 2025 01:02 UTC the Copilot service was degraded and returned a high volume of internal server errors for requests targeting Gemini 2.5 Pro, a public preview model. On average, the error rate for Gemini 2.5 Pro was 19.6% and peaked at 41%. This was due to a high volume of internal server errors and rate limiting by the upstream model provider. We mitigated the incident by temporarily disabling Gemini 2.5 Pro for all Copilot Chat experiences, and then worked with the model provider to ensure model health was sufficiently improved before re-enabling. We are working with partners to improve communication speed and are planning to move to more resilient infrastructure to mitigate issues like this one in the future.
We have received confirmation from our upstream provider that the issue has been resolved. We are seeing significant recovery. The Gemini 2.5 Pro model is now fully available in Copilot Chat, VS Code, and other Copilot products.
We are continuing to experience degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. We are working closely with our upstream provider to resolve this issue.
We are continuing to experience degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
We are experiencing degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are continuing to investigate issues with the Gemini 2.5 Pro model, which is in public preview. Users may see intermittent errors with this model.
We are currently investigating this issue.
Report: "Delayed GitHub Actions Jobs"
Last update: We're investigating delays with the execution of queued GitHub Actions jobs.
We are investigating reports of degraded performance for Actions
Report: "Disruption with some GitHub services"
Last update: On May 16th, 2025, between 08:42:00 UTC and 12:26:00 UTC, the data store powering the Audit Log API service experienced elevated latency resulting in higher error rates due to timeouts. About 3.8% of Audit Log API queries for Git events experienced timeouts. The data store team deployed mitigating actions which resulted in a full recovery of the data store’s availability.
We are investigating issues with the audit log. Users querying Git audit log data may observe increased latencies and occasional timeouts.
We are currently investigating this issue.
Report: "Incident with Webhooks"
Last update: A change to the webhooks UI removed the ability to add webhooks. The timeframe of this impact was between May 20th, 2025 20:40 UTC and May 21st, 2025 12:55 UTC. Existing webhooks, as well as adding webhooks via the API, were unaffected. The issue has been fixed.
Report: "Incident with Webhooks"
Last update: A change to the webhooks UI removed the ability to add webhooks. The timeframe of this impact was between 8:49 and 12:55 UTC on May 21st 2025. Existing webhooks, as well as adding webhooks via the API, were unaffected. The issue has been fixed.
Report: "Incident with Copilot"
Last update: We are investigating reports of degraded performance for Copilot
Report: "Elevated error rates for Claude Sonnet 3.7"
Last update: We are investigating reports of degraded performance for Copilot
Report: "GitHub Enterprise Importer (GEI) is experiencing degraded throughput"
Last update: Between May 16, 2025, 1:21 PM UTC and May 17, 2025, 2:26 AM UTC, the GitHub Enterprise Importer service was degraded and experienced slow processing of customer migrations. Customers may have seen extended wait times for migrations to start or complete. This incident was initially observed as a slowdown in migration processing. During our investigation, we identified that a recent change aimed at improving API query performance caused an increase in load signals, which triggered migration throttling. As a result, the performance of migrations was negatively impacted, and overall migration duration increased. In parallel, we identified a race condition that caused a specific migration to be repeatedly re-queued, further straining system resources and contributing to a backlog of migration jobs, resulting in accumulated delays. No data was lost, and all migrations were ultimately processed successfully. We have reverted the feature flag associated with a query change and are working to improve system safeguards to help prevent similar race condition issues from occurring in the future. (A sketch of a duplicate-suppressing enqueue guard follows this report's updates.)
We continue to see signs of recovery for GitHub Enterprise Importer migrations. Queue depth is decreasing and migration duration is trending toward normal levels. We will continue to monitor improvements.
We have identified the source of increased load and have started mitigation. Customers using the GitHub Enterprise Importer may still see extended wait times until recovery completes.
Investigations on the incident impacting GitHub Enterprise Importer continue. An additional contributing cause has been identified, and we are working to ship additional mitigating measures.
We have taken several steps to mitigate the incident impacting GitHub Enterprise Importer (GEI). We are seeing early indications of system recovery. However, customers may continue to experience longer migrations and extended queue times. The team is continuing to work on further mitigating efforts to speed up recovery.
We are continuing to investigate issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
We are investigating issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
We are currently investigating this issue.
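The GEI postmortem above describes a race condition that repeatedly re-queued the same migration. A minimal sketch of one common safeguard, assuming a hypothetical in-process job queue (not GEI's actual system): record an in-flight marker atomically before enqueueing, so a racing retry path cannot enqueue a duplicate of the same job.

```python
import threading
from queue import Queue


class DedupingQueue:
    """Queue wrapper that refuses to enqueue a job id already in flight,
    so a racing retry path cannot re-queue the same migration."""

    def __init__(self):
        self._queue: Queue = Queue()
        self._in_flight: set = set()
        self._lock = threading.Lock()

    def enqueue(self, job_id: str, payload: dict) -> bool:
        with self._lock:                      # atomic check-and-set
            if job_id in self._in_flight:
                return False                  # duplicate suppressed
            self._in_flight.add(job_id)
        self._queue.put((job_id, payload))
        return True

    def mark_done(self, job_id: str) -> None:
        with self._lock:
            self._in_flight.discard(job_id)


q = DedupingQueue()
print(q.enqueue("migration-123", {"org": "example"}))  # True
print(q.enqueue("migration-123", {"org": "example"}))  # False: already queued
```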
Report: "Disruption with some GitHub services"
Last update: On May 15, 2025, between 00:08 UTC and 10:21 UTC, customers were unable to create fine-grained Personal Access Tokens (PATs) on github.com. This incident was triggered by a recent code change to our front end that unintentionally affected the way certain pages loaded and prevented the PAT creation process from completing. We mitigated the incident by reverting the problematic change. To reduce the likelihood of similar issues in the future, we are improving our monitoring for page load anomalies and PAT creation failures and improving our safe deployment practices.
The issue preventing users from creating Personal Access Tokens (PATs) has been resolved. The root cause was identified and a change was reverted to restore functionality. PAT generation is now working as expected.
We have identified the cause, and have a working fix. We will continue to update users.
We are exploring the best path forward, but no new update at this stage.
While we have found a possible cause, we have no update on mitigation steps at this stage. We will continue to keep users updated.
We are investigating fine-grained PAT creation failures. We will continue to keep users updated on progress towards mitigation. Existing fine-grained PATs are unaffected.
We are currently investigating this issue.
Report: "Incident with Git Operations, API Requests and Issues"
Last update: On April 28th, 2025, between 4AM and 11AM UTC, ~0.5% of customers experienced HTTP 500 or 429 responses for raw file access (via the GitHub website and APIs). Additionally, ~0.5% of customers may have seen slow pull request page loads and increased timeouts in the GraphQL API. The incident was caused by queueing in serving systems due to a change in traffic patterns, specifically scraping activity targeting our API. We have adjusted limits and added flow control to systems in response to the changing traffic patterns to improve our ability to prevent future large queueing issues. We’ve additionally updated rate limits for unauthenticated requests to reduce overall load; more details are here: https://github.blog/changelog/2025-05-08-updated-rate-limits-for-unauthenticated-requests/ (A sketch of per-client token-bucket limiting follows this report's updates.)
We are seeing signs of recovery and continue to monitor latency.
We continue to investigate impact to Issues and Pull Requests. Customers may see some timeouts as we work towards mitigation.
We are continuing to investigate impact to Issues and Pull Requests. We will provide more updates as we have them.
Users may see timeouts when viewing Pull Requests. We are still investigating the issues related to Issues and Pull Requests and will provide further updates as soon as we can
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Issues API is currently seeing elevated latency. We are investigating the issue and will provide further updates as soon as we have them.
We are investigating reports of degraded performance for API Requests, Git Operations and Issues
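The postmortem above mentions adjusted limits, flow control, and updated rate limits for unauthenticated requests. As a rough sketch of the kind of per-client limiting involved (not GitHub's actual limiter, and with made-up rates), a token bucket grants each client a steady refill rate plus a small burst allowance and rejects the rest:

```python
import time


class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the caller would answer with HTTP 429


# One bucket per client key (e.g. source IP for unauthenticated traffic).
buckets = {}

def allow_request(client_key: str) -> bool:
    bucket = buckets.setdefault(client_key, TokenBucket(rate=1.0, capacity=10))
    return bucket.allow()
```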
Report: "GitHub Enterprise Importer (GEI) is experiencing degraded throughput"
Last update: We are currently investigating this issue.
Report: "Disruption with Gemini 2.5 Pro"
Last update: We are currently investigating this issue.
Report: "Disruption with Gemini 2.5 Pro model"
Last update: We are currently investigating this issue.
Report: "Incident with Git Operations"
Last update: On May 8, 2025, between 14:40 UTC and 16:27 UTC the Git Operations service was degraded causing some pushes and merges to fail. On average, the error rate was 1.4% with a peak error rate of 2.24%. This was due to a configuration change which unexpectedly led a critical service to shut down on a subset of hosts that store repository data. We mitigated the incident by re-deploying the affected service to restore its functionality. In order to prevent similar incidents from happening again, we identified the cause that triggered this behavior and mitigated it for future deployments. Additionally, to reduce time to detection we will improve monitoring of the impacted service.
Pull Requests is operating normally.
Actions is operating normally.
We have identified the issue and applied mitigations, and are monitoring for recovery.
Actions is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Git Operations and Pull Requests
Report: "Disruption with some GitHub services"
Last update: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We are currently investigating this issue.
Report: "Incident with Git Operations"
Last update: We are investigating reports of degraded performance for Git Operations and Pull Requests
Report: "Issue Attachments Failing to Upload"
Last update: On May 1, 2025 from 22:09 UTC to 23:13 UTC, the Issues service was degraded and users weren't able to upload attachments. The root cause was identified to be a new feature which added a custom header to all client-side HTTP requests, causing CORS errors when uploading attachments to our provider. We mitigated the incident by rolling back the feature flag that added the new header at 22:56 UTC. In order to prevent this from happening again, we are adding new metrics to monitor and ensure the safe rollout of changes to client-side requests. (A sketch of the CORS preflight requirement for custom headers follows this report's updates.)
We have identified the underlying cause of attachment upload failures to Issues and mitigated it by rolling back a feature flag. If you are still experiencing failures when uploading attachments to Issues, please reload your page.
We are investigating attachment upload failures on Issues. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded availability for Issues
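The attachments postmortem above traces the failure to a new custom header on client-side requests: adding a non-simple request header makes the browser send a CORS preflight, and cross-origin uploads fail unless the storage provider's response lists that header in Access-Control-Allow-Headers. A minimal, hedged sketch of the server-side allowance using only the Python standard library; the header name `X-Example-Trace`, origin, and port are invented for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class UploadHandler(BaseHTTPRequestHandler):
    """Toy upload endpoint that answers the CORS preflight a browser sends
    when the page attaches a custom header such as X-Example-Trace."""

    def do_OPTIONS(self):
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "https://github.com")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        # Without the custom header listed here, the preflight fails and the
        # browser blocks the upload with a CORS error.
        self.send_header("Access-Control-Allow-Headers", "Content-Type, X-Example-Trace")
        self.end_headers()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)                      # accept the attachment bytes
        self.send_response(201)
        self.send_header("Access-Control-Allow-Origin", "https://github.com")
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), UploadHandler).serve_forever()
```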
Report: "Disruption with Pull Request Ref Updates"
Last update: On April 30, 2025, between 8:02 UTC and 9:05 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures, delays for non-migration sourced jobs, and delays to tracking refs. We declared an incident once we confirmed that this issue was not isolated to the migrating repository and other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads. (A sketch of queue isolation by job source follows this report's updates.)
Some customers of github.com are reporting issues with PR tracking refs not being updated due to processing delays and increased failure rates. We're investigating the source of the issue.
We are investigating reports of degraded performance for Pull Requests
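The ref-update postmortem above resolves to isolating migration workloads from regular jobs. A hedged sketch of that routing decision, with hypothetical queue names and job fields (the real system's queues and job types differ): route jobs by source so a burst of migration work cannot starve the queue that tracks refs for ordinary pushes.

```python
from queue import Queue

# Hypothetical queues for illustration only.
ref_update_queue: Queue = Queue()   # latency-sensitive: PR tracking refs
migration_queue: Queue = Queue()    # bulk, retry-heavy migration work


def enqueue_job(job: dict) -> None:
    """Route by job source so migration bursts stay off the hot path."""
    if job.get("source") == "repository_migration":
        migration_queue.put(job)
    else:
        ref_update_queue.put(job)


enqueue_job({"source": "push", "repo": "octo/widgets", "action": "update_tracking_ref"})
enqueue_job({"source": "repository_migration", "repo": "octo/huge-import"})
print(ref_update_queue.qsize(), migration_queue.qsize())  # 1 1
```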
Report: "Delays for web and email notification delivery"
Last update: On April 29th, 2025, between 8:40am UTC and 12:50pm UTC the notifications service was degraded and stopped delivering most web and email notifications as well as some mobile push notifications. This was due to a large and faulty schema migration that rendered a set of database primaries unhealthy, affecting the notification delivery pipelines and causing delays in most web and email notification deliveries. We mitigated the incident by stopping the migration and promoting replicas to replace the unhealthy primaries. In order to prevent similar incidents in the future, we are addressing the underlying issues in the online schema tooling and improving the way we interact with the database to not be disruptive to production workloads.
The notification delivery backlog has been processed and notifications are now being delivered as expected.
New notification deliveries are occurring in a timely manner and we have processed a significant portion of the backlog. Users may still notice delayed delivery of some older notifications.
Web and email notifications continue to be delivered successfully and the service is in a healthy state. We are processing the backlog of notification deliveries which are currently as much as 30-60 minutes delayed.
We are starting to see signals of recovery, with delayed web/email notifications now being dispatched. The team continues to monitor recovery to ensure a return to normal service.
We are seeing impact on both web and email notifications, with most customers seeing delayed deliveries. The last incident update regarding impact on email notifications was incorrect: email notifications have been experiencing the same delays as web notifications for the duration of the incident. We have applied changes to our system and are monitoring to see if these restore normal service. Updates to follow.
Web notifications are experiencing delivery delays for the majority of customers. We are working to mitigate impact and restore delivery times back within normal operating bounds. Email notifications remain unaffected and are delivering as normal. We will provide further updates as we have more information.
We are currently investigating this issue.
Report: "Issue Attachments Failing to Upload"
Last update: We are investigating reports of degraded availability for Issues
Report: "Delays for web and email notification delivery"
Last update: Web and email notifications continue to be delivered successfully and the service is in a healthy state. We are processing the backlog of notification deliveries which are currently as much as 30-60 minutes delayed.
We are starting to see signals of recovery, with delayed web/email notifications now being dispatched. The team continues to monitor recovery to ensure a return to normal service.
We are seeing impact on both web and email notifications, with most customers seeing delayed deliveries. The last incident update regarding impact on email notifications was incorrect: email notifications have been experiencing the same delays as web notifications for the duration of the incident. We have applied changes to our system and are monitoring to see if these restore normal service. Updates to follow.
Web notifications are experiencing delivery delays for the majority of customers. We are working to mitigate impact and restore delivery times back within normal operating bounds. Email notifications remain unaffected and are delivering as normal. We will provide further updates as we have more information.
We are currently investigating this issue.
Report: "Delays for web notification delivery"
Last update: Web notifications are experiencing delivery delays for the majority of customers. We are working to mitigate impact and restore delivery times back within normal operating bounds. Email notifications remain unaffected and are delivering as normal. We will provide further updates as we have more information.
We are currently investigating this issue.
Report: "Incident with Issues, API Requests and Pages"
Last update: On April 23, 2025, between 07:00 UTC and 07:20 UTC, multiple GitHub services experienced degradation caused by resource contention on database hosts. The resulting error rates, which ranged from 2–5% of total requests, led to intermittent service disruption for users. The issue was triggered by heavy workloads on the database leading to connection saturation. The incident was mitigated when database throttling activated, which allowed the system to rebalance connections. This restored traffic flow to the database and restored service functionality. To prevent similar issues in the future, we are reviewing the capacity of the database, improving monitoring and alerting systems, and implementing safeguards to reduce time to detection and mitigation.
A brief problem with one of our database clusters caused intermittent errors around 07:05 UTC for a few minutes. Our systems have recovered and we continue to monitor.
Issues is operating normally.
Actions is operating normally.
Pages is operating normally.
API Requests is operating normally.
Codespaces is operating normally.
Codespaces is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for API Requests, Issues and Pages
Report: "Incident with Git Operations, API Requests and Issues"
Last update: Issues API is currently seeing elevated latency. We are investigating the issue and will provide further updates as soon as we have them.
We are investigating reports of degraded performance for API Requests, Git Operations and Issues
Report: "Disruption with some GitHub services"
Last update: Starting at 19:13:50 UTC, the service responsible for importing Git repositories began experiencing errors that impacted both GitHub Enterprise Importer migrations and the GitHub Importer; service was restored at 22:11:00 UTC. At the time, 837 migrations across 57 organizations were affected. Impacted migrations would have shown the error message "Git source migration failed. Error message: An error occurred. Please contact support for further assistance." in the migration logs and required a retry. The root cause of the issue was a recent configuration change that caused our workers, responsible for syncing the Git repository, to lose the necessary access required for the migration. We restored the needed access for the workers, and all dependent services resumed normal operation. We’ve identified and implemented additional safeguards to help prevent similar disruptions in the future.
We are investigating issues with GitHub Enterprise Importer. We will continue to keep users updated on progress towards mitigation.
We are currently investigating this issue.
Report: "Incident with Issues, API Requests and Pages"
Last update: We are investigating reports of degraded performance for API Requests, Issues and Pages
Report: "Incident with Pull Requests"
Last update: On April 16, 2025 between 3:22:36 PM UTC and 5:26:55 PM UTC the Pull Request service was degraded. On average, 0.7% of page views were affected. This primarily affected logged-out users, but some logged-in users were affected as well. This was due to an error in how certain Pull Request timeline events were rendered, and we resolved the incident by updating the timeline event code. We are enhancing test coverage to include additional scenarios and piloting new tools to prevent similar incidents in the future.
Pull Requests is operating normally.
The fix is rolling out and we're seeing recovery for users encountering 500 errors when viewing a pull request.
The fix is currently being deployed, we anticipate this to be fully mitigated in approximately thirty minutes.
Users may experience 500 errors when viewing a PR. Most of the impact is limited to anonymous access, but a small number of logged-in users are also experiencing this. We have the fix prepared and it will be deployed soon.
We are investigating reports of degraded performance for Pull Requests
Report: "Disruption with some GitHub services"
Last update: On April 15th, during regular testing, we found a bug in our Copilot Metrics Pipeline infrastructure that caused some data used to aggregate Copilot usage for the Copilot Metrics API not to be ingested. As a result of the bug, customer metrics in the Copilot Metrics API would have indicated lower than expected Copilot usage for the previous 28 days. To mitigate the incident we resolved the bug so that all data from April 14th onwards would be accurately calculated and immediately began backfilling the previous 28 days with the correct data. All data has been corrected as of 2025-04-17 5:34PM UTC. We have added additional monitoring to catch similar pipeline failures earlier in the future and are working on enhancing our data validation to ensure that all metrics we provide are accurate.
We have resolved issues with data inconsistency for Copilot Metrics API data as of April 17th 2025 1600 UTC. All data is now accurate.
We are continuing to work on correcting the Copilot Metrics API data from March 19th 2025 to April 14th 2025. Data from April 15 and later is accurate. Currently, the API returns about 10% lower usage numbers. Based on the current investigations, we estimate a resolution by April 18th 0100 hrs UTC. We will provide an update if there is a change in the ETA.
We have an updated ETA on correcting all Copilot metrics API data: 20 hours. We won't post more updates here unless the ETA changes.
We are working on correcting the Copilot metrics API source data from March 19th to April 14th. Currently, the API returns about 10% lower usage numbers than actual usage. We don't have an ETA for the resolution at the moment.
The Copilot metrics API (https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28) now returns accurate data for April 15th. We're working on correcting the past 27 days, as we are under-reporting certain metrics from this time.
We'll have accurate data for April 15th in the next 60 minutes. We're still working on correcting the data for the additional 27 days before April 15th. The complete correction is estimated to take up to 7 days, but we're working to speed this up. https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28 is the specific impacted API.
As we've made further progress on correcting the inconsistencies, we estimate it will take approximately a week for a full recovery. We are investigating options for speeding up the recovery, and we appreciate your patience as we work through this incident.
We are working on correcting the inconsistencies now; in our next update we will provide an estimated time for when the issue will be fully resolved.
We are currently experiencing degraded performance with our Copilot metrics API, which is temporarily causing partial inconsistencies in the data returned. Our engineering teams are actively working to restore full functionality. We understand the importance of timely updates and are prioritizing a resolution to ensure all systems are operating normally as quickly as possible.
We are currently investigating this issue.
Report: "Codespaces Scheduled Maintenance"
Last update: Scheduled maintenance is currently in progress. We will provide updates as necessary.
Codespaces will be undergoing global maintenance from 16:30 UTC on Monday, April 21 to 16:30 UTC on Tuesday, April 22. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete. During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones. To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.
Report: "Disruption with some GitHub services for Safari Users"
Last update: On April 15, 2025 from 12:45 UTC to 13:56 UTC, access to GitHub.com was restricted for logged out users using WebKit-based browsers, such as Safari and various mobile browsers. During the impacting time, roughly 6.6M requests were unsuccessful. This issue was caused by a configuration change intended to improve our handling of large traffic spikes but was improperly targeted at too large a set of requests. To prevent future incidents like this, we are improving how we operationalize these types of changes, adding additional tools for validating what will be impacted by such changes, and reducing the likelihood of manual mistakes through automated detection and handling of such spikes.
Safari users are now able to access GitHub.com. The fix has been rolled out to all environments.
Most unauthenticated Safari users should now be able to access github.com. We are ensuring the fix is deployed to all environments. Next update in 30m.
We have identified the cause of the restriction for Safari users and are deploying a fix. Next update in 15 minutes.
Some unauthenticated Safari users are seeing the message "Access to this site has been restricted." We are currently investigating this behavior.
We are currently investigating this issue.
Report: "Disruption with some GitHub services for Safari Users"
Last update: Some unauthenticated Safari users are seeing the message "Access to this site has been restricted." We are currently investigating this behavior.
We are currently investigating this issue.
Report: "Disruption with some Pull Requests stuck in processing state"
Last update: On April 9, 2025, between 11:27 UTC and 12:39 UTC, the Pull Requests service was degraded and experienced delays in processing updates. At peak, approximately 1–1.5% of users were affected by delays in synchronizing pull requests. During this period, users may have seen a "Processing updates" message in their pull requests after pushing new commits, and the new commits did not appear in the Pull Request view as expected. The Pull Request synchronization process has automatic retries and most delays were automatically resolved. Any Pull Requests that were not resynchronized during this window were manually synchronized on Friday, April 11 at 14:23 UTC. This was due to a misconfigured GeoIP lookup file that our routine GitHub operations depended on, which led background job processing to fail. We mitigated the incident by reverting to a known good version of the GeoIP lookup file on affected hosts. We are working to enhance our CI testing and automation by validating GeoIP metadata to reduce our time to detection and mitigation of issues like this one in the future. (A sketch of such a validation check follows this report's updates.)
Pull Requests is operating normally.
The team has identified a mitigation and is rolling it out while actively monitoring recovery
Some users are experiencing delays in pull request updates. After pushing new commits, PRs show a "Processing updates" message, and the new commits do not appear in the pull request view.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
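The postmortem above plans CI validation of GeoIP metadata. One hedged way to sketch such a check, assuming the lookup file is in MaxMind's MMDB format and the third-party `maxminddb` package is available (the actual file format and pipeline are not stated in the report): open the file, confirm its build timestamp is reasonably fresh, and spot-check a well-known address before shipping it to hosts.

```python
import sys
import time

import maxminddb  # assumed dependency: pip install maxminddb


def validate_geoip_file(path: str, max_age_days: int = 45) -> None:
    """Fail the CI job if the GeoIP database is unreadable, stale, or
    returns nothing for a well-known public address."""
    reader = maxminddb.open_database(path)          # raises if corrupt or unreadable
    meta = reader.metadata()
    age_days = (time.time() - meta.build_epoch) / 86400
    if age_days > max_age_days:
        raise SystemExit(f"GeoIP database is {age_days:.0f} days old (limit {max_age_days})")
    if reader.get("8.8.8.8") is None:               # spot-check a known address
        raise SystemExit("GeoIP lookup for a known address returned no record")
    print(f"{path}: {meta.database_type}, built {age_days:.0f} days ago, OK")


if __name__ == "__main__":
    validate_geoip_file(sys.argv[1])
```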
Report: "[Retroactive] Access from China temporarily blocked for users that were not logged in"
Last update: Due to a configuration change with unintended impact, some users that were not logged in who tried to visit GitHub.com from China were temporarily unable to access the site. For users already logged in, they could continue to access the site successfully. Impact started 2025/04/12 at 20:01 UTC. Impact was mitigated 2025/04/13 at 14:55 UTC. During this time, up to 4% of all anonymous requests originating from China were unsuccessful. The configuration changes that caused this impact have been reversed and users should no longer see problems when trying to access GitHub.com.
Report: "Incident with Codespaces"
Last update: On April 11 from 3:05am UTC to 3:44am UTC, approximately 75% of Codespaces users faced create and start failures. These were caused by manual configuration changes to an internal dependency. We reverted the changes and immediately restored service health. We are working on safer mechanisms for testing and rolling out such configuration changes, and we expect no further disruptions.
We have reverted a problematic configuration change and are seeing recovery across starts and resumes
We have identified an issue that is causing errors when starting new and resuming existing Codespaces. We are currently working on a mitigation
We are investigating reports of degraded availability for Codespaces
Report: "[Retroactive] Access from China temporarily blocked for users that were not logged in"
Last update: Due to a configuration change with unintended impact, users that were not logged in who tried to visit GitHub.com from China were temporarily unable to access the site. For users already logged in, they could continue to access the site successfully. Impact started 2025/04/12 at 20:01 UTC. Impact was mitigated 2025/04/13 at 14:55 UTC. The configuration changes that caused this impact have been reversed and users should no longer see problems when trying to access GitHub.com.
Report: "Incident with Pull Requests"
Last update: On April 9, 2025, between 7:01 UTC and 9:31 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures and delays for non-migration sourced jobs. We declared an incident once we confirmed that this issue was not isolated to the migrating repository and other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.
We saw a period of delays in Pull Request experiences. The impact has ended, but we are investigating to prevent a recurrence.
We are investigating reports of degraded performance for Pull Requests
Report: "Disruption with some GitHub services"
Last update: On April 7, 2025 between 2:15:37 AM UTC and 2:31:14 AM UTC, multiple GitHub services were degraded. Requests to these services returned 5xx errors at a high rate due to an internal database being exhausted by our Codespaces service. The incident resolved on its own. We have addressed the problematic queries from the Codespaces service, minimizing the risk of future recurrences.
Pull Requests is operating normally.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Report: "Disruption with some Pull Requests stuck in processing state"
Last update: We are currently investigating this issue.
Report: "Vision requests are unavailable for certain models on Copilot Chat on github.com"
Last update: On 2025-04-08, between 00:42 and 18:05 UTC, as we rolled out an updated version of our GPT 4o model, we observed that vision capabilities for GPT-4o for Copilot Chat in GitHub were intermittently unavailable. During this period, customers may have been unable to upload image attachments to Copilot Chat in GitHub. In response, we paused the rollout at 18:05 UTC. Recovery began immediately and telemetry indicates that the issue was fully resolved by 18:21 UTC. Following this incident, we have identified areas of improvement in our model rollout process, including enhanced monitoring and expanded automated and manual testing of our end-to-end capabilities.
The issue has been resolved now, and we're actively monitoring the service for any further issues.
Image attachments are not available for some models on Copilot chat on github.com. The issue has been identified and the fix is in progress.
We are currently investigating this issue.
Report: "Incident with Pull Requests"
Last update: We are investigating reports of degraded performance for Pull Requests
Report: "Disruption with some GitHub services"
Last update: Between 2025-03-27 12:00 UTC and 2025-04-03 16:00 UTC, the GitHub Enterprise Cloud Dormant Users report was degraded and falsely indicated that dormant users were active within their business. This was due to increased load on a database from a non-performant query. We mitigated the incident by increasing the capacity of the database and installing monitors for this specific report to improve observability in the future. As a long-term solution, we are rewriting the Dormant Users report to optimize how it queries for user activity, which will result in significantly faster and more accurate report generation.
We are aware that the generation of the Dormant Users Report is delayed for some of our customers, and that the resulting report may be inaccurate. We are actively investigating the root cause and a possible remediation.
We are currently investigating this issue.
Report: "Vision requests are unavailable for certain models on Copilot Chat on github.com"
Last update: We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last update: On 2025-04-03, between 6:13:27 PM UTC and 7:12:00 PM UTC the docs.github.com service was degraded and returned errors. On average, the error rate was 8% and peaked at 20% of requests to the service. This was due to a misconfiguration and elevated requests. We mitigated the incident by correcting the misconfiguration. We are working to reduce our time to detection and mitigation of issues like this one in the future.
We are investigating and working on applying mitigations to intermittent unavailability of GitHub's Docs.
We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last update: On April 1st, 2025, between 08:17:00 UTC and 09:29:00 UTC the data store powering the Audit Log service experienced elevated errors resulting in an approximate 45 minute delay of Audit Log Events. Our systems maintained data continuity and we experienced no data loss. The delay only affected the Audit Log API and the Audit Log user interface. Any configured Audit Log Streaming endpoints received all relevant Audit Log Events. The data store team deployed mitigating actions which resulted in a full recovery of the data store’s availability.
The Audit Log is experiencing an increase of failed queries due to availability issues with the associated data store. Audit Log data is experiencing a delay in availability. We have identified the issue and we are deploying mitigating measures.
We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last update: Between March 29 7:00 UTC and March 31 17:00 UTC users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025 from 7:00 UTC to 16:40 UTC users were unable to submit eBook and event registration forms on resources.github.com, also due to a service outage. The incident occurred due to expired credentials used for an internal service. We mitigated it by renewing the credentials and redeploying the affected services. To improve future response times and prevent similar issues, we are enhancing our credential expiry detection, rotation processes, and on-call observability and alerting. (A sketch of a credential-expiry check follows this report's updates.)
We are currently applying a mitigation to resolve an issue with managing marketing email subscriptions.
We are currently investigating this issue.
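The postmortem above commits to better credential-expiry detection. A small sketch of the idea, with made-up credential names, expiry dates, and thresholds (a real system would pull these from a secrets manager or certificate store): compare each credential's recorded expiry against a warning window and alert before it lapses rather than after.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of credentials and their expiry timestamps.
CREDENTIALS = {
    "marketing-email-service-token": datetime(2025, 3, 29, 7, 0, tzinfo=timezone.utc),
    "resources-site-api-key": datetime(2025, 9, 1, 0, 0, tzinfo=timezone.utc),
}


def expiring_soon(now: datetime, warn_window: timedelta = timedelta(days=14)) -> list:
    """Return credentials that are expired or will expire within the window."""
    return [name for name, expires_at in CREDENTIALS.items()
            if expires_at - now <= warn_window]


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    for name in expiring_soon(now):
        # In production this would page on-call instead of printing.
        print(f"ALERT: credential {name} expires at {CREDENTIALS[name].isoformat()}")
```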
Report: "Scheduled Codespaces Maintenance"
Last update: Scheduled maintenance is currently in progress. We will provide updates as necessary.
Codespaces will be undergoing maintenance in all regions from 17:00 UTC on Wednesday, March 2 to 17:00 UTC on Thursday, March 3. Maintenance will begin in Southeast Asia, Central India, Australia Central, and Australia East regions. Once it is complete, maintenance will start in UK South and West Europe, followed by East US, East US2, West US2, and West US3. Each batch of regions will take approximately three to four hours to complete. During this time period, users may experience connectivity issues with new and existing Codespaces. If you have uncommitted changes you may need during the maintenance window, you should verify they are committed and pushed before maintenance starts. Codespaces with any uncommitted changes will be accessible as usual once maintenance is complete.
Report: "Disruption with Pull Request Ref Updates"
Last update: Between March 27, 2025, 23:45 UTC and March 28, 2025, 01:40 UTC the Pull Requests service was degraded and failed to update refs for repositories with higher traffic activity. This was due to a large repository migration that resulted in a larger than usual number of enqueued jobs, while simultaneously impacting git fileservers where the problematic repository was hosted. This resulted in an increase in queue depth due to retries on failures to perform those jobs, causing delays for non-migration sourced jobs. We declared an incident once we confirmed that this issue was not isolated to the problematic migration and other repositories were also failing to process ref updates. We mitigated the issue by stopping the migration and short circuiting the remaining jobs. Additionally, we increased the worker pool of this job to reduce the time required to recover. As a result of this incident, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.
This issue has been mitigated and we are operating normally.
We are continuing to monitor for recovery.
We believe we have identified the source of the issue and are monitoring for recovery.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Report: "[Retroactive] Disruption with Pull Request Ref Updates"
Last update: Beginning at 21:24 UTC on March 28 and lasting until 21:50 UTC, some customers of github.com had issues with PR tracking refs not being updated due to processing delays and increased failure rates. We did not post a public status update before we completed the rollback, and the incident is currently resolved. We are sorry for the delayed post on githubstatus.com.
Report: "[Retroactive] Disruption with Pull Request Ref Updates"
Last update: Beginning at 21:24 UTC on March 28 and lasting until 21:50 UTC, some customers of github.com had issues with PR tracking refs not being updated due to processing delays and increased failure rates. We did not post a public status update before we completed the rollback, and the incident is currently resolved. We are sorry for the delayed post on githubstatus.com.
Report: "Disruption with some GitHub services"
Last update: This incident was opened by mistake. Public services are currently functional.
We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last updateThis incident was opened by mistake. Public services are currently functional.
We are currently investigating this issue.
Report: "Disruption with Pull Request Ref Updates"
Last updateThis incident has been resolved.
This issue has been mitigated and we are operating normally.
We are continuing to monitor for recovery.
We believe we have identified the source of the issue and are monitoring for recovery.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Report: "Incident with Codespaces"
Last updateOn March 21, 2025 between 01:00 UTC and 02:45 UTC, the Codespaces service was degraded and users in various regions experienced intermittent connection failures. The peak error rate was 30% of connection attempts across 38% of Codespaces. This was due to a service deployment. The incident was mitigated by completing the deployment to the impacted regions. We are working with the service team to identify the cause of the connection losses and perform the necessary repairs to avoid future occurrences.
Codespaces is operating normally.
We have seen full recovery in the last 15 minutes for Codespaces connections. GitHub Codespaces are healthy. For users who are still seeing connection problems, restarting the Codespace may help resolve the issue.
We are continuing to investigate issues with failed connections to Codespaces. We are seeing recovery over the last 10 minutes.
Customers may be experiencing issues connecting to Codespaces on GitHub.com. We are currently investigating the underlying issue.
We are investigating reports of degraded performance for Codespaces
Report: "Intermittent GitHub Actions workflow failures"
Last updateOn March 21st, 2025, between 05:43 UTC and 08:49 UTC, the Actions service experienced degradation, leading to workflow run failures. During the incident, approximately 2.45% of workflow runs failed due to an infrastructure failure. This incident was caused by intermittent failures in communicating with an underlying service provider. We are working to improve our resilience to downtime in this service provider and to reduce the time to mitigate in any future recurrences.
Actions is operating normally.
We have made progress understanding the source of these errors and are working on a mitigation.
We're continuing to investigate elevated errors during GitHub Actions workflow runs. At this stage our monitoring indicates that these errors are impacting no more than 3% of all runs.
We're continuing to investigate intermittent failures with GitHub Actions workflow runs.
We're seeing errors reported with a subset of GitHub Actions workflow runs, and are continuing to investigate.
We are investigating reports of degraded performance for Actions
Report: "[Retroactive] Incident with Migrations Submitted Via GitHub UI"
Last updateBetween 2025-03-23 18:10 UTC and 2025-03-24 16:10 UTC, migration jobs submitted through the GitHub UI experienced processing delays and increased failure rates. This issue only affected migrations initiated via the web interface. Migrations started through the API or the command line tool continued to function normally. We are sorry for the delayed post on githubstatus.com.
Report: "Disruption with some GitHub services"
Last updateOn March 21st, 2025, between 11:45 UTC and 13:20 UTC, users were unable to interact with GitHub Copilot Chat in GitHub. The issue was caused by a recently deployed Ruby change that unintentionally overwrote a global value. This led to GitHub Copilot Chat in GitHub being misconfigured with an invalid URL, preventing it from connecting to our chat server. Other Copilot clients were not affected. We mitigated the incident by identifying the source of the problematic query and rolling back the deployment. We are reviewing our deployment tooling to reduce the time to mitigate similar incidents in the future. In parallel, we are also improving our test coverage for this category of error to prevent it from being deployed to production.
Copilot is operating normally.
Mitigation is complete and we are seeing full recovery for GitHub Copilot Chat in GitHub.
We have identified the problem and have a mitigation in progress.
Copilot is experiencing degraded performance. We are continuing to investigate.
We are investigating issues with GitHub Copilot Chat in GitHub. We will continue to keep users updated on progress toward mitigation.
We are currently investigating this issue.
Report: "[Retroactive] Incident with Migrations Submitted Via GitHub UI"
Last updateBetween 2024-03-23 18:10 UTC and 2024-03-24 16:10 UTC, migration jobs submitted through the GitHub UI experienced processing delays and increased failure rates. This issue only affected migrations initiated via the web interface. Migrations started through the API or the command line tool continued to function normally. We are sorry for the delayed post on githubstatus.com.
Report: "Incident with Pages"
Last updateOn March 20, 2025, between 19:24 UTC and 20:42 UTC the GitHub Pages experience was degraded and returned 503s for some customers. We saw an error rate of roughly 2% for Pages views, and new page builds were unable to complete successfully before timing out. This was due to a replication failure at the database layer between a write destination and a read destination. We mitigated the incident by redirecting reads to the same destination as writes. The replication error occurred during a transitory phase, as we are in the process of migrating the underlying data for Pages to new database infrastructure. Additionally, our monitors failed to detect the error. We are addressing the underlying cause of the failed replication and telemetry.
We have resolved the issue for Pages. If you're still experiencing issues with your GitHub Pages site, please rebuild it (one way to trigger a rebuild is sketched after this report).
Customers may not be able to create or make changes to their GitHub Pages sites. Customers who rely on webhook events from Pages builds might also be affected.
Webhooks is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Pages
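As one concrete way to follow the "please rebuild" guidance above, the sketch below requests a fresh Pages build through the public REST API (POST /repos/{owner}/{repo}/pages/builds); pushing a new commit to the publishing branch also triggers a rebuild. The owner, repository, and token values are placeholders, and the example uses the third-party requests package.

```python
# Illustrative sketch: request a fresh GitHub Pages build via the REST API.
# OWNER, REPO, and TOKEN are placeholders; the token must have access to the
# repository's Pages settings. Requires the third-party "requests" package.
import requests

OWNER = "your-org"       # placeholder
REPO = "your-repo"       # placeholder
TOKEN = "ghp_example"    # placeholder personal access token

response = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pages/builds",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
    timeout=30,
)
response.raise_for_status()
print("Pages build requested:", response.json().get("status"))
```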
Report: "Intermittent GitHub Actions workflow failures"
Last updateOn March 21st, 2025, between 05:43 UTC and 08:49 UTC, the Actions service experienced degradation, leading to workflow run failures. During the incident, approximately 2.45% of workflow runs failed due to an infrastructure failure. This incident was caused by intermittent failures in communicating with an underlying service provider. We are working to improve our resilience to downtime in this service provider and to reduce the time to mitigate in any future recurrences.
Actions is operating normally.
We have made progress understanding the source of these errors and are working on a mitigation.
We're continuing to investigate elevated errors during GitHub Actions workflow runs. At this stage our monitoring indicates that these errors are impacting no more than 3% of all runs.
We're continuing to investigate intermittent failures with GitHub Actions workflow runs.
We're seeing errors reported with a subset of GitHub Actions workflow runs, and are continuing to investigate.
We are investigating reports of degraded performance for Actions
Report: "Incident with Codespaces"
Last updateOn March 21, 2025 between 01:00 UTC and 02:45 UTC, the Codespaces service was degraded and users in various regions experienced intermittent connection failures. The peak error error was 30% of connection attempts across 38% of Codespaces. This was due to a service deployment.The incident was mitigated by completing the deployment to the impacted regions. We are working with the service team to identify the cause of the connection losses and perform necessary repairs to avoid future occurrences.
Codespaces is operating normally.
We have seen full recovery in the last 15 minutes for Codespaces connections. GitHub Codespaces are healthy. For users who are still seeing connection problems, restarting the Codespace may help resolve the issue.
We are continuing to investigate issues with failed connections to Codespaces. We are seeing recovery over the last 10 minutes.
Customers may be experiencing issues connecting to Codespaces on GitHub.com. We are currently investigating the underlying issue.
We are investigating reports of degraded performance for Codespaces
Report: "Incident with Actions: Queue Run Failures"
Last updateOn March 18th, 2025, between 23:20 UTC and March 19th, 2025 00:15 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 0.3% of all workflow runs queued during the time failed to start, about 0.67% of all workflow runs were delayed by an average of 10 minutes, and about 0.16% of all workflow runs ultimately ended with an infrastructure failure. This was due to a networking issue with an underlying service provider. At 00:15 UTC the service provider mitigated their issue, and service was restored immediately for Actions. We are working to improve our resilience to downtime in this service provider to reduce the time to mitigate in any future recurrences.
Actions is operating normally.
The provider has reported full mitigation of the underlying issue, and Actions has been healthy since approximately 00:15 UTC.
We are continuing to investigate issues with delayed or failed workflow runs with Actions. We are engaged with a third-party provider who is also investigating issues and has confirmed we are impacted.
Some customers may be experiencing delays or failures when queueing workflow runs
We are investigating reports of degraded performance for Actions
Report: "Incident with Pages"
Last updateOn March 20, 2025, between 19:24 UTC and 20:42 UTC the GitHub Pages experience was degraded and returned 503s for some customers. We saw an error rate of roughly 2% for Pages views, and new page builds were unable to complete successfully before timing out. This was due to replication failure at the database layer between a write destination and read destination. We mitigated the incident by redirecting reads to the same destination as writes. The error with replication occurred while in this transitory phase, as we are in the process of migrating the underlying data for Pages to new database infrastructure. Additionally our monitors failed to detect the error.We are addressing the underlying cause of the failed replication and telemetry.
We have resolved the issue for Pages. If you're still experiencing issues with your GitHub Pages site, please rebuild.
Customers may not be able to create or make changes to their GitHub Pages sites. Customers who rely on webhook events from Pages builds might also experience a downgraded experience.
Webhooks is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Pages
Report: "Incident with Issues"
Last updateBetween March 17, 2025, 18:05 UTC and March 18, 2025, 09:50 UTC, GitHub.com experienced intermittent failures in web and API requests. These issues affected a small percentage of users (mostly related to pull requests and issues), with a peak error rate of 0.165% across all requests. We identified a framework upgrade that caused kernel panics in our Kubernetes infrastructure as the root cause. We mitigated the incident by downgrading until we were able to disable a problematic feature. In response, we have investigated why the upgrade caused the unexpected issue, have taken steps to temporarily prevent it, and are working on longer-term patch plans while improving our observability to ensure we can quickly react to similar classes of problems in the future.
We saw a spike in error rate for Issues-related pages and API requests due to problems with restarts in our Kubernetes infrastructure that, at peak, caused 0.165% of requests to see timeouts or errors on these API surfaces over a 15-minute period. At this time we see minimal impact and are continuing to investigate the cause of the issue.
We are investigating reports of issues with service(s): Issues. We're continuing to investigate. Users may see intermittent HTTP 500 responses when using Issues. Retrying the request may succeed.
We are investigating reports of issues with service(s): Issues. We're continuing to investigate. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of issues with service(s): Issues. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded performance for Issues
Report: "macos-15-arm64 hosted runner queue delays"
Last updateOn March 18, between 13:04 and 16:55 UTC, Actions workflows relying on hosted runners using the beta macOS 15 image experienced increased queue times while waiting for available runners. An image update pushed the previous day introduced a performance regression. The slower performance caused longer average runtimes, exhausting our available Mac capacity for this image. This was mitigated by rolling back the image update. We have updated our capacity allocation for the beta and other Mac images and are improving monitoring in our canary environments to catch this kind of issue before it impacts customers.
We are seeing improvements in telemetry and are monitoring for full recovery.
We've applied a mitigation to fix the issues with queuing Actions jobs on macos-15-arm64 Hosted runner. We are monitoring.
The team continues to investigate issues with some Actions macos-15-arm64 Hosted jobs being queued for up to 15 minutes. We will continue providing updates on the progress towards mitigation.
We are currently investigating this issue.
Report: "Disruption with some GitHub services"
Last updateOn March 18th, 2025, between 13:35 UTC and 17:45 UTC, some users of GitHub Copilot Chat in GitHub experienced intermittent failures when reading or writing messages in a thread, resulting in a degraded experience. The error rate peaked at 3% of requests to the service. This was due to an availability incident with a database provider. Around 16:15 UTC the upstream service provider mitigated their availability incident, and service was restored in the following hour. We are working to improve our failover strategy for this database to reduce the time to mitigate similar incidents in the future.
We are seeing recovery and no new errors for the last 15 minutes.
We are still investigating infrastructure issues; our provider has acknowledged them and is working on a mitigation. Customers might still see errors when creating messages or new threads in Copilot Chat. Retries might be successful.
We are still investigating infrastructure issues and collaborating with providers. Customers might see some errors when creating messages or new threads in Copilot Chat. Retries might be successful.
We are experiencing issues with our underlying data store, which is causing a degraded experience for a small percentage of users of Copilot Chat on github.com.
We are currently investigating this issue.
Report: "Scheduled Migrations Maintenance"
Last updateThe scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Migrations will be undergoing maintenance starting at 21:00 UTC on Tuesday, March 18, 2025, with an expected duration of up to eight hours. During this maintenance period, users will experience delays importing repositories into GitHub. Once the maintenance period is complete, all pending imports will automatically proceed.
Report: "Incident with Actions: Queue Run Failures"
Last updateOn March 18th, 2025, between 23:20 UTC and March 19th, 2025 00:15 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 0.3% of all workflow runs queued during the time failed to start, about 0.67% of all workflow runs were delayed by an average of 10 minutes, and about 0.16% of all workflow runs ultimately ended with an infrastructure failure. This was due to a networking issue with an underlying service provider. At 00:15 UTC the service provider mitigated their issue, and service was restored immediately for Actions. We are working to improve our resilience to downtime in this service provider to reduce the time to mitigate in any future recurrences.
Actions is operating normally.
The provider has reported full mitigation of the underlying issue, and Actions has been healthy since approximately 00:15 UTC.
We are continuing to investigate issues with delayed or failed workflow runs with Actions. We are engaged with a third-party provider who is also investigating issues and has confirmed we are impacted.
Some customers may be experiencing delays or failures when queueing workflow runs
We are investigating reports of degraded performance for Actions
Report: "macos-15-arm64 hosted runner queue delays"
Last updateOn March 18, between 13:04 and 16:55 UTC, Actions workflows relying on hosted runners using the beta MacOS 15 image experienced increased queue time waiting for available runners. An image update was pushed the previous day that included a performance reduction. The slower performance caused longer average runtimes, exhausting our available Mac capacity for this image. This was mitigated by rolling back the image update. We have updated our capacity allocation to the beta and other Mac images and are improving monitoring in our canary environments to catch this potential issue before it impacts customers.
We are seeing improvements in telemetry and are monitoring for full recovery.
We've applied a mitigation to fix the issues with queuing Actions jobs on macos-15-arm64 Hosted runner. We are monitoring.
The team continues to investigate issues with some Actions macos-15-arm64 Hosted jobs being queued for up to 15 minutes. We will continue providing updates on the progress towards mitigation.
We are currently investigating this issue.
Report: "Incident with Issues"
Last updateBetween March 17, 2025, 18:05 UTC and March 18, 2025, 09:50 UTC, GitHub.com experienced intermittent failures in web and API requests. These issues affected a small percentage of users (mostly related to pull requests and issues), with a peak error rate of 0.165% across all requests.We identified a framework upgrade that caused kernel panics in our Kubernetes infrastructure as the root cause. We mitigated the incident by downgrading until we were able to disable a problematic feature. In response, we have investigated why the upgrade caused the unexpected issue, have taken steps to temporarily prevent it, and are working on longer term patch plans while improving our observability to ensure we can quickly react to similar classes of problems in the future.
We saw a spike in error rate with issues related pages and API requests due to some problems with restarts in our kubernetes infrastructure that, at peak, caused 0.165% of requests to see timeouts or errors related to these API surfaces over a 15 minute period. At this time we see minimal impact and are continuing to investigate the cause of the issue.
We are investigating reports of issues with service(s): Issues We're continuing to investigate. Users may see intermittent HTTP 500 responses when using Issues. Retrying the request may succeed.
We are investigating reports of issues with service(s): Issues We're continuing to investigate. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of issues with service(s): Issues. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded performance for Issues