Alchemy

Is Alchemy Down Right Now? Discover if there is an ongoing service outage.

Alchemy is currently Degraded
Last checked from Alchemy's official status page

Incident History

Report: "BNB Mainnet Degraded Performance"

Last update
investigating

We are currently investigating occasional errors on BNB Chain. Customers may experience degraded performance.

Report: "BNB Mainnet Degraded Performance"

Last update
investigating

We are currently investigating occasional errors on BNB Chain. Customers may experience degraded performance.

investigating

We are currently investigating this issue.

Report: "Degraded Performance on Roll Up Chains:"

Last update
investigating

We are currently investigating this issue.

Report: "BNB Chain degraded performance"

Last update
investigating

We are currently investigating occasional errors on BNB Chain. Customers may experience degraded performance.

Report: "Elevated Errors for Legacy Domain (alchemyapi.io)"

Last update
investigating

We’re currently investigating elevated error rates for requests made to the legacy domain alchemyapi.io. This issue affects traffic routed through that domain. Requests sent to the new domain (*.g.alchemy.com) are not impacted and continue to perform as expected.
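
If you are still sending traffic through the legacy domain, the workaround implied by this report is to point requests at the *.g.alchemy.com domain instead. Below is a minimal sketch, assuming a standard JSON-RPC POST; the eth-mainnet network slug and the YOUR_API_KEY placeholder are illustrative, not taken from the incident.

```typescript
// Hypothetical sketch: moving a JSON-RPC call from the legacy domain to the
// current *.g.alchemy.com domain. The network slug and API key are placeholders.
const LEGACY_URL = "https://eth-mainnet.alchemyapi.io/v2/YOUR_API_KEY"; // affected by this incident
const CURRENT_URL = "https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY"; // reported unaffected

async function getBlockNumber(rpcUrl: string): Promise<string> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const body = await res.json();
  return body.result; // hex-encoded latest block number
}

// Point traffic at CURRENT_URL rather than LEGACY_URL.
getBlockNumber(CURRENT_URL).then(console.log);
```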

Report: "Service Degradation on Unichain Mainnet"

Last update
investigating

We’re aware of an ongoing issue impacting Unichain Mainnet. This is currently causing a stall in safe head progression and no new state root updates. Transactions may still be submitted but confirmations and finalized state updates may be delayed until mitigation is complete. The Unichain team has shared a tentative mitigation time of ~2 hours. We’re closely monitoring the situation and will provide updates as soon as more information becomes available!

Report: "[Cloudflare] Elevated Latency in US-East (IAD) Region"

Last update
investigating

We are continuing to investigate this issue.

investigating

We are currently observing intermittent latency and elevated error rates on some requests routed through the US-East (IAD) region. Customers with traffic routed through US-East may see increased latency and occasional errors. This is tied to scheduled maintenance by one of our infrastructure providers, Cloudflare; see their status page here: https://www.cloudflarestatus.com/incidents/8m430h71zy9c We are monitoring the situation closely with the Cloudflare team.

Report: "[Network Wide] Celo Alfajores Block Stalled"

Last update
investigating

Celo Alfajores block production is stalled at block 58410172, mined Oct-02-2025 12:38:12 AM +UTC: https://alfajores.celoscan.io/block/58410172 This is a network-wide incident and we'll keep this status page up to date.
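
For incidents like this, a quick client-side way to confirm a stall is to poll eth_blockNumber and check whether the height advances. The sketch below is illustrative only; the RPC_URL placeholder stands in for whatever Alfajores endpoint you use.

```typescript
// Hypothetical sketch: detect a stalled chain by checking whether the block
// height advances over a short window. RPC_URL is a placeholder endpoint.
const RPC_URL = "https://<your-celo-alfajores-endpoint>";

async function blockNumber(): Promise<number> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const { result } = await res.json();
  return parseInt(result, 16);
}

async function isStalled(windowMs = 60_000): Promise<boolean> {
  const before = await blockNumber();
  await new Promise((r) => setTimeout(r, windowMs));
  const after = await blockNumber();
  return after <= before; // no new blocks in the window suggests a stall
}

isStalled().then((stalled) => console.log(stalled ? "chain appears stalled" : "chain advancing"));
```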

Report: "Elevated Latencies in the USE1 Region"

Last update
investigating

We're currently encountering elevated latencies in the USE1 region. We're actively investigating with our engineering team and we'll keep this page updated alongside our findings.

Report: "[Maintenance for Shanghai Hardfork] Linea Sepolia"

Last update
investigating

The Linea Sepolia network will undergo the Shanghai hardfork. During this upgrade, downtime and stalls are expected for node operators (including our infrastructure). We will monitor the network closely throughout the maintenance and restore full functionality as soon as possible after the upgrade is complete.

Report: "SSL Issues on RPC Endpoints"

Last update
investigating

We are currently observing SSL issues impacting customer traffic, with confirmed effects on Scroll and Shape chains and potential impact on additional chains. Our team is actively engaged with the Cloudflare team to identify the root cause and restore normal operations.

Report: "Lens Mainnet State Stalled"

Last update
investigating

We are currently investigating this issue.

Report: "Block Lag on Multiple Networks"

Last update
identified

The issue has been identified and we are rolling out a fix.

investigating

We are currently investigating the issue.

Report: "Elevated Latency and Increased Timeouts on Arbitrum Mainnet"

Last update
investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "NFT Webhooks creation issues"

Last update
investigating

We are currently investigating this issue.

Report: "Scheduled Polygon Mainnet Node Upgrade [Bor v2.3.0 and Heimdall v0.4.0 release]"

Last update
investigating

Scheduled Start: Monday, September 29, 2025, at 08:00 AM UTC
Scheduled End: Monday, September 29, 2025, at 11:00 AM UTC
Impact: No downtime expected
Details: Polygon mainnet has announced a scheduled hard fork on October 8, 2025, at 14:12 UTC. To prepare for this event, we will be upgrading our Polygon mainnet nodes ahead of time on September 29, 2025, starting at 08:00 AM UTC. The upgrade is expected to take approximately 3 hours. This upgrade ensures our infrastructure is aligned with the Bor v2.3.0 and Heimdall v0.4.0 releases, part of the Rio hard fork for the Veblop upgrade. You can find more information here: https://forum.polygon.technology/t/bor-v2-3-0-and-heimdall-v0-4-0-release-rio-hard-fork-for-veblop-upgrade/21310

Report: "[Scheduled Maintenance] Linea Sepolia (Pectra Upgrade)"

Last update
investigating

Start Time: 15:00 UTC
End Time: 16:00 UTC
The Linea Sepolia network will undergo the Pectra Upgrade during this window. As part of the upgrade, downtime and stalls are expected for node operators relying on Geth (including our infrastructure). We will monitor the network closely throughout the maintenance and restore full functionality as soon as possible after the upgrade is complete.

Report: "Elevated latency in USE1 Region"

Last update
investigating

We are investigating increased latency in the USE1 region and are actively coordinating with the Cloudflare team to resolve the issue.

Report: "Elevated Errors Creating Custom Webhooks"

Last update
investigating

We are currently investigating this.

Report: "Zeta Mainnet Stalled for Upgrade"

Last update
monitoring

Block production stopped for upgrade window

Report: "Scheduled Maintenance: Flow Testnet Upgrade"

Last update
investigating

Scheduled: Sep 17, 2025 – 15:00 to 17:00 UTC
Components Affected: Flow Testnet RPC & related services
Description: The Flow Testnet will undergo the Forte Network Upgrade, unlocking new composability and automation features for developers and AI agents. During this time, some testnet services on Alchemy may be temporarily unavailable.
Impact: Testnet SendTransaction operations may be unavailable. Public endpoints will remain available for reading chain data. Mainnet services remain fully operational.
More info: https://status.flow.com/incidents/2x404y0vkn8b

Report: "Ethereum Mainnet Node Fleet Upgrade – Ongoing"

Last update
investigating

We are continuing to investigate this issue.

investigating

Status: Partial Mitigation / Upgrade in Progress
Regions Affected: EUC, APAC
Details: We recently observed elevated latencies and errors on ETH Mainnet due to an issue with a subset of Reth nodes. Mitigation steps have been applied: traffic is currently served from our Geth nodes in USE1, and Reth node upgrades are underway.
Current Impact: Customers in the APAC region may experience slightly higher latencies while node upgrades are in progress. The EUC region has been fully upgraded, and traffic is now being served normally.
Next Steps: Complete the rolling upgrade of Reth nodes in APAC. Re-enable all regions once upgrades are stable. Disable full forwarding for trace methods after the upgrade is fully complete to restore normal operation.
Updates: We will provide further updates as the APAC region upgrade completes and all regions return to normal latencies. Please reach out if you have any questions!

Report: "Elevated Latency for Alchemy Enhanced APIs"

Last update
investigating

We are currently investigating this issue.

Report: "Scroll degraded performance"

Last update
investigating

Some customers may experience performance degradation. We are investigating this issue.

Report: "Eth Mainnet Subgraph Indexing Delay"

Last update
investigating

We are currently investigating this issue.

Report: "Elevated Errors on World Chain"

Last update
investigating

We are currently investigating elevated errors on World Chain related to our open incident on Ethereum Mainnet.

Report: "Eth Mainnet state stalled on"

Last update
investigating

We are currently investigating this issue.

Report: "Linea Sequencer Outage"

Last update
investigating

We are currently investigating this issue.

Report: "Linea increased latencies"

Last update
identified

We identified elevated latency on Linea due to greatly increased traffic. We have made some adjustments and are monitoring the results.

Report: "Polygon Mainnet downtime"

Last update
investigating

We are currently investigating this issue.

Report: "[Cloudflare incident] Elevated 503 error rate"

Last update
investigating

Cloudflare identified an incident across all of their products. Alchemy services may be interrupted and customers may see 503 errors.

Report: "Starknet Mainnet Downtime"

Last update
investigating

We are currently investigating this issue.

Report: "Zetachain Mainnet"

Last update
investigating

We are currently seeing issues with our Zetachain Mainnet nodes. Our team is actively investigating.

Report: "Starknet nodes down"

Last update
investigating

Since the recent Starknet re-org we've seen issues across our Starknet node fleet. We are actively investigating solutions.

Report: "Scroll Mainnet Down"

Last update
resolved

We are currently investigating this issue.

Report: "Elevated Errors Worldchain Mainnet"

Last update
resolved

The issue has been identified and a fix is being implemented.

Report: "Elevated Errors Across Eth Mainnet"

Last update
investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "STARKNET_MAINNET Scheduled Maintenance: Grinta Upgrade (v0.14)"

Last update
resolved

The Starknet team has announced that the Grinta upgrade (v0.14) will take place on Monday, September 1st at 6:00 AM GMT. This is a chain-wide upgrade introducing a decentralized architecture, mempool, and preconfirmation support to improve the user experience. The Starknet team has acknowledged that a temporary downtime of approximately 15 minutes is expected during the upgrade. Thank you for your patience and understanding.

Report: "Degraded Performance on ZkSync Mainnet"

Last update
investigating

A network-wide issue is currently affecting ZkSync Mainnet, which is causing node crashes. Our team is actively collaborating with the ZkSync engineering team to resolve this as quickly as possible. Customers may observe elevated 5xx error rates as a result of this incident.

Report: "Elevated 503s on Rollups chain with Wallet Services / Account Abstractions endpoints"

Last update
investigating

We're currently experiencing elevated 503 errors on rollup chains with Wallet Services / Account Abstraction endpoints, specifically with alchemy_requestGasAndPaymasterAndData. We'll share updates as we uncover more!
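
Until the 503s subside, one defensive pattern on the client side is to retry transient gateway errors with backoff. The sketch below is a generic JSON-RPC helper, not Alchemy's SDK; the endpoint URL and request parameters are placeholders, and the exact alchemy_requestGasAndPaymasterAndData payload should be taken from Alchemy's documentation.

```typescript
// Hypothetical sketch: retry a JSON-RPC call with exponential backoff when the
// gateway returns a transient 503. The endpoint URL and params are placeholders.
async function callWithRetry(rpcUrl: string, method: string, params: unknown[], maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(rpcUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
    });
    if (res.status === 503 && attempt < maxRetries) {
      // Back off before retrying: 0.5s, 1s, 2s, ...
      await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
      continue;
    }
    if (!res.ok) throw new Error(`RPC request failed with HTTP ${res.status}`);
    return res.json();
  }
}
```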

Report: "[Cloudflare-Incident] Increased errors and latencies"

Last update
investigating

We're currently being impacted by a Cloudflare incident. We're actively investigating with the Cloudflare team and will update this status page with our latest findings!

Report: "[Network Wide] Increase 5xx on Sei Mainnet"

Last update
investigating

We're currently investigating an underlying issue where Sei validators are not creating new blocks on chain, which is creating a backlog of pending transactions. We're working closely with the Sei team to understand the root cause and assess the impact.

Report: "[Network-Wide] Polygon ZKEVM Block Production Stalled"

Last update
investigating

Polygon zkEVM is currently experiencing stalled block production. The latest block produced was 24940457: https://zkevm.polygonscan.com/

Report: "Increased Latencies"

Last update
investigating

We're currently seeing increased latencies with third-party providers. We're investigating and will post updates to this page!

Report: "Elevated -32603 with getTransactionReceipts and Transfers API on Base"

Last update
investigating

We're currently seeing elevated -32603 errors on Base Mainnet with the getTransactionReceipts and Transfers API endpoints. We're actively investigating and we'll keep this status page updated!
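
For context, -32603 is the JSON-RPC "internal error" code, so it arrives in the response body rather than as an HTTP failure. A minimal sketch of surfacing it is below; the alchemy_getTransactionReceipts request shape shown is an assumption based on this report, so verify it against Alchemy's docs before relying on it.

```typescript
// Hypothetical sketch: surface JSON-RPC error -32603 from a batch receipt
// lookup so it can be retried or alerted on. The method name and params mirror
// the incident report but are treated here as an assumption.
async function getReceiptsForBlock(rpcUrl: string, blockNumberHex: string) {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "alchemy_getTransactionReceipts",
      params: [{ blockNumber: blockNumberHex }],
    }),
  });
  const body = await res.json();
  if (body.error?.code === -32603) {
    // Internal JSON-RPC error, as described in this incident: treat as transient.
    throw new Error(`transient RPC error: ${body.error.message}`);
  }
  return body.result;
}
```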

Report: "[Network Wide] Base Sepolia Chain Halt"

Last update
investigating

The Base Sepolia chain is currently halted. This is a network-wide incident reported by the Base team: https://status.base.org/incidents/hm21kw149p5x

Report: "Elevated error for APSE1 region"

Last update
monitoring

A fix has been implemented and we are monitoring the results.

Report: "Elevated Latency in EUC1 & APSE1"

Last update
identified

We are currently investigating reports of increased latency affecting services on all networks in the EUC1 (Europe Central) and APSE1 (Asia Pacific Southeast) regions. Our engineering team is working to identify the cause and mitigate the impact. We will provide updates as soon as more information is available.

Report: "Zetachain network wide stalled for a planned upgrade"

Last update
monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Elevated errors in EUC1 region"

Last update
identified

The issue has been identified and a fix is being implemented.

Report: "Blast API Performance Degradation"

Last update
investigating

We are currently observing a partial service degradation impacting the Blast API segment of our infrastructure across multiple chains. Affected systems are experiencing elevated latency, intermittent request failures, and block propagation delays across multiple regions. This may result in:
Authentication Failures: Users may be unable to log in due to timeouts or dropped authentication requests.
Block Lag: Delayed synchronization with the latest chain state, potentially impacting transaction visibility and confirmation times.
We will provide continuous updates as remediation progresses and full service performance is restored.

Report: "Monad Testnet Halt for Planned Upgrade"

Last update
monitoring

Monad BFT consensus goes live on public testnet. This is the first tail-fork-resistant consensus mechanism on the EVM. The testnet will halt at 13:30 GMT for implementation.

Report: "Elevated errors for Europe- central"

Last update
resolved

From 11:20 UTC to 11:41 UTC: while working on a few improvements to speed up routing, we pushed a configuration that was slightly off in our European data centers. This has been fixed. We are adding guardrails to make sure this won't happen again!

Report: "Elevated Latencies For EU Region"

Last update
investigating

Due to an issue with an infra provider, we're seeing increased latencies for requests hitting our EU stack. We're working closely with our provider, as they resolve this issue on their end!

Report: "Elevated Latency and Timeouts on EAPI Services"

Last update
investigating

We're currently experiencing increased latency and timeout errors across our EAPI services including endpoints such as our Portfolio API. Our team is actively investigating the root cause and working to restore normal performance as quickly as possible.

Report: "Polygon Mainnet Block Height Stall"

Last update
identified

We are currently observing a protocol-wide halt on the Polygon mainnet at block height 74,592,238. This disruption is impacting transaction finalization, block propagation, and all dependent network operations. Our engineering team is actively implementing a node upgrade to remediate the issue.

Report: "Elevated 5xx errors on Polygon mainnet"

Last update
investigating

We are currently investigating.

Report: "Seeing Elevated latencies"

Last update
investigating

We are currently investigating

Report: "Elevated error rates and latency across EVM chains"

Last update
resolved

Our infrastructure experienced a brief service disruption today between 06:05 AM UTC and 06:17 AM UTC across our EVM chains. During this window, we observed elevated error rates and increased latency caused by unusually high traffic that overloaded a service. We've mitigated the issue by rate limiting the source, and everything is now fully stabilized. We apologize for any inconvenience this may have caused.

Report: "Elevated Errors on Botanix"

Last update
investigating

We are currently investigating this issue.

Report: "Elevated Paymaster Errors on Base"

Last update
investigating

We are currently investigating this issue.

Report: "Elevated Errors on Usage Cap Updates"

Last update
resolved

This incident has been resolved.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

investigating

We’re currently investigating this issue.

Report: "Elevated Errors on WorldChain"

Last update
investigating

We are currently investigating this issue.

Report: "Elevated Errors on Bundler For Base Mainnet"

Last update
investigating

We are currently investigating this issue.

Report: "Elevated Errors on Subgraphs Across Networks"

Last update
investigating

We are currently investigating this issue!

Report: "Increased Latency and Errors for NFT APIs Across Multiple Networks"

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating increased latency and errors for NFT API endpoints.

Report: "CrossFi Testnet Block Stalled"

Last update
investigating

We are currently investigating this issue.

Report: "[Network Wide] Flow Testnet Block Production Stalled"

Last update
resolved

This incident has been resolved.

investigating

We're seeing block production halted on Flow Testnet at the network level.

Report: "Spike in -32000 errors on Polygon Mainnet"

Last update
investigating

We are currently investigating this issue.

Report: "Lens Testnet Issues"

Last update
investigating

We're actively investigating issues impacting Lens Testnet

Report: "[Network Wide] Celo Alfajores Stuck on Block 51109972"

Last update
investigating

Celo Alfajores block production is stalled at block 51109972 - see the explorer here: https://alfajores.celoscan.io/block/51109972 This is a network-wide incident and we're working with the Celo team on a fix!

Report: "Finalized Block Stalled on Unichain Mainnet"

Last update
investigating

We're seeing the finalized block stalled at 21275800 since ~01:42 UTC. This is related to this incident from Unichain: https://status.unichain.org/cmcvcwdwv001oc3nwefxc4k17 Our nodes are catching up and we will update this status page accordingly!

Report: "Elevated 5xx errors across our Data APIs on multiple networks"

Last update
investigating

We are currently investigating this issue.

Report: "[Network Wide] Botanix Testnet Stalled"

Last update
investigating

We're currently seeing Botanix Testnet block production stalled at block 3072275: https://3636.testnet.routescan.io/block/3072275 This is a network-wide incident and we're working closely with the Botanix team!

Report: "Solana Devnet Stall"

Last update
identified

We have identified a network stall on Solana Devnet. Please note that this stall affects Devnet only, not Mainnet or Testnet.

Report: "Impaired performance with Erigon nodes on Polygon Mainnet"

Last update
investigating

The Erigon portion of our Polygon Mainnet node fleet is currently stalled. This is a network-wide issue, and we're working closely with the Polygon team to fix it. As a result, you may experience 503 errors or request timeouts. We will provide regular updates as we resolve this.

Report: "Increased Errors and Latency Across Multiple Networks for NFT, Transfers, and Token Balances APIs"

Last update
identified

The issue has been identified and a fix is being implemented.

investigating

We’ve observed elevated errors and latency across multiple networks for our NFT, Transfers, and Token Balances APIs. We are actively investigating the issue.

Report: "Elevated 503 Errors Across Multiple Networks for NFT, Transfers, and Token Balances APIs"

Last update
investigating

We’ve observed elevated 503 errors across multiple networks for our NFT, Transfers, and Token Balances APIs. We are actively investigating the issue.

Report: "Eth Mainnet Elevated Latency From APAC Region"

Last update
monitoring

We saw a short spike in latency from the APAC region and have recovered our nodes. We're actively monitoring the recovery now!

Report: "Elevated Latency on Solana Mainnet"

Last update
identified

We've identified the issue and are applying a fix.

Report: "Elevated Errors on Monad"

Last update
investigating

We are currently investigating this issue!

Report: "Elevated Errors on Solana Endpoints"

Last update
resolved

This incident has been resolved.

investigating

We are actively investigating this issue!

Report: "Elevated Errors on Solana Endpoints"

Last update
Resolved

This incident has been resolved.

Investigating

We are actively investigating this issue!

Report: "Elevated Errors on zksync sepolia"

Last update
resolved

This incident has been resolved.

investigating

Working with the zksync team to resolve network-wide issues.

Report: "Elevated Errors on Zksync Sepolia"

Last update
resolved

This incident has been resolved.

identified

Seeing network-wide issues on Zksync Sepolia. We have flagged this to the Zksync team and are actively investigating!

Report: "Elevated Errors on zksync sepolia"

Last update
Resolved

This incident has been resolved.

Investigating

Working with zksync team to resolve network wide issues

Report: "Elevated Errors on Zksync Sepolia"

Last update
Resolved

This incident has been resolved.

Identified

Seeing network wide issues on Zksync Sepolia. We have flagged to the Zksync team and are actively investigating!

Report: "[Network Wide] Unichain Sepolia Stalled on Block 21818269"

Last update
resolved

This incident has been resolved.

investigating

We observed that Unichain Sepolia is stuck on block 21818269. This is a network-wide incident: https://unichain-sepolia.blockscout.com/

Report: "[Network Wide] Unichain Sepolia Stalled on Block 21818269"

Last update
Resolved

This incident has been resolved.

Investigating

We observed that Unichain Sepolia is stuck on Block 21818269. This is a network wide incident: https://unichain-sepolia.blockscout.com/

Report: "Elevated 503s on Matic Amoy"

Last update
resolved

This incident has been resolved.

investigating

We're currently investigating elevated 503s on Matic Amoy.

Report: "Elevated 503s on Matic Amoy"

Last update
Resolved

This incident has been resolved.

Investigating

We're currently investigating on an elevated 503s currently happening on Matic Amoy

Report: "World Chain Sepolia Game Day Testing"

Last update
Completed

The scheduled maintenance has been completed.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled

World will be conducting game day testing on World Chain Sepolia from 10:00am ET Wed 5/21 through 4:00pm ET Fri 5/23. You may see block production stall or reorgs on the chain during this time.

Report: "UI Issue - Usage Chart Displaying "Deactivated App""

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating an issue where the usage chart in the Alchemy Dashboard says "Deactivated App" for apps that are not deactivated. Please note that only the UI is affected - there are no issues processing requests to these apps.

Report: "UI Issue - Usage Chart Displaying "Deactivated App""

Last update
Resolved

This incident has been resolved.

Investigating

We are currently investigating an issue where the usage chart in the Alchemy Dashboard says "Deactivated App" for apps that are not deactivated. Please note that only the UI is affected - there are no issues processing requests to these apps.

Report: "eth_getLogs - Empty Logs"

Last update
resolved

We are now fully upgraded to the newest RETH patch: v1.4.2.

monitoring

We have downgraded our RETH nodes to v1.3.12, which has resolved the issue. We are continuing to work with the RETH team to patch v1.4.1, and will mark this incident as "Resolved" when we are able to re-upgrade to v1.4.1.

identified

The issue has been identified, and is related to the v1.4.1 RETH upgrade. We are working with the RETH team to patch this upgrade and resolve the issue.

investigating

We are currently investigating an incident where eth_getLogs is returning no logs on certain chains, including Base Mainnet and Eth Sepolia
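
A rough client-side check during an incident like this is to query eth_getLogs over a recent block range that should contain activity and flag an empty result. The sketch below assumes a placeholder endpoint and uses the well-known ERC-20 Transfer topic; the block range and threshold are illustrative.

```typescript
// Hypothetical sketch: flag suspiciously empty eth_getLogs responses during an
// incident like this one. The endpoint passed to checkLogs is a placeholder.
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"; // keccak256("Transfer(address,address,uint256)")

async function rpc(rpcUrl: string, method: string, params: unknown[]) {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

async function checkLogs(rpcUrl: string) {
  const latest = parseInt(await rpc(rpcUrl, "eth_blockNumber", []), 16);
  const logs = await rpc(rpcUrl, "eth_getLogs", [
    {
      fromBlock: "0x" + (latest - 10).toString(16),
      toBlock: "0x" + latest.toString(16),
      topics: [TRANSFER_TOPIC],
    },
  ]);
  // On a busy chain like Base Mainnet, ten blocks with zero Transfer logs is a red flag.
  if (Array.isArray(logs) && logs.length === 0) console.warn("eth_getLogs returned no logs; possible incident");
}

checkLogs("https://<your-base-mainnet-endpoint>");
```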

Report: "Increased Transaction and UserOperation landing times on Base Mainnet"

Last update
resolved

This incident has been resolved.

investigating

The Base team has identified an extremely large number of transactions in the mempool, which is causing the sequencer to accept transactions at a slower rate. This is a network-wide issue and we are actively monitoring it. You can reference Base's official status page as well for the latest updates: https://status.base.org/

Report: "Degraded performance on Abstract Mainnet"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating the issue

Report: "Degraded Performance on BASE_MAINNET"

Last update
resolved

This incident has been resolved.

monitoring

The sequencer has cleared, and we observe a significant improvement in the transaction landing time.

investigating

The Base team has identified an extremely large number of transactions in the mempool, which is causing the sequencer to accept transactions at a slower rate. This is a network-wide issue and we are actively monitoring it.

Report: "Increased Error Rates, Sei Mainnet"

Last update
resolved

This incident has been resolved.

investigating

We are seeing increased error rates on Sei Mainnet and are investigating the issue.

Report: "Increased Error Rate on Solana Mainnet"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Elevated 401 Response returned for multiple chains"

Last update
resolved

From approximately 08:30 UTC to 09:30 UTC we experienced degraded performance on a subset of networks which would have manifested downstream as an 'Authentication failed' error. This impacted our Wallet Service and Subgraph products as well. Mitigation efforts have been completed and services are operational. We sincerely apologize for the disruption this may have caused.

identified

We've isolated the issue and have begun mitigation efforts. We are seeing a decrease in 4XX response codes and services are re-syncing.

identified

The elevated 401 response from networks is having downstream impact on RPC, Wallet Service, and Subgraph products. We are actively working to remediate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are currently investigating this issue.

Report: "Custom Webhooks Delayed Events"

Last update
resolved

This incident has been resolved.

investigating

We're currently investigating delayed events for custom webhooks!

Report: "Unable to login to Alchemy Dashboard"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are currently working through an issue with logging in to the Alchemy Dashboard.

Report: "Latency observed across multiple networks"

Last update
resolved

This incident has been resolved as of 09:37 UTC. Starting at 07:54 UTC, a Cloudflare maintenance event caused high latency across multiple networks and methods.

monitoring

Latency metrics have stabilized, we're continuing to monitor the situation.

identified

We are beginning to see a recovery in latency figures across networks. We are continuing to investigate.

investigating

We are actively investigating elevated latency across multiple methods and networks.

Report: "Elevated Errors on Base"

Last update
resolved

Starting at approximately 2025-04-28 10:07 UTC, we experienced an issue related to log ingestion on Base which may have manifested as 'null' responses or server errors. This incident has now been resolved as of 16:07 UTC. We appreciate your patience as we worked through this.

monitoring

A fix has been implemented and we are currently monitoring, metrics have stabilized.

identified

We've identified the issue and are currently implementing a fix, node health is beginning to recover.

investigating

We are currently investigating this issue.

Report: "Elevated Errors on Flow"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Elevated Errors on Polygon ZkEVM"

Last update
resolved

This incident has been resolved.

investigating

We are continuing to investigate this issue.

investigating

Currently seeing some errors on Polygon ZkEVM which we are actively managing and investigating.

Report: "Elevated Error rates across networks"

Last update
resolved

This incident has been resolved.

identified

We're seeing some intermittent error rates across the board due to an underlying upgrade to one of our systems. We're already seeing recovery on our end and will continue to monitor.

Report: "Elevated Errors on OpBNB"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Blast Sepolia Network Stalled"

Last update
resolved

This incident has been resolved.

investigating

The Blast Sepolia network is stalled at block 20155661: https://sepolia.blastexplorer.io/

Report: "CrossFi testnet stalled"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Lens mainnet issues"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Berachain mainnet stalled"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating an issue with Berachain Mainnet - the network is stalled and we'll share updates here ASAP!

Report: "[Network wide] Flow testnet down"

Last update
resolved

This incident has been resolved.

investigating

Flow testnet is encountering an incident at the network level

Report: "Increased latencies for EU traffic"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating increased latencies for Europe traffic.

Report: "Elevated Errors on Signer"

Last update
resolved

Retroactive status page for elevated errors on user sign up and authentication with the signer. We have identified and resolved this issue.

Report: "Incident 4/9/2025"

Last update
resolved

This incident has been resolved.

investigating

We are currently seeing issues across Base Mainnet: https://status.base.org/

investigating

We are currently investigating this issue.

Report: "Increased Latency on Monad Testnet for Certain Methods"

Last update
resolved

This incident, which mainly affected latency for eth_calls and alchemy_getTokenBalances, has now been resolved.

investigating

We are currently investigating this issue.

Report: "Eth Sepolia Internal Transfers Unavailable"

Last update
resolved

This incident has been resolved.

investigating

We are currently experiencing an issue affecting internal transfer ingestion on Eth Sepolia, which has been halted. This issue is due to an ongoing problem at the node client level with Erigon on Sepolia: https://github.com/erigontech/erigon/issues/14089 Since this is an upstream issue, we are waiting for Erigon to deploy a fix! We'll keep this page up to date!

Report: "Intermittent Issues Loading Dashboard"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We are currently investigating a dashboard issue: you may see "Cannot read properties of undefined (reading 'json')".

Report: "Impaired performance with webhooks on ARBITRUM_MAINNET and ARBITRUM_SEPOLIA"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been pushed and we are monitoring our systems.

investigating

We are currently investigating the issue.

Report: "Celo Mainnet Downtime (L1 to L2 Migration)"

Last update
resolved

This incident has been resolved.

monitoring

Celo Mainnet will be migrating to an L2 starting on Tuesday, March 25th 7:45PM PST (2:45AM UTC Wednesday, March 26th). Expect downtime for a few hours.

Report: "[Network Wide ]Monad Testnet Stalled"

Last update
resolved

This incident has been resolved.

identified

We are continuing to work on a fix for this issue.

identified

The issue has been identified and a fix is being implemented.

Report: "3/19 Alchemy Signer Scheduled Maintenance (6:00PM-6:20PM EST)"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

Alchemy Signer will not be operational on 3/19 for > 20 minutes due to scheduled maintenance on Turnkey. For reference: https://www.turnkey-status.com/ During this time: new user sign up, authentication, and signing transactions using alchemy signer will not work.

Report: "Mantle Mainnet Degraded Performance"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Degraded performance on Ronin Mainnet newHead subscription Websocket"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Linea Sepolia Block Production Stalled"

Last update
resolved

This incident has been resolved.

investigating

We are currently investigating this issue.

Report: "Due to a network wide issue we are seeing degraded performance on Polygon zkEvm"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

Report: "Monad Testnet Instability Issues"

Last update
resolved

This incident has been resolved.

monitoring

Incident has been resolved.

monitoring

A patch released by the Monad team has been deployed to our nodes. This, along with additional node resources coming online, has significantly improved our latency figures back to expected baselines. We will continue to monitor.

investigating

We are currently investigating some instability on Monad that can lead to high response time. We'll update this page alongside our findings! Thanks for your patience!

Report: "Blast Sepolia eth_call issues"

Last update
resolved

This incident has been resolved.

investigating

Following the Pectra upgrade, nodes on Blast Sepolia have been experiencing issues with eth_call. The testnet has been down, with block production also impacted. This is a network-wide incident and our team is actively investigating the root cause! Further updates will be provided as soon as we have more details!

Report: "Scroll Sepolia stalled"

Last update
resolved

Block production has started again and our nodes have caught up!

investigating

Block production on Scroll Sepolia is stalled; the latest block mined was 8486730. This is a network-wide incident: https://sepolia.scrollscan.com/

Report: "[Dashboard] Authentication Errors on Login"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently observing elevated 5XX status codes with authentication on signer endpoints. Existing sessions should not be impacted. API calls and requests are not impacted.

Report: "Ink Mainnet and Sepolia Down"

Last update
resolved

This incident has been resolved.

investigating

Both Ink Mainnet and Ink Sepolia are down. This is a network-wide incident. We'll update this page as we discover more!

Report: "Dashboard Loading Issues"

Last update
resolved

We’ve deployed a fix and the Dashboard is now fully operational again! 🚀 This incident did not impact your app’s functionality!

investigating

We saw a regression - currently investigating!

monitoring

We deployed a fix and we are monitoring the situation!

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We are currently encountering issues logging in to the Alchemy Dashboard! This does not prevent your app from working! Our engineering team is on this and we'll share updates soon! Sorry for the inconvenience!

Report: "Unichain Sepolia Not Updating to the Latest Block"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

Nodes on the current network have fallen behind the latest block number, and may return outdated block information. We're looking into the issue now and will provide an update ASAP!

Report: "Shape Mainnet Chain Halt"

Last update
resolved

This incident has been resolved.

monitoring

Blocks are being mined and transactions are currently being processed.

investigating

Network-wide incident on Shape Mainnet. The latest block mined was #9937064. We are continuing to monitor: https://shapescan.xyz/block/9937064

Report: "Monad Testnet Network Latencies"

Last update
resolved

This incident has been resolved.

monitoring

The latency seems to be related to traffic congestion on the chain. Traffic has decreased on the Monad chain as a whole and we are seeing improvements in the response times!

investigating

We are currently investigating higher latencies on Monad Testnet. We'll share updates as soon as possible and appreciate your patience.

Report: "Avax Testnet Issues"

Last update
resolved

The Avalanche team has released a new version (AvalancheGo v1.13.0-fuji) to address the instability issues affecting Fuji nodes: https://status.avax.network/incidents/n4dnd7sb1n6h We’ve now upgraded our nodes accordingly and everything should be running smoothly again!

investigating

Avalanche Testnet is undergoing a network-wide issue. Updates from the Avalanche team: https://status.avax.network/incidents/n4dnd7sb1n6h