Is Quay.io Down? How to Check Red Hat Quay Status Right Now

Statusfield Team
6 min read

Container pulls failing? Quay.io not responding? Learn how to check Red Hat Quay.io status in real time, what components can fail, and how to get instant alerts when the registry goes down.

Quay.io is Red Hat's hosted container image registry — the infrastructure behind millions of container pulls every day for Kubernetes clusters, CI/CD pipelines, and developer workstations. When Quay.io goes down, docker pull fails, deployments stall, and CI builds break silently or with cryptic errors.

Here's how to check if Quay.io is the problem, what the failure modes look like, and how to make sure you know before your pipeline does.

Is Quay.io Down Right Now?

Check these in order:

  1. Statusfield — Quay.io status — real-time monitoring, updated continuously.
  2. Red Hat's official status page — status.redhat.com covers Quay.io and other Red Hat services with component-level detail.
  3. Twitter/X — search quay.io down sorted by Latest. DevOps teams report registry failures quickly.
  4. Red Hat Customer Portal — if you have a Red Hat subscription, active incidents are surfaced in the portal.

What Quay.io Outages Look Like

Quay.io failure modes differ depending on which layer is affected:

| Component | Symptom | Who's affected |
|---|---|---|
| Registry API | `docker pull` returns 500/503; image manifests unavailable | All users pulling or pushing images |
| Image serving (CDN) | Pull initiates but blobs fail to download; builds hang | CI/CD pipelines, Kubernetes nodes pulling images |
| Authentication | Login fails; `unauthorized: authentication required` errors | Anyone authenticating via robot accounts or OAuth |
| Build service | Automated builds don't trigger or fail at the queue stage | Teams using Quay.io's built-in build triggers |
| Notifications | Build completion webhooks don't fire | Pipelines waiting on Quay build events |

A registry API outage is the most disruptive — it blocks all image operations. A CDN/blob outage is more subtle: the manifest resolves but the layer downloads stall or time out, which can look like a slow network at first.

Common Errors During a Quay.io Outage

| Error | Likely cause |
|---|---|
| `Error response from daemon: Get "https://quay.io/v2/...": dial tcp: i/o timeout` | Registry unreachable — full outage or DNS issue |
| `unauthorized: authentication required` | Auth service degraded, or your token has expired |
| `toomanyrequests: Rate limit exceeded` | Usually your issue, but can spike during partial outages as clients retry |
| `error pulling image configuration: ... 500 Internal Server Error` | Registry API error on manifest/config fetch |
| `net/http: TLS handshake timeout` | TLS termination layer having issues |
| `manifest unknown: manifest unknown` | Tag missing or registry inconsistency — can occur during partial outages |

Key distinction: Timeout errors point to infrastructure problems (registry down, CDN layer failing). Auth errors can be either an outage or an expired credential — check Statusfield first before rotating tokens.
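One way to make that distinction quickly from a terminal is to probe the registry's `/v2/` endpoint and classify the HTTP status code. A minimal sketch, assuming `curl` is available — the classification messages are our own heuristic, not official Quay diagnostics:

```shell
#!/bin/sh
# Classify an HTTP status code from probing https://quay.io/v2/.
# curl reports "000" when the connection itself fails (timeout, DNS, TLS).
classify_quay_status() {
  case "$1" in
    000) echo "unreachable: full outage, DNS, or local network issue" ;;
    401) echo "registry up: 401 is the normal unauthenticated response" ;;
    429) echo "rate limited: your clients, or retry storms during a partial outage" ;;
    5??) echo "server error: likely a Quay.io incident" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# Usage (commented out so the probe is opt-in):
# code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://quay.io/v2/)
# classify_quay_status "$code"
```

Note that a 401 here is healthy: the v2 registry API requires authentication, so an unauthenticated probe returning 401 means the API is up and responding.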

Why CI/CD Pipelines Break Hard

Quay.io is commonly embedded at multiple points in a CI/CD pipeline — your build environment pulls a base image, your test stage pulls a test runner image, and your deploy stage pushes the final artifact and then your cluster pulls it. A Quay.io outage can break all three stages independently.

The CI failure pattern during an outage:

  1. Build job starts, tries to pull quay.io/your-org/base-image:latest
  2. Pull hangs for 60–300 seconds, depending on client and daemon timeouts
  3. Job fails with a timeout error that looks like a network problem
  4. Engineers start investigating their VPN, firewalls, or cluster networking — the actual cause is the registry, not their infrastructure

This diagnostic confusion is why monitoring the registry externally is valuable. If you know Quay.io is down before your pipeline fails, you save the debugging cycle entirely.

Kubernetes Clusters: ErrImagePull and ImagePullBackOff

During a Quay.io outage, Kubernetes nodes that need to pull images will enter ErrImagePull status, then back off into ImagePullBackOff. The pod events will show:

Warning  Failed     2m    kubelet  Failed to pull image "quay.io/...": 
         rpc error: code = Unknown desc = failed to pull and unpack image: 
         failed to resolve reference "quay.io/...": unexpected status code 503

If your pods are already running and only need to restart, cached images will be used, depending on your imagePullPolicy. Pods with imagePullPolicy: Always will fail even on restart. Pods with imagePullPolicy: IfNotPresent or Never are unaffected as long as the image is already cached on the node.

During a Quay.io outage:

  • New deployments: will fail
  • Rolling updates: will stall
  • Existing pods restarting with Always pull policy: will fail to reschedule
  • Existing pods restarting with IfNotPresent and cached image: will work
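If your workloads can tolerate it, pinning images to an immutable reference and using IfNotPresent makes restarts resilient to registry outages. A sketch of the relevant pod spec fields — the names and digest are placeholders:

```yaml
# Placeholder names; pin to a digest (or immutable tag) so IfNotPresent is safe.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      # Prefer a digest over :latest — with :latest, IfNotPresent can run stale images.
      image: quay.io/your-org/app@sha256:<digest>
      imagePullPolicy: IfNotPresent
```

The trade-off: IfNotPresent with a mutable tag like :latest can silently run stale images, which is why the digest pin belongs in the same change.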

How to Get Instant Quay.io Outage Alerts

By the time your pipeline fails, you've already lost minutes (or more). The better pattern: know before the failure happens.

Monitor Quay.io on Statusfield — Statusfield polls Quay.io's status continuously and sends an alert the moment any component changes. Route the alert to email, Slack, or a webhook into your incident management system.

For teams running Kubernetes in production: consider adding Quay.io monitoring to your runbook so on-call engineers check registry status first when pods fail to schedule.

Start monitoring Quay.io on Statusfield → — free, no credit card required.


Frequently Asked Questions

Is Quay.io down for everyone or just me?

Check Statusfield or status.redhat.com. If both show operational, the issue is likely local — check your Docker daemon logs, verify your authentication token hasn't expired, and test with curl -v https://quay.io/v2/ to confirm connectivity.

Why does docker pull quay.io/... hang but not give an error?

This is typically the image blob download stalling — the manifest resolved, but the layer download from Quay's CDN is timing out. It often looks like a slow connection rather than a failure. Check the CDN/blob serving component specifically on the status page. If you have DOCKER_CONTENT_TRUST=1 set, you can also retry with docker pull --disable-content-trust to rule out signature verification as the cause.

My Quay.io robot account suddenly can't authenticate — is this an outage?

It could be either. Check Statusfield first. If the authentication component shows healthy, your token may have expired or been revoked. Generate a new robot account token in the Quay.io console and retry before assuming an outage.

Our Kubernetes pods show ImagePullBackOff — is Quay.io down?

Check Statusfield immediately. If Quay.io is degraded, there's nothing to debug in your cluster — wait for recovery. If Quay.io shows healthy, look at: image tag existence (docker manifest inspect quay.io/your-org/image:tag), pull secret configuration in the namespace, and network policy rules that might block the kubelet from reaching quay.io.

How often does Quay.io go down?

Quay.io publishes incident history at status.redhat.com. Container registry incidents tend to be infrequent but high-impact — because they sit in the critical path of deployments, even a short degradation affects many teams simultaneously.

Can I mirror Quay.io images to avoid outage impact?

Yes. For production-critical images, mirroring to a local registry (Harbor, AWS ECR, GCR) provides resilience against upstream registry outages. This is common practice for base images and build tools. Application images are harder to pre-mirror since they change with every deployment.
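A mirroring step can be sketched with skopeo, which copies images between registries without a local Docker daemon. In this sketch, `registry.example.com` is a placeholder for your internal registry and the rewrite rule is our own convention:

```shell
#!/bin/sh
# Sketch of a mirror step, assuming skopeo is installed and
# registry.example.com (placeholder) is your internal registry.
mirror_target() {
  # quay.io/org/image:tag -> registry.example.com/org/image:tag
  echo "$1" | sed 's|^quay\.io/|registry.example.com/|'
}

mirror_image() {
  src=$1
  dst=$(mirror_target "$src")
  # skopeo copy works registry-to-registry, no local daemon required:
  skopeo copy "docker://$src" "docker://$dst"
}

# Example: mirror the base images your builds depend on.
# mirror_image quay.io/your-org/base-image:v1.2.3
```

Run it on a schedule (or as a pipeline step) for your base images, then point Dockerfiles and pod specs at the mirror so an upstream Quay.io outage doesn't block builds.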

Published: April 4, 2026. Check current Quay.io status →