Historical record of incidents for Pinecone
Report: "Console elements not loading due to widespread Google Cloud outage impacting control plane and all GCP clusters"
Last update: We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
Report: "[Serverless][GCP][europe-west4] Increased latencies and errors on write path operations for some indexes"
Last update: We are currently investigating this issue.
Report: "[Serverless][AWS][eu-west-1] Increased latencies and error rates on read endpoints"
Last update: The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Increased latencies and error rates on read endpoints in AWS eu-west-1"
Last update: We are currently investigating this issue.
Report: "Index creation and deletion operations delayed in Azure eastus2 serverless environment"
Last update: We are currently investigating this issue.
Report: "[Serverless][GCP][europe-west4] Increased latencies and timeouts on query operations for some indexes"
Last update: We are currently investigating this issue.
Report: "[Serverless][AWS][eu-west-1] Delayed availability of newly-written records on some indexes"
Last update: The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless][AWS eu-west-1][GCP europe-west4] Internal server errors for read path operations on some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless][GCP][europe-west4] Increased latency and timeouts on some indexes, read operations impacted"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless][AWS][us-east-1] Indexing lag on some indexes; recent writes may not appear in reads"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless][GCP][us-central1] Sporadic errors for fetch and query operations on some indexes"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
Report: "Availability issues with rerank endpoint"
Last update: We saw availability issues with `rerank` endpoints across all models due to disruptions in our rerank hosting infrastructure. The issue is now resolved and the endpoint is stable.
Report: "Partial Control Plane Outage"
Last update: This incident has been resolved.
We've identified an issue that was impacting our control plane instances in Asia. We've applied a fix and are monitoring the resolution.
A subset of control plane calls is failing with 500 errors. This impacts index management APIs and viewing resources from the console. Customer data plane operations will continue to work as expected.
Report: "[Serverless][AWS][us-east-1] Increased latency on read path requests for some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless][Azure][eastus2] High indexing lag on some indexes"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
Read operations in some indexes may not return recently written records. We are currently investigating this issue.
Report: "[Inference] Downtime on llama-embed & multilingual-e5-large models"
Last update: From 11:10 PM EST to 11:25 PM EST, the multilingual-e5-large and llama-text-embed-v2 models were down. The incident is now resolved. We are continuing to monitor and are working on an RCA.
Report: "[Serverless][AWS][us-east-1] 5xx errors and increased latency for some read operations on some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "[Serverless][GCP][us-central1] Increased latency for upsert and fetch operations on some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless][Azure][eastus2] 5xx errors on read requests to some indexes"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless][AWS][us-east-1] Errors on queries for some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Elevated P90 latencies in file upload and search"
Last update: This incident has been resolved.
A fix has been deployed and we are monitoring latencies.
Report: "Elevated latencies on file ingestion in Assistant (US)"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "[Assistant] File uploads and deletions not processing"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless] [AWS] [us-east-1] Delays in index creation and deletion operations"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless][GCP][us-central1] 500 errors on queries for some indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless][AWS][us-east-1] gRPC status code 14 (503 errors) on some queries"
Last update: This incident has been resolved.
We are currently investigating this issue.
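For context on the incident above: gRPC status code 14 is UNAVAILABLE, the transient-failure status that maps to HTTP 503, and the usual client-side mitigation is retry with exponential backoff. The sketch below is a generic grpcio pattern, not official Pinecone guidance; `run_query` is a hypothetical stand-in for whatever call your client makes.

```python
# Generic retry-on-UNAVAILABLE sketch using grpcio. `run_query` is a
# hypothetical placeholder for the gRPC call your client performs.
import random
import time

import grpc


def with_retries(call, attempts=5, base_delay=0.5):
    """Invoke `call`, retrying on gRPC UNAVAILABLE (code 14) with jittered backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except grpc.RpcError as err:
            # Re-raise anything that is not transient UNAVAILABLE,
            # or if the retry budget is exhausted.
            if err.code() != grpc.StatusCode.UNAVAILABLE or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))


# Usage: result = with_retries(lambda: run_query(...))
```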
Report: "[Serverless][Azure][eastus2] Increased latency and 5xx errors on upserts"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[GCP-Starter] Increase in write operation errors and in freshness lag"
Last update: This incident has been resolved. As communicated via email, all indexes in the gcp-starter environment must be migrated to serverless.
We are investigating increased write operation (update, delete, and upsert) errors and freshness lag in our legacy free tier, GCP-Starter. In the meantime, all users can migrate to our Serverless offering: https://docs.pinecone.io/guides/indexes/convert-a-gcp-starter-index-to-serverless
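The linked guide is the authoritative migration path; purely as an illustration, a copy-based migration with the current Python SDK looks something like the sketch below. The index names, dimension, region, and the externally tracked ID list are all assumptions, not details from the incident.

```python
# Rough copy-based migration sketch: read vectors out of a legacy
# gcp-starter index and upsert them into a new serverless index.
# All names, dimensions, and regions below are illustrative assumptions;
# follow the linked guide for the supported migration path.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create the serverless target index (cloud/region are placeholders).
pc.create_index(
    name="my-index-serverless",
    dimension=768,
    metric="cosine",
    spec=ServerlessSpec(cloud="gcp", region="us-central1"),
)

old = pc.Index("my-index")             # legacy gcp-starter index
new = pc.Index("my-index-serverless")  # serverless target

ids = [...]  # assumes record IDs are tracked outside Pinecone
for i in range(0, len(ids), 100):
    fetched = old.fetch(ids=ids[i : i + 100])
    new.upsert(
        vectors=[
            {"id": v.id, "values": v.values, "metadata": v.metadata or {}}
            for v in fetched.vectors.values()
        ]
    )
```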
Report: "[Serverless][AWS][us-east-1] Newly created indexes failing to initialize"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "API requests to records (Pinecone Inference) endpoints failing with 404 error in GCP and Azure serverless environments"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] Increase in latency for all operations"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Metrics in console not populating for serverless indexes"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless] [AWS] [eu-west-1] Increased latency and 5xx errors on write path"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless] [AWS] [eu-west-1] Increased latency and 5xx errors on write path"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] Increased latency and errors on write and fetch operations"
Last update: This incident has been resolved.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] Increased freshness lag for some indexes"
Last update: This incident has been resolved.
Some customers may experience a freshness lag of a few minutes when upserting vectors before the new data can be queried. We are investigating the issue.
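When an index is in this state, one defensive pattern on the client side is to re-query until a just-upserted record becomes visible rather than assuming read-your-writes. A minimal sketch with the Python SDK; the index name, dimension, vector values, and five-minute deadline are assumptions:

```python
# Minimal freshness-lag sketch: upsert a record, then poll queries until
# it becomes visible. Index name, dimension, and deadline are assumptions.
import time

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("my-index")

probe = [0.1] * 768  # toy vector matching an assumed 768-dim index
index.upsert(vectors=[{"id": "doc-1", "values": probe}])

deadline = time.time() + 300  # allow a few minutes of freshness lag
while time.time() < deadline:
    res = index.query(vector=probe, top_k=1)
    if any(m.id == "doc-1" for m in res.matches):
        break  # the write is now queryable
    time.sleep(5)
```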
Report: "[Serverless][AWS][us-east-1] Increased latency and 500 errors for queries"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [europe-west4] 504 errors for read operations"
Last update: Since increasing the resources on the read path, we have observed no new errors. This incident has been resolved.
We have increased the resources for the read path and are monitoring the results.
Most of the errors are affecting a single project. We are taking steps to mitigate the problem.
We are investigating an increase in 504 errors for read operations for Serverless indexes in GCP europe-west4
Report: "[Serverless] [AWS] [us-east-1] Increased 5xx errors on write path"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
The issue resurfaced, and we are implementing a fix.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] [europe-west4] Increase in internal errors (5xx) for Queries with filters"
Last update: No errors have been observed for the last 15 minutes. The fix has resolved the issue.
The fix has been implemented and rolled out. We are now monitoring the error rate.
We have identified an issue with the Metadata service used for filtered queries. We are working on a fix.
We are seeing a sudden spike in 5xx errors for queries with filters in GCP us-central1 and europe-west4.
Report: "[Serverless] [AWS] [us-east-1] Increase in internal errors (5xx)"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless] [AWS] [us-east-1] Increased lag in freshness layer for some indexes"
Last update: This incident has been resolved.
The impact has been mitigated, and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless][GCP][us-central1][europe-west4] Operations are timing out or returning errors"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "[Serverless][AWS][us-east-1] Increase in internal errors (500) for write operations"
Last update: This incident has been resolved.
We are monitoring the cluster since deploying the fix.
We have identified an issue with the index builders and have deployed a fix.
We are currently investigating an increase in 500 errors for serverless indexes in AWS us-east-1.
Report: "[Serverless][AWS][us-east-1] Increase in internal errors (5xx) for write operations"
Last update: This incident has been resolved.
An issue has been identified, and we are working on fixing it.
We are currently seeing an increase in 5xx errors for writes (upserts, deletes, and updates) for Serverless indexes in AWS us-east-1.
Report: "[Serverless] [GCP] [us-central1] Seeing an increase in internal errors for queries"
Last update: The spike in internal errors was tied to a live deployment in this region, and the impact subsided after approximately 5 minutes.
Errors have halted, and we are investigating the root cause.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] Elevated error rates and latencies for some operations"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [us-central1] High upsert latency at the 90th percentile"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
As part of mitigation for the elevated upsert latency, we are performing a few minutes of maintenance to scale up database resources. We anticipate a drop in write path availability.
We are investigating an issue where one in a hundred upsert operations takes over a second. This only affects serverless indexes in GCP us-central1.
Report: "[Serverless] [GCP] [europe-west4] Newly-created indexes failing to initialize"
Last update: This incident has been resolved.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [europe-west4] Errors for some read and write path operations"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
We are currently investigating this issue.
Report: "[Serverless] [GCP] [europe-west4] 504 and gRPC 14 errors for upsert operations"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Multilingual e5 Large Query Availability"
Last update: From 10:37 to 10:54, multilingual-e5-large embeddings were unavailable. The incident is now resolved.
Report: "[Serverless] [AWS] [us-east-1] 5xx errors for queries including values and/or metadata, queries by ID, and fetch operations"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "[Serverless] [AWS] [us-west-2] Increase in 500 errors and latency for both read and write operations"
Last update: This incident has been resolved.
We are currently investigating this issue.
Report: "Control plane operations are failing for pod-based indexes to controller.pinecone.io"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
We are currently investigating this issue. Control plane operations will succeed when hitting api.pinecone.io.
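The workaround above amounts to pointing control plane calls at api.pinecone.io rather than the legacy controller.pinecone.io host. A minimal sketch using plain HTTP against Pinecone's public REST API; the pinned API version header value is an assumption:

```python
# Workaround sketch: list indexes via api.pinecone.io directly, using
# Pinecone's public REST API instead of the legacy controller host.
import os

import requests

resp = requests.get(
    "https://api.pinecone.io/indexes",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "X-Pinecone-API-Version": "2024-07",  # assumed pinned version
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```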
Report: "Serverless metrics in console not visible for time filters less than 12 hours"
Last update: This incident has been resolved.
We are currently investigating this issue.