Historical record of incidents for ShipHawk
Report: "Scheduled Maintenance"
Last update: Scheduled maintenance is currently in progress. We will provide updates as necessary.
We will be undergoing scheduled maintenance at this time on the ShipHawk instances listed below. This process may take up to 3 hours, with 15 minutes of downtime.
Report: "Pitney Bowes Expedited Delivery Services partial unavailability"
Last update: This incident has been resolved.
Pitney Bowes has notified us they are experiencing technical issues with PB Expedited Delivery Services. Customers who use Pitney Bowes may experience slowness or unavailability when retrieving Pitney Bowes shipping rates or printing labels. We will continue to monitor their progress as they resolve this issue. For additional information, see the Pitney Bowes status page: https://status.pitneybowes.com
Report: "Partial WMS Access Issue – Investigation Underway"
Last update: This incident has been resolved.
The affected WMS environment is now back online, and system access has been restored for impacted customers. We are currently monitoring the environment to ensure stability.
We are currently investigating an issue affecting one of our WMS environments. A subset of customers may be unable to access the system. Our team is actively working to identify the root cause and restore full access as soon as possible. We will continue to provide updates as we make progress. Thank you for your understanding and patience.
Report: "Rating API slowness"
Last update: This incident has been resolved.
The issue causing slowness in the rating API has been identified and resolved.
Some customers are experiencing slowness with the rating API. Our team is actively investigating the issue. We will provide updates as we learn more.
Report: "Pitney Bowes Expedited Delivery Services partial unavailability"
Last update: Resolved by Pitney Bowes.
Pitney Bowes has notified us they are experiencing technical issues with PB Expedited Delivery Services. Customers who use Pitney Bowes may experience slowness or unavailability when retrieving Pitney Bowes shipping rates or printing labels. We will continue to monitor their progress as they resolve this issue. For additional information, see the Pitney Bowes status page: https://status.pitneybowes.com
Report: "UPS Web Services outage"
Last update: This incident has been resolved.
UPS resolved the issue on their side; we will continue to monitor.
We are seeing degraded performance with UPS web services. We will continue to monitor as UPS works to resolve this. To see current response times from UPS, you can check https://downdetector.com/status/ups/ and https://www.shippingapimonitor.com/history.html?api=ups
Report: "USPS Endicia is experiencing issues with printing postage"
Last update: Resolved by Endicia.
USPS Endicia has noted that they have fixed the postage printing issue. See the USPS Endicia status page for more details: https://status.endicia.com/
Report: "FedEx Web Services outage"
Last update: Resolved by FedEx.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve this. To see current response times from FedEx, you can check https://www.shippingapimonitor.com/history.html?api=fedex
Report: "FedEx Web Services outage"
Last update: Resolved by the FedEx team.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve this. To see current response times from FedEx, you can check https://www.shippingapimonitor.com/history.html?api=fedex
Report: "UPS Web Services outage"
Last update: Resolved by UPS.
We are seeing degraded performance with UPS web services. We will continue to monitor as UPS works to resolve this. To see current response times from UPS, you can check https://downdetector.com/status/ups/ and https://www.shippingapimonitor.com/history.html?api=ups
Report: "Service disruption on instance"
Last update: This incident has been resolved.
The system is back to normal after restarting the Elasticsearch data nodes that had spiked in CPU usage. We are monitoring the situation and investigating the root cause.
We are currently experiencing a service disruption on the sh-default environment. Our DevOps team is working to identify the root cause and implement a solution. Further details will be provided shortly. Customer Impact: Some customers are reporting that they are unable to open the WebPortal.
Report: "FedEx Web Services outage"
Last update: Resolved on the FedEx side.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve this. To see current response times from FedEx, you can check https://www.shippingapimonitor.com/history.html?api=fedex
Report: "USPS - slower response times"
Last update: Resolved by USPS.
The incident is reported on the Endicia status page: https://status.endicia.com/
Our monitoring systems show that USPS APIs are responding slower than usual for some accounts. We will continue to monitor the situation.
Report: "Pitney Bowes Expedited Delivery Services partial unavailability"
Last update: This incident has been resolved.
We are continuing to investigate this issue.
Pitney Bowes has notified us they are experiencing degraded API performance. Customers who use Pitney Bowes may experience slowness or unavailability when retrieving Pitney Bowes shipping rates or printing labels. We will continue to monitor their progress as they resolve this issue. For additional information, see the Pitney Bowes status page: https://apistatus.pitneybowes.com/incidents/z9dm61vtz8kq
Report: "FedEx Web Services outage"
Last update: This incident has been resolved by FedEx.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve this. To see current response times from FedEx, you can check https://www.shippingapimonitor.com/history.html?api=fedex
Report: "Pitney Bowes Expedited Delivery Services slowness"
Last update: This incident has been resolved by Pitney Bowes.
Pitney Bowes has notified us they are experiencing degraded API performance. Customers who use Pitney Bowes may experience slowness in getting shipping rates or printing labels. We will continue to monitor their progress as they resolve this issue. For additional information, see the Pitney Bowes status page: https://apistatus.pitneybowes.com/
Report: "Degraded API Performance"
Last update: The incident is now fully resolved.
API services are now fully restored. The team is still investigating the root cause and is actively monitoring the system.
We are currently experiencing a partial API disruption. Our team is working to identify the root cause and implement a solution. Further details will be provided shortly.
Report: "NetSuite Service | US Ashburn"
Last update: According to NetSuite, all services in the US Ashburn data centers have been restored following the power interruption caused by a cooling system issue.
NetSuite notified us that NetSuite service is unavailable in the US Ashburn data centers as a result of a data center physical infrastructure issue. We will continue to monitor their progress as they resolve this issue. Customer impact: Customers with NetSuite accounts hosted in US Ashburn may experience delays in Orders and Shipments synchronization with NetSuite. Additional details can be found on the NetSuite status page: https://status.netsuite.com/ Start Time: 12:14pm Pacific Time
Report: "Some Customers cannot sync Item Fulfillments from NetSuite"
Last update: The NetSuite team confirmed that their engineers resolved the defect and deployed the change. Please let us know if you still experience any issues with Item Fulfillment synchronization.
The NetSuite team is still working on the fix on the NetSuite side. For customers still experiencing the issue, we recommend upgrading the ShipHawk bundle to version 2023.2.0.1, which includes the fix, for faster resolution. To get the latest information on the incident, you can subscribe to ShipHawk status page updates or contact NetSuite support for the status of the solution for defect #711212.
The issue has been identified and a fix is being implemented by NetSuite team.
The NetSuite team confirmed that there is a defect in the most recent NetSuite update. The defect details are: "Defect 711212 : Item Saved Search > Result subtab > Add Base Price as Group > Error: INVALID_RESULT_SUMMARY_FUNC". While the NetSuite team is working on the solution, ShipHawk Technical Support has a short-term solution for all affected customers. If your account is affected, please reach out to support@shiphawk.com to receive instructions or assistance with the fix.
We're working on a complete solution with NetSuite support. Meanwhile, the ShipHawk engineering team provided a short-term solution that can be applied to the NetSuite bundle immediately. If your account is affected, please contact the ShipHawk support team support@shiphawk.com to get the solution applied to your NetSuite account.
NetSuite users are facing issues with Item Fulfillment sync from NetSuite to ShipHawk. When NetSuite users save an Item Fulfillment record, the following error is shown: "The result field Base Price cannot be grouped. Please edit the search and omit this field or use a different summary function." This was first reported at 5:15 AM Pacific Time. We are currently investigating and have reached out to NetSuite support. No changes that could cause the issue were made by ShipHawk.
Report: "DHL eCommerce outage"
Last update: This incident has been resolved by DHL eCommerce. Customer impact: Some customers were unable to rate, print labels or track with DHL eCommerce. Start Time: 7:40am Pacific Time End Time: 8:30am Pacific Time
DHL eCommerce has notified us they are experiencing an API outage. We will continue to monitor their progress as they resolve this issue. Customer impact: Some customers may be unable to rate, print labels or track with DHL eCommerce. Start Time: 7:40am Pacific Time
Report: "DHL eCommerce outage"
Last update: This incident has been resolved by DHL eCommerce. Customer impact: Some customers were unable to rate, print labels or track with DHL eCommerce. Start Time: 1:24pm Pacific Time End Time: 2:14pm Pacific Time
DHL eCommerce has notified us they are experiencing an API outage. We will continue to monitor their progress as they resolve this issue. Customer impact: Some customers may be unable to rate, print labels or track with DHL eCommerce. Start Time: 1:24pm Pacific Time
Report: "Amazon AWS Internet Connectivity in US-EAST-2 region"
Last update: Customer impact: ShipHawk servers are up and available, and some customers are being impacted by internet connectivity with Amazon US-East-2 resources. Here is the latest update from Amazon: "1:06 PM PST Between 11:34 AM and 12:51 PM PST, customers experienced Internet connectivity issues for some networks to and from the US-EAST-2 Region. Connectivity between instances within the Region, in between Regions, and Direct Connect connectivity were not impacted by this issue. The issue has been resolved and connectivity has been fully restored." Start Time: 11:34am Pacific Time End Time: 1:06pm Pacific Time
Customer impact: ShipHawk servers are up and available, and some customers are being impacted by internet connectivity with Amazon US-East-2 resources. Here is the latest update from Amazon: "12:59 PM PST We are beginning to see signs of recovery, and continue to work toward full resolution." To follow updates from Amazon, you can subscribe here: https://status.aws.amazon.com/rss/internetconnectivity-us-east-2.rss We are actively monitoring this issue. Start Time: 12:26pm Pacific Time
Customer impact: ShipHawk servers are up and available, and some customers are being impacted by internet connectivity with Amazon US-East-2 resources. Here is the latest update from Amazon: "12:51 PM PST We can confirm an issue which is impacting Internet connectivity for the US-EAST-2 Region, and are attempting multiple parallel mitigation paths. Connectivity between instances within the US-EAST-2 Region, in-between AWS Regions, and Direct Connect traffic is not impacted by the event. Some customers may be experiencing VPN connectivity due to this issue." To follow updates from Amazon, you can subscribe here: https://status.aws.amazon.com/rss/internetconnectivity-us-east-2.rss We are actively monitoring this issue. Start Time: 12:26pm Pacific Time
Customer impact: ShipHawk servers are up and available, and some customers are being impacted by internet connectivity with Amazon US-East-2 resources. Amazon is now investigating the issue and has posted the issue publicly: https://health.aws.amazon.com/health/status The latest update is: "12:26 PM PST We are investigating an issue, which may be impacting Internet connectivity between some customer networks and the US-EAST-2 Region." To follow updates from Amazon, you can subscribe here: https://status.aws.amazon.com/rss/internetconnectivity-us-east-2.rss We are actively monitoring this issue. Start Time: 12:26pm Pacific Time
We are currently investigating an issue where some customers have reported being unable to log into ShipHawk. While AWS has not reported this yet, we are seeing degraded performance reported by: https://downdetector.com/status/aws-amazon-web-services/
Report: "Investigating slow proposed shipment generation."
Last update:
## Incident summary
Some ShipHawk NetSuite users experienced slowness in Item Fulfillment syncing between NetSuite and ShipHawk. The slowness was detected by the monitoring system at 9:28 AM Pacific Time, Monday 8/8, and continued until 12:51 PM Pacific Time.
## Impact
Because of internal configuration changes, proposed shipment generation for large orders with incomplete product information was done incorrectly and generated a huge number of packages. Processing those proposed shipments consumed too much memory on the background workers servicing that queue. That, in turn, made the workers unstable and delayed all other Item Fulfillments processed in that queue. As a result, NetSuite Item Fulfillments were synchronizing to ShipHawk with a delay of 3 to 52 minutes.
## Detection and Recovery
The incident was detected by the ShipHawk monitoring system when the synchronization delay reached 3 minutes. The initial response was to scale processing power. Adding additional resources did not help, as the new background job processors quickly became stuck for the same reason. The delay eventually increased and reached 52 minutes at its peak. At 12:30 PM we fixed the data of the products that were causing the issue and removed the incorrectly generated proposed shipments. That unblocked the system, and all the jobs waiting in the queue were processed within 21 minutes. The system returned to its normal state at 12:51 PM Pacific Time.
## Corrective actions
In order to prevent this type of issue in the future, we plan to accomplish the following:
1. Develop a time-limiting system for background job processors, so a few slow jobs don't block the entire queue.
2. Improve the UX to eliminate the ability to create product configurations that could cause unexpected behavior.
3. Add hard limits to specific actions of the system, in order to reduce the risk of resource-abusive processes.
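The first corrective action (a time budget for background job processors) can be sketched as follows. This is a minimal illustration, not ShipHawk's implementation: the job names, delays, and `TIME_BUDGET` value are invented for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def generate_proposed_shipment(order_id, delay):
    """Stand-in for a background job; `delay` simulates processing time."""
    time.sleep(delay)
    return f"shipment-for-{order_id}"

# Give each job a time budget so one pathological job cannot
# stall every other job waiting in the same queue.
TIME_BUDGET = 0.2  # seconds; a real queue would use a far larger value
jobs = [("order-1", 0.01), ("order-2", 1.0), ("order-3", 0.01)]

results, flagged = [], []
pool = ThreadPoolExecutor(max_workers=2)
for order_id, delay in jobs:
    future = pool.submit(generate_proposed_shipment, order_id, delay)
    try:
        results.append(future.result(timeout=TIME_BUDGET))
    except TimeoutError:
        flagged.append(order_id)  # set aside for retry/investigation
pool.shutdown(wait=True)

print(results)  # ['shipment-for-order-1', 'shipment-for-order-3']
print(flagged)  # ['order-2']
```

Note that `future.result(timeout=...)` only stops waiting; the runaway job keeps its worker busy until it finishes, so a production system would also need to terminate or sandbox the job, for example by running it in a separate process.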
This issue was resolved at 12:51 PM Pacific Time. Customer impact: Some customers have reported a delay when syncing orders from their ERP. Start time: 9:28 AM Pacific Time End time: 12:51 PM Pacific Time
A fix is in place and being rolled out. Processing times will improve over the next 10-15 minutes. Customer impact: Some customers have reported a delay when syncing orders from their ERP.
The issue has been identified and we are working to resolve it. We estimate this issue will be solved within the next hour. Customer impact: Some customers have reported a short delay when syncing orders from their ERP.
Our monitoring system has identified some slowness when generating proposed shipments. Some customers may see a minor delay in the time it takes for a proposed shipment to generate when an order syncs to ShipHawk from their ERP. We are actively investigating this issue.
Report: "AWS US-East 2a outage"
Last update:
## Incident summary
The ShipHawk API and Web Portal were not available between 9:57 AM and 11:15 AM Pacific Time, 7/28/2022. The incident was caused by an AWS outage in US-EAST-2.
## Detection
This incident was detected at 10:02 AM Pacific Time, when the internal alerting system diagnosed an outage. Some application servers, the primary database node, and search engine nodes were not accessible. After more investigation, we found that the disk volumes attached to the primary database were completely inaccessible. Eventually, we found that this was caused by a major outage in the AWS US-EAST-2a availability zone.
## Recovery
After it was confirmed at 10:30 AM that the issues were caused by the US-EAST-2a outage, the DevOps team initiated a switch to the database replica, which is located in a different AWS availability zone. That was finished at 11:09 AM, and it took an additional 10 minutes until all services fully recovered.
## Timeline
All times are Pacific Time.
09:57 AM - the system response time started growing
10:02 AM - internal notification systems signaled a primary database node outage
10:07 AM - the engineering team started the investigation
10:30 AM - the root cause was identified and the team started working on a recovery plan
11:09 AM - the database replica was promoted to the primary node
11:19 AM - the system had fully recovered
## Corrective actions
1. Increase the number of availability zones in order to minimize the effect of a potential AWS outage.
2. Reduce the time it takes to switch to redundant availability zones.
This incident is fully resolved. Customer impact: Customers were not able to use ShipHawk services. Start Time: 9:57am Pacific Time End Time: 11:25am Pacific Time
ShipHawk services are now back online. We will continue to monitor as services are restored. To follow updates from Amazon, please see: https://health.aws.amazon.com/health/status Customer impact: Customers are not able to use ShipHawk services. Start Time: 9:57am Pacific Time End Time: 11:25am Pacific Time
It appears that Amazon hosting (AWS) in US-East 2a is experiencing an outage. Our DevOps team is actively working to restore ShipHawk by switching to an AWS facility that is not impacted by this outage. We expect to restore services soon. To follow updates from Amazon, please see: https://health.aws.amazon.com/health/status Customer impact: Customers are not able to use ShipHawk services. Start Time: 9:57am Pacific Time
We are currently investigating this issue.
Report: "Investigating intermittent issue with logging in"
Last update: This incident has been resolved. A post mortem will be made available on this status page on Friday 24 June 2022. Customer impact: Some customers were unable to log into ShipHawk during this time. Start Time: 8:10am Pacific Time End Time: 9:15am Pacific Time
We are currently investigating an issue where some customers are reporting that they cannot log in.
Report: "Service disruption"
Last update:
## **Incident summary**
We determined the actual start to be 6:24 PM Pacific Time. The issue was reported by an affected customer at 8:02 PM Pacific Time and was resolved at 9:29 PM Pacific Time. During this incident, some customers were unable to ship.
## **Leadup**
As part of a routine database maintenance process, we planned a standard procedure for reclaiming unused disk space. The process started as planned but took more time than the estimate from our test-environment run. This eventually caused issues with the document generation processes. That, in turn, affected the ability to book new shipments, which relies heavily on new document generation.
## **Fault**
The process of reclaiming unused disk space took longer than expected, which eventually caused the documents table to be locked. Attempts to save new documents to the database failed because of this. Because document generation is part of the shipment booking process, attempts to book new shipments failed as well.
## **Impact**
Some ShipHawk users were not able to book new shipments from 6:24 PM to 9:29 PM Pacific Time. Some API requests related to document generation failed by timeout.
## **Detection**
The incident was first detected when reported by a customer at 8:02 PM Pacific Time.
## **Response & Recovery**
We responded to the incident with all possible urgency and ultimately made the necessary changes to unlock the tables and recover the service. The DevOps team analyzed the issue and, after considering multiple options, decided to terminate the database optimization process and manually release the table lock.
## **Timeline**
All times are in Pacific Time.
**Thursday, 10 June 2022**
5:30 PM - the standard database maintenance process started
6:24 PM - the tool designed for reclaiming unused disk space acquired a lock on the table
8:02 PM - a customer reported issues with BOL generation and shipment booking
8:06 PM - the support team began investigating the reported issue
8:15 PM - the ticket was passed to the engineering team, and the DevOps engineering team started investigating
8:30 PM - the root cause was identified
9:10 PM - the DevOps team identified a way to recover the service without data loss
9:29 PM - the service was restored
## **Root cause identification: The Five Whys**
1. Document generation and shipment booking failed by timeout.
2. Because the system was not able to save newly generated documents into the database.
3. Because the documents table was locked.
4. Because the process of reclaiming unused disk space took longer than expected.
5. Because one of the database tables was too big.
## **Root cause**
The existing procedure for reclaiming unused disk space does not work well for large database tables (>2 TB).
## **Lessons learned**
* The procedure for reclaiming unused disk space should be optimized for large tables.
* We need to improve monitoring for anomalies in shipping API usage, especially during routine database maintenance.
## **Corrective actions**
1. Optimize the procedure for reclaiming unused disk space for large database tables.
2. Begin monitoring anomalies in shipping API usage.
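The root cause here, a single long space-reclaiming operation locking a very large table, can be contrasted with reclaiming space in many small transactions. The sketch below uses SQLite purely because it is self-contained (ShipHawk's production database is not SQLite), and the table name, row counts, and batch sizes are invented for illustration.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "documents.db")
conn = sqlite3.connect(path)

# Incremental auto-vacuum must be enabled before the table is created.
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, body BLOB)")
conn.executemany(
    "INSERT INTO documents (body) VALUES (?)",
    [(b"x" * 4096,) for _ in range(2000)],
)
conn.commit()
size_before = os.path.getsize(path)

# Purge old rows and reclaim their pages in small batches, committing
# between batches so no single transaction holds a lock for long.
CUTOFF_ID, BATCH = 1800, 200
while True:
    cur = conn.execute(
        "DELETE FROM documents WHERE id IN "
        "(SELECT id FROM documents WHERE id <= ? LIMIT ?)",
        (CUTOFF_ID, BATCH),
    )
    conn.commit()
    if cur.rowcount == 0:
        break
    conn.execute("PRAGMA incremental_vacuum(100)")  # free up to 100 pages
    conn.commit()
conn.execute("PRAGMA incremental_vacuum")  # release any remaining free pages
conn.commit()

size_after = os.path.getsize(path)
print(size_after < size_before)  # the file actually shrank
```

The same batching idea applies to large PostgreSQL tables, where tools that rebuild tables without a long exclusive lock are typically preferred over a blocking full-table rewrite.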
This issue is now resolved. Users can book shipments as expected. A post mortem will be shared within the next 2-3 business days to summarize this incident, how it was resolved, and how we intend to mitigate such an event in the future. Customer Impact: Some customers were unable to ship. Start Time: 8:04 PM Pacific Time End Time: 9:29 PM Pacific Time
Our DevOps team has implemented a fix. Users should now be able to book shipments as expected. We are monitoring to ensure no further customer impact. Customer Impact: Some customers were unable to ship. Start Time: 8:04 PM Pacific Time End Time: 9:29 PM Pacific Time
We are currently experiencing a service disruption. Our DevOps team is working to identify the root cause and implement a solution. Further details will be provided shortly. Customer Impact: Some customers are reporting that they are unable to ship. We will send an additional update on or before 10:00pm Pacific Time.
Report: "Investigating issue with login"
Last update: This incident has been resolved.
You can now log in from the homepage or by navigating directly to the sign-in page: https://shiphawk.com/app/sign-in
Some customers have reported they are unable to access the application when using the "Sign In" button from the ShipHawk homepage: https://shiphawk.com We are currently investigating this issue. In the meantime, you can go to: https://shiphawk.com/app/sign-in to access your account. The ShipHawk Application and API are fully operational.
Report: "Investigating reported issue with booking shipments"
Last update: This issue is resolved. Customer Impact: Some customers are seeing the following error when attempting to book some of their shipments: “There was a problem booking your shipment: Couldn’t find Shipment with ‘id’= [WHERE “shipments”.“deleted_at” IS NULL AND “shipments”.“crm_instance_id” = $1]”. This error prevents them from booking the shipment. A workaround is possible and requires an administrative change by our team. If you experience this issue, contact our support team for assistance. Start Time: April 1, 7:12 AM PST End Time: April 1, 10:00 AM PST
Status update: A fix has been implemented and we are monitoring the results Customer Impact: Some customers are seeing the following error when attempting to book some of their shipments: “There was a problem booking your shipment: Couldn’t find Shipment with ‘id’= [WHERE “shipments”.“deleted_at” IS NULL AND “shipments”.“crm_instance_id” = $1]”. This error prevents them from booking the shipment. A workaround is possible and requires an administrative change by our team. If you experience this issue, contact our support team for assistance. Start Time: April 1, 7:12 AM PST
Status update: We have identified the issue and are working towards a resolution. Customer Impact: Some customers are seeing the following error when attempting to book some of their shipments: “There was a problem booking your shipment: Couldn’t find Shipment with ‘id’= [WHERE “shipments”.“deleted_at” IS NULL AND “shipments”.“crm_instance_id” = $1]”. This error prevents them from booking the shipment. A workaround is possible and requires an administrative change by our team. If you experience this issue, contact our support team for assistance. Start Time: April 1, 7:12 AM PST
Status update: We are investigating this issue. Customer Impact: Some customers are seeing the following error when attempting to book some of their shipments: “There was a problem booking your shipment: Couldn’t find Shipment with ‘id’= [WHERE “shipments”.“deleted_at” IS NULL AND “shipments”.“crm_instance_id” = $1]”. This error prevents them from booking the shipment. A workaround is possible and requires an administrative change by our team. If you experience this issue, contact our support team for assistance. Start Time: April 1, 7:12 AM PST
Report: "Investigating reported issue with warehouse filtering in the User Interface"
Last update: A fix has been deployed and confirmed to be working as expected. Customer Impact: Some Users with more than one warehouse assigned to them cannot see/filter orders for the warehouses they are assigned to. Start Time: March 10, 6:27 AM PST End Time: March 10, 8:53 AM PST
Status update: The issue and a solution have been identified, and we are deploying the fix now. Customer Impact: Some Users with more than one warehouse assigned to them cannot see/filter orders for the warehouses they are assigned to.
We are currently investigating an issue with warehouse filtering in the user interface.
Report: "Oracle NetSuite SuiteTalk Incident"
Last update: This incident has been resolved. For more details, see https://status.netsuite.com/incidents/sn1hlv7zhzz5
NetSuite has updated their status page to inform all customers that the incident has been resolved. We are continuing to monitor the status. "We have resolved the issue affecting SuiteTalk and SuiteAnalytics Connect in Multiple Regions. Customer Impact: Customers may have experienced a degradation of service when using SuiteTalk and SuiteAnalytics Connect. If you are still experiencing issues, please contact NetSuite Customer Support through your standard method. Start Time: February 17, 10:32 AM PST End Time: February 17, 10:57 AM PST"
Oracle NetSuite is experiencing an incident related to APIs which is directly impacting communication between ShipHawk and NetSuite. We are investigating the impact and will provide updates as we learn more. To stay up to date with this incident, see: https://status.netsuite.com/incidents/sn1hlv7zhzz5
Report: "FedEx Web Services - degraded performance"
Last update: This incident has been resolved.
FedEx web service performance has normalized. We will continue to monitor.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve. To see current response times from FedEx you can check: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "FedEx Web Services returns rates intermittently"
Last update: FedEx web services are operating normally. This incident is resolved.
We are seeing FedEx rate responses returning intermittently. Other carrier services, like UPS and USPS, are operational and performing as expected. We are investigating this issue.
Report: "Intermittent delay with Item Fulfillment synchronization"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results. Processing times continue to improve without delays in Item Fulfillment synchronization. We will continue to monitor and adjust as necessary to mitigate delays.
A fix has been put in place and we expect normalized processing times within 15-25 minutes.
We have identified intermittent delays in Item Fulfillment syncs from NetSuite and are actively investigating. Item Fulfillments are being received and processed intermittently. Our engineering team is making adjustments to mitigate any delays. We will send an update in 30 minutes.
Report: "Pages may be slow loading"
Last update## **Incident summary** Between 6:30am and 3:30pm PST, several customers experienced slowness of the application. ## **Leadup** In preparation for the peak season, we provisioned additional servers for anticipated volume. Our customers collectively experienced larger order, shipment and rate request volumes than we expected. Additionally, FedEx, UPS and other Carrier APIs experienced delayed response times to requests made by our system. The combination of these issues slowed down ShipHawk API response times for some customers. ## **Fault** With the load more than expected, API response time slowed down. Automated load balancer marked some of the slower servers as unhealthy which led to higher load on healthy servers and that slowed down the system even more. The engineering team made a decision to add more servers to help handle the extra load. The added resources did not help. Adding new resources for rating caused much higher use of database connections, which resulted in errors and did not help with performance degradation. ## **Impact** ShipHawk users experienced slowness of the service from 6:30 am PST till 3:30 pm PST. Some of the API requests were failing by timeout and syncing with external systems was delayed. A total of 9 urgent support cases were submitted to ShipHawk during the impact window. ## **Detection** It was first detected by monitoring systems at 6:30 am PST and then was reported by customers at 6:42 am PST ## **Response** Customers were notified about the slowness via our status page at 6:44am PST. We responded to the incident with all possible urgency and ultimately made the necessary changes to solve the problem while continuing to processing similar volumes to Black Friday and Cyber Monday through the end of the week. ## **Recovery** We needed to add more servers for processing extra API requests, but that created too many connections to the database. 
The solution was to implement a database connection pooling system that allowed us to optimize the database connections usage. Around 3:00 pm PST, the new connection pool system was activated and we were able to added more resources to process API requests and background jobs. That resolved the slowness at 3:30pm PST. To further mitigate the chances of another incident, we set up redundant connection poolers and provisioned more resources to production throughout the night. That proved effective during the next day \(Tuesday 11/30\), when ShipHawk was experience similar API load and response times remained stable throughout. ## **Timeline** All times in PST **Monday, 29 November** 6:30am - monitoring systems alerted average API response time increase and an increased number of "499 Client Closed Request" errors 6:32 am - engineering team started investigating the slowness 6:42 am - customers reported slowness of Item Fulfillments sync and overall application slowness 6:44 am - Status Page was updated with the details about the incident. 
* 7:30am - API load balancer reconfigured to prevent a cascade effect in which the load balancer removed slow instances from the pool, adding more load to healthy instances and making them slow/unhealthy too
* 8:00am - application servers reconfigured; more resources moved from backend services to API services to better match the type of load
* 9:00am - existing servers upgraded to more powerful EC2 instances; extra servers provisioned to handle the extra load
* 10:00am - monitoring systems detected errors related to extremely high use of database connections, which prevented us from provisioning more servers
* 11:00am - the decision was made to configure a new database connection pooling system to mitigate the database connections issue and allow provisioning more resources
* 3:00pm - a new database connection pooling system was installed and configured
* 3:30pm - confirmed that the incident was resolved

**Tuesday, 30 November**

* 12:00am - 4:30am - additional application and background processing servers added for redundancy

## **Root cause identification: The Five Whys**

1. The application had degraded performance because of added load on the API and slow carrier response times.
2. The system did not automatically absorb the added load because database connections were exhausted.
3. Because we pushed extra resources and did not expect this to cause an issue with database connections.
4. Because we did not have load tests that would have identified this.
5. Because we had not previously felt this kind of testing was necessary until we reached this level of scale.

## **Root cause**

Suboptimal use of database connections led to issues with application scaling. The team did not have an immediate solution because the issue had not been replicated in testing.

## **Lessons learned**

* We need more application load testing in place.
* Carrier API response slowness can cause slowness for the application.
* Customers with high API usage volatility should be isolated from other multi-tenant users.

## **Corrective actions**

1. Introduce new load testing processes.
2. Implement a better automated scaling system for peak load periods.
3. Prioritize solutions to mitigate application slowness caused by carrier response time delays.
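The connection pooling fix described in the recovery section can be sketched in a few lines. This is a hypothetical illustration only: the report does not name the pooling system ShipHawk deployed (server-side poolers such as PgBouncer apply the same idea between the application tier and the database). The point is that a pool caps the number of open connections and queues requests instead of opening new ones.

```javascript
// Minimal sketch of client-side database connection pooling (hypothetical,
// not ShipHawk's actual implementation).
class ConnectionPool {
  constructor(createConn, maxSize) {
    this.createConn = createConn; // factory that opens a real DB connection
    this.maxSize = maxSize;       // hard cap on open connections
    this.idle = [];               // released connections ready for reuse
    this.total = 0;               // connections opened so far
    this.waiters = [];            // requests queued while the pool is busy
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();
    if (this.total < this.maxSize) {
      this.total += 1;
      return this.createConn();
    }
    // Pool exhausted: wait for a release instead of opening a new connection.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(conn) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn); // hand off directly to a queued request
    else this.idle.push(conn);
  }
}
```

With a cap like this, provisioning additional API servers no longer multiplies database connections without bound, which is why the pooler unblocked further scaling during the incident.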
This incident has been resolved. In an effort to help during this heightened holiday processing, we will provide extended support hours from 3:00 AM to 9:00 PM Pacific Time via normal support channels through Friday 12/3/21 for all customers.
Our engineering team is deploying additional changes to address page slowness. We are seeing significant improvement with site and API responsiveness with these changes, and we will continue to closely monitor system performance.
We continue to experience exponentially larger volumes than anticipated, despite significant over-provisioning of system resources in preparation for Black Friday/Cyber Monday. As a result, some customers are experiencing slower than normal performance. ShipHawk engineering will continue to make incremental improvements throughout the day and will inform you as changes are made.
The deployed changes are now in effect across the system. Overall site and API performance continues to improve. ShipHawk Engineering will continue to tune and monitor performance.
ShipHawk Engineering is deploying changes to address system performance. We expect those changes to have a positive impact on site and API responsiveness over the next 15-25 minutes, and we will continue to monitor system performance.
Some clients have reported they are still seeing slow response times. Our engineering team is investigating further for a complete resolution. We will update you as soon as we know more information.
Our engineering team was able to improve the responsiveness of ShipHawk's WebPortal and API, and error messages have subsided. We will continue to monitor the issue throughout the day to confirm the resolution of this issue.
There are no new updates at this time. Engineering is continuing to resolve this issue. We will update you as soon as we have more information.
The site is currently experiencing a higher than normal amount of load, which may be causing pages to be slow or unresponsive. We're investigating the cause and will provide an update as soon as possible. Our engineering team is working on a solution. The next update will be within 30 minutes.
Report: "Some Customers cannot sync Sales Orders / Item Fulfillments from NetSuite"
Last update## **Incident summary**

ShipHawk NetSuite SuiteApp users ran into an issue with item fulfillment and order syncing between NetSuite and ShipHawk. When NetSuite users saved an Item Shipment record, the following error was shown: `TypeError: ItemFulfillment.find is not a function` This was first reported at 9:12 pm PST on Wednesday 11/17 and affected all customers using ShipHawk bundle versions >=2021.6.0. The issue was caused by a change NetSuite made to the processing of NApiVersion 2.1 scripts, which ShipHawk bundles 2021.6.0+ use, and the incident lasted until NetSuite reverted the change at 11:30 am PST on Sunday 11/21.

## **Leadup**

On 11/17, NetSuite changed, without notice, how they process scripts with NApiVersion 2.1 in order to fix a known and unrelated defect (NetSuite defect #647251). When this happened, ShipHawk SuiteApp bundles 2021.6.x and higher could no longer sync orders or item fulfillments between NetSuite and ShipHawk.

## **Fault**

The ShipHawk bundle could not load its dependencies correctly; therefore, it was not able to call the static functions it requires, and the code raised an exception: `TypeError: ItemFulfillment.find is not a function [at Object.afterSubmit (/SuiteBundles/Bundle 161164/ShipHawk (2)/event_scripts/shiphawk-update-fulfillment-event-script.js:55:35)]`

## **Impact**

Orders and Item Fulfillments could not sync between NetSuite and ShipHawk. This incident affected all NetSuite customers using ShipHawk bundles 2021.6.x and 2021.7.x. A total of 12 urgent cases were submitted to ShipHawk during the impact window.

## **Detection**

The incident was first reported at 9:12 pm PST on Wednesday 11/17. More reports were submitted starting at 4:21 am PST on Thursday 11/18.

## **Response**

During this incident, ShipHawk customer success and engineering teams worked around the clock to keep impacted customers informed, identify the root cause, and search for workarounds.
ShipHawk and NetSuite engineering resources worked to identify the issues and work toward a resolution. NetSuite discovered two defects (defects #651122 and #651305), which they ultimately resolved. ShipHawk identified both near-term and long-term options to mitigate this in the future, both of which would have materially delayed resolution. As such, ShipHawk Engineering decided the best path was to collaborate with NetSuite as they reverted the changes introduced on 11/17, because this was determined to be the fastest way to get joint customers operational.

## **Recovery**

Case #4491650 was submitted to NetSuite Support, and as a result, NetSuite created two defects that were escalated to U2 Critical priority:

**Defect 651122** `SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new'`

**Defect 651305** `SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new'`

NetSuite ultimately reverted their changes to the processing of NApiVersion=2.1 scripts. After clearing cached files, impacted customers were able to sync orders and item fulfillments between ShipHawk and NetSuite.

## **Timeline**

All times are PST.
**Wednesday 11/17**

* 21:00 - NetSuite introduced a change to the NApiVersion scripts processor in order to fix defect #647251
* 21:12 - the issue that Orders and Item Fulfillments were not syncing was first reported by ShipHawk customers

**Thursday 11/18**

* 4:21 - multiple customers started reporting the same issue
* 5:00 - the issue was verified by the ShipHawk CS team and passed to the Engineering team
* 6:21 - the incident notification was posted to the ShipHawk Status Page
* 6:23 - ShipHawk Engineering identified that the issue was happening only for customers using the latest bundle versions, and that it was related to changes in how NetSuite processes NApiVersion=2.x scripts
* 6:23 - case #4491650 was submitted to NetSuite Support
* 10:23 - the NetSuite team notified us that they had reverted the changes, but some of them were still stuck in the server cache – there was a chance the issue might self-resolve if the partner's cache was flushed
* 13:02 - the ShipHawk Engineering team prepared a new bundle version, 2021.7.1, intended to reset cached files
* 13:30 - bundle 2021.7.1 was successfully tested and then pushed to some customer accounts, where the fix was confirmed
* 20:40 - the same issue was reported again

**Friday 11/19**

* 7:58 - the NetSuite team notified us about critical defect 651122: `SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new'`
* 15:17 - Defect 651122 was reported as fixed and deployed to all servers
* 22:30 - the ShipHawk team verified that the fix did not work, even after a cache refresh
* 22:31 - NetSuite Support Case #4491650 was re-opened
* 15:23 - a new critical defect, 651305, was created in NetSuite: `SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new'`

**Saturday 11/20**

* 23:56 - NetSuite pushed fixes to ShipHawk testing accounts

**Sunday 11/21**

* 10:30 - the NetSuite team confirmed that the fix was pushed to all accounts
* 11:02 - ShipHawk prepared the
new bundle version 2021.7.2, intended to reset cached files
* 11:09 - the ShipHawk team verified the fix was working, and the CS team helped customers install the new bundle

## **Root cause identification:**

1. Customers were not able to sync orders and Item Fulfillments from NetSuite to ShipHawk
2. Because NetSuite changed how they process scripts with NApiVersion 2.1
3. Because the change was deployed by NetSuite without notice, leaving no opportunity to find and resolve such issues before customers were impacted
4. Because some ShipHawk scripts with Public scope return classes when they should use Same Account scope instead
5. Because ShipHawk does not have an alternative order and/or item fulfillment sync process for customers using the SuiteApp

## **Root cause**

The instability we saw in customer accounts was introduced because some of our SuiteApp scripts with Public scope return classes. After the incident was resolved, NetSuite advised us that this is only supported for Same Account scope. Had we used this alternative scope, it may have mitigated the issue.

## **Lessons learned**

* NetSuite may deploy changes to SuiteApp Developer tools without notice
* NetSuite recommends we change the scope of scripts in the bundle from Public to Same Account
* ShipHawk needs to investigate alternative integration methods
* ShipHawk needs to explore manual workarounds in the event an integration encounters an unplanned breaking change
* ShipHawk needs to explore alternative syncing strategies to further mitigate risk

## **Corrective actions**

* We will prioritize effort to change the scope of scripts in the bundle from Public to Same Account per NetSuite's recommendation
* We will explore redundant and alternative syncing strategies to reduce reliance on changes made by integration partners
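The failure mode in the root cause, a class exported by a script losing its static members when the script processor re-exposes it, can be illustrated in plain JavaScript. This is a hypothetical sketch, not ShipHawk's actual SuiteApp code, and only one plausible mechanism: class statics are non-enumerable, so any loader step that copies just enumerable own properties silently drops them, producing exactly a `...find is not a function` TypeError.

```javascript
// Hypothetical sketch (not ShipHawk's actual SuiteApp code): a module
// exports a class whose static method event scripts call.
class ItemFulfillment {
  static find(id) {
    return { id, found: true };
  }
}

// A processor that re-exposes the export by copying its own *enumerable*
// properties drops class statics, which are non-enumerable:
const reexposed = Object.assign(function ItemFulfillment() {}, ItemFulfillment);

// Direct use works; the re-exposed copy raises the error from the incident.
function callFind(cls) {
  try {
    cls.find(42);
    return "ok";
  } catch (e) {
    return e.constructor.name + ": " + e.message;
  }
}
```

This is why restricting such exports to Same Account scope (per NetSuite's post-incident advice) matters: it avoids the cross-account re-exposure path where the export can be transformed.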
This incident is resolved. If you were impacted by this issue, update to the latest ShipHawk bundle version, 2021.7.2, to refresh cached files in NetSuite. Once you have done so, you can update the saved search "Recent Un-synced Orders" to import any un-synced orders from NetSuite into ShipHawk. To do this in NetSuite:

1. Enter "Recent Un-synced Orders" in the Search bar at the top of the page.
2. Click on the search result returned.
3. Click "Edit this search".
4. In the Criteria tab, click on the row labeled "Date Created is after yesterday".
5. Click the icon that appears next to "Date Created" to open a new window.
6. Update the criteria so that the Date Created criteria will look for orders back to the first date you were impacted.
7. Click "Save" to update the search.
8. Wait 15-25 minutes for new orders to be imported.
9. Confirm missing orders are imported into ShipHawk.

Please contact support@shiphawk.com if you have additional questions or concerns. A post-mortem will be provided and accessible on this status page within the next 3-5 business days.
NetSuite has released their fix for Defect 651305: SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new', which is impacting ShipHawk and other SDN Developers. In order to renew the cache, we deployed a new version of the ShipHawk bundle, 2021.7.2. We recommend updating the ShipHawk bundle to the latest version, 2021.7.2, and expanding the saved search criteria so that un-synced orders (if any) will be imported into ShipHawk from NetSuite.
There are no new updates at this time. Our engineering team continues to investigate, monitor, and work toward a workaround that is not dependent on NetSuite's fix. As previously noted, NetSuite has identified another defect which they have also deemed U2, a critical defect: Defect 651305: SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new', which is impacting ShipHawk and other SDN Developers. NetSuite is working diligently to address this critical issue. We will continue to share information until this is fully resolved.
We continue to investigate, monitor, and work toward a workaround that is not dependent on NetSuite. As previously noted, NetSuite has identified another defect which they have also deemed U2, a critical defect: Defect 651305: SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new', which is impacting ShipHawk and other SDN Developers. NetSuite is working diligently to address this critical issue. We will continue to share information until this is fully resolved.
We continue to triage this escalation with NetSuite. NetSuite has identified another new defect, 651305: SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new', which is impacting ShipHawk and other SDN Developers. The urgency level of this defect has also been escalated to U2, identified as a critical defect. NetSuite's management and engineering teams are currently investigating and working on a resolution, targeted within 1-2 days. ShipHawk's engineering team deems this urgent and our highest priority. We continue to work closely with NetSuite on a resolution and will continue to update you as we know more.
We continue to monitor the issue and are working diligently with NetSuite technical support to resolve this issue as quickly as possible. We will continue to provide updates until this issue is fully resolved.
We continue to triage this escalation with NetSuite. NetSuite has acknowledged Defect 651122: SuiteScript > RESTLet Script > TypeError: Class constructor CounterEntry cannot be invoked without 'new', which is impacting ShipHawk and other SDN Developers. NetSuite has escalated the issue to "U2" and understands this is a critical defect; with U2 priority, NetSuite's target resolution period is 1-2 days. That said, the resources we are working with are telling us they are confident it will be resolved today. Within the next week, we will provide you with a full post-mortem, including our plans to further mitigate unplanned changes made by NetSuite and/or dependencies between ShipHawk and NetSuite. If you are experiencing this issue, you can reach out to NetSuite directly and reference NS support case #4491650, Defect 651122.
We are continuing to investigate this issue. We have escalated to NetSuite and are working with them for resolution. Our next update will be within 60 minutes.
We are continuing to investigate this issue. We have escalated to NetSuite and are working with them for resolution. Our next update will be within 30 minutes.
NetSuite is experiencing an issue impacting multiple partners and current NetSuite users. We are working with their team on an immediate resolution. Our next update will be within 30 minutes.
NetSuite has rolled back a change they made last night that caused issues for SuiteApp Developers who use NApiVersion 2.x. According to NetSuite technical support, their changes are reverted now, but some files in NetSuite may still be cached. If you are still experiencing issues, it may be due to file caching. We will continue to discuss this issue with NetSuite to learn what mitigations can be put in place to avoid an issue like this in the future and will provide our findings in a post-mortem.
At this time, our engineering team is still investigating this issue.
NetSuite users are facing issues with item fulfillment and order sync in ShipHawk. When NetSuite users save an Item Shipment record, the following error is shown: `TypeError: ItemFulfillment.find is not a function [at Object.afterSubmit (/SuiteBundles/Bundle 161164/ShipHawk (2)/event_scripts/shiphawk-update-fulfillment-event-script.js:55:35)]` This was first reported at 9:12 PM PST. We are currently investigating and have reached out to NetSuite support. No changes that could cause the issue were made by ShipHawk.
Report: "FedEx Web Services - degraded performance"
Last updateFedEx web services is now showing steady uptime. This issue is now resolved.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve. To see current response times from FedEx you can check: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "FedEx web services - degraded performance"
Last updateFedEx is reporting stable uptime, and we now consider this resolved. See outage details here: https://www.shippingapimonitor.com/history.html?api=fedex
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve. To see current response times from FedEx you can check: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "Trouble logging in"
Last update## **Incident summary**

During an internal process that archives data, we noticed disk usage beginning to increase and decided to upgrade the volume proactively. Due to internal AWS optimization processes, the upgrade created slowness in the system, which later led to the incident. We promoted a replica database, and service was restored at 11:45am PST.

## **Leadup**

* 9:30am PST - we started an internal process that archives data
* 10:30am PST - internal monitoring systems alerted on rapidly increasing disk usage
* 10:35am PST - the volume attached to the database servers was upgraded

This change resulted in degraded database performance.

## **Fault**

Due to internal AWS optimization processes, the volume upgrade created slowness in the system, which led to the incident starting at 10:42am PST.

## **Impact**

Customers hosted on shared instances were not able to use the system from 10:42am PST to 11:45am PST. Affected services:

* Web Portal
* Workstations
* ShipHawk API

## **Detection**

The incident was detected by the automated monitoring system and was reported by multiple customers.

## **Response**

After receiving the alerts from the monitoring system, the engineering team connected with ShipHawk Customer Success and described the level of impact. The incident notification was posted to [https://status.shiphawk.com/](https://status.shiphawk.com/)

## **Recovery**

Three steps were performed to recover the service:

* the primary database node was disabled
* the database replica was promoted to primary
* the OLD primary node hostname was pointed to the NEW primary node by updating DNS records

## **Timeline**

All times are in PST.
**10/15/2021:**

* 10:00am - an internal process that archives data started
* 10:30am - internal monitoring systems alerted on rapidly increasing disk usage
* 10:35am - the volume attached to the primary database node was upgraded
* 10:42am - database performance degraded
* 10:43am - the monitoring system alerted on multiple errors and API unresponsiveness
* 10:50am - the engineering team began an investigation of the incident
* 11:20am - the root cause was understood and the team created an action plan
* 11:30am - the primary node was disabled and the replica was promoted to primary
* 11:40am - the OLD primary node hostname was pointed to the NEW primary node by updating DNS records
* **11:45am - the service was fully restored**
* 1:30pm - a new database replica was created and the sync process started

**10/16/2021:**

* 2:30pm - the new database replica sync process finished

## **Root cause identification: The Five Whys**

1. The application had an outage because database performance degraded
2. Database performance degraded because the volume attached to the primary database node was upgraded
3. The volume was upgraded because disk usage increased rapidly
4. Because we ran data archiving processes that used more disk than expected
5. Because the data archiving process was tested in an environment with a different primary/replica database configuration, and the problem was not identified during tests

## **Root cause**

The difference in configuration between the test and production systems allowed an inefficiency in the data archiving process to go undetected.

## **Lessons learned**

* The test environment requires configuration changes to more closely resemble production
* The data archiving process should start more slowly
* The internal process to promote replica databases to primary needs to be faster
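The three recovery steps can be sketched as one failover routine. This is a hedged illustration over mock in-memory objects: the real procedure involved database tooling and DNS record updates, and every function and field name below is illustrative, not ShipHawk's actual tooling. Repointing the old primary's hostname (step 3) is what lets applications reconnect without any configuration change.

```javascript
// Hedged sketch of the recovery: disable the primary, promote the replica,
// and repoint the old primary's hostname. Mock objects stand in for real
// database nodes and DNS records.
function failover(cluster, dns) {
  cluster.primary.enabled = false;   // step 1: disable the degraded primary
  cluster.replica.role = "primary";  // step 2: promote the replica
  // step 3: point the OLD primary hostname at the NEW primary via DNS,
  // so clients reconnect to the promoted node transparently.
  dns.records[cluster.primary.hostname] = cluster.replica.hostname;
  return cluster.replica;
}

// Example state mirroring the incident's two-node setup (names hypothetical).
const cluster = {
  primary: { hostname: "db-primary.internal", enabled: true, role: "primary" },
  replica: { hostname: "db-replica.internal", enabled: true, role: "replica" },
};
const dns = { records: { "db-primary.internal": "db-primary.internal" } };
const newPrimary = failover(cluster, dns);
```

The design choice worth noting is step 3: updating DNS instead of application configs keeps the failover to a single, centrally applied change, at the cost of DNS TTL delays.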
This incident is resolved. We’re sorry this prevented your team from fulfilling orders during this outage. Understanding this urgency, we made every possible effort to solve this as quickly as possible. The incident started at 10:42am and was resolved before 11:45am Pacific Time. A post-mortem will be provided and accessible on this status page within the next 3-5 business days. Please contact support@shiphawk.com if you have additional questions or concerns.
A fix has been implemented and we are monitoring the results. Customers can now log in. Monitoring will continue throughout the day. The next update, to finalize/close this incident, will be provided within the next few hours.
We are continuing to investigate this issue.
Some users may be experiencing trouble when logging in to ShipHawk. Our Engineering team is currently investigating issues related to login. We will send an additional update at 11:45am Pacific Time.
Report: "FedEx web services - degraded performance"
Last updateThis incident has been resolved.
We are seeing degraded performance with FedEx web services. We will continue to monitor as FedEx works to resolve. To see current response times from FedEx you can check: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "Degraded performance with PrintNode"
Last updatePrintNode services are all operating normally. See https://www.printnode.com/en/status for details.
PrintNode is currently experiencing issues. Some regions may be experiencing degraded performance when attempting to print. See https://www.printnode.com/en/status for details.
Report: "FedEx web services - degraded performance"
Last updateFedEx web services are operating at normal response times.
FedEx web services appear to be operational. We will continue to monitor the situation. To monitor yourself, go to: https://www.shippingapimonitor.com/history.html?api=fedex
We are seeing a drop in performance with FedEx web services. If you need to ship immediately and have an alternative parcel provider, you can add a Shipping Policy to force packages to go out with that alternative carrier. Contact support if you would like help setting up a Shipping Policy to force an alternate carrier selection in the interim. To monitor FedEx on your own, you can go to https://www.shippingapimonitor.com/history.html?api=fedex
Report: "FedEx web services - degraded performance"
Last updateFedEx web services appear to be operational. To monitor on your own, you can see FedEx web service performance at: https://www.shippingapimonitor.com/history.html?api=fedex
We are monitoring FedEx web service performance.
We are seeing degraded performance with FedEx web services and are monitoring the situation. To monitor, you can see FedEx web service performance at: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "Investigating reported issue with Amazon prime shipping."
Last updateRating and booking with Amazon Buy Shipping is now operational.
A case has been opened with Amazon and is pending feedback. We will provide an update as soon as Amazon provides details to resolve this issue.
We are investigating the issue.
Report: "Degraded response times with FedEx Web Services"
Last updateFedEx Web Services are operational.
FedEx's Web Service response times appear to be returning to normal. We are continuing to monitor performance.
We are continuing to investigate this issue.
We are currently investigating this issue. It appears FedEx Web Services response times are performing slower than usual. For more information: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "Investigating potential service interruption"
Last updateThe service interruption lasted from approximately 12:21pm to 12:35pm Pacific. An issue was discovered during a minor production deployment, which was reverted. We apologize for any inconvenience and will investigate options to prevent similar incidents from occurring.
We are currently investigating reports of a potential service interruption with access to the ShipHawk Dashboard. We apologize for any inconvenience and will post another update as soon as we learn more.
Report: "Difficulties connecting to PrintNode"
Last updateAll services operating normally.
20:18:34 UTC: Potential resolution. Our data centre made routing changes to avoid traffic from an upstream Telia device. As of 19:17 UTC you should see improvements, please contact us if you are still encountering issues.
PrintNode has notified us that some customers may be experiencing issues when attempting to print directly to a networked printer. We will continue to monitor the situation. PrintNode status can be monitored here: https://www.printnode.com/en/status Mon 2020-12-21 14:57:20 UTC: Internet connection issues affecting some customers. Some customers are reporting difficulties connecting to PrintNode. PrintNode systems are working normally; we suspect this is a regional internet outage and are monitoring the situation.
Report: "Slow response times"
Last updateAlerting notified us of an issue early morning PST. The issue was resolved with minimal impact.
Report: "Slow response times"
Last updateAn issue was identified that intermittently created sluggish response times. While all systems remained green, the issue consumed workers, which ultimately caused some operations to time out. These could be retried, but even so, some of those may have timed out again. Once the issue was identified, we addressed it and introduced new alerting to mitigate such issues in the future.
Report: "UPS web services - degraded performance"
Last updateThis incident has been resolved.
We are seeing degraded performance with response times from UPS web services. For more on UPS web service response times: https://www.shippingapimonitor.com/history.html?api=ups
Report: "FedEx web services - degraded performance"
Last updateWe are seeing short periods of delayed responses with FedEx web services. You can see these here: https://www.shippingapimonitor.com/history.html?api=fedex
Report: "USPS Endicia is experiencing an issue with printing postage"
Last updateEndicia has noted that they have fixed the postage printing issue. See Endicia for more details: https://status.endicia.com/incidents/krl4sbw029h0
The issue has been identified and we are monitoring Endicia's service.
The issue identified with printing USPS labels via Endicia is being monitored.
You may subscribe to Endicia's updates here: https://status.endicia.com/incidents/krl4sbw029h0