Historical record of incidents for Jitterbit
Report: "EDI degradation"
Last update: We are currently investigating slow EDI document processing in NA eiCloud. We will update when we have more information.
Report: "EMEA: Production Cloud Agent Group Upgrade to 11.44"
Last update: This maintenance includes an upgrade of the Production Cloud Agent Group to version 11.44. There is no planned downtime. 11.44 Windows and Linux Private Agent installers will be available at the conclusion of this maintenance window. The 11.44 Docker Private Agent will be available as listed in the release notes, published prior to the release. For questions, contact support.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Report: "APAC: Production Cloud Agent Group Upgrade to 11.44"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This maintenance includes an upgrade of the Production Cloud Agent Group to version 11.44. There is no planned downtime. 11.44 Windows and Linux Private Agent installers will be available at the conclusion of this maintenance window. The 11.44 Docker Private Agent will be available as listed in the release notes, published prior to the release. For questions, contact support.
Report: "LATAM: Jitterbit Wevo iPaaS Release 11.44"
Last update: The scheduled maintenance has been completed.
Verification is currently underway for the maintenance items.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This maintenance includes the 11.44 Wevo iPaaS release. There is no planned downtime. 11.44 release notes will be published prior to the release. For questions, contact support.
Report: "NA: Jitterbit Harmony Release 11.44 and SCAG upgrade to 11.44"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This maintenance includes the 11.44 Jitterbit Harmony release and an upgrade of the Sandbox Cloud Agent Group (SCAG) to version 11.44. There is no planned downtime. 11.44 release notes will be published prior to the release. For questions, contact support.
Report: "EMEA: Jitterbit Harmony Release 11.44 and SCAG upgrade to 11.44"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This maintenance includes the 11.44 Jitterbit Harmony release and an upgrade of the Sandbox Cloud Agent Group (SCAG) to version 11.44. There is no planned downtime. 11.44 release notes will be published prior to the release. For questions, contact support.
Report: "APAC: Jitterbit Harmony Release 11.44 and SCAG upgrade to 11.44"
Last update: The scheduled maintenance has been completed.
Scheduled maintenance is currently in progress. We will provide updates as necessary.
This maintenance includes the 11.44 Jitterbit Harmony release and an upgrade of the Sandbox Cloud Agent Group (SCAG) to version 11.44. There is no planned downtime. 11.44 release notes will be published prior to the release. For questions, contact support.
Report: "Wevo iPaaS is facing slowness during platform navigation and flow's execution"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing an issue causing Wevo iPaaS navigation to be slow, which is also affecting some flow executions.
Report: "Harmony iPaaS has high response time"
Last update: **Issue Summary:** Users experienced slowness across the platform, leading to performance degradation and latency in services. **Root Cause:** The slowness was traced to excessive resource consumption within the data layer of the platform. This was caused by a surge in typical queries, which placed a heavy load on backend data processing systems. **Impact:** Noticeable performance lag for end users interacting with the frontend. **Resolution:** The issue has been identified and isolated. A fix addressing the data layer's handling of such high-volume queries is currently being implemented and is scheduled for deployment in an upcoming platform release.
We are investigating high response times in North America Harmony iPaaS
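The RCA above attributes the degradation to a surge of queries overloading the data layer. Purely as an illustration of the kind of fix it describes (better handling of high-volume queries), here is a minimal Python sketch of short-TTL caching with request coalescing in front of a slow backend; the names, TTL, and structure are hypothetical, not Jitterbit internals.

```python
import threading
import time

_cache = {}       # query -> (expires_at, result)
_inflight = {}    # query -> Event; collapses concurrent duplicate queries
_lock = threading.Lock()
TTL_SECONDS = 30  # illustrative cache lifetime

def run_query(query, backend):
    """Serve repeated identical queries from a short-lived cache and let
    only one thread per query hit the backend during a burst."""
    now = time.monotonic()
    with _lock:
        hit = _cache.get(query)
        if hit and hit[0] > now:
            return hit[1]                      # fresh cached result
        waiter = _inflight.get(query)
        if waiter is None:                     # this thread becomes the leader
            waiter = threading.Event()
            _inflight[query] = waiter
            leader = True
        else:
            leader = False
    if not leader:
        waiter.wait()                          # another thread is already querying
        hit = _cache.get(query)
        return hit[1] if hit else backend(query)  # fall back if the leader failed
    try:
        result = backend(query)                # one backend call for the whole burst
        with _lock:
            _cache[query] = (now + TTL_SECONDS, result)
        return result
    finally:
        with _lock:
            _inflight.pop(query, None)
        waiter.set()                           # release any waiting threads
```

A production mitigation would also need invalidation and size bounds; the point is only that duplicate high-volume queries can be absorbed before they reach the data layer.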
Report: "[Wevo iPaaS Latam] - Partial unavailability in flow's execution"
Last update: This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "EMEA Harmony (West): Degraded Performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently experiencing degraded performance in our EMEA Harmony (West) zone. Please check back for updates.
Report: "EMEA Harmony (West): Degraded Performance"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
We are currently experiencing degraded performance in our EMEA Harmony (West) zone. Please check back for updates.
Report: "Wevo iPaaS is facing slowness during platform navigation and flow's execution"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing an issue causing Wevo iPaaS portal navigation to be slow, which is also affecting flow execution.
Report: "Slowness processing Wevo iPaaS logs"
Last update: This incident has been resolved. All new flow execution logs are being reflected in the Wevo iPaaS portal as the integrations are executed. We expect to process the entire backlog within the next 72 hours.
The Wevo iPaaS Cloud Database maintenance has finished, and all new flow execution logs will be reflected in the Wevo iPaaS portal after the flow execution is finished. We are currently processing the delayed log backlog from the past few days and monitoring the health of the environment.
The Wevo iPaaS Cloud Database maintenance is still in progress. Going forward, all new flow execution logs will be reflected in the Wevo iPaaS portal after the integration execution is finished. Please note that older execution logs may still take time to appear, since they are enqueued for processing as a backlog.
We are continuing to work on a fix for this issue. Maintenance has been started on the Wevo iPaaS Cloud Database. There will be no data loss during this process and no impact on flow execution.
We have identified a new delay in processing flow execution logs, and the team is already working to normalize the situation. Please note that flow execution logs will be delayed until the entire enqueued backlog is completely processed. Flow execution has not been impacted and is still running, even though it is not yet reflected in the Wevo iPaaS portal.
A fix has been implemented and we are monitoring the results. Going forward, all new flow execution logs will be reflected in the Wevo iPaaS portal after the integration execution is finished. Please note that older execution logs may still take time to appear.
The issue has been identified and the team is still working to normalize the situation. Please note that flow execution logs will be delayed until the entire enqueued backlog is completely processed. Flow executions have not been impacted and are still running, even though they are not yet reflected in the Wevo iPaaS portal.
The issue has been identified and the team is still working to normalize the situation. Please note that flow execution logs will be delayed until the entire enqueued backlog is completely processed.
The issue has been identified and the team is currently working to normalize the situation. Please note that flow execution logs will be delayed until the entire enqueued backlog is completely processed.
We are currently experiencing slowness while processing flow execution logs. Execution logs will be delayed in appearing in the Wevo iPaaS portal, but no flow executions are being impacted; all flows are executing properly on their configured schedules.
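The updates above describe keeping new execution logs current while an enqueued backlog is drained separately. A minimal Python sketch of that prioritization, assuming two in-memory queues and a placeholder `store()` writer (both hypothetical, not Wevo's actual log pipeline):

```python
import queue
import time

live_logs = queue.Queue()      # logs from flows finishing right now
backlog_logs = queue.Queue()   # delayed logs enqueued during the incident

def store(entry):
    """Placeholder: write one execution-log entry to the portal database."""
    pass

def drain(batch_size=500):
    """Flush live logs first so new executions appear immediately,
    then work through a bounded slice of the backlog each cycle."""
    while True:
        processed = 0
        while not live_logs.empty():           # 1. everything that just arrived
            store(live_logs.get())
            processed += 1
        for _ in range(batch_size):            # 2. a bounded chunk of the backlog
            if backlog_logs.empty():
                break
            store(backlog_logs.get())
            processed += 1
        if processed == 0:
            time.sleep(1)                      # idle: avoid a busy loop
```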
Report: "Flow execution logs with slow processing"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Wevo iPaaS is facing slowness during platform navigation and flow's execution"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently experiencing an issue causing Wevo iPaaS navigation to be slow, which is also affecting flow execution.
Report: "[ iPaaS Latam ] - General unavailability in flow's execution"
Last update: New flow executions were not triggered, causing integrations not to run.
Report: "Delay in Runtime Operation Status in EMEA"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Report: "Flow execution logs with slow processing"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
Flow execution logs with slow processing
Report: "[ iPaaS Latam ] - General unavailability in flows and logs"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
Flow execution and logs unavailability
Report: "Missing and/or incorrect EDI segment causing transaction failures"
Last update: This incident has been resolved.
The root cause has been identified and fixed. Efforts are ongoing to identify and resubmit the affected transactions.
Report: "[ iPaaS Latam ] Flow - Faul Validation dynamic Tokens in Account Connector"
Last update: Problem root cause: There was a malfunction in the validation and renewal of dynamic account tokens in some flows, which caused temporary unavailability. Solution: The problem in the Accounts module was identified and an immediate correction was made available, resolving the issue. Did it affect all connectors? No. It only affected new external clients that needed to authenticate to create new dynamic accounts. Ticket: WIP-543
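The root cause above is a fault in validating and renewing dynamic account tokens. Purely as an illustration of that validate-then-renew pattern (the class, fields, and renewal margin below are hypothetical, not the actual Accounts module), a small Python sketch:

```python
import time

class DynamicAccountToken:
    """Sketch of the validate-then-renew pattern for dynamic account tokens."""

    RENEW_MARGIN = 60  # seconds before expiry at which we renew proactively

    def __init__(self, fetch_token):
        self._fetch_token = fetch_token   # callable returning (token, expires_at_epoch)
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Renew whenever the token is missing or close to expiring; a fault in
        # this check is the kind of bug that lets existing tokens keep working
        # while brand-new clients fail to authenticate.
        if self._token is None or time.time() >= self._expires_at - self.RENEW_MARGIN:
            self._token, self._expires_at = self._fetch_token()
        return self._token
```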
Report: "[ iPaaS Latam ] Failed to register Storage in the Dashboard from 2023-08-10 to 2023-08-23"
Last update: After updating the database driver in the microservices, the current date format was no longer compatible, which caused a failure in recording data from Storage. We adjusted the compatibility in the microservices and the data is being stored correctly again. Sep 01, 2023 - 17:20 GMT-03:00
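The failure above came from a driver upgrade changing the date format the Storage recorder expected. A defensive way to handle that, sketched in Python with illustrative formats (the actual formats involved are not stated in the report), is to accept every known format rather than assuming one:

```python
from datetime import datetime

# Formats the storage recorder might receive; the second is the shape a
# driver could start returning after an upgrade (illustrative values only).
_KNOWN_FORMATS = (
    "%Y-%m-%d %H:%M:%S",        # previous driver
    "%Y-%m-%dT%H:%M:%S.%f%z",   # ISO-8601 with timezone after the upgrade
)

def parse_storage_timestamp(raw: str) -> datetime:
    """Accept either date format so a driver upgrade does not silently
    break Storage recording again."""
    for fmt in _KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp format: {raw!r}")
```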
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Slowness and Performance problem in one of our Proxy"
Last update: Slowness and Performance problem in one of our Proxy
Report: "[ iPaaS Latam ] AgentOnPremise does not remain with ONLINE status"
Last update: AgentOnPremise does not remain with ONLINE status
Report: "[ iPaaS Latam ] Proxy problem"
Last update: Proxy problem
Report: "[ iPaaS Latam ] Slowness and Performance problem in one of our Microservice Workers - Processing Logs"
Last update: Slowness and Performance problem in one of our Microservice Workers - Processing Logs
Report: "[ iPaaS Latam ] Temporary Connection Disruption for Some On-Premise Agents"
Last update: Temporary Connection Disruption for Some On-Premise Agents
Report: "[ iPaaS Latam ] Slowdown in Proxy and Agent Operations"
Last update: Slowdown in Proxy and Agent Operations
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Slowness and Performance problem in one of our Proxy"
Last update: Slowness and Performance problem in one of our Proxy
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Slowness and Performance problem in one of our Proxy"
Last update: Slowness and Performance problem in one of our Proxy
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Flow execution logs with slow processing"
Last update: Flow execution logs with slow processing
Report: "[ iPaaS Latam ] Slowness and performance issue in events for on and off flows"
Last update: Recently, we identified an issue in one of our security layers that impacted the functionality of our Auto Scaling system, resulting in a failure to scale servers correctly. As a consequence, the ability to start some flows was temporarily affected. After a detailed analysis, we have identified the root cause of this incident and have taken the necessary steps to resolve it. To ensure that situations like this do not happen again, we are improving our monitoring processes and adding new steps to the platform's checklist. At this moment, our systems are operating normally, and there are no reports of similar issues. Aug 19, 2024 - 11:53 GMT-03:00
Report: "Scheduler Issue on Cloud Agents"
Last update: ### Root Cause Analysis We apologize for the service disruption. We appreciate your patience and understanding. We are fully committed to resolving this and minimizing further disruptions. **Issue:** The cloud agents scheduler service failed to restart after a scheduled upgrade, disrupting customer schedules during the outage. **Impact:** Major **Services Impacted:** Customer scheduled operations using Jitterbit Sandbox and Production Cloud Agent groups. **Location:** APAC Cloud **Problem Description:** The Jitterbit iPaaS scheduler service uses an external library for handling timezone-related functions. This library, upon initialization, attempts to update its timezone data file. An incorrect value within this file caused the scheduler service to fail during startup. **Timeline (9/6/2024, UTC):** 04:00 - The upgrade process began and the old servers were drain-stopped from service. 05:15 - The process finished, internal testing began, and the issue with the scheduler service was discovered. 08:20 - Identified the issue and started developing a workaround to apply to the Cloud Agents. 08:55 - Workaround applied and confirmed that the issue was mitigated. **Root Cause:** A syntax error was found in a third-party timezone data file. **Action:** Immediate: Implemented a workaround by fixing the timezone data file and restarting the service; documented the workaround in the release notes for customers who may run into this issue with their private agents. Strategic: A future agent version will not update the timezone data file automatically; correct the syntax of the timezone data file.
Issue has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating this issue.
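The RCA for this incident traces the outage to a bad entry in a third-party timezone data file that aborted scheduler startup. As a conceptual sketch only (this is not Jitterbit's scheduler code), a Python service can fail soft on bad timezone data instead of failing to start:

```python
import logging
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

log = logging.getLogger("scheduler")

def load_schedule_timezone(name: str) -> ZoneInfo:
    """Resolve a schedule's timezone, falling back to UTC instead of letting
    one bad timezone entry abort service startup."""
    try:
        return ZoneInfo(name)
    except (ZoneInfoNotFoundError, ValueError) as exc:
        log.error("Invalid timezone %r (%s); falling back to UTC", name, exc)
        return ZoneInfo("UTC")
```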
Report: "Issue with the Success Central and Developer Portal websites"
Last update: This incident has been resolved.
We believe the issue has been resolved and will continue to monitor.
The Success Central (success.jitterbit.com) and Developer Portal (developer.jitterbit.com) documentation sites are experiencing degraded performance.
Report: "New Operation Status and Schedules Impacted in APAC"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Test Alert - please ignore"
Last update: The test alert has been resolved.
This is a test.
Report: "Issues with Harmony cloud logins, operations, and APIs"
Last update: # Root Cause Analysis **Product:** Harmony **Issue:** Service disruption for the login service, APIs on APIM, and agent communications **Impact:** Major **Services Impacted:** Cloud Login, Rest API, Operations, APIs running on APIM **Location:** NA (North America Cloud) **Problem Description:** Login and app access: Users encountered an internal server error when attempting to log in to Identity, hindering access to various apps such as Cloud Studio, APIM, Marketplace, and the Management Console. Cloud and Private Agents: Harmony Cloud and Private agents faced connection issues and were unable to log in, rendering them incapable of processing any operations. APIs on APIM: The APIs experienced intermittent issues, ranging from degraded execution to complete unresponsiveness. **Duration:** March 2, 2024 9:05 PM UTC - March 3, 2024 02:20 AM UTC **Root Cause:** We detected unusually high traffic volume on our firewalls and load balancers. This traffic was confirmed to be the result of a distributed denial-of-service (DDoS) attack from a malicious third party targeting our AWS Cloud Service. The servers behind the firewall were unable to handle this overwhelming surge in traffic, resulting in disruptions to user logins, agents processing operations, and APIs on APIM. Our Web Application Firewalls (WAF) failed to detect the unique fingerprint of the traffic and, consequently, were unable to block it. **Immediate Action:** Once the elevated traffic was detected, the Jitterbit team, per security protocol, took countermeasures by blocking all traffic and safely restoring each service through thorough review with our Information Security and AWS Security teams. During this time, services for APIs, login, and agent communications were disrupted. Once the traffic fingerprint was derived, a rule was put in place to mitigate the attack. We implemented rate-limit rules as an additional safety layer while collaborating with the AWS Security Team. Services were safely restored after thorough review by both the Jitterbit and AWS teams. No customer data was lost or at risk during the disruption. **Strategic Action:** The Information Security team will run security scans and review the Intrusion Detection System (IDS) to confirm this was an isolated scenario. Jitterbit security teams are continuing to work with the AWS Security team to further analyze and implement proactive safety measures to mitigate further disruptions.
The issue has been resolved and services have been restored. We are working on the details for the RCA and will share those soon. We apologize for the inconvenience.
A fix has been implemented and services have recovered. We will continue to monitor and investigate any new issues that arise.
At this moment, our team has identified the issue and is actively working on a solution. As services gradually come back online, we continue our investigation.
We are currently working with our cloud hosting provider AWS on a network/infrastructure issue involving an unusually high volume of traffic. This is causing degradation in our services. We will continue to provide updates as they come in.
We apologize for the inconvenience. We are still investigating the issue. This is our highest priority and we will provide details as they come.
We are continuing to investigate this issue.
We are currently investigating the issue impacting logins to applications, operations, and APIs. Our team is actively working to identify and resolve this issue as quickly as possible.
We are still investigating the issue. Logins, Operations, and API functionalities are all affected.
At this time, we are still investigating. The issue is affecting login and APIs.
We are currently looking into an issue with slow activity log and degraded agent synchronization.
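The RCA for this incident mentions rate-limit rules added as an extra safety layer. The production mitigation used AWS WAF rate-based rules, but the underlying idea can be sketched as a per-client token bucket in Python (all numbers and names here are illustrative):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: the same idea as the rate-limit rules in the
    RCA above, shown only as a conceptual sketch."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        self.state = defaultdict(lambda: (float(burst), time.monotonic()))

    def allow(self, client_id: str) -> bool:
        tokens, last = self.state[client_id]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
        if tokens >= 1.0:
            self.state[client_id] = (tokens - 1.0, now)
            return True                      # request passes
        self.state[client_id] = (tokens, now)
        return False                         # request rejected (rate limited)

# Example: allow roughly 100 requests/second per source, with a burst of 200.
limiter = TokenBucket(rate_per_sec=100, burst=200)
```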
Report: "Performance degredation"
Last update: **Root Cause Analysis: 2/15/24**

| | |
| --- | --- |
| **Issue** | Degraded Harmony response; agent processing |
| **Impact** | Critical |
| **Services Impacted** | Cloud Login, Rest API, Operations, APIs |
| **Location** | NA (North America Cloud) |
| **Problem Description** | **Slow login**: Users experienced delays during login to the application. **Agent processing**: Operations stuck in submitted or pending. |
| **Root Cause** | **Degraded connection**: Underlying node types were causing slow network performance. **Backlog of jobs**: A backlog of tasks accumulated due to a previous issue, resulting in delayed job assignments. |
| **Impact** | **User experience**: Slow to no login. **Application performance**: Overall application responsiveness degraded. **Agent processing**: Operation processing delays. |
| **Resolution Steps** | Scaled additional compute resources; traced the connectivity path. **Database health check**: Investigated the data layer for resource bottlenecks, query performance, and server health. **Network troubleshooting**: Verified network connectivity. **Service**: Observed the issue was intermittent, with slow processing. **Monitoring**: Reviewed monitors and breakpoints. **Logs**: Increased resources at the messaging layer to process the job backlog; as part of this process, the queue was cycled. |
| **Action** | **Tactical**: Remove the faulty directory provider; add monitors for the data layer to detect token issues; add monitors for the operations queue; review data query response times. **Strategic**: Profile the sequence that led to the performance degradation; add circuit breakers to prevent cascading effects; add cordoned areas to help with faster resolution. |
This incident has been resolved.
A fix has been implemented. We are monitoring the system. There is still a backlog of activity logs that will take some time to process.
We apologize for the inconvenience you're experiencing. At this time, we have identified the cause of the issue and are working on a resolution. We'll provide an estimated time of resolution as soon as possible. Thank you for your patience!
We are continuing to look into this issue, which is affecting user login and job execution. More information will be provided as it comes in.
We are currently investigating a reported slowness during login.
Report: "eiCloud NA performance degredation"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Service: eiCloud NA. Description: We're currently looking into a problem affecting eiCloud performance. Customer impact: Customers may see a delay in transaction status/messaging updates. Transaction processing times are unaffected and transactions are completing as normal.
Report: "eiCloud NA is experiencing a delay with ftp transactions"
Last update: A fix has been implemented and performance is recovering.
A fix has been deployed. Backlogs still remain and we are monitoring the progress through the backlogs.
Issue has been identified. Working on a fix.
eiCloud NA is experiencing a delay with ftp transactions.
A fix has been implemented and we are monitoring the results.
eiCloud NA is experiencing a delay with ftp transactions
Report: "Jitterbit - Wevo iPaaS LATAM: Slowness and Performance problem in one of our Proxy"
Last update: Resolved - This incident has been resolved. Nov 10, 07:32 GMT-03:00 Update - We have already identified the problem and carried out the appropriate procedures. We are currently monitoring the entire environment. Nov 10, 07:26 GMT-03:00 Update - We are continuing to monitor for any further issues. Nov 10, 06:00 GMT-03:00 Monitoring - We identified that one of our Proxy cluster servers was slow, causing intermittent failures in some connections made on the platform. Some Frontend and On-Premise Agent services may have been affected. Nov 10, 02:30 GMT-03:00
Report: "Some Operations not running on EMEA Cloud Agents"
Last update: A fix has been applied and we will continue to monitor.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Report: "Degraded performance on Harmony"
Last update: Issue has been resolved.
A fix has been rolled out. Monitoring performance.
We are looking into a performance issue with Harmony Cloud login and application user interface. Will provide an update shortly.
Report: "Issues with Cloud Agents"
Last update: This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently looking into an issue with a few of the Cloud Agent groups processing operations in EMEA.