Redox Engine

Is Redox Engine Down Right Now? Check whether there is an ongoing outage.

Redox Engine is currently Operational

Last checked from Redox Engine's official status page
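
If you would rather check this status programmatically than load the page, a minimal probe is sketched below. It assumes the page is hosted on Atlassian Statuspage and therefore exposes the standard /api/v2/status.json endpoint; the hostname is a placeholder guess, not confirmed by this page.

```python
# Minimal status probe. Assumption: the status page is hosted on Atlassian
# Statuspage and exposes the standard /api/v2/status.json endpoint; the
# hostname below is a placeholder, not confirmed by this page.
import json
import urllib.request

STATUS_URL = "https://status.redoxengine.com/api/v2/status.json"  # hypothetical host

def current_status() -> str:
    """Return the Statuspage indicator: none, minor, major, or critical."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["indicator"]

if __name__ == "__main__":
    indicator = current_status()
    print("Operational" if indicator == "none" else f"Degraded: {indicator}")
```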

Historical record of incidents for Redox Engine

Report: "Redox FHIR Sandbox Refresh"

Last update
Completed

The scheduled maintenance has been completed.

In progress

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled

The Redox FHIR Sandbox will be refreshed to its original state. Any unique data created by customers in the Sandbox environment over the past 6 months will be removed. Customers may continue to create unique data on seeded test patients once the refresh is completed.
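
Because each refresh wipes customer-created sandbox data, it can help to keep test-data creation in a rerunnable script. Below is a minimal sketch using standard FHIR R4 REST semantics (POST {base}/Patient); the base URL and token are placeholders, not documented Redox sandbox values.

```python
# Rerunnable test-data seeding for a FHIR sandbox. Assumptions: standard
# FHIR R4 REST semantics (POST {base}/Patient); the base URL and token are
# placeholders, not documented Redox sandbox values.
import json
import urllib.request

FHIR_BASE = "https://fhir-sandbox.example.invalid/R4"  # placeholder
TOKEN = "YOUR_SANDBOX_TOKEN"                           # placeholder

def create_patient(family: str, given: str) -> dict:
    """POST a minimal Patient resource and return the server's copy."""
    patient = {"resourceType": "Patient", "name": [{"family": family, "given": [given]}]}
    req = urllib.request.Request(
        f"{FHIR_BASE}/Patient",
        data=json.dumps(patient).encode(),
        headers={"Content-Type": "application/fhir+json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Rerun after each refresh to restore any custom fixtures your tests need.
    create_patient("Refreshtest", "Rerunnable")
```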

Report: "Traffic delayed and increased errors"

Last update
resolved

Latency has now resolved and traffic is flowing as expected.

monitoring

For most feeds, latency has resolved. We are continuing to monitor until latency is fully resolved.

monitoring

We have implemented a fix and seen that error rates have returned to nominal levels. The message latency is starting to decline. We are continuing to monitor until latency fully resolves.

identified

We believe we have identified the root cause of the issue and are deploying a fix.

investigating

We are seeing increased errors with our API and message delays of up to 30 minutes. We are currently investigating the issue.
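
During windows of elevated API errors like this, a common client-side mitigation is to retry transient failures with exponential backoff and jitter rather than failing outright. A generic sketch, not a documented Redox recommendation:

```python
# Client-side mitigation for windows of elevated API errors: retry
# transient failures with exponential backoff plus jitter. Generic sketch,
# not a documented Redox recommendation.
import random
import time
import urllib.error
import urllib.request

def fetch_with_retry(req: urllib.request.Request, attempts: int = 5) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code < 500:
                raise          # 4xx is not transient; surface it immediately
        except urllib.error.URLError:
            pass               # connection-level failure; worth retrying
        time.sleep(min(60, 2 ** attempt) + random.random())  # capped backoff + jitter
    raise RuntimeError(f"request failed after {attempts} attempts")
```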

Report: "Traffic delayed and increased errors"

Last update
Resolved

Latency has now resolved and traffic is flowing as expected.

Update

For most feeds we have seen resolved latency. We are continuing to monitor until we see latency fully resolved.

Monitoring

We have implemented a fix and seen that error rates have returned to nominal levels. The message latency is starting to decline. We are continuing to monitor until latency fully resolves.

Identified

We believe we have identified the root cause of the issue and are deploying a fix.

Investigating

We are seeing a number of increased errors with our API and delayed messages of up to 30 minutes. We are currently investigating the issue.

Report: "Customer configured alerts not fully functional"

Last update
resolved

Alerting continues to function as expected. We are individually reaching out to customers who may have missed emails and are still in an alerted state.

monitoring

The secondary system's incident is resolved. All Customer Configured Alerts are back up and running. We are investigating which alerts may not have been correctly emailed during the outage. You can check the Monitoring page on your dashboard for an accurate current status of all customer configured alerts.

investigating

A secondary system we use for our Customer Configured Alerts is currently experiencing an outage. This may result in missed alerts or some extra alerts being sent out. This only affects Customer Configured Alerts, not our other alerting systems. We are monitoring the impact of this secondary system outage.

Report: "Customer configured alerts not fully functional"

Last update
Resolved

Alerting continues to function as expected. We are reaching out to customers who may have missed emails who are still in an alerted state individually.

Monitoring

The secondary system's incident is resolved. All Customer Configured Alerts are back up and running. We are investigating possible impact for what alerts may not have been correctly emailed during the outage.You can look at the Monitoring page on your dashboard to see an accurate current status of all customer configured alerts.

Investigating

A secondary system we use for our Customer Configured Alerts is currently experiencing an outage. This may result in missed alerts or some extra alerts. Being sent out. This only affects the Customer Configured Alerts not all our other alerting systems. We are monitoring the impact of this secondary system outage.

Report: "Delay in Email Alerting"

Last update
resolved

This incident has been resolved.

monitoring

We've implemented a fix and are presently monitoring for further impact

identified

We are aware of an issue with our alerting system that has delayed email alerting by up to two hours due to connection issues. Our development team is implementing a fix, and Redox is actively monitoring the situation.

Report: "Viewing details in some Logs may be delayed"

Last update
resolved

This incident has been resolved.

monitoring

We have identified the issue and have implemented a change. Log visibility is no longer delayed. We are continuing to monitor the issue as we work on deploying a long term fix.

investigating

We're currently observing a delay impacting the visibility of the transmission information in some logs. There are NO delays in actual message traffic or delivery — all data is transmitting as expected. Our team is actively working to reduce the latency and will keep you updated on any significant changes. Thank you for your understanding and patience.

Report: "Customer log content may be delayed"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are continuing to investigate this issue.

investigating

We are continuing to investigate this issue.

investigating

We're currently observing a delay affecting the archive importing system, which impacts the visibility of message logs and transmission contents. To clarify: there are no delays in actual message traffic or delivery — all data is transmitting as expected. However, some customers may experience up to a 5-minute delay in viewing these logs within the system. Our team is actively working to reduce the latency and will keep you updated on any significant changes. Thank you for your understanding and patience.

Report: "Editing filters is currently unavailable in the dashboard"

Last update
resolved

This incident has been resolved.

identified

Editing filters is currently unavailable in the dashboard. New filters can still be created, and existing filters continue to work as intended; they simply cannot be edited. The issue has been identified and a fix is being implemented.

Report: "Some email notification services are down"

Last update
postmortem

## Summary

On February 11th at 11:30 AM CT, a small number (<20) of email notifications to users were delayed. These notifications were not sent until February 13th at 12:35 PM CT.

## What Happened

* On February 11th, a misconfigured API key in a new notification service prevented email notifications from being sent from a subset of systems.
* As a result, customers experienced delays in receiving:
  * Notifications about traffic errors and resolutions.
  * Notifications about upcoming certificate expirations (30 days out).
* The API key was updated and normal processing resumed on February 13th at 12:35 PM CT.

## What we are doing about this:

* Additional monitoring and alerting were added to prevent similar potential failures in the future.

resolved

This incident has been resolved.

monitoring

A fix has been implemented and customers will now start receiving the queued emails. We are continuing to monitor the results.

identified

Two email notification services that Redox uses are down. These services affect emails for notifying customers of expiring certificates and for message traffic or VPN related issues. We have identified the issue and are implementing a fix. When the issue is resolved, customers will receive all backlogged emails that would have been sent while the services were down.
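
The postmortem above traces the delay to a misconfigured API key that failed silently. One way to catch that class of failure is to validate the key with a cheap authenticated call at startup; in the sketch below the provider base URL and "/v1/ping" path are hypothetical stand-ins for a real provider health endpoint.

```python
# Startup guard for the failure mode above: validate the notification
# provider's API key with a cheap authenticated call instead of letting a
# misconfigured key fail silently later. The "/v1/ping" path and base URL
# are hypothetical stand-ins for a real provider health endpoint.
import urllib.error
import urllib.request

def verify_notification_key(base_url: str, api_key: str) -> None:
    req = urllib.request.Request(
        f"{base_url}/v1/ping",  # hypothetical cheap authenticated endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            raise RuntimeError("notification API key rejected; refusing to start") from err
        raise  # other HTTP errors still indicate the provider answered
```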

Report: "Slow traffic and increased latency"

Last update
postmortem

## Summary

Starting January 24, 2025 at 2:40 CT, we became aware of message processing latency for some of our customers. This latency occurred intermittently through February 3, when some message processing stopped, resulting in rejected messages for a subset of customers. A subset of customers with subscriptions in the affected database were impacted until the root causes were determined to be:

* an inefficient query that monitors message processing
* a lack of monitoring visibility into a set of waiting messages that were in an errored state

On February 4, 2025 at 1:15 CT, changes fixing both root causes were deployed, with most customers mitigated by February 4, 2025 at 17:14 CT. All impacted customers were fully operational by February 5, 2025 at 12:33 CT.

## What Happened

* On January 23, atypical messages became stuck in a processing "waiting" state. Combined with a lack of visibility into errors for that waiting state and an inefficient query for monitoring message processing, one database ran out of available space.
* Customers with subscriptions on that database experienced increasing latency intermittently from January 24 through February 4.
* To mitigate this incident, we removed the problematic messages to unblock customer subscriptions on that one database. Additionally, we optimized the database query that monitors message processing and added metrics to capture and alert on errors from messages waiting to be processed.

## What we are doing about this:

* We have created an alert that captures when messages are erroring in this waiting state.
* We have corrected the edge case discovered that allowed the large message payload.
* We have improved performance of a query monitoring message processing.
* We are improving the process of moving waiting messages into processing to handle atypical messages better.

resolved

The latency and slowness have gone back to normal levels

monitoring

Traffic and latency seem to be returning to normal, but we are continuing to monitor for further developments

investigating

We are currently seeing slow traffic and increased latency for message processing. We are investigating this issue.
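
The postmortem above says an alert now fires when messages error in the waiting state. A minimal version of that kind of check is sketched below; the table, columns, and threshold are hypothetical, with sqlite3 standing in for the real message store.

```python
# A minimal version of the alert described above: periodically count
# messages stuck erroring in the waiting state and alert past a threshold.
# The table, columns, and threshold are hypothetical; sqlite3 stands in
# for the real message store.
import sqlite3
import time

THRESHOLD = 100
QUERY = "SELECT COUNT(*) FROM messages WHERE state = 'waiting' AND error IS NOT NULL"

def check_once(conn: sqlite3.Connection) -> None:
    (stuck,) = conn.execute(QUERY).fetchone()
    if stuck > THRESHOLD:
        print(f"ALERT: {stuck} messages erroring in the waiting state")

if __name__ == "__main__":
    conn = sqlite3.connect("messages.db")  # placeholder datastore
    while True:
        check_once(conn)
        time.sleep(60)  # evaluate once a minute
```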

Report: "Filters are unavailable in the dashboard UI. Processing is not impacted."

Last update
postmortem

## Summary

* On February 6, 2025 at 1025 CT we became aware of an issue that affected the ability to view or modify some filters in the Redox dashboard for a subset of customers; at 1136 CT we deployed a fix that enabled users to once again interact with filters in the dashboard.
* That subset included only customers who had filters that were previously linked to a deleted subscription, and there was no impact to filter logic or message processing during this time.

## What Happened

* On February 5 we implemented a code change to support an upcoming enhancement to the customer filters experience.
* After implementation, we uncovered that, in cases where a subscription had a filter and was later deleted, the user would receive an error; this prevented filters from loading for other subscriptions in that same environment, which meant users could not interact with them for a period of time.
* Approximately an hour after we learned of the issue, we deployed a fix and users were again able to interact with filters in the dashboard as expected.

## What we are doing about this:

* We are adding additional alerting that would support proactively identifying this type of issue in the future.
* We are expanding our understanding of edge cases and incorporating additional testing to support those scenarios.

resolved

This incident has been resolved.

identified

Viewing and editing filters is unavailable in the dashboard. Message processing with filters is not impacted. We have identified the issue and a fix is being implemented.

Report: "High latency in transmission processing for some customers"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor the fix that we've implemented. Almost all queues (fewer than a dozen remaining) have drained, and the rest are set to drain by midday today (February 4th). If you have any questions, feel free to reach out to support@redoxengine.com

monitoring

We have implemented a fix that is mitigating the effects of the latency we've been seeing. We are actively monitoring to ensure that this fix holds

identified

We have identified the root cause of the issue causing the message latency and are in the process of finding a fix. If you have any questions, please reach out to support@redoxengine.com

investigating

We are aware that some of the affected customers are also receiving NACKs on their transmissions. We are continuing to investigate the root cause of this issue.

investigating

We are aware that some customers may be experiencing increased latency in transmission processing. We are actively looking into this issue and will provide updates as we learn more.
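
Background for senders seeing NACKs during an incident like this: in HL7v2, the acknowledgment code is field MSA-1 of the returned ACK (AA = accept, AE = error, AR = reject). Below is a dependency-free sketch for classifying an ACK so rejected transmissions can be queued for replay; nothing here is Redox-specific.

```python
# Classify an HL7v2 ACK by its MSA-1 acknowledgment code (AA = accept,
# AE = error, AR = reject), so rejected transmissions can be held for
# replay once traffic recovers. Nothing here is Redox-specific.
def ack_code(hl7_ack: str) -> str:
    for segment in hl7_ack.replace("\n", "\r").split("\r"):
        if segment.startswith("MSA"):
            return segment.split("|")[1]   # MSA-1
    raise ValueError("no MSA segment found in ACK")

def is_nack(hl7_ack: str) -> bool:
    return ack_code(hl7_ack) in ("AE", "AR")

# Example:
#   ack = "MSH|^~\\&|RCV|FAC|SND|FAC|202502040000||ACK|1|P|2.3\rMSA|AR|MSG00001"
#   is_nack(ack)  -> True, so hold the message for replay once traffic recovers
```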

Report: "Log Processing Delay for a Subset of Customers"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are investigating an issue that may delay asynchronous traffic log processing for a subset of users.

Report: "Additional Customer Filter Errors"

Last update
postmortem

## Summary

* On January 22, 2025 from ~0900 CT to 1650 CT, some filters that had been previously deleted were reactivated, which may have caused some filter logic to execute in unexpected ways. Message processing was otherwise unaffected.
* Secondarily, on January 23, 2025 we identified that for a subset of customers, messages that should have been filtered via Redox logic were not. No traffic was sent to you that shouldn't have been sent, but your customer filter configuration may have affected what messages you received or filtered out during this time. This did not affect who receives your messages.
* All impacted customers were notified directly and offered replay assistance as well as support for assessing further impact.

## What Happened

* On January 22 an initial storage system script was executed. This script inadvertently reactivated some previously deleted filters, causing them to run against subscription traffic during the time period.
* On January 22 we were notified of unexpected filters running against traffic. After detecting the inadvertent reactivation, a second script was executed to re-delete the applicable filters. At this point unexpected filtering was mitigated.
* On January 23 we were notified that the second script, while mitigating the original issue, had caused a secondary issue: a subset of filters that should have been filtering traffic were no longer doing so. A third script was executed to mitigate this secondary issue.
* This sequence of events created the following possible impacts for a subset of customers:
  * On January 22, some traffic that should not have been filtered was.
  * On January 23, some traffic that should have been filtered was not.
* On both January 22 and January 23, impacted customers were contacted and assistance was offered to remediate.
* Replays were performed, if requested, for the January 22 filtering.

## What we are doing about this:

* We are implementing improved alerting to proactively notify us of misaligned expectations in our underlying storage system.
* We are considering improvements to our internal tooling for interacting with resources in our underlying storage system.

resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are aware of an issue affecting customer message filters. If you suspect your messages are being unexpectedly filtered, or if your messages should be filtered but are not, please reach out to Production Support here: https://redoxengine.atlassian.net/servicedesk/customer/portal/12

Report: "Customer Filter Errors"

Last update
postmortem

## Summary

* On January 22, 2025 from ~0900 CT to 1650 CT, some filters that had been previously deleted were reactivated, which may have caused some filter logic to execute in unexpected ways. Message processing was otherwise unaffected.
* Secondarily, on January 23, 2025 we identified that for a subset of customers, messages that should have been filtered via Redox logic were not. No traffic was sent to you that shouldn't have been sent, but your customer filter configuration may have affected what messages you received or filtered out during this time. This did not affect who receives your messages.
* All impacted customers were notified directly and offered replay assistance as well as support for assessing further impact.

## What Happened

* On January 22 an initial storage system script was executed. This script inadvertently reactivated some previously deleted filters, causing them to run against subscription traffic during the time period.
* On January 22 we were notified of unexpected filters running against traffic. After detecting the inadvertent reactivation, a second script was executed to re-delete the applicable filters. At this point unexpected filtering was mitigated.
* On January 23 we were notified that the second script, while mitigating the original issue, had caused a secondary issue: a subset of filters that should have been filtering traffic were no longer doing so. A third script was executed to mitigate this secondary issue.
* This sequence of events created the following possible impacts for a subset of customers:
  * On January 22, some traffic that should not have been filtered was.
  * On January 23, some traffic that should have been filtered was not.
* On both January 22 and January 23, impacted customers were contacted and assistance was offered to remediate.
* Replays were performed, if requested, for the January 22 filtering.

## What we are doing about this:

* We are implementing improved alerting to proactively notify us of misaligned expectations in our underlying storage system.
* We are considering improvements to our internal tooling for interacting with resources in our underlying storage system.

resolved

Emails will be sent to affected customers to determine replay eligibility tomorrow morning. Please keep an eye on your alert inbox for updates

monitoring

A fix has been implemented and we are working on replaying affected messages

investigating

We are aware of an issue affecting customer message filters. If you suspect your messages are being unexpectedly filtered, please reach out to Production Support here: https://redoxengine.atlassian.net/servicedesk/customer/portal/12

Report: "Message Processing Halted for Subset of Customers"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results. Customers should see that traffic flow has resumed and may notice a larger queue depth until caught up.

identified

The issue has been identified and a fix is being implemented.

investigating

Redox is investigating an issue affecting message processing for a subset of customers. Affected customers may notice a stop to inbound traffic from Redox and an inability to send messages outbound to Redox.

Report: "Advanced MD credentials locked out"

Last update
resolved

A large majority of polling workflows have been re-enabled and traffic should be flowing as expected. A small subset of credentials still need to be reset, but we have communicated with these customers individually to resolve them on a case-by-case basis. The incident is now considered resolved.

identified

The issue has been identified and a fix has been implemented. We are now working with the appropriate AdvancedMD teams to have our credentials reset so polling and other workflows can be resumed.

Report: "Log Visibility Unavailable"

Last update
postmortem

## Summary

* On November 20, 2024 from 0915 CT to 1000 CT, Logs details were not visible in the dashboard due to logs processing delays. Message processing was not affected.

## What Happened

* We made a code change that changed the underlying infrastructure for Logs.
* We had planned for this type of failure and as a result were quickly able to fail forward.

## What are we doing about it?

* We will expand our internal alerting and monitoring to include an endpoint that would have enabled us to identify this type of issue more quickly.

resolved

This incident has been resolved. Log visibility is available.

identified

The issue has been identified and a fix is being implemented.

investigating

Redox is investigating an issue affecting the ability to view Logs in the dashboard; log visibility is currently unavailable. Log processing is NOT affected, and messages are processing as expected.

Report: "Carequality directory modification requests are failing"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented. We have updated our end to meet the updated Carequality standard and messages are going through as expected. We will continue to monitor on our end for any additional unexpected changes.

investigating

We understand the impacts of this issue and our teams are continuing to collaborate with Carequality and internal partners. Your understanding and patience are appreciated while we continue to work toward a resolution.

investigating

This appears to be a broader Carequality issue at this time and we have reached out to Carequality support to help identify and troubleshoot these request errors.

Report: "Up to half hour delay for a small subset of customers"

Last update
resolved

This incident has been resolved.

monitoring

Connections where we poll external APIs for messages will be delayed by up to 30 minutes. The issue has been identified and a fix has been released. We are now monitoring.

Report: "Dashboard Search Functionality Limited"

Last update
postmortem

## Summary

* From October 25-28, 2024, some Logs searches initiated by use of the search bar errored.

## What Happened

* On October 25 we implemented a code change, which was tested in isolation.
* On October 28 we were alerted to search errors and started an incident at 1647 CT.
* At 1710 CT we identified the issue and started working on a fix.
* At 1819 CT the fix was fully deployed and functionality in Logs search was restored.

## What are we doing about it?

* We are considering expanding our testing capabilities to account for earlier detection and mitigation of this type of failure in the future.

resolved

This incident has been resolved.

monitoring

We've applied a fix in production and are actively monitoring log searches to ensure that full functionality has been restored

identified

We have identified the issue and are working on a solution presently

investigating

We are aware that dashboard search functionality is extremely limited at the moment. The present workaround is to avoid the 'all' category when searching, but we are actively working on a fix.

Report: "Message processing is delayed for a subset of customers"

Last update
resolved

A fix has been implemented and processing has returned to normal. This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

Message processing is delayed for a subset of customers. Observability is not impacted. We are currently investigating the issue.

Report: "Message Processing Degraded for a Subset of Customers"

Last update
resolved

This incident has been resolved.

investigating

Some customers may experience a delay in message processing while we investigate this issue. Thank you for your patience.

Report: "Dashboard logs are delayed at this time. Message processing is not impacted."

Last update
postmortem

# Logs intermittently unavailable to view or search

## Summary

From August 25-26, 2024, Logs were intermittently delayed or unavailable to view or search in the Redox dashboard. Message processing was unaffected.

## What Happened & How We Responded

* On the morning of August 25, AWS initiated an automated failover of their managed database service due to an underlying storage volume issue, which subsequently affected throughput of our logs processing.
* On August 25th at 0539 CT, we restarted impacted application processes, which resolved the immediate issue.
* On August 26th at 0758 CT, we were alerted that logs were again falling behind in processing time. Working with AWS support, we uncovered that the underlying storage from the failover the previous day was still being optimized, resulting in database write latency. The storage optimization completed at 1500 CT, and the service was fully available again at 2229 CT.

## What we are doing about this:

* We are exploring an underlying storage system change to further increase our infrastructure durability.

resolved

Log observations have stabilized and remained performant. This incident has been resolved.

monitoring

Logs observability is back to performing as expected. We will continue to monitor performance throughout the day (8/27).

monitoring

Logs are continuing to catch up. We expect to return to regular observability of logs by 2:00AM CT tomorrow (8/27).

monitoring

A fix has been implemented and we are catching up on traffic. We expect to return to regular observability of logs by this evening (8/26).

investigating

Logs are available in the dashboard again. Visibility remains delayed.

investigating

We are continuing to investigate the issue. Log visibility in the dashboard is unavailable while we work to resolve this.

investigating

We are currently investigating this issue.

Report: "Dashboard logs are up to two hours behind. Processing should not be impacted."

Last update
postmortem

# Logs intermittently unavailable to view or search

## Summary

From August 25-26, 2024, Logs were intermittently delayed or unavailable to view or search in the Redox dashboard. Message processing was unaffected.

## What Happened & How We Responded

* On the morning of August 25, AWS initiated an automated failover of their managed database service due to an underlying storage volume issue, which subsequently affected throughput of our logs processing.
* On August 25th at 0539 CT, we restarted impacted application processes, which resolved the immediate issue.
* On August 26th at 0758 CT, we were alerted that logs were again falling behind in processing time. Working with AWS support, we uncovered that the underlying storage from the failover the previous day was still being optimized, resulting in database write latency. The storage optimization completed at 1500 CT, and the service was fully available again at 2229 CT.

## What we are doing about this:

* We are exploring an underlying storage system change to further increase our infrastructure durability.

resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Some Carequality documents are being returned as base64"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We have identified an issue where some Carequality documents are being returned as base64 instead of XML. Our engineering team is working on a fix.
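
While an issue like this is open, a defensive consumer can detect whether a returned document is raw XML or base64 and normalize it. A generic sketch, not a Redox-documented workaround:

```python
# Defensive consumer-side handling while documents may arrive as base64
# instead of XML: detect the encoding and normalize to XML text. Generic
# sketch, not a Redox-documented workaround.
import base64
import binascii

def ensure_xml(payload: str) -> str:
    text = payload.lstrip()
    if text.startswith("<"):
        return text  # already XML
    try:
        decoded = base64.b64decode(text, validate=True).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError) as err:
        raise ValueError("payload is neither XML nor base64-encoded XML") from err
    if not decoded.lstrip().startswith("<"):
        raise ValueError("decoded payload is not XML")
    return decoded
```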

Report: "Redox Dashboard is slow and intermittently unresponsive"

Last update
resolved

This incident has been resolved.

monitoring

This appears to have recovered. We are currently monitoring and investigating the root cause.

investigating

We are currently investigating this issue.

Report: "Dashboard logs a few minutes behind. Processing is not impacted."

Last update
postmortem

## Summary

From June 10-12, 2024, <2.5% of Logs were intermittently unavailable to view or search in the Redox dashboard for approximately two hours cumulatively. Message processing was unaffected.

## What Happened & How We Responded

* On the evening of June 10, we experienced bottlenecks in the throughput of our Logs processing.
* On June 11th at 0024 CT, we scaled up processing power and added additional observability metrics. This initially appeared to remediate the issue.
* On June 12th at 0842 CT, the issue resurfaced and we discovered and remediated a code limitation that caused the software to fall behind while processing payloads. This fully remediated the issue.

## What we are doing about this:

* We will retain the code optimization we made moving forward.
* We added additional monitoring to alert us to similar potential failures in the future.

resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Log Observability Delayed"

Last update
resolved

This incident has been resolved.

investigating

We are investigating an issue impacting the visibility of Logs in the Redox Dashboard. Message processing is not delayed at this time.

Report: "Dashboard logs are up to an hour behind. Processing should not be impacted."

Last update
postmortem

**Summary**

On May 6, 2024, from 1300 CT to 1506 CT, the Redox dashboard experienced a 2-3 hour delay in log updates, resulting in incomplete search results. Message processing was unaffected.

**What Happened**

* A software defect caused an increase in the system load of the processing environment and delayed log processing.
* The initial response, restarting the processing environment at 1334 CT, had no effect.
* At 1358 CT, increasing throughput between the processing environment and the underlying storage system restored log processing to the expected levels.

**What We Are Doing About This**

* We fixed the software defect that caused the original issue.
* We added additional monitoring to alert us to potential similar failures in the future.

resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

investigating

We are currently investigating this issue.

Report: "Redox Dashboard Loading Issues"

Last update
resolved

This incident has been resolved.

investigating

Redox is aware of an issue affecting access to the Dashboard. Customers are unable to load the dashboard at this time. We are investigating this issue and will provide an update as soon as possible.

Report: "Some dashboard functionality degraded"

Last update
resolved

This incident has been resolved.

monitoring

Customer Filters, Customer Translations, and the Escalations/Monitoring Tab were unavailable for a subset of customers. A fix has been implemented and we are monitoring the results.

Report: "Delayed (Outbound Redox) Message Delivery for Subset of Customers"

Last update
resolved

At approximately 7:47 AM CT, Redox experienced an issue on one of our VPN appliances, which resulted in the delayed delivery of outbound messages for approximately two (2) minutes. The incident was quickly resolved and message flow resumed; however, some customers may have received automated alert emails from Redox regarding connectivity failure. Consequently, any alerts received between 7:47 AM CT and 8:00 AM CT can be ignored. Alerts you receive from now on should be investigated, as the issue has been resolved. Thank you!

Report: "Redox VPN Connection Issue"

Last update
resolved

We have resolved this issue. If you continue to experience issues with your VPN tunnel connecting to the Redox peer of 35.168.141.219, please submit a Production Support ticket and we will be happy to assist with the remediation. Thank you.

identified

The issue has been identified and a fix is being implemented.

investigating

Redox is investigating a connectivity issue affecting multiple VPN tunnels connecting to our Peer IP of 35.168.141.219. We can remedy the issue by bouncing the tunnel on the Redox system. If you are experiencing issues connecting to this peer, please place a Production Support ticket (HTTP://www.redoxengine.com/help) and Redox will assist. Thank you.

Report: "Data on Demand - Ingestion delays for a subset of customers"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

Report: "Logs delayed by ~15 minutes in dashboard - message processing not impacted"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

Report: "Access's Record Locator Service for Carequality not returning search results"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

Starting at 10:35am CST this morning (Feb 22nd), the Record Locator Service for Carequality was not returning patient results as expected. The issue has been identified and a fix is on the way.

Report: "VPN Outage for Subset of Customers"

Last update
resolved

This incident has been resolved. More information regarding the root cause and impact will follow. Thank you for your patience and understanding.

monitoring

VPN Tunnel stability has been achieved at this time. Redox is still investigating the root cause and solution of this issue.

investigating

Redox is still investigating this issue. While we troubleshoot, you may notice your VPN tunnel to Redox disconnecting and reconnecting sporadically. Thank you for your patience while we investigate.

investigating

Redox is currently investigating an issue affecting a subset of our customers utilizing VPN connectivity. At this time, if you are experiencing VPN connectivity issues with Redox, it is likely due to this outage. We ask that you wait to submit a ticket until this issue has been resolved. We will provide updates as they become available. Thank you.

Report: "Small subset of logs are delayed"

Last update
resolved

This incident has been resolved and all logs should now be caught up.

monitoring

A small subset of logs are delayed by approximately 20 minutes. A fix has been deployed and we are actively monitoring results.

Report: "Logs display delayed in dashboard - message processing not impacted"

Last update
resolved

The incident is resolved and logs are displaying without delay.

monitoring

A fix has been implemented and we are monitoring the results as logs display catches up to real time.

identified

We are currently investigating an issue causing logs to be about 10 minutes behind real-time in displaying in the dashboard. Actual message processing is not impacted.

Report: "Some VPN connections unable to send traffic to Redox"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

A subset of connections sending traffic to Redox over VPN are currently unable to connect. We are investigating the issue and will post updates as more information becomes available.

Report: "VPN connectivity issues for some connections"

Last update
resolved

The connectivity issues have been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

We are continuing to investigate this issue.

investigating

Connections sending MLLP traffic to 10.253.0.2 may be experiencing connectivity issues. We are investigating the issue and will post updates here as they become available.
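
When MLLP connectivity to a specific peer is in question, a quick TCP connect probe helps distinguish an unreachable endpoint from an application-level problem. In the sketch below, port 2575 is the IANA-registered HL7/MLLP port and is only an assumption; actual ports are connection-specific.

```python
# Quick reachability probe for an MLLP endpoint during a connectivity
# incident: attempt a TCP connect with a short timeout. Port 2575 is the
# IANA-registered HL7/MLLP port, used here only as an assumption; actual
# ports are connection-specific.
import socket

def tcp_reachable(host: str = "10.253.0.2", port: int = 2575, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False  # refused, unroutable, or timed out

if __name__ == "__main__":
    print("reachable" if tcp_reachable() else "unreachable")
```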

Report: "Dashboard pages not loading"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

Report: "Dashboard Pages Not Loading"

Last update
resolved

This incident has been resolved.

identified

The issue has been identified and a fix is being implemented.

investigating

We noticed around 5:15 PM CST that a subset of dashboard functionality is not working; certain pages are unable to load. A fix is currently being deployed.

Report: "Dashboard Login Failing for Some Users"

Last update
resolved

The issue has been resolved. All users should now be able to log into the dashboard.

identified

Some users are currently unable to log into the Redox dashboard. The issue has been identified and a fix is being implemented. We expect access for all users to be restored in the next hour.

Report: "VPN Traffic Degraded for Subset of Customers"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

We are currently investigating an issue affecting our VPN gateway, which has affected traffic for a subset of customers. You may receive alerts regarding messages not being delivered during this time. Redox has identified the issue and is working on implementing a fix for this. Thank you for your patience.

Report: "Issue Processing Client Certificate Traffic"

Last update
resolved

This incident has been resolved.

identified

The issue has been resolved with CEQ. Carequality data is flowing as expected at this time.

investigating

Redox is investigating a message delivery issue. What is affected: XCPD, XDR, XCA, and Carequality traffic. What is NOT affected: Redox API, SFTP, and HL7v2 traffic.

Report: "Logs not searchable via dashboard and Platform API - message processing not impacted"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and logs are searchable again. We are continuing to monitor.

investigating

Redox is currently experiencing an issue with querying for logs via the dashboard and the Platform API, which means that logs are not searchable. Actual message processing is not impacted. We are investigating the issue and will provide further updates as more information becomes available.
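
A simple way to watch for recovery during an incident like this is to poll the log-search API and treat errors as "still unavailable". In the sketch below the path, query parameter, and auth scheme are placeholders; the real contract is defined by the Redox Platform API documentation.

```python
# Polling for recovery: call the log-search API and treat errors as "still
# unavailable". The path, query parameter, and auth scheme are placeholders;
# the real contract is defined by the Redox Platform API documentation.
import json
import urllib.error
import urllib.parse
import urllib.request

def search_logs(base_url: str, token: str, query: str):
    url = f"{base_url}/logs?search={urllib.parse.quote(query)}"  # hypothetical path
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            return json.load(resp)  # parsed results once search recovers
    except urllib.error.HTTPError:
        return None                 # search still erroring
```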

Report: "Delayed Processing of Records pushed to Carequality/RLS"

Last update
resolved

This incident has been resolved.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

investigating

Redox is currently experiencing a processing delay for records pushed to Carequality and RLS. New records pushed to Carequality/RLS are currently not queryable on the network. New records are queueing to be pushed to the network once the issue is resolved, so no data will be lost. We are investigating the processing delay and will post further updates as more information becomes available.

Report: "Message Processing - Slowed for subset of customers"

Last update
resolved

This incident has been resolved.

monitoring

We are continuing to monitor for any further issues.

monitoring

A fix has been implemented and we are monitoring the results.

identified

The issue has been identified and a fix is being implemented.

Report: "Degraded inbound and outbound message processing time"

Last update
resolved

This incident has been resolved.

monitoring

Processing times have returned to normal; the incident is now resolved.

monitoring

As of 4:15 PM Central, Redox Engine is experiencing degraded performance for message processing. You may experience delayed transmission of messages at this time. We have identified a root cause and applied a fix, and we are actively monitoring, but it will be some time before processing times return to normal. If you have any additional questions, please notify us at support@redoxengine.com.

Report: "Redox Dashboard Filter Issues"

Last update
resolved

The issue has been resolved. If your organization has been impacted you will receive a separate email with more information.

identified

Customer-maintained dashboard filters are not operating as expected. Redox is implementing a hotfix, and we will update you with more information as soon as it is available.