5.7 Release Notes

Cinchy version 5.7 was released on October 3rd, 2023.

For instructions on how to upgrade your platform to the latest version, please review the documentation here.

New Capabilities


Test connections

We made it simpler to debug invalid credentials in data syncs by adding a "Test Connection" button to the UI for the following sources and destinations:

| Name | Supported source | Supported destination |
| --- | --- | --- |
| Amazon Marketplace | ✅ Yes | |
| Binary Files | ✅ Yes | |
| | ✅ Yes | |
| | ✅ Yes | ✅ Yes |
| Delimited File | ✅ Yes | |
| | ✅ Yes | |
| | ✅ Yes | |
| Fixed Width File | ✅ Yes | |
| Kafka Topic | | ✅ Yes |
| | ✅ Yes | |
| | ✅ Yes | ✅ Yes |
| | ✅ Yes | |
| Salesforce Object | ✅ Yes | ✅ Yes |
| | ✅ Yes | ✅ Yes |
| MS SQL Server | ✅ Yes | ✅ Yes |

Selecting this button validates whether your credentials (username, password, connection string, etc.) can connect to your source or destination. If successful, a "Connection Succeeded" popup appears. If unsuccessful, a "Connection Failed" message appears, along with the ability to review the associated troubleshooting logs. With this change, you can debug access-related data sync issues at a more granular level.

Listener config integration

As we continue to enhance our Connections Experience offerings, you can now configure your listener for real-time syncs directly in the UI without having to navigate to a separate table. For any event-triggered sync source (CDC, REST API, Kafka Topic, MongoDB Event, Polling Event, Salesforce Platform Event, and Salesforce Push Topic), there is now the option to input your configurations directly from the Source tab in the Connections Experience. Any configuration you populate via the UI will be automatically reflected back into the Listener Config table of your platform.

You can set the:

  • Topic JSON

  • Connection Attributes

  • Auto Offset Reset

  • Listener Status (Enabled/Disabled)

Information on the parameters and configurations for the above settings can be found here and here.

For ease of use, we also added help tips to the UI, as well as examples where necessary.

If there is more than one listener associated with your data sync, you still need to configure it via the Listener Configuration table.
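As an illustrative sketch (not Cinchy code), the Auto Offset Reset setting determines where a newly enabled listener starts reading a stream of already-queued messages:

```python
# Illustrative sketch (not Cinchy code): how an Auto Offset Reset of
# "earliest" vs "latest" decides where a newly enabled listener starts
# reading from a stream of already-queued messages.
def starting_position(auto_offset_reset: str, messages_in_stream: int) -> int:
    if auto_offset_reset == "earliest":
        return 0                      # replay every message already in the stream
    if auto_offset_reset == "latest":
        return messages_in_stream     # read only messages that arrive from now on
    raise ValueError(f"Unsupported Auto Offset Reset value: {auto_offset_reset}")

# With 100 messages already queued:
earliest = starting_position("earliest", 100)   # 0: process the backlog
latest = starting_position("latest", 100)       # 100: skip the backlog
```

See the linked listener configuration documentation for the exact values your source supports.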

New Source: Oracle Polling Connector

We added Oracle as a new database type for Polling Events in Connections. Data Polling is a source option first featured in Cinchy v5.4 which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling a much easier, more effective, and streamlined process, and avoids the complex orchestration logic that was previously necessary.

Source filter additions

For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, we added a new "Conditional" option for source filters in the Connections UI. Similar to how the "Conditional Changed Record Behaviour" capability works, once selected you can define the conditions upon which data is pulled into your source via the filter. After data is pulled from the source, the new conditional UI filters the set of returned records down to those that match the defined conditions.
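A tiny sketch (not Cinchy code) of what this post-pull conditional filtering does: records come back from the source, and only those satisfying every defined condition move on to the sync.

```python
# Illustrative sketch (not Cinchy code): a conditional source filter keeps
# only the records that match all defined conditions after they have been
# pulled from the source.
def apply_conditional_filter(records, conditions):
    """conditions: a list of predicates; a record must satisfy all of them."""
    return [r for r in records if all(cond(r) for cond in conditions)]

pulled = [{"status": "active", "qty": 5}, {"status": "closed", "qty": 2}]
kept = apply_conditional_filter(pulled, [lambda r: r["status"] == "active"])
```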

Cinchy Secrets table

The Cinchy platform now comes with a new way to store secrets — the Cinchy Secrets Table. Adhering to Cinchy's Universal Access Controls, you can use this table like a key vault (such as Azure Key Vault or AWS Secrets Manager) to store sensitive data that's only accessible to the users or user groups you give access to.

You can use secrets stored in this table anywhere a regular variable can go when configuring data syncs, including but not limited to:

  • As part of a connection string;

  • Within a REST Header, URL, or Body;

  • As an Access Key ID.

You can also use it in a Listener Configuration.

Additionally, we've implemented a new API endpoint for retrieving your secrets. Using the below endpoint, fill in your <base-url>, <secret-name>, and <domain-name> to retrieve the referenced secret.

This endpoint works with Cinchy’s Personal Access Token capability, as well as Access Tokens retrieved from your IDP.

Blank Example:


Populated Example:


The API will return an object in the below format:

    {
        "secretValue": "password123"
    }
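A minimal Python sketch of calling this endpoint. The endpoint path and the `secretName`/`domain` query parameter names below are assumptions — verify them against your platform's API reference — and the bearer token can be either a Personal Access Token or an IDP access token.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed endpoint path and query parameter names -- confirm against your
# platform's API documentation before use.
def build_secret_url(base_url: str, domain_name: str, secret_name: str) -> str:
    """Builds the secret-retrieval URL from its three placeholders."""
    query = urlencode({"secretName": secret_name, "domain": domain_name})
    return f"{base_url.rstrip('/')}/api/v1.0/secrets-manager/secret?{query}"

def build_secret_request(url: str, access_token: str) -> Request:
    # Both Personal Access Tokens and IDP access tokens are sent as Bearer tokens.
    return Request(url, headers={"Authorization": f"Bearer {access_token}"})

url = build_secret_url("https://cinchy.mycompany.com", "Sandbox", "ApiPassword")
request = build_secret_request(url, "my-access-token")
```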

Polling listener optimization

To improve your Connections experience, we made various optimizations to our Polling Event Listener.

  • We added a new configurable property, DataPollingConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to run queries against the source database, which works to reduce the load against the database. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "DataPollingConcurrencyIndex": <numberOfThreads>

  • We added a new configurable property, QueueWriteConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which provides more consistent batching by the worker and reduces batching errors. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "QueueWriteConcurrencyIndex": <numberOfThreads>. Note that this index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at any moment in time.

  • We added a new mandatory property, CursorConfiguration.CursorColumnDataType, to the Listener Topic for the Data Polling Event. This change was made in tandem with an update that ensures the database query always moves the offset, regardless of whether the query returned any records. This helps to ensure that the performance of the source database isn't weighed down by constantly running heavy queries over a wide range of records when the queries return no data. The value of this mandatory property must match the column type of the source database system for proper casting of parameters.

  • We added a new configurable property, CursorConfiguration.Distinct, to the Listener Topic for the Data Polling Event. This property is a true/false Boolean type that, when set to true, applies a distinct clause on your query to avoid any duplicate records.
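The offset behaviour described above can be sketched as follows (illustrative only, not Cinchy code): the cursor advances by the batch size on every poll, whether or not the poll returned rows.

```python
# Illustrative sketch (not Cinchy code): the polling offset now always
# advances, even when a poll returns no records, so the listener never
# re-runs the same heavy query over an unchanged range.
def next_offset(current_offset: int, batch_size: int, records: list) -> int:
    # Advance by the batch size regardless of whether records were returned.
    return current_offset + batch_size

offset = 0
offset = next_offset(offset, 500, [])        # empty poll still moves the cursor
offset = next_offset(offset, 500, ["row"])   # so does a poll that returns data
```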

// App Settings JSON Example
// Example of the new configurable properties DataPollingConcurrencyIndex (set to 1) and QueueWriteConcurrencyIndex (set to 1)
"AppSettings": {
    "GetNewListenerConfigsInterval": "",
    "StateFileWriteDelaySeconds": "",
    "KafkaClientConfig": {
      "BootstrapServers": ""
    },
    "KafkaRealtimeDatasyncTopic": "",
    "KafkaJobCancellationTopic": "",
    "DataPollingConcurrencyIndex": 1,
    "QueueWriteConcurrencyIndex": 1
}

// Listener Config Topic Example
// Example of the new mandatory CursorColumnDataType property (set to "int") and the optional Distinct property (set to "true")
"CursorConfiguration": {
    "FromClause": "",
    "CursorColumn": "",
    "BatchSize": "",
    "FilterCondition": "",
    "Columns": [],
    "Distinct": "true",
    "CursorColumnDataType": "int",
    "Delay": ""
}
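Conceptually, the concurrency indexes described above act like a semaphore: only a fixed number of threads may run work at once, and the rest wait. A minimal sketch (illustrative only, not Cinchy code):

```python
import threading

# Illustrative sketch (not Cinchy code): a concurrency index behaves like a
# semaphore that caps how many threads may query the source database (or push
# messages to the queue) at the same time.
class ConcurrencyIndex:
    def __init__(self, index: int = 12):      # 12 mirrors the documented default
        self._slots = threading.Semaphore(index)

    def run(self, work):
        with self._slots:                     # blocks once `index` threads are active
            return work()

# QueueWriteConcurrencyIndex = 1: only one thread pushes to the queue at a time.
queue_writer = ConcurrencyIndex(index=1)
result = queue_writer.run(lambda: "message pushed")
```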



We made various enhancements to the Connections Experience which should help to simplify and streamline your ability to create and maintain data synchronizations across the platform. Examples of these changes can be found in our Data Sync documentation.

Radio buttons for selection

  • Replaced drop-down menus with radio buttons for the following options:

    • Sync Strategy

    • Source Schema Data Types

    • Source Schema "Add Column"

Improved visibility

  • Expanded the width and height of source, destination, and connections drop-down menus to ensure visibility, even on screens with varying sizes.

Streamlined file-based source fields

  • Streamlined the organization of file-based source fields for greater efficiency.

Simplified options

  • Eliminated the following fields for a more focused interface:

    • Source > Cinchy Table > Model

    • Info > Version

  • The API Response Format field has been removed from the REST Source configuration. This change reflects that the only supported response format is JSON.

Refined order of operations

  • Reorganized the process steps, moving the "Permissions" step within the "Info" tab.

Clearer terminology

  • Adjusted terminology for clarity and consistency:

    • Renamed Sync Behaviour tab to Sync Actions.

    • Replaced Parameters with Variables.

    • Changed "Sync Pattern" to Sync Strategy in the Sync Actions tab.

    • Updated Column Mappings to Mappings in the Destination tab.

    • Substituted Access Token with API Key in the Copper Source, aligning with Copper's documentation language.

Enhanced guidance

  • Included descriptive explanations in various sections, such as Mapping, Schema, and Sync Behaviour, to provide comprehensive guidance during data sync configuration.

Unified language

  • Standardized language used in file-based connectors across all Sources.

Improved clarity

  • Added clarifying text throughout the interface for smoother navigation and configuration, fostering a more user-friendly experience.

Organizational enhancements

  • Grouped Sources by type, distinguishing between Batch and Event categories.

  • Implemented alphabetical sorting for improved accessibility and ease of locating connections.

Simplified Destination setup

We've streamlined the destination setup process for data syncs. When selecting a Source other than Cinchy, the destination is now automatically set as Cinchy Table. This enhancement speeds up the creation of data syncs.

Unique identifiers for saved connections

To assist sharing and collaboration on connections, we've introduced unique URLs for all saved connections. Each connection now possesses a unique URL that can be shared with other platform users. This URL links directly to the saved configuration.

Enhanced Load Metadata process

We've made significant improvements to the Load Metadata sources and destinations, enhancing user experience:

  • The Load Metadata modal no longer appears automatically when selecting a relevant source or destination.

  • The availability of the Load Metadata button is conditional on filling out parameters in the Connection section.

  • Clicking the Load Metadata button now directly takes you to metadata columns, skipping the interstitial modal.

  • In the Schema section, all columns are now collapsed by default. Manually added columns maintain an expanded view.

Redesigned UI for Listener Configurations

For simpler real-time sync setups, the Cinchy Event Broker has a new Listener section. This section assists in creating topic JSON for listener configurations, eliminating the need to manually set up topic JSON in the Listener Config table. Refer to the Cinchy Event Broker source page for details on topic JSON fields.

Dismiss modals with Escape

We've introduced the ability to dismiss most modals using the Escape key. This enhancement provides a more convenient and user-friendly interaction experience.


Log outputs

To help simplify and streamline the Connections experience, you can now view the output for each job by clicking the Output button in the Jobs tab of the UI after you run a sync.

This links to the Execution Log table with a filter set for your specific sync, which helps you reach your execution-related data quicker and easier than before.

Log full REST Target HTTP response

We now log the full REST Target HTTP response in the data sync Execution Errors table to provide you with more detailed information about your job. This replaces the original log that only contained the HTTP response status code.

MongoDB update

We continue to provide optimization updates to our Connections capabilities. v5.7 of the Cinchy platform has the following updates for the MongoDB Event Stream:

  • We added a new configurable property, QueueWriteConcurrencyIndex, to the MongoDB Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which provides more consistent batching by the worker and reduces batching errors. The default number of threads is set to 12. To configure this property, navigate to the appSettings.json > QueueWriteConcurrencyIndex: <numberOfThreads>. This index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at any moment in time.

  • We also added a new optional property to the MongoDB Listener Topic, changeStreamSettings.batchSize, which provides a configurable way to set your own batch size on the MongoDB Change Stream Listener.

  "database": "",
  "collection": "",
  "changeStreamSettings": {
    "pipelineStages": [],
    "batchSize": "1000"
  }

Faster query performance for PostgreSQL multi-select column joins

We optimized PostgreSQL query performance when referencing multi-select columns.

Improved query performance using CASE statements

We improved query performance when using a CASE statement on a Link reference.


UI changes

  • We consolidated all actions into a single menu for easier navigation.

  • We moved Create new record into the single menu and renamed it to Create.

  • We added an option to copy the record link (URL) to the clipboard.

  • We changed Back to Table View to View Record in Table.

Forms action bar

To improve the user experience and make interacting with forms easier, we made the Forms action bar always visible when you scroll through a form.

URL sync with record selection

We updated the URL to accurately match the record currently displayed when you switch records from the records dropdown menu.

Unsaved changes prompt in forms

You'll now get a prompt to save if you have unsaved changes in a form.

Required fields alert for child forms

We added a warning message in child forms when essential columns like "Child Form Link Field" or both "Child Form Parent ID" and "Child Form Link ID" are missing, as they're needed for proper functionality.


General security enhancements

We made several updates and enhancements to packages across Cinchy to improve our platform security.

Link column dropdown improvements

We updated the dropdown menus for Link columns to display selected and deleted values at the top of the list, so that you don't need to scroll through long lists just to find the ones you've selected.

IdentityServer4 to IdentityServer6 upgrade

We upgraded our IDP from IdentityServer4 to IdentityServer6 to ensure we're maintaining the highest standard of security for your platform.

Add Execute function to UDF extensions

We added execute, a new method for UDF extensions. This new query call returns a queryResult object that contains additional information about your result. For more information, see the Cinchy User Defined Functions page.

Expand platform support for DXD

We added additional system columns to extend the number of core Cinchy objects that can be managed through DXD 1.7 and higher.

The newly supported Cinchy objects are:

  • Views (Data Browser)

  • Listener Config

  • Secrets

  • Pre-install Scripts

  • Post-install Scripts

  • Webhooks

mTLS support

We implemented Istio mTLS support to ensure secure/TLS in-cluster communication of Cinchy components.



  • We fixed a bug in the Cinchy Upgrade Utility that was causing the use of the -c flag, which is meant to delete extra metadata created on the database, to instead run (or rerun) the entire upgrade process.

  • We fixed a bug that was stripping query parameters from Relative URLs when they were used as the Application URL of an applet. For example, the bug would have stripped out a "q=1" parameter, leaving only an Absolute URL in lieu of a Relative one.

  • We fixed an issue with the behaviour of cached calculated columns when using multi-select data types (Link, Choice, and Hierarchy) with Change Approval enabled. These data types should now work as expected.

  • We resolved an issue that prevented view exports from reaching the maximum limit of 250,000 records.


  • We fixed a bug where the UUID/ObjectId in a MongoDB Change Stream Sourced data sync wasn't being serialized into text format. If you have any MongoDB Stream Sourced syncs currently utilizing the UUID/ObjectId, you may need to adjust accordingly when referencing the columns with those data types.

// Previous UUID/ObjectIDs would have been serialized as the below:
  "_id": ObjectId('644054f5f88104157fa9428e'),
  "uuid": UUID('ca8a3df8-b029-43ed-a691-634f7f0605f6')

// They will now serialize into text format like this:
  "_id": "644054f5f88104157fa9428e",
  "uuid": "ca8a3df8-b029-43ed-a691-634f7f0605f6"
  • We fixed a bug where setting a user’s time zone to UTC (Coordinated Universal Time) would result in no data being returned in any tables.

  • We fixed a bug where the Sync GUID of Saved Queries transferred over via DXD would null out.

  • We fixed a bug affecting the MongoDB Event Listener wherein the “auto offset reset” functionality would not work as anticipated when set to earliest.

  • We fixed a bug where failed jobs would return errors for logs that haven't yet been created. Log files now correctly search for only the relevant logs for the failed job.

  • We fixed an issue in the data configuration table where the IF field for the Delimited File > Conditional Calculated Column wasn't displaying correctly.

  • We resolved an issue where using multiple parameters while configuring data syncs could result in parsing and execution errors.

  • We fixed a bug preventing calculated columns from working in MongoDB targets for data syncs.

  • We fixed a bug where users were prompted to restore unsaved changes for a new connection when no configuration changes to a data sync were made.

  • We fixed a bug that was causing the platform to fail upon initializing when a System User had been added to any user group (such as the Connections or Admin groups).

  • We fixed a bug where passing an encrypted value to a variable used in a field encrypted by the connections UI would cause the sync to fail. You can now use variables with either encrypted or plaintext values.

  • We fixed a bug where using the "Delta" sync strategy led to duplicating existing records in some destinations before inserting the new rows of data.


  • We fixed a bug where child record tables within a form would display data differently when exported to a PDF.

  • We fixed an issue where the first load of an applet wouldn't render sections that require Cinchy data until you refreshed the page.

  • We fixed an issue where raw HTML was being displayed instead of HTML hyperlinks.

  • We fixed a bug that prevented a form from loading if you deleted an associated child form.

  • We fixed an issue with the record dropdown search where inputs of more than 30 characters caused a failure to match.
