This is the overview page for the Cinchy version 5 release notes.
This page contains the release notes for Cinchy version 5.0.
Cinchy on Kubernetes: You can now deploy Cinchy v5 on Kubernetes, an open-source system that manages and automates the full lifecycle of container-based applications. This deployment option brings a myriad of features that simplify your deployment and enhance your scaling: Kubernetes can maximize your container capacity and easily scale up/down with your current operations.
Fluentbit and Opensearch: Available to those who deploy on Kubernetes, Fluentbit collects logs for all pods that write to stdout, which are then displayed through the Opensearch visual dashboard. This streamlines your search for information by putting the control in your hands and compiling your logs in one easy-to-access place: you can now write a query against all of your logs, in all of your environments. You get a default configuration out of the box, and you can also customize your dashboards.
Grafana and Prometheus: With the Kubernetes addition, you now have access to Prometheus to collect your metrics. Prometheus records real-time metrics in a time-series database used for event monitoring and alerting. You can then create custom dashboards through Grafana to display your data in easy-to-use visuals for reporting on your metrics, and even set up push alerts based on your custom needs.
PostgreSQL: You now have the option to deploy Cinchy on PostgreSQL, an open-source alternative to Microsoft SQL Server that can save you the cost of licensing fees. It is standards-compliant, reliable, highly programmable, and allows for concurrency. This deployment option makes Cinchy more affordable and scalable. We recommend Amazon Aurora for AWS users.
Kafka: Kafka is an open-source event streaming platform. In Cinchy it acts as the middleware that allows messaging between components through a queuing mechanism.
Redis: Redis is currently being used to facilitate a distributed lock using RedLock, which guarantees lock synchronizations across Cinchy instances. It is also a storage location for the execution output when running batch data syncs.
All components have been transitioned from Log4Net to Serilog.
The BuildIdentifier property from the appsettings.json will now appear in the healthcheck endpoint at the root level of the json payload, with a key of “buildidentifier”.
Elmah was removed from the platform.
Refactored hidden passwords in initialization.
Added the ability to ingest S3 data sources for delimited or Parquet files.
Improved performance of the Connection UI when there is a large (250+) number of columns/mappings.
Optimized bulk upsert performance.
Added the following UI optimizations for handling large tables: default views set to collapsed, page size limited to 1k records, and a button for getting the row count.
Added the ability for Connections to read/write error files to S3 when an S3 bucket is specified.
Added accessibility fixes.
We have added support for PUT, PATCH, and DELETE in UDF Extensions in addition to GET/POST.
You are able to override a Kafka topic within the appsettings for Connections, the Worker, and the Event Listener.
We have added support for the INSERT INTO SELECT statement, which copies data from one table and inserts it into another table. See the INSERT INTO SELECT documentation for more information.
When using Postgres, the SELECT @cinchy_row_id function will fail in queries. Instead, use the OUTPUT clause with INSERT, UPDATE, or DELETE. See the OUTPUT documentation for more information.
We have added support for the TRUNCATE TABLE statement, which removes all rows from a table without logging the individual row deletions. See the TRUNCATE TABLE documentation for more information. (Please note that we do not support the "With Partitions" argument.)
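A minimal sketch of all three statements follows; the [HR] domain, tables, and columns are hypothetical, and the exact system column name ([Cinchy Id]) should be confirmed against your environment.

```sql
-- INSERT INTO SELECT (hypothetical tables): copy inactive employees into an archive table
INSERT INTO [HR].[Employee Archive] ([Name], [Department])
SELECT [Name], [Department]
FROM [HR].[Employees]
WHERE [Status] = 'Inactive'

-- On Postgres, use OUTPUT instead of SELECT @cinchy_row_id to retrieve the new row's ID
INSERT INTO [HR].[Employees] ([Name], [Department])
OUTPUT INSERTED.[Cinchy Id]
VALUES ('Jane Doe', 'Finance')

-- TRUNCATE TABLE: remove all rows without logging individual row deletions
TRUNCATE TABLE [HR].[Employee Archive]
```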
Fixed an issue where two users approving data could cause the row to become corrupted.
Addressed a memory leak in query translation through the creation of a background task that removes expired objects from the cache.
Fixed an issue where updating the formula of a non-cached calculated column wouldn’t reflect properly in the Table Columns table.
Addressed an issue where changing a field in a table with multi-select links resulted in the removal of the field value from the version history.
Fixed an API issue when updating UDF columns.
Fixed an issue where numeric calculated columns that resolved off of a link column's numeric display column wouldn't work.
Enabled Content-Type headers to be added for REST API data syncs during GET requests.
Added frame-ancestors to the UI to prevent UI redress attacks.
Implemented HSTS headers for when HTTPS is enabled on Cinchy.
This page details the release notes for Cinchy v5.3.
For instructions on how to upgrade to the latest version of Cinchy, see the upgrade documentation.
We are continuing to improve our Connections offerings, and we now support Kafka as a data sync target in Connections.
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.
Avro is an open-source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro stores both the data definition and the data together in one message or file. The data definition is stored in JSON format, making it easy to read and interpret; the data itself is stored in binary format, making it compact and efficient.
Some of the benefits of using Avro as a data format (a minimal schema example follows the list):
It's compact;
It has a direct mapping to/from JSON;
It's fast;
It has bindings for a wide variety of programming languages.
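For illustration, here is a minimal Avro schema for a hypothetical Employee record; this JSON data definition travels alongside the binary-encoded data:

```json
{
  "type": "record",
  "name": "Employee",
  "namespace": "com.example",
  "fields": [
    { "name": "id", "type": "long" },
    { "name": "name", "type": "string" },
    { "name": "department", "type": ["null", "string"], "default": null }
  ]
}
```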
Focus the results of your Network Map to show only the data that you really want to see with our new URL parameters.
You can now add Target Node, Depth Level, and Max Depth Level Parameters, if you choose.
Example: <base url>/apps/datanetworkvisualizer?targetNode=&maxDepth=&depthLevel=
Target Node: Using the Target Node parameter defines which of your nodes will be the central node from which all connections branch.
Target Node uses the TableID number, which you can find in the URL of any table.
Example: <base url>/apps/datanetworkvisualizer?targetNode=8 will show TableID 8 as the central node
Max Depth: This parameter defines how many levels of network hierarchy you want to display.
Example: <base url>/apps/datanetworkvisualizer?maxDepth=2 will only show you two levels of connections.
Depth Level: Depth Level is a UI parameter that will highlight/focus on a certain depth of connections.
Example: <base url>/apps/datanetworkvisualizer?DepthLevel=1 will highlight all first level network connections, while the rest will appear muted.
The below example visualizer uses the following URL: <base url>/apps/datanetworkvisualizer?targetNode=8&maxDepth=2&depthLevel=1
It shows Table ID 8 ("Groups") as the central node.
It only displays the Max Depth of 2 connections from the central node.
It highlights the nodes that have a Depth Level of 1 from the central node.
We have increased the length of the [Parameters] field in the [Cinchy].[Execution Log] to 100,000 characters.
Two new parameters are now available for use in real-time syncs that have a Cinchy table as the target: @InsertedRecordIds() and @UpdatedRecordIds(). Used in post-sync scripts, they resolve to the Cinchy IDs of the records inserted or updated by the sync, respectively, formatted as a comma-separated list.
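A minimal post-sync sketch (the [Sales].[Orders] table and [Sync Status] column are hypothetical) that flags the records inserted by the sync:

```sql
-- @InsertedRecordIds() resolves to a comma-separated list of the inserted Cinchy IDs
UPDATE t
SET t.[Sync Status] = 'Processed'
FROM [Sales].[Orders] t
WHERE t.[Cinchy Id] IN (@InsertedRecordIds())
```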
We have fixed a bug that was preventing some new SSO users belonging to existing active directory groups from seeing tables that they should have access to.
We have fixed a bug where a Webhook would return a 400 error if a JSON body was provided, and the key was in the query parameter of the HTTP request.
This page contains the release notes for version 5.4 of the Cinchy platform.
Version 5.4 of the platform was released on January 18th, 2023.
For instructions on how to upgrade to the latest version of Cinchy, see the upgrade documentation.
Customize the appearance of your Form text with our new rich text editing capabilities. Enabling this on your text columns will give you access to exciting new formatting options previously unavailable in Forms such as:
Bold, Italic, Underlined text
Checklists
Headers
Hyperlinks
etc.
For more information on how to make a visual impact with our new rich text editing capabilities, please review the documentation.
A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique ID. All Cinchy Tables and Columns have a GUID.
This feature is particularly useful when deploying between Cinchy instances.
For example, in a model deployment, you must have matching GUIDs on your columns in order for them to properly load between environment A and environment B. There might be times when these GUIDs don’t automatically match, however, such as if you manually added a new column to environment B and also manually added it to environment A.
In this case, the two columns would have different GUIDs, and the model deployment would fail. With this new capability, however, you can match up conflicting GUIDs to properly load your model.
Version 5.4 of the Cinchy platform introduces data polling, which uses the Cinchy Event Listener to continuously monitor and sync data entries from your SQL Server or DB2 server into your Cinchy table. This capability makes data polling a much easier, more effective, and streamlined process and avoids implementing the complex orchestration logic that was previously necessary to capture frequently changing data.
For new environments (or if your setting was previously left blank), we have changed the Cinchy default session timeout from 30 minutes to 7 days. This will keep you logged in and working without interruptions. You can further change or revert this session timeout value in your appsettings.
In an IIS deployment, you can find the value in your CinchySSO > appsettings.json
In a Kubernetes deployment, you can find the value in your deployment.json file.
Because of the .NET update, if you are upgrading to 5.4+ on a SQL Server database you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass certificate chain validation.
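An illustrative fragment of the change (the server, database, credentials, and surrounding key names are placeholders; match them to your existing appsettings.json):

```json
{
  "ConnectionStrings": {
    "Database": "Server=your-server;Database=Cinchy;User ID=your-user;Password=your-password;TrustServerCertificate=True"
  }
}
```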
We have added a silent refresh to the Connections experience to keep your session active while you're on the UI and to keep you working without interruptions.
Real time data sync will now continue to retry if an "Out of Memory Exception" is thrown, avoiding unnecessary downtime.
You now have the ability to choose between Debian- or Alpine-based Docker images when using a Kubernetes deployment of the Cinchy platform, enabling you to connect to a DB2 data source in Connections.
Alpine: "5.x.x"
Debian: "5.x.x-debian"
We have increased the average throughput for CDC subscriptions returning the Cinchy ID, so that it will now be able to process a greater number of events per second. Being able to reliably exceed 1000 events per second, based on the average use case, means that you can leverage the CDC capability for more demanding use cases.
Prior to this release, the Files API could only handle files up to 100 MB. We have now upped the maximum default file size to 1 GB and added a configurable property to allow you to set your own upload size should you wish.
In an IIS deployment, you can find the value in your Cinchy > appsettings.json
In a Kubernetes deployment, you can find the value in your deployment.json file.
We have fixed an error that occurred when attempting a data sync with conflicting target and source data types in link columns, where the error message would read: Value must be specified from the available options
We have fixed an issue that was preventing new Connection jobs from starting when a previous job got stuck.
We have fixed an issue where data syncs would fail if your sync key used a Target Column with a Link Column property that is different from the Primary Linked Column in the table definition.
We have fixed a bug that was impacting write performance to tables on PostgreSQL with Data Change Notifications enabled.
We have fixed a "cell entitlements failed" error on Forms that would occur if a Form column contained a single quote in the column name.
We have fixed an issue on Forms where adding a [Created By] or [Modified By] field would return an error.
The /healthcheck no longer redirects to the initialization screen during a Cinchy startup, allowing you to properly hit the endpoint.
This page captures the release notes for Cinchy version 5.2.
Version 5.2 of the platform was released on September 16th, 2022.
For instructions on how to upgrade to the latest version of Cinchy,
We have increased the number of possible Cinchy IDs that can be generated. This in turn allows the creation of more records within one table, so that you can create and manage larger data sets.
Previous Limit: 2,147,483,647 (2^31-1) Cinchy IDs per table
Updated Limit: 9,223,372,036,854,775,807 (2^63-1) Cinchy IDs per table
For backward compatibility with your database, you will need to manually run the provided upgrade script against your TSQL or PGSQL databases. For instructions on how to run this upgrade, see the upgrade documentation.
WARNING: This script is REQUIRED when upgrading from v5.1 or lower to v5.2 or higher, otherwise your platform will break.
We have added new optimizations and quicker processing of Kafka messages for real time data syncs in Connections.
For added security, any logged password or sensitive parameters from the request details of a SOAP connector data sync is now redacted in the logs.
Dead messages in the event listener are now written out to the execution errors table for easy collection and querying.
The Connections experience now supports sourcing file based data sources from Azure Blob Storage and Amazon S3.
We have made improvements to the Files API to avoid cache build up and optimize the API.
General security fixes and updates.
We have added anonymous API access to GraphQL, no token required.
We have added write operations to our GraphQL beta, meaning that you can now insert and modify data.
We have fixed a UI instability bug that resulted in the inability to resize view panes in the query designer and difficulty in selecting any cell in the first row of a table. This bug was affecting Chromium users (Google Chrome, Brave, Microsoft Edge, etc.) who had recently updated their internet browsers.
We have fixed a bug that caused the “Created” column to incorrectly display the last approved date instead of the column's created date.
You can now add/remove a column from a table that has a columnar index without needing to remove said index entirely. We have also fixed a bug that prevented users from reapplying their columnar index to a table once it had been removed.
We have fixed an issue where an “Unsupported Function Call” error was raised in certain situations when using the REPLACE function in conditional calculated columns in Connections.
We have fixed a bug that caused unnecessary updates to the Users table when a user’s Language and Region was not set.
We have fixed a bug that was causing some GUID calculated columns to appear as blank, such as in the Integrated Clients table. If you are experiencing this bug, a manual update on the affected rows, either through the UI or through an UPDATE query, will resolve it.
We have fixed an issue where the '&' in links was sometimes showing up as '&amp;' in the table view. This fix will only appear for customers on Postgres or Microsoft SQL Server 2017 or higher.
We have fixed an issue where certain UPDATE statements on multi-select link columns were failing to properly update with the link values specified. This bug was affecting statements with long strings executed via the API.
We have fixed a bug that was causing the Event Listener to pick up and process messages from deleted configs in the Listener Configs table.
We have fixed a bug that was causing an InvalidOperationException when executing a POST request to a Saved Query API.
We have fixed a bug that was throwing errors on reconciliation when Data Syncs compared Text Conditional Calculated Columns to Links (PGSQL).
FOR JSON PATH now works as expected in Postgres deployments.
For information on setting up data syncs with Kafka as a target, see the Connections documentation.
We have also added support for AVRO as a data format and added integration with the Kafka Schema Registry, which helps enforce data governance within a Kafka architecture.
For more about AVRO and Kafka, see the sections above.
For information on configuring AVRO in your platform, see the documentation.
We continue to optimize our capabilities by improving memory utilization and performance.
You can read more about setting up Data Polling in the documentation.
A mandatory database upgrade script was introduced in v5.2 that increased the number of possible Cinchy IDs that can be generated. To streamline this process further, we have created a utility to deploy the changes. This should save you valuable time and resources when performing the upgrade, even on large databases.
For more information on the utility, see the documentation.
We have upgraded our application components to .NET 6.0 to ensure official Microsoft support for another 2 years.
In an IIS deployment, you must update your connectionString in your Cinchy and CinchySSO appsettings.
In a Kubernetes deployment, you must update your connectionString in your deployment.json file.
When installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
You now have the option to update the default passwords for Grafana and Opensearch in a Kubernetes deployment by configuring your deployment.json file. See the documentation for instructions.
Note: When choosing your maximum upload size, keep in mind that very large files may slow down your database if you are using your database for file storage.
We have expanded our Connections capabilities to support binary file types as a data source.
We have improved the Connections experience by making it optional to input a username or password when starting a batch sync job. Leave these fields blank if you want to run the job as the currently logged-in user.
You now have the option to free up database space by using S3-compatible or Azure Blob Storage for file storage. This is configured in your deployment.json file for Kubernetes installations and in the appsettings.json for an IIS deployment.
If you are upgrading from an earlier v5 version, you can update your previous configuration to take advantage of this. For further instructions, see the upgrade documentation.
You are now able to run a delete operation on records that link to the Files table to delete the underlying referenced file.
This page details the Cinchy v5.1 release notes.
Our GraphQL API provides a complete and understandable description of your data and gives you the power to ask for exactly the data you need and nothing more, all in a single request, while leveraging the existing ecosystem of GraphQL developer tools.
This is a beta release that offers read-only queries. Future releases will include more query features and mutation support (writes).
Performing Data Sync from Cinchy to Salesforce no longer requires write access to the sync key column. This means that you can maintain your Salesforce environment and security protocols without needing to either modify them or create additional attributes, for your sync to work.
We have introduced a new STRING_ESCAPE() function that escapes single quotes when wrapped around data sync parameters. It uses the following syntax to wrap around parameters or column references respectively: STRING_ESCAPE(@COLUMN('yourcolumn')) or STRING_ESCAPE(@yourparameter). This function is particularly useful when used in a post sync script's CQL.
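A minimal sketch with a hypothetical @comment parameter: without the escape, a substituted value containing a single quote (for example, O'Brien) would break the quoted literal.

```sql
-- Post-sync script fragment (hypothetical table/column); STRING_ESCAPE escapes
-- any single quotes inside the substituted parameter value
INSERT INTO [CRM].[Contact Notes] ([Note])
VALUES ('STRING_ESCAPE(@comment)')
```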
We have added WCAG 2.1 AA Accessibility fixes to improve screen-reader performance and keyboard navigation accessibility.
We’ve implemented a new loading screen for when Cinchy is installing and initializing.
We have improved the performance of Meta Forms by reducing the rendering time and adding visual guides to help you see which form sections have completed loading.
Date fields with custom display formats will now render correctly, as opposed to showing up in mm/dd/yyyy format by default.
In forms that have a 1:1 parent/child hierarchy, we have added the option to render the child form as a flattened form, instead of in a table grid.
To improve UI consistency across Forms, the record selection drop down will now appear even if no records exist in the destination table.
We have added a new function, GetLastModifiedBy([Column]), which will return the CinchyID of the user who last modified the specified column. For more information on this new function, review the documentation here.
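A quick sketch against a hypothetical table:

```sql
-- Returns the CinchyID of the user who last modified each row's [Status] cell
SELECT GetLastModifiedBy([Status]) AS [Last Modified By]
FROM [HR].[Employees]
```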
We have fixed an error where scrolling in a table with a file column in certain situations prevented the UI from rendering all the data.
We have fixed an issue where sorting by columns with a ‘%’ in the column name caused the rows not to sort correctly in the UI.
We have fixed a bug in Meta Forms that prevented queries in child form filters from working as expected when using OR conditions.
This page outlines the various changes made to the Cinchy platform in version 5.6.
Cinchy version 5.6 was released on May 31st, 2023.
For instructions on how to upgrade your platform to the latest version, please review the documentation here.
When upgrading to Cinchy v5.6, there are mandatory changes that must be made within your platform appsettings files. For an IIS deployment this involves making manual updates to your appsettings.json files. For a Kubernetes deployment, the changes will reconcile automatically if you are deploying the new 5.6 template. If you are not deploying the new template, please reach out to the Support team. For instructions on how to upgrade your platform to the latest version, please review the documentation here.
If you are planning to update your platform to 5.6 on a Kubernetes deployment, please note that you will also need to update your AWS EKS Kubernetes version to 1.24.
The Kubernetes project runs a community-owned image registry to host its container images. On April 3rd, 2023, the registry k8s.gcr.io was deprecated and no further images for Kubernetes and related subprojects are being pushed to that location.
Instead, there is a new registry: registry.k8s.io.
New Cinchy Deployments: this change will be automatically reflected in your installation.
For Current Cinchy Deployments: please follow the instructions outlined in the upgrade guide to ensure your components are pointed to the correct image repo.
You can review the full details on this change on the Kubernetes blog.
To continuously improve our Connections experience, we have made changes to the Sync Behaviours tab for Full-File data syncs.
Record behaviour is now presented via radio buttons so that you can see and select options quicker and easier than ever before.
We have added a new "Conditional" option for Changed Record Behaviours. When Conditional is selected, you will be able to define the conditions upon which an Update should occur. For instance, you can set your condition such that an update will only occur when a "Status" column is changed to "Red", otherwise it will ignore the changed record. This new feature provides more granularity on the type of data being synced into your destination and allows for more detailed use cases. For more information on this new function please review the documentation here.
We have added support for AWS EKS EBS volume encryption for customers wishing to take advantage of industry-standard AES-256 data encryption without having to build, maintain, and secure their own key management infrastructure.
By default, the EKS worker nodes will have a gp3 storage class for new deployments. If you are already running a Cinchy environment, make sure to keep your eks_persistent_apps_storage_class set to gp2 within the DevOps automation aws.json file.
If you want to move to gp3 storage, or gp3 storage and volume encryption, you will have to delete the existing volumes/PVCs for Kafka, Redis, Opensearch, Logging Operator, and Event Listener with StatefulSets so that ArgoCD can recreate the proper resources.
Should your Kafka cluster pods not come back online after deleting the existing volumes/PVCs, restart the Kafka operators. You can verify the change by checking the storage class on the recreated volumes:
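The original command was not preserved on this page; a plausible equivalent check, assuming kubectl access to the cluster, is to list the recreated PVCs with their storage class:

```sh
# Hypothetical verification: confirm the PVCs now use the gp3 storage class
kubectl get pvc -A -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName
```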
Miscellaneous security fixes.
General CDC performance optimizations.
Continuing to increase our data sync capabilities and features, you can now use @CinchyID as a parameter in post sync scripts when the source is from a Cinchy Event (such as the Event Broker, the Event Triggered REST API, and the Event Triggered MongoDB sources). This means that you can now design post sync scripts that take advantage of the unique CinchyID value of your records.
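A short sketch (the audit table and columns are hypothetical) of a post-sync script keyed on the triggering record:

```sql
-- @CinchyID resolves to the Cinchy ID of the record that triggered the event
UPDATE [Operations].[Audit Log]
SET [Processed] = 1
WHERE [Source Record Id] = @CinchyID
```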
To better communicate the relationship between the Source and any required Listener Configurations, we have added help text to event-based sources in the Source step of a connection. This text clarifies when a listener configuration is required as part of the sync.
We have expanded on our Cinchy Event Triggered data sync source features (REST API and MongoDB), allowing you more freedom to utilize your data. You now have the ability to reference attributes of the CDC Event in your calculated columns. (Note that syncs making use of this must limit their batch size to 1.)
To better enable your business security and permission-based needs, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up an AWS IAM role for use in Connections, please review the documentation here.
To make troubleshooting easier, Connections builders are now able to download batch sync logs directly from the Experience UI. This prevents the need to either wait for logs to appear in Opensearch or rely on an administrator or support to provide logs.
You are also able to use AWS IAM roles when syncing S3 file or DynamoDB sources in Connections. For more information, please review the "Auth Type" field in the relevant data sync source pages.
To increase your data sync security and streamline authentication, we have added support for the use of x.509 certificate authentication for MongoDB Collection Sources, MongoDB (Cinchy Event Triggered) Sources, and MongoDB Targets. This new feature can be accessed directly from the Connections UI when configuring your data sync. For more information, please review the configuration pages for MongoDB Collection Source, MongoDB (Cinchy Event Triggered) Source, and MongoDB Targets.
We have fixed a bug that was causing bearer token authenticated APIs to stop working on insecure HTTP Cinchy environments.
We have fixed an issue relating to the .NET 6 upgrade that was causing the Event Listener and Worker to not start as a service on IIS in v5.4+ deployments.
We have fixed a “Column doesn’t exist” error that could occur in Postgres deployments when incrementing a column (ex: changing a column data type from number to text).
We have fixed a bug where table views containing only a single linked column record would appear blank for users with “read-only” permissions.
We have fixed a bug where the Listener Configuration message for a data sync using the MongoDB Event source would return as "running" after it was disabled during an exception event; the message will now correctly return an error in this case.
We have fixed a bug that was preventing DELETE actions from occurring when Change Approvals were enabled on a CDC source.
In continuing to provide useful troubleshooting tools, we have fixed a bug that was preventing dead messages from appearing in the Execution Errors table when errors occurred during the open connection phase of a target. This error may have also occurred when a MongoDB target had a connection string pointing to a non-existent port/server.
We have fixed a bug that was preventing Action Type column values of "Delete" from working with REST API target Delta syncs.
We have fixed a data sync issue preventing users from using environment variables or other parameters in connection strings.
We have fixed a bug in the Polling Event data sync where records would fail with a “unique constraint violation” if both an insert and an update statement happened at nearly the same time. In order to implement this fix, you need to add the “messageKeyExpression” parameter to your listener config when using the Polling Event as a source. Please review the documentation here for further information.
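An illustrative listener-config fragment only; the surrounding structure and the correct key expression depend on your source, so treat the value below as an assumption:

```json
{
  "messageKeyExpression": "id"
}
```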
We have fixed a bug that was causing data syncs to fail when doing platform event inserts of any type into Salesforce targets.
We have fixed a bug where using the ID Column in a Snowflake target sync would prevent insert and update operations from working.
We have fixed a bug where attempting to sync documents using a UUID (Universally Unique IDentifier) as a source ID in a MongoDB Event Triggered batch sync would result in a blank UUID value when saved to a Cinchy table.
We have made application stability and quality fixes to Forms, including:
Custom date formats now work in Grid, Form, and Child Form views.
A child form that has a Link column reference to a parent record now auto-populates with the parent record's identity.
A space has now been added between multi-select values when displaying a record in an embedded child table.
Negative numbers can now be entered into Number type inputs on forms.
We have fixed an issue where updated file attachments on a form would fail to save.
We have fixed a bug that was causing a “Can’t be Bound" error when you attempted to use an UPDATE query on a multi-select link column as a user with multiple filters active.
Cinchy version 5.5 was released on February 24, 2023.
For instructions on how to upgrade your platform to the latest version, please review the documentation
The Cinchy Upgrade Utility was previously introduced in v5.2 in order to facilitate a mandatory INT to BigInt upgrade. This tool will continue to be used in subsequent releases as an easy way to deploy necessary changes to your Cinchy platform.
For version 5.5, you must run the Upgrade Utility in order to fix a row-breaking issue that could be triggered on cells with over 4000 characters, where you are unable to update any column in your record.
Please review the documentation for further details.
You now have the option to use personal access tokens (PATs) in Cinchy, which are alternatives to using passwords for authentication. You can use a Cinchy PAT to call the Cinchy API as your current user, meaning your associated access controls will be honoured as well. Cinchy PATs, however, have an expiration date of up to 1 year. A single user can have up to 5 PATs active at one time.
For information on setting up, configuring, and managing PATs, please review the documentation.
We have added MongoDB to our Connections offering as both a source and target connector.
Review the following documentation to utilize this new capability in Cinchy:
We are continuing to improve our text editor functionality for forms. You can now embed tables and images into your text. We have also made various styling and usability quality of life updates, including the addition of checkbox style lists.
We have added support for ephemeral volumes in Connections on a Kubernetes deployment. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node; it allows pods to be started and stopped without being limited to the location of a persistent volume. Running more than one pod for Connections per availability zone enables you to effectively leverage auto-scaling functionality.
We have updated the Connections experience to enable more use cases. You can now use CDC parameters in Calculated Columns and use the CinchyID in the sync key in real-time syncs.
Kafka supports cluster encryption and authentication, which can encrypt data-in-transit between your applications and Kafka. We have added the ability to include this encryption/authentication in the Listener Config when setting up real-time syncs using Kafka.
This parameter specifies which protocol will be used for communication between client and server. Cinchy currently supports the following options (see the fragment after this list):
Plaintext: Unauthenticated, non-encrypted.
SaslPlaintext: SASL-based authentication, non-encrypted.
SaslSsl: SASL-based authentication, TLS-based encryption.
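An illustrative listener-config fragment; the property name and value shown here are assumptions based on the options above, so confirm them against your config schema:

```json
{
  "securityProtocol": "SaslSsl"
}
```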
We have improved the implementation of tooltips such that linked columns display the tables that they link to. Hovering over the i symbol on a linked column will show the linked domain and table in the following format: Domain - Table; ex: HR - Employees. You can now also see them in the grid view.
We have introduced a Retry Configuration for REST API sources and targets. This will automatically retry HTTP Requests on failure based on a defined set of conditions. This capability provides a mechanism to recover from transient errors such as network disruptions or temporary service outages.
We have increased the default retention for Prometheus from 5GB to 50GB to allow you to store more metric data at a time.
This change is automatically reflected in new v5.5 deployments. Customers on previous v5 versions wishing to implement the change are able to rerun the automation script and deploy the new template to reflect the update.
To make the Forms experience more responsive and process quicker, we have introduced lazy loading of records while searching. Instead of loading and rendering every form record in the search box, which can be a slow process for use cases with millions of records, lazy loading will initially retrieve a limited number of records. These results can then be further optimized by inputting your Lookup Filter Conditions.
We have added the ability to pass parameters from a REST response into post sync scripts during both real-time and batch data syncs, allowing you to do more with your REST API data.
Data changes in Cinchy (CDC) can now be used to trigger a data sync from a REST API or MongoDB data source to a specified target. This works as an alternative to RunQuery.
We have added two new functions, JSON_ESCAPE and URL_ESCAPE, which can be used in Connections to escape parameter values when constructing the body of a REST API request or the URL.
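Two illustrative uses (the endpoint and @COLUMN reference are hypothetical); each escape function wraps the substituted value so it stays valid in its context:

```
Request URL:   https://api.example.com/search?q=URL_ESCAPE(@COLUMN('Name'))
Request body:  { "name": "JSON_ESCAPE(@COLUMN('Name'))" }
```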
We have added an Authorization header type for REST API data syncs in Connections. An authorization request header can be used to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.
We have solved an issue that was causing Connections to get stuck behind long running jobs despite there being capacity to execute. This fix enables predictable execution behavior without stoppage.
We have fixed an issue in the MatchEngine where execution was failing in versions of Cinchy above 5.2.
File sourced data syncs will no longer fail, allowing you to run Connection jobs with uploaded files without the risk of a file not found error when auto-scaling is enabled.
In order to prevent needlessly exhausting Cinchy IDs, the platform will no longer continuously retry to update records that have failed to save. This can sometimes occur when a value causes a calculated field to violate a uniqueness constraint. If this error appears, you will have to manually update the cell to retry the save.
We have fixed a bug that was causing the Connections UI to crash if you attempted to run a job while there was an empty parameter in the Info tab (ex: no name or formula).
We have fixed a bug that would cause images in Forms to sometimes appear with a label above them, using the image's URL as the label's value.
We have fixed a bug that was forcibly terminating authenticated sessions in Grafana, now allowing you to work without interruptions.
We have solved an issue where using a form as a child form with file links wouldn't render the link thumbnail correctly in the "edit record" view.
We have fixed a bug that prevented record updates when multiple users attempted to update a row in quick succession.
We have fixed an issue where doing delta batch syncs with a REST API target wouldn’t replace the @COLUMN parameter correctly.
We have fixed a bug in Connections where an Oracle sync target would have the wrong tag in the Config XML.
We have fixed a bug that was causing a “Listener is running” message to erroneously appear when the status of the listener was actually set to Disabled.
We have fixed a bug that was preventing REST API real-time sync execution errors from being inserted into the execution errors table.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.
In order for the above tooltip improvement to reconcile in your Cinchy environment, you must deploy an up-to-date version of the Forms Data Experience. You can review the installation instructions and retrieve the package in the documentation.
For more information on using this configuration, refer to the documentation.
For an example and instructions on this capability, see the documentation.
For more information, please review the documentation.