This page provides information on both Schema Columns (used when configuring Data Sync Sources) and Column Mappings (used when configuring Data Sync Destinations).
Schema columns refer to your mapping on your data source. For example, if your source is a CSV with the columns 'Name', 'Age', and 'Company', you would set up three schema columns in the Connections UI or data sync XML to match. These schema columns are then mapped to your destination columns for your data sync target, so that the data knows where to go.
Note that you do not have to, and sometimes won't be able to, set up an exact 1:1 relationship between source columns/data and schema columns.
The only difference between setting up your schema columns in the Connections UI vs. in a data sync XML is the addition of the Alias column, which only appears in the Connections Experience. This attribute allows the user to give an alternative name to the column mapping (usually for easier readability). The various types of columns are detailed below.
Note that some source types have unique parameters not otherwise specified in other sources. You can find information on those, where applicable, in the source's main page.
You can review the various attribute descriptions here.
Fill in the following attributes for a Standard Column (Image 1):
Name: The name of your column
Formula: The formula associated with your calculated column
Data Type: The return data type of your column. This can be one of:
Text
Date
Number
Bool
Geometry
Geography
If a source column (of any type) is being synced into a Cinchy Target Table link column, the source column needs to be of dataType="Text", no matter what type it actually is.
Description: Describe your column
Advanced Settings:
You can select if you want this column to be mandatory
You can choose whether your data must be validated
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
If your data type was chosen as "text", you can choose whether to trim the whitespace or not
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern for your string replacement
Replacement
Note that you can have more than one String Replacement
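For reference, a minimal sketch of what such a column might look like in the data sync XML, using the schema attributes described later on this page (the <StringReplacement> element name and the pattern shown are illustrative assumptions only):

```xml
<Column name="Full Name" dataType="Text" maxLength="255" isMandatory="true"
        validateData="true" trimWhitespace="true" description="The person's full name">
  <!-- Hypothetical transformation: strip a leading "Mr. " or "Ms. " from the value -->
  <Transformations>
    <StringReplacement pattern="^(Mr|Ms)\.\s" replacement="" />
  </Transformations>
</Column>
```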
Fill in the following attributes for a Standard Calculated Column (Image 2):
Name: The name of your column
Formula: The formula associated with your calculated column
Data Type: The return data type of your column. This can be one of:
Text
Date
Number
Bool
Geometry
Geography
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Fill in the following attributes for a Conditional Calculated Column (Image 3):
Name: The name of your column
Data Type: The return data type of your column. This can be one of:
Text
Date
Number
Bool
Geometry
Geography
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Condition:
Name:
IF: Click Edit to create the "if" for your Conditional Statement (Image 4)
Then: Click Edit to create the "then" for your Conditional Statement (Image 5)
Default: Click Edit to create your default expression (Image 6)
Fill in the following attributes for a JavaScript Calculated Column (Image 7):
Name: The name of your column
Data Type: The return data type of your column. This can be one of:
Text
Date
Number
Bool
Geometry
Geography
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Script: Enter your JavaScript
This XML element defines each column in the data set as well as their data type:
name
The user defined name for each column. This is used in <ColumnMapping> when you want to indicate the name of the sourceColumn.
dataType
The data type of each column could be Text, Date, Number, Bool, Geometry, or Geography.
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
To sync into a Cinchy table with a Geometry or Geography column, those respective data types must be used in the data sync, and the input should be in well-known text (WKT) format.
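For example, these are all valid WKT values (standard WKT syntax, shown here for illustration):

```
POINT (30 10)
LINESTRING (30 10, 10 30, 40 40)
POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))
```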
The dataType affects how the source and target data is parsed, and also determines how the fields are compared for equality. If your sync keeps updating a field that has not changed, double check your data types.
ordinal
The position of the column within the source, expressed as a number. For example, given line 1 of a CSV file:
Name, Location, Age
The ordinal for Age would be 3.
maxLength
The max length of data in the column.
isMandatory
Boolean value determining if the field is a mandatory column to create a row entry.
A defined sync key column of any data type can be checked for NULL values using isMandatory="true". When validation fails, an error message is displayed in the command line. For other columns, when validation fails, the Execution Errors Table is updated with the Error Type "Mandatory Rule Violation" for the column and row that failed.
validateData
Boolean value determining whether or not to validate the data before inserting. Valid data means it fits all the constraints of the column (dataType, maxLength, isMandatory, inputFormat). If the data is not valid and validateData is true, then the entry will not be synced into the table. The Execution Errors Table is also updated with the appropriate Error Type (Invalid Format Exception, Max Length Violation, Mandatory Rule Violation, Input Format Exception).
trimWhitespace
Boolean value determining whether or not to trim white space.
description
Description of the column.
inputFormat
Date fields support the inputFormat attribute, which adheres to the C# .NET DateTime.ParseExact format. See here for reference.
The inputFormat attribute is useful when the source data needs some format changes on input.
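As a sketch, a date column whose source values arrive as day/month/year strings might be declared like this (attribute names as described above; the format string follows DateTime.ParseExact conventions):

```xml
<Column name="Hire Date" dataType="Date" inputFormat="dd/MM/yyyy" description="Date the employee was hired" />
```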
<Transformations>
Column mappings define how a single column from the data source maps to a column in a target table. Each <ColumnMapping>
has both a source and a target. If the destination is a Cinchy table and the target column is a link, then a third attribute becomes available called "linkColumn", which is used to specify the column that should be used to resolve the linked record from the source value. The value of sourceColumn should match the name attribute of the source <Column>. The value of targetColumn should match the column name in the target table.
Below is an example of a Column Mapping in the experience followed by the equivalent XML. In the experience, the Source Column attribute is a dropdown of columns configured in the Source Section.
sourceColumn
The name of the column in the data source. The name corresponds to the user defined name from the <Column> elements in the schema.
targetColumn
The name of the column in the target table. This would be a table that's already created in Cinchy and defined in the Target.
linkColumn
The name of a column from the linked table. If the target column is a linked column from another table, you may input data based on any of the linked table's columns.
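As an illustrative sketch (separate from the example referenced above), a simple mapping and a link-column mapping might look like the following, using the sourceColumn, targetColumn, and linkColumn attributes described here (column names are hypothetical):

```xml
<!-- One-to-one mapping from a source column to a target column -->
<ColumnMapping sourceColumn="Title" targetColumn="Title" />

<!-- Mapping into a Cinchy link column: the linked record is resolved by its "Name" column -->
<ColumnMapping sourceColumn="Country Name" targetColumn="Country" linkColumn="Name" />
```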
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
You can use this checklist in conjunction with the documentation below and elsewhere in this space to configure your data syncs in Cinchy.
Once you have installed all of your necessary components and decided upon which type of data sync you'd like to use, the next step is to configure your data sync. This involves a series of steps that are outlined in the sections below.
There are two options when you want to create a data sync in Cinchy.
You can input all of your necessary information through the intuitive Connections UI. Once saved, all of this data is uploaded as an XML into the Data Sync configurations table.
Or, you can bypass the UI and upload your XML config directly into the Data Sync configuration table yourself.
For real-time syncs only, you must also set up a listener configuration.
Whether you are setting up a real-time or a batch sync, you will need to create your data sync configuration. The data sync configuration defines the source, the mapping to the target, and synchronization behavior.
To set up a data sync, you can use the Connections UI or manually input an XML into the Data Sync Configuration table in Cinchy.
Navigate to the Connections Experience in Cinchy (Image 1).
In the experience, there are six tabs where you must, or optionally can, input data in order to define your connection (Image 2).
The Info tab is used to define some basic information about your data sync such as its name (Image 3). This tab is mandatory. You also have the optional ability to add in Parameters; please review the documentation here for more details on Parameters.
The Source tab is used to define important information about the source of your data sync (Image 4). This tab is mandatory. Cinchy supports many different source options including different file types and popular software systems. Each source will have different, and often unique, parameters that must be populated in the Source tab screen. You can review the full list of supported data sources, as well as their unique parameters and features, here.
The Destination tab is used to define important information about the target of your data sync (Image 5). This tab is mandatory. Like with sources, Cinchy supports many different destination options. Each destination must be properly mapped to its source, and some may have unique parameters that must be populated in the Destination tab. You can review the full list of supported destinations, as well as their unique parameters and features, here.
The Sync Behaviour tab defines what you want to happen to your data (Image 6). This tab is mandatory. There are two options on this page: Full File Sync and Delta Sync. You can review the differences between them here.
The Post Sync tab is an optional field that utilizes Cinchy Query Language (CQL) to perform actions on your resulting data (Image 7). For example, you could set up a post sync script to push retrieved data values into a specific Cinchy table.
The Permissions tab allows you to define various access controls for your data sync (Image 8). This is a role based access system where you can give specific groups read, write, execute, and/or all of the above with admin access.
The Jobs tab will appear when you are configuring a batch data sync (Image 9). This page allows you to start and track your batch jobs, and will show important info on any job successes or failures. You can also use this screen to download error logs for batch syncs.
By default, the job will run as whichever user is logged in (as long as you have authority to run the Job). You have the option to run it as another, non-SSO account if:
You have the credentials
The account has access to run the Job
You can configure this by clicking on Advanced > Run Job as a Different User
The Execution Errors tab is a link to the Execution Errors table, which will appear when you are configuring a real-time sync (Image 10). This table is useful for tracking any errors associated with your real-time sync. Since you don't need to click "Start a Job" in the UI for real-time syncs, a sync is considered active when your Listener Config is set up and turned to "enabled". You can see more on Error Logging here.
In lieu of using the Connections UI, you can also set up a data sync by uploading a correctly formatted XML into the Data Sync Configs table within Cinchy.
We recommend only doing so once you have a good grasp on how data syncs work. Note that not all sources/targets follow the same XML pattern, but you can review a basic version that uses a Delimited File source into a Cinchy Table here.
To set up a data sync using a config XML:
In the Cinchy platform, navigate to the Data Sync Config table (Image 11).
In a new row, paste your Data Sync XML into the Config XML column.
Define your group permissions in the applicable columns.
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 12).
The Name and Config Version columns will be auto-populated, as their values come from the Config XML.
Be sure when you are pasting into the Config XML column that you double click into the column before pasting, otherwise each line of the XML will appear as an individual record in the Data Sync Configurations table.
To execute your Data Sync you will use the CLI. If you do not have this downloaded, refer to the documentation here.
In this example we will be using the following Data Sync Commands, however, for the full list of commands click here.
Launch Powershell and navigate to the Cinchy CLI directory.
Enter and execute the following into Powershell:
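A sketch of what that command could look like, using the parameters described in the table further down this page (the syncdata verb and the -d temp directory flag are assumptions here; substitute your own server, credentials, feed name, and paths):

```powershell
.\Cinchy.CLI.exe syncdata `
  -s "pilot.cinchy.co/Training/Cinchy/" `
  -u "admin" `
  -p "<encrypted-password>" `
  -f "Data Sync Name" `
  -d "C:\Cinchy\temp"
```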
Setting up a Listener Configuration is a required step when doing a real-time data sync. You will configure your Event Stream Source with your data sync information. You can review more on the Listener Config here.
Navigate to the Listener Config table in Cinchy (Image 13).
In a new row, add in your listener config configuration data. Review the documentation here for more information.
Ensure that it is set to Enabled in order for your real-time data sync to run successfully.
The following subsections provide basic examples of both batch and real-time data syncs. These simple use cases can be used as a jumping-off reference point for learning the ropes of Cinchy data syncs.
When configuring a data sync you must set your sync behaviour. You have two options for this: Full File or Delta.
Full File syncs intake both the source and the destination data and reconcile the records by matching up the sync key. This determines any differences and allows it to perform updates, inserts, ignores, or deletes at the destination.
Delta syncs skip the reconciliation process. In batch syncs, a delta sync simply grabs records from the source and inserts them into the destination. In real-time syncs, it may act differently depending on the event type. For example, when using the Cinchy Event Broker/CDC with an insert event, a delta sync will insert the data into the destination, an update event will update it, etc.
Delta syncs also have the option to provide an "Action Type Column" for REST API destinations. This reads the value of the source record from a specified column. If the value is "INSERT", the record is inserted; if "UPDATE", it's updated; if "DELETE", it's deleted.
When using the Full File synchronization pattern there are two distinct sections that must be configured: the Sync Key and the Sync Record Behaviour (Image 1).
The Sync Key is used as a unique key reference when syncing the data from the data source into your destination. It is used to match up the data between the source and the target, which allows for updates to occur on changed records.
To set this using a config XML, use the following guide:
Elements: <SyncKeyColumnReference>
This is used in the <SyncKey> element when specifying which columns in the Target Table to be utilized as a unique key for the syncing process.
Contained-In: <SyncKey>
Attributes: name. The name of a column in the destination that you are syncing data into.
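A minimal sketch, assuming an "Employee ID" column in the destination is used as the unique key:

```xml
<SyncKey>
  <SyncKeyColumnReference name="Employee ID" />
</SyncKey>
```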
The Sync Record Behaviour is broken down into three subsections which define what action will be taken on certain records (Image 2).
Values in the attributes section of the config XML for record behaviour are case sensitive.
New Record Behaviour defines what action is taken when a new record is found in the sync source. This can be either Insert or Ignore.
To set this using a config XML, use the following guide:
Dropped Record Behaviour defines what action is taken when a record is not found in the sync source, but exists in the target. This can be either Delete, Ignore, or Expire.
To set this using a config XML, use the following guide:
Changed Record Behaviour defines what action is taken when a record with a matching sync key is found in both the sync source and the target. This can be either Update or Ignore.
To set this using a config XML, use the following guide:
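As a combined sketch, the three record behaviours might appear together in the config XML like this (the <NewRecordBehaviour> and <ChangedRecordBehaviour> element names are assumed by analogy with the <DroppedRecordBehaviour> element shown elsewhere on this page; values are case sensitive):

```xml
<NewRecordBehaviour type="INSERT" />
<DroppedRecordBehaviour type="EXPIRE" expirationTimestampField="Deactivated Timestamp" />
<ChangedRecordBehaviour type="UPDATE" />
```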
When using the Delta synchronization pattern there is one optional configuration that you can choose to provide when running a sync with a REST API destination (Image 3).
The Action Type Column reads the value of the source record from a specified column. If the value is "INSERT", the record is inserted; if "UPDATE", it's updated; if "DELETE", it's deleted.
Added in Cinchy v5.6, the Changed Record Behaviour - Conditional feature allows you to define specific conditions upon which to update your records (Image 4).
Multiple Conditions can be added to a single data sync by using the AND/OR and +Rule buttons.
You are able to group your Rules into a Ruleset by using the +Ruleset button.
If your Condition evaluates to true, then it will update your records.
The left-most drop down is used to select either a source or a target column as defined in your Source and Destination tabs
The centre drop-down is used to select from the following options:
=
!=
Contains
Is Null
Is Not Null
The right-most drop-down can be used to either:
Input a plain value (e.g. text, numerical, etc.). This will adjust based on the column data type picked in the left-most drop-down. For example, if the column in the source schema is a date, then it renders a date picker.
Select either a source or a target column as defined in your Source and Destination tabs (when used in conjunction with the Use Columns checkbox)
For example, the below condition would only update records where the target column "Name" is null (Image 5).
New Record Behaviour attributes:

| Attribute | Description | Values |
|---|---|---|
| type | The type defines the action upon the new record. | "INSERT" will insert the new record. "IGNORE" will do nothing to the record. |

Dropped Record Behaviour attributes:

| Attribute | Description | Values |
|---|---|---|
| type | The type defines the action upon the dropped record. | "IGNORE" will do nothing to the record. "EXPIRE" will populate a specified expiration timestamp field with the current time; an expirationTimestampField must be provided, referencing a date/time column in the target that should be updated with the execution timestamp if the record is dropped. "DELETE" will delete dropped records in the target data set. |
| expirationTimestampField | This attribute is only applicable if the type is equal to "EXPIRE". | The name of an existing date field to be filled with the current time. |

Changed Record Behaviour attributes:

| Attribute | Description | Values |
|---|---|---|
| type | The type defines the action upon the changed record. | "UPDATE" will update the record. "IGNORE" will do nothing to the record. "CONDITIONAL" will open a new UI section allowing you to define Conditions upon which to update your records. Review Appendix A for more information on the Conditional behaviour. |

Data sync execution (CLI) parameters:

| Parameter | Description | Example |
|---|---|---|
| -s (server) | Required. The full path to the Cinchy server without the protocol (e.g. cinchy.co/Cinchy). | "pilot.cinchy.co/Training/Cinchy/" |
| -u (userid) | Required. The user id to login to Cinchy that has execution access to the data sync. | "admin" |
| -p (password) | Required. The password of the above user id. This must be encrypted; for a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. | "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" |
| -f (feed) | Required. The name of the Data Sync Configuration as defined in Cinchy. | "Data Sync Name" |
We have outlined some common design patterns below. You can use this non-exhaustive list as a jumping-off point for designing your own data syncs.
When creating a data sync, you need to know whether you are synchronizing the full data set or simply a subset of the data.
Set <DroppedRecordBehaviour type="DELETE" />
so that any records that are no longer in the source are deleted in the target.
Set <DroppedRecordBehaviour type="IGNORE" />
otherwise, it will delete any records that are not in your partial recordset. You are unable to delete any records during partial data synchronization.
You can create a full data synchronization on a partial dataset by filtering the source and target. For example, if you want to sync transactions from one system to another but there are a lot of transactions, you can run a data sync where you filter by <Filter>[Transaction Date] > 100000</Filter>
in both the source and the target. This way, you can sync a smaller dataset while still being able to perform deletions and ensure a full synchronization of that partition of the data.
When syncing data that is linked to other data, the order of the data syncs is important: the data that other records link to should be synced first.
For example, if you have customers and invoices and the invoices are linked to each customer, then the customer data should be synced first. Therefore, when the invoices are synced they will be linked to the appropriate customer as the customer data is already in the target.
To create a reference data set (for example, country codes) based on a set of shipping labels, the data should first be run against the Country Codes table before the shipping labels are synced into the labels table.
In this scenario, supressDuplicateError="false" should be set when running the data against the Country Codes table, because duplicates are expected; they are not errors, and therefore must be identified as such.
Sometimes different reference data values can mean the same thing, but different systems use different codes. In Cinchy's country code example under 'Populating Reference Data':
System A uses the full country name (ex. Canada, United States),
System B uses the 2 letter ISO code (ex. CA, US), and
System C uses the 3 letter ISO code (ex. CAN, USA).
All three of these systems can sync into one shipping label table, with a link to Country, but depending on the system, we use a different link column reference. The same column mapping will look slightly different in each data sync to the Shipping Labels table.
Note that you can change the display column in the Shipping Labels table to switch between the columns if you decide on a different code as the golden standard, without having to modify your data sync configurations.
When syncing from different sources into the same data set, the source of the data will need to be added as a column; this avoids overwriting records from other sources. This column will be added to your sync key as well. For contacts, you might have:
Once all your data from various systems is in Cinchy, you can master that dataset and sync the master record back into any source systems you wish. These syncs would simply filter the source by records where the [Master Record] column is set to true, and sync on the unique identifier without the source system. You would also likely only want to update records already in the source, rather than deleting unmastered data or adding all records from other systems.
To use different sources to enrich different fields on the same record, the dropped record behaviour should be set to ignore, and the columns should be updated based on a sync key on the record.
Depending on the source system, it may or may not be allowed to create new records. Usually, internal systems (for example, customer invoices) should be able to create new customer records.
However, external data sources (for example, a published company size or industry report) will only add noise to your table if new records are inserted from them.
Post sync scripts can be added to a data sync configuration. This would allow you to run post-processing after the data sync has been completed.
For example, you can run a query to update the assignee on a lead after the leads data sync runs.
This allows for more complex logic: for example, only updating leads where the assignee is empty, except for high-value leads, which are reassigned to the most senior person on each team (based on another table in the instance that holds the seniority and team of each sales director).
If you have a file source (e.g. delimited, CSV, or Excel) and you want to sync data into Cinchy that does not have a sync key, you can add a calculated column for the row number of that record.
You will also want to add a calculated column for the file name (and ensure that it is unique) to be able to re-run the data sync if any failures occur.
To run a bi-directional sync, you need to identify a source system unique identifier. You will then need to run the following four data syncs for bidirectional syncing.
Note that if one of the systems cannot create new records, we can omit the data sync configuration where we are creating new records in the other system.
First, we run a data sync from the source into Cinchy filtering the source by records where the Cinchy ID (a custom field we create in the source) is empty. We insert these records into Cinchy and make sure to populate the source unique identifier column in Cinchy.
We can do the opposite as well by syncing data from Cinchy to the external source by inserting any records where a source unique identifier is empty.
Now that all records exist in both the external system and Cinchy, we can sync data based on the target system's unique ID, in this case, we are syncing data from the external source into Cinchy, based on the Cinchy ID. We filter out records where Cinchy ID is null here to avoid errors (those will be picked up the next time the new records sync runs).
Likewise, we can sync any data changes from Cinchy into the external source using the external unique identifier as the sync key, filtering out records in Cinchy where the external identifier is empty.
To run intensive summary queries on the platform, a data sync must be created to cache the data in a Cinchy table. To do so, simply sync the results of your Cinchy query into a Cinchy table with the same schema, and schedule the CLI to run as often as you would like your cache to expire. You can point other queries or reports to this table, rather than to the complex query itself.
To add more fields to sync, ensure that the columns are in the target system first. You can then swap in the new data sync configuration whenever and it will get picked up for future execution.
To remove fields from a data sync, swap out your data sync configuration. You can optionally delete the field in the source and/or target system afterwards.
If there are no fields being added or deleted, simply swap out your sync configuration. To add or remove fields, follow the guidelines above of when to swap in the config versus making the data model changes (add columns first, swap out config, validate config, delete unneeded columns).
Simply swap out your data sync configuration if you are changing the sync key. It is a good idea to check whether your new sync key is unique in both your source and target. The CLI worker will sync the first record it finds in the source to the first record it finds in the target. Checking for duplicate sync keys will allow you to understand whether any unexpected behaviour will occur.
The pages listed under this section are not required for most data syncs, but they can help you create more robust use cases when applied correctly.
Examples 1 and 2 are shown first within the experience as well as their XML equivalent.
Example 3 demonstrates the use of Javascript in Calculated Columns.
The value of this column for each record is whatever value the lob parameter holds.
XML equivalent of the above example:
The values of two columns are concatenating together.
XML equivalent of the above example:
The CONCAT function supports more than 2 parameters, and any literal values must be enclosed in single quotes (e.g. 'abc')
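A sketch of what such a calculated column could look like in XML, using the name, formula, and dataType attributes described below (the <CalculatedColumn> element name and the exact formula syntax are assumptions for illustration):

```xml
<CalculatedColumn name="Full Name"
                  formula="CONCAT([First Name], ' ', [Last Name])"
                  dataType="Text" />
```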
This example splits a [Name] column with the format "Lastname, Firstname" into two columns: [First Name] and [Last Name].
name
The user defined name for each calculated column. This is used in <ColumnMapping> when you want to indicate the name of the sourceColumn.
formula
CQL expression used to define formula. Supported functions:
We recommend you salt your value before you hash it.
dataType
The data type of each column could be Text, Date, Number or Bool.
maxLength
The max length of data in the column.
isMandatory
Boolean value determining if the field is a mandatory column to create a row entry.
validateData
Boolean value determining whether or not to validate the data before inserting. Valid data means it fits all the constraints of the column (dataType, maxLength, isMandatory, inputFormat). If the data is not valid and validateData is true, then the entry will not be synced into the table. The Execution Errors Table is also updated with the appropriate Error Type (Invalid Format Exception, Max Length Violation, Mandatory Rule Violation, Input Format Exception).
description
Description of the column.
trimWhitespace
Boolean value determining whether or not to trim white space.
This page outlines the two different types of Data Syncs available in Cinchy.
Batch syncs work by processing a group or 'batch' of data all together rather than each piece of data individually. When the data sync is triggered, it will compare the contents of the source to the target. The Cinchy Worker will decide if data needs to be added, deleted or updated. Batch sync can either be run as a one-time data load operation, or it can be scheduled to run periodically using an external Enterprise Scheduler.
Batch Sync is ideally used in situations where the results and updates don’t need to occur immediately but they can occur periodically.
For example, a document that will only be reviewed once a month doesn’t necessarily need to be updated every single time a change is made
At a high level, running a batch data sync operation performs these steps (Image 1):
The sync connects to Cinchy and creates a log entry in the Execution Log table with a status of running.
It streams the source and target into the CLI. Any malformed records or duplicate sync keys are written to the source and target errors CSVs (based on the temp directory).
It compares the sync keys to match up source and target records
The sync checks if there are changes between the matched records
For the records where there are changes, it groups them into insert, update, and delete batches.
It sends the batches to the target, and records failures in the sync errors CSV and the Execution Errors table.
Once complete, it updates the Execution Log entry with the final status and execution output.
In real-time syncs, the Cinchy Listener picks up changes in the source immediately as they occur. These syncs do not need to be manually triggered or scheduled using an external scheduler. Setting up a real-time sync does require an extra step of defining a listener configuration in order to execute properly.
Real-time sync is ideally used in situations where results and responses must be immediate.
For example, a document that is constantly checked and referred to should have the most up-to-date and recent information.
The following sources can be used in real-time syncs:
Cinchy Event Broker/CDC
MongoDB Collection (Event Triggered)
Polling Event
REST API (Event Triggered)
Salesforce Platform Event
At a high level, running a real-time data sync operation performs these steps (Image 2):
The Listener is successfully subscribed and waiting for events from the streaming source.
The Listener receives a message from a streaming source and pushes it to SQL Server Broker.
The Worker picks up the message from SQL Server Broker.
The Worker fetches the matching record from the target based on the sync key.
If changes are detected, the Worker pushes them to the target system and logs successes and failures in the worker's log file.
Runtime Parameters are values that can be dynamically inserted when the sync job is run. The parameters you define here can be referenced in fields in other parts of your sync config (using the @ prefix) and when the job is run you can be prompted for their values.
The execution parameters are either passed in at the time of execution or calculated through a formula. The value of the name attribute is passed in as a command line option via param-values. (This is optional if the path to the source file to load is specified in the path attribute of the source element, or if the calculated column formulas do not reference execution parameters.)
Below is an element that holds an array of <Parameter>
elements, shown in the Connections experience (Image 1) and the equivalent XML.
You can choose to just use plain text in the Name field of the Parameter, or you can use a calculated formula.
The following formulas are currently supported by Connections.
FILENAME(<some-path>, <some-regex>): The FILENAME formula takes in two parameters. The first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group. The first match group's value is what gets assigned to the parameter. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
FILEPATH(<some-path>, <some-regex>): Similar to FILENAME, the FILEPATH formula takes in two parameters. The first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group. The first match group's value is what gets assigned to the parameter. The FILEPATH function executes the regex against the full file path (including the directory structure).
GUID(): The GUID formula uses a random GUID for that parameter's value. It's used when you want to generate a unique identifier to be used within the context of the sync, and can be useful, for example, as a way to track whether changes were made by a particular sync run.
ENV(<place-environment-variable-here>): The ENV formula uses an environment variable available in the connections/worker pods as the value of the parameter. An example use case would be a situation where the URLs used in a REST API sync are different across environments -- instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate them from your pod configuration files. To use the ENV() formula you will need to do some back-end configuration; please review Appendix A for more details.
We do not recommend using the ENV formula for credentials.
Below are the Parameter examples shown in the Connections experience, followed by the XML equivalent of all of the examples:
Example 1: A name attribute reference an execution parameter (Image 2).
Example 2: The FILEPATH function takes in two parameters, the first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group (Image 3). The first match group's value is what gets assigned to the parameter. FILEPATH function executes regex against the full file path (including the directory structure) [full formula in XML at end of page].
Example 3: The FILENAME function takes in two parameters, the first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group (Image 4). The first match group's value is what gets assigned to the parameter. FILENAME function applies the regex only to the name of the file (excluding the directory structure).
Example 4: The ENV formula uses an environment variable available in the connections/worker pods as the value of the parameter (Image 5). An example use case would be a situation where the URLs used in a REST API sync are different across environments -- instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate them from your pod configuration files.
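As a sketch, a set of parameters combining these formulas might look like the following (the <Parameters> wrapper and the placement of the formula attribute are illustrative assumptions; the parameter names are hypothetical):

```xml
<Parameters>
  <!-- Passed in at execution time, e.g. via the CLI param-values option -->
  <Parameter name="filepath" />
  <!-- Derived from the file path parameter above -->
  <Parameter name="filename" formula="FILENAME(@filepath, '(.*)\.csv')" />
  <!-- A unique identifier generated for this run -->
  <Parameter name="runId" formula="GUID()" />
  <!-- Read from an environment variable on the connections/worker pods -->
  <Parameter name="restApiUrl" formula="ENV('REST_API_URL')" />
</Parameters>
```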
This section details how to correctly configure your platform to use Environment Variables.
To create or modify environment variables on Windows:
On the Windows taskbar, right-click the Windows Icon > System.
In the Settings window, click Related Settings > Advanced System Settings > Environment Variables (Image 6).
Under System Variables, click New to create your new environment variable (Image 7).
To configure an environment variable in Kubernetes, you will first add the variable to your cluster:
Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\connections\kustomization.yaml file.
Under patchesJson6902 > patch, add in your environment variable as shown in the code snippet below, inputting your own namespace and variable name where indicated.
You must then add the variable to your pod:
Navigate to your platform_components/connections/connections-app.yaml file.
Under Spec > Template > Spec > Containers > Env, add in your environment variable. How this is added can depend on what value you are using as an environment variable. The below code snippet shows a basic example:
Make the same code change to the platform_components/worker/worker-app.yaml file as well.
Ensure that you push and merge your changes.
Setting up a Listener Configuration is a required step when doing a real-time data sync. You will configure your Event Stream Source with your data sync information. You can review an example Listener Config setup here.
Navigate to the Listener Config table in Cinchy (Image 1).
In a new row, add in your listener config configuration data. Review section 3 for more information.
Ensure that it is set to Enabled in order for your real-time data sync to run successfully.
To subscribe to an event stream, make sure you have a Cinchy Listener running connected to the Cinchy instance you are on. You will need to create a record in the Listener Configs table and Enable it to subscribe to events.
The example values in this section follow the use case outlined here.
Refer to the table below for information about the columns in the Listener Config.
For the topic JSON, you need to provide the following:
Each of your Event Listener messages contains a message key. By default, this key is dictated by the Cinchy ID of the record being changed.
When the worker processes your Event Listener messages, it does so in batches, and for efficiency and to guarantee order, messages that contain the same key will not be processed in the same batch.
The messageKeyExpression property allows you to change the default message key to something else.
This can be used, for example, to ensure that records with the same message key are updated in the proper order, reflecting an accurate collaboration log history.
In the below example, we want the message key to be based on the [Employee Id] and [Name] columns of the table that CDC is enabled on.
Filters can be used in your source and target configurations to define specific subsets of data that you want to use in your syncs.
When syncing a Data Source, you may have the option to add in additional configuration sections, such as a Filter, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
Note that if your source only has one of the listed options, it will appear by default instead of in a drop-down.
A filter on your source is optional. It relies on a source-specific syntax for filtering out records from your source. The filter can reference execution parameters.
This is only available if using a table, not a query. For queries, include the filter in the query itself.
There can only be one <Filter> for each source. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
Example 1: Using a filter to sync only source records with [net worth] > 10000 (Image 2).
Example 2: Using a filter to sync only source records with a status like "Complete" (Image 3).
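In XML, each of these source filters might be expressed as follows (column names taken from the examples above; the exact filter syntax depends on your source type, and remember that only one <Filter> element is allowed per source, so combine conditions with AND/OR if you need both):

```xml
<!-- Example 1: only records with a net worth above 10000 -->
<Filter>[Net Worth] > 10000</Filter>

<!-- Example 2: only records whose status is like "Complete" -->
<Filter>[Status] LIKE '%Complete%'</Filter>
```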
A filter on your target is optional. It relies on a destination-specific syntax for filtering out records from your target. The filter can reference execution parameters.
There can only be one <Filter> for each target. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
Example 1: Filtering only target records where the Assignee is Null (Image 4).
Example 2: Filtering only target records where the Override ID is not Null (Image 5).
Example 3: Filtering only target records from a specific source (Image 6).
When syncing a Data Source, you may have the option to add in additional configuration sections, such as an Auth Request, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in an Auth Request, fill in the following parameters (Image 2):
HTTP Method: Either POST or GET
Endpoint URL
From the drop down you can also add in:
Request Headers
Body
Variables to Extract
When syncing a Data Source, you may have the option to add in additional configuration sections, such as Pagination, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
There are two types of pagination available (Image 2):
Cursor: The cursor is considered to be a key parameter in this type of pagination. You receive a variable named Cursor along with the response. It is a pointer that points at a particular item that needs to be sent with a request. The server then uses the cursor to seek the other set of items. Cursor-based pagination is more complex and is preferred when dealing with a real-time data set.
Offset: Offset-based pagination is for parameters with a specific limit (the number of results) and offset (the number of records that need to be skipped). Offset-based pagination is easy to use and is preferred for static data.
To set up cursor pagination, fill in the following parameters (Image 3):
Type: Select Cursor
Next Page URL JSON Path: This is the JSON Path within the response to the URL for the next page
Cursor Key: This is the key used in the query string to specify the cursor value. This is only required if the cursor returned is not a fully qualified URL.
To set up offset pagination, fill in the following parameters (Image 4):
Type: Select Offset
Limit Key: The key used in the query string to specify the limit
Limit: The desired page size
Offset By: The type of offset that the API does pagination by. This will be either Record Number or Page Number.
Offset Key: The key used in the query string to specify the offset
Initial Offset: The starting offset
Note that a pagination block is mandatory even if the API does not return results from multiple pages. You can use the following as the placeholder:
<Pagination type="OFFSET" limitField="" offsetField="" limit="0" initialOffset="0" />
This example will take you through the creation and execution of a real-time data sync where data will be synced between two Cinchy tables based on real-time changes.
Use Case: Your People table captures a view of various personnel information. Any time a new hire is added to the table, you want that information to be immediately synced into the New Employees table. We can solve this use case using the Cinchy Change Data Capture (CDC) function on our tables. This helps you to better keep track of all incoming people within your company.
You can review our documentation on Cinchy Table sources and Cinchy Table destinations here.
This section contains steps on how to:
Create the People table.
Create the New Employees table.
When creating tables to be used with real-time syncs, you need to ensure that the Cinchy Change Data Capture feature is turned on through the Design Table > Change Notifications tab. This will ensure real-time updates are captured.
Login to your Cinchy platform
From under My Network, click the create button
Select Table
Select From Scratch
Create a table with the following properties (Image 1):
6. Click Columns in the left hand navigation to create the columns for the table
7. Click the "Click Here to Add" button and add the following columns:
Click Change Notifications in the left hand navigation and ensure "Publish Change Notifications" is checked.
Click the Save button to save your table.
Within the Cinchy platform, from under My Network, click the create button
Select Table
Select From Scratch
Create a table with the following properties (Image 2):
6. Click Columns in the left hand navigation to create the columns for the table.
7. Click the "Click Here to Add" button and add the following columns:
Click the Save button to save your table.
There are two options when you want to create a data sync in Cinchy.
You can input all of your necessary information through the intuitive Connections UI. Once saved, all of this data is uploaded as an XML into the Data Sync configurations table.
Or, you can bypass the UI and upload your XML config directly into the Data Sync configuration table yourself.
This example will walk you through both options.
Within your Cinchy platform, navigate to the Connections Experience (Image 3).
In the Info tab, input the name of your data sync. For this example we are using "New Hires" (Image 4).
Navigate to the Source tab.
Under "Select a Source", choose "Cinchy Event Broker" (Image 5).
In the Schema section, input the columns from your source table (in this case the People table) that you will want to use in your data sync. In this example we are using the Name and Title columns, both of which are data type Text (Image 6). You may also choose to set any Aliases or add a Description.
We are not using the RunQuery functionality in this example, but you can review its properties here.
Navigate to the Destination tab and select Cinchy Table from the drop down (Image 7).
In the Load Metadata pop-up, input the Domain and Table name for your destination. In this example, we are using the Sandbox domain and the New Employees table.
Click Load.
Select the columns that you wish to use in your data sync (Image 8). These will be the columns that your source syncs to your target. In this example, we are using the Name and Title columns. Note that you will have many Cinchy system columns available to use as well.
Click Load.
The Connections experience will attempt to automatically map your source and destination columns based on matching names. In the below screenshot, it has been able to correctly match the Name and Title columns (Image 9).
Navigate to the Sync Behaviour tab. There are two options for data syncs: Full File and Delta. In this example, select Full File.
Full load processing means that the entire amount of data is imported iteratively the first time a data source is loaded into the data studio. Delta processing, on the other hand, means loading the data incrementally, loading the source data at specific pre-established intervals.
Set the following parameters (Image 10):
Navigate to the Permissions tab. Here you will define your group access controls for your data sync. You can set this how you like. For this example, we are giving all users access to Execute, Write, and Read our sync (Image 11).
Any groups given Admin Access will have the ability to Execute, Write, and Read the data sync.
Click Save.
Test your data sync by adding a new row to your People table. Ensure that the data is then updated across to the New Employees table (Images 12 & 13).
In lieu of using the Connections UI, you can also set up a data sync by uploading a correctly formatted XML into the Data Sync Configs table within Cinchy.
We recommend only doing so once you have a good grasp on how data syncs work. Note that not all sources/targets follow the same XML pattern.
Below is the completed batch data sync configuration for this example. Review the XMLs and then proceed to section 3.2.3 for further instructions.
The below XML shows what a blank data sync could look like for our Cinchy Event Broker/CDC to Cinchy Table real-time sync with full file synchronization.
The below filled XML example matches the Connections UI configuration we made in step 3.1. You can review the parameters used in the table below.
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 14).
In a new row, paste the Data Sync XML into the Config XML column.
Define your group permissions in the applicable columns. In our example, we have given All Users the Admin Access.
The Name and Config Version columns will be auto-populated, as their values come from the Config XML.
Be sure when you are pasting into the Config XML column that you double click into the column before pasting, otherwise each line of the XML will appear as an individual record in the Data Sync Configurations table.
Launch Powershell and navigate to the Cinchy CLI directory.
Enter and execute the following into Powershell:
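A sketch of what that command could look like for this example (the syncdata verb and flag layout are assumptions; use your own server URL, user, encrypted password, and temp directory):

```powershell
.\Cinchy.CLI.exe syncdata `
  -s "<your-cinchy-server>/Cinchy" `
  -u "admin" `
  -p "<encrypted-password>" `
  -f "New Hires" `
  -d "C:\Cinchy\temp"
```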
Test your data sync by adding a new row to your People table. Ensure that the data is then updated across to the New Employees table (Images 15 & 16).
Setting up a Listener Configuration is a required step when doing a real-time data sync. While this example shows how to configure the sync using the Cinchy Event Broker/CDC, Cinchy also supports other Event Stream Sources. You can review the full list here.
Navigate to the Listener Config table in Cinchy (Image 17).
In a new row, add in your listener config data using the below table as a guide:
For the topic JSON, you need to provide the following:
Prior to executing the data sync command, let's encrypt the password using Powershell
To encrypt a password using Powershell, complete the following:
Launch Powershell and navigate to the Cinchy CLI directory (note: you can always type "powershell" into the Windows Explorer path bar from the Cinchy CLI directory).
Enter the following into Powershell .\Cinchy.CLI.exe encrypt -t "password"
Hit enter to execute the command
Copy the encrypted password into a text editor (e.g. Notepad, Visual Studio Code, Word, etc.) so you have it accessible at batch execution time.
Please note, you will need to replace "password" with your specific password.
The Execution Log table is a system table in Cinchy that logs the outputs of all data syncs. You can always review the entries in this table for information on the progression of your syncs (Image 18).
The Execution Errors table is a system table in Cinchy that logs any errors that may occur in a data sync (Image 19). Any data sync errors will also be logged in the temp directory outlined in the data sync execution command (e.g. -d "C:\Cinchy\temp").
When configuring a Data Sync, you may have the option to add in additional configuration sections, such as Request Headers, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in Request Headers, fill in the following parameters (Image 2):
Header Type:
Authorization: An authorization request header can be used to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.
Content-Type
Header
Name: The name of the HTTP Header to add
Is Encrypted?
Header Value
This example will take you through the creation and execution of a batch data sync where data will be loaded into Cinchy via a CSV. In this example, we will be loading information into the People table in Cinchy. This is a self-contained example and can be replicated in any Cinchy environment without dependencies.
Example Use Case: You have historically maintained a record of all of your employees in a spreadsheet. Knowing that this significantly hinders your data and data management capabilities, you want to sync your file into Cinchy. Once synced, you can manage your employee information through the Cinchy data browser, instead of through a data silo.
You can review our documentation on Delimited File sources here.
This section contains:
The People Table XML schema. Alternatively, this can be manually created through the Cinchy Data Browser
A sample source CSV data file to load into Cinchy.
To create the People table used in this example, you can use the XML below. Alternatively, you can create the table manually using the steps in section 2.1.1.
Log in to your Cinchy platform.
From under My Network, click the create button.
Select Table.
Select From Scratch.
Create a table with the following properties (Image 1):
6. Click Columns in the left hand navigation to create the columns for the table.
7. Click the "Click Here to Add" button and aadd the following columns.
9. Click the Save button to save your table.
The sample CSV file used in this example can be downloaded below.
Please note, if you are downloading this file to recreate this exercise the file path and the file name will need to be:
C:\Data\contacts.csv
Alternatively, you can update the path parameter in the data sync configuration to match the file path and name of your choosing.
The source file contains the following information which will be synced into our target Cinchy table (Image 2).
As we can see, the file has the following columns:
First Name
Last Name
Email Address
Title
Company
The People table created only has the following columns:
Name
Title
Company
When syncing the data from our source (CSV file) to our target (Cinchy People table) our batch data sync will need to take into consideration the following:
The first and last name from the source will need to be merged into one column in our target (Name)
The email address from our source is not a column in the target, therefore this column will not be synced into the target
The title column will be a one to one match from source to target
The company column will also be a one to one match from source to target
There are two options when you want to create a data sync in Cinchy.
You can input all of your necessary information through the intuitive Connections UI. Once saved, all of this data is uploaded as an XML into the Data Sync configurations table.
Or, you can bypass the UI and upload your XML config directly into the Data Sync configuration table yourself.
This example will walk you through option one.
Within your Cinchy platform, navigate to the Connections Experience (Image 3).
In the Info tab, input the name of your data sync. For this example we are using "Contact Import" (Image 4).
Since this is a local file upload, we also need to set a Parameter. This value will be referenced in the "path" value of the Load Metadata box in step 5. For this example, we will set it to filepath (Image 5).
Navigate to the Source tab. As a reminder, we are using the .CSV file you downloaded at the beginning of this example as our source.
Under "Select a Source", choose Delimited File (Image 6).
The "Load Metadata" box will appear; this is where you will define some important values about your source needed for the data sync to execute. Using the below table as your guide, fill in your metadata parameters (Image 7):
Click Load.
In the Available Columns pop-up, select all of the columns that you want to import from the CSV. For this example, we will select them all (noting, however, that we will only map a few of them later) (Image 8).
Click Load.
Once you load your source, the schema section of the page will auto-populate with the columns that you selected in step 7 (Image 9). Review the schema to ensure it has the correct Name and Data Type. You may also choose to set any Aliases or add a Description.
Navigate to the Destination tab and select Cinchy Table from the drop down.
In the Load Metadata pop-up, input the Domain and Table name for your destination. In this example, we are using the Sandbox domain and the People table (Image 10).
Click Load.
Select the columns that you wish to use in your data sync (Image 11). These will be the columns that your source syncs to. In this example, we are using the Name, Title, and Company columns. Note that you will also have many Cinchy system columns available to use. Click Load.
The Connections experience will attempt to automatically map your source and destination columns based on matching names. In the below screenshot, it has been able to correctly match the "Company" and "Title" columns (Image 12). The "Name" target column is not a 1:1 match for any of our source columns, so we must match that one manually.
Select "First Name" from the Source Column drop down to finish mapping our data sync (Image 13).
Navigate to the Sync Behaviour tab. There are two options for data syncs: Full File and Delta. In this example, select Full File.
Full load processing means that the entire data set is imported each time the data source is loaded. Delta processing, on the other hand, means loading the data incrementally, pulling in the source data at specific pre-established intervals.
Set the following parameters (Image 14):
Navigate to the Permissions tab. Here you will define your group access controls for your data sync (Image 15). You can set this however you like. For this example, we are giving all users access to Execute, Write, and Read our sync.
Any groups given Admin Access will have the ability to Execute, Write, and Read the data sync.
Navigate to the Jobs tab. Here you will see a record of all successful or failed jobs for this data sync.
Select "Start a Job" (Image 16).
Load your sample .CSV file in the pop-up window (Image 17).
The job will commence. The Execution window that pops up will help you to verify that your data sync is progressing (Image 18).
Navigate to your destination table to ensure that your data populated correctly (Image 19).
In lieu of using the Connections UI, you can also set up a data sync by uploading a correctly formatted XML into the Data Sync Configs table within Cinchy.
We recommend only doing so once you have a good grasp of how data syncs work. Note that not all sources/targets follow the same XML pattern.
Below is the completed batch data sync configuration for this example. Review the XMLs and then proceed to section 3.2.3 for further instructions.
The below XML shows what a blank data sync could look like for a Delimited File source to a Cinchy Table target.
The below filled XML example matches the Connections UI configuration we made in step 3.1. You can review the parameters used in the table below.
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 20).
In a new row, paste the Data Sync XML into the Config XML column (Image 21).
Define your group permissions in the applicable columns. In our example, we have given All Users Admin Access.
The Name and Config Version columns will be auto-populated, as their values come from the Config XML.
Tip: Click on the below image to enlarge it.
Be sure that you double-click into the Config XML column before pasting; otherwise, each line of the XML will appear as an individual record in the Data Sync Configurations table.
Launch Powershell and navigate to the Cinchy CLI directory.
Enter and execute the following into Powershell:
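As a sketch of what that command looks like, built from the parameter table below (the server, user ID, and encrypted password are placeholder values, and the syncdata verb should be confirmed against your CLI version):

```powershell
# Example batch sync execution command -- substitute your own server, credentials, and paths
.\Cinchy.CLI.exe syncdata `
    -s "pilot.cinchy.co/Training/Cinchy/" `
    -u "admin" `
    -p "<encrypted password>" `
    -f "Contact Import" `
    -d "C:\Cinchy\temp"
```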
Once executed, navigate to your destination table to validate that your data synced correctly (Image 22).
To encrypt a password using Powershell, complete the following:
Launch Powershell and navigate to the Cinchy CLI directory (note: you can always type "powershell" into the Windows Explorer path bar while in the Cinchy CLI directory to open PowerShell there)
Enter the following into Powershell: .\Cinchy.CLI.exe encrypt -t "password"
Hit enter to execute the command
Copy the encrypted password into an accessible location (e.g. Notepad, Visual Studio Code, Word, etc.) so you have it available at batch execution time
Please note, you will need to replace "password" with your specific password.
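As a minimal sketch of the steps above (the install path is an example, and Set-Clipboard requires PowerShell 5.0 or later):

```powershell
# Navigate to the Cinchy CLI directory (example path -- use your own install location)
Set-Location "C:\Cinchy\Cinchy CLI"

# Encrypt the password; replace "password" with your actual password
.\Cinchy.CLI.exe encrypt -t "password"

# Optionally pipe the encrypted output straight to the clipboard so it is ready to paste
.\Cinchy.CLI.exe encrypt -t "password" | Set-Clipboard
```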
The Execution Log table is a system table in Cinchy that logs the outputs of all data syncs (Image 23). You can always review the entries in this table for information on the progression of your syncs.
The Execution Errors table is a system table in Cinchy that logs any errors that may occur in a data sync (Image 24). Any data sync errors will also be logged in the temp directory outlined in the data sync execution command (e.g. -d "C:\Cinchy\temp").
Ensure it is set to Enabled. Your real-time data sync should now be listening to your People table ready to push updates to your New Employees table.
To execute your Data Sync you will use the CLI. If you do not have this downloaded, refer to the CLI installation documentation.
In this example we will be using the following Data Sync commands; for the full list of commands, refer to the CLI command list.
Function | Details |
CONCAT(colA, colB, 'literal value1', 'literal value2') | Concatenates multiple columns, parameters, or literal values together. Supports two or more parameters. |
row_number() | The numeric row number of file sources (Excel, delimited, fixed width). Currently not supported in conjunction with other formulas/parameters. |
isnull(colA,'alt value') | If the first column is null, use the second value (this can be a literal or another column). |
hash('SHA256',colA) | Hashes the column using the algorithm specified. |
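For example, a hypothetical calculated column that merges a first and last name into a single value could use a formula such as CONCAT(FirstName, ' ', LastName), where FirstName and LastName are illustrative source column names.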
Column | Description | Example |
Name | The name of your Listener Config | New Hire Sync |
Event Connector Type | Select from the drop-down list which event stream you are listening in on. | Cinchy CDC |
Topic | This column expects a JSON value with certain specific information. Please review the Topic Column table below for details. | |
Connection Attributes | This field will change depending on your stream source. For this example, we are using the attributes expected for the Cinchy Event Broker/CDC stream source. | {} |
Status | This sets whether your config is Enabled or Disabled. Make sure you set the value to Disabled when creating the config for the first time; this gives you a chance to correctly configure the listener before it is picked up by the service. After configuring the listener correctly, flip the value to Enabled. | Disabled |
Data Sync Config | The name of the Data Sync Config you created in the Connections UI or via XML. | New Hires |
Subscription Expires On | This field is a timestamp that is populated by some Event Listener services (e.g. Salesforce) when they have successfully subscribed to a topic. | |
Auto Offset Reset | If the listener is started and there is either no last message ID or the last message ID is invalid (because it was deleted or the listener is new), this column is used as a fallback to determine where to start reading events from. Earliest will start reading from the beginning of the queue (when CDC was enabled on the table); this might be the right configuration if your use case is recoverable or re-runnable and you need to reprocess all events to ensure accuracy. Latest will start from the point after whatever was last processed; this is the typical configuration. None will not start reading any events. | Latest |
Parameter | Description |
Table GUID | The GUID of the table whose notifications you wish to consume. You can find this in the Design Table screen for Cinchy v5.5+, and in the Tables table otherwise. |
Column(s) | The names of the columns you wish to include. Note: If you will be using the runQuery=true parameter in your data sync, you only need to include the Cinchy Id in the topic JSON. |
BatchSize | The desired result batch size. This will default to 1 if not passed in. The maximum batch size is 1000; using a number higher than that will result in a Bad Request response. |
Filter | Optional. When CDC is enabled, you can set a filter on columns where you are capturing changes in order to receive specific data. |
Source | Definition |
File | For file data sources (delimited, fixed width & Excel), the syntax conforms to the .NET Framework's RowFilter on a DataView. |
Salesforce | For Salesforce, the syntax is the SOQL where clause (without the where expression). |
Dynamics | For Dynamics, the syntax is the OData $filter clause (without the $filter= expression). |
Cinchy | For Cinchy, the syntax is the CQL where clause (without the where expression). |
SqlServer | For SqlServer, the syntax is the T-SQL where clause (without the where expression). |
Source | Definition |
Salesforce | For Salesforce, the syntax is the SOQL where clause (without the where expression). |
Dynamics | For Dynamics, the syntax is the OData $filter clause (without the $filter= expression). |
Cinchy | For Cinchy, the syntax is the CQL where clause (without the where expression). |
SqlServer | For SqlServer, the syntax is the T-SQL where clause (without the where expression). |
Table Name | People |
Icon + Colour | Default |
Domain | Sandbox (if this domain does not exist please either create it or make sure to update this parameter where required during the data sync) |
Column 1 | Column Name: Name Data Type: Text |
Column 2 | Column Name: Title Data Type: Text |
Column 3 | Column Name: Phone Number Data Type: Text |
Column 4 | Column Name: City Data Type: Text |
Table Name | New Employees |
Icon + Colour | Default |
Domain | Sandbox (if this domain does not exist please either create it or make sure to update this parameter where required during the data sync) |
Column 1 | Column Name: Name Data Type: Text |
Column 2 | Column Name: Title Data Type: Text |
Sync Key Column Reference | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It is used to match data between the source and the target. This allows for updates to occur on changed records. | Name |
New Record Behaviour | This defines what action is taken when a new record is found in the sync source. This can be either Insert or Ignore. | Insert |
Dropped Record Behaviour | This defines what action is taken when a dropped record is found in the sync source. This can be either Delete, Ignore, or Expire. | Delete |
Changed Record Behaviour | This defines what action is taken when a changed record is found in the sync source. This can be either Update, Ignore, or Conditional. | Update |
Name | The name of your data sync. | New Hires |
Column Name | The name(s) of the source columns that you wish to sync. | "Name" "Title" |
Column Data Type | The data type that corresponds to our selected source columns. | "Text" |
Domain | The domain of your Cinchy Target table. | Sandbox |
Table | The name of your Cinchy Target table. | New Employees |
Column Mapping Source Column | The name(s) of the source columns that you are syncing. | "Name" "Title" |
Column Mapping Target Column | The name(s) of the target column as it maps to the specified source column. | "Name" "Title" |
Sync Key Column Reference Name | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It is used to match data between the source and the target. This allows for updates to occur on changed records. | "Name" |
New Record Behaviour Type | This defines what will happen when new records are found in the source. | INSERT |
Dropped Record Behaviour Type | This defines what will happen when dropped records are found in the source. | DELETE |
Changed Record Behaviour Type | This defines what will happen when changed records are found in the source. | UPDATE |
-s (server) | Required. The full path to the Cinchy server without the protocol (e.g. cinchy.co/Cinchy). | "pilot.cinchy.co/Training/Cinchy/" |
-u (userid) | Required. The user id to login to Cinchy that has execution access to the data sync. | "admin" |
-p (password) | Required. The password of the above User ID parameter. This can optionally be encrypted. For a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. | "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" |
-f (feed) | Required. The name of the Data Sync Configuration as defined in Cinchy | "Contact Import" |
Table Details | Values |
Table Name | People |
Icon + Colour | Default |
Domain | Sandbox (if this domain does not exist please either create it or make sure to update this parameter where required during the data sync) |
Column Details | Values |
Column 1 | Column Name: Name Data Type: Text |
Column 2 | Column Name: Title Data Type: Text |
Column 3 | Column Name: Company Data Type: Text |
Source | The source location of your file. This can be either Local, S3, or Azure Blob Storage. | Local |
Delimiter | The type of delimiter on your source file. | Since our file is a CSV, the delimiter is a comma, so we use the ',' value. |
Text Qualifier | A text qualifier is a character used to distinguish the point at which the contents of a text field should begin and end. | "" |
Header Rows to Ignore | The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Path | The path to your source file. In this case, it is the Parameter that was set in step 3. | @filepath |
Choose File | This option will appear once you've correctly set your Path value. | Upload the sample CSV for this example. |
Sync Key Column Reference | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It is used to match data between the source and the target. This allows for updates to occur on changed records. | Name |
New Record Behaviour | This defines what action is taken when a new record is found in the sync source. This can be either Insert or Ignore. | Insert |
Dropped Record Behaviour | This defines what action is taken when a dropped record is found in the sync source. This can be either Delete, Ignore, or Expire. | Delete |
Changed Record Behaviour | This defines what action is taken when a changed record is found in the sync source. This can be either Update, Ignore, or Conditional. | Update |
Name | The name of your data sync. | Contact Import |
Parameter | Since this is a local file upload, we also need to set a Parameter. This value will be referenced in the "path" value of the Load Metadata box | Parameter |
Source | Defines whether your source is Local (PATH), S3, or Azure. | PATH |
Path | Since this is a local upload, this is the path to your source file. In this case, it is the value that was set for the "Parameter" value, preceded by the '@' sign. | @Parameter |
Delimiter | The type of delimiter on your source file. | Since our file is a CSV, the delimiter is a comma, so we use the ',' value. |
Text Qualifier | A text qualifier is a character used to distinguish the point at which the contents of a text field should begin and end. | "" |
Header Rows to Ignore | The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Column Name | The name(s) of the source columns that you wish to sync. Note that in this example we have selected more columns than we will map, in order to show how Connections ignores unmapped data. | "First Name" "Last Name" "Email Address" "Title" "Company" |
Column Data Type | The data type that corresponds to our selected source columns. | "Text" |
Domain | The domain of your Cinchy Target table. | Sandbox |
Table | The name of your Cinchy Target table. | People |
Column Mapping Source Column | The name(s) of the source columns that you are syncing. | "Company" "Title" "First Name" |
Column Mapping Target Column | The name(s) of the target column as it maps to the specified source column. | "Company" "Title" "Name" |
Sync Key Column Reference Name | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It is used to match data between the source and the target. This allows for updates to occur on changed records. | "Name" |
New Record Behaviour Type | This defines what will happen when new records are found in the source. | INSERT |
Dropped Record Behaviour Type | This defines what will happen when dropped records are found in the source. | DELETE |
Changed Record Behaviour Type | This defines what will happen when changed records are found in the source. | UPDATE |
-s (server) | Required. The full path to the Cinchy server without the protocol (e.g. cinchy.co/Cinchy). | "pilot.cinchy.co/Training/Cinchy/" |
-u (userid) | Required. The user id to login to Cinchy that has execution access to the data sync. | "admin" |
-p (password) | Required. The password of the above User ID parameter. This can optionally be encrypted. For a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. | "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" |
-f (feed) | Required. The name of the Data Sync Configuration as defined in Cinchy | "Contact Import" |
Any task/job scheduling application can be used to run data syncs automatically, on a schedule. This page will walk you through how to achieve this using Windows Task Scheduler.
Data synchronization commands can be scheduled to run automatically based on your data synchronization requirements (e.g. certain dates/times/intervals). The CLI command that is used to execute the data synchronization can be saved as a script in PowerShell, Batch, or a Command file.
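For instance, a minimal PowerShell wrapper script that Task Scheduler could point to might look like the following sketch (the install path, credentials, and syncdata verb are illustrative and should be replaced with the values from your own environment):

```powershell
# RunContactImport.ps1 -- example script saved for Windows Task Scheduler
# Navigate to the Cinchy CLI install directory (example path)
Set-Location "C:\Cinchy\Cinchy CLI"

# Execute the batch data sync using the same CLI parameters described in this example
.\Cinchy.CLI.exe syncdata -s "pilot.cinchy.co/Training/Cinchy/" -u "admin" -p "<encrypted password>" -f "Contact Import" -d "C:\Cinchy\temp"
```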
Here's an example of how to schedule the CLI with Windows Task Scheduler:
Launch Windows Task Scheduler
Create a folder to contain the CLI jobs (optional)
Right-click on the CLI jobs folder
Select Create Task
On the General tab, enter the name of the job you want to schedule
Click on the Triggers tab
Click New and set your schedule preferences
Click the Actions tab
Click the New button
Click Browse and navigate to the folder that contains the data sync scheduling script for execution
In the Start in (optional) field, paste the path to your Cinchy CLI folder
Name | The name of your Listener Config | New Hire Sync |
Event Connector Type | Select from the drop-down list which event stream you are listening in on. | Cinchy CDC |
Topic | This column expects a JSON value with certain specific information. Please review the Topic Column table below for details. |
Connection Attributes | This section is not required for data syncs using the Cinchy Event Broker/CDC, so we can just enter "{}" | {} |
Status | This sets whether your config is Enabled or Disabled. You can leave this blank until you want to turn on your config. |
Data Sync Config | The name of the Data Sync Config you created in the Connections UI or via XML. | New Hires |
Auto Offset Reset | If the listener is started and there is either no last message ID or the last message ID is invalid (because it was deleted or the listener is new), this column is used as a fallback to determine where to start reading events from. Earliest will start reading from the beginning of the queue (when CDC was enabled on the table); this might be the right configuration if your use case is recoverable or re-runnable and you need to reprocess all events to ensure accuracy. Latest will start from the point after whatever was last processed; this is the typical configuration. None will not start reading any events. | Latest |
Table GUID | The GUID of the table whose notifications you wish to consume. You can find this in the Design Table screen for Cinchy v5.5+, and in the Tables table otherwise. |
Column(s) | The names of the columns you wish to include. Note: If you will be using the runQuery=true parameter in your data sync, you only need to include the Cinchy Id in the topic JSON. |
BatchSize | The desired result batch size. This will default to 1 if not passed in. The maximum batch size is 1000; using a number higher than that will result in a Bad Request response. |
Filter | Optional. When CDC is enabled, you can set a filter on columns where you are capturing changes in order to receive specific data. |
Post sync scripts are written in CQL and can be added to the end of a sync to allow you to do more with your data, such as inserting retrieved values into a Cinchy table.
In this example, we will be doing a batch data sync using the Cinchy Table [Product].[Names Test]. In this scenario we will be using the following simple API as our destination: https://cinchy.net/API/Product/LatestCinchyReleaseDate
When we run our batch job, it will check for updates in our source table. These updates will trigger our REST API to fetch our defined value, which we can use in a post-sync script. In this case, our script will insert this value into a second table, [Product].[Response Table].
The following steps will walk you through how to utilize this functionality.
Set up your data sync to a REST API as normal. You can review our Source (Image 1), Destination (Image 2), and Sync Behaviour (Image 3) below.
2. Under the Post Sync tab, we want to input the CQL that, in our case, will take the defined variable from our REST API and insert it into the [Product].[Response Test] table (Image 4).
3. Add in your Permissions and click Save.
4. Once we have finished configuring our sync, we can run the job. It will check for new data in our Name column on our [Product].[Names] table (Image 5). When found, it will trigger the REST API to GET our “value” variable. Our post sync script will then take that value and INSERT it into the [Product].[Response Test] table (Image 6).