A data sync in Cinchy synchronizes information between a source and a destination. This guide walks you through the process of setting up your data syncs.
The basic workflow of a data sync in Cinchy is:
You have two ways to set up a data sync in Cinchy:
Use the Connections UI to input and save configuration details. The data will be stored as an XML file in the Data Sync Configurations table.
Directly upload an XML config into the Data Sync Configurations table.
To use the Connections UI, open the Connections Experience.
The UI has six tabs. Each tab requires data for your connection setup:
Info
Source
Destination
Sync Actions
Post Sync (Optional)
Jobs (Optional)
The Info Tab has fundamental details about your data sync, such as its name and access controls. You must add a name and select an Admin Group. You can also use variables for advanced functionality.
The Source Tab defines the origin of your data sync. Each data source type, from specific file formats to integrated software systems, requires unique parameters. See the Supported Data Sources list for available options.
When working with real-time sync sources, you'll notice an extra tab for Listener configuration. The adjustments you make here directly influence the Listener Config table. Navigate through the options and set up as needed. For more info, see the Listener Config and Sync Source pages.
The Destination Tab identifies where your data sync goes. Each destination comes with its own set of parameters. You must map each destination to its source. Consult the Supported Destinations directory for specifics.
In the Sync Actions Tab, you can choose your preferred data action. Your main options are Full File Sync and Delta Sync. Not sure about the differences? Check out this comparison for more details.
In the Post Sync Tab, you can use Cinchy Query Language (CQL) to refine the post-sync data. For example, you could set up a post-sync script to push retrieved data values into a specific Cinchy table. You can find more on this in the Post-sync scripts page.
In the Jobs tab, you can start, monitor, and troubleshoot batch jobs. You can also view sync outputs or download detailed logs for analysis. For non-default user operations, ensure you have the right credentials and permissions.
You can also use the Jobs tab to track potential issues with your real-time syncs. Real-time syncs become operational once the Listener Config is enabled; you don't need to start jobs manually.
You can also set up a data sync in Cinchy by uploading a formatted XML into the Data Sync Configurations table. This method is only recommended for those with advanced knowledge in data sync operations.
Unique XML patterns may exist across different sources and targets. If you're unfamiliar with this process, check out the Delimited File source to Cinchy Table batch data sync example first.
Access the Data Sync Config Table: In the Cinchy platform, open the Data Sync Configurations table.
Insert Data Sync XML: For a new row, double-click the Config XML column and paste your Data Sync XML.
Define Group Permissions: Adjust the required permissions in the appropriate columns.
Review the XML Data: After finalizing your Data Sync XML, return to the Data Sync Configurations table.
Initiate Sync with the CLI: If you haven't installed the CLI, refer to the CLI installation guide. Otherwise, launch PowerShell and navigate to the Cinchy CLI directory.
Run the CLI Command:
More details on CLI commands can be found in the CLI commands list.
If you are setting up a real-time sync, you must set up a listener configuration. You must configure your Event Stream Source with your data sync information. You can review more on the Listener Config here.
Navigate to the Listener Config table in Cinchy (Image 12).
In a new row, add in your listener configuration data. See Supported real-time sync sources for more information.
Make sure to set the config to Enabled.
The following pages show basic examples of both batch and real-time data syncs. Use these examples as a reference point for learning more about Cinchy data syncs.
This page outlines the two different types of Data Syncs available in Cinchy.
Batch syncs work by processing a group or a ‘batch’ of data all together rather than each piece of data individually. When the data sync triggers, it will compare the contents of the source to the target. The Cinchy Worker will decide to add, delete, or update data. You can run a batch sync as a one-time data load operation, or you can schedule it to run periodically using an external Enterprise Scheduler.
A batch sync is ideal in situations where the results and updates don’t need to occur immediately but they can occur periodically. For example, a document that's reviewed once a month might not need an update for every change.
At a high level, running a batch data sync operation performs these steps (Image 1):
The sync connects to Cinchy and creates a log entry in the Execution Log table with a status of running.
It streams the source and target into the CLI. Any malformed records or duplicate sync keys are written to the source and target errors CSVs (based on the temp directory).
It compares the sync keys to match up source and target records.
The sync checks if there are changes between the matched records.
For the records where there are changes, it groups them into insert, update, and delete batches.
It sends the batches to the target and records failures in the sync errors CSV and the Execution Errors table.
Once complete, it updates the Execution Log entry with the final status and execution output.
In real-time syncs, the Cinchy Listener picks up changes in the source immediately as they occur. These syncs don't need to be manually triggered or scheduled using an external scheduler. Setting up a real-time sync does require the extra step of defining a listener configuration for it to execute.
Real-time sync is ideally used in situations where results and responses must be immediate.
For example, a document that's constantly checked and referred to should have the most up-to-date and recent information.
You can use the following sources in real-time syncs:
Cinchy Event Broker/CDC
MongoDB Collection (Event Triggered)
Polling Event
REST API (Event Triggered)
Salesforce Platform Event
At a high level, running a real-time data sync operation performs these steps (Image 2):
The Listener is successfully subscribed and waiting for events from the streaming source.
The Listener receives a message from the streaming source and pushes it to SQL Server Broker.
The Worker picks up the message from SQL Server Broker.
The Worker fetches the matching record from the target based on the sync key.
If changes are detected, the Worker pushes them to the target system and logs successes and failures in its log file.
When configuring a data sync you must set your sync behaviour. You have two options for this: Full File or Delta.
Full File syncs intake both the source and the destination data and reconcile the records by matching up the sync key. This determines any differences and allows it to perform updates, inserts, ignores, or deletes at the destination.
Delta syncs skip the reconciliation process. In batch syncs, a delta sync simply grabs records from the source and inserts them into the destination. In real-time syncs, it may act differently depending on the event type. For example, when using the Cinchy Event Broker/CDC with an insert event, a delta sync will insert the data into the destination, an update event will update it, and so on.
Delta syncs also have the option to provide an "Action Type Column" for REST API destinations. This reads the value of the source record from a specified column: if the value is INSERT, it inserts the record; if UPDATE, it updates it; if DELETE, it deletes it.
When using the Full File synchronization strategy, there are four distinct sections that must be configured: the Sync Key and the Sync Record Behaviours, which include actions for New, Dropped, and Changed records (Image 1).
The sync key is a unique key reference you use for data syncs from the data source into your destination. You can use it to match the data between the source and the target for updates on changed records.
To set this using a config XML, use the following guide:
Elements: <SyncKeyColumnReference>
This is used in the <SyncKey> element when specifying which columns in the Target Table to be utilized as a unique key for the syncing process.
Contained-In: <SyncKey>
Attributes: name. The name of a column in the destination that you are syncing data into.
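As a small illustrative sketch built from the elements above (the column name "Employee ID" is hypothetical), the sync key section of a config XML looks like this:

```xml
<SyncKey>
  <SyncKeyColumnReference name="Employee ID" />
</SyncKey>
```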
The Sync Record Behaviour divides into three subsections, which define what action to take on certain records (Image 2).
Values in the attributes section of the config XML for record behaviour are case sensitive.
New Record Behaviour defines what action to take when a new record is found in the sync source. This can be either INSERT or IGNORE.
To set this using a config XML, use the following guide:
Dropped Record Behaviour defines what action to take when a record isn't found in the sync source, but exists in the target. This can be either DELETE, IGNORE, or EXPIRE.
To set this using a config XML, use the following guide:
Changed Record Behaviour defines what action to take when a record with a matching sync key is found in the sync source and also exists in the target. This can be either UPDATE or IGNORE, or CONDITIONAL as of Cinchy v5.6.
To set this using a config XML, use the following guide:
When using the Delta synchronization strategy there is one optional configuration that you can expose when running a sync with a REST API destination (Image 3).
The Action Type Column reads the value of the source record from a specified column. If the value is INSERT, then it inserts the record, UPDATE, then it updates, DELETE, then it deletes.
Added in Cinchy v5.6, the Changed Record Behaviour - Conditional feature allows you to define specific conditions upon which to update your records (Image 4).
You can add multiple Conditions to a single data sync by using the AND/OR and +Rule buttons.
You are able to group your Rules into a rule set by using the +Ruleset button.
If your Condition evaluates to true, then it will update your records.
Use the left-most drop-down to select either a source or a target column, as defined in your Source and Destination tabs.
Use the centre drop-down to select from the following options:
=
!=
Contains
Is Null
Is Not Null
Use the right-most drop-down to:
Add a plain value (ex: text, numerical, etc.). This will adjust based on the column data type picked in the left-most drop-down. For example, if in the source schema the column is a date, then it renders a date picker.
Select either a source or a target column as defined in your Source and Destination tabs (when used in conjunction with the Use Columns checkbox)
For example, the below condition would only update records where the target column "Name" is null (Image 5).
New Record Behaviour:

| Attribute | Description | Values |
|---|---|---|
| type | The type defines the action upon the new record. | It can either be INSERT or IGNORE. |

Dropped Record Behaviour:

| Attribute | Description | Values |
|---|---|---|
| type | The type defines the action upon the dropped record. | It can either be IGNORE, EXPIRE, or DELETE. |
| expirationTimestampField | This attribute is only applicable if the type is equal to EXPIRE. | The expirationTimestampField is the name of an existing date field to be filled with the current time. |

Changed Record Behaviour:

| Attribute | Description | Value |
|---|---|---|
| type | The type defines the action upon the changed record. | It can either be UPDATE, IGNORE, or CONDITIONAL. |
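As a rough sketch of how these behaviours can appear in a config XML: the DroppedRecordBehaviour element name appears in examples later in this document, NewRecordBehaviour and ChangedRecordBehaviour are assumed by analogy, and "End Date" is a hypothetical expiration field:

```xml
<NewRecordBehaviour type="INSERT" />
<DroppedRecordBehaviour type="EXPIRE" expirationTimestampField="End Date" />
<ChangedRecordBehaviour type="UPDATE" />
```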
This page outlines some common design patterns. You can use this basic list as a starting point to design your own data syncs.
When creating a data sync, you need to know if you are synchronizing the full data set or just a subset of the data.
For a full data synchronization, set <DroppedRecordBehaviour type="DELETE" /> so that any records that are no longer in the source are deleted in the target.
For a partial data synchronization, set <DroppedRecordBehaviour type="IGNORE" />; otherwise, the sync will delete any records that aren't in your partial recordset. This means you can't delete any records during a partial data synchronization.
You can create a full data synchronization on a partial dataset by filtering the source and target. For example, if you want to sync transactions from one system to another but there are a lot of transactions, you can run a data sync where you filter by <Filter>[Transaction Date] > 100000</Filter> in both the source and the target. This way, you can sync a smaller dataset while still being able to perform deletions and ensure a full synchronization of that partition of the data.
When syncing data that's linked to other data, the order of the data syncs is important. You should sync the data that's linked to first.
For example, if you have customers and invoices and the invoices link to each customer, then sync the customer data first. Therefore, when the invoices sync, they will link to the appropriate customer as the customer data is already in the target.
To create a reference data set (such as country codes) based on a set of shipping labels, first run the data against the Country Codes table before the shipping labels sync into the labels table.
In this scenario, set supressDuplicateError="false" when running the data against the Country Codes table, since duplicates are expected; they aren't errors, but they must be identified.
Sometimes different reference data values can mean the same thing, but different systems use different codes. In Cinchy’s country code example under ‘Populating Reference Data’:
System A uses the full country name (ex. Canada, United States),
System B uses the 2 letter ISO code (ex. CA, US), and
System C uses the 3 letter ISO code (ex. CAN, USA).
All three of these systems can sync into one shipping label table, with a link to Country, but depending on the system, we use a different link column reference. The same column mapping will look slightly different in each data sync to the Shipping Labels table.
You can change the display column in the Shipping Labels table to switch between the columns if you decide on a different code as the golden standard, without having to modify your data sync configurations.
If you sync from different sources into the same data set, you need to add the source of the data as a column to avoid overwriting records from other sources. This column will also be added to your sync key. For contacts you might have:
Once all your data from the various systems is in Cinchy, you can master that dataset and sync the master record back into any source systems. These syncs would filter the source by records where the [Master Record] column is set to true, and sync on the unique identifier within the source system. You would also only want to update records already in the source, rather than deleting unmastered data or adding all records from other systems.
To use different sources to enrich different fields on the same record, you should set the dropped record behaviour to ignore, and update the columns based on a sync key on the record.
The ability to create new records depends on the source system. Internal systems, such as customer invoices, should be able to create new customer records.
External data sources, such as a published industry report, will only add noise to your table when you attempt to insert new records.
You can add post sync scripts to a data sync configuration that run after the data sync has been completed.
For example, you can run a query to update the assignee on a lead after the leads data sync runs.
For example, a query that only updates where the assignee is empty, except for a high-value lead, where it's reassigned to the most senior person on each team (based on another table in the instance that has the seniority and team of each sales director).
If you have a file source (delimited, CSV, or Excel) and you want to sync data into Cinchy that doesn't have a sync key, you can add a calculated column for the row number of that record.
You will also want to add a unique calculated column for the file name to be able to re-run the data sync if any failures occur.
To run a bi-directional sync, you need to identify a source system unique identifier and run the following four data syncs for bidirectional syncing.
If one of the systems can't create new records, you can omit the data sync configuration where you create new records in the other system.
Run a data sync from the source into Cinchy filtering the source by records where the Cinchy ID (a custom field you create in the source) is empty. Insert these records into Cinchy and make sure to populate the source unique identifier column in Cinchy.
You can also sync data from Cinchy to the external source by inserting any records where a source unique identifier is empty.
Now that all records exist in both the external system and Cinchy, you can sync data based on the target system's unique ID. In this case, you are syncing data from the external source into Cinchy based on the Cinchy ID. Filter out records where the Cinchy ID is null here to avoid errors; the sync will pick up the new records the next time it runs.
You can sync any data changes from Cinchy into the external source using the external unique identifier as the sync key, filtering out records in Cinchy where the external identifier is empty.
To run intensive summary queries on the platform, you must create a data sync to cache the data in a Cinchy table. To do so, sync the results of your Cinchy query into a Cinchy table with the same schema. You can then schedule the CLI to run as often as you would like your cache to expire. You can point other queries or reports to the query from this table, rather than the complex query itself.
To add more fields to sync, make sure that the columns are in the target system first. You can then swap in the new data sync configuration at any time, and it will get picked up for future executions.
To remove fields from a data sync, swap out your data sync configuration. You can optionally delete the field in the source or target system afterward.
If you aren't adding or deleting fields, swap out your sync configuration. To add or remove fields, follow the guidelines above for when to swap in the config versus making the data model changes (add columns first, swap out config, validate config, delete unneeded columns).
If you want to change the sync key, swap out your data sync configuration. It's a good idea to check if your new sync key is unique in both your source and target. The CLI worker will sync using the first record it finds in the source to the first record it finds in the target. Checking for duplicate sync keys will allow you to understand whether any unexpected behaviour will occur.
This page provides information on both Schema Columns (used when configuring Data Sync Sources) and Column Mappings (used when configuring Data sync Destinations).
Schema columns refer to your mapping on your data source. For example, if your source is a CSV with the columns 'Name', 'Age', and 'Company', you would set up three matching schema columns in the Connections UI or data sync XML. These schema columns map to your destination columns for your data sync target, so that the data knows where to go.
You don't have to set up an exact 1:1 relationship between source columns/data and schema columns.
The only difference between the setup of schema columns in the Connections UI compared to data sync XML is the addition of the Alias column, which only appears in the Connections UI. The Alias column gives the user an alternative name to the column mapping (usually used for easier readability). The column types are detailed below.
Note that some source types have unique parameters not otherwise specified in other sources. You can find information on those, where applicable, in the source's main page.
You can review the various attribute descriptions here.
Fill in the following attributes for a Standard Column (Image 1):
Name: The name of your column
Formula: The formula associated with your calculated column
Data Type: The return data type of your column. This can be either:
Text
Date
Number
Boolean
If a source column (of any type) is syncing into a Cinchy Target Table link column, the source column must be dataType="Text".
Description: Describe your column
Advanced Settings:
You can select if you want this column to be mandatory
You can choose whether your data must be validated
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
For Text data types, you can choose whether to trim the whitespace.
To add in a Transformation > String Replacement enter the following:
Pattern for your string replacement
Replacement
You can have more than one String Replacement.
Fill in the following attributes for a Standard Calculated Column (Image 2):
Name: The name of your column
Formula: The formula associated with your calculated column
Data Type: The return data type of your column. This can be either:
Text
Date
Number
Boolean
If a Destination column is being used as a sync key, its source column must be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Fill in the following attributes for a Conditional Calculated Column (Image 3):
Name: The name of your column
Data Type: The return data type of your column. This can be either:
Text
Date
Number
Boolean
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Condition:
Name:
IF: Click Edit to create the "if" for your Conditional Statement (Image 4)
Then: Click Edit to create the "then" for your Conditional Statement (Image 5)
Default: Click Edit to create your default expression (Image 6)
Fill in the following attributes for a JavaScript Calculated Column (Image 7):
Name: The name of your column
Data Type: The return data type of your column. This can be either:
Text
Date
Number
Boolean
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Description: Describe your calculated column
Advanced Settings:
You can select if you want this column to be mandatory.
You can choose whether your data must be validated.
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Script: Enter in your JavaScript
This XML element defines each column and its data type in the data set:
name
The user-defined name for each column. This is used when you want to indicate the name of the sourceColumn.
dataType
The data type of each column could be Text, Date, Number, Boolean, Geometry, or Geography.
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
To sync into a Cinchy table with a Geometry or Geography column, those respective data types must be used in the data sync, and the input should be in well-known text (WKT) format.
The dataType affects how the source and target data is parsed, and also determines how the fields are compared for equality. If your sync keeps updating a field that hasn't changed, check your data types.
ordinal
The ordinal is the position of the column in the source file, starting at 1.
For example, given line 1 of a .csv file:
Name, Location, Age
The ordinal for Age would be 3.
maxLength
The max length of data in the column.
isMandatory
Boolean value that determines if the field is a mandatory column to create a row entry.
A defined SyncKey column of any data type can be checked for NULL values using isMandatory=true. When validation fails, an error message is displayed in the command line. For other columns, when validation fails, the Execution Errors Table is updated with the Error Type "Mandatory Rule Violation" for the column and row that failed.
validateData
Boolean value determining whether to validate the data before insertion. Valid data means it fits all the constraints of the column (dataType, maxLength, isMandatory, inputFormat). If the data isn't valid and validateData is true, then the entry won't be synced into the table. The Execution Errors Table is also updated with the appropriate Error Type (Invalid Format Exception, Max Length Violation, Mandatory Rule Violation, Input Format Exception).
trimWhitespace
Boolean value determining whether to trim white space.
description
Description of the column.
inputFormat
Date fields support the inputFormat attribute, which adheres to the C# .NET DateTime.ParseExact format. See here for reference. The inputFormat attribute is useful when the source file needs format changes applied to the input data.
Column mappings define how a single column from the data source maps to a column in a target table. Each <ColumnMapping> has both a source and a target. If the destination is a Cinchy table and the target column is a link, then a third attribute becomes available, called linkColumn, which you can use to specify the column used to resolve the linked record from the source value. The value of sourceColumn should match the name attribute of the source schema column. The value of targetColumn should match the column name in the target table.
Below is an example of a Column Mapping in the experience followed by the equivalent XML. In the experience, the Source Column attribute is a dropdown of columns configured in the Source Section.
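The original screenshot and XML aren't reproduced here, but as a hedged sketch (the column names are hypothetical), a plain mapping and a link-column mapping could look like this:

```xml
<ColumnMapping sourceColumn="First Name" targetColumn="Name" />
<ColumnMapping sourceColumn="Country Name" targetColumn="Country" linkColumn="Name" />
```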
sourceColumn
The name of the column in the data source. The name corresponds to the user defined name from the elements in the schema.
targetColumn
The name of the column in the target table. This would be a table that's already created in Cinchy and defined in the Target.
linkColumn
The name of a column from the linked table. If the target column is a linked column from another table, you may input data based on any of the linked table's columns.
If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.
Examples 1 and 2 show calculated columns within the Connections UI and their relevant XML.
Example 3 demonstrates the use of JavaScript in Calculated Columns.
The value of this column for each record is whatever the value is of the lob parameter.
The CONCAT function supports more than 2 parameters, and you must enclose any literal values in single quotes ( 'abc')
The values of two columns are concatenated together.
The CONCAT function supports more than 2 parameters, and you must enclose any literal values in single quotes ( 'abc')
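For illustration only, a concatenation like the one above could be expressed in the config XML roughly as follows; the CalculatedColumn element name, the bracketed column references, and the column names are assumptions, while the attributes match the list later in this section:

```xml
<CalculatedColumn name="Full Name"
                  formula="CONCAT([Last Name], ', ', [First Name])"
                  dataType="Text" />
```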
This example splits a [Name] column with the format "Lastname, Firstname" into two columns: [First Name] and [Last Name].
name
The user-defined name for each calculated column. This is used when you want to indicate the name of the sourceColumn.
formula
CQL expression used to define the formula. Supported functions:

| Function | Details |
|---|---|
| CONCAT(colA, colB, 'literal value1', 'literal value2') | Concatenates multiple columns, parameters, or literal values together. Supports two or more parameters. |
| row_number() | The numeric row number of files (Excel, delimited, fixed width). Currently not supported in conjunction with other formulas/parameters. |
| isnull(colA,'alt value') | If the first column is null, use the second value (can be a literal or another column). |
| hash('SHA256',colA) | Hashes the column using the algorithm specified. |

We recommend you salt your value before you hash it.
dataType
The data type of each column could be Text, Date, Number or Bool.
maxLength
The max length of data in the column.
isMandatory
Boolean value that determines if the field is a mandatory column to create a row entry.
validateData
Boolean value determining whether to validate the data before inserting. Valid data means it fits all the constraints of the column (dataType, maxLength, isMandatory, inputFormat). If the data isn't valid and validateData is true, then the entry won't sync into the table. The Execution Errors Table also updates with the appropriate Error Type (Invalid Format Exception, Max Length Violation, Mandatory Rule Violation, Input Format Exception).
description
Description of the column.
trimWhitespace
Boolean value that determines whether to trim white space.
The pages listed under this section aren't required for most data syncs, but they can help you create more robust use cases when applied correctly.
When syncing a Data Source, you may have the option to add extra configuration sections, such as an Auth Request, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in an Auth Request, fill in the following parameters:
HTTP Method: Either POST or GET
Endpoint URL
From the drop down, you can also add:
Request Headers
Body
Variables to Extract
When configuring a Data Sync, you may have the option to add in extra configuration sections, such as Request Headers, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in Request Headers, fill in the following parameters:
Header Type:
Authorization: You can use an authorization request header to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.
Content-Type
Header
Name: The name of the HTTP Header to add
Is Encrypted?
Header Value
Post sync scripts are written in CQL and can be added to the end of a sync to allow you to do more with your data, such as inserting retrieved values into a Cinchy table.
This example takes you through a batch data sync using the Cinchy Table [Product].[Names Test], with a REST API as the destination. When you run the batch job, it will check for updates in the source table. These updates trigger the REST API to fetch our defined value, which we can use in a post-sync script. In this example, the script will insert this value into a second table, [Product].[Response Test].
The following steps will walk you through how to use this functionality.
Set up your data sync to a REST API. You can review the Source (Image 1), Destination (Image 2), and Sync Behaviour (Image 3) below.
Under the Post Sync tab, input the CQL that takes the defined variable from the REST API and inserts it into the [Product].[Response Test] table (Image 4).
Add in your Permissions and click Save.
After you configure the sync, run the job. It will check for new data in the Name column on the [Product].[Names] table (Image 5). When found, it will trigger the REST API to GET the “value” variable. The post sync script will take that value and INSERT it into the [Product].[Response Test] table (Image 6).
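A minimal sketch of such a post-sync script, assuming the REST variable is named value and the destination column is Response (both assumptions):

```sql
-- Hypothetical post-sync CQL: insert the value retrieved by the REST API
-- into the [Product].[Response Test] table.
INSERT INTO [Product].[Response Test] ([Response])
VALUES (@value)
```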
Variables are values that can be dynamically inserted when the sync job is run. The variables you define here can be referenced in fields in other parts of your sync config (using the @ prefix) and when the job is run you can be prompted for their values.
The execution variables are either passed in at the time of execution or calculated through a formula. The value of the name attribute is passed in as the command line option param-values. (This is optional if the path to the source file to load is specified in the path attribute of the source element, or if calculated column formulas don't reference execution variables.)
While in the UI the term is variables, please note that the paired XML configuration will refer to the term as parameters.
You can choose to just use plain text in the Name field of the Variable or you can use a calculated formula.
The following formulas are currently supported by Connections.
FILENAME(<some-path>, <some-regex>): The FILENAME formula takes in two variables:
A reference to the first parameter (like a file path)
A regular expression that includes a match group. The first match group's value is assigned to the variable. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
FILEPATH(<some-path>, <some-regex>): The FILEPATH formula takes in two variables:
A reference to the first parameter (like a file path)
A regular expression that includes a match group. The first match group's value is assigned to the variable. The FILEPATH function executes the regex against the full file path (including the directory structure).
GUID(): The GUID formula uses a random GUID for that variable's value. Use GUID() to generate a unique identifier to use during the context of the sync. For example, use it to track changes made from a particular sync.
ENV(<place-environment-variable-here>): The ENV formula uses an environment variable available in the connections/worker pods as the value of the variable.
We don't recommend using the ENV formula for credentials.
GETSECRETVALUE(domain, secretname): The GETSECRETVALUE formula can be used to call a stored secret. This secret can then be used anywhere variables are supported where you may need to insert sensitive information, such as a connection string, Access Key ID, or within a REST URL, Body, or Header.
Below are the three Variable examples shown in the Connections experience, followed by the relevant XML:
A name attribute references an execution variable (Image 2). You can use this when pulling in a local file for an Excel sync to specify the path to your file on your machine.
The FILEPATH function takes in two variables:
A reference to the first variable, such as a file path.
A regular expression that includes a match group (Image 3). The first match group's value is assigned to the variable. FILEPATH function executes regex against the full file path (including the directory structure). For the full formula, see the XML example below.
The FILENAME function takes in two variables:
A reference to the first variable, such as a file path.
A regular expression that includes a match group (Image 4). The first match group's value is what gets assigned to the variable. FILENAME function applies the regex only to the name of the file (excluding the directory structure).
The ENV formula uses an environment variable available in the connections/worker pods as the value of the variable (Image 5). An example use case would be a situation where the URLs used in a REST API sync are different across environments; instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate the value from your pod configuration files.
Example 5: The GETSECRETVALUE formula (Image 6) is input as a variable for a REST Source and is used to call a stored secret. This secret can then be used in the REST Header (Image 7).
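As noted below, the paired XML uses the term parameters. A hedged sketch of what these variables could look like in XML (the formula attribute and the regular expression shown are assumptions; confirm against a saved sync config):

```xml
<Parameters>
  <Parameter name="filepath" />
  <Parameter name="filename" formula="FILENAME(@filepath, '(.*)\.csv')" />
  <Parameter name="syncId" formula="GUID()" />
</Parameters>
```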
While in the UI the term is variables, the paired XML configuration uses the term parameters.
This section details how to configure your platform to use Environment Variables.
To create or change environment variables on Windows:
On the Windows taskbar, right-click the Windows Icon > System.
In the Settings window, click Related Settings > Advanced System Settings > Environment Variables (Image 8).
Under System Variables, click New to create your new environment variable (Image 9).
To configure an environment variable in Kubernetes, do the following:
Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\connections\kustomization.yaml file.
Under patchesJson6902 > patch, add your environment variable as shown in the code snippet below. Input your own namespace and variable name where indicated.
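The original snippet isn't reproduced here, but a hedged sketch of such a patch could look like the following; the deployment name, namespace, and variable name are placeholders/assumptions:

```yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: connections-app        # assumption: your Connections deployment name
      namespace: <your-namespace>  # input your own namespace
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: MY_ENV_VARIABLE    # input your own variable name
          value: "my-value"
```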
Navigate to your platform_components/connections/connections-app.yaml file.
Under Spec > Template > Spec > Containers > ENV, add in your environment variable. This addition depends on what value you are using as an environment variable. The below code snippet shows a basic example:
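A minimal sketch of that addition, assuming a simple literal value (the container and variable names are placeholders):

```yaml
spec:
  template:
    spec:
      containers:
        - name: connections-app
          env:
            - name: MY_ENV_VARIABLE
              value: "my-value"
```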
Make the same change to the platform_components/worker/worker-app.yaml file.
Push and merge your changes.
You can use filters in your source and target configurations to define specific subsets of data that you want to use in your syncs.
When syncing a Data Source, you may have the option to add additional configuration sections, such as a Filter, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
Note that if your source only has one of the listed options, it will appear by default instead of in a drop-down.
A filter on your source is optional. It relies on a source-specific syntax for filtering out records from your source. The filter can reference execution parameters.
This is only available if using a table, not a query. For queries, include the filter in the query itself.
There can only be one <Filter> for each source. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, there is a "Conditional" option for source filters in the Connections UI.
Once selected, you will be able to define the conditions upon which data is pulled into your source via the filter. After data is pulled from the source, the conditional UI filters the set of returned records down to the ones that match the defined conditions.
Multiple Conditions can be added to a single data sync by using the AND/OR and +Rule buttons.
You are able to group your Rules into a rule set by using the +Ruleset button.
The left-most drop down is used to select either a source or a target column as defined in your Source and Destination tabs.
The centre drop-down is used to select from the following options:
=
!=
Contains
Is Null
Is Not Null
The right-most drop-down can either be used for a plain value (ex: text, numerical, etc.). This will adjust based on the column data type picked in the left-most drop-down. For example, if in the source schema the column is a date, then it renders a date picker.
For example, the below condition would only bring in records where the source column Employee Status isn't null (Image 2).
A target destination filter is optional. It relies on a target-specific syntax for filtering out records from your target. The filter can reference execution parameters.
There can only be one <Filter> for each target. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
"CONDITIONAL" will open a new UI section allowing you to define conditions upon which to update your records. See the Changed Record Behaviour - Conditional section for more information on the Conditional behaviour.
Source filter syntax:

| Source | Definition |
|---|---|
| File | For file data sources (delimited, fixed width & Excel), the syntax conforms to the .NET framework's RowFilter on a DataView. |
| Salesforce | For Salesforce, the syntax is the SOQL where clause (without the where expression). |
| Dynamics | For Dynamics, the syntax is the OData $filter clause (without the $filter= expression). |
| Cinchy | For Cinchy, the syntax is the CQL where clause (without the where expression). |
| SqlServer | For SqlServer, the syntax is the T-SQL where clause (without the where expression). |

Target filter syntax:

| Source | Definition |
|---|---|
| Salesforce | For Salesforce, the syntax is the SOQL where clause (without the where expression). |
| Dynamics | For Dynamics, the syntax is the OData $filter clause (without the $filter= expression). |
| Cinchy | For Cinchy, the syntax is the CQL where clause (without the where expression). |
| SqlServer | For SqlServer, the syntax is the T-SQL where clause (without the where expression). |
You can use any task/job scheduling application to automatically run data syncs on a schedule. This page describes how to do this using Windows Task Scheduler.
You can schedule data synchronization commands to run automatically based on your requirements, such as at certain dates, times, or intervals. You can save the CLI command used to execute the data synchronization as a script in PowerShell, Batch, or a Command file.
Here's an example of how to schedule the CLI with Windows Task Scheduler:
Launch Windows Task Scheduler
Create a folder to contain the CLI jobs (optional)
Right-click on the CLI job folder
Select Create Task
On the General tab, enter the Name of the job you want to schedule
Click the Trigger tab
Click New and set your schedule preferences
Click the Action tab
Click the New button
Click Browse and navigate to the folder that contains the data sync scheduling script for execution
In the Start in (optional) field, copy and paste the path for your Cinchy CLI folder
This example will take you through the creation and execution of a batch data sync where data is loaded into Cinchy via a CSV. In this example, we will be loading information into the People table in Cinchy. This is a self-contained example you can recreate in any Cinchy environment without dependencies.
Use Case: You have historically maintained a record of all your employees in a spreadsheet. Knowing that this significantly hinders your data and data management capabilities, you want to sync your file into Cinchy. Once synced, you can manage your employee information through the Cinchy data browser, instead of through a data silo.
For more information, see the documentation on Delimited File Sources.
This section contains:
The People Table XML schema.
A sample source CSV data file to load into Cinchy.
To create the People table used in this example, you can use the XML below. You can also create the table manually, as shown in the section below.
Log in to your Cinchy platform.
From under My Network, click the create button.
Select Table.
Select From Scratch.
Create a table with the following properties (Image 1):
Select Columns in the left hand navigation to create the columns for the table.
Select the "Click Here to Add" button and add the following columns:
Select Save to save your table.
You can download the sample CSV file used in this example below.
If you are downloading this file to recreate this exercise, the file path and the file name must be the following:
C:\Data\contacts.csv
You can also update the path parameter in the data sync configuration to match the file path and name of your choosing.
The source file contains the following information which will sync into the target Cinchy table (Image 2).
As we can see, the file has the following columns:
First Name
Last Name
Email Address
Title
Company
The People table created only has the following columns:
Name
Title
Company
When syncing the data from the source (CSV file) to the target (Cinchy People table), the batch data sync must consider the following:
The first and last name from the source must merge into one column in the target (Name).
The email address from the source isn't a column in the target, so this column won't sync into the target.
The title column will be an exact match from source to target.
The company column will also be an exact match from source to target.
You have two options when you create a data sync in Cinchy:
You can input all of your necessary information through the intuitive Connections UI. Once saved, all of this data is uploaded as an XML into the Data Sync configurations table.
You can bypass the UI and upload your XML config directly into the Data Sync configuration table.
This example will walk you through option one.
Within your Cinchy platform, navigate to the Connections Experience (Image 3).
In the Info tab, input the name of your data sync. This example uses "Contact Import" (Image 4).
Since this is a local file upload, we also need to set a Parameter. This value will be referenced in the "path" value of the Load Metadata box in step 5. For this example, we will set it to filepath (Image 5).
Navigate to the Source tab. This example uses the .CSV file you downloaded at the beginning of this example as our source.
Under Select a Source, select Delimited File (Image 6).
The "Load Metadata" box will appear; this is where you will define some important values about your source needed for the data sync to execute. Using the below table as your guide, fill in your metadata parameters (Image 7):
Click Load.
In the Available Columns pop-up, select all of the columns that you want to import from the CSV. For this example, we will select them all (noting, however, that we will only map a few of them later) (Image 8).
Click Load.
Once you load your source, the schema section of the page will auto populate with the columns that you selected in step 7 (Image 9). Review the schema to ensure it has the correct Name and Data Type. You may also choose to set any Aliases or add a Description.
Navigate to the Destination tab and select Cinchy Table from the drop down.
In the Load Metadata pop-up, input the Domain and Table name for your destination. This example uses the Sandbox domain and the People table (Image 10).
Select Load Metadata.
Select the columns that you wish to use in your data sync (Image 11). These will be the columns that your source syncs to. This example uses the Name, Title, and Company columns. Note that you will also have many Cinchy system tables available to use. Click Load.
The Connections experience will attempt to automatically map your source and destination columns based on matching names. In the below screenshot, it matched the "Company" and "Title" columns (Image 12). The "Name" target column isn't an exact match for any of the source columns, so you must match that one manually.
Select "First Name" from the Source Column drop down to finish mapping our data sync (Image 13).
Navigate to the Sync Actions tab. Sync actions have two options: Full File and Delta. In this example, select Full File.
Full load processing means that the entire amount of data is imported iteratively the first time a data source is loaded into the data studio. Delta processing means loading the data incrementally, loading the source data at specific pre-established intervals.
Set the following parameters (Image 14):
Navigate to the Permissions tab. Here you will define your group access controls for your data sync (Image 15). You can set this how you like. This example gives all users access to Execute, Write, and Read our sync.
Any groups given Admin Access will have the ability to Execute, Write, and Read the data sync.
Navigate to the Jobs tab. Here you will see a record of all successful or failed jobs for this data sync.
Select "Start a Job" (Image 16).
Load your sample .CSV file in the pop-up window (Image 17).
The job will commence. The Execution window that pops up will help you to verify that your data sync is progressing (Image 18).
Navigate to your destination table to ensure that your data populated correctly (Image 19).
Instead of the Connections UI, you can also set up a data sync by uploading a formatted XML into the Data Sync Configs table within Cinchy.
We recommend only doing so once you have an understanding of how data syncs work. Not all sources/targets follow the same XML pattern.
The example below is the completed batch data sync configuration. Review the XML and then refer to the filled XML example.
The below XML shows a blank data sync for a Delimited File source to a Cinchy Table target.
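The original XML sample isn't reproduced here, but the sketch below gives a rough idea of the shape of such a config, built from the elements described earlier in this guide (Parameter, SyncKey, SyncKeyColumnReference, ColumnMapping, DroppedRecordBehaviour). The root, source, and target element names and attributes are assumptions and may differ by Cinchy version; treat the XML saved by the Connections UI as the authoritative format.

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Illustrative sketch only; root, source, and target element names are assumptions. -->
<BatchDataSyncConfig name="" version="1.0.0" xmlns="http://www.cinchy.co">
  <Parameters>
    <Parameter name="" />
  </Parameters>
  <DelimitedDataSource path="" delimiter="," headerRowsToIgnore="1">
    <Schema>
      <Column name="" dataType="Text" />
    </Schema>
  </DelimitedDataSource>
  <CinchyTableTarget domain="" table="">
    <ColumnMappings>
      <ColumnMapping sourceColumn="" targetColumn="" />
    </ColumnMappings>
    <SyncKey>
      <SyncKeyColumnReference name="" />
    </SyncKey>
    <NewRecordBehaviour type="INSERT" />
    <DroppedRecordBehaviour type="IGNORE" />
    <ChangedRecordBehaviour type="UPDATE" />
  </CinchyTableTarget>
</BatchDataSyncConfig>
```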
The below filled XML example matches the Connections UI configuration made in Use the Connections UI. You can review the parameters used in the table below.
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 20).
In a new row, paste the Data Sync XML into the Config XML column (Image 21).
Define your group permissions in the applicable columns. This example gives All Users Admin Access.
The Name and Config Version columns will be auto-populated, as their values come from the Config XML.
Be sure when you are pasting into the Config XML column that you double click into the column before pasting, otherwise each line of the XML will appear as an individual record in the Data Sync Configurations table.
To execute your Data Sync you will use the CLI. If you don't have this downloaded, refer to the CLI commands list page.
In this example we will be using the following Data Sync Commands, however, for the full list of commands click here.
Launch PowerShell and navigate to the Cinchy CLI directory.
Enter and execute the following into PowerShell:
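The original command isn't reproduced here; the sketch below shows the general shape of such a call. The flag names are assumptions, except -d, which appears later in this guide, and the param-values option named in the Variables section; confirm the exact syntax against the CLI commands list.

```powershell
# Illustrative only; flag names below are assumptions except -d.
# Confirm the exact command and flags against the CLI commands list.
.\Cinchy.CLI.exe syncdata `
  -s "cinchy.mycompany.com" `
  -u "myuser" `
  -p "encryptedPassword" `
  -f "Contact Import" `
  -d "C:\Cinchy\temp" `
  -paramValues "filepath=C:\Data\contacts.csv"
```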
Once executed, navigate to your destination table to validate that your data synced correctly (Image 22).
To encrypt a password using PowerShell, complete the following:
Launch PowerShell and navigate to the Cinchy CLI directory (note: you can always type PowerShell in the Windows Explorer path bar for the Cinchy CLI directory)
Enter the following into PowerShell .\Cinchy.CLI.exe encrypt -t "password"
Hit enter to execute the command
Copy the password so it's accessible at batch execution time
Please note, you will need to replace "password" with your specific password.
The Execution Log table is a system table in Cinchy that logs the outputs of all data syncs (Image 23). You can always review the entries in this table for information on the progression of your syncs.
The Execution Errors table is a system table in Cinchy that logs any errors that may occur in a data sync (Image 24). Any data sync errors log to the temp directory outlined in the data sync execution command. For example, -d "C:\Cinchy\temp".
This example will take you through the creation and execution of a real-time data sync where data will be synced between two Cinchy tables based on real-time changes.
Your People table captures a view of various personnel information. Any time a new hire is added to the table, you want that information to be immediately synced into the New Employees table. We can solve this use case using the Cinchy Change Data Capture (CDC) function on our tables. This helps you to better keep track of all incoming people within your company.
This section has steps on how to:
Create the People table.
Create the New Employees table.
When you create tables to use with real-time syncs, make sure you turn on the Cinchy Change Data Capture feature through the Design Table > Change Notifications tab. This makes sure you capture real-time updates.
Log in to your Cinchy platform.
From under My Network, click Create > Standard Table > From Scratch.
Create a table with the following properties (Image 1):
If this domain doesn't exist, either create it or make sure to update this parameter where required during the data sync.
Click Columns in the left hand navigation to create the columns for the table.
Click the "Click Here to Add" button and add the following columns:
Select Change Notifications in the left hand navigation and select Publish Change Notifications.
Select Save to save your table.
Within the Cinchy platform, from under My Network, select Create
Select Table
Select From Scratch
Create a table with the following properties (Image 2):
Select Columns in the left hand navigation to create the columns for the table.
Select the "Click Here to Add" button and add the following columns:
Click the Save button to save your table.
You have two options when you create a data sync in Cinchy.
You can input all the necessary information through the Connections UI. Once saved, this data uploads as an XML file into the Data Sync configurations table.
You can bypass the UI and upload your XML config directly into the Data Sync configuration table yourself.
This example will walk you through both options.
Within your Cinchy platform, navigate to the Connections Experience (Image 3).
In the Info tab, input the name of your data sync. This example uses "New Hires" (Image 4).
As of 5.7, you can now configure the Topic JSON in the Source tab under the Listener section. If you still need to manually configure the Topic JSON, see the appendix for more information.
Navigate to the Source tab.
Under Select a Source, select Cinchy Event Broker (Image 5).
Under Listener, select the People table.
Navigate to the Destination tab and select Cinchy Table from the drop down (Image 6).
In the Connection section, input the Domain and Table name for your destination. This example uses the Sandbox domain and the New Employees table.
Click Load.
Select the columns that you wish to use in your data sync (Image 8). These will be the columns that your source syncs to your target. This example uses the Name and Title columns. You also have many Cinchy system tables available to use.
Click Load.
The Connections experience will try to automatically map your source and destination columns based on matching names. In the below screenshot, it has been able to correctly match the Name and Title columns (Image 8).
Navigate to the Sync Actions tab. Two options are available for data syncs: Full File and Delta. For this example, select Full File.
Set the following parameters (Image 9):
Navigate to the Permissions tab. Here you will define your group access controls for your data sync. You can set this how you like. This example gives all users access to Execute, Write, and Read our sync (Image 10).
Click Save.
Navigate to the Cinchy Listener Config table and validate your configuration. Make sure it's set to Enabled. Your real-time data sync should now be listening to your People table ready to push updates to your New Employees table.
Test your data sync by adding a new row to your People table. Ensure that the data is then updated across to the New Employees table (Images 11 & 12).
Instead of using the Connections UI, you can also set up a data sync by uploading a correctly formatted XML into the Data Sync Configs table within Cinchy.
We only recommend doing so once you have a good understanding of how data syncs work. Not all sources and targets follow the same XML pattern.
The below XML shows what a blank data sync could look like for our Cinchy Event Broker/CDC to Cinchy Table real-time sync with full file synchronization.
The below filled XML example matches the Connections UI configuration made in Use the Connections UI . You can review the parameters used in the table below.
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 13).
In a new row, paste the Data Sync XML into the Config XML column.
Define your group permissions in the applicable columns. This example gives All Users Admin Access.
The Name and Config Version columns will be auto-populated, as their values come from the Config XML.
Be sure when you are pasting into the Config XML column that you double click into the column before pasting, otherwise each line of the XML will appear as an individual record in the Data Sync Configurations table.
Navigate to the Cinchy Listener Config table and set up your configuration. Ensure it's set to Enabled. Your real-time data sync should now be listening to your People table ready to push updates to your New Employees table.
To execute your Data Sync you will use the CLI. If you don't have this downloaded, refer to the CLI commands list page.
In this example we will be using the following Data Sync Commands, however, for the full list of commands click here.
Launch PowerShell and navigate to the Cinchy CLI directory.
Enter and execute the following into PowerShell:
Test your data sync by adding a new row to your People table. Ensure that the data is then updated across to the New Employees table (Images 14 & 15).
This section provides information on how to manually set up the listener config using the Listener Config table. While this example shows how to configure the sync using the Cinchy Event Broker/CDC, Cinchy also supports other Event Stream Sources. For more information, see the supported real-time sync stream sources.
Navigate to the Listener Config table in Cinchy (Image 16).
In a new row, add in your listener config data using the below table as a guide:
Before executing the data sync command, encrypt the password using PowerShell.
To encrypt a password using PowerShell, complete the following:
Launch PowerShell and navigate to the Cinchy CLI directory (you can always type PowerShell into the Windows Explorer path bar for the Cinchy CLI directory).
Enter the following into PowerShell, replacing password with your specific password: .\Cinchy.CLI.exe encrypt -t "password"
Hit enter to execute the command.
Copy the encrypted password so that you have it accessible at batch execution time.
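A minimal sketch of this step, assuming the CLI is installed at C:\Cinchy CLI and using a made-up password (your encrypted output will differ):

```powershell
# Move to the Cinchy CLI directory (example path)
cd "C:\Cinchy CLI"

# Encrypt the plaintext password; the command prints the encrypted string
.\Cinchy.CLI.exe encrypt -t "MyPassword123"

# Copy the printed value and use it for the -p parameter when executing the sync
```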
The Execution Log table is a system table in Cinchy that logs the outputs of all data syncs. You can always review the entries in this table for information on the progression of your syncs (Image 17).
The Execution Errors table is a system table in Cinchy that logs any errors that may occur in a data sync (Image 18). Any data sync errors will also be logged in the temp directory specified in the data sync execution command (-d "C:\Cinchy\temp").
When syncing a data source, you may have the option to add extra configuration sections, such as Pagination, under the "Add a Section" drop-down in the Connections experience (Image 1).
Cinchy has two types of pagination available (Image 2):
Cursor: The cursor is the key parameter in this type of pagination. You receive a variable named Cursor along with the response. It's a pointer to a particular item that you must send with your request; the server then uses the cursor to seek the next set of items. Cinchy recommends using cursor-based pagination when dealing with a real-time data set.
Offset: Offset-based pagination uses a specific limit (the number of results to return) and offset (the number of records to skip). Cinchy recommends using offset-based pagination for static data.
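As a generic illustration of the difference (the endpoint and query-string keys here are hypothetical, not tied to any particular API), the two styles of request look like this:

```
# Offset: request a fixed page size and skip a known number of records
GET https://api.example.com/records?limit=100&offset=200

# Cursor: send back the cursor value that arrived with the previous response
GET https://api.example.com/records?cursor=eyJsYXN0SWQiOjEwMjR9
```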
To set up cursor pagination, fill in the following parameters (Image 3):
Type: Select Cursor
Next Page URL JSON Path: This is the JSON Path within the response to the URL of the next page
Cursor Key: This is the key used in the query string to specify the cursor value. This is only required if the cursor returned isn't a fully qualified URL.
To set up offset pagination, fill in the following parameters (Image 4):
Type: Select Offset
Limit Key: The key used in the query string to specify the limit
Limit: The desired page size
Offset By: The offset type that the API uses for pagination. This will be either Record Number or Page Number.
Offset Key: The key used in the query string to specify the offset
Initial Offset: The starting offset
A pagination block is mandatory, even if the API doesn't return results from multiple pages. You can use the following as the placeholder:
<Pagination type="OFFSET" limitField="" offsetField="" limit="0" initialOffset="0" />
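For example, a filled-in offset block (the key names and page size are illustrative and depend on the API you're calling) might look like:

```xml
<!-- Page through the API 100 records at a time, starting from the first record,
     using "limit" and "offset" as the query-string keys -->
<Pagination type="OFFSET" limitField="limit" offsetField="offset" limit="100" initialOffset="0" />
```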
For Return Columns, select All.
| Table Details | Values |
|---|---|
| Table Name | People |
| Icon + Colour | Default |
| Domain | Sandbox |

| Column Details | Values |
|---|---|
| Column 1 | Column Name: Name; Data Type: Text |
| Column 2 | Column Name: Title; Data Type: Text |
| Column 3 | Column Name: Phone Number; Data Type: Text |
| Column 4 | Column Name: City; Data Type: Text |
| Table Details | Values |
|---|---|
| Table Name | New Employees |
| Icon + Colour | Default |
| Domain | Sandbox (if this domain doesn't exist, either create it or make sure to update this parameter where required during the data sync) |

| Column Details | Values |
|---|---|
| Column 1 | Column Name: Name; Data Type: Text |
| Column 3 | Column Name: Title; Data Type: Text |
| Parameter | Description | Example |
|---|---|---|
| Sync Key Column Reference | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It's used to match data between the source and the target. This allows for updates to occur on changed records. | Name |
| New Record Behaviour | This defines the action taken when a new record is found in the sync source. This can be either Insert or Ignore. | Insert |
| Dropped Record Behaviour | This defines the action taken when a dropped record is found in the sync source. This can be either Delete, Ignore, or Expire. | Delete |
| Changed Record Behaviour | This defines the action taken when a changed record is found in the sync source. This can be either Update, Ignore, or Conditional. | Update |
| Parameter | Description | Example |
|---|---|---|
| Name | The name of your data sync. | New Hires |
| Column Name | The name(s) of the source columns that you wish to sync. | "Name", "Title" |
| Column Data Type | The data type that corresponds to the selected source columns. | "Text" |
| Domain | The domain of your Cinchy target table. | Sandbox |
| Table | The name of your Cinchy target table. | New Employees |
| Column Mapping Source Column | The name(s) of the source columns that you are syncing. | "Name", "Title" |
| Column Mapping Target Column | The name(s) of the target columns as they map to the specified source columns. | "Name", "Title" |
| Sync Key Column Reference Name | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. It's used to match data between the source and the target. This allows for updates to occur on changed records. | "Name" |
| New Record Behaviour Type | This defines what will happen when new records are found in the source. | INSERT |
| Dropped Record Behaviour Type | This defines what will happen when dropped records are found in the source. | DELETE |
| Changed Record Behaviour Type | This defines what will happen when changed records are found in the source. | UPDATE |
| Parameter | Description | Example |
|---|---|---|
| -s (server) | Required. The full path to the Cinchy server without the protocol (for example, cinchy.co/Cinchy). | "pilot.cinchy.co/Training/Cinchy/" |
| -u (user id) | Required. The user ID used to log in to Cinchy; it must have execution access to the data sync. | "admin" |
| -p (password) | Required. The password for the above user ID. This can optionally be encrypted. For a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. | "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" |
| -f (feed) | Required. The name of the Data Sync Configuration as defined in Cinchy. | "Contact Import" |
| Column | Description | Example |
|---|---|---|
| Name | The name of your listener config. | New Hire Sync |
| Event Connector Type | Select from the drop-down list which event stream you are listening in on. | Cinchy CDC |
| Topic | This column expects a JSON value with certain specific information. Review the Topic Column table below for details. | |
| Connection Attributes | This section isn't required for data syncs using the Cinchy Event Broker/CDC, so you can simply enter `{}`. | {} |
| Status | This sets whether your config is Enabled or Disabled. You can leave this blank until you want to turn on your config. | |
| Data Sync Config | The name of the Data Sync Config you created in the Connections UI or via XML. | New Hires |
| Auto Offset Reset | If the listener starts and there is either no last message ID or the last message ID is invalid (because it was deleted or the listener is new), this column is used as a fallback to determine where to start reading events from. Earliest starts reading from the beginning of the queue (when CDC was enabled on the table); this is a suggested configuration if your use case is recoverable or re-runnable and you need to reprocess all events to ensure accuracy. Latest fetches the last value after whatever was last processed; this is the typical configuration. None won't read any events. | Latest |
| Table Details | Values |
|---|---|
| Table Name | People |
| Icon + Colour | Default |
| Domain | Sandbox (if this domain doesn't exist, either create it or make sure to update this parameter where required during the data sync) |

| Column Details | Values |
|---|---|
| Column 1 | Column Name: Name; Data Type: Text |
| Column 2 | Column Name: Title; Data Type: Text |
| Column 3 | Column Name: Company; Data Type: Text |
| Parameter | Description | Example |
|---|---|---|
| Source | The source location of your file. This can be either Local, S3, or Azure Blob Storage. | Local |
| Delimiter | The type of delimiter on your source file. | Since the file is a CSV, the delimiter is a comma, so use the ',' value. |
| Text Qualifier | A character used to distinguish the point at which the contents of a text field begin and end. | " |
| Header Rows to Ignore | The number of records from the top of the file to ignore before the data starts (includes the column header). | 1 |
| Path | The path to your source file (see step 3). | @filepath |
| Choose File | This option appears once you've correctly set your Path value. | Upload the sample CSV for this example. |
| Parameter | Description | Example |
|---|---|---|
| Sync Key Column Reference | The SyncKey is a unique key reference when syncing the data from the data source into the Cinchy table. Use this to match data between the source and the target. This allows for updates to occur on changed records. | Name |
| New Record Behaviour | This defines the action taken when a new record is found in the sync source. This can be either Insert or Ignore. | Insert |
| Dropped Record Behaviour | This defines the action taken when a dropped record is found in the sync source. This can be either Delete, Ignore, or Expire. | Delete |
| Changed Record Behaviour | This defines the action taken when a changed record is found in the sync source. This can be either Update, Ignore, or Conditional. | Update |
| Parameter | Description | Example |
|---|---|---|
| Name | The name of your data sync. | Contact Import |
| Parameter | Since this is a local file upload, you also need to set a Parameter. This value is referenced in the "path" value of the Load Metadata box. | Parameter |
| Source | Defines whether your source is Local (PATH), S3, or Azure. | PATH |
| Path | Since this is a local upload, this is the path to your source file. In this case, it's the value that was set for the "Parameter" value, preceded by the '@' sign. | @Parameter |
| Delimiter | The delimiter type on your source file. | Since the file is a CSV, the delimiter is a comma, so use the ',' value. |
| Text Qualifier | A character used to distinguish the point at which the contents of a text field begin and end. | " |
| Header Rows to Ignore | The number of records from the top of the file to ignore before the data starts (includes the column header). | 1 |
| Parameter | Description | Example |
|---|---|---|
| Column Name | The name(s) of the source columns that you wish to sync. In this example, more columns are selected than are mapped, to show how Connections ignores unmapped data. | "First Name", "Last Name", "Email Address", "Title", "Company" |
| Column Data Type | The data type that corresponds to the selected source columns. | "Text" |
| Domain | The domain of your Cinchy target table. | Sandbox |
| Table | The name of your Cinchy target table. | People |
| Column Mapping Source Column | The name(s) of the source columns that you are syncing. | "Company", "Title", "First Name" |
| Column Mapping Target Column | The name(s) of the target columns as they map to the specified source columns. | "Company", "Title", "Name" |
| Sync Key Column Reference Name | The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. Use it to match data between the source and the target. This allows for updates to occur on changed records. | "Name" |
| New Record Behaviour Type | This defines what will happen when new records are found in the source. | INSERT |
| Dropped Record Behaviour Type | This defines what will happen when dropped records are found in the source. | DELETE |
| Changed Record Behaviour Type | This defines what will happen when changed records are found in the source. | UPDATE |
| Parameter | Description | Example |
|---|---|---|
| -s (server) | Required. The full path to the Cinchy server without the protocol (for example, cinchy.co/Cinchy). | "pilot.cinchy.co/Training/Cinchy/" |
| -u (user id) | Required. The user ID used to log in to Cinchy; it must have execution access to the data sync. | "admin" |
| -p (password) | Required. The password for the above user ID. This can optionally be encrypted. For a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. | "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" |
| -f (feed) | Required. The name of the Data Sync Configuration as defined in Cinchy. | "Contact Import" |