This example takes you through creating and executing a batch data sync that loads data into Cinchy from a CSV file. In this example, we will load information into the People table in Cinchy. This is a self-contained example that you can recreate in any Cinchy environment without dependencies.
Use Case: You have historically maintained a record of all your employees in a spreadsheet. Because keeping this data in a spreadsheet limits your data management capabilities, you want to sync the file into Cinchy. Once synced, you can manage your employee information through the Cinchy data browser instead of through a data silo.
For more information, see the documentation on Delimited File Sources.
This section contains:
The People Table XML schema.
A sample source CSV data file to load into Cinchy.
To create the People table used in this example, you can use the XML below. You can also create the table manually, as shown in the steps below.
Log in to your Cinchy platform.
From My Network, click the Create button.
Select Table.
Select From Scratch.
Create a table with the following properties (Image 1):
Table Name: People
Icon + Colour: Default
Domain: Sandbox (if this domain doesn't exist, either create it or make sure to update this parameter where required during the data sync)
Select Columns in the left hand navigation to create the columns for the table.
Select the "Click Here to Add" button and add the following columns:
Column 1
Column Name: Name
Data Type: Text
Column 2
Column Name: Title
Data Type: Text
Column 3
Column Name: Company
Data Type: Text
Select Save to save your table.
You can download the sample CSV file used in this example below.
If you are downloading this file to recreate this exercise, the file path and the file name must be the following:
C:\Data\contacts.csv
You can also update the path parameter in the data sync configuration to match the file path and name of your choosing.
The source file contains the following information, which will sync into the target Cinchy table (Image 2).
As we can see, the file has the following columns (an illustrative sample of the file contents follows the list):
First Name
Last Name
Email Address
Title
Company
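For illustration only, the first few rows of such a file might look like the sketch below. These values are hypothetical placeholders, not the actual contents of the downloadable sample file:

```csv
First Name,Last Name,Email Address,Title,Company
Jane,Doe,jane.doe@example.com,Analyst,Example Corp
John,Smith,john.smith@example.com,Account Manager,Example Inc
```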
The People table created only has the following columns:
Name
Title
Company
When syncing the data from the source (CSV file) to the target (Cinchy People table), the batch data sync must consider the following:
The first and last name from the source must merge into one column in the target (Name).
The email address from the source isn't a column in the target, so this column won't sync into the target.
The title column will be an exact match from source to target.
The company column will also be an exact match from source to target.
You have two options when you create a data sync in Cinchy:
You can input all of your necessary information through the intuitive Connections UI. Once saved, this configuration is uploaded as XML into the Data Sync Configurations table.
You can bypass the UI and upload your XML config directly into the Data Sync Configurations table.
This example will walk you through option one.
Within your Cinchy platform, navigate to the Connections Experience (Image 3).
In the Info tab, input the name of your data sync. This example uses "Contact Import" (Image 4).
Since this is a local file upload, we also need to set a Parameter. This value will be referenced in the "path" value of the Load Metadata box in step 5. For this example, we will set it to filepath (Image 5).
Navigate to the Source tab. This example uses the .CSV file you downloaded at the beginning of this example as our source.
Under Select a Source, select Delimited File (Image 6).
The "Load Metadata" box will appear; this is where you will define some important values about your source needed for the data sync to execute. Using the below table as your guide, fill in your metadata parameters (Image 7):
Source: The source location of your file. This can be either Local, S3, or Azure Blob Storage. For this example, use Local.
Delimiter: The type of delimiter in your source file. Since our file is a CSV, the delimiter is a comma, so use the ',' value.
Text Qualifier: A character used to mark the point at which the contents of a text field begin and end. For this example, use the double quote character (").
Header Rows to Ignore: The number of records from the top of the file to ignore before the data starts (including the column header). For this example, use 1.
Path: The path to your source file (see step 3). For this example, use @filepath.
Choose File: This option will appear once you've correctly set your Path value. Upload the sample CSV for this example.
Click Load.
In the Available Columns pop-up, select all of the columns that you want to import from the CSV. For this example, we will select them all (noting, however, that we will only map a few of them later) (Image 8).
Click Load.
Once you load your source, the schema section of the page will auto populate with the columns that you selected in step 7 (Image 9). Review the schema to ensure it has the correct Name and Data Type. You may also choose to set any Aliases or add a Description.
Navigate to the Destination tab and select Cinchy Table from the drop down.
In the Load Metadata pop-up, input the Domain and Table name for your destination. This example uses the Sandbox domain and the People table (Image 10).
Select Load Metadata.
Select the columns that you wish to use in your data sync (Image 11). These will be the columns that your source syncs to. This example uses the Name, Title, and Company columns. Note that many Cinchy system columns will also be available to use. Click Load.
The Connections experience will attempt to automatically map your source and destination columns based on matching names. In the below screenshot, it matched the "Company" and "Title" columns (Image 12). The "Name" target column isn't an exact match for any of the source columns, so you must match that one manually.
Select "First Name" from the Source Column drop down to finish mapping our data sync (Image 13).
Navigate to the Sync Actions tab. Sync actions have two options: Full File and Delta. In this example, select Full File.
Full load processing means that the entire data set is imported the first time a data source is loaded into the data studio. Delta processing means loading the data incrementally, loading the source data at specific pre-established intervals.
Set the following parameters (Image 14):
Sync Key Column Reference: The SyncKey is a unique key reference used when syncing the data from the data source into the Cinchy table. Use it to match data between the source and the target, which allows updates to occur on changed records. For this example, use Name.
New Record Behaviour: Defines the action taken when a new record is found in the sync source. This can be either Insert or Ignore. For this example, use Insert.
Dropped Record Behaviour: Defines the action taken when a dropped record is found in the sync source. This can be either Delete, Ignore, or Expire. For this example, use Delete.
Changed Record Behaviour: Defines the action taken when a changed record is found in the sync source. This can be either Update, Ignore, or Conditional. For this example, use Update.
Navigate to the Permissions tab. Here you will define your group access controls for your data sync (Image 15). You can set this how you like. This example gives all users access to Execute, Write, and Read our sync.
Any groups given Admin Access will have the ability to Execute, Write, and Read the data sync.
Navigate to the Jobs tab. Here you will see a record of all successful or failed jobs for this data sync.
Select "Start a Job" (Image 16).
Load your sample .CSV file in the pop-up window (Image 17).
The job will commence. The Execution window that pops up will help you to verify that your data sync is progressing (Image 18).
Navigate to your destination table to ensure that your data populated correctly (Image 19).
Instead of the Connections UI, you can also set up a data sync by uploading a formatted XML into the Data Sync Configurations table within Cinchy.
We recommend only doing so once you have an understanding of how data syncs work. Not all sources/targets follow the same XML pattern.
The example below is the completed batch data sync configuration. Review the blank XML template first, and then refer to the filled XML example.
The below XML shows a blank data sync for a Delimited File source to a Cinchy Table target.
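As a rough sketch (the element and attribute names shown here are assumptions based on the parameters described below; consult the Delimited File source and Cinchy Table destination documentation for the authoritative schema), a blank template might take this shape:

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Skeleton of a batch sync from a delimited file to a Cinchy table; fill in each attribute. -->
<BatchDataSyncConfig name="" version="1.0.0" xmlns="http://www.cinchy.co">
  <Parameters>
    <Parameter name="" />
  </Parameters>
  <DelimitedDataSource source="" path="" delimiter="" textQualifier="" headerRowsToIgnore="">
    <Schema>
      <Column name="" dataType="" />
    </Schema>
  </DelimitedDataSource>
  <CinchyTableTarget domain="" table="">
    <ColumnMappings>
      <ColumnMapping sourceColumn="" targetColumn="" />
    </ColumnMappings>
    <SyncKey>
      <SyncKeyColumnReference name="" />
    </SyncKey>
    <NewRecordBehaviour type="" />
    <DroppedRecordBehaviour type="" />
    <ChangedRecordBehaviour type="" />
  </CinchyTableTarget>
</BatchDataSyncConfig>
```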
The below filled XML example matches the Connections UI configuration made in Use the Connections UI. You can review the parameters used in the table below.
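The sketch below fills that same shape with the values used in this example (again, exact element and attribute names may vary by Cinchy version and should be confirmed against your environment):

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Contact Import: delimited file source mapped to the Sandbox People table. -->
<BatchDataSyncConfig name="Contact Import" version="1.0.0" xmlns="http://www.cinchy.co">
  <Parameters>
    <Parameter name="Parameter" />
  </Parameters>
  <DelimitedDataSource source="PATH" path="@Parameter" delimiter="," textQualifier="&quot;" headerRowsToIgnore="1">
    <Schema>
      <!-- More columns are selected than mapped, to show how Connections ignores unmapped data. -->
      <Column name="First Name" dataType="Text" />
      <Column name="Last Name" dataType="Text" />
      <Column name="Email Address" dataType="Text" />
      <Column name="Title" dataType="Text" />
      <Column name="Company" dataType="Text" />
    </Schema>
  </DelimitedDataSource>
  <CinchyTableTarget domain="Sandbox" table="People">
    <ColumnMappings>
      <ColumnMapping sourceColumn="Company" targetColumn="Company" />
      <ColumnMapping sourceColumn="Title" targetColumn="Title" />
      <ColumnMapping sourceColumn="First Name" targetColumn="Name" />
    </ColumnMappings>
    <SyncKey>
      <SyncKeyColumnReference name="Name" />
    </SyncKey>
    <NewRecordBehaviour type="INSERT" />
    <DroppedRecordBehaviour type="DELETE" />
    <ChangedRecordBehaviour type="UPDATE" />
  </CinchyTableTarget>
</BatchDataSyncConfig>
```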
Name: The name of your data sync. Example: Contact Import
Parameter: Since this is a local file upload, we also need to set a Parameter. This value is referenced in the "path" value of the Load Metadata box. Example: Parameter
Source: Defines whether your source is Local (PATH), S3, or Azure. Example: PATH
Path: Since this is a local upload, this is the path to your source file. In this case, it's the value that was set for the "Parameter" value, preceded by the '@' sign. Example: @Parameter
Delimiter: The delimiter type of your source file. Since our file is a CSV, the delimiter is a comma, so use the ',' value.
Text Qualifier: A character used to mark the point at which the contents of a text field begin and end. Example: &quot;
Header Rows to Ignore: The number of records from the top of the file to ignore before the data starts (including the column header). Example: 1
Column Name: The name(s) of the source columns that you wish to sync. In this example, more columns are selected than mapped to show how Connections ignores unmapped data. Example: "First Name", "Last Name", "Email Address", "Title", "Company"
Column Data Type: The data type that corresponds to the selected source columns. Example: "Text"
Domain: The domain of your Cinchy target table. Example: Sandbox
Table: The name of your Cinchy target table. Example: People
Column Mapping Source Column: The name(s) of the source columns that you are syncing. Example: "Company", "Title", "First Name"
Column Mapping Target Column: The name(s) of the target columns as they map to the specified source columns. Example: "Company", "Title", "Name"
Sync Key Column Reference Name: The SyncKey is used as a unique key reference when syncing the data from the data source into the Cinchy table. Use it to match data between the source and the target, which allows updates to occur on changed records. Example: "Name"
New Record Behaviour Type: Defines what happens when new records are found in the source. Example: INSERT
Dropped Record Behaviour Type: Defines what happens when dropped records are found in the source. Example: DELETE
Changed Record Behaviour Type: Defines what happens when changed records are found in the source. Example: UPDATE
Once you have completed your Data Sync XML, navigate to the Data Sync Configurations table in Cinchy (Image 20).
In a new row, paste the Data Sync XML into the Config XML column (Image 21).
Define your group permissions in the applicable columns. This example gives all Users Admin Access.
The Name and Config Version columns will auto-populate, as their values come from the Config XML.
When pasting into the Config XML column, be sure to double-click into the column first; otherwise, each line of the XML will appear as an individual record in the Data Sync Configurations table.
To execute your Data Sync you will use the CLI. If you don't have this downloaded, refer to the CLI commands list page.
This example uses the following data sync command parameters; for the full list of commands, click here.
-s (server): Required. The full path to the Cinchy server without the protocol (for example, cinchy.co/Cinchy). Example: "pilot.cinchy.co/Training/Cinchy/"
-u (user id): Required. The user ID used to log in to Cinchy; it must have execution access to the data sync. Example: "admin"
-p (password): Required. The password for the above user ID. This can optionally be encrypted; for a walkthrough on how to use the CLI to encrypt the password, refer to the Appendix section. Example: "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU="
-f (feed): Required. The name of the Data Sync Configuration as defined in Cinchy. Example: "Contact Import"
Launch PowerShell and navigate to the Cinchy CLI directory.
Enter and execute the following into PowerShell:
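The exact call depends on your CLI version; as a sketch, assuming the standard syncdata command and the parameter values from the table above (with -d pointing at a temp directory, as referenced in the Execution Errors section below), it might look like this:

```powershell
# Run the batch sync for the "Contact Import" configuration (illustrative values).
.\Cinchy.CLI.exe syncdata `
    -s "pilot.cinchy.co/Training/Cinchy/" `
    -u "admin" `
    -p "DESuEGqfffsamx55yl256hjuPYxa4ncc+5+bLkoVIFpgs0Lq6hkcU=" `
    -f "Contact Import" `
    -d "C:\Cinchy\temp"
```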
Once executed, navigate to your destination table to validate that your data synced correctly (Image 22).
To encrypt a password using PowerShell, complete the following:
Launch PowerShell and navigate to the Cinchy CLI directory (you can always type "powershell" into the Windows Explorer path bar while in the Cinchy CLI directory to open it there).
Enter the following into PowerShell: .\Cinchy.CLI.exe encrypt -t "password"
Hit enter to execute the command.
Copy the encrypted password so it's accessible at batch execution time.
Note that you will need to replace "password" with your specific password.
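For example (substituting your actual password for the placeholder):

```powershell
# Encrypt the plain-text password; copy the encrypted value it returns for use with -p.
.\Cinchy.CLI.exe encrypt -t "password"
```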
The Execution Log table is a system table in Cinchy that logs the outputs of all data syncs (Image 23). You can always review the entries in this table for information on the progression of your syncs.
The Execution Errors table is a system table in Cinchy that logs any errors that may occur in a data sync (Image 24). Any data sync errors log to the temp directory outlined in the data sync execution command (for example, -d "C:\Cinchy\temp").