Cinchy Platform Documentation
Cinchy v5.6

Snowflake Table

1. Overview

Snowflake provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and the secure sharing and consumption of real-time/shared data.
Snowflake enables data storage, processing, and analytic solutions.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Snowflake Table destination supports batch and real-time syncs.

1.1 How Connections Loads Data into Snowflake

For batch syncs of 10 records or fewer, single Insert/Update/Delete statements are executed against the target Snowflake table.
For batch syncs exceeding 10 records, the operations are performed in bulk.
The bulk operation process consists of:
  1. Generating a CSV containing a batch of records
  2. Creating a temporary table in Snowflake
  3. Copying the generated CSV into the temp table
  4. If needed, performing Insert operations against the target Snowflake table using the temp table
  5. If needed, performing Update operations against the target Snowflake table using the temp table
  6. If needed, performing Delete operations against the target Snowflake table using the temp table
  7. Dropping the temporary table
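The bulk flow above can be sketched as a sequence of Snowflake statements. This is a minimal illustration only: the table name EMPLOYEES, the stage @sync_stage, and the statement shapes are assumptions for the example, not the SQL Connections actually generates internally.

```python
# A hedged sketch of the bulk operation process, assuming a hypothetical
# EMPLOYEES target table keyed by EMPLOYEE_ID and a @sync_stage stage.
def build_bulk_sync_sql(target: str, columns: list[str], id_col: str) -> list[str]:
    temp = f"{target}_SYNC_TEMP"
    cols = ", ".join(columns)
    set_clause = ", ".join(f"{c} = s.{c}" for c in columns if c != id_col)
    return [
        # Step 2: create a temporary table shaped like the target
        f"CREATE TEMPORARY TABLE {temp} LIKE {target}",
        # Step 3: copy the staged CSV batch into the temp table
        f"COPY INTO {temp} FROM @sync_stage/batch.csv FILE_FORMAT = (TYPE = CSV)",
        # Step 4: insert rows that exist only in the batch
        f"INSERT INTO {target} ({cols}) SELECT {cols} FROM {temp} s "
        f"WHERE NOT EXISTS (SELECT 1 FROM {target} t WHERE t.{id_col} = s.{id_col})",
        # Step 5: update rows present in both target and batch
        f"UPDATE {target} t SET {set_clause} FROM {temp} s WHERE t.{id_col} = s.{id_col}",
        # Step 6: delete target rows absent from the batch
        f"DELETE FROM {target} WHERE {id_col} NOT IN (SELECT {id_col} FROM {temp})",
        # Step 7: clean up the temp table
        f"DROP TABLE {temp}",
    ]

stmts = build_bulk_sync_sql("EMPLOYEES", ["EMPLOYEE_ID", "NAME"], "EMPLOYEE_ID")
```

Generating the statements from the column list keeps the temp-table shape and the Insert/Update/Delete clauses consistent with whatever columns are being synced.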
Real-time sync volume is based on a dynamic batch size, up to a configurable threshold.

2. Considerations

  • The temporary table generated in the bulk flow process for high-volume scenarios transforms all columns of data type Number to be of type NUMBER(38, 18). This may cause precision loss if the scale of the Number column in the target table is higher than 18.
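To illustrate the NUMBER(38, 18) consideration above: any digits beyond a scale of 18 are lost when a value passes through the temp table. A minimal sketch using Python's decimal module (Snowflake's exact rounding behaviour may differ; the point is the lost scale):

```python
from decimal import Decimal

# A value with 19 decimal digits of scale...
original = Decimal("0.1234567890123456789")
# ...forced into NUMBER(38, 18): only 18 decimal digits survive.
coerced = original.quantize(Decimal("1E-18"))

assert coerced != original  # the 19th decimal digit is gone
```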

3. Destination Tab

The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).

Destination Details

The following parameters will help to define your data sync destination and how it functions.

Destination
Mandatory. Select your destination from the drop down menu.
Example: Snowflake Table
Connection String
Mandatory. The encrypted connection string used to connect to your Snowflake instance. You can review Snowflake's Connection String guide and parameter descriptions here.
Unencrypted example: ;user=myuser;password=mypassword;db=CINCHY;schema=PUBLIC
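The unencrypted connection string is a set of semicolon-delimited key=value pairs. A small sketch of splitting such a string into its parameters, using the example above (whose leading segment is omitted in the original):

```python
def parse_connection_string(cs: str) -> dict[str, str]:
    """Split a semicolon-delimited key=value connection string into a dict."""
    pairs = (p for p in cs.split(";") if p)       # skip empty segments
    return dict(p.split("=", 1) for p in pairs)   # split on first "=" only

params = parse_connection_string(";user=myuser;password=mypassword;db=CINCHY;schema=PUBLIC")
# params["db"] is "CINCHY"; params["schema"] is "PUBLIC"
```

Splitting on the first "=" only means values containing "=" (such as some passwords) are preserved intact.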
Table
Mandatory. The name of the table in Snowflake that you wish to sync.

ID Column
Mandatory if you want to use the "Delete" action in your sync behaviour configuration. The name of the identity column that exists in the target (or a single column that is guaranteed to be unique and automatically populated for every new record).
Example: Employee ID

ID Column Data Type
Mandatory if using the ID Column parameter. The data type of the above ID Column: either Text, Number, Date, Bool, Geography, or Geometry.
Column Mapping

The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns. When specifying the Target Column in the Column Mappings section, all names are case-sensitive.

Source Column
Mandatory. The name of your column as it appears in the source.

Target Column
Mandatory. The name of your column as it appears in the destination.
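As a small illustration of the case-sensitivity note above, here is a hypothetical set of column mappings (the source and target column names are invented for the example, not taken from any real sync configuration):

```python
# Hypothetical column mappings. Target Column names are case-sensitive,
# so "EMPLOYEE_ID" and "employee_id" would address different Snowflake columns.
column_mappings = [
    {"sourceColumn": "Employee ID", "targetColumn": "EMPLOYEE_ID"},
    {"sourceColumn": "Full Name",   "targetColumn": "FULL_NAME"},
]

# Look up where each source column lands in the destination.
targets = {m["sourceColumn"]: m["targetColumn"] for m in column_mappings}
```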
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Image 2: Define your Destination

4. Next Steps