
Columns and mappings


Overview

This page provides information on both Schema Columns (used when configuring Data Sync Sources) and Column Mappings (used when configuring Data Sync Destinations).

Schema columns

Schema columns refer to your mapping on your data source. For example, if your source is a CSV with the columns 'Name', 'Age', and 'Company', you would set up three matching schema columns in the Connections UI or data sync XML. These schema columns map to your destination columns for your data sync target, so that the data knows where to go.
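
For illustration, the CSV example above might translate into schema columns in data sync XML along the lines of the sketch below. This is a minimal sketch: the <Schema> wrapper, ordinals, and data types are assumptions for this hypothetical file, and the full set of available attributes is described under Columns in XML below.

<!-- Hypothetical schema for a CSV with the columns Name, Age, and Company -->
<Schema>
    <Column name="Name" dataType="Text" ordinal="1" />
    <Column name="Age" dataType="Number" ordinal="2" />
    <Column name="Company" dataType="Text" ordinal="3" />
</Schema>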

You don't have to set up an exact 1:1 relationship between source columns/data and schema columns.

The only difference between the setup of schema columns in the Connections UI compared to data sync XML is the addition of the Alias column, which only appears in the Connections UI. The Alias column gives the user an alternative name for the column mapping (usually for easier readability). The column types are detailed below.

Note that some source types have unique parameters not otherwise specified in other sources. You can find information on those, where applicable, in the source's main page.

You can review the various attribute descriptions below.

Standard column

Fill in the following attributes for a Standard Column (Image 1):

  • Name: The name of your column

  • Formula: The formula associated with your calculated column

  • Data Type: The return data type of your column; this can be one of the following:

    • Text

    • Date

    • Number

    • Boolean

If a source column (of any type) is syncing into a Cinchy Target Table link column, the source column must be dataType="Text".

  • Description: Describe your column

  • Advanced Settings:

    • You can select if you want this column to be mandatory

    • You can choose whether your data must be validated

  • If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected.

  • If just Mandatory is checked on a column, then all rows are synced with an execution log status of failed and a source error of "Mandatory Rule Violation".

  • If just Validated is checked on a column, then all rows are synced.

  • For Text data types, you can choose whether to trim the whitespace.

To add a Transformation > String Replacement, enter the following:

  • Pattern: The pattern to match for your string replacement

  • Replacement: The value that replaces each match

You can have more than one String Replacement.
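
In XML, a standard column with whitespace trimming and a string replacement might look roughly like the sketch below. The <Transformations> and <StringReplacement> element names are assumptions for illustration; confirm the exact element names against a config generated by the Connections UI.

<!-- Illustrative only: the transformation element names below are assumed -->
<Column name="Company" dataType="Text" trimWhitespace="true">
    <Transformations>
        <!-- Replace every occurrence of "Inc." with "Incorporated" -->
        <StringReplacement pattern="Inc\." replacement="Incorporated" />
    </Transformations>
</Column>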

Standard calculated column

Fill in the following attributes for a Standard Calculated Column (Image 2):

  • Name: The name of your column

  • Formula: The formula associated with your calculated column

  • Data Type: The return data type of your column; this can be one of the following:

    • Text

    • Date

    • Number

    • Boolean

If a Destination column is being used as a sync key, its source column must be set to type=Text, regardless of its actual type.

  • Description: Describe your calculated column

  • Advanced Settings:

    • You can select if you want this column to be mandatory.

    • You can choose whether your data must be validated.

  • If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected.

  • If just Mandatory is checked on a column, then all rows are synced with an execution log status of failed and a source error of "Mandatory Rule Violation".

  • If just Validated is checked on a column, then all rows are synced.
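
As a rough sketch, a standard calculated column that builds a full name from two source columns might be expressed like this in XML. The <CalculatedColumn> element name and the CONCAT formula syntax are assumptions here; use the Connections UI to generate the exact config for your version.

<!-- Illustrative sketch of a calculated column -->
<CalculatedColumn
    name="Full Name"
    dataType="Text"
    formula="CONCAT([First Name], ' ', [Last Name])" />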

Conditional calculated column

Fill in the following attributes for a Conditional Calculated Column (Image 3):

  • Name: The name of your column

  • Data Type: The return data type of your column; this can be one of the following:

    • Text

    • Date

    • Number

    • Boolean

If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.

  • Description: Describe your calculated column

  • Advanced Settings:

    • You can select if you want this column to be mandatory.

    • You can choose whether your data must be validated.

  • If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected.

  • If just Mandatory is checked on a column, then all rows are synced with an execution log status of failed and a source error of "Mandatory Rule Violation".

  • If just Validated is checked on a column, then all rows are synced.

  • Condition:

    • Name: The name of your condition

    • IF: Click Edit to create the "if" for your Conditional Statement (Image 4)

    • Then: Click Edit to create the "then" for your Conditional Statement (Image 5)

  • Default: Click Edit to create your default expression (Image 6). A structural sketch of how the if/then/default pieces fit together follows below.
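
Conceptually, each condition pairs an "if" expression with a "then" value, and the default expression supplies the value when no condition matches. The element names and expression syntax in this sketch are purely illustrative and not the literal config format.

<!-- Hypothetical structure: element and expression names are illustrative only -->
<ConditionalCalculatedColumn name="Age Group" dataType="Text">
    <Condition name="Minor">
        <If>[Age] &lt; 18</If>
        <Then>'Minor'</Then>
    </Condition>
    <Default>'Adult'</Default>
</ConditionalCalculatedColumn>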

JavaScript Calculated Column

Fill in the following attributes for a JavaScript Calculated Column (Image 7):

  • Name: The name of your column

  • Data Type: The return data type of your column; this can be one of the following:

    • Text

    • Date

    • Number

    • Boolean

If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.

  • Description: Describe your calculated column

  • Advanced Settings:

    • You can select if you want this column to be mandatory.

    • You can choose whether your data must be validated.

  • If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected.

  • If just Mandatory is checked on a column, then all rows are synced with an execution log status of failed and a source error of "Mandatory Rule Violation".

  • If just Validated is checked on a column, then all rows are synced.

  • Script: Enter your JavaScript. A brief sketch follows below.
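
The script is plain JavaScript that produces the value this column should hold for each row. The snippet below is a hypothetical sketch: how source column values are exposed to the script (for example, as variables or via a helper) depends on your Connections version, so treat the variable names as assumptions.

// Hypothetical sketch: assumes the source columns First Name and Last Name
// are exposed to the script as the variables firstName and lastName.
var fullName = (firstName + ' ' + lastName).trim();
// The value produced here becomes the column's value for the current row.
fullName;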

Columns in XML

This XML element defines each column and its data type in the data set:

<Column
    name="string"
    dataType="Text"| "Date"| "Number"| "Bool"| "Geometry"| "Geography"
    ordinal="int"                     -- Depends on the data source
    maxLength="int"                   --OPTIONAL
    isMandatory=["true", "false"]     --OPTIONAL
    validateData=["true", "false"]    --OPTIONAL
    trimWhitespace=["true", "false"]  --OPTIONAL
    description="string"              --OPTIONAL
    inputFormat="string"              --OPTIONAL
    >
    ...
</Column>

Attribute descriptions

name

The user-defined name for each column. This is the name you reference in the sourceColumn attribute of a column mapping.

dataType

The data type of each column could be Text, Date, Number, Boolean, Geometry, or Geography.

If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.

To sync into a Cinchy table with a Geometry or Geography column, those respective data types must be used in the data sync, and the input should be in well-known text (WKT) format.
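
For example (illustrative values), a schema column feeding a Geography target column could be declared as below, with source values supplied as WKT strings such as POINT(-79.3832 43.6532).

<!-- Source values for this column should be WKT strings, e.g. POINT(-79.3832 43.6532) -->
<Column name="Location" dataType="Geography" ordinal="2" />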

The dataType affects how the source and target data is parsed, and also determines how the fields are compared for equality. If your sync keeps updating a field that hasn't changed, check your data types.

ordinal

The position of the column in the data source (for example, the column number in a delimited file).

For example, given line 1 of a .csv file:

Name, Location, Age

The ordinal for Age would be 3.

maxLength

The max length of data in the column.

isMandatory

Boolean value that determines if the field is a mandatory column to create a row entry.

A defined SyncKey column of any data type can be checked for NULL values using isMandatory=true. When validation fails, an error message is displayed in the command line. For other columns, when validation fails, the Execution Errors Table is updated with the Error Type "Mandatory Rule Violation" for the column and row that failed.

validateData

Boolean value determining whether to validate the data before insertion. Valid data means the value fits all the constraints of the column (dataType, maxLength, isMandatory, inputFormat). If the data isn't valid and validateData is true, the entry won't be synced into the table. The Execution Errors Table is also updated with the appropriate Error Type (Invalid Format Exception, Max Length Violation, Mandatory Rule Violation, Input Format Exception).
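
For instance, the following sketch (illustrative column name and length) rejects rows whose value exceeds 10 characters rather than syncing them, and records a Max Length Violation in the Execution Errors Table.

<!-- Rows with a "Code" longer than 10 characters fail validation and aren't synced -->
<Column name="Code" dataType="Text" maxLength="10" validateData="true" />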

trimWhitespace

Boolean value determining whether to trim white space.

description

Description of the column.

inputFormat

The inputFormat attribute is useful when the source data needs to be parsed with a specific format (for example, a custom date format) before it's synced.
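
For example, a date column whose source values arrive as day/month/year strings might be declared like this (illustrative name and format); the format string follows the .NET custom date and time format conventions noted at the end of this page.

<!-- Parse source values such as "31/12/2024" as dates -->
<Column name="Hire Date" dataType="Date" inputFormat="dd/MM/yyyy" />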


Column Mappings

A column mapping defines how a single column from the data source maps to a column in a target table. Each <ColumnMapping> has both a source and a target. If the destination is a Cinchy table and the target column is a link, a third attribute called linkColumn becomes available, which you can use to specify the column used to resolve the linked record from the source value. The value of sourceColumn should match the name attribute of a column defined in the source schema. The value of targetColumn should match the name of the column in the target table.

Below is an example of a Column Mapping in the experience followed by the equivalent XML. In the experience, the Source Column attribute is a dropdown of columns configured in the Source Section.

<ColumnMapping
    sourceColumn="string"
    targetColumn="string"
    linkColumn="string"
    >
</ColumnMapping>
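
As a concrete, illustrative example: mapping the source column Company into a link column Employer on the target table, resolved by matching the linked table's Name column, might look like this.

<!-- The source "Company" value is matched against the linked table's "Name" column
     to resolve the record referenced by the "Employer" link column -->
<ColumnMapping
    sourceColumn="Company"
    targetColumn="Employer"
    linkColumn="Name" />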

Attributes:

sourceColumn

The name of the column in the data source. This corresponds to the user-defined name of the column element in the source schema.

targetColumn

The name of the column in the target table. This would be a table that's already created in Cinchy and defined in the Target.

linkColumn

The name of a column from the linked table. If the target column is a linked column from another table, you may input data based on any of the linked table's columns.

If a Destination column is being used as a sync key, its source column has to be set to type=Text, regardless of its actual type.

Date fields support the inputFormat attribute, which adheres to the C# .NET DateTime.ParseExact format strings. See the .NET DateTime.ParseExact documentation for reference.

Image 1: Standard Column
Image 2: Standard Calculated Columns
Image 3: Conditional Calculated Column
Image 4: Creating your Conditional statement
Image 5: Creating your Conditional Statement
Image 6: Creating your Default Expression
Image 7: JavaScript Calculated Column