The Cinchy Event Broker/CDC (Change Data Capture) source allows you to capture data changes on your table and use these events in your data syncs.
To mitigate the labour and time costs of hosting information in a silo and remove the costly integration tax plaguing your IT teams, you want to connect your legacy systems into Cinchy to take advantage of the platform's sync capabilities.
To do this, you can set up a real-time sync between a Cinchy Table and Salesforce that updates Salesforce any time data is added, updated, or deleted on the Cinchy side. If you enable change notifications on your Cinchy table, you can set up a data sync and listener config with your source as the Cinchy Event Broker/CDC.
The Cinchy Event Broker/CDC supports both batch syncs and real-time syncs (most common).
Remember to set up your listener config if you are creating a real-time sync.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
CDC
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Cinchy Event Broker/CDC
Run Query
Path to Iterate
Optional. For the Cinchy Event Broker/CDC, the Path to Iterate function can be used to provide the JSON path to the array of items that you want to sync (provided that your event message contains JSON values).
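For illustration, suppose an event message arrives with the following structure (the field names here are hypothetical). A Path to Iterate of data.items would sync each element of the items array as its own record:

```json
{
  "data": {
    "items": [
      { "id": 1, "name": "First record" },
      { "id": 2, "name": "Second record" }
    ]
  }
}
```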
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
If there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
If you are creating a CDC listener config for a Cinchy Event Triggered REST API data source, keep in mind the following unique constraints:
Column names in the listener config shouldn't contain spaces. If they do, they will be automatically removed. For example, a column named First Name will become @FirstName.
The replacement variable names are case sensitive.
Column names in the listener config shouldn't be prefixes of other column names. For example, if you have a column called Name, you shouldn't have another called "Name2", as the value of @Name2 may end up being replaced by the value of @Name suffixed with a 2.
Auto Offset Reset
Earliest, Latest or None. When the listener starts and there is either no last message ID, or the last message ID is invalid (because it was deleted or the listener is new), this value is used as a fallback to determine where to start reading events from.
None
A Topic JSON is necessary for all real-time syncs. Enter your JSON parameters through the Connections UI, or edit them directly through the Listener Configuration table.
tableGuid
Mandatory. GUID of the table you are reading from.
ef6710ca-6e59-4b4a-86d3-f6d24ed7658b
fields
Array of objects specifying columns to fetch.
See the fields section below.
filter
WHERE clause for filtering records.
New.[Is Valid] = 1 AND (New.[Is Excluded] = 0 OR New.[Is Excluded] IS NULL)
messageKeyExpression
The messageKeyExpression parameter specifies a key that the listener application uses to route messages into specific topics within a Kafka broker. See below for more information.
value
next_cursor
The next_cursor parameter serves as an offset marker for paginated data retrieval in API requests. It helps in fetching large data sets chunk by chunk, making the process more manageable and efficient.
ABCS
batchSize
Number of records read per request.
1000
The following expands on the available parameters within the fields section above.
column
Column name to fetch from the table.
Cinchy Id
alias
Alias for the column.
CinchyId
deserializeJsonValue
Converts text to JSON on read out.
true
The following is a topic JSON example:
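A minimal sketch assembled from the parameters described above, using the sample values from the table:

```json
{
  "tableGuid": "ef6710ca-6e59-4b4a-86d3-f6d24ed7658b",
  "fields": [
    {
      "column": "Cinchy Id",
      "alias": "CinchyId",
      "deserializeJsonValue": true
    },
    {
      "column": "Name"
    }
  ],
  "filter": "New.[Is Valid] = 1 AND (New.[Is Excluded] = 0 OR New.[Is Excluded] IS NULL)",
  "batchSize": 1000
}
```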
messageKeyExpression
Each of your Event Listener messages has a message key. By default, this key is dictated by the Cinchy ID of the record being changed.
When the worker processes your Event Listener messages, it does so in batches, and for efficiency and to guarantee order, messages that contain the same key won't be processed in the same batch.
The messageKeyExpression property allows you to change the default message key to something else.
Use Case
Ensuring records with the same message key can be updated with the proper ordering to reflect an accurate collaboration log history.
Example Syntax
In this example, we want the message key to be based on the [Employee Id] and [Name] columns of the table that CDC is enabled on.
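A sketch of what this might look like in the Topic JSON. The concatenation syntax shown here is an assumption, not confirmed expression syntax:

```json
{
  "messageKeyExpression": "New.[Employee Id] + '-' + New.[Name]"
}
```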
The Cinchy Event Broker/CDC Stream Source has the unique capability to use "Old" and "New" parameters when filtering data. This filter can be a powerful tool for ensuring that you sync only the specific data that you want.
The "New" and "Old" parameters are based on updates to single records, not columns/rows.
"New" Example:
In the below filter, we only want to sync data where the [Approval State] of a record is newly Approved. For example, if a record was changed from Draft to Approved, the filter would sync the record.
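A sketch of such a filter, assuming the same expression style as the filter example earlier on this page:

```json
{
  "filter": "New.[Approval State] = 'Approved' AND Old.[Approval State] <> 'Approved'"
}
```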
Due to internal logic, newly created records will be tagged as both "New" and "Old".
"Old" Example:
In the below filter, we only want to sync data where the [Status] of a record was In Progress but has since been updated to any other [Status]. For example, if a record was changed from In Progress to Done, the filter would sync the record.
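A sketch of such a filter, under the same assumptions as the previous example:

```json
{
  "filter": "Old.[Status] = 'In Progress' AND New.[Status] <> 'In Progress'"
}
```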
Due to internal logic, newly created records will be tagged as both "New" and "Old".
Connection Attributes
You don't need to provide Connection Attributes when using the Cinchy CDC Stream Source.
If you're inputting your configuration via the Listener Config table, you must insert the below text into the column:
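Assuming the convention that the column can't be left blank, the value is simply an empty JSON object:

```json
{}
```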
The Schema section is where you define which source columns you want to sync in your connection. You have the option to add the following columns:
Standard
Calculated
Conditional
JavaScript
You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure the additional listeners via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
The following sections outline more information about specific parameters you can find on this source.
The Run Query parameter is available as an optional value for the Cinchy Event Broker/CDC connector. If set to true, it executes a saved query, with the record that triggered the event passed in as a query parameter. The query results, rather than the table where the change originated, then become the sync source.
You are able to use any parameters defined in your listener config.
The example below is a data sync using the Event Broker/CDC as a source. Our Listener Config has been set with the CinchyID attribute (Image 4).
We can enable the Run Query function to use the saved query "CDC Product Ticket Datestamps" as our source instead (Image 5). If we change the data from Record A in our source table to trigger our event, the Query Parameters below show that the Cinchy ID of Record A will be used in the query. This query is now our source.
It would appear in the data sync config XML as follows:
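A sketch of the relevant fragment. The element and attribute names below are illustrative assumptions rather than the exact Cinchy schema:

```xml
<!-- Sketch only: element and attribute names are assumptions for illustration. -->
<CinchyEventBrokerDataSource runQuery="true"
                             domain="Product"
                             queryName="CDC Product Ticket Datestamps">
  <!-- The Cinchy ID of the record that triggered the event (Record A)
       is passed as a parameter to the saved query, and the query's
       results become the sync source. -->
</CinchyEventBrokerDataSource>
```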
This page has example XML configs that you can review when setting up your own Cinchy Query data source.
You can review the source only example or the full example that shows both source and destination.
The below example shows what the source parameters would look like in XML.
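As a sketch only (element and attribute names are assumptions based on the parameters documented on this page), a Cinchy Query source might look like:

```xml
<!-- Sketch only: element and attribute names are assumptions for illustration. -->
<CinchyQueryDataSource domain="Compliance" queryName="Open Tasks" timeout="120">
  <Schema>
    <!-- One Column entry per source column you want to sync -->
    <Column name="Owner Cinchy ID" dataType="Text" isMandatory="false" validateData="false" />
  </Schema>
</CinchyQueryDataSource>
```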
You want to set up a batch sync between a Cinchy Query and a Cinchy Table. Your query polls for any unapproved timesheets, out of office requests, or sick hours and, if found, adds them to an "Open Approval Tasks" table.
This page has example XML configs that you can review when setting up your own Cinchy Table data source.
You can review the source only example or the full example that shows both source and destination.
The below example shows what the source parameters would look like in XML.
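As a sketch only (element and attribute names are assumptions based on the parameters documented on this page), a Cinchy Table source might look like:

```xml
<!-- Sketch only: element and attribute names are assumptions for illustration. -->
<CinchyTableDataSource domain="Product" table="Q1 Sales" suppressDuplicateErrors="false">
  <Schema>
    <Column name="Client Name" dataType="Text" isMandatory="false" validateData="false" />
    <Column name="Customer Number" dataType="Text" isMandatory="false" validateData="false" />
  </Schema>
</CinchyTableDataSource>
```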
You want to set up a batch sync that you can run when needed between a Cinchy Table and a MongoDB Collection. This sync will push out Client Name and Customer Number information.
Copper is a Customer Relationship Management (CRM) software focused on automation and simplicity, most known for its Google Workspace integration.
You have customer information currently sitting in the Copper CRM software. You want to sync this data into Cinchy through a batch sync to liberate your data from the silo.
The Copper source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Cinchy queries are commonly used data sync sources that leverage the platform's Saved Query functionality. For more on creating Saved Queries, refer to the Cinchy platform documentation.
You want to set up a batch sync between a Cinchy Query and a Cinchy Table. Your query polls for any unapproved timesheets, out of office requests, or sick hours and, if found, adds them to an "Open Approval Tasks" table.
You can review the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
Click Jobs > Start a Job to begin your sync.
Cinchy Tables are commonly used data sync sources.
You want to set up batch sync between a Cinchy Table and HubSpot to sync important sales analytics information. You can do so by using the Cinchy Table as your source, and a REST API as your target.
The Cinchy Table source supports batch syncs. To do a real-time sync from a Cinchy Table, you would use the Cinchy Event Broker/CDC Source instead.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
This page highlights a few example XML configs that you can review when setting up your own Cinchy Event Broker/CDC data source.
You can review the source only example, the full example that shows both source and destination, and the listener config example.
The below example shows what the source parameters would look like in XML.
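As a sketch only (element and attribute names are assumptions for illustration), the source parameters might look like:

```xml
<!-- Sketch only: element and attribute names are assumptions for illustration. -->
<CinchyEventBrokerDataSource>
  <Schema>
    <Column name="Cinchy Id" alias="CinchyId" dataType="Number" isMandatory="false" validateData="false" />
    <Column name="Name" dataType="Text" isMandatory="false" validateData="false" />
  </Schema>
</CinchyEventBrokerDataSource>
```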
You want to set up a real-time sync between two Cinchy tables so that any time specific data is added, updated, or deleted from Table A it gets propagated to Table B. As long as you enable change notifications on your Cinchy table, you can do so by setting up a data sync and listener config with your source as the Cinchy Event Broker/CDC.
The following is the Cinchy CDC listener config Topic and Connection Attributes as it would be set for the above real time sync example to work.
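A sketch following the Topic structure described earlier on this page; the GUID is a placeholder for Table A's GUID:

```json
{
  "tableGuid": "<Table A GUID>",
  "fields": [
    { "column": "Cinchy Id", "alias": "CinchyId" },
    { "column": "Name" }
  ]
}
```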
Insert the text below for connection attributes:
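Assuming the convention noted earlier that the field can't be left blank, this is simply an empty JSON object:

```json
{}
```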
A binary file is a computer file that's not a text file, and whose content is in a binary format consisting of a series of sequential bytes, each of which is eight bits in length.
You can use binary files from a Local upload, Amazon S3, or Azure Blob Storage in your data syncs.
Some benefits of using binary files include:
Better efficiency via compression
Better security through the ability to create custom encoding standards.
Unmatched speed: since the data is stored in a raw format and isn't encoded using character encoding standards, it's faster to read and store.
A fixed-width file is a file that has a specific format which allows for the saving of information in an organized fashion. The data is arranged in rows and columns, with one entry per row. Each column has a fixed-width, specified in characters, which determines the maximum amount of data it can contain. No delimiters are used to separate the fields in the file.
Advantages of using a fixed-width file include:
It's a compact representation of your data
It's fast to parse because every field is in the same place in every line
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Common uses of LDAP include when:
A single piece of data needs to be found and accessed regularly;
Your organization has a lot of smaller data entries;
Your organization wants all smaller pieces of data in one centralized location, and there doesn't need to be an extreme amount of organization between the data.
ODBC is the database portion of the Microsoft Windows Open Services Architecture (WOSA), which is an interface that allows Windows-based desktop applications to connect to multiple computing environments without rewriting the application for each platform.
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
Salesforce objects are database tables that permit you to store data that's specific to an organization. Salesforce objects are of two types:
Standard Objects: Standard objects are the kind of objects that are provided by salesforce.com such as users, contracts, reports, dashboards, etc.
Custom Objects: Custom objects are those objects that are created by users. They supply information that's unique and essential to their organization. Custom objects are the heart of any application and provide a structure for sharing data.
Salesforce Platform Events are secure and scalable messages that contain data. Publishers push out event messages that subscribers receive in real time.
Push Topic events provide a secure and scalable way to receive notifications for changes to Salesforce data that match a SOQL query you define.
You can use PushTopic events to:
Receive notifications of Salesforce record changes, including create, update, delete, and undelete operations.
Capture changes for the fields and records that match a SOQL query.
Receive change notifications for only the records a user has access to based on sharing rules.
Limit the stream of events to only those events that match a subscription filter.
Snowflake enables data storage, processing, and analytic solutions.
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP can communicate between different operating systems using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
Optional. If true, executes a saved query, using the Cinchy ID of the changed record as a parameter. These query results are then used as the sync source, rather than the Cinchy table where the data change originated. Review the Run Query section on this page for further details on this feature.
Earliest will start reading from the beginning of the queue (from when CDC was enabled on the table). This might be a suggested configuration if your use case is recoverable or re-runnable and you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed. This is the typical configuration. None won't start reading any events. You are able to switch between Auto Offset Reset types after your initial configuration through the process outlined in the documentation.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Copper is a Customer Relationship Management (CRM) software focused on automation and simplicity, most known for its Google Workspace integration.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
Dynamics 2015 is a legacy CRM predecessor to Microsoft Dynamics 365. Mainstream end of life support finished in January 2020, with extended end of life support finishing in January 2025.
Amazon DynamoDB is a managed NoSQL database service that's offered by Amazon as part of the AWS portfolio.
Apache Kafka is an end-to-end event streaming platform that:
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.
LDAP (Lightweight Directory Access Protocol) is a mature, flexible, and well-supported standards-based software protocol for enabling anyone to locate data, whether on the public internet or on a corporate intranet.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability. Data changes in Cinchy (CDC) can be used to trigger a data sync from a MongoDB data source to a specified target. The attributes of the CDC Event are available to use as parameters within the Data Source Definition to narrow the scope of the request, for example in a lookup.
Open Database Connectivity (ODBC) is a standard API for accessing database management systems (DBMS).
Oracle is a relational database management system, commonly used for running online transaction processing, data warehousing and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).
Parquet is an open source data file format built to handle flat columnar storage data formats. Parquet operates well with complex data in large volumes and is known for both its performant data compression and its ability to handle a wide variety of encoding types.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
SAP SuccessFactors solutions are cloud-based HCM software applications that support core HR and payroll, talent management, HR analytics and workforce planning, and employee experience management.
Source
Mandatory. Select your source from the drop down menu.
Copper
Entity
Mandatory. The name of the entity you want to sync as it appears in your Copper CRM.
Companies
API Key
Mandatory. An encrypted version of your Copper API Key. The Connections UI will automatically encrypt this value for you.
"e98HGU72Lp0-fd34"
User Email
Mandatory. The encrypted user email associated with the API key used above. The Connections UI will automatically encrypt this value for you.
"e98HGU72Lp0-fd34hf990b4kLL23"
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Source
Mandatory. Select your source from the drop down menu.
Cinchy Query
Domain
Mandatory. The domain where your source query resides.
Compliance
Query Name
Mandatory. The name of your source query.
Open Tasks
Timeout
Optional. The timeout, in seconds, for your source query. If not entered, this value will default to 30.
120
Parameters
Optional. Review our documentation on Parameters here for more information about this field.
Name
Mandatory. The name of your column as it appears in the source query.
Owner Cinchy ID
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Source
Mandatory. Select your source from the drop down menu.
Cinchy Table
Domain
Mandatory. The domain where your source table resides.
Product
Table Name
Mandatory. The name of your source table.
Q1 Sales
Suppress Duplicate Errors
Optional. This field determines whether duplicate keys in the source are to be reported as warnings (unchecked) or ignored (checked). The default is unchecked. Checking this box can be useful in the event that you only want to load the distinct values from a collection of columns in the source.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
A binary file is a computer file that's not a text file, and whose content is in a binary format consisting of a series of sequential bytes, each of which is eight bits in length.
You can use binary files from a Local upload, Amazon S3, or Azure Blob Storage in your data syncs.
Some benefits of using binary files include:
Better efficiency via compression
Better security through the ability to create custom encoding standards.
Unmatched speed: since the data is stored in a raw format and isn't encoded using character encoding standards, it's faster to read and store.
You have a binary file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Binary File source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Employee Sync
Variables
Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to filepath.
@Filepath
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
(Sync) Source
Mandatory. Select your source from the drop down menu.
Binary File
Source
The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage.
The following authentication methods are supported per source:
Amazon S3: Access Key ID/Secret Access Key
Azure Blob Storage: Connection String
Local
Header Rows to Ignore
Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header).
1
Footer Rows to Ignore
Mandatory. The number of records from the bottom of the file to ignore
0
Encoding
Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII.
Path
Mandatory. The path to the source file to load. To upload a local file, you must first insert a Variable in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file.
@Filepath
AuthType
This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting Access Key, you must provide the key and key secret. When selecting IAM role, a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that:
The role is configured to have at least read access to the source.
The Connections pods' role has permission to assume the role specified in the data sync config.
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source. If configured correctly, a "Connection Successful" pop-up will appear. If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Parse Content By (Only for Standard Columns)
Binary File sources have a unique, mandatory parameter for Standard Columns:
Parse Content By - Choose from the following three options to define how you want to parse your content:
Byte Length - The content length in number of bytes.
Trailing Byte Sequence - The trailing sequence in base64 that indicates the end of the field.
Succeeding Byte Sequence - The sequence in base64 that indicates the start of the next field, and thus the end of this one.
Byte Length
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Fixed-width text files are special cases of text files where the format is specified by column widths, pad character and left/right alignment. Column widths are measured in units of characters. For example, if you have data in a text file where the first column always has exactly 10 characters, and the second column has exactly 5, the third has exactly 12, this would be categorized as a fixed-width text file.
If a text file follows the rules below, it's a fixed-width text file:
Each row (paragraph) contains one complete record of information.
Each row contains one or many pieces of data (also referred to as columns or fields).
Each data column has a defined width specified as a number of characters that's always the same for all rows.
The data within each column is padded with spaces (or any character you specify) if it doesn't completely use all the characters allotted to it (empty space).
Each piece of data can be left or right aligned, meaning the pad characters can occur on either side.
Each column must consistently use the same number of characters, same pad character and same alignment (left/right).
You have a fixed-width file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The fixed-width file source supports batch syncs.
The fixed-width file source doesn't support Geometry, Geography, or Binary data types.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab.
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Title
Mandatory. Input a name for your data sync
Copper to Cinchy
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
Open Approval Tasks
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, or execute permissions. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
Cinchy to HubSpot
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Dynamics 2015 is a legacy CRM predecessor to Microsoft Dynamics 365. Mainstream end of life support finished in January 2020, with extended end of life support finishing in January 2025.
You have customer information currently sitting in the Dynamics 2015 CRM software. You want to sync this data into Cinchy through a batch sync to liberate your data from the silo.
The Dynamics 2015 source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Dynamics 2015 to Cinchy
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Dynamics 2015
Username
Mandatory. The username of the Dynamics 2015 account that has access to the data you want to sync.
RStewart
Password
Mandatory. The password for the above Dynamics 2015 user account.
******
Domain
Mandatory. The Domain name of the Dynamics 2015 server you are connecting to.
Customer
URL
Mandatory. The URL of the Dynamics 2015 server you are connecting to.
Entity
Mandatory. The name of the entity you want to sync as it appears in your Dynamics 2015 CRM.
Companies
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
DB2 (Formerly Db2 for LUW) is a relational database that delivers advanced data management and analytics capabilities for transactional workloads.
The DB2 source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
DB2 to Cinchy
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
DB2
Connection String
Mandatory. The encrypted connection string used to access your DB2 database. The Connection UI will automatically encrypt this value for you.
Object
Mandatory. The type of object you want to use as your data sync. This will be either Table or Query.
Table
Table
Appears when "Table" is selected as the Object Type. The name of your table as it appears in your DB2 database.
dbo.employees
Query
Appears when "Query" is selected as the Object Type. This should be a SELECT statement indicating the data you want to sync out of your DB2 database.
Select * from dbo.employees
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source. This should be in all caps. EXCEPTION: If you chose "Query" as your object and use double quotes around the column names, then this value should match that casing.
NAME
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
You have customer information currently sitting in the Dynamics CRM software. You want to sync this data into Cinchy through a batch sync to liberate your data from the silo.
The Dynamics source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Dynamics to Cinchy
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Dynamics
Entity
Mandatory. The name of the entity you want to sync as it appears in your Dynamics CRM.
Companies
Service URL
Mandatory. The Web API URL of your instance.
https://org.api.crm.dynamics.com/api/data/v9.0/
Redirect URL
Mandatory. The Redirect URI from the Azure AD app registration
https://example.com/
Client ID
Mandatory. The encrypted Client ID found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you.
Client Secret
Mandatory. The encrypted Client Secret found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you.
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source. If configured correctly, a "Connection Successful" pop-up will appear. If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Amazon DynamoDB is a managed NoSQL database service that's offered by Amazon as part of the AWS portfolio.
You currently use DynamoDB to store metrics on product use and growth, but being stuck in the DynamoDB silo means that you can't easily use this data across a range of business use cases or teams. You can use a batch sync to liberate your data into Cinchy.
The DynamoDB source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Product Metrics
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role-based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
DynamoDB
Entity
Mandatory. The name of the entity you want to sync as it appears in DynamoDB.
Metrics
AWS Access Key (Client ID)
Mandatory. The encrypted AWS Access Key (Client ID) used to access your DynamoDB.
AWS Secret (Client Secret)
Mandatory. The encrypted AWS Secret (Client Secret) used to access your DynamoDB.
AWS Region
Mandatory. The name of the region for your AWS instance.
us-east-1
Username
Mandatory. The name of a user with access to connect to your DynamoDB server.
Password
Mandatory. The password associated with the above user.
AuthType
This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting Access Key, you must provide the key and key secret. When selecting IAM role, a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that:
The role is configured to have at least read access to the source.
The Connections pods' role has permission to assume the role specified in the data sync config.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.
You currently use Kafka to store the metrics for user logins, but being stuck in the Kafka silo means that you can't easily use this data across a range of business use cases or teams. You can use a real-time sync to liberate your data into Cinchy.
The Kafka Topic source supports real-time syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
Reset behaviour
Topic JSON
The below table can be used to help create your Topic JSON needed to set up a real-time sync.
Example Topic JSON
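A minimal sketch, assuming a topic named user.logins. The topicName field (the Kafka topic to listen on) is the essential value; messageFormat is shown as an assumed optional field:

```json
{
  "topicName": "user.logins",
  "messageFormat": "JSON"
}
```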
Connection attributes
The below table can be used to help create your Connection Attributes JSON needed to set up a real-time sync.
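A sketch under stated assumptions: the attribute names below (bootstrap servers plus SASL settings) follow common Kafka client configuration and aren't confirmed by this page:

```json
{
  "bootstrapServers": "broker1.example.com:9092",
  "saslMechanism": "PLAIN",
  "saslUsername": "",
  "saslPassword": "",
  "securityProtocol": "Plaintext"
}
```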
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
A delimited file is a sequential file with column delimiters. Each delimited file is a stream of records, which consists of fields that are ordered by column. Each record contains fields for one row. Within each row, individual fields are separated by column delimiters.
You have a delimited file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Delimited File source supports batch syncs.
The Delimited File source doesn't support Geometry, Geography, or Binary data types.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Microsoft Excel is a commonly used spreadsheet program for managing and analyzing numerical data. You can use Microsoft Excel as a source for your data syncs by following the instructions below.
You have an Excel spreadsheet that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Excel source supports batch syncs.
The Excel source doesn't support Binary data types.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
This example syncs from a Kafka Topic source to a Cinchy Table target.
We want to sync the following data from Kafka and map it to the appropriate column in the "Sync Target 2" table in the "Kafka Sync" domain.
This is what the Connections UI will look like with the aforementioned example parameters and data.
Your source tab should be set to "Kafka Topic" and have the following information (Image 1):
Tip: Click on an image in this document to enlarge it.
Your destination tab should be set to Cinchy Table, and have the following information (Image 2):
Domain: The domain where your destination table resides. This example uses the "Kafka Sync" domain.
Table: The name of your destination table. This example uses the "Sync Target 2" table.
Degree of Parallelism: This is the number of parallel batch inserts and updates that can be run. Set this to 1 for our example.
Under the Sync Behaviour tab, we want to use the following parameters:
Synchronization Pattern: Full File
Sync Key Column Reference Name: Employee Id
New Record Behaviour: Insert
Dropped Record Behaviour: Delete
Changed Record Behaviour: Update
The following code is what the XML for our example connection would look like:
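As the exact listing depends on your configuration, the following is only a rough, hand-written sketch of such a config; the element and attribute names are illustrative and may differ from the XML that your version of Connections generates:

```xml
<!-- Illustrative sketch only; element/attribute names may differ from the XML generated by Connections. -->
<BatchDataSyncConfig name="Kafka Topic to Cinchy" version="1.0.0" xmlns="http://www.cinchy.co">
  <KafkaTopicDataSource>
    <Schema>
      <Column name="$.employeeId" alias="Employee Id" dataType="Number" />
      <Column name="$.name" alias="Name" dataType="Text" trimWhitespace="true" />
    </Schema>
  </KafkaTopicDataSource>
  <CinchyTableTarget domain="Kafka Sync" table="Sync Target 2" degreeOfParallelism="1">
    <ColumnMappings>
      <ColumnMapping sourceColumn="Employee Id" targetColumn="Employee Id" />
      <ColumnMapping sourceColumn="Name" targetColumn="Name" />
    </ColumnMappings>
  </CinchyTableTarget>
</BatchDataSyncConfig>
```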
Apache Parquet is a file format designed to support fast data processing for complex data, with several notable characteristics:
1. Columnar: Unlike row-based formats such as CSV or Avro, Apache Parquet is column-oriented, meaning the values of each table column are stored next to each other rather than those of each record.
2. Open-source: Parquet is free to use and open source under the Apache Hadoop license, and is compatible with most Hadoop data processing frameworks. To quote the Apache Parquet documentation, “Apache Parquet is… available to any project… regardless of the choice of data processing framework, data model, or programming language.”
3. Self-describing: In addition to data, a Parquet file contains metadata including schema and structure. Each file stores both the data and the standards used for accessing each record – making it easier to decouple services that write, store, and read Parquet files.
You have a Parquet file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Parquet source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab.
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Optional. Review our documentation on Variables for more information about this field. When uploading a local file, set this to filepath.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Optional. Review our documentation on Variables for more information about this field.
Optional. Review our documentation on Variables for more information about this field.
Review our documentation for sample connection strings.
Optional. Review our documentation on Variables for more information about this field.
Optional. Review our documentation on Variables for more information about this field.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure them via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Source
Mandatory. Select your source from the drop down menu.
Kafka Topic
Auto Offset Reset
Earliest, Latest or None. In the case where the listener is started and either there is no last message ID, or when the last message ID is invalid (due to it being deleted or it's just a new listener), it will use this column as a fallback to determine where to start reading events from.
Earliest will start reading from the beginning of the queue (when the CDC was enabled on the table). This might be a suggested configuration if your use case is recoverable or re-runnable, or if you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed; this is the typical configuration. None won't read or start reading any events. You are able to switch between Auto Offset Reset types after your initial configuration through the process outlined here.
None
topicName
Mandatory. This is the Kafka topic name to listen messages on.
messageFormat
Optional. Put "AVRO" if your messages are serialized in AVRO, otherwise leave blank.
bootstrapServers
List the Kafka bootstrap servers in a comma-separated list. This should be in the form of host:port.
saslMechanism
This will be either PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. Note that SCRAM-SHA-256 must be formatted as SCRAMSHA256, and SCRAM-SHA-512 as SCRAMSHA512.
saslPassword
The password for your chosen SASL mechanism
saslUsername
The username for your chosen SASL mechanism.
url
This is required if your data follows a schema when serialized in AVRO. It's a comma-separated list of URLs for schema registry instances that are used to register or lookup schemas.
basicAuthCredentialsSource
Specifies the Kafka configuration property "schema.registry.basic.auth.credentials.source" that provides the basic authentication credentials. This can be "UserInfo" | "SaslInherit"
basicAuthUserInfo
Basic Auth credentials specified in the form of username:password
sslKeystorePassword
This is the client keystore (PKCS#12) password.
securityProtocol
Kafka supports cluster encryption and authentication, which can encrypt data-in-transit between your applications and Kafka.
Use this field to specify which protocol will be used for communication between client and server. Cinchy currently supports the following options: Plaintext (unauthenticated, non-encrypted), SaslPlaintext (SASL-based authentication, non-encrypted), or SaslSsl (SASL-based authentication, TLS-based encryption). If no parameter is specified, this will default to Plaintext.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
(Sync) Source
Mandatory. Select your source from the drop down menu.
Delimited File
Source
The location of the source file. Either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source:
Amazon S3: Access Key ID/Secret Access Key
Azure Blob Storage: Connection String
Local
Delimiter
Mandatory. The delimiter character used to separate the text strings. Use U+#### syntax (U+0001) for unicode characters.
,
Text Qualifier
Mandatory. The text qualifier character, which is used in the event that the delimiter is contained within the row cell. Typically, the text qualifier is a double quote.
"
Header Rows to Ignore
Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). If you use both useHeaderRecord="true" and HeaderRowsToIgnore = 1, two rows will be ignored. Refer to the examples below to ensure you are receiving the results you want:
One row as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 0
Two rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 1
Three rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 2
1
Encoding
Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII.
Use Header Record
Optional. Check this box to use the Header record to match schema. If set to true, fields not present in the record will default to null.
Path
Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file.
@Filepath
AuthType
This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting Access Key, you must provide the key and key secret. When selecting IAM role, a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You also must ensure that:
The role is configured to have at least read access to the source.
The Connections pods' role has permission to assume the role specified in the data sync config.
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source. If configured correctly, a "Connection Successful" pop-up will appear. If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
(Sync) Source
Mandatory. Select your source from the drop down menu.
Delimited File
Source
The location of the source file. Either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source:
Amazon S3: Access Key ID/Secret Access Key
Azure Blob Storage: Connection String
Local
Sheet Name
Mandatory. The name of the sheet that you want to sync.
Employee Info
Header Rows to Ignore
Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header).
1
Footer Rows to Ignore
Mandatory. The number of records from the bottom of the file to ignore.
0
Path
Mandatory. The path to the source file to load. To upload a local file, you must first insert a Variable in the Info tab of the connection (ex: `filepath`). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file.
@Filepath
AuthType
This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting **Access Key**, you must provide the key and key secret. When selecting **IAM role**, a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You also must ensure that:
The role must be configured to have at least read access to the source
The Connections pods' role must have permission to assume the role specified in the data sync config
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
| Kafka Source Field (JSONPath) | Target Column |
| --- | --- |
| $.employeeId | Employee Id |
| $.name | Name |
| Name | Alias | Data Type | Trim Whitespace |
| --- | --- | --- | --- |
| $.employeeId | Employee Id | Number | - |
| $.name | Name | Text | True |
| Source Column | Target Column |
| --- | --- |
| Employee Id | Employee Id |
| Name | Name |
(Sync) Source
Mandatory. Select your source from the drop down menu.
Parquet
Source
The location of the source file. Either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source:
Amazon S3: Access Key ID/Secret Access Key
Azure Blob Storage: Connection String
Local
Row Group Size
Mandatory. The size of your Parquet Row Groups. Review the documentation here for more on Row Group sizing.
The recommended disk block/row group/file size is 512 to 1024 MB on HDFS.
Path
Mandatory. The path to the source file to load. To upload a local file, you must first insert a Variable in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file.
@Filepath
Auth Type
This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting **Access Key**, you must provide the key and key secret. When selecting **IAM role**, a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You also must ensure that:
The role must be configured to have at least read access to the source
The Connections pods' role must have permission to assume the role specified in the data sync config
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Title
Mandatory. Input a name for your data sync
Employee Sync
Variables
Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to @filepath.
@Filepath
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
(Sync) Source
Mandatory. Select your source from the drop down menu.
Fixed Width File
Source
The location of the source file. Either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source:
Amazon S3: Access Key ID/Secret Access Key
Azure Blob Storage: Connection String
Local
Header Rows to Ignore
Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header).
1
Footer Rows to Ignore
Mandatory. The number of records from the bottom of the file to ignore
0
Encoding
Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII.
Path
Mandatory. The path to the source file to load. To upload a local file, you must first insert a Variable in the Info tab of the connection (ex: `filepath`). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file.
@Filepath
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Title
Mandatory. Input a name for your data sync
Website Metrics
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
Employee Sync
Variables
Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to filepath.
Since we're doing a local upload, we use "@Filepath"
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which give application developers an unprecedented level of flexibility and scalability.
Please review the following considerations before you set up your MongoDB Collection data sync source:
We currently only support SCRAM authentication (Mongo 4.0+).
Syncs are column based. This means that you must flatten the MongoDB source document prior to sync by using a projection (See section 2: Projection (JSON Object)).
The column names used in the source must match elements on the root object, with the exception of "$" which can be used to retrieve the full document.
By default, MongoDB batch size is 101.
By default, bulk operations size is 5000.
Due to a conversion of doubles to decimals that occurs during the sync process, minor data losses may occur.
The following data types aren't supported:
Binary Data
Regular Expression
DBPointer
JavaScript
JavaScript code with scope
Symbol
Min Key
Max Key
The following data types are supported with conversions:
ObjectID is supported, but converted to string
Object is supported, but converted to JSON
Array is supported, but converted to JSON
Timestamp is supported, but converted to 64-bit integers
The MongoDB Collection source supports batch syncs. (To enable real-time syncs with MongoDB, use the MongoDB Collection (Cinchy Event Triggered) or Mongo Event source instead.)
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
MongoDB Collection to Cinchy
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
MongoDB Collection
Connection String
Mandatory. This is the encrypted connection string. You can review MongoDB's Connection String guide and parameter descriptions here. Don't include the /[database] in your connection URL; by default, services like MongoDB Atlas will automatically include it when copying the connection string. If authenticating against a database other than the admin db, provide the name of the database associated with the user's credentials using the authSource parameter.
Example (Default): mongodb+srv://<username>:<password>@<cluster-url>
Example (Against a different database): mongodb+srv://<username>:<password>@<cluster-url>?authSource=<authentication_db>
Database
Mandatory. The name of the MongoDB database that contains the collection listed in the "Collection" parameter.
Blog
Collection
Mandatory. The name of your MongoDB collection.
Article
Type
Mandatory. The method for retrieving your data. This will be either:
db.collection.find(): Used to select documents in a collection when there is no need to transform (flatten or aggregate) the data. It's used for basic queries where query and projection are sufficient.
db.collection.aggregate(): Used when there is a need to transform the data in a collection. It's used for more complex scenarios with single or multi-stage pipelines.
In general, you will yield the quickest performance by using the find method, unless you need a specific aggregation operator.
Query (JSON Object)
A query for retrieving your data. This option appears if you have selected db.collection.find().
Example Query
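For instance, a minimal find() query that returns only documents priced under $10:

```json
{ "price": { "$lt": 10 } }
```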
Projection (JSON Object)
This option appears if you have selected db.collection.find(). Syncs are column based, which means that you must flatten the MongoDB source document prior to sync using a projection.
Example Projection
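And a minimal inclusion projection that flattens the result to the root-level fields being synced (field names are illustrative):

```json
{ "_id": 0, "name": 1, "price": 1, "stock": 1 }
```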
Pipeline (JSON Array of Objects)
An aggregation pipeline consists of one or more stages that process documents. This option appears if you have selected db.collection.aggregate().
Use SSL
This checkbox can be used to define the use of x.509 certificate authentication for your sync. If checked, you will need to input the following values taken from your cert:
SSL Key PEM
SSL Certificate PEM
SSL CLA PEM
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source. This field is case sensitive and preserves spaces.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
To run a batch sync, select Jobs > Start Job.
The MongoDB Collection Data Source obtains BSON documents from MongoDB. BSON, short for Binary JSON, is a binary-encoded serialization of JSON-like documents. Like JSON, BSON supports the embedding of documents and arrays within other documents and arrays. BSON also has extensions that allow representation of data types that aren't part of the JSON spec. For example, BSON makes a distinction between Int32 and Int64.
The following table shows how MongoDB data types are translated in Cinchy.
| MongoDB Data Type | Cinchy Data Type | Support |
| --- | --- | --- |
| Double | Number | Supported |
| String | Text | Supported |
| Object | Text (JSON) | Supported |
| Array | Text (JSON) | Supported |
| Binary Data | Binary | Unsupported |
| ObjectId | Text | Supported |
| Boolean | Boolean | Supported |
| Date | Date | Supported |
| Null | - | Supported |
| RegEx | - | Unsupported |
| JavaScript | - | Unsupported |
| Timestamp | Number | Supported |
| 32-bit Integer | Number | Supported |
| 64-bit Integer | Number | Supported |
| Decimal128 | Number | Supported |
| Min Key | - | Unsupported |
| Max Key | - | Unsupported |
| - | Geography | Unsupported |
| - | Geometry | Unsupported |
A retry configuration will automatically retry HTTP Requests on failure based on a defined set of conditions. This capability provides a mechanism to recover from transient errors such as network disruptions or temporary service outages.
Note: the maximum number of retries is capped at 10.
To set up a retry specification:
Select "Add Retry Configuration" from the Source tab.
Select your Delay Strategy.
Linear Backoff: Defines a delay of approximately n seconds where n = current retry attempt.
Exponential Backoff: A strategy where every new retry attempt is delayed exponentially by 2^n seconds, where n = current retry attempt.
Example: you defined Max Attempts = 3. Your first retry is going to be in 2^1 = 2, second: 2^2 = 4, third: 2^3 = 8 sec.
3. Input your Max Attempts. The maximum number of retries allowed is 10.
4. Define your Retry Conditions. You must define the conditions under which a retry should be attempted. For the Retry to trigger, at least one of the "Retry Conditions" has to evaluate to true.
Retry conditions are only evaluated if the response code isn't 2xx Success.
Each Retry Condition contains one or more "Attribute Match" sections. This defines a Regex to evaluate against a section of the HTTP response. The following are the three areas of the HTTP response that can be inspected:
Response Code
Header
Body
If there are multiple "Attribute Match" blocks within a Retry Condition, all have to match for the retry condition to evaluate to true.
The Regex value should be entered as a regular expression. The Regex engine is .NET, and expressions can be tested with an online .NET regex tester. In the below example, the Regex is designed to match any HTTP 5xx Server Error Codes, using a Regex value of 5[0-9][0-9].
For Headers, the format of the Header string which the Regex is applied against is {Header Name}={Header Value}. For example, Content-Type=application/json.
The following is an example of data we want to sync out of MongoDB.
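A hypothetical source document consistent with the values below might look like this (field values are illustrative):

```json
{
  "_id": { "$oid": "64b7f0c2e4b0a1a2b3c4d5e6" },
  "id": "A-1001",
  "name": "T-Shirt",
  "price": 9.99,
  "stock": 25,
  "Details": { "colour": "Blue", "size": "M" }
}
```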
This example XML uses the following values:
connectionString
The connections string for your source
"87E4lvPf83gLK8eKapH6Y0YqIFSNbFlq62uN9487"
Database
The name of your MongoDB database
"test"
Collection
"Article"
Type
The method used to retrieve your data.
"find"
Query
A query for retrieving your data.
This example query returns data where the price is less than $10.
Projection
A projection for flattening your source document.
Column Name
The name(s) of your source column(s)
"id" "name" "price" "colour" "size" "stock" "$" (This is used to retrieve the full document.) "Details" (This is imported both as set of fields (flattened from the projection) and as a JSON.)
dataType
The data type of your source column
"Text" "Text" "Number" "Text" "Text" "Number" "Text" "Text"
isMandatory
Whether the column is mandatory or not
"false"
validateData
Whether the column data needs to be validated or not
"false"
Data changes in the Cinchy Event Broker (CDC) can be used to trigger a data sync from a MongoDB data source to a specified target. The attributes of the CDC Event are available to use as parameters within the Data Source Definition to narrow the scope of the request. For example, a lookup.
The MongoDB Collection (Cinchy Event Triggered) Source supports real-time syncs.
Please review the following considerations before you set up your MongoDB Collection data sync source:
We currently only support SCRAM authentication (Mongo 4.0+).
Syncs are column based. This means that you must flatten the MongoDB source document prior to sync by using a projection (See section 2: Projection (JSON Object)).
The column names used in the source must match elements on the root object, except for "$" which can be used to retrieve the full document.
By default, MongoDB batch size is 101.
By default, bulk operations size is 5000.
Due to a conversion of doubles to decimals that occurs during the sync process, minor data losses may occur.
The following data types aren't supported:
Binary Data
Regular Expression
DBPointer
JavaScript
JavaScript code with scope
Symbol
Min Key
Max Key
The following data types are supported with conversions:
ObjectID is supported, but converted to string
Object is supported, but converted to JSON
Array is supported, but converted to JSON
Timestamp is supported, but converted to 64-bit integers
The following sections in the Source configuration of the Connections experience can reference attributes of the CDC Event as parameters:
Connection String
Database Name
Collection Name
Query
Projection
Pipeline
In Cinchy v5.6+, you can also reference attributes of the CDC Event in Calculated Columns.
Note that syncs making use of this must limit their batch size to 1.
Parameters use the column name or alias as defined in the CDC Event Listener Config, prefixed with an @. For example, @CompanyName would be the parameter name for a Cinchy CDC listener Topic configuration that includes a Company Name column.
Parameter names are case sensitive when used in the Connection configuration. Parameter matching is performed using literal string replacements. Names shouldn't contain spaces (spaces are automatically removed), and should have differing prefixes.
The following set of parameters will be available on every event, even if they're not present in the listener config:
@Version
@DraftVersion
@CinchyRecordType
@ApprovalState
@ModifiedBy
@Modified
@Deleted
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
MongoDB Cinchy Event to Cinchy
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
MongoDB Collection (Cinchy Event Triggered)
Connection String
Example (Default): mongodb+srv://<username>:<password>@<cluster-url>
Example (Against a different database): mongodb+srv://<username>:<password>@<cluster-url>?authSource=<authentication_db>
Database
Blog
Collection
Article
Type
Query (JSON Object)
A query for retrieving your data. This option appears if you have selected db.collection.find().
Projection (JSON Object)
This option appears if you have selected db.collection.find(). Syncs are column based. This means that you must flatten the MongoDB source document prior to sync using a projection.
Pipeline (JSON Array of Objects)
Use SSL
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source. This field is case sensitive and preserves spaces.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
To run a real-time sync (using the Cinchy Event Triggered MongoDB Source), set up your Listener Config using the Cinchy Event Broker/CDC and enable it to begin your sync.
The MongoDB Collection Data Source obtains BSON documents from MongoDB. BSON, short for Binary JSON, is a binary-encoded serialization of JSON-like documents. Like JSON, BSON supports the embedding of documents and arrays within other documents and arrays. BSON also contains extensions that allow representation of data types that aren't part of the JSON spec. For example, BSON makes a distinction between Int32 and Int64.
The following table shows how MongoDB data types are translated in Cinchy.
| MongoDB Data Type | Cinchy Data Type | Support |
| --- | --- | --- |
| Double | Number | Supported |
| String | Text | Supported |
| Object | Text (JSON) | Supported |
| Array | Text (JSON) | Supported |
| Binary Data | Binary | Unsupported |
| ObjectId | Text | Supported |
| Boolean | Boolean | Supported |
| Date | Date | Supported |
| Null | - | Supported |
| RegEx | - | Unsupported |
| JavaScript | - | Unsupported |
| Timestamp | Number | Supported |
| 32-bit Integer | Number | Supported |
| 64-bit Integer | Number | Supported |
| Decimal128 | Number | Supported |
| Min Key | - | Unsupported |
| Max Key | - | Unsupported |
| - | Geography | Unsupported |
| - | Geometry | Unsupported |
To configure a MongoDB Collection (Cinchy Event Triggered) connection, a listener must be configured via the Listener Config table with an Event Connector Type of Cinchy CDC.
Review the Cinchy Event Broker/CDC Listener Configuration values here, and then navigate to the Listener Config table to input a new row.
When setting up your listener configuration for your data sync, keep the following constraints in mind:
Column names in the listener config shouldn't contain spaces. If they do, they will be automatically removed. For example, a column named Company Name will become the replacement parameter @CompanyName.
The replacement parameter names are case sensitive.
Column names in the listener config shouldn't be prefixes of other column names. For example, if you have a column called Name, you shouldn't have another called Name2, as the value of @Name2 may end up being replaced by the value of @Name suffixed with a 2.
Title
Mandatory. Input a name for your data sync
Employee Sync
Variables
Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to @filepath.
@Filepath
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
Employee Sync
Variables
Optional. Review our documentation on Variables here for more information about this field. When uploading a local file, set this to @filepath.
@filepath
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Apache AVRO was added as an inbound data format in Cinchy v5.3.
Apache AVRO (inbound) is a data format with added integration with the Kafka Schema Registry, which helps enforce data governance within a Kafka architecture.
Avro is an open source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro stores both the data definition and the data together in one message or file. Avro stores the data definition in JSON format making it easy to read and interpret; the data itself is stored in binary format making it compact and efficient.
Some of the benefits for using AVRO as a data format are:
It's compact
It has a direct mapping to/from JSON
It's fast
It has bindings for a wide variety of programming languages.
For more about AVRO and Kafka, read the documentation here.
To set up the Apache AVRO connection to a Kafka Schema Registry, you will need to configure your Listener Configs table with the below specified attributes.
"topicName"
Mandatory. This is the Kafka topic name to listen messages on.
"messageFormat"
Put "AVRO" if your messages are serialized in AVRO
"bootstrapServers"
Mandatory. List the Kafka bootstrap servers in a comma-separated list. Should be in the form of host:port
"url"
This is required if your data follows a schema when serialized in AVRO. It's a comma-separated list of URLs for schema registry instances that are used to register or lookup schemas.
"basicAuthCredentialsSource"
Specifies the Kafka configuration property "schema.registry.basic.auth.credentials.source" that provides the basic authentication credentials. This can be "UserInfo" | "SaslInherit"
"basicAuthUserInfo"
Basic Auth credentials specified in the form of username:password
"sslKeystorePassword"
The client keystore (PKCS#12) password
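Putting these attributes together, a minimal sketch of a listener config (all values are placeholders to substitute with your own):

```json
{
  "topicName": "avro-user-events",
  "messageFormat": "AVRO",
  "bootstrapServers": "broker-1:9092,broker-2:9092",
  "url": "https://schema-registry:8081",
  "basicAuthCredentialsSource": "UserInfo",
  "basicAuthUserInfo": "<username>:<password>",
  "sslKeystorePassword": "<keystore-password>"
}
```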
LDAP (Lightweight Directory Access Protocol) is a mature, flexible, and well-supported standards-based software protocol that enables anyone to locate data, whether on the public internet or on a corporate intranet.
Common uses of LDAP include when:
A single piece of data needs to be found and accessed regularly;
Your organization has a lot of smaller data entries;
Your organization wants all smaller pieces of data in one centralized location, and there doesn't need to be an extreme amount of organization between the data.
The LDAP source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
LDAP to Cinchy
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
LDAP
Server
Mandatory. The name of your LDAP Server Directory
Company-1
Object Category
Internal-Metrics
Username
Mandatory. The name of a user who has access to the LDAP server.
Product
Password
Mandatory. The password for the above user. The Connections UI will encrypt this value.
CN (Common Name)
Optional.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
The MongoDB Event stream source works similarly to Cinchy's Change Data Capture functionality. The listener subscribes to the change stream of a specific collection in the database of the MongoDB server. Any actions performed on documents inside that collection are picked up by the listener and sent to the queue.
To use change streams in MongoDB, there are a few requirements your environment must meet.
The database must be in a replica set or sharded cluster.
The database must use the WiredTiger storage engine.
The replica set or sharded cluster must use replica set protocol version 1.
The MongoDB Event source supports real-time syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Mongo Event to Cinchy
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
MongoDB Event
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
Reset Behaviour
Auto Offset Reset
Earliest, Latest or None. In the case where the listener is started and either there is no last message ID, or when the last message ID is invalid (due to it being deleted or it's just a new listener), it will use this column as a fallback to determine where to start reading events from.
None
Topic JSON
The below table can be used to help create your Topic JSON needed to set up a real-time sync.
Database
Cinchy
Collection
Employee
Pipeline Stages
Optional. This parameter allows you to specify pipeline stages with filters.
Each stage performs an operation on the input documents. For example, a stage can filter documents, group documents, and calculate values.
The documents that are output from a stage are passed to the next stage.
An aggregation pipeline can return results for groups of documents. For example, return the total, average, maximum, and minimum values.
See the Example Topic JSON below. Our example config uses a filter to return documents with an ID between 0 and 10,000 AND documents with the location set to Montreal, OR where the operation type is 'delete'.
Example Topic JSON
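A hedged sketch of such a Topic JSON, using the Database and Collection values from this table; the pipeline stage follows standard MongoDB change-stream $match syntax, but the exact wrapper key names are illustrative:

```json
{
  "database": "Cinchy",
  "collection": "Employee",
  "pipelineStages": [
    {
      "$match": {
        "$or": [
          {
            "$and": [
              { "fullDocument.id": { "$gte": 0, "$lte": 10000 } },
              { "fullDocument.location": "Montreal" }
            ]
          },
          { "operationType": "delete" }
        ]
      }
    }
  ]
}
```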
Connection Attributes
The below table can be used to help create your Connection Attributes JSON needed to set up a real-time sync.
connectionString
mongodb://localhost:9877
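At a minimum, the Connection Attributes JSON is a sketch along these lines:

```json
{
  "connectionString": "mongodb://localhost:9877"
}
```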
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source. This field is case sensitive and preserves spaces.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure it/them via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
Oracle Database is a relational database management system, commonly used for running online transaction processing, data warehousing, and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).
The Oracle Query and Table sources support batch syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
The REST API source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Mandatory and optional parameters for the Source tab are outlined below (Image 2).
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
More options are available to you under the "Add a Section" drop down.
Note that adding a Pagination Block is mandatory.
To get fields in a nested array, you can either set the nested array as the root, or you can use Path to Iterate to expand the array.
Here is a sample JSON response:
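For instance, a response shaped like the following (values are illustrative):

```json
{
  "groupId": 1234,
  "users": [
    { "userId": 1, "name": "John Smith" },
    { "userId": 2, "name": "Jane Doe" }
  ]
}
```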
Records Root JSONPath: $.users
Schema:
$.userId for ID
$.name for Name
You can't reference "groupId" as it's one level above the specified root scope.
Use $.data in Records Root JSONPath if the API returns a top-level JSON array.
Here is a sample JSON response:
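For instance, a top-level array response (values are illustrative):

```json
[
  { "name": "John Smith", "age": 32 },
  { "name": "Jane Doe", "age": 41 }
]
```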
Records Root JSONPath: $.data
Schema:
$.name for Name
$.age for Age
Use Path to Iterate to expand and target nested keys within the array. This only applies if the records within an array are objects.
If the record within the path to iterate is an array, each item within the array gets placed under an "item" key in a new JSON object.
For example, here is a sample JSON response:
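For instance, a response shaped like the following (values are illustrative):

```json
[
  {
    "name": "John Smith",
    "transactions": [
      { "id": 1001 },
      { "id": 1002 }
    ]
  }
]
```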
In this example, we want to iterate over the "transactions" array, capture each transaction's "id" and assign it to the "Transaction ID" column, and add the parent "name" key to a Name column.
Records Root JSONPath: $
Path to iterate: $.transactions
Schema:
$.name for Name
$.transactions.id for Transaction ID
To run a batch sync, select Jobs > Start Job
You are able to use this section to add body content.
Retry Configuration automatically retries HTTP Requests on failure based on a defined set of conditions. This provides a mechanism to recover from transient errors, such as network disruptions or temporary service outages.
Note: the maximum number of retries is capped at 10.
To set up a retry specification:
1. Under the REST API source tab, select API Specification > Retry Configuration.
2. Select your Delay Strategy.
Linear Backoff: Defines a delay of approximately n seconds, where n = current retry attempt.
Exponential Backoff: A strategy where every new retry attempt is delayed exponentially by 2^n seconds, where n = current retry attempt. Example: if you defined Max Attempts = 3, your first retry would be after 2^1 = 2 seconds, the second after 2^2 = 4, and the third after 2^3 = 8.
3. Input your Max Attempts. The maximum number of retries allowed is 10.
4. Define your Retry Conditions. You must define the conditions under which a retry should be attempted. For the Retry to trigger, at least one of the "Retry Conditions" has to evaluate to true.
Retry conditions are only evaluated if the response code isn't 2xx Success.
Each Retry Condition contains one or more "Attribute Match" sections. This defines a regex to evaluate against a section of the HTTP response. The following are the three areas of the HTTP response that can be inspected:
Response Code
Header
Body
If there are multiple "Attribute Match" blocks within a Retry Condition, all have to match for the retry condition to evaluate to true.
Microsoft SQL Server is one of the main relational database management systems on the market that serves a wide range of software applications for business intelligence and analysis in corporate environments.
Based on the Transact-SQL language, it incorporates a set of standard language programming extensions and its application is available for use both on premise and in the cloud.
Microsoft SQL Server is ideal for storing all the desired information in relational databases, as well as to manage such data without complications, thanks to its visual interface and the options and tools it has.
The MS SQL Server Query and Table sources support batch syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Version 5.4 of the Cinchy platform introduced data polling, a source option which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling an easier, more effective, and more streamlined process and avoids implementing the complex orchestration logic that was previously necessary.
The Polling Event source supports real-time syncs.
The Polling Event Source supports Oracle, DB2 and SQL Server databases.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
Reset behaviour
Optional AppSettings configurations
DataPollingConcurrencyIndex: This property allows only a certain number of threads to run queries against the source database, which works to reduce the load against the database.
The default number of threads is set to 12.
To configure this property, navigate to your appSettings.json deployment file > "DataPollingConcurrencyIndex": <numberOfThreads>
QueueWriteConcurrencyIndex: This property allows only a certain number of threads to concurrently send messages to the queue. This provides more consistent batching by the worker and reduces your batching errors.
The default number of threads is set to 12.
To configure this property, navigate to your appSettings.json deployment file > "QueueWriteConcurrencyIndex": <numberOfThreads>.
Note that this index is shared across all listener configs, meaning that if it's set to 1 only one listener config will be pushing the messages to the queue at a single moment in time.
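For reference, a minimal sketch of how these two properties might appear in your appSettings.json (the exact nesting can vary by deployment; 12 is the default for both):

```json
{
  "DataPollingConcurrencyIndex": 12,
  "QueueWriteConcurrencyIndex": 12
}
```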
Topic JSON
The below table can be used to help create your Topic JSON needed to set up a real-time sync.
**Example Topic JSON**
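A sketch assembled from the parameter descriptions in this section; the example values mirror those in the parameter table, and the exact property nesting in your environment may differ:

```json
{
  "CursorConfiguration": {
    "FromClause": "[Source Table]",
    "CursorColumn": "Id",
    "BatchSize": 100,
    "FilterCondition": "Name IS NOT NULL",
    "Columns": ["Id", "Name"],
    "CursorColumnDataType": "int",
    "Distinct": true
  },
  "ReturnDataConfiguration": {
    "CursorAlias": "t",
    "JoinClause": "[Table1] ts ON ts.[Id] = t.[Id]",
    "FilterCondition": "ts.[Id] > 0",
    "OrderByClause": "Id",
    "Columns": ["ts.[Id]", "ts.[Name]"]
  },
  "Delay": 10,
  "messageKeyExpression": "id"
}
```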
Connection Attributes
The below table can be used to help create your Connection Attributes JSON needed to set up a real-time sync.
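A minimal sketch using the databaseType and connectionString attributes described in this section (the values shown are placeholders):

```json
{
  "databaseType": "TSQL",
  "connectionString": "<encrypted connection string>"
}
```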
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
The messageKeyExpression parameter is an optional, but recommended, parameter that can be used to ensure that you aren't faced with a unique constraint violation during your data sync. This violation could occur if both an insert and an update statement happened at nearly the same time. If you choose not to use the messageKeyExpression parameter, you could face data loss in your sync.
This parameter was added to the Data Polling event stream in Cinchy v5.6.
Each of your Event Listener messages has a message key. By default, this key is unique for every message in the queue.
When the worker processes your Event Listener messages, it does so in batches; for efficiency and to guarantee order, messages that contain the same key won't be processed in the same batch.
The messageKeyExpression property allows you to change the default message key to something else.
Example:
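As a minimal illustration (your column name may differ), setting the expression to the record's id means every event for the same record shares a key, so those events won't be processed in the same batch and their order is preserved:

```json
"messageKeyExpression": "id"
```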
Optional. Review our documentation on Variables for more information about this field.
Mandatory. The data type of the column values.
The name of your
Optional. Review our documentation on Variables for more information about this field.
Mandatory. This is the encrypted connection string. You can review MongoDB's Connection String guide and parameter descriptions here. Don't include the /[database] in your connection URL; by default, services like MongoDB Atlas will automatically include it when copying the connection string. If authenticating against a database other than the admin db, please provide the name of the database associated with the user's credentials using the authSource parameter.
Mandatory. The name of the database that contains the collection listed in the "Collection" parameter.
Mandatory. The name of your collection.
Mandatory. The method for retrieving your data. This will be either:
db.collection.find(): This method is used to select documents in a collection when there is no need to transform (flatten or aggregate) the data. It's used for basic queries where query and projection are sufficient.
db.collection.aggregate(): This method is used when there is a need to transform the data in a collection. It's used for more complex scenarios with single- or multi-stage pipelines.
In general, you will yield the quickest performance by using the find method, unless you need an aggregation pipeline.
An aggregation pipeline consists of one or more stages that process documents. This option appears if you have selected db.collection.aggregate().
This checkbox can be used to define the use of authentication for your sync. If checked, you will need to input the following values taken from your cert: SSL Key PEM, SSL Certificate PEM, and SSL CLA PEM.
Mandatory. The data type of the column values.
Optional. Review our documentation on Variables for more information about this field.
Mandatory. The name of the that you want to sync into your destination.
Optional. Review our documentation on Variables for more information about this field.
Earliest will start reading from the beginning of the queue (when the CDC was enabled on the table). This might be a suggested configuration if your use case is recoverable or re-runnable and if you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed. This is the typical configuration. None won't read or start reading any events. You are able to switch between Auto Offset Reset types after your initial configuration through the process outlined here.
Mandatory. The name of your MongoDB
Mandatory. The name of your MongoDB
In MongoDB, an aggregation pipeline consists of one or more stages that process documents:
Mandatory. Your MongoDB
Mandatory. The data type of the column values.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You can learn more about these sections in our documentation.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
A pagination block is mandatory. See the documentation for more on pagination blocks.
Note that the Regex value should be entered as a regular expression. The Regex engine is .NET, and expressions can be tested with a .NET regex testing tool. In the below example, the Regex is designed to match any HTTP 5xx Server Error Code, using a Regex value of 5[0-9][0-9].
For Headers, the format of the Header string which the regex is applied against is {Header Name}={Header Value}. For example, "Content-Type=application/json".
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure it/them via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
Source
Mandatory. Select your source from the drop down menu.
Oracle
Connection String
Mandatory. The Connection String to connect to your Oracle database. The Connections UI will encrypt this value.
Please see here for example Connection Strings.
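For illustration, an unencrypted Oracle connection string typically takes a shape like the following (all values are placeholders):

```
Data Source=MyOracleHost:1521/MyService;User Id=myUsername;Password=myPassword;
```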
Object
Mandatory. The type of Object you want to use in your data sync.
This can be either Table or Query.
Table
Appears when Object = Table. The name of the Table you want to sync out of Oracle.
Employees
Query
Appears when Object = Query. This should be a SELECT statement indicating the data you want to sync out of Oracle.
Select * from dbo.employees
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Source
Mandatory. Select your source from the drop down menu.
REST API
HTTP Method
Mandatory.
This will be either GET or POST.
API Response Format
Mandatory. Use this field to specify a response format of the endpoint. Currently, the Connections UI only supports JSON responses.
JSON
Records Root JSONPath
Mandatory. Specify the JSON path for the results. The root of a JSON object is $. If the top-level element of the response is an array, Cinchy places the array under a "data" key in a new JSON object. See Best practices for more info.
$.data, $, $.ResponseObject
Path to Iterate
The path to select an array of records for capturing elements inside. A record is created for each element which you can use as the input in a source schema. The path is relative to the root JSONPath.
API Endpoint URL
Mandatory. The API endpoint, including URL parameters like the API key.
https://www.quandl.com/api/v3/datatables/CLS/IDHP?fx_business_date=2024-01-01&api_key=@API_KEY
Next Page URL JSONPath
Specify the path for the next page URL. This is only relevant for APIs that use cursor pagination.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
- If both 'Mandatory' and 'Validated': empty rows rejected. - If only 'Mandatory': rows synced but marked as failed with 'Mandatory Rule Violation'.
Validate Data
- If both 'Mandatory' and 'Validated': empty rows rejected. - If only 'Validated': all rows synced.
Trim Whitespace
Optional for text data. Choose to trim whitespace.
Max Length
Optional for text data. Set max length; exceeding values get rejected.
Pattern
Mandatory when using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
ID
Number
NAME
Text
AGE
Number
ADDRESS
Text
SALARY
Number
Source
Mandatory. Select your source from the drop down menu.
MS SQL Server
Connection String
Mandatory. The Connection String to connect to your MS SQL Server. The Connections UI will encrypt this value.
Please see here for example Connection Strings.
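For illustration, an unencrypted SQL Server connection string typically takes a shape like the following (all values are placeholders):

```
Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;
```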
Object
Mandatory. The type of Object you want to use in your data sync.
This can be either Table or Query.
Table
Appears when Object = Table. The name of the Table (including the schema) you want to sync out of your MS SQL Server.
Employees
Query
Appears when Object = Query. This should be a SELECT statement indicating the data you want to sync out of your MS SQL Server.
Select * from dbo.employees
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Source
Mandatory. Select your source from the drop down menu.
Polling Event
Auto Offset Reset
Earliest, Latest or None. In the case where the listener is started and either there is no last message ID, or when the last message ID is invalid (due to it being deleted or it's just a new listener), it will use this column as a fallback to determine where to start reading events from.
Earliest will start reading from the beginning of the queue (when the CDC was enabled on the table). This might be a suggested configuration if your use case is recoverable or re-runnable and if you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed. This is the typical configuration. None won't read or start reading any events. You are able to switch between Auto Offset Reset types after your initial configuration through the process outlined here.
None
CursorConfiguration
Mandatory. The parameters here are used in a basic query which searches for all records in a particular table.
Note that in our example we need to use a sub-query to prevent an infinite loop if the "CursorColumn" parameter isn't unique.
Example basic query:
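As a hypothetical illustration of how the CursorConfiguration parameters compose into a query (T-SQL shown; @lastProcessedId stands in for the listener's stored offset, and the example values come from the parameter table below):

```sql
SELECT TOP (100) [Id], [Name]            -- Columns, limited by BatchSize
FROM [Source Table]                      -- FromClause
WHERE [Id] > @lastProcessedId            -- CursorColumn drives the offset
  AND Name IS NOT NULL                   -- FilterCondition
ORDER BY [Id]                            -- ordered by the CursorColumn
```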
FromClause
Mandatory. This must contain at least the table name but can also contain Joined tables as written in SQL language.
Example: [Source Table]
CursorColumn
Mandatory. Column name that's used in any 'WHERE' condition(s) and for ordering the result of a query
Example: [Id]
BatchSize
Mandatory. Minimum size of a batch of data per query. This can be larger to prevent infinite loops if the CursorColumn isn't unique.
Example: 100
FilterCondition
All filtering options used in any 'WHERE' condition(s) of the query
Example: Name IS NOT NULL
Columns
Mandatory. A list of columns that we want to show in a result.
Example: Id, Name
ReturnDataConfiguration
The parameters here are used in more complex queries. This example has two related tables, but we want to show the contents of one of them based on the 'CursorColumn' from the second table. Since Timestamp values aren't unique, we need to find all combinations of Id, Timestamp that match the filter condition in a subquery, and then join this result with the outer query to get the final result. In `ReturnDataConfiguration`, our parameters of concern are everything outside of the first open parenthesis `(` and last closing parenthesis `)`. For example:
Example complex query:
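As a hypothetical illustration of how the ReturnDataConfiguration parameters compose ([Table2], [Timestamp], and @lastProcessedTimestamp are placeholders; the subquery between the parentheses finds the matching Id, Timestamp combinations):

```sql
SELECT ts.[Id], ts.[Name]                         -- Columns
FROM (
    SELECT [Id], [Timestamp]                      -- subquery: matching Id/Timestamp pairs
    FROM [Table2]
    WHERE [Timestamp] > @lastProcessedTimestamp
) t                                               -- CursorAlias
INNER JOIN [Table1] ts ON ts.[Id] = t.[Id]        -- JoinClause
WHERE ts.[Id] > 0                                 -- FilterCondition
ORDER BY [Id]                                     -- OrderByClause
```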
CursorAlias
Mandatory. This is the alias for a subquery result table. It's used in 'JoinClause', and can be used in 'Columns' if we want to return values from a subquery table.
Example: "t"
JoinClause
Mandatory. Our result table to which we join the subquery result, plus the condition of the join.
Example: [Table1] ts ON ts.[Id] = t.[Id]
FilterCondition
All filtering options used in any 'WHERE' conditions.
Example: "ts.[Id] > 0"
OrderByClause
Mandatory. This is the column(s) that we want to order our final result by.
Example: "Id"
Columns
Mandatory. A list of columns that we want to show in the final result.
Example: "ts.[Id]" "ts.[name]"
Delay
Mandatory. This represents the delay, in seconds, between data sync cycles once it no longer finds any new data.
Example: 10
messageKeyExpression
Optional, but recommended to mitigate data loss. See Appendix A for more information on this parameter.
id
CursorConfiguration.CursorColumnDataType
Mandatory. This property works in tandem with an update that ensures the database query always moves the offset, regardless of whether the query returned records. This helps ensure that the performance of the source database isn't weighed down by constantly running heavy queries over a wide range of records when the queries return no data. The value of this mandatory property must match the column type of the source database system for proper casting of parameters.
int
CursorConfiguration.Distinct
Mandatory. This property is a true/false Boolean type that, when set to true, applies a distinct clause on your query to avoid any duplicate records.
true
databaseType
Mandatory. TSQL, Oracle, or DB2
TSQL
connectionString
Mandatory. This should be the connection string for your data source.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Column Name
The name(s) of your source column(s)
"Id" "Name" "Age" "Address" Salary"
dataType
The data type of your source column
"Number" "Text"
isMandatory
Whether the column is mandatory or not
"false"
validateData
Whether the column data needs to be validated or not
"false"
Data changes in Cinchy (CDC) can be used to trigger a data sync from a REST API data source to a specified target. The attributes of the CDC Event are available to use as parameters within the REST API Data Source Definition to narrow the scope of the request, for example a lookup.
An organization wants to use the Dun & Bradstreet API for enriching company information, such as the number of employees or their addresses. When a company record is added or modified in a table called Companies inside of Cinchy, a D&B API should be triggered with the Company Name (a mandatory field on the Companies table) passed in as a parameter, and the Company record should be enriched with the company information from the API response.
The following sections in the Source configuration of the Connections experience can reference attributes of the CDC Event as parameters:
Auth Request -> Body
Auth Request -> Request Headers -> Header -> Header Value
Auth Request -> Endpoint URL
Body
Request Headers -> Header -> Header Value
API Endpoint URL
Parameters use the column name or alias as defined in the CDC Event's Listener Config, prefixed with an @. For example, @CompanyName would be the parameter name for a Cinchy CDC listener Topic configuration that includes a Company Name column.
Parameter names are case sensitive when used in the Connection configuration. Parameter matching is performed using literal string replacements. Names shouldn't contain spaces (spaces are automatically removed) and shouldn't be prefixes of one another.
The following set of parameters will be available on every event, even if they're not present in the listener config:
@Version
@DraftVersion
@CinchyRecordType
@ApprovalState
@ModifiedBy
@Modified
@Deleted
To configure a REST API (Cinchy Event Triggered) connection, a listener must be configured. If configuring using the Listener Config table, you would select the Event Connector Type of Cinchy CDC.
Otherwise, you can set up your listener configuration for your data sync through the Connections UI, keeping the following constraints in mind:
Column names in the listener config shouldn't contain spaces. If they do, they will be automatically removed. For example, a column named **Company Name** will become the replacement parameter @CompanyName.
The replacement parameter names are case sensitive.
Column names in the listener config shouldn't be prefixes of other column names. For example, if you have a column called Name, you shouldn't have another called Name2, as the value of @Name2 may end up being replaced by the value of @Name suffixed with a 2.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Push Topic events provide a secure and scalable way to receive notifications for changes to Salesforce data that match a SOQL (Salesforce Object Query Language) query you define.
You can use Push Topic events to:
Receive notifications of Salesforce record changes, including create, update, delete, and undelete operations.
Capture changes for the fields and records that match a SOQL query.
Receive change notifications for only the records a user has access to based on sharing rules.
Limit the stream of events to only those events that match a subscription filter.
The Salesforce Push Topic source supports real-time syncs.
You can use a Push Topic already configured in Salesforce, or have Cinchy Event Listener create the Push Topic for you.
Cinchy will compare the JSON with the properties on the push topic in Salesforce by name. If the attributes match, the listener will start listening on the push topic. If any of the attributes don't match, Cinchy will sync the push topic from Salesforce into Cinchy and disable the listener.
If the Push Topic name doesn't exist in Salesforce, Cinchy will attempt to create the Push Topic. If it's successful, it will sync in the Id from Salesforce and start listening on the push topic.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Salesforce Push Topic
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Salesforce Push Topic
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
Reset Behaviour
Auto Offset Reset
Determines the starting point for event reading. Options:
Earliest: Starts from the queue beginning. Useful for recoverable or re-runnable use cases.
Latest: Starts after the last processed event.
None: No events are read.
You can change this setting later.
None
Topic JSON
The below table can be used to help create your Topic JSON needed to set up a real-time sync.
Id
Name
Mandatory. Descriptive name of the PushTopic. Note that there is a 25 character limit on this field.
LeadsTopic
Query
Mandatory. The SOQL query statement that determines which record changes trigger events to be sent to the channel. This field has a 1,300 character limit.
SELECT Id, Name, Email FROM Lead
ApiVersion
Mandatory. The API version to use for executing the query specified in Query. It must be an API version greater than 20.0. If your query applies to a custom object from a package, this value must match the package's ApiVersion.
47.0
NotifyForOperationCreate
Set this to true if a create operation should generate a notification, otherwise, false. Defaults to true.
true
NotifyForOperationUpdate
Set this to true if an update operation should generate a notification, otherwise, false. Defaults to true.
true
NotifyForOperationUndelete
Set this to true if an undelete operation should generate a notification, otherwise, false. Defaults to true.
true
NotifyForOperationDelete
Set this to true if a delete operation should generate a notification, otherwise, false. Defaults to true.
true
NotifyForFields
Specifies which fields are evaluated to generate a notification. Possible values are: All, Referenced (default), Select, Where.
Referenced
Example Topic JSON
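A sketch assembled from the example values in the table above; treat it as illustrative rather than a canonical template:

```json
{
  "Name": "LeadsTopic",
  "Query": "SELECT Id, Name, Email FROM Lead",
  "ApiVersion": 47.0,
  "NotifyForOperationCreate": true,
  "NotifyForOperationUpdate": true,
  "NotifyForOperationUndelete": true,
  "NotifyForOperationDelete": true,
  "NotifyForFields": "Referenced"
}
```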
Connection Attributes
The below table can be used to help create your Connection Attributes JSON needed to set up a real-time sync.
ApiVersion
Mandatory. Your Salesforce API Version. Note that this needs to be an exact match; for instance, 47.0 can't be written as simply 47.
47.0
GrantType
This value should be set to password.
password
ClientId
The encrypted Salesforce Client ID. You can encrypt this value using the Cinchy CLI.
Bn8UmtiLydmYQV6//qCL5dqfNUMhqchdk959hu0XXgauGMYAmYoyWN8FD+voGuMwGyJa7onrc60q1Hu6QFsQXHVA==
ClientSecret
The encrypted Salesforce Client Secret. You can encrypt this value using the Cinchy CLI.
DyU1hqde3cWwkPOwK97T6rzwqv6t3bgQeCGq/fUx+tKI=
Username
The encrypted Salesforce username. You can encrypt this value using the Cinchy CLI.
dXNlcm5hbWVAZW1haWwuY29t
Password
The encrypted Salesforce password. You can encrypt this value using the Cinchy CLI.
cGFzc3dvcmRwYXNzd29yZA==
InstanceAuthUrl
The authorization URL of the Salesforce instance.
https://login.salesforce.com/services/oauth2/token
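A sketch assembled from the attributes above (the encrypted values are placeholders; encrypt your own with the Cinchy CLI):

```json
{
  "ApiVersion": "47.0",
  "GrantType": "password",
  "ClientId": "<encrypted Client ID>",
  "ClientSecret": "<encrypted Client Secret>",
  "Username": "<encrypted username>",
  "Password": "<encrypted password>",
  "InstanceAuthUrl": "https://login.salesforce.com/services/oauth2/token"
}
```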
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure it/them via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
Title
Mandatory. Input a name for your data sync
Oracle Sync
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
REST API Sync
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Title
Mandatory. Input a name for your data sync
MS SQL Sync
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory
Title
Mandatory. Input a name for your data sync
Polling Event Sync
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce objects are database tables that permit you to store data that's specific to an organization. Salesforce objects are of two types:
Standard Objects: Standard objects are the kind of objects that are provided by salesforce.com such as users, contracts, reports, dashboards, etc.
Custom Objects: Custom objects are those objects that are created by users. They supply information that's unique and essential to their organization. Custom objects are the heart of any application and provide a structure for sharing data.
The Salesforce Object (Bulk API) source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Salesforce Bulk API
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Salesforce Object (Bulk API)
Object
Mandatory. The name of your Salesforce Object.
Auth URL
Mandatory. The URL that issues your Salesforce auth token.
Client ID
Mandatory. The encrypted Client ID to connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you.
Client Secret
Mandatory. The encrypted Client Secret for the above Client ID. The Connections UI will automatically encrypt this value for you.
Username
Mandatory. The encrypted Username of an account that can connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you.
Password
Mandatory. The encrypted Password associated with the above account that can connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you.
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
## Overview
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce Platform Events are secure and scalable messages that contain data. Publishers push out event messages that subscribers receive in real time.
The Salesforce Platform Event source supports real-time syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Salesforce Platform Event
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Salesforce Platform Event
To set up a real-time sync, you must configure your Listener values. You can do so through the Connections UI.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Configuration table.
Reset behaviour
Auto Offset Reset
Earliest, Latest or None. In the case where the listener is started and either there is no last message ID, or when the last message ID is invalid (due to it being deleted or it's just a new listener), it will use this column as a fallback to determine where to start reading events from.
None
Topic JSON
The below table can be used to help create your Topic JSON needed to set up a real-time sync.
Name
Mandatory. The name of the Platform Event, as it appears in Salesforce, that you want to subscribe to.
Notification__e
Example Topic JSON
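Since the topic only needs the Platform Event's name, a minimal sketch looks like this (Notification__e is the example event from the table above):

```json
{
  "Name": "Notification__e"
}
```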
Connection attributes
The below table can be used to help create your Connection Attributes JSON needed to set up a real-time sync.
ApiVersion
Mandatory. Your Salesforce API Version. Note that this needs to be an exact match; for instance, 47.0 can't be written as simply 47.
47.0
GrantType
This value should be set to password.
password
ClientId
The encrypted Salesforce Client ID. You can encrypt this value using the Cinchy CLI.
Bn8UmtiLydmYQV6//qCL5dqfNUMhqchdk959hu0XXgauGMYAmYoyWN8FD+voGuMwGyJa7onrc60q1Hu6QFsQXHVA==
ClientSecret
The encrypted Salesforce Client Secret. You can encrypt this value using the Cinchy CLI.
DyU1hqde3cWwkPOwK97T6rzwqv6t3bgQeCGq/fUx+tKI=
Username
The encrypted Salesforce username. You can encrypt this value using the Cinchy CLI.
dXNlcm5hbWVAZW1haWwuY29t
Password
The encrypted Salesforce password. You can encrypt this value using the Cinchy CLI.
cGFzc3dvcmRwYXNzd29yZA==
InstanceAuthUrl
The authorization URL of the Salesforce instance.
https://login.salesforce.com/services/oauth2/token
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
## Next steps
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
If more than one listener is needed for a real-time sync, configure it/them via the Listener Config table.
To run a real-time sync, enable your Listener from the Execution tab.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
Snowflake enables data storage, processing, and analytic solutions.
The Snowflake source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
Snowflake Sync
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
Snowflake
Connection String
Mandatory. The encrypted connection string used to connect to your Snowflake instance. Unencrypted example: account=wr38353.ca-central-1.aws;user=myuser;password=mypassword;db=CINCHY;schema=PUBLIC
Object
Mandatory. Select either Table or Query.
Table or Query.
Table
Appears if Object = Table. The name of the Table in Snowflake that you wish to sync.
Query
Appears if Object = Query. A SELECT statement query that will be used to fetch your data.
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Open Database Connectivity (ODBC) is a standard application programming interface (API) designed to unify access to SQL databases. An ODBC query allows the extraction of specific information sets from those databases.
The ODBC Query source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
ODBC Query Sync
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
ODBC
Connection String
Mandatory. The Connection String to connect to your ODBC Driver. The Connections UI will encrypt this value.
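For illustration, an unencrypted ODBC connection string typically takes a driver-specific shape like the following (all values are placeholders):

```
Driver={ODBC Driver 17 for SQL Server};Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;
```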
Query
This should be a SELECT statement indicating the data you want to sync out of your ODBC connection.
Select * from dbo.employees
Test Connection
You can use the "Test Connection" button to ensure that your credentials are properly configured to access your source.
If configured correctly, a "Connection Successful" pop-up will appear.
If configured incorrectly, a "Connection Failed" pop-up will appear along with a link to the applicable error logs to help you troubleshoot.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP allows applications running on different operating systems to communicate using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
The SOAP 1.2 Web Service source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Namespace Value
Optional. Review our documentation on Variables for more information about this field.
Optional. Review our documentation on Variables for more information about this field.
Optional. Review our documentation on Variables for more information about this field.
Earliest will start reading from the beginning of the queue (when the CDC was enabled on the table). This might be a suggested configuration if your use case is recoverable or re-runnable and if you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed. This is the typical configuration. None won't read or start reading any events. You are able to switch between Auto Offset Reset types after your initial configuration through the process outlined here.
Optional. Review our documentation on Variables for more information about this field.
Mandatory. The encrypted connection string used to connect to your Snowflake instance. You can review Snowflake's Connection String guide and parameter descriptions here.
Optional. Review our documentation on for more information about this field.
Please see here for example Connection Strings.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Configure your Destination.
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Source
Mandatory. Select your source from the drop down menu.
SOAP 1.2 Web Service
authType
Mandatory. Select the type of authentication you wish to use in this sync: None; WSSE, which allows you to use a Username and Password to authenticate via a WS-Security SOAP envelope header; or Basic, which allows you to use a Username and Password to authenticate via a basic auth header.
Basic
Use Password Digest
The password digest is a cryptographic hash of the password and timestamp. This parameter should only be used in conjunction with a WSSE authType, and when the Password Type for your auth is **PasswordDigest**. If neither of those applies, leave this value unchecked.
Request Timeout
Mandatory. You can use this field to set a timeout, in milliseconds, for your request. There is no maximum value; the minimum should be greater than 0. The default value is 100 milliseconds.
2000
Endpoint
Mandatory. This field should contain your SOAP 1.2 Web Service API endpoint.
Has Mtom Response
Set this to true if the SOAP API response contains an attachment outside of the response message. See here for an example.
Record Xpath
Mandatory. The XPath used to select all records you want to extract from the SOAP response. The path should point to the XML element wrapping the column data. XPath (XML Path Language) uses a non-XML syntax to provide a flexible way of addressing (pointing to) different parts of an XML document. This value should start with '//' and be followed by the tag name of the data. You can use http://xpather.com/ to help work out the XPath.
Envelope Namespace
The namespace prefix to use for the SOAP request elements. For example, setting the value to "foo" would result in the soap request being prefixed with the "foo" namespace.
"foo"
Namespaces - Name
The name of your SOAP namespace tags in your request and response. This value appears as "soap" in the snippet below.
These should be the values immediately after "xmlns:".
"soap"
Namespaces - Value
The URL describing this namespace in the response. In the below snippet, this value is "http://www.dataaccess.com/webservicesserver/".
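A minimal reconstruction of such a response envelope (body contents elided), where "soap" is the namespace name and the URL is its value:

```xml
<soap:Envelope xmlns:soap="http://www.dataaccess.com/webservicesserver/">
  <soap:Body>
    ...
  </soap:Body>
</soap:Envelope>
```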
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
connectionString
The connection string for your source
"87E4lvPf83gLK8eKapH6Y0YqIFSNbFlq62uN9487"
Object
The type of source object
"Table"
Table
The name of your source object (in this case a table)
"Employees"
Column Name
The name(s) of your source column(s)
"name"
dataType
The data type of your source column
"Text"
isMandatory
Whether the column is mandatory or not
"false"
validateData
Whether the column data needs to be validated or not
"false"
Title
Mandatory. Input a name for your data sync
SOAP Sync
Variables
Optional. Review our documentation on Variables here for more information about this field.
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
SAP SuccessFactors solutions are cloud-based HCM software applications that support core HR and payroll, talent management, HR analytics and workforce planning, and employee experience management.
The SAP SuccessFactors source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Title
Mandatory. Input a name for your data sync
SAP SuccessFactors
Variables
Permissions
Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory.
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
Source
Mandatory. Select your source from the drop down menu.
SAP SuccessFactors
API Key
Mandatory. The encrypted API Key needed to connect to your SuccessFactors entity. The Connections UI will automatically encrypt this value for you.
User ID
Mandatory. A User ID with access to connect to your SuccessFactors entity.
Company ID
Mandatory. The Company ID associated with the above User ID and with access to connect to your SuccessFactors entity.
SAML Assertion URL
Mandatory. The URL used for SAML assertion.
OAUTH Token URL
Mandatory. The URL that issues your OAuth token.
Private Key
Mandatory. The key to connect to your SuccessFactors entity. The Connections UI will automatically encrypt this value for you.
OData API URL
Mandatory. The URL of the OData API.
Entity
Mandatory. The name of your SuccessFactors entity.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Name
Mandatory. The name of your column as it appears in the source.
Name
Alias
Optional. You may choose to use an alias on your column so that it has a different name in the data sync.
Data Type
Mandatory. The data type of the column values.
Text
Description
Optional. You may choose to add a description to your column.
Select Show Advanced for more options for the Schema section.
Mandatory
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation"
If just Validated is checked on a column, then all rows are synced.
Validate Data
If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected
If just Validated is checked on a column, then all rows are synced.
Trim Whitespace
Optional if data type = text. For Text data types, you can choose whether to trim the whitespace.
Max Length
Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log).
You can choose to add in a Transformation > String Replacement by inputting the following:
Pattern
Mandatory if using a Transformation. The pattern for your string replacement.
Replacement
What you want to replace your pattern with.
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Actions.
Add in your Post Sync Scripts, if required.
Click Jobs > Start a Job to begin your sync.
Optional. Review our documentation on Variables for more information about this field.