This page highlights a few example XML configs that you can review when setting up your own Cinchy Event Broker/CDC data source.
You can review the source only example, the full example that shows both source and destination, and the listener config example.
The example below shows what the source parameters look like in XML.
Example Use Case: You want to set up a real-time sync between two Cinchy tables so that any time specific data is added, updated, or deleted from Table A, it gets propagated to Table B. As long as you enable change notifications on your Cinchy table, you can do so by setting up a data sync and listener config with your source as the Cinchy Event Broker/CDC.
The following shows the Cinchy CDC listener config Topic and Connection Attributes as they would be set for the real-time sync example above to work.
This page highlights a few example XML configs that you can review when setting up your own Cinchy Table data source.
You can review the source only example or the full example that shows both source and destination.
The example below shows what the source parameters look like in XML.
Example Use Case: You want to set up a batch sync, that you can run when needed, between a Cinchy Table and a MongoDB Collection. This sync will push out Client Name and Customer Number information.
Cinchy queries are commonly used data sync sources that leverage the platform's Saved Query functionality. For more on creating Saved Queries, please review the documentation here.
Example Use Case: You want to set up a batch sync between a Cinchy Query and a Cinchy Table. Your query polls for any unapproved timesheets, out of office requests, or sick hours and, if found, adds them to an "Open Approval Tasks" table.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
5. Click Jobs > Start a Job to begin your sync.
A binary file is a computer file that is not a text file, and whose content is in a binary format consisting of a series of sequential bytes, each of which is eight bits in length.
You can use binary files from a Local upload, Amazon S3, or Azure Blob Storage in your data syncs.
Some benefits of using binary files include:
Better efficiency via compression.
Better security through the ability to create custom encoding standards.
Faster reads and writes: because the data is stored in a raw format rather than encoded with a character encoding standard, it is quicker to read and store.
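As a rough sketch of the efficiency point (the values and layout here are illustrative only), packing numbers as raw bytes can take far less space than their text form:

```python
import struct

values = [123456789, 987654321, 555555555, 111111111]

# Text form: comma-separated ASCII digits
as_text = ",".join(str(v) for v in values).encode("utf-8")

# Binary form: four little-endian 32-bit integers, 4 bytes each
as_binary = struct.pack("<4i", *values)

print(len(as_text), len(as_binary))  # 39 vs 16 bytes
```

The binary form is also fixed-size per value, which is part of why it parses faster: no delimiter scanning or digit-by-digit conversion is needed.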
Copper is a Customer Relationship Management (CRM) software. Copper is a tool focused on automation and simplicity, most known for its Google Workspace integration.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
Dynamics 2015 is a legacy CRM predecessor to Microsoft Dynamics 365. Mainstream end of life support finished in January 2020, with extended end of life support finishing in January 2025.
Amazon DynamoDB is a managed NoSQL database service that is offered by Amazon as part of the AWS portfolio.
A fixed width file is a file that has a specific format which allows for the saving of information in an organized fashion. The data is arranged in rows and columns, with one entry per row. Each column has a fixed width, specified in characters, which determines the maximum amount of data it can contain. No delimiters are used to separate the fields in the file.
Advantages of using a fixed width file include:
It is a very compact representation of your data
It is fast to parse because every field is in the same place in every line
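Because every field occupies a known position, a parser can slice each record directly; a minimal sketch with made-up column widths (name: 10, department: 5, hire date: 12):

```python
# Hypothetical layout: name (cols 0-9), dept (10-14), hire date (15-26)
line = "Jane Doe  ENG  2021-03-15  "

name = line[0:10].rstrip()    # strip the space padding
dept = line[10:15].rstrip()
hired = line[15:27].rstrip()

print(name, dept, hired)
```

No delimiter search is needed; the slice offsets are the entire file format.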
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time for your key use cases.
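The key-to-partition mapping described above can be sketched as hash-then-modulo. This is a simplification (Kafka clients default to murmur2 hashing of the key), but the principle is the same: events with the same key always land on the same partition, preserving per-key ordering.

```python
import zlib

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Hash the event key and reduce it modulo the partition count,
    # so every event with the same key lands on the same partition.
    return zlib.crc32(key.encode("utf-8")) % NUM_PARTITIONS

keys = ["order-1", "order-2", "order-1"]
partitions = [partition_for(k) for k in keys]
```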
LDAP (Lightweight Directory Access Protocol) is a mature, flexible, and well-supported standards-based protocol that enables anyone to locate data, whether on the public internet or on a corporate intranet.
Common uses of LDAP include when:
A single piece of data needs to be found and accessed regularly;
Your organization has a lot of smaller data entries;
Your organization wants all smaller pieces of data in one centralized location, without needing extensive organization between the entries.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability. Data changes in Cinchy (CDC) can be used to trigger a data sync from a MongoDB data source to a specified target. The attributes of the CDC Event are available to use as parameters within the Data Source Definition to narrow the scope of the request, e.g. a lookup.
Open Database Connectivity (ODBC) is a standard API for accessing database management systems (DBMS).
ODBC is the database portion of the Microsoft Windows Open Services Architecture (WOSA), which is an interface that allows Windows-based desktop applications to connect to multiple computing environments without rewriting the application for each platform.
Oracle Database is a relational database management system, commonly used for running online transaction processing, data warehousing and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).
Apache Parquet is an open source data file format built to handle flat columnar storage data formats. Parquet operates well with complex data in large volumes and is known for its both performant data compression and its ability to handle a wide variety of encoding types.
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
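A sketch of the read/modify pattern using Python's standard library; the endpoint URL is hypothetical, and the requests are constructed but not sent:

```python
import json
import urllib.request

# Read a resource: GET returns its representation in the negotiated format
read = urllib.request.Request(
    "https://api.example.com/items/42",          # hypothetical endpoint
    headers={"Accept": "application/json"},
    method="GET",
)

# Modify the same resource: PUT carries the new representation in the body
payload = json.dumps({"name": "Widget"}).encode("utf-8")
update = urllib.request.Request(
    "https://api.example.com/items/42",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
```

The verb (GET, PUT, POST, DELETE) tells the server what to do with the resource identified by the URL; the headers negotiate the representation format.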
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce objects are database tables that permit you to store data that is specific to an organization. Salesforce objects are of two types:
Standard Objects: Standard objects are the kind of objects that are provided by salesforce.com such as users, contracts, reports, dashboards, etc.
Custom Objects: Custom objects are those objects that are created by users. They supply information that is unique and essential to their organization. They are the heart of any application and provide a structure for sharing data.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce Platform Events are secure and scalable messages that contain data. Publishers push out event messages that subscribers receive in real time.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Push Topic events provide a secure and scalable way to receive notifications for changes to Salesforce data that match a SOQL query you define.
You can use PushTopic events to:
Receive notifications of Salesforce record changes, including create, update, delete, and undelete operations.
Capture changes for the fields and records that match a SOQL query.
Receive change notifications for only the records a user has access to based on sharing rules.
Limit the stream of events to only those events that match a subscription filter.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
Snowflake enables data storage, processing, and analytic solutions.
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP allows applications running on different operating systems to communicate using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
SAP SuccessFactors solutions are cloud-based HCM software applications that support core HR and payroll, talent management, HR analytics and workforce planning, and employee experience management.
Cinchy Tables are commonly used data sync sources.
Example Use Case: You want to set up batch sync between a Cinchy Table and Hubspot to sync important sales analytics information. You can do so by using the Cinchy Table as your source, and a REST API as your target.
The Cinchy Table source supports batch syncs. To do a real-time sync from a Cinchy Table, you would use the Cinchy Event Broker/CDC Source instead.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
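A Pattern/Replacement pair behaves like a string substitution applied to each value before it is written to the target. A rough Python equivalent, assuming regex semantics (the pattern and sample values are invented for illustration; check the product documentation for whether the pattern is treated as a regular expression or a literal string):

```python
import re

pattern = r"\bInc\.?$"        # Pattern: a trailing "Inc" or "Inc."
replacement = "Incorporated"  # Replacement

rows = ["Acme Inc.", "Globex Inc", "Initech"]
transformed = [re.sub(pattern, replacement, value) for value in rows]
print(transformed)
```

Multiple String Replacements would apply in sequence, each operating on the output of the previous one.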
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
5. Click Jobs > Start a Job to begin your sync.
The Cinchy Event Broker/CDC (Change Data Capture) source allows you to capture data changes on your table and use these events in your data syncs.
Example Use Case: To mitigate the labour and time costs of hosting information in a silo, as well as remove the costly integration tax plaguing your IT teams, you want to connect your legacy systems into Cinchy to take advantage of the platform's sync capabilities. To do so, you want to set up a real-time sync between a Cinchy Table and Salesforce that updates Salesforce any time data is added, updated, or deleted on the Cinchy side. As long as you enable change notifications on your Cinchy table, you can do so by setting up a data sync and listener config with your source as the Cinchy Event Broker/CDC.
The Cinchy Event Broker/CDC supports both batch syncs and real-time syncs (most common).
Remember to set up your Listener Config if you are creating a real-time sync.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
5. To run a real-time sync, set up your Listener Config and enable it to begin your sync.
The following sections outline more information about specific parameters you can find on this source.
The Run Query parameter is available as an optional value for the Cinchy Event Broker/CDC connector. If set to true, it executes a saved query, with the record that triggered the event supplied as a parameter; the query results, rather than the table itself, then become the sync source.
You are able to use any parameters defined in your listener config.
In the below example, we have a data sync using the Event Broker/CDC as a source. Our Listener Config has been set with the CinchyID attribute (Image 4).
We can enable the Run Query function to use the saved query "CDC Product Ticket Datestamps" as our source instead (Image 5). If we change the data from Record A in our source table to trigger our event, the Query Parameters below show that the Cinchy ID of Record A will be used in the query. This query is now our source.
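The parameter flow can be sketched as a simple substitution: the attribute captured by the listener config (here the Cinchy ID of the changed record) is bound into the saved query, and the query's result set becomes the source. The query text and parameter name below are invented for illustration:

```python
# Hypothetical saved query with one listener-config parameter
saved_query = "SELECT * FROM [Product].[Tickets] WHERE [Cinchy Id] = @cinchyid"

def bind_parameters(query: str, params: dict) -> str:
    # Replace each @name placeholder with the value captured from the CDC event
    for name, value in params.items():
        query = query.replace(f"@{name}", str(value))
    return query

event_attributes = {"cinchyid": 12345}  # taken from the triggering event
print(bind_parameters(saved_query, event_attributes))
```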
It would appear in the data sync config XML as follows:
This page highlights a few example XML configs that you can review when setting up your own Cinchy Query data source.
You can review the source only example or the full example that shows both source and destination.
The example below shows what the source parameters look like in XML.
Example Use Case: You want to set up a batch sync between a Cinchy Query and a Cinchy Table. Your query polls for any unapproved timesheets, out of office requests, or sick hours and, if found, adds them to an "Open Approval Tasks" table.
The DB2 source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Example Use Case: You have customer information currently sitting in the Dynamics 2015 CRM software. You want to sync this data into Cinchy through a batch sync in order to liberate your data from the silo.
The Dynamics 2015 source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
IBM Db2 (formerly Db2 for LUW) is a relational database that delivers advanced data management and analytics capabilities for transactional workloads.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
Dynamics 2015 is a legacy CRM predecessor to Microsoft Dynamics 365. Mainstream end of life support finished in January 2020, with extended end of life support finishing in January 2025.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
Parameter | Description | Example
---|---|---
Title | Mandatory. Input a name for your data sync. | Open Approval Tasks
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Cinchy Query
Domain | Mandatory. The domain where your source query resides. | Compliance
Table Name | Mandatory. The name of your source query. | Open Tasks
Timeout | Optional. The timeout, in number of seconds, for your source query. If not entered this value will default to 30. | 120
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Parameter | Description | Example
---|---|---
Name | Mandatory. The name of your column as it appears in the source query. | Owner Cinchy ID
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
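The interaction of the Mandatory and Validate Data flags can be summarized as a small decision function. This is a sketch of my reading of the rules above, assuming "empty" means null or blank text:

```python
def evaluate_row(value, mandatory: bool, validated: bool) -> str:
    """Classify one row based on a single column's flags and value."""
    empty = value is None or str(value).strip() == ""
    if mandatory and validated and empty:
        return "rejected"
    if mandatory and not validated and empty:
        # Row still syncs, but the execution log records the failure
        return "synced (Mandatory Rule Violation logged)"
    return "synced"

print(evaluate_row(None, mandatory=True, validated=True))  # rejected
print(evaluate_row("ok", mandatory=True, validated=True))  # synced
```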
Parameter | Description | Example
---|---|---
Title | Mandatory. Input a name for your data sync. | Cinchy to Hubspot
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Cinchy Table
Domain | Mandatory. The domain where your source table resides. | Product
Table Name | Mandatory. The name of your source table. | Q1 Sales
Suppress Duplicate Errors | Optional. This field determines whether duplicate keys in the source are to be reported as warnings (unchecked) or ignored (checked). The default is unchecked. Checking this box can be useful in the event that you only want to load the distinct values from a collection of columns in the source. |
Parameter | Description | Example
---|---|---
Name | Mandatory. The name of your column as it appears in the source. | Name
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example
---|---|---
Title | Mandatory. Input a name for your data sync. | Cinchy People -> Salesforce
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Cinchy Event Broker/CDC
Run Query | Optional. If true, executes a saved query, using the Cinchy ID of the changed record as a parameter. These query results are then used as the sync source, rather than the Cinchy table where the data change originated. Review Appendix A for further details on this feature. |
Path to Iterate | Optional. For the Cinchy Event Broker/CDC, the Path to Iterate function can be used to provide the JSON path to the array of items that you want to sync (provided that your event message contains JSON values). |
Parameter | Description | Example
---|---|---
Name | Mandatory. The name of your column as it appears in the source. | Name
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example
---|---|---
Name | Mandatory. The name of your column as it appears in the source. This should be in all caps. EXCEPTION: If you chose "query" as your object and use double quotes around the column names, then this value should match that casing. | NAME
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example
---|---|---
Source | Mandatory. Select your source from the drop down menu. | Dynamics 2015
Username | Mandatory. The username of the Dynamics 2015 account that has access to the data you want to sync. | RStewart
Password | Mandatory. The password for the above Dynamics 2015 user account. | ******
Domain | Mandatory. The domain name of the Dynamics 2015 server you are connecting to. | Customer
URL | Mandatory. The URL for the Dynamics 2015 server you are connecting to. |
Entity | Mandatory. The name of the entity you want to sync as it appears in your Dynamics 2015 CRM. | Companies

Parameter | Description | Example
---|---|---
Name | Mandatory. The name of your column as it appears in the source. | Name
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example
---|---|---
Title | Mandatory. Input a name for your data sync. | DB2 to Cinchy
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | DB2
Connection String | Mandatory. The encrypted connection string used to access your DB2 database. The Connection UI will automatically encrypt this value for you. |
Object | Mandatory. The type of object you want to use as your data sync source. This will be either Table or Query. | Table
Table | Appears when "Table" is selected as the Object Type. The name of your table as it appears in your DB2 database. | dbo.employees
Query | Appears when "Query" is selected as the Object Type. This should be a SELECT statement indicating the data you want to sync out of your DB2 database. | Select * from dbo.employees
Parameter | Description | Example
---|---|---
Title | Mandatory. Input a name for your data sync. | Dynamics 2015 to Cinchy
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
1. Overview
Copper is a Customer Relationship Management (CRM) software. Copper is a tool focused on automation and simplicity, most known for its Google Workspace integration.
Example Use Case: You have customer information currently sitting in the Copper CRM software. You want to sync this data into Cinchy through a batch sync in order to liberate your data from the silo.
The Copper source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab.
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
5. Click Jobs > Start a Job to begin your sync.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
Example Use Case: You have customer information currently sitting in the Dynamics CRM software. You want to sync this data into Cinchy through a batch sync in order to liberate your data from the silo.
The Dynamics source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
1. Configure your Destination.
2. Define your Sync Behaviour.
3. Add in your Post Sync Scripts, if required.
4. Define your Permissions.
5. Click Jobs > Start a Job to begin your sync.
Amazon DynamoDB is a managed NoSQL database service that is offered by Amazon as part of the AWS portfolio.
Example Use Case: You currently use DynamoDB to store metrics on product use and growth, but being stuck in the DynamoDB silo means that you can't easily use this data across a range of business use cases or teams. You can use a batch sync in order to liberate your data into Cinchy.
The DynamoDB source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
Fixed width text files are special cases of text files where the format is specified by column widths, pad character and left/right alignment. Column widths are measured in units of characters. For example, if you have data in a text file where the first column always has exactly 10 characters, and the second column has exactly 5, the third has exactly 12 (and so on), this would be categorized as a fixed width text file.
If a text file follows the rules below it is a fixed width text file:
Each row (paragraph) contains one complete record of information.
Each row contains one or many pieces of data (also referred to as columns or fields).
Each data column has a defined width specified as a number of characters that is always the same for all rows.
The data within each column is padded with spaces (or any character you specify) if it does not completely use all the characters allotted to it (empty space).
Each piece of data can be left or right aligned, meaning the pad characters can occur on either side.
Each column must consistently use the same number of characters, same pad character and same alignment (left/right).
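For instance, using the 10/5/12 character column widths from the example above, a left-aligned, space-padded fixed width file would look like this (the names and cities are purely illustrative data):

```
Anne Lee  42   Toronto
Raj Patel 7    New York
```

Here "Anne Lee" is padded with spaces to fill exactly 10 characters, the ID fills exactly 5, and the city fills exactly 12, so every field starts at the same character position on every row.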
Example Use Case: You have a fixed width file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Fixed Width File source supports batch syncs.
The Fixed Width File source does not support Geometry, Geography, or Binary data types.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
A binary file is a computer file that is not a text file, and whose content is in a binary format consisting of a series of sequential bytes, each of which is eight bits in length.
You can use binary files from a Local upload, Amazon S3, or Azure Blob Storage in your data syncs.
Some benefits of using binary files include:
Better efficiency via compression.
Better security, through the ability to create custom encoding standards.
Better speed: since the data is stored in a raw format and isn't encoded using any character encoding standard, it's faster to read and store.
Example Use Case: You have a binary file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Binary File source supports batch syncs.
The Binary File source does not support Geometry or Geography data types.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
A delimited file is a sequential file with column delimiters. Each delimited file is a stream of records, which consists of fields that are ordered by column. Each record contains fields for one row. Within each row, individual fields are separated by column delimiters.
Example Use Case: You have a delimited file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Delimited File source supports batch syncs.
The Delimited File source does not support Geometry, Geography, or Binary data types.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
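Put together, the source and destination portions of such a sync can be sketched in XML. This sketch follows the general shape of Cinchy's published batch sync examples, but the element and attribute names here should be treated as illustrative assumptions; the config generated by the Connections UI is the authoritative reference:

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Illustrative sketch only: element/attribute names are assumptions
     modelled on published examples, not a verbatim schema. -->
<BatchDataSyncConfig name="Employee Sync" version="1.0.0" xmlns="http://www.cinchy.co">
  <DelimitedDataSource source="PATH" path="@Filepath" delimiter=","
                       textQualifier="&quot;" headerRowsToIgnore="1" encoding="UTF8">
    <Schema>
      <Column name="Name" dataType="Text" trimWhitespace="true" />
      <Column name="Employee Id" dataType="Number" isMandatory="true" validateData="true" />
    </Schema>
  </DelimitedDataSource>
  <CinchyTableTarget domain="HR" table="Employees">
    <ColumnMappings>
      <ColumnMapping sourceColumn="Name" targetColumn="Name" />
      <ColumnMapping sourceColumn="Employee Id" targetColumn="Employee Id" />
    </ColumnMappings>
  </CinchyTableTarget>
</BatchDataSyncConfig>
```

The "HR" domain, "Employees" table, and both column names are placeholders invented for this example.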
Microsoft Excel is a commonly used spreadsheet program for managing and analyzing numerical data. You can use Microsoft Excel as a source for your data syncs by following the instructions below.
Example Use Case: You have an Excel spreadsheet that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Excel source supports batch syncs.
The Excel source does not support Binary data types.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Copper to Cinchy |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
Source | Mandatory. Select your source from the drop down menu. | Copper |
Entity | Mandatory. The name of the entity you want to sync as it appears in your Copper CRM. | Companies |
Access Token | Mandatory. An encrypted version of your Copper API Key. The Connections UI will automatically encrypt this value for you. | "e98HGU72Lp0-fd34" |
User Email | Mandatory. The encrypted user email associated with the API key used above. The Connections UI will automatically encrypt this value for you. | "e98HGU72Lp0-fd34hf990b4kLL23" |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |
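As a sketch of how the Pattern and Replacement values surface in a config, a string replacement hangs off its schema column. The element names below are assumptions based on the pattern of Cinchy's example configs, and the "Account Code" column and "legacy-" prefix are hypothetical:

```xml
<!-- Illustrative sketch: strips a hypothetical "legacy-" prefix
     from every value in a hypothetical "Account Code" column. -->
<Column name="Account Code" dataType="Text">
  <Transformations>
    <StringReplacement pattern="legacy-" replacement="" />
  </Transformations>
</Column>
```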
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Dynamics to Cinchy |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
Source | Mandatory. Select your source from the drop down menu. | Dynamics |
Entity | Mandatory. The name of the entity you want to sync as it appears in your Dynamics CRM. | Companies |
Service URL | Mandatory. The Web API URL for your instance. | |
Redirect URI | Mandatory. The Redirect URI from the Azure AD app registration. | |
Client ID | Mandatory. The encrypted Client ID found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you. | |
Client Secret | Mandatory. The encrypted Client Secret found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you. | |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Product Metrics |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
Source | Mandatory. Select your source from the drop down menu. | DynamoDB |
Entity | Mandatory. The name of the entity you want to sync as it appears in DynamoDB. | Metrics |
AWS Access Key (Client ID) | Mandatory. The encrypted AWS Access Key (Client ID) used to access your DynamoDB. | |
AWS Secret (Client Secret) | Mandatory. The encrypted AWS Secret (Client Secret) used to access your DynamoDB. | |
AWS Region | Mandatory. The name of the region for your AWS instance. | US-East-1 |
Username | Mandatory. The name of a user with access to connect to your DynamoDB server. | |
Password | Mandatory. The password associated with the above user. | |
AuthType | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured to have at least read access to the source, and that the Connections pods' role has permission to assume the role specified in the data sync config. | |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |
(Sync) Source | Mandatory. Select your source from the drop down menu. | Fixed Width File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Rows to Ignore | Mandatory. The number of records from the bottom of the file to ignore | 0 |
Encoding | Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII. |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Parse Content By (Only for Standard Columns) | Binary File sources have a unique, mandatory parameter for Standard Columns: Parse Content By, which defines how you want to parse your content. Choose from one of its three options. | Byte Length |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Title | Mandatory. Input a name for your data sync | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |
Title | Mandatory. Input a name for your data sync | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |
(Sync) Source | Mandatory. Select your source from the drop down menu. | Binary File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Header Lines to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Lines to Ignore | Mandatory. The number of records from the bottom of the file to ignore. | 0 |
Encoding | Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII. |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
AuthType | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured to have at least read access to the source, and that the Connections pods' role has permission to assume the role specified in the data sync config. |
Title | Mandatory. Input a name for your data sync | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |
(Sync) Source | Mandatory. Select your source from the drop down menu. | Delimited File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Delimiter | Mandatory. The delimiter character used to separate the text strings. Use U+#### syntax (e.g. U+0001) for Unicode characters. | , |
Text Qualifier | Mandatory. The text qualifier character, which is used in the event that the delimiter is contained within the row cell. Typically, the text qualifier is a double quote. | " |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). If you use both useHeaderRecord="true" and HeaderRowsToIgnore = 1, two rows will be ignored. To get the results you want, use: one row as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 0; two rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 1; three rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 2. | 1 |
Encoding | Optional. The encoding of the file. This defaults to UTF8, but also supports UTF8_BOM, UTF16, and ASCII. |
Use Header Record | Optional. Check this box to use the Header record to match schema. If set to true, fields not present in the record will default to null. |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
AuthType | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured to have at least read access to the source, and that the Connections pods' role has permission to assume the role specified in the data sync config. |
Title | Mandatory. Input a name for your data sync | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |
(Sync) Source | Mandatory. Select your source from the drop down menu. | Microsoft Excel |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Sheet Name | Mandatory. The name of the sheet that you want to sync. | Employee Info |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Rows to Ignore | Mandatory. The number of records from the bottom of the file to ignore. | 0 |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (Ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
AuthType | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured to have at least read access to the source, and that the Connections pods' role has permission to assume the role specified in the data sync config. |
Apache AVRO was added as an inbound data format in Cinchy v5.3.
Apache AVRO (inbound) is a data format with added integration with the Kafka Schema Registry, which helps enforce data governance within a Kafka architecture.
Avro is an open source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro stores both the data definition and the data together in one message or file. It stores the data definition in JSON format, making it easy to read and interpret, while the data itself is stored in binary format, making it compact and efficient.
Some of the benefits for using AVRO as a data format are:
It is compact
It has a direct mapping to/from JSON
It's fast
It has bindings for a wide variety of programming languages.
For more about AVRO and Kafka, read the documentation here.
To set up the Apache AVRO connection to a Kafka Schema Registry, you will need to configure your Listener Configs table with the below specified attributes.
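As a hedged sketch (the attribute names here are assumptions and the topic name is a placeholder; confirm both against the Listener Configs documentation for your version), the Topic column of such a Listener Config row might hold JSON along these lines:

```json
{
  "topicName": "employee-events",
  "messageFormat": "AVRO"
}
```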
In this example, we are syncing from a Kafka Topic source to a Cinchy Table target.
We want to sync the following data from Kafka and map it to the appropriate column in the "Sync Target 2" table in the "Kafka Sync" domain.
This is what the Connections UI will look like with the aforementioned example parameters and data.
Your source tab should be set to "Kafka Topic" and have the following information (Image 1):
Your destination tab should be set to "Cinchy Table", and have the following information (Image 2):
Domain: The domain where your destination table resides. In our example we are using the "Kafka Sync" domain.
Table: The name of your destination table. In our example we are using the "Sync Target 2" table.
Degree of Parallelism: This is the number of parallel batch inserts and updates that can be run. Set this to 1 for our example.
Under the Sync Behaviour tab, we want to use the following parameters:
Synchronization Pattern: Full File
Sync Key Column Reference Name: Employee Id
New Record Behaviour: Insert
Dropped Record Behaviour: Delete
Changed Record Behaviour: Update
The following code is what the XML for our example connection would look like:
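Below is a hedged reconstruction built from the example values above. The element and attribute names follow the general shape of Cinchy's sample configs but are assumptions, as is the non-key "Name" column; only the domain, table, sync key, parallelism, and record behaviours come from the example itself:

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Illustrative sketch: Kafka Topic source syncing to the "Sync Target 2"
     table in the "Kafka Sync" domain, keyed on Employee Id. Names and
     attributes are assumptions, not a verbatim schema. -->
<BatchDataSyncConfig name="Kafka to Sync Target 2" version="1.0.0" xmlns="http://www.cinchy.co">
  <KafkaTopicDataSource>
    <Schema>
      <Column name="Employee Id" dataType="Number" isMandatory="true" validateData="true" />
      <Column name="Name" dataType="Text" />
    </Schema>
  </KafkaTopicDataSource>
  <CinchyTableTarget domain="Kafka Sync" table="Sync Target 2" degreeOfParallelism="1"
                     syncPattern="FullFile" newRecordBehaviour="Insert"
                     droppedRecordBehaviour="Delete" changedRecordBehaviour="Update">
    <SyncKey>
      <SyncKeyColumnReference name="Employee Id" />
    </SyncKey>
    <ColumnMappings>
      <ColumnMapping sourceColumn="Employee Id" targetColumn="Employee Id" />
      <ColumnMapping sourceColumn="Name" targetColumn="Name" />
    </ColumnMappings>
  </CinchyTableTarget>
</BatchDataSyncConfig>
```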
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time for your key use cases.
Example Use Case: You currently use Kafka to store the metrics for user logins, but being stuck in the Kafka silo means that you can't easily use this data across a range of business use cases or teams. You can use a batch sync in order to liberate your data into Cinchy.
The Kafka Topic source supports real-time syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab.
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
To run a real-time sync, set up your Listener Config and enable it to begin your sync.
Apache Parquet is a file format designed to support fast data processing for complex data, with several notable characteristics:
1. Columnar: Unlike row-based formats such as CSV or Avro, Apache Parquet is column-oriented, meaning the values of each table column are stored next to each other, rather than those of each record.
2. Open-source: Parquet is free to use and open source under the Apache Hadoop license, and is compatible with most Hadoop data processing frameworks. To quote the project website, “Apache Parquet is… available to any project… regardless of the choice of data processing framework, data model, or programming language.”
3. Self-describing: In addition to data, a Parquet file contains metadata including schema and structure. Each file stores both the data and the standards used for accessing each record – making it easier to decouple services that write, store, and read Parquet files.
Example Use Case: You have a parquet file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Parquet source supports batch syncs.
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
Common uses of LDAP include when:
A single piece of data needs to be found and accessed regularly;
Your organization has a lot of smaller data entries;
Your organization wants all smaller pieces of data in one centralized location, and there doesn't need to be an extreme amount of organization between the data.
The LDAP source supports batch syncs.
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Please review the following considerations prior to setting up your MongoDB Collection data sync source:
We currently only support SCRAM authentication (Mongo 4.0+).
Syncs are column based. This means that you must flatten the MongoDB source document prior to sync by using a projection (See section 2: Projection (JSON Object)).
The column names used in the source must match elements on the root object, with the exception of "$" which can be used to retrieve the full document.
By default, MongoDB batch size is 101.
By default, bulk operations size is 5000.
Due to a conversion of doubles to decimals that occurs during the sync process, minor data losses may occur.
The following data types are not supported:
Binary Data
Regular Expression
DBPointer
JavaScript
JavaScript code with scope
Symbol
Min Key
Max Key
The following data types are supported with conversions:
ObjectID is supported, but converted to string
Object is supported, but converted to JSON
Array is supported, but converted to JSON
Timestamp is supported, but converted to 64-bit integers
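Because syncs are column based, nested documents need to be flattened with a projection before they can map to schema columns. A minimal sketch of an aggregation pipeline that does this (the collection shape and field names here are hypothetical, not from the examples in this document):

```json
[
  {
    "$project": {
      "_id": 0,
      "name": 1,
      "city": "$address.city",
      "zip": "$address.zip"
    }
  }
]
```

For a document like `{ "name": "Acme", "address": { "city": "Toronto", "zip": "M5V" } }`, this stage promotes the nested `address.city` and `address.zip` values to root-level elements, so schema columns named `name`, `city`, and `zip` can match them.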
The MongoDB Collection source supports batch syncs. (To enable real-time syncs with MongoDB, use the MongoDB Collection (Cinchy Event Triggered) source instead.)
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
To run a batch sync, select Jobs > Start Job.
The MongoDB Collection Data Source obtains BSON documents from MongoDB. BSON, short for Binary JSON, is a binary-encoded serialization of JSON-like documents. Like JSON, BSON supports the embedding of documents and arrays within other documents and arrays. BSON also contains extensions that allow representation of data types that aren't part of the JSON spec. For example, BSON makes a distinction between Int32 and Int64.
The following table shows how MongoDB data types are translated in Cinchy.
LDAP (Lightweight Directory Access Protocol) is a mature, flexible, and well-supported standards-based software protocol for locating data, whether on the public internet or on a corporate intranet.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which give application developers an unprecedented level of flexibility and scalability.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
To run a real-time sync (using the MongoDB Collection (Cinchy Event Triggered) source), set up your Listener Config and enable it to begin your sync.
| Parameter | Description |
|---|---|
| "topicName" | Mandatory. This is the Kafka topic name to listen for messages on. |
| "messageFormat" | Put "AVRO" if your messages are serialized in AVRO. |
| "bootstrapServers" | Mandatory. List the Kafka bootstrap servers in a comma-separated list, in the form of host:port. |
| "url" | This is required if your data follows a schema when serialized in AVRO. It is a comma-separated list of URLs for schema registry instances that are used to register or look up schemas. |
| "basicAuthCredentialsSource" | Specifies the Kafka configuration property "schema.registry.basic.auth.credentials.source" that provides the basic authentication credentials. This can be "UserInfo" or "SaslInherit". |
| "basicAuthUserInfo" | Basic Auth credentials specified in the form of username:password. |
| "sslKeystorePassword" | The client keystore (PKCS#12) password. |
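Putting the attributes above together, a Kafka Topic listener config might look roughly like this. The values are placeholders, and the exact nesting of these attributes can vary by Cinchy version; treat this as a sketch, not the exact serialized format:

```json
{
  "topicName": "user.login.metrics",
  "messageFormat": "AVRO",
  "bootstrapServers": "broker1:9092,broker2:9092",
  "url": "https://schema-registry.example.com:8081",
  "basicAuthCredentialsSource": "UserInfo",
  "basicAuthUserInfo": "registry-user:registry-password",
  "sslKeystorePassword": "changeit"
}
```

The "url", "basicAuthCredentialsSource", and "basicAuthUserInfo" attributes only apply if your messages are AVRO-serialized against a schema registry; for plain JSON messages, the first three attributes are typically sufficient.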
| Kafka Source | Cinchy Column |
|---|---|
| $.employeeId | Employee Id |
| $.name | Name |

| Column 1 (Standard Column) Parameters | Example Data |
|---|---|
| Name | $.employeeid |
| Alias | Employee Id |
| Data Type | Number |

| Column 2 (Standard Column) Parameters | Example Data |
|---|---|
| Name | $.name |
| Alias | Name |
| Data Type | Text |
| Trim Whitespace | True |

| Mapping 1 Parameters | Example Data |
|---|---|
| Source Column | Employee Id |
| Target Column | Employee Id |

| Mapping 2 Parameters | Example Data |
|---|---|
| Source Column | Name |
| Target Column | Name |

| Parameter | Description | Example |
|---|---|---|
| Title | Mandatory. Input a name for your data sync. | Website Metrics |
| Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
| Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
| Source | Mandatory. Select your source from the drop down menu. | Kafka Topic |

| Parameter | Description | Example |
|---|---|---|
| Name | Mandatory. The name of your column as it appears in the source. | Name |
| Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
| Data Type | Mandatory. The data type of the column values. | Text |
| Description | Optional. You may choose to add a description to your column. | |
| Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
| Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
| Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
| Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

| Parameter | Description |
|---|---|
| Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
| Replacement | What you want to replace your pattern with. |

| Parameter | Description | Example |
|---|---|---|
| Title | Mandatory. Input a name for your data sync. | Employee Sync |
| Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
| Parameters | Optional. Review our documentation on Parameters here for more information about this field. Since we are doing a local upload, we use "@Filepath". | |
| (Sync) Source | Mandatory. Select your source from the drop down menu. | Parquet |

| Parameter | Description | Example |
|---|---|---|
| Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source. Amazon S3: Access Key ID/Secret Access Key. Azure Blob Storage: Connection String. | Local |
| Row Group Size | Mandatory. The size of your Parquet Row Groups. Review the documentation here for more on Row Group sizing. | The recommended disk block/row group/file size is 512 to 1024 MB on HDFS. |
| Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
| Auth Type | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM" role. When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured to have at least read access to the source, and that the Connections pods' role has permission to assume the role specified in the data sync config. | |

| Parameter | Description | Example |
|---|---|---|
| Name | Mandatory. The name of your column as it appears in the source. | Name |
| Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
| Data Type | Mandatory. The data type of the column values. | Text |
| Description | Optional. You may choose to add a description to your column. | |
| Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
| Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
| Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |

| Parameter | Description |
|---|---|
| Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
| Replacement | What you want to replace your pattern with. |

| Parameter | Description | Example |
|---|---|---|
| Name | Mandatory. The name of your column as it appears in the source. | Name |
| Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
| Data Type | Mandatory. The data type of the column values. | Text |
| Description | Optional. You may choose to add a description to your column. | |
| Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
| Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
| Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
| Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |
| Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
| Replacement | What you want to replace your pattern with. | |

| Parameter | Description | Example |
|---|---|---|
| Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. | |
| Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. | |
| Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
| Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |
| Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
| Replacement | What you want to replace your pattern with. | |

| MongoDB | Cinchy | Notes |
|---|---|---|
| Double | Number | Supported |
| String | Text | Supported |
| Object | Text (JSON) | Supported |
| Array | Text (JSON) | Supported |
| Binary Data | Binary | Unsupported |
| ObjectId | Text | Supported |
| Boolean | Bool | Supported |
| Date | Date | Supported |
| Null | - | Supported |
| RegEx | - | Unsupported |
| JavaScript | - | Unsupported |
| Timestamp | Number | Supported |
| 32-bit Integer | Number | Supported |
| 64-bit Integer | Number | Supported |
| Decimal128 | Number | Supported |
| Min Key | - | Unsupported |
| Max Key | - | Unsupported |
| - | Geography | Unsupported |
| - | Geometry | Unsupported |
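As an illustration of the conversions above, consider a hypothetical source document (shown here in JSON-ish form; the document and field names are not from this guide's examples):

```json
{
  "_id": "64f1c0ffee1234567890abcd",
  "price": 19.99,
  "tags": ["sale", "new"],
  "details": { "colour": "red", "size": "M" }
}
```

Per the table, the ObjectId in `_id` lands in Cinchy as Text, `price` (a Double) as Number (subject to the double-to-decimal conversion noted earlier), and both `tags` (Array) and `details` (Object) as Text containing their JSON representation, unless you flatten them first with a projection.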
Data changes in Cinchy (CDC) can be used to trigger a data sync from a MongoDB data source to a specified target. The attributes of the CDC Event are available to use as parameters within the Data Source Definition to narrow the scope of the request, e.g. a lookup.
The MongoDB Collection (Cinchy Event Triggered) Source supports real-time syncs.
The options available to the MongoDB Collection (Cinchy Event Triggered) connector are identical to the MongoDB source connector which can be found here.
The following sections in the Source configuration of the Connections experience can reference attributes of the CDC Event as parameters:
Connection String
Database Name
Collection Name
Query
Projection
Pipeline
In Cinchy v5.6+, you can also reference attributes of the CDC Event in Calculated Columns.
Note that syncs making use of this must limit their batch size to 1.
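For instance, the Query section could reference a listener-config column to narrow the lookup to the record that triggered the event. A minimal sketch, assuming a listener-config column named Company Name (hypothetical) and a source collection with a matching "name" field:

```json
{ "name": "@CompanyName" }
```

At runtime, @CompanyName is replaced with the value from the CDC event before the query is sent to MongoDB.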
Parameters use the column name or alias as defined in the CDC Event Listener Config prefixed with an "@", e.g. @CompanyName would be the parameter name for the following Cinchy CDC listener Topic configuration.
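A sketch of such a Topic configuration follows. The table GUID is a placeholder and the exact attribute names may vary by Cinchy version; the point is that a column named "Company Name" yields the replacement parameter @CompanyName (spaces removed):

```json
{
  "tableGuid": "00000000-0000-0000-0000-000000000000",
  "fields": [
    { "column": "Cinchy Id" },
    { "column": "Company Name" }
  ]
}
```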
Parameter names are case sensitive when used in the Connection configuration. Parameter matching is performed using literal string replacements. Names should not contain spaces (spaces are automatically removed), and should have differing prefixes.
The following set of parameters will be available on every event, even if they're not present in the listener config:
@Version
@DraftVersion
@CinchyRecordType
@ApprovalState
@ModifiedBy
@Modified
@Deleted
In order to configure a MongoDB Collection (Cinchy Event Triggered) connection, a listener must be configured with an Event Connector Type of Cinchy CDC.
Set up your listener configuration for your data sync, keeping the following constraints in mind:
Column names in the listener config should not contain spaces. If they do, they will be automatically removed. E.g. a column named Company Name will become the replacement parameter @CompanyName
The replacement parameter names are case sensitive.
Column names in the listener config should not be prefixes of other column names. E.g. if you have a column called "Name", you shouldn't have another called "Name2" as the value of @Name2 may end up being replaced by the value of @Name suffixed with a "2".
| Parameter | Description | Example |
|---|---|---|
| connectionString | The connection string for your source. | "87E4lvPf83gLK8eKapH6Y0YqIFSNbFlq62uN9487" |
| Database | The name of your MongoDB database. | "test" |
| Collection | The name of your MongoDB collection. | "Article" |
| Type | The method used to retrieve your data. | "find" |
| Query | A query for retrieving your data. | This example query returns data where the price is less than $10. |
| Projection | A projection for flattening your source document. | |
| Column Name | The name(s) of your source column(s). | "id", "name", "price", "colour", "size", "stock", "$" (used to retrieve the full document), "Details" (imported both as a set of fields flattened from the projection and as JSON) |
| dataType | The data type of your source column. | "Text", "Text", "Number", "Text", "Text", "Number", "Text", "Text" |
| isMandatory | Whether the column is mandatory or not. | "false" |
| validateData | Whether the column data needs to be validated or not. | "false" |
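For example, the Query and Projection values referenced above could look like the following. The two objects are wrapped in one JSON document purely for presentation here; each value goes in its respective field, and the exact shape should be adjusted to your own collection:

```json
{
  "query": { "price": { "$lt": 10 } },
  "projection": {
    "_id": 0,
    "id": 1,
    "name": 1,
    "price": 1,
    "colour": 1,
    "size": 1,
    "stock": 1,
    "Details": 1
  }
}
```

The query restricts the sync to documents where price is less than $10, and the projection flattens the document so each listed element can map to a schema column.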
| Parameter | Description | Example |
|---|---|---|
| Title | Mandatory. Input a name for your data sync. | LDAP to Cinchy |
| Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
| Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
| Source | Mandatory. Select your source from the drop down menu. | LDAP |
| Server | Mandatory. The name of your LDAP Server Directory. | Company-1 |
| Object Category | | Internal-Metrics |
| Username | Mandatory. The name of a user who has access to the LDAP server. | |
| Password | Mandatory. The password for the above user. The Connections UI will encrypt this value. | |
| CN (Common Name) | Optional. | |
| Parameter | Description | Example |
|---|---|---|
| Title | Mandatory. Input a name for your data sync. | MongoDB to Cinchy |
| Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
| Parameters | Optional. Review our documentation on Parameters here for more information about this field. | |
| Source | Mandatory. Select your source from the drop down menu. | MongoDB Collection |
| Connection String | Mandatory. The encrypted connection string for your MongoDB instance. | Example (Default): mongodb+srv://<username>:<password>@<mongo host URI> Example (Against different database): mongodb+srv://<username>:<password>@<mongo host URI>?authSource=<authentication_db> |
| Database | The name of your MongoDB database. | Blog |
| Collection | The name of your MongoDB collection. | Article |
| Type | The method used to retrieve your data: db.collection.find() or db.collection.aggregate(). | |
| Query (JSON Object) | A query for retrieving your data. This option appears if you have selected db.collection.find(). | |
| Projection (JSON Object) | A projection for flattening your source document. This option appears if you have selected db.collection.find(). Syncs are column based: you must flatten the MongoDB source document prior to sync using a projection. | |
| Pipeline (JSON Array of Objects) | An aggregation pipeline for transforming your data. This option appears if you have selected db.collection.aggregate(). | |
| Use SSL | | |
| Parameter | Description | Example |
|---|---|---|
| Name | Mandatory. The name of your column as it appears in the source. This field is case sensitive and preserves spaces. | Name |
| Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
| Data Type | Mandatory. The data type of the column values. | Text |
| Description | Optional. You may choose to add a description to your column. | |
Open Database Connectivity (ODBC) is a standard application programming interface (API) designed to unify access to SQL databases. An ODBC query allows the extraction of specific information sets from those databases.
The ODBC Query source supports batch syncs.
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
Microsoft SQL Server is one of the main relational database management systems on the market that serves a wide range of software applications for business intelligence and analysis in corporate environments.
Based on the Transact-SQL language, it incorporates a set of standard language programming extensions and its application is available for use both on premise and in the cloud.
Microsoft SQL Server is ideal for storing all the desired information in relational databases, as well as to manage such data without complications, thanks to its visual interface and the options and tools it has.
The MS SQL Server Query and Table sources support batch syncs.
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
Version 5.4 of the Cinchy platform introduced data polling, a source option which uses the Cinchy Event Listener to continuously monitor and sync data entries from your SQL Server or DB2 server into your Cinchy table. This makes data polling a much easier, more effective, and streamlined process, and avoids the complex orchestration logic that was previously necessary.
The Polling Event source supports real-time syncs.
The Polling Event Source supports DB2 and SQL Server databases.
You can review the parameters found on the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
To run a real-time sync, set up your Listener Config and enable it to begin your sync.
Data changes in Cinchy (CDC) can be used to trigger a data sync from a REST API data source to a specified target. The attributes of the CDC Event are available to use as parameters within the REST API Data Source Definition to narrow the scope of the request, e.g. a lookup.
Example Use Case: Let's assume an organization wants to use the Dun & Bradstreet API for enriching company information, e.g. # of Employees, Address, etc. When a company record is added or modified in a table called Companies inside of Cinchy, a D&B API should be triggered with the Company Name (a mandatory field on the Companies table) passed in as a parameter, and the Company record should be enriched with the company information from the API response.
The following sections in the Source configuration of the Connections experience can reference attributes of the CDC Event as parameters:
Auth Request -> Body
Auth Request -> Request Headers -> Header -> Header Value
Auth Request -> Endpoint URL
Body
Request Headers -> Header -> Header Value
API Endpoint URL
Parameters use the column name or alias as defined in the CDC Event's Listener Config prefixed with an "@", e.g. @CompanyName would be the parameter name for the following Cinchy CDC listener Topic configuration.
Parameter names are case sensitive when used in the Connection configuration. Parameter matching is performed using literal string replacements. Names should not contain spaces (spaces are automatically removed), and should have differing prefixes.
The following set of parameters will be available on every event, even if they're not present in the listener config:
@Version
@DraftVersion
@CinchyRecordType
@ApprovalState
@ModifiedBy
@Modified
@Deleted
In order to configure a REST API (Cinchy Event Triggered) connection, a listener must be configured with an Event Connector Type of Cinchy CDC.
Column names in the listener config should not contain spaces. If they do, they will be automatically removed. E.g. a column named Company Name will become the replacement parameter @CompanyName
The replacement parameter names are case sensitive.
Column names in the listener config should not be prefixes of other column names. E.g. if you have a column called "Name", you shouldn't have another called "Name2" as the value of @Name2 may end up being replaced by the value of @Name suffixed with a "2".
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
The REST API source supports batch syncs.
You can find the parameters in the Info tab below (Image 1).
Mandatory and optional parameters for the Source tab are outlined below (Image 2).
Select Show Advanced for more options for the Schema section.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
More options are available to you under the "Add a Section" drop down.
Note that adding a Pagination Block is mandatory.
To get fields in a nested array, you can either set the nested array as the root, or you can use Path to Iterate to expand the array.
Here is a sample JSON response:
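A response shaped to match the schema described below; the field values are illustrative:

```json
{
  "groupId": "g-100",
  "users": [
    { "userId": 1, "name": "Alice" },
    { "userId": 2, "name": "Bob" }
  ]
}
```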
Records Root JSONPath: $.users

Schema:
- $.userId for ID
- $.name for Name

You can't reference "groupId" as it's one level above the specified root scope.

Use $.data in Records Root JSONPath if the API returns a top-level JSON array.
Here is a sample JSON response:
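An illustrative response for this case, a top-level JSON array (which, per the note above, is addressed through the $.data root):

```json
[
  { "name": "Alice", "age": 30 },
  { "name": "Bob", "age": 41 }
]
```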
Records Root JSONPath: $.data

Schema:
- $.name for Name
- $.age for Age
Use Path to Iterate to expand an array; it allows you to target nested keys within the array. This only applies if the records within the array are objects. If the record within the path to iterate is itself an array, each item within it gets placed under an "item" key in a new JSON object.
For example, here is a sample JSON response:
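A response shaped to match the iteration example described below; the values are illustrative:

```json
{
  "name": "Alice",
  "transactions": [
    { "id": 100 },
    { "id": 101 }
  ]
}
```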
In this example, we want to iterate over the "transactions" array, capture the records for "transactionid" and assign them to the "Transaction ID" column, and then add the parent "name" key to a Name column.

Records Root JSONPath: $

Path to iterate: $.transactions

Schema:
- $.name for Name
- $.transactions.id for Transaction ID
To run a batch sync, select Jobs > Start Job.
You are able to use this section to add body content.
Retry Configuration automatically retries HTTP Requests on failure based on a defined set of conditions. This provides a mechanism to recover from transient errors, such as network disruptions or temporary service outages.
Note: the maximum number of retries is capped at 10.
To set up a retry specification:
1. Under the REST API source tab, select API Specification > Retry Configuration.
2. Select your Delay Strategy.
   - Linear Backoff: Defines a delay of approximately n seconds, where n = current retry attempt.
   - Exponential Backoff: Delays every new retry attempt by 2^n seconds, where n = current retry attempt. Example: with Max Attempts = 3, the first retry occurs after 2^1 = 2 seconds, the second after 2^2 = 4 seconds, and the third after 2^3 = 8 seconds.
3. Input your Max Attempts. The maximum number of retries allowed is 10.
4. Define your Retry Conditions, which specify when a retry should be attempted. For a retry to trigger, at least one of the Retry Conditions has to evaluate to true.
Retry conditions are only evaluated if the response code isn't 2xx Success.
Each Retry Condition contains one or more "Attribute Match" sections. This defines a regex to evaluate against a section of the HTTP response. The following are the three areas of the HTTP response that can be inspected:
Response Code
Header
Body
If there are multiple "Attribute Match" blocks within a Retry Condition, all have to match for the retry condition to evaluate to true.
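The retry rules above can be sketched in Python (a simplified illustration; the function names are hypothetical and not part of the Connections product):

```python
import re

def retry_delay(strategy: str, attempt: int) -> int:
    """Seconds to wait before retry number `attempt` (1-based)."""
    if strategy == "linear":
        return attempt       # Linear Backoff: ~n seconds on the n-th retry
    if strategy == "exponential":
        return 2 ** attempt  # Exponential Backoff: 2^n seconds on the n-th retry
    raise ValueError(f"unknown strategy: {strategy}")

def should_retry(status_code: int, conditions, response: dict) -> bool:
    """Retries fire only on non-2xx responses, and only if at least one
    Retry Condition is true; within a condition, every Attribute Match
    (a regex applied to Response Code, a Header, or the Body) must hit."""
    if 200 <= status_code < 300:
        return False
    return any(
        all(re.search(pattern, str(response.get(attr, "")))
            for attr, pattern in condition)
        for condition in conditions
    )
```

With Max Attempts = 3 and exponential backoff, the delays come out to 2, 4, and 8 seconds, matching the example above.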
The name of your
Optional. Review our documentation for more information about this field.
Mandatory. The name of the source object that you want to sync into your destination.
Optional. Review our documentation for more information about this field.
Mandatory. This is the encrypted connection string. You can review MongoDB's Connection String guide and parameter descriptions for more information. Do not include the /[database] in your connection URL; by default, services like MongoDB Atlas will automatically include it when copying the connection string. If authenticating against a database other than the admin database, provide the name of the database associated with the user's credentials using the authSource parameter.
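For illustration, a connection string of this shape (all values hypothetical) omits the /[database] segment and uses authSource to name the authenticating database:

```
mongodb+srv://myuser:mypassword@cluster0.example.mongodb.net/?authSource=myAppDb
```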
Mandatory. The name of the database that contains the collection listed in the "Collection" parameter.
Mandatory. The name of your collection.
Mandatory. The method for retrieving your data. This will be either: - db.collection.find(): This method is used to select documents in a collection when there is no need to transform (e.g. flatten or aggregate) the data. It is used for basic queries where query and projection are sufficient. - db.collection.aggregate(): This method is used when there is a need to transform the data in a collection. It is used for more complex scenarios with single-stage or multi-stage pipelines. In general, you will get the quickest performance by using the find method, unless you need an aggregation pipeline.
An aggregation pipeline consists of one or more stages that process documents. This option appears if you have selected db.collection.aggregate().
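For reference, the two retrieval methods look like this in MongoDB shell syntax (collection and field names are hypothetical):

```javascript
// find: query + projection only — fastest for basic selection
db.customers.find({ active: true }, { name: 1, customerNumber: 1 })

// aggregate: a multi-stage pipeline that transforms the data
db.customers.aggregate([
  { $unwind: "$orders" },
  { $group: { _id: "$name", total: { $sum: "$orders.amount" } } }
])
```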
Optional. Check this box to use certificate authentication for your sync. If checked, you will need to input the following values taken from your cert: - SSL Key PEM - SSL Certificate PEM - SSL CA PEM
Mandatory. The data type of the column values.
The options available to the REST API (Cinchy Event Triggered) connector are identical to those of the REST API source connector.
for your data sync, keeping the following constraints in mind:
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You can learn more about these sections in the documentation.
Configure your Destination.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
For more information, see the page about .
A pagination block is mandatory. See the documentation for more on pagination blocks.
Note that the Regex value should be entered as a regular expression. The Regex engine is .NET, and expressions can be tested with a .NET regex testing tool. In the below example, the Regex is designed to match any HTTP 5xx Server Error Codes, using a Regex value of 5[0-9][0-9].

For Headers, the format of the Header string which the regex is applied against is {Header Name}={Header Value}. For example, "Content-Type=application/json".
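The 5xx pattern and the header-match format can be verified with any .NET-compatible regex engine; the same checks in Python behave identically for this pattern:

```python
import re

# Matches any HTTP 5xx Server Error code
five_xx = re.compile(r"5[0-9][0-9]")

# Headers are matched against strings of the form "{Header Name}={Header Value}"
header = "Content-Type=application/json"

print(bool(five_xx.fullmatch("503")))   # True: a server error
print(bool(five_xx.fullmatch("404")))   # False: not a 5xx code
print(bool(re.search("application/json", header)))  # True: header value matches
```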
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | ODBC Query Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | ODBC |
Connection String | Mandatory. The Connection String to connect to your ODBC Driver. The Connections UI will encrypt this value. Please see here for example Connection Strings. |
Query | This should be a SELECT statement indicating the data you want to sync out of your ODBC. | Select * from dbo.employees |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. You can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
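As an illustration, an ODBC connection string typically has this shape (driver name, server, and credentials are hypothetical):

```
Driver={ODBC Driver 17 for SQL Server};Server=myserver.example.com;Database=mydb;Uid=myuser;Pwd=mypassword;
```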
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | MS SQL Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | MS SQL Server |
Connection String | Mandatory. The Connection String to connect to your MS SQL Server. The Connections UI will encrypt this value. Please see here for example Connection Strings. |
Object | Mandatory. The type of Object you want to use in your data sync. This can be either Table or Query. |
Table | Appears when Object = Table. The name of the Table (including the schema) you want to sync out of your MS SQL Server. | Employees |
Query | Appears when Object = Query. This should be a SELECT statement indicating the data you want to sync out of your MS SQL Server. | Select * from dbo.employees |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. You can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
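As an illustration, an MS SQL Server connection string typically has this shape (server and credentials are hypothetical):

```
Server=myserver.example.com;Database=mydb;User Id=myuser;Password=mypassword;
```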
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Polling Event Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Polling Event |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. You can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Value | Description | Example |
---|---|---|
Column Name | The name(s) of your source column(s) | "Id" "Name" "Age" "Address" "Salary" |
dataType | The data type of your source column | "Number" "Text" |
isMandatory | Whether the column is mandatory or not | "false" |
validateData | Whether the column data needs to be validated or not | "false" |
Column Name | Data Type |
---|---|
ID | Number |
NAME | Text |
AGE | Number |
ADDRESS | Text |
SALARY | Number |
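Put together, those attributes describe schema columns along these lines (element names are illustrative only; refer to the full XML examples for the exact structure):

```xml
<Schema>
  <Column name="ID" dataType="Number" isMandatory="false" validateData="false" />
  <Column name="NAME" dataType="Text" isMandatory="false" validateData="false" />
</Schema>
```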
Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both 'Mandatory' and 'Validated': empty rows rejected. - If only 'Mandatory': rows synced but marked as failed with 'Mandatory Rule Violation'. |
Validate Data | - If both 'Mandatory' and 'Validated': empty rows rejected. - If only 'Validated': all rows synced. |
Trim Whitespace | Optional for text data. Choose to trim whitespace. |
Max Length | Optional for text data. Set max length; exceeding values get rejected. |
Pattern | Mandatory when using a Transformation. The pattern for your string replacement. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync | REST API Sync |
Variables | |
Permissions | Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory. |
Source | Mandatory. Select your source from the drop down menu. | REST API |
HTTP Method | Mandatory. This will be either GET or POST. |
API Response Format | Mandatory. Use this field to specify a response format of the endpoint. Currently, the Connections UI only supports JSON responses. | JSON |
Records Root JSONPath | Mandatory. Specify the JSON path for the results. The root of a JSON object is $. If the top element of the response is an array, Cinchy places the array under a "data" key in a new JSON object. |
Path to Iterate | The path to select an array of records for capturing elements inside. A record is created for each element, which you can use as the input in a source schema. The path is relative to the root JSONPath. |
API Endpoint URL | Mandatory. API endpoint, including URL parameters like API key | https://www.quandl.com/api/v3/datatables/CLS/IDHP?fx_business_date=2024-01-01&api_key=@API_KEY |
Next Page URL JSONPath | Specify the path for the next page URL. This is only relevant for APIs that use cursor pagination |
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce objects are database tables that permit you to store data that is specific to an organization. Salesforce objects are of two types:
Standard Objects: Standard objects are the kind of objects that are provided by salesforce.com such as users, contracts, reports, dashboards, etc.
Custom Objects: Custom objects are those objects that are created by users. They supply information that is unique and essential to their organization. They are the heart of any application and provide a structure for sharing data.
The Salesforce Object (Bulk API) source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
SAP SuccessFactors solutions are cloud-based HCM software applications that support core HR and payroll, talent management, HR analytics and workforce planning, and employee experience management.
The SAP SuccessFactors source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Click Jobs > Start a Job to begin your sync.
1. Overview
Salesforce Platform Events are secure and scalable messages that contain data. Publishers push out event messages that subscribers receive in real time.
The Salesforce Platform Event source supports real-time syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Snowflake enables data storage, processing, and analytic solutions.
The Snowflake source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Optional. Review our documentation for more information about this field.
Mandatory. Specify the JSON path for the results. The root of a JSON object is $. If the top element of the response is an array, Cinchy places the array under a "data" key in a new JSON object. See the documentation for more info.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
To run a real-time sync, enable your listener config to begin your sync.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Salesforce Bulk API |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Salesforce Object (Bulk API) |
Object | Mandatory. The name of your Salesforce Object. |
Auth URL | Mandatory. The URL that issues your Salesforce auth token. |
Client ID | Mandatory. The encrypted Client ID to connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you. |
Client Secret | Mandatory. The encrypted Client Secret for the above Client ID. The Connections UI will automatically encrypt this value for you. |
Username | Mandatory. The encrypted Username of an account that can connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you. |
Password | Mandatory. The encrypted Password associated with the above account that can connect to your Salesforce Object. The Connections UI will automatically encrypt this value for you. |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. You can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | SAP SuccessFactors |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | SAP SuccessFactors |
API Key | Mandatory. The encrypted API Key needed to connect to your SuccessFactors entity. The Connections UI will automatically encrypt this value for you. |
User ID | Mandatory. A User ID with access to connect to your SuccessFactors entity. |
Company ID | Mandatory. The Company ID associated with the above User ID and with access to connect to your SuccessFactors entity. |
SAML Assertion URL | Mandatory. The URL used for SAML assertion. |
OAuth Token URL | Mandatory. The URL that issued your OAuth token. |
Private Key | Mandatory. The key to connect to your SuccessFactors entity. The Connections UI will automatically encrypt this value for you. |
OData API URL | Mandatory. The URL for the OData API. |
Entity | Mandatory. The name of your SuccessFactors entity. |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. You can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Value | Description | Example |
---|---|---|
connectionString | The connection string for your source | "87E4lvPf83gLK8eKapH6Y0YqIFSNbFlq62uN9487" |
Object | The type of source object | "Table" |
Table | The name of your source object (in this case a table) | "Employees" |
Column Name | The name(s) of your source column(s) | "name" |
dataType | The data type of your source column | "Text" |
isMandatory | Whether the column is mandatory or not | "false" |
validateData | Whether the column data needs to be validated or not | "false" |
Parameter | Description | Example |
---|---|---|
Source | Mandatory. Select your source from the drop down menu. | Salesforce Platform Event |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync | Salesforce Platform Event |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync | Snowflake Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Snowflake |
Connection String | Mandatory. The encrypted connection string used to connect to your Snowflake instance. You can review Snowflake's Connection String guide and parameter descriptions for more information. | Unencrypted example: account=wr38353.ca-central-1.aws;user=myuser;password=mypassword;db=CINCHY;schema=PUBLIC |
Object | Mandatory. Select either Table or Query. | Table or Query |
Table | Appears if Object = Table. The name of the Table in Snowflake that you wish to sync. |
Query | Appears if Object = Query. A SELECT statement query that will be used to fetch your data. |
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP allows applications running on different operating systems to communicate using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
The SOAP 1.2 Web Service source supports batch syncs.
You can review the parameters that can be found in the info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement
Click Jobs > Start a Job to begin your sync.
Optional. Review our documentation for more information about this field.
Optional. Review our documentation for more information about this field.
Mandatory. The encrypted connection string used to connect to your Snowflake instance. You can review Snowflake's Connection String guide and parameter descriptions for more information.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
Configure your Destination.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". |
Validate Data | - If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. - If just Validated is checked on a column, then all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
Title | Mandatory. Input a name for your data sync | SOAP Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters |
Source | Mandatory. Select your source from the drop down menu. | SOAP 1.2 Web Service |
authType | Mandatory. Select the type of authentication you wish to us in this sync. - None - WSSE: This will allow you to use a Username and Password to authenticate via a WS-Security SOAP envelope header. - Basic: This will allow you to use a Username and Password to authenticate via a basic auth header. | Basic |
Use Password Digest | The password digest is a cryptographic hash of the password and timestamp. This parameter should only be used in conjunction with a WSSE authType, and when the Password Type for your auth is "PasswordDigest". If neither of those applies, leave this value unchecked. |
Request Timeout | Mandatory. You can use this field to set a timeout, in milliseconds, for your request. There is no maximum value. The minimum should be greater than 0. The default value is 100 milliseconds | 2000 |
Endpoint | Mandatory. This field should contain your SOAP 1.2 Web Service API endpoint. |
Has Mtom Response |
Record Xpath | Mandatory. The Xpath to select all records we want to extract from the SOAP response. The path should point to the XML element wrapping the column data. XPath stands for XML Path Language. It uses a non-XML syntax to provide a flexible way of addressing (pointing to) different parts of an XML document. This value should start with ‘//’ and be followed by the tag name of the data. You can refer http://xpather.com/ to find out the Xpath. |
Envelope Namespace | The namespace prefix to use for the SOAP request elements. For example, setting the value to "foo" would result in the soap request being prefixed with the "foo" namespace. | "foo" |
Namespaces - Name | The name of your SOAP namespace tags in your request and response. This value appear as "soap" in the snippet below. These should be the values immediately after "xmlns:" | "soap" |
Namespaces - Value |
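For context on the Use Password Digest option: the WS-Security UsernameToken profile computes the digest as Base64(SHA-1(nonce + created + password)). A minimal Python sketch, with an illustrative password value:

```python
# Sketch of the WSSE "PasswordDigest" computation from the WS-Security
# UsernameToken profile: Base64( SHA-1( nonce + created + password ) ).
# The password below is illustrative only.
import base64
import hashlib
import os
from datetime import datetime, timezone

password = "example-password"          # hypothetical credential
nonce = os.urandom(16)                 # random per-request nonce (raw bytes)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

digest = base64.b64encode(
    hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8")).digest()
).decode("ascii")

# The SOAP security header carries the Base64-encoded nonce, the created
# timestamp, and this digest instead of the plain-text password.
print(digest)
```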
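To illustrate what the Record Xpath parameter selects, here is a minimal sketch using Python's standard library. The envelope shape and field names are hypothetical; ElementTree's ".//" prefix plays the role of XPath's "//".

```python
# Sketch: how a Record Xpath like "//Employee" selects the elements wrapping
# each record's column data in a SOAP response. Response shape is hypothetical.
import xml.etree.ElementTree as ET

soap_response = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetEmployeesResponse>
      <Employee><Name>Ada</Name><Id>1</Id></Employee>
      <Employee><Name>Grace</Name><Id>2</Id></Employee>
    </GetEmployeesResponse>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(soap_response)
# ElementTree's ".//" performs a descendant search, like XPath's "//"
records = root.findall(".//Employee")
rows = [{"Name": r.findtext("Name"), "Id": r.findtext("Id")} for r in records]
print(rows)  # [{'Name': 'Ada', 'Id': '1'}, {'Name': 'Grace', 'Id': '2'}]
```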
The Oracle Query and Table sources support batch syncs.
You can review the parameters found in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement.
Click Jobs > Start a Job to begin your sync.
Oracle Database is a relational database management system, commonly used for running online transaction processing, data warehousing, and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Oracle Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Oracle |

Parameter | Description | Example |
---|---|---|
Connection String | Mandatory. The Connection String used to connect to your Oracle instance. The Connections UI will encrypt this value. Review Oracle's documentation for example Connection Strings. |
Object | Mandatory. The type of Object you want to use in your data sync. This can be either Table or Query. | Table |
Table | Appears when Object = Table. The name of the Table you want to sync out of Oracle. | Employees |
Query | Appears when Object = Query. This should be a SELECT statement indicating the data you want to sync out of Oracle. | Select * from dbo.employees |

The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, rows are synced with the execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. |
Trim Whitespace | Optional if data type = text. If your data type is "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. A numerical value representing the maximum length of the data that can be synced in your column. If the value is exceeded, the row is rejected (you can find this error in the Execution Log). |
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
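As an illustration only (the host, port, and service name below are hypothetical), an ODP.NET-style Oracle connection string typically takes this shape:

```
User Id=HR;Password=********;Data Source=dbhost.example.com:1521/ORCLPDB1
```

Here Data Source uses Oracle's EZ Connect `host:port/service_name` form. Whatever value you supply, the Connections UI will encrypt it.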
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Push Topic events provide a secure and scalable way to receive notifications for changes to Salesforce data that match a SOQL (Salesforce Object Query Language) query you define.
You can use Push Topic events to:
Receive notifications of Salesforce record changes, including create, update, delete, and undelete operations.
Capture changes for the fields and records that match a SOQL query.
Receive change notifications for only the records a user has access to based on sharing rules.
Limit the stream of events to only those events that match a subscription filter.
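Push Topics are defined with a SOQL query on the Salesforce side. For reference, Salesforce's developer documentation creates them with a short Apex snippet along these lines (the topic name and query below are illustrative):

```apex
PushTopic pushTopic = new PushTopic();
pushTopic.Name = 'ClientAccountUpdates';           // illustrative topic name
pushTopic.Query = 'SELECT Id, Name FROM Account';  // SOQL defining which records and fields emit events
pushTopic.ApiVersion = 54.0;
pushTopic.NotifyForOperationCreate = true;
pushTopic.NotifyForOperationUpdate = true;
pushTopic.NotifyForOperationDelete = true;
pushTopic.NotifyForOperationUndelete = true;
pushTopic.NotifyForFields = 'Referenced';          // only fields referenced in the query trigger events
insert pushTopic;
```

A subscriber on the `/topic/ClientAccountUpdates` streaming channel then receives create, update, delete, and undelete events for matching Account records.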
The Salesforce Push Topic source supports real-time syncs.
You can review the parameters found in the Info tab below (Image 1).
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
There are other options available for the Schema section if you click on Show Advanced.
You can choose to add in a Transformation > String Replacement by inputting the following:
Note that you can have more than one String Replacement.
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
Configure your Destination
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
To run a real-time sync, set up your Listener Config and enable it to begin your sync.
Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Salesforce Push Topic |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. |
Source | Mandatory. Select your source from the drop down menu. | Salesforce Push Topic |

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. |
Mandatory | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Mandatory is checked on a column, then all rows are synced with the execution log status of failed, and the source error of "Mandatory Rule Violation". If just Validated is checked on a column, then all rows are synced. |
Validate Data | If both Mandatory and Validated are checked on a column, then rows where the column is empty are rejected. If just Validated is checked on a column, then all rows are synced. |

Parameter | Description | Example |
---|---|---|
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). |

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. |
Replacement | What you want to replace your pattern with. |
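One reading of the Mandatory and Validate Data rules above, as a toy sketch. This is illustrative only, not Cinchy Connections code; the helper name and return strings are invented:

```python
# Toy sketch of the Mandatory / Validate Data behaviour described above.
# Hypothetical helper; it only mirrors one reading of the documented rules.

def row_action(value, mandatory: bool, validated: bool) -> str:
    """Return what happens to a row for one column's value."""
    empty = value is None or value == ""
    if mandatory and validated and empty:
        return "rejected"  # both flags checked: empty rows are rejected
    if mandatory and not validated and empty:
        # synced, but the execution log records a failed status with the
        # source error "Mandatory Rule Violation"
        return "synced (Mandatory Rule Violation logged)"
    return "synced"

print(row_action("", True, True))     # rejected
print(row_action("", True, False))   # synced (Mandatory Rule Violation logged)
print(row_action("Ada", True, True)) # synced
```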