Post sync scripts are written in CQL and can be added to the end of a sync, allowing you to do more with your data, such as inserting retrieved values into a Cinchy table.
This example takes you through a batch data sync using the Cinchy Table [Product].[Names Test]. This scenario uses the following API as the destination: https://cinchy.net/API/Product/LatestCinchyReleaseDate
When you run the batch job, it checks for updates in the source table. These updates trigger the REST API to fetch our defined value, which we can use in a post-sync script. In this example, the script inserts this value into a second table, [Product].[Response Test].
The following steps will walk you through how to use this functionality.
Set up your data sync to a REST API. You can review the Source (Image 1), Destination (Image 2), and Sync Behaviour (Image 3) below.
Under the Post Sync tab, input the CQL that takes the defined variable from the REST API and inserts it into the [Product].[Response Test] table (Image 4).
Add in your Permissions and click Save.
After you configure the sync, run the job. It will check for new data in the Name column of the [Product].[Names Test] table (Image 5). When found, it triggers the REST API to GET the "value" variable. The post sync script then takes that value and INSERTs it into the [Product].[Response Test] table (Image 6).
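A post-sync script of the shape described above might look like the following sketch. The variable name @value and the column name [Response] are illustrative assumptions, not values taken from the example configuration:

```sql
-- Hypothetical post-sync CQL: insert the value extracted from the REST
-- response into the response table. @value and [Response] are assumed names.
INSERT INTO [Product].[Response Test] ([Response])
VALUES (@value)
```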
When syncing a Data Source, you may have the option to add extra configuration sections, such as an Auth Request, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in an Auth Request, fill in the following parameters:
HTTP Method: Either POST or GET
Endpoint URL
From the drop down, you can also add:
Request Headers
Body
Variables to Extract
When syncing a Data Source, you may have the option to add extra configuration sections, such as Pagination, under the "Add a Section" drop-down tab in the Connection Experience (Image 1).
Cinchy has two types of pagination available (Image 2):
Cursor: The cursor is the key parameter in this type of pagination. You receive a variable named Cursor along with the response. It's a pointer at a particular item, and you must send it with your next request; the server then uses the cursor to seek the next set of items. Cinchy recommends using cursor-based pagination when dealing with a real-time data set.
Offset: Offset-based pagination is for parameters with a specific limit (the number of results) and offset (the number of records you need to skip). Cinchy recommends using offset-based pagination for static data.
To set up cursor pagination, fill in the following parameters (Image 3):
Type: Select Cursor
Next Page URL JSON Path: This is the JSON Path within the response to the URL of the next page
Cursor Key: This is the key used in the query string to specify the cursor value. This is only required if the cursor returned isn't a fully qualified URL.
To set up offset pagination, fill in the following parameters (Image 4):
Type: Select Offset
Limit Key: The key used in the query string to specify the limit
Limit: The desired page size
Offset By: The offset type that the API uses for pagination. This will be either Record Number or Page Number.
Offset Key: The key used in the query string to specify the offset
Initial Offset: The starting offset
A pagination block is mandatory, even if the API doesn't return results from multiple pages. You can use the following as the placeholder:
<Pagination type="OFFSET" limitField="" offsetField="" limit="0" initialOffset="0" />
When configuring a Data Sync, you may have the option to add extra configuration sections, such as Request Headers, under the "Add a Section" drop-down tab in the Connection Experience (Image 1).
To add in Request Headers, fill in the following parameters:
Header Type:
Authorization: You can use an authorization request header to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.
Content-Type
Header
Name: The name of the HTTP Header to add
Is Encrypted?
Header Value
The pages listed under this section aren't required for most data syncs, but they can help you create more robust use cases when applied correctly.
You can use filters in your source and target configurations to define specific subsets of data that you want to use in your syncs.
When syncing a Data Source, you may have the option to add additional configuration sections, such as a Filter, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
Note that if your source only has one of the listed options, it will appear by default instead of in a drop-down.
A filter on your source is optional. It relies on a source-specific syntax for filtering records out of your source. The filter can reference execution parameters.
File
For file data sources (delimited, fixed width & Excel), the syntax conforms to the .NET Framework's RowFilter on a DataView.
Salesforce
For Salesforce, the syntax is the SOQL where clause (without the where keyword).
Dynamics
For Dynamics, the syntax is the OData $filter clause (without the $filter= prefix).
Cinchy
For Cinchy, the syntax is the CQL where clause (without the where keyword).
SqlServer
For SqlServer, the syntax is the T-SQL where clause (without the where keyword).
This is only available if using a table, not a query. For queries, include the filter in the query itself.
There can only be one <Filter> for each source. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
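For example, a Cinchy source filter combining two conditions with AND might look like the following; the column names are illustrative, not taken from the examples above:

```xml
<Filter>[Status] = 'Active' AND [Employee Status] IS NOT NULL</Filter>
```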
For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, there is a "Conditional" option for source filters in the Connections UI.
Once selected, you can define the conditions that data must meet to be pulled into your source. After data is retrieved from the source, the conditional filter narrows the returned records down to those that match the defined conditions.
Multiple Conditions can be added to a single data sync by using the AND/OR and +Rule buttons.
You are able to group your Rules into a rule set by using the +Ruleset button.
The left-most drop down is used to select either a source or a target column as defined in your Source and Destination tabs.
The centre drop-down is used to select from the following options:
=
!=
Contains
Is Null
Is Not Null
The right-most field takes a plain value (for example, text or a number). Its input adjusts to the data type of the column picked in the left-most drop-down; for example, if the source schema defines the column as a date, it renders a date picker.
For example, the below condition would only bring in records where the source column Employee Status isn't null (Image 2).
A destination filter is optional. It relies on a target-specific syntax for filtering records out of your target. The filter can reference execution parameters.
Salesforce
For Salesforce, the syntax is the SOQL where clause (without the where keyword).
Dynamics
For Dynamics, the syntax is the OData $filter clause (without the $filter= prefix).
Cinchy
For Cinchy, the syntax is the CQL where clause (without the where keyword).
SqlServer
For SqlServer, the syntax is the T-SQL where clause (without the where keyword).
There can only be one <Filter> for each target. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
Variables are values that can be dynamically inserted when the sync job is run. The variables you define here can be referenced in fields in other parts of your sync config (using the @ prefix) and when the job is run you can be prompted for their values.
Execution variables are either passed in at execution time or calculated through a formula. The value of each name attribute is passed in via the param-values command line option. (This is optional if the path to the source file to load is specified in the path attribute of the source element, or if calculated column formulas don't reference execution variables.)
While in the UI the term is variables, please note that the paired XML configuration will refer to the term as parameters.
You can choose to just use plain text in the Name field of the Variable or you can use a calculated formula.
The following formulas are currently supported by Connections.
FILENAME(<some-path>, <some-regex>): The FILENAME formula takes two arguments:
A reference to another variable (like a file path)
A regular expression that includes a match group. The first match group's value is assigned to the variable. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
FILEPATH(<some-path>, <some-regex>): The FILEPATH formula takes two arguments:
A reference to another variable (like a file path)
A regular expression that includes a match group. The first match group's value is assigned to the variable. The FILEPATH function executes the regex against the full file path (including the directory structure).
GUID(): The GUID formula generates a random GUID for that variable's value. Use GUID() to create a unique identifier to use during the context of the sync. For example, use it to track changes made by a particular sync run.
ENV(<place-environment-variable-here>): The ENV formula uses an environment variable available in the connections/worker pods as the value of the variable.
We don't recommend using the ENV formula for credentials.
Below are the three Variable examples shown in the Connections experience, followed by the relevant XML:
A name attribute references an execution variable (Image 2). You can use this when pulling in a local file for an Excel sync to specify the path to your file on your machine.
The FILEPATH function takes two arguments:
A reference to another variable, such as a file path.
A regular expression that includes a match group (Image 3). The first match group's value is assigned to the variable. The FILEPATH function executes the regex against the full file path (including the directory structure). For the full formula, see the XML example below.
The FILENAME function takes two arguments:
A reference to another variable, such as a file path.
A regular expression that includes a match group (Image 4). The first match group's value is assigned to the variable. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
The ENV formula uses an environment variable available in the connections/worker pods as the value of the variable (Image 5). An example use case would be a situation where the URL used in a REST API sync differs across environments -- instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate it from your pod configuration files.
This secret can then be used in the REST Header (Image 7).
This section details how to configure your platform to use Environment Variables.
To create or change environment variables on Windows:
On the Windows taskbar, right-click the Windows Icon > System.
In the Settings window, click Related Settings > Advanced System Settings > Environment Variables (Image 8).
Under System Variables, click New to create your new environment variable (Image 9).
To configure an environment variable in Kubernetes, do the following:
Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\connections\kustomization.yaml file.
Under patchesJson6902 > patch, add your environment variable as shown in the code snippet below. Input your own namespace and variable name where indicated.
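The referenced snippet is not reproduced here; as a hedged sketch under standard Kustomize conventions, a patch of this shape might look like the following, with the namespace, deployment name, and variable name as placeholders:

```yaml
patchesJson6902:
  - target:
      version: v1
      kind: Deployment
      name: connections-app        # placeholder deployment name
      namespace: <your-namespace>  # input your own namespace
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: <YOUR_VARIABLE_NAME>  # input your own variable name
          value: "<your-value>"
```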
Navigate to your platform_components/connections/connections-app.yaml file.
Under Spec > Template > Spec > Containers > ENV, add in your environment variable. This addition depends on what value you are using as an environment variable. The below code snippet shows a basic example:
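The referenced snippet is missing here; as a minimal sketch following the standard Kubernetes container spec, an env entry typically looks like this (the variable name and value are placeholders):

```yaml
spec:
  template:
    spec:
      containers:
        - name: connections-app
          env:
            - name: MY_REST_BASE_URL       # placeholder variable name
              value: "https://api.example.com"  # placeholder value
```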
Make the same change to the platform_components/worker/worker-app.yaml file.
Push and merge your changes.
GETSECRETVALUE(domain, secretname): The GETSECRETVALUE formula can be used to call a secret from the platform's secrets manager. This secret can then be used anywhere variables are supported where you may need to insert sensitive information, such as a connection string, an Access Key ID, or within a REST URL, Body, or Header.
Example 5: The GETSECRETVALUE formula (Image 6) is input as a variable for a REST Source and is used to call a secret from the platform's secrets manager.