When configuring a Data Sync, you may have the option to add in additional configuration sections, such as Request Headers, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in Request Headers, fill in the following parameters (Image 2):
Header Type:
Authorization: An authorization request header can be used to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.
Content-Type: Indicates the media type of the request body (for example, application/json).
Header
Name: The name of the HTTP header to add
Is Encrypted?
Header Value
Post sync scripts are written in CQL and can be added to the end of a sync to allow you to do more with your data, such as inserting retrieved values into a Cinchy table.
In this example, we will be doing a batch data sync using the Cinchy Table [Product].[Names Test]. In this scenario we will be using the following simple API as our destination: https://cinchy.net/API/Product/LatestCinchyReleaseDate
When we run our batch job, it will check for updates in our source table. These updates will trigger our REST API to fetch our defined value, which we can use in a post-sync script. In this case, our script will insert this value into a second table, [Product].[Response Test].
The following steps will walk you through how to utilize this functionality.
1. Set up your data sync to a REST API as normal. You can review our Source (Image 1), Destination (Image 2), and Sync Behaviour (Image 3) below.
2. Under the Post Sync tab, we want to input the CQL that, in our case, will take the defined variable from our REST API and insert it into the [Product].[Response Test] table (Image 4).
3. Add in your Permissions and click Save.
4. Once we have finished configuring our sync, we can run the job. It will check for new data in our Name column on our [Product].[Names] table (Image 5). When found, it will trigger the REST API to GET our “value” variable. Our post sync script will then take that value and INSERT it into the [Product].[Response Test] table (Image 6).
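As a sketch, the post sync script from step 2 might look like the following CQL, assuming the variable extracted from the REST response is named @value and the target table has a single [Response] column (both of those names are assumptions, not taken from the sync above):

```sql
-- Sketch only: @value and [Response] are assumed names.
INSERT INTO [Product].[Response Test] ([Response])
VALUES (@value)
```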
Filters can be used in your source and target configurations to define specific subsets of data that you want to use in your syncs.
When syncing a Data Source, you may have the option to add in additional configuration sections, such as a Filter, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
Note that if your source only has one of the listed options, it will appear by default instead of in a drop-down.
A filter on your source is optional. It relies on a source-specific syntax for filtering out records from your source. The filter can reference execution parameters.
This is only available if using a table, not a query. For queries, include the filter in the query itself.
There can only be one <Filter> for each source. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
Example 1: Using a filter to sync only source records with [net worth] > 10000 (Image 2).
Example 2: Using a filter to sync only source records with a status like "Complete" (Image 3).
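In the equivalent XML, the filter from Example 1 would sit in a single <Filter> element. This is a sketch, assuming a Cinchy source so the CQL where-clause syntax applies:

```xml
<Filter>[net worth] > 10000</Filter>
```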
A target destination filter is optional. It relies on a target-specific syntax for filtering out records from your target. The filter can reference execution parameters.
There can only be one <Filter> for each target. To specify more than one condition, use AND/OR to allow logical combination of multiple expressions.
Example 1: Filtering only target records where the Assignee is Null (Image 4).
Example 2: Filtering only target records where the Override ID is not Null (Image 5).
Example 3: Filtering only target records from a specific source (Image 6).
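As XML, and using AND to combine conditions as described above, a target filter covering Examples 1 and 2 together might look like the following sketch (assuming CQL syntax):

```xml
<Filter>[Assignee] IS NULL AND [Override ID] IS NOT NULL</Filter>
```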
When syncing a Data Source, you may have the option to add in additional configuration sections, such as an Auth Request, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
To add in an Auth Request, fill in the following parameters (Image 2):
HTTP Method: Either POST or GET
Endpoint URL
From the drop down you can also add in:
Request Headers
Body
Variables to Extract
File: For file data sources (delimited, fixed width & Excel), the syntax conforms to the .NET Framework's RowFilter on a DataView.
Salesforce: For Salesforce, the syntax is the SOQL where clause (without the where keyword).
Dynamics: For Dynamics, the syntax is the OData $filter clause (without the $filter= prefix).
Cinchy: For Cinchy, the syntax is the CQL where clause (without the where keyword).
SqlServer: For SqlServer, the syntax is the T-SQL where clause (without the where keyword).
Salesforce: For Salesforce, the syntax is the SOQL where clause (without the where keyword).
Dynamics: For Dynamics, the syntax is the OData $filter clause (without the $filter= prefix).
Cinchy: For Cinchy, the syntax is the CQL where clause (without the where keyword).
SqlServer: For SqlServer, the syntax is the T-SQL where clause (without the where keyword).
The pages listed under this section are not required for most data syncs, but they can help you create more robust use cases when applied correctly.
Runtime Parameters are values that can be dynamically inserted when the sync job is run. The parameters you define here can be referenced in fields in other parts of your sync config (using the @ prefix) and when the job is run you can be prompted for their values.
Execution parameters are either passed in at the time of execution or calculated through a formula. The value of the name attribute is passed in via the param-values command line option. (This is optional if the path to the source file to load is specified in the path attribute of the source element, or if calculated column formulas do not reference execution parameters.)
Below is an element that holds an array of <Parameter> elements, shown in the Connections experience (Image 1) and the equivalent XML.
You can choose to just use plain text in the Name field of the Parameter, or you can use a calculated formula.
The following formulas are currently supported by Connections.
FILENAME(<some-path>, <some-regex>): The FILENAME formula takes in two parameters. The first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group. The first match group's value is what gets assigned to the parameter. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
FILEPATH(<some-path>, <some-regex>): Similar to FILENAME, the FILEPATH formula takes in two parameters. The first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group. The first match group's value is what gets assigned to the parameter. The FILEPATH function executes the regex against the full file path (including the directory structure).
GUID(): The GUID formula uses a random GUID for that parameter's value. It's used when you want to generate a unique identifier to be used during the context of the sync and can be useful, for example, as a way to track whether changes were made by a particular sync run.
ENV(<place-environment-variable-here>): The ENV formula uses an environment variable available in the connections/worker pods as the value of the parameter. An example use case would be a situation where the URL used in a REST API sync differs across environments -- instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate it from your pod configuration files. To use the ENV() formula you will need to do some back-end configuration; please review Appendix A for more details.
We do not recommend using the ENV formula for credentials.
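The difference between FILENAME and FILEPATH comes down to what the regex is matched against. The following Python sketch mimics the described behaviour (it is illustrative only, not the Connections implementation; paths and patterns are made up):

```python
import os
import re

def filename(path, pattern):
    """Apply the regex to the file name only and return the first match group."""
    match = re.search(pattern, os.path.basename(path))
    return match.group(1) if match else None

def filepath(path, pattern):
    """Apply the regex to the full path (including the directory structure)."""
    match = re.search(pattern, path)
    return match.group(1) if match else None

# FILENAME sees only "orders_2024-01-31.csv"; FILEPATH sees the whole path.
print(filename("/data/in/orders_2024-01-31.csv", r"orders_(\d{4}-\d{2}-\d{2})"))  # 2024-01-31
print(filepath("/data/in/orders_2024-01-31.csv", r"/data/(\w+)/"))                # in
```

Note that a directory-based pattern like `/data/(\w+)/` can only ever match via FILEPATH, since FILENAME never sees the directory portion.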
Below are the three Parameter examples shown in the Connections experience, followed by the XML equivalent of all of the examples:
Example 1: A name attribute referencing an execution parameter (Image 2).
Example 2: The FILEPATH function takes in two parameters: the first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group (Image 3). The first match group's value is what gets assigned to the parameter. The FILEPATH function executes the regex against the full file path (including the directory structure) [full formula in XML at the end of this page].
Example 3: The FILENAME function takes in two parameters: the first is a reference to the first parameter (i.e. a file path), and the second is a regular expression that includes a match group (Image 4). The first match group's value is what gets assigned to the parameter. The FILENAME function applies the regex only to the name of the file (excluding the directory structure).
Example 4: The ENV formula uses an environment variable available in the connections/worker pods as the value of the parameter (Image 5). An example use case would be a situation where the URL used in a REST API sync differs across environments -- instead of manually updating the syncs with the various URLs, you can use this formula to automatically calculate it from your pod configuration files.
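Since a formula can be used in place of plain text in the Name field of a Parameter, the parameter array might be sketched in XML as follows. This is an illustration only: the element layout, regex, and environment variable name are all assumptions, not taken from a real config:

```xml
<Parameters>
  <Parameter name="filepath" />
  <Parameter name="FILENAME(@filepath, 'orders_(\d{4}-\d{2}-\d{2})')" />
  <Parameter name="GUID()" />
  <Parameter name="ENV('REST_API_URL')" />
</Parameters>
```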
This section details how to correctly configure your platform to use Environment Variables.
To create or modify environment variables on Windows:
On the Windows taskbar, right-click the Windows Icon > System.
In the Settings window, click Related Settings > Advanced System Settings > Environment Variables (Image 6).
Under System Variables, click New to create your new environment variable (Image 7).
To configure an environment variable in Kubernetes, you will first add the variable to your cluster:
Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\connections\kustomization.yaml file.
Under patchesJson6902 > patch, add in your environment variable as shown in the code snippet below, inputting your own namespace and variable name where indicated.
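As an illustrative sketch only (the exact target and JSON Patch path depend on your kustomization layout; substitute your own namespace and variable name where indicated):

```yaml
patchesJson6902:
  - target:
      version: v1
      kind: Deployment
      name: connections-app       # assumption: the Connections deployment name
      namespace: <your-namespace>
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: <YOUR_VARIABLE_NAME>
          value: "<your-value>"
```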
You must then add the variable to your pod:
Navigate to your platform_components/connections/connections-app.yaml file.
Under Spec > Template > Spec > Containers > Env, add in your environment variable. How this is added can depend on what value you are using as an environment variable. The below code snippet shows a basic example:
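A basic env entry might look like the following sketch; the container name and variable are placeholders, not values from a real deployment:

```yaml
spec:
  template:
    spec:
      containers:
        - name: connections-app    # placeholder container name
          env:
            - name: REST_API_URL   # your environment variable
              value: "https://example.com/api"
```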
Make the same code change to the platform_components/worker/worker-app.yaml file as well.
Ensure that you push and merge your changes.
When syncing a Data Source, you may have the option to add in additional configuration sections, such as Pagination, under the "Add a Section" drop down tab in the Connection Experience (Image 1).
There are two types of pagination available (Image 2):
Cursor: The cursor is a key parameter in this type of pagination. You receive a variable named Cursor along with the response; it is a pointer to a particular item and must be sent with the next request. The server then uses the cursor to seek the next set of items. Cursor-based pagination is more complex and is preferred when dealing with a real-time data set.
Offset: Offset-based pagination is for parameters with a specific limit (the number of results) and offset (the number of records that need to be skipped). Offset-based pagination is easy to use and is preferred for static data.
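The offset scheme can be sketched in a few lines of Python (illustrative only, not Cinchy code): with a limit of 100 and an initial offset of 0, a client requests offsets 0, 100, 200, and so on until every record has been fetched.

```python
def offset_requests(total_records, limit, initial_offset=0):
    """Yield the (limit, offset) query-string pairs an offset-paginated
    client would send to fetch every record."""
    offset = initial_offset
    while offset < total_records:
        yield (limit, offset)
        offset += limit

# Fetching 250 records 100 at a time takes three requests.
print(list(offset_requests(250, 100)))  # [(100, 0), (100, 100), (100, 200)]
```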
To set up cursor pagination, fill in the following parameters (Image 3):
Type: Select Cursor
Next Page URL JSON Path: This is the JSON Path within the response to the URL for the next page
Cursor Key: This is the key used in the query string to specify the cursor value. This is only required if the cursor returned is not a fully qualified URL.
To set up offset pagination, fill in the following parameters (Image 4):
Type: Select Offset
Limit Key: The key used in the query string to specify the limit
Limit: The desired page size
Offset By: The type of offset that the API does pagination by. This will be either Record Number or Page Number.
Offset Key: The key used in the query string to specify the offset
Initial Offset: The starting offset
Note that a pagination block is mandatory even if the API does not return results from multiple pages. You can use the following as the placeholder:
<Pagination type="OFFSET" limitField="" offsetField="" limit="0" initialOffset="0" />