Cinchy Tables are commonly used data sync targets.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Cinchy Table destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Cinchy Table |
Domain | Mandatory. The domain where your destination table resides. | Product |
Table Name | Mandatory. The name of your destination table. | Q1 Sales |
Suppress Duplicate Errors | Optional. This field determines whether duplicate keys are to be reported as warnings (unchecked) or ignored (checked). The default is unchecked. Checking this box can be useful in the event that you only want to load the distinct values from a collection of columns in the source. | |
Degree of Parallelism | The number of batch inserts and updates that can be run in parallel. This defaults to 1. | 1 |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
Link Column | This value is only used when utilizing a Cinchy Table destination with a linked column. For example: your destination column is an Employee's favourite colour that links into the System Colours table. If your source data contains hex values (eg. #00B050) for that mapping, then you'd put Hex Value in the link column field. If your source data contains the colour names (eg. Green), then you'd put Name in the link column field. (This matching is sketched below.) | |
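To make the Link Column behaviour concrete, here is a minimal Python sketch of how a source value is matched against the chosen column of the linked table. The data and the `resolve_link` helper are illustrative only, not Connections internals.

```python
# Illustrative linked table, standing in for the System Colours table.
system_colours = [
    {"Name": "Green", "Hex Value": "#00B050"},
    {"Name": "Red",   "Hex Value": "#FF0000"},
]

def resolve_link(source_value: str, link_column: str):
    """Find the linked row whose `link_column` value matches the source value."""
    for row in system_colours:
        if row[link_column] == source_value:
            return row
    return None

print(resolve_link("#00B050", "Hex Value"))  # source data holds hex values
print(resolve_link("Green", "Name"))         # source data holds colour names
```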
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Dynamics destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Dynamics |
Entity | Mandatory. The name of the entity you want to sync to as it appears in your Dynamics CRM. | |
Service URL | Mandatory. The URL for your Dynamics instance. | |
Client ID | Mandatory. The encrypted Client ID found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you. | |
Client Secret | Mandatory. The encrypted Client Secret found in your Azure AD app registration. The Connection UI will automatically encrypt this value for you. | |
ID Column | Mandatory. The unique ID Column name of the Entity that you wish to sync to. | |
ID Column Insert | Setting this value to true will allow direct inserts into the ID Column indicated above. The default state is false: ID values will be matched but no data will be inserted. | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
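Because the Client ID and Client Secret come from an Azure AD app registration, you can sanity-check them outside of Connections with a standard OAuth2 client-credentials request. A hedged Python sketch, assuming placeholder tenant, credential, and instance values:

```python
import requests

TENANT_ID = "<your-azure-ad-tenant-id>"
CLIENT_ID = "<client-id-from-app-registration>"
CLIENT_SECRET = "<client-secret-from-app-registration>"
RESOURCE = "https://yourorg.crm.dynamics.com"  # your Dynamics instance URL

# OAuth2 client-credentials flow against Azure AD.
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"{RESOURCE}/.default",
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# A successful Web API call confirms the credentials and instance URL.
entities = requests.get(
    f"{RESOURCE}/api/data/v9.2/EntityDefinitions?$select=LogicalName",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(entities.status_code)
```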
DB2 (Formerly Db2 for LUW) is a relational database that delivers advanced data management and analytics capabilities for transactional workloads.
Microsoft Dynamics 365 functions as an interconnected CRM, ERP, and productivity suite that integrates processes, data, and business logic.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.
Oracle Database is a relational database management system, commonly used for running online transaction processing, data warehousing and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Salesforce objects are database tables that permit you to store data that is specific to an organization. Salesforce objects are of two types:
Standard Objects: Standard objects are the kind of objects that are provided by salesforce.com such as users, contracts, reports, dashboards, etc.
Custom Objects: Custom objects are those objects that are created by users. They supply information that is unique and essential to their organization. They are the heart of any application and provide a structure for sharing data.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
Snowflake enables data storage, processing, and analytic solutions.
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP allows applications running on different operating systems to communicate using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
DB2 (Formerly Db2 for LUW) is a relational database that delivers advanced data management and analytics capabilities for transactional workloads.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The DB2 Table destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | DB2 Table |
Connection String | Mandatory. The encrypted Connection String used to connect to your DB2 database. The Connections UI will automatically encrypt this value for you. You can find an example Connection String here. | |
Table | Mandatory. The name of the DB2 table that you want to sync your data to, including the schema. | dbo.employees |
ID Column | The name of the identity column that exists in the destination (or a single column that is guaranteed to be unique and automatically populated for every new record). | |
ID Column Data Type | The data type of the above ID Column. | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The MongoDB Collection destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | MongoDB Collection |
Connection String | Mandatory. The encrypted connection string for your MongoDB Collection. The Connections UI will automatically encrypt this value for you. You can review MongoDB's Connection String guide and parameter descriptions here. | |
Database | Mandatory. The name of your MongoDB database. | Cinchy |
Collection | Mandatory. The name of your MongoDB collection. | Employees |
Use SSL | This checkbox can be used to define the use of x.509 certificate authentication for your sync. If checked, you will need to input the following values taken from your cert: SSL Key PEM, SSL Certificate PEM, and SSL CLA PEM. | |
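If you want to verify these connection values outside of Connections, here is a hedged pymongo sketch. The host, database, and certificate paths are placeholders; the x.509 options mirror what the Use SSL checkbox implies:

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017/?authMechanism=MONGODB-X509",
    tls=True,
    tlsCertificateKeyFile="/path/to/client.pem",  # SSL Key + Certificate PEM
    tlsCAFile="/path/to/ca.pem",                  # CA certificate PEM
)
collection = client["Cinchy"]["Employees"]  # Database and Collection parameters
print(collection.estimated_document_count())
```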
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
Cinchy v5.6 introduced the Retry Configuration for MongoDB targets. This will automatically retry HTTP Requests on failure based on timeout or connection errors. If the final retry attempt fails, the error gets logged into the Execution Errors table.
This capability provides a mechanism to recover from transient errors such as network disruptions or temporary service outages.
Note: the maximum number of retries is capped at 10.
To set up a retry configuration:
1. Under the MongoDB destination tab, select Retry Configuration.
2. Select your Delay Strategy.
Linear Backoff: Defines a delay of approximately n seconds where n = current retry attempt.
Exponential Backoff: A strategy where every new retry attempt is delayed exponentially by 2^n seconds, where n = current retry attempt.
Example: you defined Max Attempts = 3. Your first retry is going to be in 2^1 = 2 seconds, your second in 2^2 = 4 seconds, and your third in 2^3 = 8 seconds. (Both strategies are sketched after these steps.)
3. Input your Max Attempts. The maximum number of retries allowed is 10.
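Both strategies are simple to model. A minimal Python sketch; the `retry_delay` helper is illustrative only and not part of Connections:

```python
def retry_delay(strategy: str, attempt: int) -> int:
    """Delay in seconds before retry number `attempt` (1-based)."""
    if strategy == "linear":
        return attempt       # Linear Backoff: ~n seconds on attempt n
    if strategy == "exponential":
        return 2 ** attempt  # Exponential Backoff: 2^n seconds on attempt n
    raise ValueError(f"unknown strategy: {strategy}")

# With Max Attempts = 3 (capped at 10) and Exponential Backoff: 2, 4, 8 seconds.
for attempt in range(1, 4):
    print(attempt, retry_delay("exponential", attempt))
```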
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Apache Kafka is an end-to-end event streaming platform that:
Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
Stores these events durably and reliably for as long as you want.
Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time for your key use cases.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Kafka Topic destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Kafka Topic |
Bootstrap Servers | Mandatory. A list of host/port pairs used for establishing the initial connection to the Kafka cluster. This parameter should be a CSV list of "broker host" or "host:port" values. | localhost:9092,another.host:9092 |
Topic Name | Mandatory. The name of the Kafka Topic that messages will be produced to. | |
Use SSL | Check this if you want to connect to Kafka over SSL. | |
SASL Mechanism | Mandatory. Select the SASL (Simple Authentication and Security Layer) mechanism to use for authentication: None, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER (default), or OAUTHBEARER (OIDC). | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
The names you specify in your "Target Column" value will turn into attributes in a JSON payload that will be constructed and pushed to Kafka. The name of this target column can be whatever you choose, but we recommend maintaining your naming convention across columns for simplicity.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
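To illustrate, here is a hedged sketch using the kafka-python package: the Target Column names become attributes of the JSON message produced to the topic. The broker, topic, and record values are placeholders, and this is not how Connections is implemented internally:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092", "another.host:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One synced record: the keys are the "Target Column" names you chose.
record = {"Name": "Jane Doe", "FavouriteColour": "Green"}
producer.send("employees-topic", value=record)
producer.flush()
```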
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Microsoft SQL Server is one of the main relational database management systems on the market that serves a wide range of software applications for business intelligence and analysis in corporate environments.
Based on the Transact-SQL language, it incorporates a set of standard language programming extensions and its application is available for use both on premise and in the cloud.
Microsoft SQL Server is ideal for storing information in relational databases and managing that data without complications, thanks to its visual interface and tooling.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The MS SQL Server Table destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | MS SQL Server Table |
Connection String | Mandatory. The encrypted Connection String used to connect to your MS SQL Server. The Connections UI will automatically encrypt this value for you. | |
Table | Mandatory. The name of the MS SQL Server table that you want to sync your data to, including the schema. | dbo.employees |
ID Column | The name of the identity column that exists in the destination (or a single column that is guaranteed to be unique and automatically populated for every new record). | |
ID Column Data Type | The data type of the above ID Column. | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Oracle Database is a relational database management system, commonly used for running online transaction processing, data warehousing and mixed database workloads. The system is built around a relational database framework in which data objects may be directly accessed by users (or an application front end) through structured query language (SQL).
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Oracle Table destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Oracle Table |
Connection String | Mandatory. The encrypted Connection String used to connect to your Oracle instance. The Connections UI will automatically encrypt this value for you. | |
Table | Mandatory. The name of the Oracle table that you want to sync your data to, including the schema. | dbo.employees |
ID Column | The name of the identity column that exists in the destination (or a single column that is guaranteed to be unique and automatically populated for every new record). | |
ID Column Data Type | The data type of the above ID Column. | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Snowflake is a fully managed SaaS that provides a single platform for data warehousing, data lakes, data engineering, data science, data application development, and secure sharing and consumption of real-time/shared data.
Snowflake enables data storage, processing, and analytic solutions.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Snowflake Table destination supports batch and real-time syncs.
For batch syncs of 10 records or less, single Insert/Update/Delete statements are executed to perform operations against the target Snowflake table.
For batch syncs exceeding 10 records, the operations are performed in bulk.
The bulk operation process, sketched below, consists of:
Generating a CSV containing a batch of records
Creating a temporary table in Snowflake
Copying the generated CSV into the temp table
If needed, performing Insert operations against the target Snowflake table using the temp table
If needed, performing Update operations against the target Snowflake table using the temp table
If needed, performing Delete operations against the target Snowflake table using the temp table
Dropping the temporary table
Real-time sync volume size is based on a dynamic batch size up to a configurable threshold.
The temporary table generated in the bulk flow process for high volume scenarios transforms all columns of data type Number to be of type NUMBER(38, 18). This may cause precision loss if the numeric scale in the target table is higher.
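The bulk flow can be approximated outside of Connections with the snowflake-connector-python package. A hedged sketch of the same sequence; the account values mirror the example connection string below, and the table names are made up:

```python
import csv
import snowflake.connector

conn = snowflake.connector.connect(
    account="wr38353.ca-central-1.aws", user="myuser", password="mypassword",
    database="CINCHY", schema="PUBLIC",
)
cur = conn.cursor()

# 1. Generate a CSV containing a batch of records.
with open("/tmp/batch.csv", "w", newline="") as f:
    csv.writer(f).writerows([(1, "Jane Doe"), (2, "John Doe")])

# 2. Create a temporary table; Number columns become NUMBER(38, 18).
cur.execute("CREATE TEMPORARY TABLE EMP_TMP (ID NUMBER(38,18), NAME TEXT)")

# 3. Copy the generated CSV into the temp table (staged via the user stage).
cur.execute("PUT file:///tmp/batch.csv @~/batch")
cur.execute("COPY INTO EMP_TMP FROM @~/batch FILE_FORMAT=(TYPE=CSV)")

# 4-6. Insert / Update / Delete against the target using the temp table.
cur.execute(
    "INSERT INTO EMPLOYEES SELECT * FROM EMP_TMP t "
    "WHERE NOT EXISTS (SELECT 1 FROM EMPLOYEES e WHERE e.ID = t.ID)"
)

# 7. Drop the temporary table (it is also dropped at session end).
cur.execute("DROP TABLE EMP_TMP")
conn.close()
```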
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Snowflake Table |
Connection String | Mandatory. The encrypted connection string used to connect to your Snowflake instance. You can review Snowflake's Connection String guide and parameter descriptions here. Unencrypted example: account=wr38353.ca-central-1.aws;user=myuser;password=mypassword;db=CINCHY;schema=PUBLIC | |
Table | Mandatory. The name of the Table in Snowflake that you wish to sync. | Employees |
ID Column | Mandatory if you want to use the "Delete" action in your sync behaviour configuration. The name of the identity column that exists in the target (OR a single column that is guaranteed to be unique and automatically populated for every new record). | Employee ID |
ID Column Data Type | Mandatory if using the ID Column parameter. The data type of the above ID Column: Text, Number, Date, Bool, Geography, or Geometry. | Number |
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Salesforce is a cloud-based CRM software designed for service, marketing, and sales.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The Salesforce destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | Salesforce |
Object | Mandatory. The API name of the platform event (with the __e). | Notification__e |
Auth URL | Mandatory. The URL that issues your Salesforce token. | |
Client ID | Mandatory. The encrypted Client ID for the connected app. The Connection UI will automatically encrypt this value for you. | |
Client Secret | Mandatory. The encrypted Client Secret for the connected app. The Connection UI will automatically encrypt this value for you. | |
Username | Mandatory. The Username of a Salesforce account with the permissions to connect to your Object. The Connection UI will automatically encrypt this value for you. | |
Password | Mandatory. The Password of the above Salesforce account. The Connection UI will automatically encrypt this value for you. | |
Auto Assign | Optional. You can set this to true to leverage assignment rules in Salesforce. | |
Platform Event | Optional. You can set this to true to enable platform events. | |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
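The Auth URL, Client ID/Secret, Username, and Password map onto Salesforce's OAuth 2.0 username-password flow, which you can test outside of Connections. A hedged sketch with placeholder values:

```python
import requests

AUTH_URL = "https://login.salesforce.com/services/oauth2/token"

resp = requests.post(AUTH_URL, data={
    "grant_type": "password",
    "client_id": "<connected-app-client-id>",
    "client_secret": "<connected-app-client-secret>",
    "username": "integration.user@example.com",
    "password": "<password-plus-security-token>",
})
resp.raise_for_status()
print(resp.json()["access_token"][:12], "...")  # token issued successfully
```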
A REST API is an application programming interface that conforms to the constraints of REST (representational state transfer) architectural style and allows for interaction with RESTful web services.
REST APIs work by fielding requests for a resource and returning all relevant information about the resource, translated into a format that clients can easily interpret (this format is determined by the API receiving requests). Clients can also modify items on the server and even add new items to the server through a REST API.
Cinchy v5.5 added support for escaping functions that can be used in REST API data syncs anywhere a "parameter" could be utilized (Ex: Endpoint URL, Post Sync Script, etc.). These functions escape parameter values to be safe inside of a URL or JSON document, respectively.
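As an illustration of what that escaping does (the Cinchy function names aren't shown here; this sketch uses Python's standard library instead):

```python
import json
import urllib.parse

value = 'O\'Brien & Co / "North" Division'

url_safe = urllib.parse.quote(value, safe="")  # safe inside a URL
json_safe = json.dumps(value)                  # safe inside a JSON document

print(url_safe)   # O%27Brien%20%26%20Co%20%2F%20%22North%22%20Division
print(json_safe)  # "O'Brien & Co / \"North\" Division"
```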
Prior to setting up your data sync destination, ensure that you've configured your Source.
The REST API destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | REST API |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
You can use this section to help define your API Specifications. The options available to you are:
Retry Configuration
REST API Source
Insert Specification
Update Specification
Delete Specification
You can learn more about these options in the sections below. The REST API Source section has the same parameters as the usual REST API source; you can reference that documentation to set up this section.
Cinchy v5.5 introduced the ability to pass parameters from a REST response into post sync scripts during both real-time and batch data syncs, allowing you to do more with your REST API data.
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.
Cinchy v5.5 introduced the Retry Configuration for REST API targets. This will automatically retry HTTP Requests on failure based on a defined set of conditions. A single retry configuration is defined for the REST API target, and applies to all requests configured in the Insert, Update, and Delete specifications. This capability provides a mechanism to recover from transient errors such as network disruptions or temporary service outages.
Note: the maximum number of retries is capped at 10.
To set up a retry configuration:
1. Under the REST API destination tab, select API Specification > Retry Configuration.
2. Select your Delay Strategy.
Linear Backoff: Defines a delay of approximately n seconds where n = current retry attempt.
Exponential Backoff: A strategy where every new retry attempt is delayed exponentially by 2^n seconds, where n = current retry attempt.
Example: you defined Max Attempts = 3. Your first retry is going to be in 2^1 = 2 seconds, your second in 2^2 = 4 seconds, and your third in 2^3 = 8 seconds.
3. Input your Max Attempts. The maximum number of retries allowed is 10.
4. Define your Retry Conditions. You must define the conditions under which a retry should be attempted. For the Retry to trigger, at least one of the "Retry Conditions" has to evaluate to true.
Retry conditions are only evaluated if the response code is not 2xx Success.
Each Retry Condition contains one or more "Attribute Match" sections. This defines a Regex to evaluate against a section of the HTTP response. The following are the three areas of the HTTP response that can be inspected:
Response Code
Header
Body
If there are multiple "Attribute Match" blocks within a Retry Condition, all have to match for the retry condition to evaluate to true.
Note that the Regex value should be entered as a regular expression; the Regex engine is .NET, and expressions can be tested with an online .NET regex tester. For example, a Regex value of "5[0-9][0-9]" matches any HTTP 5xx Server Error Code. For Headers, the format of the Header string which the Regex is applied against is {Header Name}={Header Value}, e.g. "Content-Type=application/json".
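The evaluation rules above (OR across Retry Conditions, AND within a condition, skipped on 2xx responses) are easy to model. A hedged Python sketch with illustrative data structures; note that Connections uses the .NET Regex engine, while this sketch uses Python's:

```python
import re

def should_retry(status: int, headers: dict, body: str, conditions) -> bool:
    if 200 <= status < 300:
        return False  # conditions are only evaluated on non-2xx responses
    for condition in conditions:          # OR across Retry Conditions
        if all(_matches(m, status, headers, body) for m in condition):
            return True                   # AND within a condition
    return False

def _matches(match, status, headers, body) -> bool:
    area, pattern = match["area"], match["regex"]
    if area == "Response Code":
        return re.search(pattern, str(status)) is not None
    if area == "Header":
        # headers are matched against "{Header Name}={Header Value}"
        return any(re.search(pattern, f"{k}={v}") for k, v in headers.items())
    return re.search(pattern, body) is not None  # Body

# Retry any HTTP 5xx Server Error:
conds = [[{"area": "Response Code", "regex": "5[0-9][0-9]"}]]
print(should_retry(503, {"Content-Type": "application/json"}, "", conds))  # True
```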
Insert Specification
Select either:
Conditional Flow
Request
HTTP Method: GET, POST, PUT, PATCH, DELETE
Endpoint URL: Refers to where this request will be made and where records will be inserted.
Update Specification
Select either:
Conditional Flow
Request
HTTP Method: GET, POST, PUT, PATCH, DELETE
Endpoint URL: Refers to where this request will be made and where records will be updated.
Delete Specification
Select either:
Conditional Flow
Request
HTTP Method: GET, POST, PUT, PATCH, DELETE
Endpoint URL: Refers to where this request will be made and where records will be deleted.
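Conceptually, each specification pairs an HTTP method with an endpoint URL per operation. A hedged Python sketch of that mapping; the URLs and the `execute` helper are illustrative only, not how Connections is implemented:

```python
import requests

SPECS = {
    # One spec per operation: HTTP method + endpoint URL template.
    "insert": {"method": "POST",   "url": "https://api.example.com/employees"},
    "update": {"method": "PUT",    "url": "https://api.example.com/employees/{id}"},
    "delete": {"method": "DELETE", "url": "https://api.example.com/employees/{id}"},
}

def execute(operation: str, record: dict) -> int:
    spec = SPECS[operation]
    url = spec["url"].format(**record)  # parameters resolved into the URL
    resp = requests.request(spec["method"], url,
                            json=record if operation != "delete" else None)
    return resp.status_code

print(execute("insert", {"id": 7, "Name": "Jane Doe"}))
```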
SOAP (Simple Object Access Protocol) is an XML-based protocol for accessing web services over HTTP.
SOAP allows applications running on different operating systems to communicate using different technologies and programming languages. You can use SOAP APIs to create, retrieve, update or delete records, such as passwords, accounts, leads, and custom objects, from a server.
Prior to setting up your data sync destination, ensure that you've configured your Source.
The SOAP 1.2 Web Service destination supports batch and real-time syncs.
The following table outlines the mandatory and optional parameters you will find on the Destination tab (Image 1).
The following parameters will help to define your data sync destination and how it functions.
Parameter | Description | Example |
---|---|---|
Destination | Mandatory. Select your destination from the drop down menu. | SOAP 1.2 Web Service |
The Column Mapping section is where you define which source columns you want to sync to which destination columns. You can repeat the values for multiple columns. When specifying the Target Column in the Column Mappings section, all names are case-sensitive.
Parameter | Description | Example |
---|---|---|
Source Column | Mandatory. The name of your column as it appears in the source. | Name |
Target Column | Mandatory. The name of your column as it appears in the destination. | Name |
The API Specification section will default with a mandatory Insert Specification field; however, you are also able to add fields for Request Headers, SOAP Body, and Variables to Extract.
Insert Specification
Parameter | Description | Example |
---|---|---|
Endpoint URL | Mandatory. The URL for your instance. | |
Has Attachments | This is required to be true if the SOAP API response contains an attachment outside of the SOAP response message. | |
Request Header
You can add in Request Headers, as needed.
SOAP Body
Parameter | Description | Example |
---|---|---|
Envelope Namespace | The namespace prefix to use for the SOAP request elements. This value will default to "soapenv". You can amend the default value, if you wish; for example, setting the value to "foo" would result in the SOAP request being prefixed with the "foo" namespace. | soapenv |
Namespace - Name | The name of your SOAP namespace tags in your request and response. By default, the Connections UI will populate this field with "soapenv", however you can delete this value and/or add additional values, as needed. This value appears as "soap" in the snippet below. These should be the values immediately after "xmlns:". | soap |
Namespaces - Value | The URL describing this namespace in the response. By default, the Connections UI will populate this field with the envelope namespace URL shown in the snippet below; you can delete this value and/or add additional values, as needed. | |
XML | The SOAP body is a sub-element of the SOAP envelope, which contains information intended for the ultimate recipient of the message. This field is expecting you to specify the SOAP Body. | |
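For reference, here is an illustrative SOAP 1.2 envelope showing the "soap" namespace prefix and its URL, wrapped in a hedged Python sketch that posts it to a public demo service (the service URL and body are examples only, chosen to match the NumberToWords path used later in this section):

```python
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <NumberToWords xmlns="http://www.dataaccess.com/webservicesserver/">
      <ubiNum>42</ubiNum>
    </NumberToWords>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "https://www.dataaccess.com/webservicesserver/NumberConversion.wso",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
print(resp.status_code)
```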
Variables to Extract
You may choose to specify variables to extract from your SOAP response.
Parameter | Description | Example |
---|---|---|
Name | The name of the variable you wish to extract. | Value |
Path in Response | The path to the above variable. | soapenv:Envelope/soapenv:Body/m:NumberToWordsResponse/m:NumberToWordsResult[1] |
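A hedged sketch of what extracting that variable looks like, using Python's ElementTree against an illustrative response (the namespace URLs are examples consistent with the snippet above):

```python
import xml.etree.ElementTree as ET

response = """<soapenv:Envelope
    xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"
    xmlns:m="http://www.dataaccess.com/webservicesserver/">
  <soapenv:Body>
    <m:NumberToWordsResponse>
      <m:NumberToWordsResult>forty two</m:NumberToWordsResult>
    </m:NumberToWordsResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

ns = {
    "soapenv": "http://www.w3.org/2003/05/soap-envelope",
    "m": "http://www.dataaccess.com/webservicesserver/",
}
root = ET.fromstring(response)  # root is soapenv:Envelope
result = root.find("soapenv:Body/m:NumberToWordsResponse/m:NumberToWordsResult", ns)
print(result.text)  # "forty two"
```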
SOAP 1.2 Source
This section should be used if you have a set of data from a SOAP API that you need to reconcile against; therefore it should always be used when doing Full-File syncs. You can follow the documentation for the SOAP 1.2 source to set up this section.
You have the option to add a destination filter to your data sync. Please review the documentation here for more information on destination filters.
Define your Sync Behaviour. Note that if you are doing a Full-File sync, the API Specification > SOAP 1.2 Source section should be filled in.
Add in your Post Sync Scripts, if required.
Define your Permissions.
If you are running a real-time sync, set up your Listener Config and enable it to begin your sync.
If you are running a batch sync, click Jobs > Start a Job to begin your sync.