This page is about the analytics and visualization application Grafana, one of our recommended components for Cinchy v5 on Kubernetes.
Grafana is an open source analytics and interactive visualization web application. When connected to your Cinchy platform, it provides charts, graphs, and alerting capabilities (Image 1).
Grafana, paired with Prometheus (which consumes metrics from the running components in your environment), is the recommended visualization application for Cinchy v5 on Kubernetes.
Grafana has a robust library of documentation and tutorials designed to help you learn the fundamentals of the application. We've listed some notable ones below:
When using the default configuration pairing of Grafana and Prometheus, Prometheus is already set up as a data source in your metrics dashboard.
Cinchy comes with several dashboards out of the box. These provide a great jumping-off point for your metrics monitoring, and you can customize, manage, and add further dashboards at your leisure.
1. Navigate to the left navigation pane, select the Dashboards icon > Manage (Image 2).
2. You will see a list of all of the Dashboards available to you (Image 3). Clicking on any of them will take you to a full metrics view (Image 4).
3. You can favourite any of your commonly used or most important dashboards by clicking on the star (Image 5).
4. Once you favourite a dashboard, you can easily find it by navigating to the left navigation pane and selecting the Dashboards icon > Home. This opens the Dashboards Home, where you can see both your favourite and your recent dashboards (Image 6).
Your Cinchy v5 deployment comes with some out-of-the-box dashboards already made. You are able to customize these to suit your specifications. The following are a few notable ones:
Purpose: This dashboard provides a general overview of your entire cluster including all of your environments and pods (Image 7).
Metrics:
The following are some example metrics that you could expect to see from this dashboard:
CPU Usage
CPU Quota
Memory Use
Memory Requests
Current Network Usage
Bandwidth (Transmitted and Received)
Average Container Bandwidth by Namespace
Rate of Packets
Rate of Packets Dropped
Storage IO & Distribution
Purpose: This dashboard is useful for looking at environment specific details (Image 8). You can use the namespace drop down menu to select which environment you want to visualize (Image 9). This can be particularly helpful during load testing. You are also able to drill down to a specific workload by clicking on its name.
Metrics:
The following are some example metrics that you could expect to see from this dashboard:
CPU Usage
CPU Quota
Memory Use
Memory Quota
Current Network Usage
Bandwidth (Transmitted and Received)
Average Container Bandwidth by Workload
Rate of Packets
Rate of Packets Dropped
Grafana lets you set up push alerts against your dashboards and queries. Once you have created your dashboard, you can follow the steps below to set up your alert.
Grafana doesn't have the capability to run alerts against queries with template variables.
To send emails out from Grafana, you need to configure your SMTP. This would have been done in the automation script run during your initial Cinchy v5 deployment. If you didn't input this information at that time, you must do so before setting up your email alerts.
Your notifications channel refers to who will be receiving your alert. To set one up:
Click on the Alert icon on the left navigation tab (Image 10), and locate "Notifications Channel"
Click the "Add a Channel" button.
Add in the following parameters, including any optional checkboxes you wish to use (Image 11):
Name: The name of this channel.
Type: You have several options here, but email is the most common.
Addresses: Input all the email addresses you want to be notified of this alert, separated by a comma.
Click Test to send out a test email, if desired.
Save your Notification Channel.
The following details how to set up alerts on your dashboards. You can also set up alerts upon creation of your dashboard from the same window.
Navigate to the dashboard and dashboard panel that you want to set up an alert for. This example sets up an alert for CPU usage on our cluster.
Click on the dashboard name > Edit
Click on the Alert tab (Image 12).
Input the following parameters to set up your alert (Image 13):
Alert Name: A title for your alert
Alert Timing: Choose how often to evaluate and for how long. In this example it's evaluated every minute for five minutes.
Conditions: Here you can set your threshold conditions for when an alert will be sent out. In this example, it's sent when the average of query A is above 75.
Set what happens if there's no data, or an error in your data
Add in your notification channel (who will be sent this notification)
Add a message to accompany the alert.
Click Apply > Save to finalize your alert.
Click on an image to enlarge it.
Below are a few alerts we recommend setting up on your Grafana.
Set up this alert to notify you when the CPU Usage on your nodes exceeds a specified limit.
You can use the following example queries to set up a dashboard that will capture CPU Usage by Node (Image 14).
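The original queries aren't reproduced here, but a typical starting point, assuming the node_exporter metrics that ship with the kube-prometheus-stack, is a per-node CPU usage expression like this:

```promql
# Percentage of CPU in use per node, averaged over the last 5 minutes
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```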
Set up your alert. This example uses a threshold limit of 75 (Image 15).
Set up this alert to notify you when the Memory Usage on your nodes exceeds a specified limit.
You can use the following example queries to set up a dashboard that will capture Memory Usage by Node (Image 16).
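As a sketch, again assuming node_exporter metrics, per-node memory usage can be expressed as:

```promql
# Percentage of memory in use per node
100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
```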
Set up your alert. This example uses a threshold limit of 85 (Image 17).
Set up this alert to notify you when the Disk Usage on your nodes exceeds a specified limit.
You can use the following example queries to set up a dashboard that will capture Disk Usage by Node (Image 18)
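A hedged example, assuming node_exporter filesystem metrics (adjust the fstype filter to your storage setup):

```promql
# Percentage of filesystem space used per node, ignoring ephemeral mounts
100 * (1 - node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"})
```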
Set up your alert. This example uses a threshold limit of 80 (Image 17).
Set up this alert to check the amount of iowait from the CPU. A high value usually indicates a slow or overloaded HDD or network.
You can use the following example queries to set up a dashboard that will capture the CPU I/O wait (Image 19).
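For example, assuming node_exporter CPU metrics, iowait can be expressed as a percentage per node:

```promql
# Percentage of CPU time spent waiting on I/O, per node
100 * avg by (instance) (rate(node_cpu_seconds_total{mode="iowait"}[5m]))
```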
Set up your alert. This example uses a threshold limit of 60 (Image 19).
This capability was added in Cinchy v5.4.
Your Grafana password can be updated in your deployment.json file (you may have renamed this during your original deployment).
Navigate to cluster_component_config > grafana.
The default password is set to prom-operator; update this with your preferred new password, written in clear text.
Run the below command in the root directory of your devops.automations repository to update your configurations. If you have changed the name of your deployment.json file, make sure to update the command accordingly.
Commit and push your changes.
If your environment isn't set up to automatically apply configuration changes, navigate to the ArgoCD portal and refresh your component(s). If that doesn't work, re-sync.
This page details monitoring and logging for the Cinchy v5 platform when deployed on Kubernetes.
Cinchy v5 on Kubernetes offers and recommends a robust set of open-source tools and applications for monitoring, logging, and visualizing the data on your Cinchy instances.
Click on the respective tool below to learn more about it:
When deploying Cinchy v5 on Kubernetes, Cinchy recommends using OpenSearch Dashboards for your logging. OpenSearch is a community-driven fork of Elasticsearch created by Amazon, and it captures and indexes all your logs into a single, accessible dashboard location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components. These logging components take advantage of persistent storage.
You can view OpenSearch documentation here:
These sections guide you through setting up your first Index, Visualization, Dashboard, and Alert.
OpenSearch comes with sample data that you can use to get a feel for its various capabilities. You will find this on the main page upon logging in.
Navigate to your cinchy.kubernetes/environment_kustomizations/instance_template/worker/kustomization.yaml file.
In the below code, copy the Base64 encoded string in the value parameter.
Decode the value to retrieve your AppSettings.
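On a Unix-like shell, for example, you could decode it like this (the output file name is just illustrative):

```bash
# Decode the Base64 string copied from the value parameter
echo '<paste-encoded-value-here>' | base64 -d > appsettings.decoded.json
```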
Navigate to the below Serilog section of the code and update the "Default" parameter as needed to set your log level. The options are:

Log Level | Description |
---|---|
Verbose | Verbose is the noisiest level, rarely (if ever) enabled for a production app. |
Debug | Debug is used for internal system events that aren't necessarily observable from the outside, but useful when determining how something happened. This is the default setting for Cinchy. |
Information | Information events describe things happening in the system that correspond to its responsibilities and functions. Generally these are the observable actions the system can perform. |
Warning | When service is degraded, endangered, or may be behaving outside of its expected parameters, Warning level events are used. |
Error | When functionality is unavailable or expectations broken, an Error event is used. |
Fatal | The most critical level, Fatal events demand immediate attention. |
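For reference, the Serilog block inside the decoded appsettings follows the standard Serilog configuration shape; a minimal sketch is shown below (your file will contain additional Serilog settings such as sinks and overrides):

```json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug"
    }
  }
}
```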
Ensure that you commit your changes.
Navigate to ArgoCD > Worker Application and refresh.
The following are some common search patterns when looking through your OpenSearch Logs.
If an HTTP request to Cinchy Web/IDP fails, check the page's requests and the relevant response headers to find the "x-correlation-id" header. That header value can be used to search and find all logs associated with the HTTP request.
When debugging batch syncs, filter the "ExecutionId" field in the logs for your batch sync execution ID to narrow down your search.
When debugging real time syncs, search for your data sync config name in the Event Listener or Workers logs to find all the associated logging information.
The first step to utilizing the power of OpenSearch Dashboards is to set up an index to pull data from your sources. An Index Pattern identifies which indices you want to explore. An index pattern can point to a specific index, for example, your log data from yesterday, or all indices that contain your log data.
Log in to OpenSearch. You would have configured the access point during your deployment installation; typically it will be found at <baseurl>/dashboard.
If this is your first time logging in, the username and password will be set to admin/admin.
We highly recommend you update the password as soon as possible.
Navigate to the Stack Management tab in the left navigation menu (Image 1).
From the left navigation, click on Index Patterns (Image 2).
Click on the Create Index Pattern button.
To set up your index pattern, you must define the source. OpenSearch will list the sources available to you on the screen below. Input your desired source(s) in the text box (Image 3).
You can use the asterisk (*) to match multiple sources.
Configure your index pattern settings (Image 4).
Time field: Select a primary time field to use with the global time filter
Custom index pattern ID: By default, OpenSearch gives a unique identifier to each index pattern. You can use this field to optionally override the default ID with a custom one.
Once created, you can review your Index Patterns from the Index Patterns page (Image 5).
Click on your Index Pattern to review your fields (Image 6).
You can pull out any data from your index sources and view them in a variety of visualizations.
From the left navigation pane, click Visualize (Image 7).
If you have any Visualizations, they will appear on this page. To create a new one, click the Create Visualization button (Image 8).
Select your visualization type from the populated list (Image 9).
Choose your source (Image 10). If the source you want to pull data from isn't listed, you will need to set it up as an index first.
Configure the data parameters that appear in the right hand pane of the Create screen. These options will vary depending on what type of visualization you choose in step 3. The following example uses a pie chart visualization (Image 11):
Metrics
Aggregation: Choose how you want your data aggregated. This example uses Count.
Custom Label: You can use this optional field for custom labelling.
Buckets
Aggregation: Choose how you want your data aggregated. This example uses Split Slices > Terms.
Field: This drop down is populated based on the index source you chose. Select which field you want to use in your visualization. This example uses machine.os.keyword.
Order By: Define how you want your data to be ordered. This example uses Metric: Count, in descending order of size 10.
Choose whether to group other values in a separate bucket. If you toggle this on, you will need to label the new bucket.
Choose whether to show missing values.
Advanced
You can optionally choose a JSON input. These will be merged with the OpenSearch aggregation definition.
Options
The variables in the options tab can be used to configure the UI of the visualization itself.
You can also further focus your visualization:
Use DQL to search your index data (Image 12). You can also save any queries you write for easy access by clicking on the save icon.
Add a filter on any of your fields (Image 13).
Update your date filter (Image 14).
Click save when finished with your visualization.
Once you have created your visualizations, you can combine them together on one Dashboard for easy access.
You can also create new visualizations from the Dashboard screen.
From the left navigation pane, click on Dashboards (Image 15).
If you have any Dashboards, they will appear on this page. To create a new one, click the Create Dashboard button (Image 16).
The "Editing New Dashboard" screen will appear. Click on Add an Existing object (Image 17).
Select any of the visualizations you created and it will automatically be added to your Dashboard (Image 18). Repeat this step for as many visualizations as you'd like to appear.
Click Save to finish (Image 19).
This capability was added in Cinchy v5.4.
Your OpenSearch password can be updated in your deployment.json file (you may have renamed this during your original deployment).
Navigate to "cluster_component_config > OpenSearch.
OpenSearch has two users that you can configure the passwords for: Admin and Kibana Server. Kibana Server is used for communication between the opensearch dashboard and the opensearch server. The default password for both is set to "password";. To update this, you will need to use a machine with docker available.
Update your Admin password:
Your password must be hashed. You can do so by running the following command on a machine with docker available, inputting your new password where noted:
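The command itself isn't reproduced here, but one way to generate the bcrypt hash is with the hash.sh tool bundled in the OpenSearch security plugin, assuming Docker and the opensearchproject/opensearch image (the tool path can vary between image versions):

```bash
# Generate a bcrypt hash of the new password using the OpenSearch security plugin tooling
docker run -it --rm --entrypoint /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh \
  opensearchproject/opensearch:latest -p '<your-new-password>'
```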
Navigate to "opensearch_admin_user_hashed_password" and input your hashed password.
You must also provide your password in a base64 encoded format; input your cleartext password here to receive your new encoded password.
Navigate to "opensearch_admin_user_password_base64" and input your encoded password.
Update your Kibana Server password:
Your password must be hashed. You can do so by running the following command on a machine with docker available, inputting your new password where noted:
Navigate to "opensearch_kibanaserver_user_hashed_password" and input your hashed password.
You must also provide your new password in cleartext. Navigate to "opensearch_kibanaserver_user_password" and input your cleartext password.
Run the below command in the root directory of your devops.automations repo to update your configurations. If you have changed the name of your deployment.json file, make sure to update the command accordingly.
Commit and push your changes.
If your environment isn't set up to automatically apply configuration changes, navigate to the ArgoCD portal and refresh your component(s). If that doesn't work, re-sync.
This page serves as a general guide to using ArgoCD for monitoring purposes.
ArgoCD is implemented as a Kubernetes controller which continuously monitors running applications and compares the current, live state against the desired target state (as specified in your Git repository).
You can use ArgoCD's dashboard (Image 1) to visually monitor your namespaces and pods, and to quickly visualize deployment issues. It can easily show you what your cluster or pods are doing, and if they're healthy.
ArgoCD has a robust set of documentation that can help you to get started with the application. We recommend the following two pages:
Your ArgoCD dashboard has a lot of important information about how your Cinchy instance is behaving.
The application tiles view is the default view when logging into ArgoCD. You can also access it through the grid widget in the upper right hand corner of the screen.
These application tiles each represent a pod in one of your Cinchy environments. By default, you should have an application tile for each of the following pods per namespace (Image 2):
Base
Connections
Event Listener
IDP
Maintenance CLI
Meta Forms
Web
Worker
Each application tile shows you some important information about the pod, including its health and sync status. You can use the Sync, Refresh, and Delete buttons to manage your pods.
You can also click the star button on any application tile to favourite the associated pod.
The pie chart view (Image 3) shows an easy visualization of the health of your applications. Access this view by clicking on the pie chart widget located in the upper right hand corner of the screen.
You can filter your view so that it only shows certain data. You can find the various filter options from the Filter column on the left hand side of any of the data views.
Favorites Only: Filter by only favorite tiles (Image 4).
Sync Status: You can easily view what's out of sync using this filter (Image 5).
Health Status: Check on the health of your applications by filtering using these parameters (Image 6).
Labels: Use these labels to filter by specific instances/environments (Image 7).
Projects: If you choose to group your applications into projects, you can filter them using this tile (Image 8).
Clusters: If you have more than one cluster, you can filter for it using the Clusters tile (Image 9).
Namespace: Lastly, you can filter by Namespace. The below example shows a filter based on the dev-aurora-1 namespace (Image 10).
To bring up a more detailed view of your applications, click on the application tile. This view will show you all components, their health, and their sync status (Image 11). You can use the top navigational buttons to perform actions such as syncing, rolling back, or refreshing.
This view can be useful for load testing, since you can see each individual pod spinning up and down.
View the status of your apps by looking at the health and sync status along the top of the page (Image 12).
Clicking on any individual pod or component tile in this view will bring up its information, including a Summary, a list of Events, your Manifest, and Parameters (Image 13).
You can also use this screen to edit or delete applications.
From the detailed tile summary you can also set your sync policies, such as automation, resource pruning, and self healing (Image 14).
You can click on the Logs tab in your detailed summary page to view the applicable logs for your selected pod (Image 15). You can filter, follow, snooze, copy, or download any logs.
OpenSearch comes with the ability to set up alerts based on any number of monitors. You can then push these alerts via email, should you desire.
Before you set up a monitor or alert, ensure that you have added your data source as an index pattern.
Definitions:

Term | Definition |
---|---|
Monitor | A job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers. |
Trigger | Conditions that, if met, generate alerts. |
Alert | An event associated with a trigger. When an alert is created, the trigger performs actions, which can include sending a notification. |
Action | The information that you want the monitor to send out after being triggered. Actions have a destination, a message subject, and a message body. |
Destination | A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook. |
Your destination will be where you want your alerts to be pushed to. OpenSearch supports various options, but this guide focuses on email.
From the left navigation pane, click Alerting (Image 1).
Click on the Destinations Tab > Add Destination
Add a name to label your destination and select Email as type (Image 2)
You will need to assign a Sender. This is the email address that the alert will send from when you specify this specific destination. To add a new Sender, click Manage Senders (Image 3).
Click Add Sender
Add in the following information (Image 4):
Sender Name
Email Address
Host (this is the host address for the email provider)
Port (this is the Port of the email provider)
Encryption
Ensure that you authenticate the Sender, or your alert won't work.
You will need to assign your Recipients. This is the email address(es) that will receive the alert when you specify this specific destination. To add a new Recipient, you can either type their email(s) into the box, or click Manage Email Groups to create an email group (Image 5).
Click Update to finish your Destination.
You will need to authenticate your sender for emails to come through. Please contact Cinchy Customer Support to help you with this step.
Via email: support@cinchy.com
Via phone: 1-888-792-6051
Through the support portal: Support Portal
Your monitor is a job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.
From the Alerting dashboard, select Monitors > Create Monitor (Image 6).
Under Monitor Details, add in the following information (Image 7).
Monitor Name
Monitor Type (This example uses Per Bucket)
Whereas query-level monitors run your specified query and then check whether the query’s results trigger any alerts, bucket-level monitors let you select fields to create buckets and categorize your results into those buckets.
The alerting plugin runs each bucket’s unique results against a script you define later, so you have finer control over which results should trigger alerts. Each of those buckets can trigger an alert, but query-level monitors can only trigger one alert at a time.
Monitor Defining Method: the way you want to define your query and triggers. (This example uses Visual Editor)
Visual definition works well for monitors that you can define as “some value is above or below some threshold for some amount of time.”
Query definition gives you flexibility in terms of what you query for (using the OpenSearch query DSL) and how you evaluate the results of that query (Painless scripting).
Schedule: Choose a frequency and time zone for your monitor.
Under Data Source add in the following information (Image 8):
Index: Define the index you want to use as a source for this monitor
Time Field: Select the time field that will be used for the x-axis of your monitor
The Query section will appear differently depending on the Monitor Defining Method selected in step 2 (Image 9). This example is using the visual editor.
To define a monitor visually, select an aggregation (for example, count() or average()), a data filter if you want to monitor a subset of your source index, and a group-by field if you want to include an aggregation field in your query. At least one group-by field is required if you’re defining a bucket-level monitor. Visual definition works well for most monitors.
A trigger is a condition that, if met, will generate an alert.
To add a trigger, click the Add a Trigger button (Image 10).
Under New Trigger, define your trigger name and severity level (with 1 being the highest) (Image 11).
Under Trigger Conditions, you will specify the thresholds for the query metrics you set up previously (Image 12). In the below example, our trigger will be met if our COUNT threshold goes ABOVE 10000.
You can also use keyword filters to drill down into a more specific subset of data from your data source.
In the Action section you will define what happens if the trigger condition is met (Image 13). Enter the following information to set up your Action:
Action Name
Message Subject: In the case of an email alert, this will be the email subject line.
Message: In the case of an email alert, this will be the email body.
Perform Action: If you’re using a bucket-level monitor, decide whether the action is performed per execution or per alert.
Throttling: Enable action throttling if you wish. Use action throttling to limit the number of notifications you receive within a given span of time.
Click Send Test Message, if you want to test that the alert functions correctly.
Click Save.
This example pushes an alert based on errors. We will monitor our Connections stream for any instance of 'error', and push out an alert when our trigger threshold is hit.
First we create our Monitor by defining the following (Image 14):
Index: This example looks at Connections.
Time Field
Time Range: Define how far back you want to monitor
Data Filter: We want to monitor specifically whenever the Stream field of our index is stderr (standard error).
This is how our example monitor will appear; it shows when in the last 15 days our Connections app had errors in the log (Image 15).
Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example we want to be alerted when there is more than one stderr in our Connections stream (Image 16). Input the following:
Trigger Name
Severity Level
Trigger Condition: In this example, we use IS ABOVE and the threshold of 1.
The trigger threshold will be visible on your monitoring graph as a red line.
This example pushes an alert based on the kubectl.kubernetes.io/restartedAt annotation, which updates whenever your pod restarts. We will monitor this annotation across our entire product-mssql instance, and push out an alert when our trigger threshold is hit.
First we create our Monitor by defining the following (Image 17):
Index: This example looks at the entire product-mssql instance.
Time Field
Query: This example is using the total count of the kubectl.kubernetes.io/restartedAt annotation.
Time Range: Define how far back you want to monitor. This example goes back 30 days.
This is how our example monitor will appear; it shows when in the last 30 days our instance had restarts (Image 18).
Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example we want to be alerted when there are more than 100 restarts across our instance (Image 19). Input the following:
Trigger Name
Severity Level
Trigger Condition: In this example, we use IS ABOVE and the threshold of 100.
The trigger threshold will be visible on your monitoring graph as a red line.
This example pushes an alert based on status codes. We will monitor our entire instance for 400 status codes and push out an alert when our trigger threshold is hit.
First we create our Monitor by defining the following (Image 20):
Index: This example looks across the entire product-mssql-1 instance.
Time Field
Time Range: Define how far back you want to monitor. The time range for this example is the past day.
Data Filter: We want to monitor specifically whenever the Status Code is 400 (bad request).
This is how our example monitor will appear (note that there are no instances of a 400 status code in this graph) (Image 21).
Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to the defined Recipient(s). In this example we want to be alerted when there is at least one 400 status code across our instance (Image 22). Input the following:
Trigger Name
Severity Level
Trigger Condition: In this example, we use IS ABOVE and the threshold of 0.
The trigger threshold will be visible on your monitoring graph as a red line.
This page outlines the Cinchy Secrets Manager, added to the platform in v5.7.
The Cinchy platform provides a built-in solution for securely storing secrets known as the Cinchy Secrets Table. Built with adherence to Cinchy’s Universal Access Controls, this table functions as a key vault similar to services like Azure Key Vault or AWS Secrets Manager. It allows you to store sensitive data that's accessible only to specific user groups with authorized access.
Within the Connections UI, you can use variables stored in this table, which then resolve as secrets. This approach ensures careful handling of confidential information. Some common use cases include:
Including them in a connection string.
Using them in REST Headers, URLs, or the request body.
Configuring the Listener via the Listener Config table.
Cinchy has also introduced a new API endpoint for retrieving your stored secrets.
To create a secret in Cinchy:
Navigate to the [Cinchy].[Secrets] table on your platform (see Image 1).
Provide the following details for your secret:
Field | Description | Example |
---|---|---|
Secret Source | The location where the secret is stored. This field supports only 'Cinchy' as a source. | Cinchy |
Domain | The domain name of the location where the secret is stored. | QA |
Name | The identifier for your secret. | Password |
Secret Value | The actual secret content. | YourSecretValueHere |
Description | A brief explanation of the secret's purpose. | This secret contains the password for logging into the QA environment. |
Read Groups | A list of User Groups with read access to the secret. These groups can access the secret via the API, table, Connections UI, or CQL. | GroupA, GroupB |
Write Groups | A list of User Groups with write access to configure the secret. | GroupC, GroupD |
Cinchy has a new API endpoint designed for retrieving secrets. By utilizing the endpoint provided below, you can specify the <base-url>, <secret-name>, and <domain-name> to retrieve the desired secret.
This endpoint functions seamlessly with Cinchy’s Personal Access Token capability, along with Access Tokens obtained from your Identity Provider (IDP).
Blank Example:
Populated Example:
The example below uses ExampleSecret as the secretName and Sandbox as the domain:
The API response will be in the following format:
You can use secrets stored in the Cinchy Secrets table as variables for your data syncs, wherever you use a variable. For instance, you can incorporate them within a connection string, an access key ID, or within a REST Source or Destination in the Header.
To use a Secret within Connections:
In the Connections UI, navigate to Info > Variables.
Under the Variables section, select Secret.
Enter the name of your variable.
Under the Value dropdown, select the secret you want to assign from the Secrets table.
You can also use your Cinchy Secrets when configuring your Listener for real-time syncs.
To use a secret in real-time syncs:
When configuring your sync, navigate to the Info Tab > Variables.
Under the Variables section, choose Secret.
Input the name of your variable.
Under the Value dropdown, choose the secret you intend to assign from the Secrets table.
Go to the Source tab.
Within the Listener section, input the secret variables as values for the relevant property in your Topic or Connection Attribute fields.
For example:
You can also add a secret that's attached to a variable to the Topic or Connection Attributes in the Listener Config table.
Open the Listener Config table.
Select the row that corresponds to your data sync.
Select the Topic or Connection Attribute cell you want to change.
Replace the value for a property with the variable assigned to a secret.
For example, in the JSON code below, the Connection Attribute property connectionString is replaced with the @connectionString variable defined in the data sync.
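A minimal illustrative sketch, using the Data Polling connector's connection attributes as an example (the attribute names and the databaseType value are illustrative and vary by connector type):

```json
{
  "databaseType": "TSQL",
  "connectionString": "@connectionString"
}
```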
The following table provides an overview of which parameters you can use as secrets for each event connector type.
This page outlines the Cinchy GraphQL (beta) capabilities.
GraphQL was first introduced in Cinchy v5.1 as a read-only beta.
Write operations were introduced in v5.2
GraphQL functionalities in Cinchy are currently in beta.
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives you the power to ask for exactly what you need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
In a Cinchy context, the primary audience for the GraphQL beta is developers writing apps on top of Cinchy. It's a simple, efficient, new way to retrieve and manage data and build apps via API.
GraphQL was introduced to Cinchy as a supplement to the existing CQL and API functions. With GraphQL, not only are your app building capabilities streamlined and more powerful, but there is no switching between CQL and code; all code changes can be done within the GraphQL user interface. It also adheres to your defined access controls, including anonymous-level access, ensuring that your data remains secure.
You can access GraphQL at the following URL: <baseurl>/graphqlplayground.
Note that you need at least Cinchy v5.1 for read-only, and v5.2 for write access.
If there is an error in the responses window when logging in, log out and try again. This error sometimes occurs during a timeout.
GraphQL has a robust set of documentation, videos, and training resources that will help you realize its full capabilities. Here are some to get started:
When writing your query, you can bring up an auto-complete menu of fields by hitting Ctrl+Space when your cursor is inside a {. The fields brought up will be related to the specific level you are on.
You can use the # symbol for in-line commenting.
The following section contains common errors and their solutions.
Problem: If your Cinchy instance times out and prompts you to re-enter your credentials/SSO authentication, you might get the above error when trying to hit the GraphQL endpoint again.
Solution: Log out and log back in to Cinchy. Hit the GraphQL endpoint again and refresh to remove the error.
The following are some query examples to help you get a feel for using GraphQL in a Cinchy context.
This example returns data from a single source, the Videos table in the CinchyTV domain. It specifically requests all data in the Title column of the table.
As a reminder, queries must adhere to the data structure in Cinchy. You must first hit the domain (in this example: Cinchy TV) and then the table (in this example: Videos)
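For illustration only, a query following that domain > table > column nesting might look like the sketch below; the exact field names and casing are assumptions and depend on how your Cinchy GraphQL schema is exposed:

```graphql
{
  CinchyTV {
    Videos {
      Title
    }
  }
}
```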
This example returns data from two sources, the Applets table and the Users table, both in the same Cinchy domain.
It requests the following information:
The Name, Description, Application URL, Version, and Domain Name of all the applets on the environment
The Email Address of every user on the environment.
This example uses a filter to return a specific subset of data. It returns all results from the Videos table in the CinchyTV domain that contain the search term "Apps" in the Title field/column.
Other filters you can use include: Equals, Not Contains, Starts With, Not Starts With, Ends With, Not Ends With, Empty, Not Empty, Is True, Is False, Before, After
The query will give us the Title and Description of all matching data rows.
This page details how to enable data at rest encryption and a few other important features.
Cinchy 2.0 added the ability to encrypt data at rest. This means that you can encrypt data in the database such that users who view the data directly in the database will only see encrypted values in those columns. All users with authorized access to the data via Cinchy will see the data as plain text. To use this feature, your database administrator will need to create a database master key (see below for instructions).
The first step is to create a master key in the database. Do so by connecting directly to whichever database your Cinchy instance is running on.
Run the below query to create your master key:
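The query itself isn't reproduced here; assuming a Microsoft SQL Server database, the standard T-SQL statement is:

```sql
-- Create a database master key protected by a password
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<your-strong-password>';
```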
The password should adhere to your organization's password policy.
You can now encrypt data via the user interface (Image 1):
After you have created your master key you can create a backup file of that key in case any data corruption occurs in future.
You will need the password you used to create your master key to complete this operation.
Run the following command:
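Assuming SQL Server, a typical backup sequence looks like the following (the file path and passwords are placeholders):

```sql
-- Open the master key with the password used to create it, then back it up to a file
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<original-master-key-password>';
BACKUP MASTER KEY TO FILE = 'C:\Backups\cinchy_master_key.bak'
    ENCRYPTION BY PASSWORD = '<password-protecting-the-backup-file>';
```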
If you need to restore your master key due to data corruption, use the following steps.
You will need the password you used to create your master key to complete this operation.
Run the following command:
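Again assuming SQL Server (the file path and passwords are placeholders):

```sql
-- Restore the master key from the backup file and re-encrypt it in the database
RESTORE MASTER KEY FROM FILE = 'C:\Backups\cinchy_master_key.bak'
    DECRYPTION BY PASSWORD = '<password-protecting-the-backup-file>'
    ENCRYPTION BY PASSWORD = '<new-master-key-password>';
```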
This page describes Cinchy's Application Experiences.
Rather than traditional code-centric applications, which create data silos, you can build metadata-driven application experiences directly on the Cinchy platform. These look and feel like regular applications, but they persist their data on the data network autonomously rather than managing their own persistence.
These experiences automatically adapt as your data evolves (Image 1).
Once you deploy your UI, API, and logic, you will need to create an integrated client to leverage the data fabric for persistence and controls. If you would like a link to your experience from the data fabric, you will need to create an Experience in the Applets table; the Integrated Clients and Applets column references at the end of this page describe how to set up both. The Cinchy Platform also comes with a built-in experience called My Data Network, a tool to help you visualize your data through its connections. You can read more about it on the Network Map page below, or create your own visualization on your own data.
This page guides you through the overview, installation, and use of Cinchy's MDQE function.
Metadata Quality Exceptions (MDQE) can send out notifications based on a set of rules and the “exceptions” that break them. This powerful tool can be used to send notifications for exceptions such as:
Health checks returning a critical status
Upcoming Project Due Dates/Timelines
Client Risk Ratings reaching a high threshold
Tracking Ticket Urgency or Status markers
Unfulfilled and Pending Tasks/Deliverables
Etc.
MDQE monitors for specific changes in data, and then pushes out notifications when that change occurs.
You will need to have the Cinchy CLI available, as the MDQE orchestration scripts depend on it.
Request the installation package from the Cinchy Support team.
To install MDQE in your Cinchy environment, follow the below steps:
Download the MDQE Installation package.
Unzip the file.
Open an instance of PowerShell as an Administrator and navigate to the path where you extracted the MDQE package in step 2 > Metadata Quality Exceptions V.x > Metadata Quality Exceptions.
Run the following command to install all MDQE components in your environment, using the table below as a parameter guide.
Within the MDQE file package, navigate to the PowerShell - DQE Orchestration folder.
Extract the contents.
Navigate to the _config.json file and update the parameter values using the below as a guide. Make sure to save when finished.
In the environment where you installed MDQE, search for and open the [Cinchy MDQE].[Rules] table.
Using the “Create Rule” view and the following data, create your Rule:
Use the “Invalid Rules” view to correct Rules that have syntax errors.
All exceptions can be viewed in the [Cinchy MDQE].[Data Quality Exceptions] table
The Default view only displays exceptions assigned to the currently logged in user.
The All Data view displays all exceptions. This is only visible with admin privileges.
Ways to debug your rules:
If your PowerShell scripts aren't running: Run the script files in the PowerShell - DQE Orchestration folder using an IDE to make sure that all the configurations are correct.
Check to see if your bugged Rule is part of the “Invalid Rules” view.
If you have admin privileges, check to see if an equivalent SQL statement has been created in the [Rules CQL] table.
Check if there is a row for the Rule’s Signature value in the [Cinchy].[Formatting Rules] table.
You can use the Windows Task Scheduler to run MDQE jobs at regular intervals.
Navigate to your MDQE installation package > Windows Task Scheduler Jobs folder.
Import the files into the Windows Task Scheduler, updating the parameters accordingly.
This page outlines the Network Map (previously called My Data Network).
Cinchy comes out of the box with a system applet called Network Map (Image 1), which is a visualization of your data on the platform and how everything interconnects.
My Data Network is another way to view and navigate the data you have access to within Cinchy.
Each node represents a table you have access to within Cinchy, and each edge is one link between two tables. The size of the table is determined by the number of links referencing that table. The timeline on the bottom allows you to check out your data network at a point in the past and look at the evolution of your network.
It uses the user's entitlements for viewable tables and linked columns.
When you click on a node, you will see its description in the top right hand corner. You can click the Open button to navigate to the table (Image 2).
You will find the Network Map data experience on the Homepage (Image 3).
You can also set up a custom network visualizer as follows:
The nodes query defines the nodes in the network (Image 4 and 5).
The edges query defines the relationships between the nodes (Image 6).
Node groups are an optional query you can provide to group your nodes (Image 7 and 8).
If no start or end date is specified, the data network is just shown as is. If there's a start or end date, the other CQL queries must have a @date parameter and that will be used to render the data network at a point in time.
You can use @date between [Modified] and [Replaced] with a version history query to see data at a specific time. You can also simply use @date > [Created] if it's an additive system.
Timeline start date: This CQL should return a date value as startDate.
Timeline end date: This CQL should return a date value as endDate.
To use slicers, you must define the slicers in the Slicers column and add the additional attributes to the nodes query.
Attribute is the column name from the nodes query; displayName is what shows up in the visualizer (Image 9 and 10).
All the information above is entered into the [Cinchy].[Networks] table. To access the network, go to: <Cinchy URL>/Cinchy/apps/datanetworkvisualizer?network=<NAME>
Alternatively, you can go to My Data Network and then add ?network=<NAME> to the end of its URL.
It's highly recommended to add a new applet for each custom data network visualizer for ease of access.
Cinchy version 5.2 added the ability to include new parameters on the URL path for your network visualizer to focus your node view. You can now add Target Node, Depth Level, and Max Depth Level Parameters, if you choose.
Example: <base url>/apps/datanetworkvisualizer?targetNode=&maxDepth=&depthLevel=
Target Node: Using the Target Node parameter defines which of your nodes will be the central node from which all connections branch from.
Target Node uses the TableID number, which you can find in the URL of any table.
Example: <base url>/apps/datanetworkvisualizer?targetNode=8 will show TableID 8 as the central node
Max Depths: This parameter defines how many levels of network hierarchy you want to display.
Example: <base url>/apps/datanetworkvisualizer?maxDepth=2 will only show you two levels of connections.
Depth Level: Depth Level is a UI parameter that will highlight/focus on a certain depth of connections.
Example: <base url>/apps/datanetworkvisualizer?DepthLevel=1 will highlight all first level network connections, while the rest will appear muted.
The below example visualizer uses the following URL: <base url>/apps/datanetworkvisualizer?targetNode=8&maxDepth=2&depthLevel=1
It shows Table ID 8 ("Groups") as the central node.
It only displays the Max Depth of 2 connections from the central node.
It highlights the nodes that have a Depth Level of 1 from the central node.
The following is an example of a network map (Image 11).
For ease of testing, save the following as saved queries and then, in the Networks table, add exec [Domain].[Saved Query Name] as the CQL queries.
This page details the values and functions of the Cinchy System Properties table.
System Properties is a table within Cinchy for managing system properties, such as default time zones, system lockout durations, password expiration, password properties, password attempts allowed etc.
The Default of the Systems Properties table is set up as follows:
Property ID | Name | Value (Default) |
---|---|---|
2 | Default Time Zone | Eastern Standard Time |
12 | Password Attempts Allowed | 3 |
13 | System Lockout Duration (minutes) | 15 |
8 | Minimum Password Length | 8 |
9 | Password Requires Symbols | 1 |
10 | Password Requires Numbers | 1 |
11 | Password Expiration (Days) | 90 |
15 | Maintenance Enabled | 0 |
This table is case sensitive.
The System Properties requirements can be changed by an admin user by editing the 'Value' columns where applicable:
Users can set their own time zones in their user profile. The default time zone values are entered manually and must correspond with one of the values in the Default Time Zone value list located below. For changes to take effect, you must either clear the application cache or restart the instance.
If you enter an incorrect value in the Value column, then it will default to Eastern Standard Time (EST)
The minimum password length is 8 characters. The length will always default to 8 if an invalid value is provided, or if you attempt to set it to less than 8. This number can be changed (made higher than 8) in the Value column to require users to have longer passwords.
This property specifies whether symbols are required in a user's password. The 'Value' 0 means symbols aren't required and 1 means they're required.
This property specifies whether numbers are required in a user's password. The 'Value' 0 means numbers aren't required and 1 means they're required.
For a new password policy to take effect, you can set all users' Password Expiration Timestamps to yesterday. They will need to change their password the next time they attempt to log in.
This property specifies how many days until a password expires. This defaults to 90 but can be set to be shorter or longer by changing the number in the 'Value' column.
This property specifies how many bad password attempts a user can make before they're locked out of the system. The default is 3 but this can be set to be more or less attempts by changing the number in the 'Value' column.
This property specifies how long a user is locked out of the system once they've run out of bad password attempts. The default is 15 minutes but this can be set to be shorter or longer by changing the number in the 'Value' column.
Note that an administrator can also go into the 'Users' table to manually unlock a user by clearing the Locked Timestamp.
This property, defaulted to 0, shows this warning when a data owner is setting up Data Erasure or Data Compression on a table (Image 2). It's the administrator's responsibility to set up a scheduled maintenance job for performing compression and erasure, and then to change the property to 1 so that the warning no longer appears.
You can also use a table called Forbidden Passwords to define passwords that won't be accepted by the platform. This table comes with a pre-populated list of commonly blocked passwords. You can add more blocked passwords to this list as well, and users won't be able to set their password to any of them (you can use this to add your company's name, or to import other blocked password lists). The check against the list is case insensitive.
Like other password policies, this check occurs when your password changes, so to enforce this you will need to set all passwords to be expired.
Using GraphQL on Cinchy means you still need to adhere to the Cinchy data structure. Just like with CQL, you have to adhere to the [Domain].[Table] structure when creating your queries.
Maintenance CLI parameters:

Parameter | Description |
---|---|
-s | Server: the Cinchy Base URL (ex. cinchy.com/Cinchy/). |
-u | Username. This will need to be an account that's part of the Cinchy Administrators group. |
-p | Encrypted password (you can encrypt your password by using Cinchy.CLI.exe encrypt -t "plaintextpassword"). |
-t | Set a maintenance time window in minutes. Maintenance tasks will stop executing after the allotted time. Run this during an allotted maintenance window. |
-h | You must add this flag if you are accessing Cinchy over HTTPS. |
Event Connector Type | Parameter | Value as Parameter/Secret |
---|---|---|
Cinchy CDC | tableGuid | No |
Cinchy CDC | filter | Yes |
Cinchy CDC | messageKeyExpression | Yes |
Cinchy CDC | batchSize | No |
Salesforce Push Topic | Name | Yes |
Salesforce Push Topic | Id | Yes |
Salesforce Push Topic | Query | Yes |
Salesforce Push Topic | InstanceAuthUrl | Yes |
Salesforce Push Topic | GrantType | Yes |
Salesforce Push Topic | ClientId | Yes |
Salesforce Push Topic | ClientSecret | Yes |
Salesforce Push Topic | UserName | Yes |
Salesforce Push Topic | Password | Yes |
Salesforce Push Topic | ApiVersion | No |
MongoDB Event | database | Yes |
MongoDB Event | collection | Yes |
MongoDB Event | pipelineStage | Yes |
MongoDB Event | connectionString | Yes |
Data Polling | FromClause | Yes |
Data Polling | CursorColumn | Yes |
Data Polling | FilterCondition | Yes |
Data Polling | CursorColumnDataType | Yes |
Data Polling | Columns | Yes |
Data Polling | BatchSize | No |
Data Polling | Delay | No |
Data Polling | databaseType | Yes |
Data Polling | connectionString | Yes |
Kafka Topic | topicName | Yes |
Kafka Topic | bootstrapServers | Yes |
Salesforce Platform Event | Name | Yes |
Salesforce Platform Event | InstanceAuthUrl | Yes |
Salesforce Platform Event | GrantType | Yes |
Salesforce Platform Event | ClientId | Yes |
Salesforce Platform Event | ClientSecret | Yes |
Salesforce Platform Event | UserName | Yes |
Salesforce Platform Event | Password | Yes |
Salesforce Platform Event | ApiVersion | No |
Amazon SQS | deleteMessages | No |
Amazon SQS | awsRegion | No |
Amazon SQS | awsAccessKey | Yes |
Amazon SQS | awsSecret | Yes |
Amazon SQS | queueUrl | Yes |
MDQE installation command parameters:

Parameter | Description |
---|---|
-s | The base URL of your Cinchy instance, without the protocol. |
-sso | The base URL of your Cinchy SSO, without the protocol. |
-u | Username. We recommend creating a new, specific user for this install. Example: CinchyDQE |
-p | The password for the user designated above. |
-c | This refers to the path where you have your CLI installed. |
-d | This refers to a temporary path for storing error logs. |
-h | This flag must be added for environments set up with HTTPS. |
MDQE _config.json parameters:

Parameter | Description |
---|---|
CinchyServerProtocol | Defaulted to HTTPS |
CinchyServer | The base URL of your Cinchy instance. Example: Cinchy.net |
CinchyServerSSO | The base URL of your Cinchy SSO. Example: Cinchy.net/SSO |
APIClientSecret | You can find this value in the Integrated Clients table > MDQE row > GUID column in your Cinchy instance. |
CinchyCLIPath | The path to your CLI. Example: C:\Cinchy CLI\Cinchy CLI v4.12.0.564 |
CinchyCLITempPath | The path for storing error logs. Example: C:\Cinchy CLI\Cinchy CLI Error |
MailServer | The server that will be sending out your email notifications. Example: smtp.office365.com. |
MailPort | The port number for your chosen email server. Example: 25. |
MailFrom | The email account that notifications will come from. Example: MDQEnotifications@outlook.com |
Mail Subject | A subject line for outbound emails. Example: Data Quality Exception found. |
MailUser | The username for the email address above. This may be the same as the address itself. Example: MDQEnotifications@outlook.com |
MailPswd | The password for the email account above. |
MDQE Rules table columns:

Column | Description |
---|---|
Name | The name of your rule. This must be unique across the rules. Example: Project Timeline Start Date Exception |
Table | Table: The table on which the exception scenario needs to be evaluated Example: Projects |
Table Columns | The columns in the table that should be highlighted in the case of an exception Example: Start Date |
Signature | The CQL for your exception condition. Example: [Start Date] is null |
Description | A description of the rule. Example: This exception will trigger if the start date of a project is left blank. |
User Assignment | This is the owner of the exception. You will use this when you want to assign the rule to a Cinchy user. Example: John Smith |
Severity | Choose from the drop down list. Example: Low Note : In case you would like to define your own severity, use [Cinchy MDQE].[Severity] table. You would need admin privileges to view this table |
Send Notifications | Choose from the drop down list. Use “Never” if you don't want email notifications sent out. Example: Daily Note : In case you would like to define your own Notification frequency, use [Cinchy MDQE].[Notification schedule] table. You would need admin privileges to view this table |
Nodes query attributes:

Attribute | Description |
---|---|
id | Id for the node. This will be used by the edges to define the relationships. |
title | This is the text that's displayed when hovering on a node. |
label | The label shown below the node. |
value | The visual size of the node relative to other nodes. |
mass | The gravitational pull of a node. Unless you really want to customize the visualizer, it's recommended to keep this the same value as the value. |
group | Optionally you can associate a node with a group. |
color | Optional hex code for the color of a node. The node will take the color of the group if a color isn't specified for the node. |
description | The description shows up in the top right hand corner when you click a node. |
nodeURL | Page to display when you click the open button next to the description. |
Edges query attributes:

Attribute | Description |
---|---|
id | Id for the edge. |
label | Label that shows up on the edge. |
from | Originating node id. |
to | Target node id. Can be the same as the from node, it will show a loop back into the same node. |
showArrowTo | Set this to True if you want to show the direction of the relationship. |
showArrowFrom | Generally should only be used for bi-directional relationships along with the arrow to. |
Node groups query attributes:

Attribute | Description |
---|---|
sub network | Name for the group |
color | Hex value for the color of the group |
Default Time Zone values:

Time Zone | Time Difference (GMT) |
---|---|
Dateline Standard Time | -12:00:00 |
UTC-11 | -11:00:00 |
Aleutian Standard Time | -10:00:00 |
Hawaiian Standard Time | -10:00:00 |
Marquesas Standard Time | -09:30:00 |
Alaskan Standard Time | -09:00:00 |
UTC-09 | -09:00:00 |
Pacific Standard Time (Mexico) | -08:00:00 |
UTC-08 | -08:00:00 |
Pacific Standard Time | -08:00:00 |
US Mountain Standard Time | -07:00:00 |
Mountain Standard Time (Mexico) | -07:00:00 |
Mountain Standard Time | -07:00:00 |
Yukon Standard Time | -07:00:00 |
Central America Standard Time | -06:00:00 |
Central Standard Time | -06:00:00 |
Easter Island Standard Time | -06:00:00 |
Central Standard Time (Mexico) | -06:00:00 |
Canada Central Standard Time | -06:00:00 |
SA Pacific Standard Time | -05:00:00 |
Eastern Standard Time (Mexico) | -05:00:00 |
Eastern Standard Time | -05:00:00 |
Haiti Standard Time | -05:00:00 |
Cuba Standard Time | -05:00:00 |
US Eastern Standard Time | -05:00:00 |
Turks and Caicos Standard Time | -05:00:00 |
Paraguay Standard Time | -04:00:00 |
Atlantic Standard Time | -04:00:00 |
Venezuela Standard Time | -04:00:00 |
Central Brazilian Standard Time | -04:00:00 |
SA Western Standard Time | -04:00:00 |
Pacific SA Standard Time | -04:00:00 |
Newfoundland Standard Time | -03:30:00 |
Tocantins Standard Time | -03:00:00 |
E. South America Standard Time | -03:00:00 |
SA Eastern Standard Time | -03:00:00 |
Argentina Standard Time | -03:00:00 |
Montevideo Standard Time | -03:00:00 |
Magallanes Standard Time | -03:00:00 |
Saint Pierre Standard Time | -03:00:00 |
Bahia Standard Time | -03:00:00 |
UTC-02 | -02:00:00 |
Greenland Standard Time | -02:00:00 |
Mid-Atlantic Standard Time | -02:00:00 |
Azores Standard Time | -01:00:00 |
Cabo Verde Standard Time | -01:00:00 |
Coordinated Universal Time | 00:00:00 |
GMT Standard Time | 00:00:00 |
Greenwich Standard Time | 00:00:00 |
Sao Tome Standard Time | 00:00:00 |
Morocco Standard Time | 00:00:00 |
W. Europe Standard Time | 01:00:00 |
Central Europe Standard Time | 01:00:00 |
Romance Standard Time | 01:00:00 |
Central European Standard Time | 01:00:00 |
W. Central Africa Standard Time | 01:00:00 |
GTB Standard Time | 02:00:00 |
Middle East Standard Time | 02:00:00 |
Egypt Standard Time | 02:00:00 |
E. Europe Standard Time | 02:00:00 |
Syria Standard Time | 02:00:00 |
West Bank Gaza Standard Time | 02:00:00 |
South Africa Standard Time | 02:00:00 |
FLE Standard Time | 02:00:00 |
Jerusalem Standard Time | 02:00:00 |
South Sudan Standard Time | 02:00:00 |
Russia TZ 1 Standard Time | 02:00:00 |
Sudan Standard Time | 02:00:00 |
Libya Standard Time | 02:00:00 |
Namibia Standard Time | 02:00:00 |
Jordan Standard Time | 03:00:00 |
Arabic Standard Time | 03:00:00 |
Turkey Standard Time | 03:00:00 |
Arab Standard Time | 03:00:00 |
Belarus Standard Time | 03:00:00 |
Russia TZ 2 Standard Time | 03:00:00 |
E. Africa Standard Time | 03:00:00 |
Volgograd Standard Time | 03:00:00 |
Iran Standard Time | 03:30:00 |
Arabian Standard Time | 04:00:00 |
Astrakhan Standard Time | 04:00:00 |
Azerbaijan Standard Time | 04:00:00 |
Russia TZ 3 Standard Time | 04:00:00 |
Mauritius Standard Time | 04:00:00 |
Saratov Standard Time | 04:00:00 |
Georgian Standard Time | 04:00:00 |
Caucasus Standard Time | 04:00:00 |
Afghanistan Standard Time | 04:30:00 |
West Asia Standard Time | 05:00:00 |
Russia TZ 4 Standard Time | 05:00:00 |
Pakistan Standard Time | 05:00:00 |
Qyzylorda Standard Time | 05:00:00 |
India Standard Time | 05:30:00 |
Sri Lanka Standard Time | 05:30:00 |
Nepal Standard Time | 05:45:00 |
Central Asia Standard Time | 06:00:00 |
Bangladesh Standard Time | 06:00:00 |
Omsk Standard Time | 06:00:00 |
Myanmar Standard Time | 06:30:00 |
SE Asia Standard Time | 07:00:00 |
Altai Standard Time | 07:00:00 |
W. Mongolia Standard Time | 07:00:00 |
Russia TZ 6 Standard Time | 07:00:00 |
Novosibirsk Standard Time | 07:00:00 |
Tomsk Standard Time | 07:00:00 |
China Standard Time | 08:00:00 |
Russia TZ 7 Standard Time | 08:00:00 |
Malay Peninsula Standard Time | 08:00:00 |
W. Australia Standard Time | 08:00:00 |
Taipei Standard Time | 08:00:00 |
Ulaanbaatar Standard Time | 08:00:00 |
Aus Central W. Standard Time | 08:45:00 |
Transbaikal Standard Time | 09:00:00 |
Tokyo Standard Time | 09:00:00 |
North Korea Standard Time | 09:00:00 |
Korea Standard Time | 09:00:00 |
Russia TZ 8 Standard Time | 09:00:00 |
Cen. Australia Standard Time | 09:30:00 |
AUS Central Standard Time | 09:30:00 |
E. Australia Standard Time | 10:00:00 |
AUS Eastern Standard Time | 10:00:00 |
West Pacific Standard Time | 10:00:00 |
Tasmania Standard Time | 10:00:00 |
Russia TZ 9 Standard Time | 10:00:00 |
Lord Howe Standard Time | 10:30:00 |
Bougainville Standard Time | 11:00:00 |
Russia TZ 10 Standard Time | 11:00:00 |
Magadan Standard Time | 11:00:00 |
Norfolk Standard Time | 11:00:00 |
Sakhalin Standard Time | 11:00:00 |
Central Pacific Standard Time | 11:00:00 |
Russia TZ 11 Standard Time | 12:00:00 |
New Zealand Standard Time | 12:00:00 |
UTC+12 | 12:00:00 |
Fiji Standard Time | 12:00:00 |
Kamchatka Standard Time | 12:00:00 |
Chatham Islands Standard Time | 12:45:00 |
UTC+13 | 13:00:00 |
Tonga Standard Time | 13:00:00 |
Samoa Standard Time | 13:00:00 |
Line Islands Standard Time | 14:00:00 |
Integrated Clients table columns:

Column | Description |
---|---|
Client Id | A unique identifier for each client. |
Client Name | A friendly name for the client to help users maintaining this record. |
Grant Type | The OAuth 2.0 flow that will be used during authentication. "Implicit" should be selected for API calls. |
Permitted Login Redirect URLs | Add all URLs of an Applet, separated by semicolons, which can start login. |
Permitted Logout Redirect URLs | Add all URLs of an Applet, separated by semicolons, which can be used as a Post Logout URL. |
Permitted Scopes | The list of permitted OAuth scopes; please check all available options. |
Access Token Lifetime (seconds) | The time after which the token expires. If left blank, the default is 3600 seconds. |
Show Cinchy Login Screen | Deselect to have SSO as the default authentication and skip the Cinchy login screen. |
Enabled | This checkbox is used to enable or disable a client. |
GUID | This is a calculated field that will auto-generate the client secret. |

Applets table columns:

Column | Description |
---|---|
Domain | Select a domain for the applet to belong to. |
Name | This is the name that will display for the applet in My Network. |
Full Name | This is a calculated field: Domain.Name. |
Icon | Select a system icon for the applet; this will show in My Network. |
Icon Color | Select a system color for the icon. |
Description | Similar to a table or query description. This field is viewable and searchable in My Network. |
Target Window | The default behaviour when opening the applet: Existing Window (Redirect) redirects in the current window; Existing Window (Embedded) opens the applet embedded in Cinchy with the Cinchy header visible; New Window opens the applet in a new window. |
Application URL | This is the URL where the applet resides. |
Users | Users who can see this applet in the marketplace. |
Groups | Groups who can see this applet in the marketplace. |
Integrated Client | The integrated client for the applet. |
GUID | This is a calculated field that's automatically generated for the applet. |