Cinchy v5.7

IIS upgrades

Overview

The pages under this section can be used to help guide you through an upgrade of your Cinchy instance on IIS.

Kubernetes upgrades

Overview

The pages under this section can be used to help guide you through an upgrade of your Cinchy instance on Kubernetes.

Data Collaboration Overview

This page provides a brief overview of data collaboration

You are currently browsing the Cinchy v5.7 platform documentation. For documentation about other versions of the platform, please navigate to the relevant space(s) via the drop-down menu in the upper left of the page.

What’s the purpose of Data Collaboration?

Data collaboration facilitates the sharing, management, and use of data across various departments, teams, and systems within an organization. This approach eliminates data silos and reduces the inefficiencies associated with traditional data integration methods like ETL processes or APIs. Data collaboration enables multiple stakeholders to co-produce and maintain data in a controlled, federated manner. Doing so enhances data quality, speeds decision-making, and allows more agile and scalable business operations. The goal is to maximize the business value derived from data while minimizing the costs and complexities traditionally associated with data management.

The root causes of IT delay and frustration.

When a new IT project is green-lit, you often pay a hefty price known as the integration tax: continuously building new integrations between applications just to reuse data that's already available in your systems.

Over time, this never-ending cycle of copying data between fragmented apps gets more complex, resulting in delayed launches, budget overruns, and “shadow IT” projects.

Facing the rising costs and complexities of traditional data integration? You're not alone. At Cinchy, we understand that the 'Integration Tax' is a growing burden for modern organizations. As your enterprise expands its technology stack, the maze of required integrations becomes exponentially more intricate and expensive to manage. It's not just about initial setup; maintenance, updates, and reconfigurations add up, consuming a large portion of your IT budget and holding back your agility.

“Every year, as the level of new technologies and digital transformation grows, the amount spent on integration increases. So much so that companies collectively spend $700B globally.”

Source: Large Integration Company

That's where Cinchy's Data Collaboration Platform comes in. By introducing a platform designed for co-production, we radically reduce the need for complex integrations. Our platform enables various departments, teams, and systems to collaborate directly on data, effectively eliminating the need for redundant, copy-based integrations. This federated approach to data management means that you maintain control while empowering co-producers to contribute to a shared, dynamic data landscape. The result? A significant reduction in the 'Integration Tax,' liberating your IT resources to focus on delivering real business value. Say goodbye to old-school integration's crippling costs and complexities, and welcome a new era of scalable, efficient data collaboration.

How can this be fixed?

Using Data "Co-Production" to Accelerate IT Delivery

With data collaboration, you shift your approach from copy-based integration for sharing back and forth between collaborators to using an access-based real-time approach.

For every co-production use case built using Cinchy, you're avoiding what otherwise would have been a bespoke, integration-heavy solution. Individual solutions also "pay it forward" by liberating relevant data to participate in future collaboration use cases without any integration effort.

The Cinchy data collaboration platform does for data what the power grid does for electricity. In the same way that buildings no longer need to generate their own power thanks to the power grid, with a data collaboration platform, new solutions no longer need to manage, integrate, and protect their own data (Image 4).

Not just connected, but autonomous.

​Simply putting pipes between data silos and centralizing a few housekeeping tasks isn't data collaboration. What that's doing is leading you down a path of managing endless copies. True data collaboration connects and upgrades your data as part of an interconnected autonomous data network.

Autonomous data exists independently of any application. It's self-controlled, self-protected, and self-describing. This creates several benefits compared to traditional app-dependent data, including simplifying cross-application usage and reporting. When you use autonomous data in an interconnected network, wherein individual contributors maintain their roles and priorities as they apply their unique skills and leadership autonomy in a problem-solving process, you get Collaborative Autonomy.

Collaborative Autonomy is the principle underpinning Collaborative Intelligence, the basis of data collaboration and Cinchy.

Individuals aren't homogenized, as in consensus-driven processes, nor equalized through quantitative data processing, as in collective intelligence. Consensus isn't required. Problem resolution is achieved through systematic convergence toward coherent results. Collaborative intelligence relies on Collaborative Autonomy to overcome “the consensus barrier” and succeed where other methods have failed.

Universal access controls and automated data governance

One of the most significant advantages of data collaboration is the ease with which data owners can set universal data access controls at the cellular level and automate data governance.

In effect, it removes the need to maintain access controls within individual apps and centralizes these functions incredibly efficiently.

Compare this with designing and maintaining controls within thousands of apps and systems.

Game Changer: Network Effects for IT Delivery

Data collaboration is a game changer for IT delivery: it produces network effects, where each new solution actually speeds up delivery times and reduces costs.

Network-based designs scale beautifully and become more efficient as they expand. Consider the human brain: its neuroplasticity helps it learn more as it grows, and the more interconnected it gets, the better. Its neural pathways reorganize themselves so that information can be put to use through fewer connections, which raises intelligence.

It's the same with data collaboration. The more you connect your data, the simpler your world becomes. It's also your time machine: you can run applications against your data as it existed at different points in time, all through network-based design.

There is no going back.

Let's build the connected future, together.

v5.3 (Kubernetes)

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.

If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.3, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.3 upgrade.

Configure to the newest version

  1. Navigate to your cinchy.devops.automations repository.

    1. Navigate to your deployment.json (you may have renamed this during your original Kubernetes deployment).

    2. In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version that you wish to deploy (Ex: v5.2.0 > v5.3.0).

  // The component image tags are specified below to define which versions to deploy
  "connections_image_tag": "v5.3.0",
  "event_listener_image_tag": "v5.3.0",
  "idp_image_tag": "v5.3.0",
  "maintenance_cli_image_tag": "v5.3.0",
  "meta_forms_image_tag": "v5.3.0",
  "web_image_tag": "v5.3.0",
  "worker_image_tag": "v5.3.0"

  2. Rerun the deployment script by using the following command in the root directory of your cinchy.devops.automations repository:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"

  3. Commit and push your changes.

Apply the configurations

If your environment isn't set up to automatically apply configuration changes, complete the following to apply the newest version:

  1. Navigate to the ArgoCD portal.

  2. Refresh your component(s). If that doesn't work, re-sync.
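If you prefer working from a terminal, the same refresh and sync can be done with the ArgoCD CLI. This is a minimal sketch, not part of the documented procedure; the server address and application name below are placeholders for your own values.

# Log in to your ArgoCD server (placeholder hostname)
argocd login argocd.<your-domain>

# Refresh the application state, then sync it to pick up the new image tags
argocd app get <your-cinchy-app> --refresh
argocd app sync <your-cinchy-app>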

v5.1 (IIS)

This page details the upgrade process for Cinchy v5.1 on IIS.

Upgrading on IIS

The following process can be run when upgrading from v5.0 to v5.1 on IIS.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. Execute the following command:

iisreset -stop

  3. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  4. Execute the following command:

iisreset -start

  5. Open your Cinchy URL in your browser.

  6. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

Upgrade Cinchy

The pages under this section deal with upgrading your Cinchy platform.

Additional notes

  • If you are currently running Cinchy v4+ or v5+ and wish to upgrade your components, please review the documentation here: Upgrading Cinchy Versions

  • If you are currently running Cinchy v4+ and wish to upgrade to v5+, please review the documentation here: Upgrading from v4 to v5

Monitor and Log on Kubernetes

This page details monitoring and logging for the Cinchy v5 platform when deployed on Kubernetes.

Introduction

Cinchy v5 on Kubernetes offers and recommends a robust set of open-source tools and applications for monitoring, logging, and visualizing the data on your Cinchy instances.

Click on the respective tool below to learn more about it:

  • Grafana

  • OpenSearch

  • ArgoCD


v5.3 (IIS)

This page details the upgrade process for Cinchy v5.3 on IIS.

Upgrading on IIS

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.3, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.3 upgrade.

The following process can be run when upgrading any v5.x instance to Cinchy v5.3 on IIS.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade Process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. Execute the following command:

iisreset -stop

  3. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  4. Execute the following command:

iisreset -start

  5. Open your Cinchy URL in your browser.

  6. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

v5.1 (Kubernetes)

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.

Configure to the newest version

  1. Navigate to your cinchy.devops.automations repository.

    1. Navigate to your deployment.json (you may have renamed this during your original Kubernetes deployment).

    2. In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version that you wish to deploy (Ex: v5.0.0 > v5.1.0).

  // The component image tags are specified below to define which versions to deploy
  "connections_image_tag": "v5.1.0",
  "event_listener_image_tag": "v5.1.0",
  "idp_image_tag": "v5.1.0",
  "maintenance_cli_image_tag": "v5.1.0",
  "meta_forms_image_tag": "v5.1.0",
  "web_image_tag": "v5.1.0",
  "worker_image_tag": "v5.1.0"

2. Rerun the deployment script by using the following command in the root directory of your devops.automations repository:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"

3. Commit and push your changes.

Apply your Configurations

If your environment isn't set up to automatically apply configuration changes, complete the following to apply the newest version:

  1. Navigate to the ArgoCD portal.

  2. Refresh your component(s). If that doesn't work, re-sync.

Versioning best practices

This page outlines some best practices for versioning.

Overview

This page details some best practices for version history in Cinchy. These recommendations are important because they can help:

  • Minimize your database bloat/size.

  • Make it easier to parse through version history when there aren't hundreds of redundant records.

Best practices

  • When doing any type of update statement, it's best to include an opposite “where” clause to avoid creating unnecessary history for unchanged values.

    • For example, if your update sets Name to 'Marc', you would include a WHERE clause specifying that Name doesn't already equal 'Marc'. Doing so prevents a redundant update in your version history.

UPDATE
	[Contacts].[Employees] 
SET [Name] = 'Marc'
WHERE [Name] != 'Marc'
  • When writing an update statement, run it more than once. If it results in an update each time, return to your query and troubleshoot.

    • This is relevant anywhere the statement can be run repeatedly, such as in APIs or Post Sync Scripts.

  • In data syncs, ensure that your data types are matched properly.

    • For example, if the source is text and the target is a different data type (such as a date), even if the values are the same, the sync will perform an update and create unnecessary version history.

  • When performing a data sync, run it more than once. If it creates an update each time, return to your configuration and troubleshoot.


User preferences

This page outlines your user preferences.

Your Cinchy Profile has three (3) components that can be changed from user preferences:

  • My Photo

  • My Profile

  • My Password

My Photo

To add a photo to your profile, complete the following (Image 1):

  1. From My Network, click the avatar icon

  2. Select My Profile

  3. From the settings page, click on the My Photo image

  4. Locate and upload your photo

Image 1: Uploading your photo

My Profile

From the settings page, in the My Profile section, you can update the language, region, and time zone (Image 2).

Image 2: My Profile

My Password

If you don't see the password option in My Profile, you must be logging in to Cinchy using Single Sign-On, and you won't need to update your password within Cinchy.

To change your password, complete the following:

  1. In the Old Password field, enter your existing password

  2. In the New Password field, enter a new password

  3. In the Confirm New Password field, re-enter your new password

Your Cinchy password must conform to the requirements set by your administrator.

Disable your Kubernetes applications

Disable your applications

There might be times when you want to temporarily disable your Kubernetes pods to perform maintenance or upgrades. You can do so through the following steps:

  1. Access your ArgoCD.

  2. Navigate to the application directory for the namespace you wish to disable, in this case development-cinchy (Image 1). You should see your cluster component applications.

Image 1: Applications
  3. Select the main application (development-cinchy) (Image 2).

Image 3: Navigate to your main app
  4. Navigate to Summary > Sync Policy > Automated, then select Disable Auto-Sync > OK (Image 3).

Image 4: Select "Disable Auto-Sync"
  5. For each of the cluster applications that you wish to disable, select the "..." > Delete (Image 5).

Image 5: Delete your applications
  6. Your apps should all appear as "out of sync" (Image 6).

Image 6: Your apps should all appear out of sync

Re-enable your applications

  1. To re-enable your applications, return to the application directory for your disabled namespace (Image 7).

Image 7: Navigate to your app directory
  2. Select the main application (i.e. development-cinchy) (Image 8).

Image 8: Navigate to your main app
  3. Navigate to Summary > Sync Policy, then select Enable Auto-Sync > OK (Image 9).

Image 9: Enable your Auto Sync

Data compression

This page provides an overview of Data Compression.

If you need to manage space within your database, you can set a data compression policy. Currently we allow you to permanently delete versions in the collaboration log. Be aware that the current version of compression is a LOSSY process (data will be permanently deleted). Take that into consideration when configuring a policy.

We recommend keeping more versions rather than fewer. As an example, consider a policy that keeps any version newer than 180 days and keeps the most recent 50 versions. As long as a version satisfies one of the two keep conditions, it's kept. Using that example:

  • A version that’s from 205 days ago but is amongst the most recent 50 versions (For example: version 22 of 60) will be kept, because it satisfies at least one condition of being in the most recent 50 versions.

  • A version that’s from 163 days ago but is version 65 of 80 will be kept, because it satisfies at least one condition of being less than 180 days old.

  • A version that’s from 185 days ago and isn't among the most recent 50 versions will be deleted, because it doesn’t satisfy either of the conditions.

The actual compression of data happens during the maintenance window. Please check with your system administrators to confirm when maintenance is scheduled.

Change approval enabled tables

The number of versions is based on the major version, not the minor version. This means that for a record on version 35.63 with a policy of keeping the most recent 10 versions, all versions 26.0 and later will be kept, rather than all versions 35.44 and later.

Data erasure

This page provides an overview on data erasure.

Overview

Data erasure allows you to permanently delete data in Cinchy. As the data owner, you can set an erasure policy on your table if you need to delete data for compliance reasons (Image 1).

Image 1: Data Erasure

The actual erasing of data happens during the maintenance window. Please check with your system administrators to confirm when maintenance is scheduled.

Once data is erased, any links pointing to erased data will look like this (Image 2):

Image 2: Data Erasure

Change approval enabled tables

The time is counted based on the record's modified time stamp, not the deleted time stamp. This means for change approval records it's the time when the pending delete request was approved and moved to the Recycle Bin, not when the delete request was made.

Personal access tokens

Overview

You now have the option to use personal access tokens (PATs) in Cinchy, which are alternatives to using passwords for authentication. Like Cinchy Bearer Tokens, you can use a Cinchy PAT to call the Cinchy API as your current user, meaning your associated access controls will be honoured as well. Cinchy PATs, however, have an expiration date of up to 1 year. A single user can have up to 5 PATs active at one time. See Authentication for details on using a PAT in lieu of a Bearer token.

Create a PAT

  1. From the Cinchy homepage, navigate to your User Settings > Tokens. Any tokens that you make will appear here. You will also be able to see any expired tokens.

  2. Click Generate New Token.

Note: You can have up to 5 active (non-expired) tokens at a time. Once you reach that threshold, the “Generate New Token” button won't work.

  3. Input the following information about your PAT and click Generate:

     • Token Name

     • Description

     • Expiration

  4. Once generated, make sure to copy down the PAT somewhere secure. You won't be able to view the PAT again once you navigate away from this screen.

Delete a PAT

  1. From the Cinchy homepage, navigate to your User Settings > Tokens. Any tokens that you make will appear here.

  2. Click the “Delete” button next to the applicable PAT.

Use a PAT in an API

Cinchy PATs can be used in much the same way that Bearer tokens are used in API authentication. For example, in the Authorization header with the value: Bearer <token>.

You may also wish to review the information on using PATs in Excel or PowerBI.
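As a minimal illustration of the header format, here is how a PAT might be passed with curl. The URL and route are placeholders rather than a documented endpoint; substitute your own Cinchy URL and the API route you are calling.

# Call a Cinchy API endpoint using a personal access token (placeholder URL and route)
curl -H "Authorization: Bearer <your-personal-access-token>" \
  "https://<your-cinchy-url>/API/<Domain>/<SavedQueryName>"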

Version management

This page details how Cinchy approaches Version Management within the platform.

Data version management

Cinchy natively and automatically manages data versioning in the platform through the ‘always-on’ version tracking, collaboration logging, and recycle bin features (data restore).

Cinchy maintains a version history of all changes to every data element stored in Cinchy. You can query the version history in Cinchy to speed up analysis, and it can also be viewed through the Collaboration Log, which tracks changes made by users, systems, or external applications (Image 1). When required, you can easily revert data to previous states using the Recycle Bin or the Revert button.

Schema version management

This section refers to data schemas/models, not data values themselves.

Your schema/data model version can also be managed when you are using multiple environments. For example, if you have a DEV environment and make a change to a table design (ex: changing a column name), you can export and deploy your data model to a PROD environment and Cinchy will intelligently consolidate and merge the schema changes to adhere to the latest version.

To export a table (like your data model), navigate to the Design Table > Export button (Image 2). You can then import your data model into any other environment using the model loader (Image 3).

This functionality is achieved through the use and synchronization of GUIDs. Each data element in Cinchy (table, column, etc.) will have a matching GUID, which stays consistent even across multiple environments. That means that changes made in your source environment will automatically and accurately be applied once promoted to your higher environment.

A GUID (globally unique identifier) is a 128-bit text string that represents an identification (ID).

You can find the GUID for your object by navigating to the applicable System Table. Ex: Column GUIDs can be found in the Columns table (Image 4).

Create data packages

While you are able to manually export/import data models across environments, you may want to package up multiple objects (tables, queries, reference data, etc.) and push that all together between environments. This method still adheres to schema version control and management.

This can be accomplished using the Cinchy DXD Utility, which you can learn more about by reviewing the CinchyDXD documentation.

Update the Kubernetes Image Registry

Overview

The Kubernetes project runs a community-owned image registry called registry.k8s.io to host its container images. On April 3rd, 2023, the registry k8s.gcr.io was deprecated and no further images for Kubernetes and related subprojects are being pushed to this location.

Instead, there is a new registry: registry.k8s.io.

New Cinchy Deployments: this change will be automatically reflected in your installation.

For Current Cinchy Deployments: please follow the instructions outlined in this upgrade guide to ensure your components are pointed to the correct image repo.

You can review the full details on this change here: https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/

Update Instructions

  1. In a shell/terminal, run the below command to get a list of all the pods that are pointing to the old registry, k8s.gcr.io. These will need to be updated to point to the new image registry.

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep "k8s.gcr.io"

  2. Once you find which pods are using the old image registry, you need to update their deployment/daemonset/statefulset with the new registry.k8s.io registry. Find the right resource type for your pods with the below command:

kubectl get deployment,daemonset,statefulset --all-namespaces

  3. Once you have your pod name and resource type, run the below command to open the manifest file in an editor. The below command uses the cluster autoscaler as an example; populate the command with your correct resource type and name.

kubectl edit deployment cluster-autoscaler -n kube-system

  4. In the editor, find any instances of k8s.gcr.io and replace them with registry.k8s.io.

  5. Save and close the file.

  6. Repeat steps 3-5 for the rest of your pods until they're all pointing to the correct registry.
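As an optional check (not part of the steps above), you can re-run the image listing from step 1 once the updated pods have rolled out; it should return nothing when no container still references the old registry.

# Verify that no running containers still reference the deprecated registry
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort -u |\
grep "k8s.gcr.io" || echo "No images from k8s.gcr.io found"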

System tables

System tables are included out-of-the-box with your Cinchy platform, and can be used to track and manage a variety of data.

You can easily query for a list of your system tables using the below:

SELECT [Name]
FROM [Cinchy].[Tables]
WHERE [Deleted] IS NULL
AND [Domain] = 'Cinchy'

The system tables included are:

  • Applets: This system table manages a list of all your integrated applications

  • Data Experience Definitions: This is a system table for managing data experience definitions

  • Data Experience References: This is a system table for managing reference data for data experiences

  • Data Experience Release Artifacts: This is a system table for maintaining data experience release artifacts

  • Data Experience Releases: This is a system table for maintaining data experience releases

  • Data Security Classifications: This is a system table for maintaining data security classifications

  • Data Sync Configurations: This system table manages a list of all your data sync configurations.

  • Domains: This system table manages a list of all the domains in your instance.

  • Execution Log: This system table tracks the execution logs of data syncs

  • Formatting Rules: This system table manages your formatting rules

  • Groups: System table for managing all groups

  • Literal Groups: This system table maintains a list of groups

  • Literal Translations: This system table maintains a list of literal translations

  • Literals: This system table maintains a list of literals

  • Regions: This system table maintains a list of regions

  • Saved Queries: This system table manages a list of all user saved queries that can be exposed via the REST API

  • System Colours: System table for maintaining colours

  • Table Access Control: This system table maintains a list of all table access controls in your instance

  • Table Columns: This system table manages a list of all the system column definitions

  • Tables: This system table manages a list of all the tables in your instance.

  • User Defined Functions: This system table manages a list of all your user defined functions

  • Users: System table for managing all user information including enabling/disabling the ability to create tables, queries, etc.

  • Views: This system table manages a list of all the views in your instance.

v5.4 (IIS)

This page details the upgrade process for Cinchy v5.4 on IIS.

Upgrading on IIS

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.4, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.4 upgrade.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString in your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

Ex:

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

The following process can be run when upgrading any v5.x instance to v5.4 on IIS.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade Process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString in both your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

    Ex:

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

  3. Execute the following command:

iisreset -stop

  4. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  5. Execute the following command:

iisreset -start

  6. Open your Cinchy URL in your browser.

  7. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

Restore tables, columns, and rows

This page outlines how to restore tables, columns, and rows in Cinchy

Overview

This page documents the method of restoring deleted tables, columns, and rows in your Cinchy table.

Remember that you can always revert a changed or deleted record to a previous state using the Collaboration Log.

Restore a deleted row

To restore a row that has been deleted:

  1. In the table where you want to restore the row, navigate to the Recycle Bin.

  2. Locate the deleted row.

  3. Right click anywhere in the row > Restore Row (Image 1).

  4. You should see a "Restore Successful" pop-up.

Restore a deleted or changed column

To restore a column that has been deleted or changed:

Note: You need insert access on the Tables table to complete these steps.

This method will revert the entire table, including any changes made after the column was deleted.

  1. Navigate to the [Cinchy].[Tables] table.

  2. Find the row with the table that has the column you want to restore > right click anywhere in the row > Collaboration Log > Revert to a previous version (Image 2).

  3. You should see a "Revert Successful" pop-up.

Restore a deleted table

To restore a table that has been deleted:

Note: You need insert access on the Tables table to complete these steps.

  1. Navigate to the [Cinchy].[Tables] table.

  2. Navigate to the Recycle Bin.

  3. Find the row with the table that you want to restore > right click > "Restore Row" (Image 3)

  4. You should see a "Restore Successful" pop-up.

Make sure the Deleted date is the same, and you don't retrieve previously dropped columns.

Administrator Guide

This page details the role of an Admin of the Cinchy Platform

Administrators

The “Admins” of the Cinchy platform are users who belong to the "Cinchy Administrators" User Group.

Admin types

Cinchy Admins fall into two categories:

  • Builders

  • End-users

Setting a user as an Admin doesn't supersede/change their role as an end-user vs builder.

Builder Admins

A Builder Admin can:

  • Modify all table data (including system tables), all schema, and all data controls.

    • This includes setting up and configuring users, assigning them to groups, and assigning which users have builder access.

  • View all tables (including system tables) and queries in the platform.

End-User Admins

An End-User Admin can:

  • View all tables (including system tables) and queries in the platform.

  • Modify data controls for tables.

Manage Admins

To view and manage who has administrator access, you will use the Groups System table.

Note that you must have the correct entitlements set to view or access the "Groups" table. If you are part of the "Administrators" group already, then you can view all system tables by default.

Builder Admin management

A builder is able to view which users are part of the "Administrators" group via the "Groups" system table; either in the data browser or by using a saved/ad-hoc query.

If you are an admin, you can also use the "Groups" table to add or remove users from the "Administrators" Group.

End-User Admin management

An End-User Admin is able to view which users are part of the "Administrators" group via the "Groups" system table; either in the data browser or by using a saved query.

Formatting rules

You can apply conditional formatting rules. Our first iteration of this is done directly in the Formatting Rules table. A future iteration will add a UI within a table to create them.

Columns

This section has formatting rules available for columns.

Row Condition

This follows the same syntax as a view filter query.

Ordinal

Order in which the formatting rules will apply on the same table. Ordinal 1 will show up above ordinal 2.

Highlight Color

Color to highlight the cell. If you want to add your own colors, you can do so within the System Colours table by checking off Highlight under Usage.

Table

Table in which to apply the conditional formatting.

Highlight Columns

Columns to apply the conditional formatting rules to. You don't need to include any row condition columns within the highlight columns.

Example

The Admin panel

This page details information about the Admin Panel on Cinchy.

You can view the admin panel of your Cinchy instance by using the /admin/index endpoint. This is only reachable if you are logged in as a user with admin access. The admin panel includes the following sections:

Cinchy Healthcheck

The Cinchy Healthcheck shows information about your system such as current version, IP Address, the system time, and your database status (Image 1).

Cinchy log files

This section shows a list of viewable log files from your system, as well as their size, creation time, and last modified time (Image 2).

Log files in the Admin Panel are only visible on Cinchy deployments on IIS, or on a version earlier than v5. Otherwise, you will need to navigate to OpenSearch (or a comparable component).

Upload a logo

Review the Data Browser page for information on uploading a logo.

Attach files

This page will describe how to attach files to your Cinchy table rows.

Overview

You are able to attach files and images to any row in a Cinchy table by creating a linked column that also links to the 'Files' system table of your platform.

If you have access to view the 'Files' table, you can also view every attachment on your system.

Cinchy supports the attaching of any file type.

Attach a file

  1. Navigate to the table where you want to attach your file.

  2. Click Design Table > Columns

  3. Add a new column with the following parameters (Image 1):

    1. Column Name: Cinchy recommends using a straightforward name like "Images and Files"

    2. Data Type: Link

    3. Linked Table: You will link this to the "Cinchy\Files" table

    4. Linked Column: File Name

    5. Advanced Settings: Make sure to select the "Multi-select" checkbox if you want the capability to add more than one file to a row.

  4. Click Save.

  5. Navigate back to your table and locate your newly created files column.

You must create the first row before uploading a file.

  6. To attach a file, click on the upload button (located in the top right hand corner of any cell in the column) (Image 2).

  7. From the pop-up window, select Choose Files to pick your file from your machine, then click Submit (Image 3).

  8. Once uploaded, your cell should look like this (Image 4):

Delete an attached file

To fully delete your attachment from the platform, you will need to navigate to the Files system table and delete the row associated with your attachment.

You must have edit access to the Files table.

Your file will appear crossed out in your original table and can't be opened or downloaded (Image 5).


Upgrade Azure Kubernetes Service (AKS)

This page details the process for upgrading your AKS instance once you've deployed Cinchy v5 on Kubernetes.

Overview

AKS (Azure Kubernetes Service) is a managed Kubernetes service used when deploying Cinchy v5 on Azure. Once you have deployed your v5 instance of Cinchy, you may want to upgrade your AKS to a newer version. You can do so with the below guide.

Prerequisites

Before proceeding with the AKS version upgrade, ensure that you have enough IP address space, as this process will need to start new nodes and pods.

You will need more than 1024 IP addresses in total when you have a single Cinchy instance/environment.

If you have limited IP address space, you can upgrade the master node and then the worker node pools one by one.

Before proceeding, make sure to check with Cinchy support which versions of Kubernetes are supported by this process.
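If you want to confirm the version your cluster is currently running before you begin, one way to do so (assuming the Azure CLI is installed and you are logged in) is shown below; the resource group and cluster name are placeholders.

# Show the Kubernetes version your AKS cluster is currently running
az aks show --resource-group <myresourcegroup> --name <myaksclustername> --query kubernetesVersion --output tsv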

Upgrading AKS

  1. Navigate to your ArgoCD Dashboard.

  2. Find and click on your Istio app > disable auto-sync.

  3. Find and click on your Istio-ingress app > disable auto-sync.

  4. Open a terminal and run the following command to get the currently available Kubernetes versions for AKS, inputting your AKS cluster name and resource group where specified:

az aks get-upgrades --resource-group <myresourcegroup> --name <myaksclustername> --output table
  5. Within your cinchy.devops.automations folder/repo, navigate to the deployment-azure.json file and change the values for kubernetes_version and orchestrator_version to the currently available Kubernetes version found in step 4.

  6. Within the cinchy.devops.automations folder/repo, run the following command to push your new values:

dotnet Cinchy.DevOps.Automations.dll deployment-azure.json
  7. Within your cinchy.terraform/azure/aks_cluster/ directory, run bash create.sh and accept the change.

  8. AKS has three node pools, one node pool per availability zone. Terraform will now start the upgrade of the master node and the zone1 node pool.

  9. Verify which nodes the istio-ingressgateway and istiod pods are running on with the below command.

kubectl get pods -n istio-system -o wide
  10. In the output you will see aks-zone1/2/3-xxxxx under the Nodes column.

If one or both pods are running on aks-zone1-xxxxx, then you need to scale up to 2 replicas by following steps 11-13. If not, skip to step 14.

  11. Use the following command to start a replica on the zone2 or zone3 nodes; once you scale back down, the pod will terminate from the upgrading zone1 nodes.

kubectl scale --replicas=2 deployment.apps/istio-ingressgateway -n istio-system
kubectl scale --replicas=2 deployment.apps/istiod -n istio-system
  12. Verify that the new pods are running on another node pool by running the following command:

kubectl get pods -n istio-system -o wide

  13. Run the following command to scale back. This will remove the pod from the cordoned node:

kubectl scale --replicas=1 deployment.apps/istio-ingressgateway -n istio-system
kubectl scale --replicas=1 deployment.apps/istiod -n istio-system
kubectl get pods -n istio-system -o wide
  14. Once the master node and zone1 upgrade is done, you will see a message such as:

module.aks.azurerm_kubernetes_cluster.main: Modifications complete after 9m23s [id=/subscriptions/e5d6912-eeaa-461c-8f6-30e7c6d945a/resourceGroups/<vnet>/providers/Microsoft.ContainerService/managedClusters/<myaksclustername>]
  15. The Terraform script will continue with the zone2 and zone3 node pool upgrade. You will see messages like the below when the zone2 and zone3 node pool upgrade is in process:

module.aks-node-pool.azurerm_kubernetes_cluster_node_pool.node_pool["zone2"]: Modifying... [id=/subscriptions/ed16912-eeaa-461c-8f36-30e76de945a/resourceGroups/<vnet>/providers/Microsoft.ContainerService/managedClusters/<myaksclustername>/agentPools/zone2]`
`module.aks-node-pool.azurerm_kubernetes_cluster_node_pool.node_pool["zone3"]: Modifying... [id=/subscriptions/e5d1612-eeaa-461c-8f36-30e7c6e945a/resourceGroups/<vnet>/providers/Microsoft.ContainerService/managedClusters/<myaksclustername>/agentPools/zone3]
  16. Repeat steps 11-13. This will make sure that istio-ingressgateway and istiod are running on zone1 nodes.

  17. Return to your ArgoCD dashboard and re-enable auto-sync for the Istio and Istio-ingress apps.
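As a final sanity check (not part of the documented procedure), you can confirm that all node pools now report the upgraded Kubernetes version:

# Each node should show the new Kubernetes version in the VERSION column
kubectl get nodes -o wide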

Additional guides

Queries

Queries are requests for information within Cinchy.

Create queries

Cinchy Builders and Users have the capability to create ad-hoc queries; however, only Cinchy Builders can create saved queries. For more information on how to create queries, see creating a saved query.

Execute queries

Cinchy Users can execute pre-built queries based on their access.

You can find a list of saved queries on your network by navigating to the Saved Queries table (Image 1). You can then search the Cinchy homepage for the saved query to execute it.

You will need "Execute Access" for each Saved Query that you want to run. You can find this information in the Saved Queries table.

Image 1: You can execute saved queries by running them from the Saved Queries table

Generate pivot tables

Once you have executed the query, click the Grid drop down list and select Pivot. Here is where you can take your standard table view and slice and dice your data (Image 2).

Image 2: Generating a Pivot Table

Generate charts

From within your pivot view, open the drop down list with the value “table” and select the type of chart you want to use to display the data (Image 3).

Image 3: Generating Charts

Build shared visualizations

Once you have a desired visualization, that visualization can be made available for others as an applet in Cinchy. Grab the Pivot URL and send it to your Cinchy builder to create your mini applet that can be shared and leveraged!

To copy the Pivot URL so that a visualization can be created, complete the following:

  1. From within the Pivot, locate the blue Pivot URL

  2. Click Pivot URL button

  3. Click the Copy button

  4. Send the copied URL to your Cinchy builder to create your applet that can be shared and leveraged!

You can also open that visualization by clicking Open in new tab.

Linking data

When you create a column within Cinchy, you can choose to create a link column. A link column allows you to establish inherent relationships with other tables.‌

Overview

Linking is done by the Cinchy ID, which is unique. When you create a link column, you select a column to link to. This is simply a decision on which field to show from the linked record. You should pick a unique field to link on to avoid confusion if possible.

Once a record is created, its Cinchy ID never changes. This means that modifying the row of data in the linked table won't change the relationship in your table to that row. This also means that if you didn't use a unique column, even though the UI looks the same, you are actually linking to different rows.‌

Choose a linked column

‌In general, you should only use unique columns as the linked column. This needs to be balanced with readability in other tables.

For example, Full Name might not be unique to every employee, but it's more readable and understandable than Employee ID. In other cases, it makes sense to link via an ID and add a display column to show relevant information.‌

Allow Linking

‌To help other builders follow best practices of only linking to unique (or close to unique, such as Full Name) columns, you should un-check the Allow Linking checkbox for non-unique columns so they won't be able to use it for linking.‌

Allow Display in Linked View

‌If this option is unchecked, it prevents users from showing this column in another table.

For example, if you have an ID card # within an employees table, you may not want to display it to the rest of the company because it simply would not be relevant when they're linking to employees and want to see additional information (such as department, title, location). Arguably, a lot of these columns are also taken care of by access controls (since most people won't have access to view that column).‌

Deselecting this box should be done sparingly, as it doesn't impact the security of your data, only how convenient it is to see it.

Display Columns

‌When you select a record to link to on the Manage Data screen, it can be useful to see additional information about the records to ensure that it's the record you want to link to (Image 1). You can add additional display columns in the advanced options for link columns (Image 2).

Image 1
Image 2

When you type in the cell, all displayed columns will be searched through, not just the Linked Column (Image 3). (Green doesn't have a B in it, but #00B050 does so the Green record shows up)


Image 3

Link Filter

‌The link filter filters out records from the drop down list. This is useful for reducing the options to only the typical use case. Commonly used for filtering the drop down to a list of active users or other resources, while not preventing someone from entering missing records with inactive resources.‌

This is only a display filter; it doesn't prevent other values from being entered as long as they're valid records in the linked table.‌

Relationships

‌You can define 1 to 1, 1 to many, and many to many relationships.‌

1:1 Relationship

‌Generally it's rare to link 1:1 relationships since they should usually be in the same table. For example, you would not have a separate table for Employee Phone Number and Employee Address, they would simply be two columns within the Employees table. However there are cases nonetheless where it makes sense, for example, a Keycard tracking table where each keycard has 1 assigned employee.‌

To enforce a 1:1 relationship within Cinchy, you set the unique constraint and leave it as single-select when creating a link column.‌

1:Many Relationship

‌A common relationship to have is a one to many relationship. For example, one customer can have multiple invoices.‌

To enforce a 1:many relationship within Cinchy, you want to create a link column in the table on the “many” side of the relationship (in the above example, in the invoices table) and leave the link column as single select.‌

Many:Many Relationship

‌You can also have a many to many relationship. For example, you can have multiple customers, and multiple products. Each customer can order multiple products, and each product can be ordered by multiple customers. Another example is books and authors. An author can write multiple books, but a book can also have multiple authors. You can express many to many relationships in two ways.‌

For the use case of multiple customers and multiple products, you can use orders as an intermediary table to create indirect relationships between customers and products. Each order has one customer, and each order has multiple products in it. You can derive the relationship between customers and products through the orders table.‌

To create a many:many relationship through a different entity, you want to create a table for orders. Within orders, you want to create a single-select link to customers and a multi-select link to products.‌

For the use case of books and authors, it makes sense to create a multi-select link column in the Books table where multiple authors can be selected.‌

To create a multi-select link column in Cinchy, you select the Multi-Select option when you create a new link column.

Update the data experience

This page outlines Step 4 of Deploying CinchyDXD: Updating the Data Experience

Introduction

The Data Experience requires updates that you must make in your source environment; you don't want to have to repeat the updates in both the source and target environments. The upcoming section shows how to update the data experience in the source environment so that you can then re-package and reinstall it in the target environment.

Table updates

  1. Log back into your source environment using the following: URL: <Cinchy source url> User ID: <source user id> Password: <source password>

  2. Make the following changes to the Currency Exchange Rate table:

Column 1

  • Current Column Name value: Currency 1

  • New Column Name value: From Currency

  • All other settings remain the same

Column 2

  • Current Column Name value: Currency 2

  • New Column Name value: To Currency

  • All other settings remain the same

3. Save your changes before leaving the table.

Query updates

  1. Update the Currency Converter query to reflect column name changes that were made in the Table Updates section above (Image 1).

Image 1: Step 1

Be sure to update the @Currency_1 and @Currency_2 labels to better reflect the input fields

  2. Test the query to validate that it's still functioning (Images 2 and 3).

Image 2: Step 2
Image 3: Step 2
  3. Save your query.

5.0 Release Notes

This page contains the release notes for Cinchy version 5.0

New capabilities

  • Cinchy on Kubernetes: You can now deploy Cinchy v5 on the Kubernetes system. Kubernetes is an open-source system that manages and automates the full lifecycle of container-based applications. You now have the ability to deploy Cinchy v5 on Kubernetes, and with it comes a myriad of features that help to simplify your deployment and enhance your scaling. Kubernetes can maximize your container capacity and easily scale up/down with your current operations.

    • Fluent Bit and OpenSearch: Available to those who deploy on Kubernetes, Fluent Bit collects logs which are then displayed through the OpenSearch visual dashboard, for all pods that write to stdout. This streamlines your search for information by putting the control into your hands and compiling your logs in one easy to access place—you can now easily write a query against all of your logs, in all of your environments. You will have access to a default configuration out of the box, but you can also customize your dashboards as well.

    • Prometheus and Grafana: With the Kubernetes addition, you now have access to Prometheus to collect your metrics. Prometheus records real-time metrics in a time series database used for event monitoring and alerting. You can then create custom dashboards through Grafana to display your data in an easy to use visual that makes reporting on your metrics easy, and even set up push alerts based on your custom needs.

    • PostgreSQL: You now have the option to deploy Cinchy on PostgreSQL, an open source alternative to the Microsoft SQL Server that can save you the cost of licensing fees. It's standards-compliant, reliable, highly programmable, and allows for concurrency. Utilizing this deployment makes Cinchy more affordable and scalable. We recommend Amazon Aurora for AWS users.

    • Kafka: Kafka is an open-source event streaming platform. This is designed to act as the middleware that allows for messaging between components through a queuing mechanism.

    • Redis: Redis is currently being used to facilitate a distributed lock using RedLock, which guarantees lock synchronizations across Cinchy instances. It's also a storage location for the execution output when running batch data syncs.

  • All components have been transitioned from Log4Net to Serilog.

  • The BuildIdentifier property from the appsettings.json will now appear in the healthcheck endpoint at the root level of the JSON payload, with a key of buildidentifier.

Enhancements

  • ELMAH was removed from the platform.

  • Refactored hidden passwords in initialization.

  • Added the ability to ingest S3 data sources for delimited or parquet files.

  • Improved performance of the Connection UI when there is a large (250+) number of columns/mappings.

  • Optimization of bulk UPSERT performance.

  • Added the following UI optimizations for handling large tables: default views set to collapsed, page size limited to 1k records, and added a button for getting the row count.

  • Added the ability for Connections to read/write error files to S3 when an S3 bucket is specified.

  • Added accessibility fixes.

  • We've added support for PUT, PATCH, and DELETE in UDF Extensions in addition to GET/POST.

  • You are able to override a Kafka topic within the appsettings.json for Connections, the Worker, and the Event Listener.

CQL Updates

  • We've added support for the INSERT INTO SELECT statement, which copies data from one table and inserts it into another table. See the CQL documentation on INSERT INTO SELECT.

  • When using PostgreSQL, the SELECT @cinchy_row_id function will fail in queries. Instead, use the OUTPUT clause with INSERT, UPDATE, DELETE. See the CQL documentation on OUTPUT.

  • We've added support for the TRUNCATE TABLE statement, which removes all rows from a table without logging the individual row deletions. See the CQL documentation on TRUNCATE TABLE. (Please note that we don't support the "With Partitions" argument.)

Bug Fixes

  • Fixed an issue where two users approving data could cause the row to become corrupted.

  • Addressed a memory leak in query translation through the creation of a background task that removes expired objects from the cache.

  • Fixed an issue where updating the formula of a non-cached calculated column wouldn’t reflect properly in the Table Columns table.

  • Addressed an issue where changing a field in a table with multi-select links results in the removal of the field value from the version history.

  • Fixed an API issue when updating UDF columns.

  • Fixed an issue where numeric calculated columns that resolved off of a link column's numeric display column wouldn't work.

  • Enabled Content-Type headers to be added for REST API data syncs during GET requests.

Security Fixes

  • Added frame-ancestors to the UI to prevent UI redress attacks.

  • Implemented HSTS headers for when HTTPS is enabled on Cinchy.

Table features

There are several table features that can be used to better view, collaborate and analyze the data in Cinchy tables.

Views

When you first create a table, a default view called All Data will be created for you under Manage Data. Cinchy builders can create different views for users to manage their data, where views can be filtered and/or sorted according to requirements.

To switch between views, select the view from the left navigation toolbar under Manage Data (Image 1).

Filters

Users can filter data in a view for one or more columns. Filters persist when users navigate from one view to another. The number of filter criteria is identified against the filter icon (Image 2).

Column display

Users can add, remove or rearrange the columns in a view based on how they need the data represented in the View (Image 3).

Add a column to a view:

  1. Click Display Columns in the top toolbar

  2. From the ‘Add a Column’ drop down, locate and select the appropriate column.

  3. Ensure you click ‘Apply’ to save.

Remove a column from a view:

  1. Click Display Columns in the top toolbar

  2. Click the “X” to the right of the column name to remove.

  3. Ensure you click ‘Apply’ to save.

Rearrange the columns in a view:

  1. Click Display Columns in the top toolbar

  2. Drag the column to the appropriate location in the list of visible columns.

  3. Ensure you click ‘Apply’ to save.

Display Columns don't persist. When you move away from the View, any modifications will be lost.

Sorting

Users can sort data in a view for one or more columns. Sorting can be done by clicking on a column to sort in ascending or descending order (Image 4).

Sorting can also be done by clicking on the Sorting button and selecting the column(s) to be sorted and the order in which the sorting should occur (Image 5).

Sorting columns don't persist; when you move away from the View, any modifications will be lost.

Scrolling

The “Top” & “Bottom” scrolling buttons allow you to jump to the top or bottom of a view without manually scrolling (Image 6).

Row height

Row height can be either Collapsed or Expanded using the Row Height drop-down (Image 7).

You can manually resize a row (or multiple rows if you select more than one). You can also double-click the default row number on the left to auto-expand the row height.

Freeze or unfreeze rows or columns

Cinchy allows you to freeze and unfreeze rows and columns.

  1. Select the row/column for freezing/unfreezing

  2. Right-click on the row and select Freeze/Unfreeze Row/Column from the menu (Image 8).

v5.4 (Kubernetes)

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by following the below instructions.

Warning: Upgrading to v5.4 will require you to take your Cinchy platform offline. We recommend you perform this upgrade during off-peak hours

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.4, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.4 upgrade.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"},

Prerequisites

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.4 k8s-template.zip file.

Take your platform offline

  1. Turn off your Cinchy platform. In a Kubernetes deployment, you can do so via ArgoCD.

Configure to the newest version

  1. Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

  2. Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

If you have a cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it's not commented out, please keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  1. Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

  2. Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

  3. Open the new Cinchy v5.4 k8s-template.zip file you downloaded from the Cinchy Releases table.

  4. Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.

Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute the correct name wherever it appears in this document.

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

  1. Perform this step only if you are upgrading to 5.4+ on an SQL Server Database. Navigate to your cinchy_instance_configs section > database_connection_string, and add in the following value to the end of your string: TrustServerCertificate=True

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True",

  1. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"

  1. Commit all your changes (if there were any) in each repository.

  2. If there were any changes in your cinchy.argocd repository you may need to redeploy ArgoCD.

    1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

    2. Execute the following command to deploy ArgoCD:

bash deploy_argocd.sh

  1. If there were any change to the cluster components, execute the following command from the cinchy.argocd repository:

bash deploy_cluster_components.sh

  1. If there were any change to the Cinchy instance, execute the following command from the cinchy.argocd repository:

bash deploy_cinchy_components.sh

  1. Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.

Turn on the platform and apply your configurations

  1. Turn your platform back on. In a Kubernetes deployment, you can do so via ArgoCD.

When the new version is started for the first time, one node is made responsible for the migration of the entire database. This process can take upwards of 30 minutes to complete, and your system will be unavailable during this time.

User Guide

This page provides an overview of the role of the End User in Cinchy.

End-Users

The “End-Users” of the Cinchy platform are those that apply the functionalities created by the “Cinchy Builders” to their business objectives. This can be employees, customers, partners, or systems. Cinchy has two types of end-users: direct and indirect.

  • Direct Users log into Cinchy via the data browser

  • Indirect Users (also commonly referred to as "external users") view/edit data via a third-party application/page that connects to Cinchy via API

All Builders are also End-Users, but not all End-Users are Builders.

What can end-users do?

Cinchy End-Users are able to:

  • Create and save personal queries. Unlike traditional saved queries made by builders, personal saved queries can't be shared and aren't auto exposed as APIs.

  • Use Tables, Saved Queries, and Experiences created by “Builders”

  • Track version history for the full lifecycle of data

  • Bookmark and manage data

  • Access data through application experiences

  • An end-user can be part of the Administrators group

View and manage data through a single UI

Cinchy’s data collaboration platform features a Universal Data Browser that allows users to view, change, analyze, and otherwise interact with all data on the network. The Data Browser even enables non-technical business users to manage and update data, build models, and set controls, all through an easy and intuitive UI.

Only see the accessible data

Data on the network is protected by cellular-level access controls, data-driven entitlements, and superior data governance. This means that users can only view, edit, or manipulate data that they've been granted access to by the data owner (Image 1).

Track version history for full data lifecycle

All data is automatically version-controlled and can be reverted to previous states with the proper permissions. On all data tables, you can see changes made by users, systems, or external applications through Data Synchronization or by using a Collaboration Log (Image 2).

Access saved queries on the data network

Users can access and run saved queries that are available to them through the Data Marketplace. All queries respect Universal Access Controls meaning you will only see the data that you have access to (Image 3).

Bookmark and manage data in a secure marketplace

Users can also access all accessible tables, queries, and applets through the Cinchy Marketplace. Here you can also order tiles and bookmark favourites (Image 4).

Access data through rich application experiences

Users can also experience their data through custom application experiences that are created by Builders on the platform. All application experiences also respect Universal Access Controls, meaning you will only be able to see the data you have been granted access to.

Here is an example Experience (Image 5):


v5.5 (Kubernetes)

This page details the instructions for upgrading your Cinchy platform to v5.5 on Kubernetes

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by following the below instructions.

This release requires you to run the Cinchy Upgrade Utility. Please review and follow the directives for Upgrade 5.5 here.

Additionally, if you are upgrading from Cinchy v5.1 or lower to Cinchy v5.5, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.5 upgrade.

If you are upgrading from Cinchy v5.3 or lower to Cinchy v5.5 on an SQL Server Database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=<password>;Host=<

Prerequisites

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.5 k8s-template.zip file.

Configure to the newest version

  1. Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.

  2. Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

If you have a cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it's not commented out, please keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  1. Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.

  2. Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented and your deployment.json.

  3. Open the new Cinchy v5.5 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.

  4. Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.

Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute the correct name wherever it appears in this document.

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

Perform this step only if you are upgrading to 5.5 on an SQL Server Database and didn't already make this change in any previous updates. Navigate to your cinchy_instance_configs section > database_connection_string, and add in the following value to the end of your string: TrustServerCertificate=True

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=;Host=;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True",
  1. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  1. Commit all your changes (if there were any) in each repository.

  2. If there were any changes in your cinchy.argocd repository you may need to redeploy ArgoCD.

    1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

    2. Execute the following command to deploy ArgoCD:

bash deploy_argocd.sh
  1. If there were any change to the cluster components, execute the following command from the cinchy.argocd repository:

bash deploy_cluster_components.sh
  1. If there were any change to the Cinchy instance, execute the following command from the cinchy.argocd repository:

bash deploy_cinchy_components.sh
  1. Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.

Enable TLS 1.2

This page details how to enable TLS 1.2 on Cinchy v5.

  1. Navigate to the CinchySSO Folder > appsettings.json file.

  2. Find the following line:

add key="TlsVersion" value=""
  1. Replace the above line with the following:

add key="TlsVersion" value="1.2"
  1. Navigate to the Cinchy Folder > web.config file.

  2. Find the following line:

add key="TlsVersion" value=""
  1. Replace the above line with the following:

add key="TlsVersion" value="1.2" 
  1. Restart the application pools in IIS for the changes to take effect.


5.5 Release Notes

Cinchy version 5.5 was released on February 24, 2023.

For instructions on how to upgrade your platform to the latest version, please review the documentation here.

Cinchy Upgrade Utility

The Cinchy Upgrade Utility was previously introduced in v5.2 to facilitate a mandatory INT to BigInt upgrade. This tool will continue to be used in subsequent releases as an easy way to deploy necessary changes to your Cinchy platform.

For version 5.5, you must run the Upgrade Utility to fix a row-breaking issue that could be triggered on cells with over 4,000 characters, which left you unable to update any column in the affected record.

Please review the Utility Guide or Upgrade Guide for further details.

New: Personal Access Tokens

You now have the option to use personal access tokens (PATs) in Cinchy, which are alternatives to using passwords for authentication. Similar to Cinchy Bearer Tokens, you can use a Cinchy PAT to call the Cinchy API as your current user, meaning your associated access controls will be honoured as well. Cinchy PATs, however, have an expiration date of up to 1 year. A single user can have up to 5 PATs active at one time.

For information on setting up, configuring, and managing PATs, please review the documentation here.
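
As a rough sketch of calling the Cinchy API with a PAT (the endpoint path and placeholders below are illustrative, and this assumes the token is passed as a bearer-style Authorization header in the same way as a Cinchy Bearer Token):

curl -H "Authorization: Bearer <your_personal_access_token>" \
  "<Cinchy-URL>/API/<Domain>/<Saved Query Name>"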

MongoDB

We've added MongoDB to our Connections offering as both a source and target connector.

MongoDB is a scalable, flexible NoSQL document database platform known for its horizontal scaling and load balancing capabilities, which has given application developers an unprecedented level of flexibility and scalability.

Review the following documentation to use this new capability in Cinchy:

  • Setting up a MongoDB Collection (Event Triggered) as a source connector

  • Setting up a MongoDB Collection as a source connector

  • Setting up MongoDB as a target connector

  • Setting up a MongoDB change stream

Enhancements

  • We're continuing to improve our text editor functionality for forms. You can now embed tables and images into your text. We've also made various styling and usability quality of life updates, including the addition of checkbox style lists.

  • We've added support for ephemeral volumes in Connections on a Kubernetes deployment. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node; it allows pods to be started and stopped without being limited to the location of a persistent volume. Running more than one pod for Connections per availability zone enables you to effectively leverage auto scaling functionality.

  • We've updated the Connections experience to enable more use cases. You can now use CDC parameters in Calculated Columns and use the CinchyID in the sync key in real-time syncs.

  • Kafka supports cluster encryption and authentication, which can encrypt data-in-transit between your applications and Kafka. We've added the ability to include this encryption/authentication in the Listener Config when setting up real-time syncs using Kafka.

    • Using this parameter will specify which protocol will be used for communication between client and server. Cinchy currently supports the following options: Plaintext, SaslPlaintext, or SaslSsl.

      • Plaintext: Unauthenticated, non-encrypted.

      • SaslPlaintext: SASL-based authentication, non-encrypted.

      • SaslSsl: SASL-based authentication, TLS-based encryption.

    • Review the Listener Config documentation on Kafka here.

  • We've improved the implementation of tooltips such that linked columns display the tables that they link to. Hovering over the i symbol on a linked column will show the linked domain and table in the following format: Domain - Table; ex: HR - Employees. You can now also see them in the grid view.

In order for the above tooltip improvement to reconcile in your Cinchy environment, you must deploy an up-to-date version of the Forms Data Experience. You can review the installation instructions here and retrieve the package here.

  • We've introduced a Retry Configuration for REST API sources and targets. This will automatically retry HTTP Requests on failure based on a defined set of conditions. This capability provides a mechanism to recover from transient errors such as network disruptions or temporary service outages.

    • For more information on using this configuration, refer to the documentation here for REST sources and here for REST targets.

  • We've increased the default retention for Prometheus from 5GB to 50GB to allow you to store more metric data at a time.

    • This change is automatically reflected in new v5.5 deployments. Customers on previous v5 versions wishing to implement the change are able to rerun the automation script and deploy the new template to reflect the update.

  • To make the Forms experience more responsive and process quicker, we've introduced lazy loading of records while searching. Instead of loading and rendering every form record in the search box, which can be a slow process for use cases with millions of records, lazy loading will initially retrieve a limited number of records. These results can then be further optimized by inputting your Lookup Filter Conditions.

  • We've added the ability to pass parameters from a REST response into post sync scripts during both real-time and batch data syncs, allowing you to do more with your REST API data.

    • For an example and instructions on this capability, please refer to the documentation here.

  • Data changes in Cinchy (CDC) can now be used to trigger a data sync from a REST API or MongoDB data source to a specified target. This works as an alternative to RunQuery.

    • For more information, please review the documentation here (REST API) and here (MongoDB).

  • We've added two new functions, JSON_ESCAPE and URL_ESCAPE, which can be used in Connections to escape parameter values when constructing the body of a REST API request or the URL.

  • We've added an Authorization header type for REST API data syncs in Connections. An authorization request header can be used to provide credentials that authenticate a user with a server, allowing access to a protected resource. Selecting this header defines the Header Value as a password field.

Bug Fixes

  • We've solved an issue that was causing Connections to get stuck behind long running jobs despite there being capacity to execute. This fix enables predictable execution behavior without stoppage.

  • We've fixed an issue in the MatchEngine where execution was failing in versions of Cinchy above 5.2.

  • File sourced data syncs will no longer fail, allowing you to run Connection jobs with uploaded files without the risk of a file not found error when auto scaling is enabled.

  • To prevent needlessly exhausting Cinchy IDs, the platform will no longer continuously retry to update records that have failed to save. This can sometimes occur when a value causes a calculated field to violate a uniqueness constraint. If the below error appears, you will have to manually update the cell to retry the save.

  • We've fixed a bug that was causing the Connections UI to crash if you attempted to run a job while there was an empty parameter in the Info tab (ex: no name or formula)

  • We've fixed a bug that would cause images in Forms to sometimes appear with a label above them, using the image's URL as the label's value.

  • We've fixed a bug that was forcibly terminating authenticated sessions in Grafana, now allowing you to work without interruptions.

  • We've solved an issue where using a form as a child form with file links wouldn't render the link thumbnail correctly in the "edit record" view.

  • We've fixed a bug that prevented record updates when multiple users attempted to update a row in quick succession.

  • We've fixed an issue where doing delta batch syncs with a REST API target wouldn’t replace the @COLUMN parameter correctly.

  • We've fixed a bug in Connections where an Oracle sync target would have the wrong tag in the Config XML.

  • We've fixed a bug that was causing a “Listener is running” message to erroneously appear when the status of the listener was actually set to Disabled.

  • We've fixed a bug that was preventing REST API real-time sync execution errors from being inserted into the execution errors table.

5.6 Release Notes

This page outlines the various changes made to the Cinchy platform in version 5.6

Cinchy version 5.6 was released on May 31st, 2023.

For instructions on how to upgrade your platform to the latest version, please review the documentation here.

When upgrading to Cinchy v5.6, there are mandatory changes that must be made within your platform appsettings.json files. For an IIS deployment this involves making manual updates to your appsettings.json files. For a Kubernetes deployment, the changes will reconcile automatically if you are deploying the new 5.6 template. If you aren't deploying the new template, please reach out to the Support team. For instructions on how to upgrade your platform to the latest version, please review the documentation here.

If you are planning to update your platform to 5.6 on a Kubernetes deployment, please note that you will also need to update your AWS EKS Kubernetes version to 1.24.

Deprecation of the k8s.gcr.io Kubernetes Image Repository

The Kubernetes project runs a community-owned image registry called registry.k8s.io to host its container images. On April 3rd, 2023, the registry k8s.gcr.io was deprecated and no further images for Kubernetes and related subprojects are being pushed to this location.

Instead, there is a new registry: registry.k8s.io.

New Cinchy Deployments: this change will be automatically reflected in your installation.

For Current Cinchy Deployments: please follow the instructions outlined in the upgrade guide to ensure your components are pointed to the correct image repository.

You can review the full details on this change on the Kubernetes blog.

Connections UI Change to Sync Behaviours Tab

To continuously improve our Connections experience, we've made changes to the Sync Behaviours tab for Full-File data syncs.

  • Record behaviour is now presented via radio buttons so that you can see and select options quicker and easier than ever before.

  • We've added a new "Conditional" option for Changed Record Behaviours. When Conditional is selected, you will be able to define the conditions upon which an Update should occur. For instance, you can set your condition such that an update will only occur when a "Status" column is changed to Red, otherwise it will ignore the changed record. This new feature provides more granularity on the type of data being synced into your destination and allows for more detailed use cases. For more information on this new function please review the documentation here.

Enhancements

Deployment

We've added support for AWS EKS EBS volume encryption for customers wishing to take advantage of industry-standard AES-256 data encryption without having to build, maintain, and secure their own key management infrastructure.

By default, the EKS worker nodes will have a gp3 storage class for new deployments. If you are already running a Cinchy environment, make sure to keep your eks_persistent_apps_storage_class set to gp2 within the DevOps automation aws.json file.

If you want to move to gp3 storage or gp3 storage and volume encryption: you will have to delete the existing volumes/pvc's for Kafka, Redis, OpenSearch, Logging Operator and Event Listener with StatefulSets so that ArgoCD can recreate the proper resources.

Should your Kafka cluster pods not come back online after deleting the existing volumes/pvc's, restart the Kafka operators. You can verify the change by running the below command:

kubectl get pvc --all-namespaces

Platform

  • Miscellaneous security fixes.

  • General CDC performance optimizations.

Connections

  • Continuing to increase our data sync capabilities and features, you can now use @CinchyID as a parameter in post sync scripts when the source is from a Cinchy Event (such as the Event Broker, the Event Triggered REST API, and the Event Triggered MongoDB sources). This means that you can now design post sync scripts that take advantage of the unique CinchyID value of your records.

  • To better communicate the relationship between the Source and any required Listener Configurations, we've added additional help text to event-based sources to the Source step of a connection. This text will help explain when a listener configuration is required as part of the sync.

  • We've expanded on our Cinchy Event Triggered data sync source features (REST API and MongoDB), allowing you more freedom to utilize your data. You now have the ability to reference attributes of the CDC Event in your calculated columns. (Note that syncs making use of this must limit their batch size to 1.)

  • To better enable your business security and permission-based needs, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up an AWS IAM role for use in Connections, please review the documentation here.

  • You are also able to use AWS IAM roles when syncing S3 file or DynamoDB sources in Connections. For more information, please review the "Auth Type" field in the relevant data sync source pages.

  • To increase your data sync security and streamline authentication, we've added support for the use of x.509 certificate authentication for MongoDB Collection Sources, MongoDB (Cinchy Event Triggered) Sources, and MongoDB Targets. This new feature can be accessed directly from the Connections UI when configuring your data sync. For more information, please review the configuration pages for MongoDB Collection Source, MongoDB (Cinchy Event Triggered) Source, and MongoDB Targets.

Tip: Click on the below image to enlarge it.

Bug Fixes

Platform

  • We've fixed a bug that was causing bearer token authenticated APIs to stop working on insecure HTTP Cinchy environments.

  • We've fixed an issue relating to the .NET 6 upgrade that was causing the Event Listener and Worker to not start as a service on IIS in v5.4+ deployments.

  • We've fixed a “Column doesn’t exist” error that could occur in PostGres deployments when incrementing a column (ex: changing a column data type from number to text).

  • We've fixed a bug where table views containing only a single linked column record would appear blank for users with “read-only” permissions.

Connections

  • We've fixed a bug where the Listener Configuration message for a data sync using the MongoDB Event source would return as "running" after it was disabled during an exception event -- the message will now correctly return an error in this case.

  • We've fixed a bug that was preventing DELETE actions from occurring when Change Approvals were enabled on a CDC source.

  • In continuing to provide useful troubleshooting tools, we've fixed a bug that was preventing dead messages from appearing in the Execution Errors table when errors occurred during the open connection phase of a target. This error may have also occurred when a MongoDB target had a connection string pointing to a non-existent port/server.

  • We've fixed a bug that was preventing Action Type column values of "Delete" from working with REST API target Delta syncs.

  • We've fixed a data sync issue preventing users from using environment variables or other parameters in connection strings.

  • We've fixed a bug in the Polling Event data sync where records would fail with a “unique constraint violation” if both an insert and an update statement happened at nearly the same time. To implement this fix, you need to add the “messageKeyExpression” parameter to your listener config when using the Polling Event as a source.

  • We've fixed a bug that was causing data syncs to fail when doing platform event inserts of any type into Salesforce targets.

  • We've fixed a bug where using the ID Column in a Snowflake target sync would prevent insert and update operations from working.

  • We've fixed a bug where attempting to sync documents using a UUID (Universally Unique IDentifier) as a source ID in a MongoDB Event Triggered batch sync would result in a blank UUID value when saved to a Cinchy table.

Meta-Forms

We've made application stability and quality fixes to Forms, including:

  • Custom date formats now work in Grid, Form, and Child Form views.

  • A child form that has a Link column reference to a parent record now auto populates with the parent record's identity.

  • A space has now been added between multi-select values when displaying a record in an embedded child table.

  • Negative numbers can now be entered into Number type inputs on forms.

  • We've fixed an issue where updated file attachments on a form would fail to save.

CQL

  • We've fixed a bug that was causing a “Can’t be Bound" error when you attempted to use an UPDATE query on a multi-select link column as a user with multiple filters active.

Data Browser overview

This page provides an overview of some of the important pieces of the data browser: the homepage, the login page, and the data network.

Cinchy officially supports Google Chrome and Mozilla Firefox browsers for accessing the data browser.

Homepage

Once you log in to Cinchy, you'll be on the Homepage (Image 1). From here, you can navigate to a variety of tables, queries, and applets you have access to.

You can return to this page at any time by clicking the Cinchy logo in the top left corner.

Image 1: The Cinchy Homepage

Searching

‌All objects you have access to in your Marketplace (including bookmarks) are searchable and can be filtered by typing the partial or full name of the object you are searching for in the search bar (Image 2).

Image 2: The search bar

You can also search by object type by clicking on either Tables, Queries, or Experiences in the toolbar.

Bookmarks

‌You can bookmark your most often used objects and rearrange them to your liking within your bookmarks.

To bookmark an object, select the star. The star will be yellow when bookmarked, and grey when not (Image 3).

Image 3: Bookmarking

The object will pop into your “Bookmark” section. To rearrange your bookmarks, drag and drop the objects into the desired order.

Network Map

You "Network Map" shows a visualization of all tables in Cinchy you have access to and how they're all connected (Image 4).

Each of the coloured circles represents an object in Cinchy. The lines between them show the links between them.

Image 4: Cinchy Network Map

You are able to search and open tables from this view using the search bar on the left (Image 5).

Image 5: The search function

You can see what the network looked like in the past by clicking and dragging the pink circle along the timeline at the bottom.

You can learn more about the Network Map here.

Network Map extra parameters

Cinchy v5.2 added the ability to include new parameters on the URL path for your network visualizer to focus your node view. You can now add Target Node, Depth Level, and Max Depth Level parameters.

Example: <base url>/apps/datanetworkvisualizer?targetNode=&maxDepth=&depthLevel=

  • Target Node: Using the Target Node parameter defines which of your nodes will be the central node from which all connections branch.

    • Target Node uses the TableID number, which you can find in the URL of any table.

    • Example: <base url>/apps/datanetworkvisualizer?targetNode=8 will show TableID 8 as the central node

  • Max Depth: This parameter defines how many levels of network hierarchy you want to display.

    • Example: <base url>/apps/datanetworkvisualizer?maxDepth=2 will only show you two levels of connections.

  • Depth Level: Depth Level is a UI parameter that will highlight/focus on a certain depth of connections.

    • Example: <base url>/apps/datanetworkvisualizer?depthLevel=1 will highlight all first level network connections, while the rest will appear muted.

The below example visualizer uses the following URL (Image 6): <base url>/apps/datanetworkvisualizer?targetNode=8&maxDepth=2&depthLevel=1

  • It shows Table ID 8 ("Groups") as the central node.

  • It only displays the Max Depth of 2 connections from the central node.

  • It highlights the nodes that have a Depth Level of 1 from the central node.

Image 6: A Network Map with Parameters

Logo

You can upload a custom logo to appear on your platform login screen and homepage. You will need to have admin access to do so.

Examples without a logo uploaded (Images 7&8)

Examples with a logo uploaded (Images 9&10)

Upload a logo

  1. Navigate to <base url>/admin/index

  2. Scroll to the bottom of the admin panel and navigate to the “Upload Logo” button.

  3. Upload your logo.

Remove a Logo

  1. Once uploaded, your logo is stored in the System Properties table.

  2. Navigate to the table and find the row with Name: Logo (Image 12)

Image 12: Find the Logo row in the System Properties table
  3. Delete the Logo row to remove the logo.

Multilingual support

This page outlines multi-lingual support information.

Translate API

Translation API

POST <Cinchy-URL>/API/Translate

Pass in a list of literal GUIDs, along with a language and region. If translations are found in that language, they will be returned.

Request Body

  • guids (array): Array of strings. GUIDs from the Literals table.

  • language (string): Subtag from the Languages table. The user's preferences will be used if not specified.

  • region (string): Subtag from the Regions table. The user's preferences will be used if not specified.

  • debug (boolean): Defaults to false if not specified. Debug true will explain why that string was returned as the translation.
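
For illustration, a request might look like the following (authentication headers are omitted, and the GUIDs shown are the human-readable overrides used in the example response further below):

curl -X POST "<Cinchy-URL>/API/Translate" \
  -H "Content-Type: application/json" \
  -d '{
        "guids": ["button.create", "button.cancel", "button.favorite", "button.delete"],
        "language": "en",
        "region": "CA",
        "debug": false
      }'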

Logic

  • If the translation exists in the language and region specified, it will be returned.

  • If the translation exists in the language but not the specified region, it will still be translated and returned.

  • If the GUID exists but it's not available in the specified language, the default text in the Literals table will return.

  • If the GUID doesn't exist or you don't have permission to it, it will return the GUID back as the translation.

Example response:

{
  "data": {
    "button.create": {
      "translation": "Create",
      "language": "en",
      "region": "US",
      "defaultText": false
    },
    "button.cancel": {
      "translation": "Cancel",
      "language": null,
      "region": null,
      "defaultText": true
    },
    "button.favorite": {
      "translation": "Favourite",
      "language": "en",
      "region": "CA",
      "defaultText": false
    },
    "button.delete": {
      "translation": "button.delete",
      "language": null,
      "region": null,
      "defaultText": false
    }
  }
}

System Tables

Cinchy has three tables to provide language support.

  1. [Cinchy].[Literal Groups]

  2. [Cinchy].[Literals]

  3. [Cinchy].[Literal Translations].

Literal Groups

This table can optionally be used to group the translations. The default Cinchy strings belong to the Cinchy literal group. We recommend you create one literal group per applet or UI so you can retrieve the full list of GUIDs required for that page/applet easily.

Literals

This table defines all the strings that you want to translate.

Default Text

String that displays if no translation is found for the language specified.

GUID

GUID used to refer to the literal. A UUID will be generated by default, but can be overridden using the GUID Override field to something more human-readable.

Literal Group

Use this to group your strings so they can be easily retrieved. Note that this is a multi-select so you can use a single literal for multiple applets (including using the default Cinchy literals and translations for custom applets).

Literal Translations

This is the table where the translations are stored.

Translated Text

This is the translated string that's returned.

Literal

This is the literal the translation is for.

Language and Region

A language must be specified for a translation. Region can also be optionally specified for region specific words (ex. color vs colour).​

Builder Guide

This page gives an overview on Cinchy Builders

Cinchy Builders

Cinchy Builders use the Cinchy platform to build an unlimited number of business capabilities and use-cases.

The “Cinchy Builder” has access to perform the following capabilities:

  • Change Table Schema (use Cinchy’s “Design Table” functionality)

  • Grant access (use Cinchy’s “Design Controls” functionality)

  • Edit Cinchy Data in Cinchy System Tables

  • Create, Save, and Share Cinchy Queries

  • Perform Cinchy Queries on the Cinchy data network

  • Import/export packaged business capabilities (like deployment packages)

  • Build Cinchy Experiences

  • Perform integration with Cinchy (such as Cinchy Command Line Interface [CLI] operations)

  • Create and Deliver an unlimited number of Customer Use Cases within Cinchy

  • A builder can be part of the Administrators group

The End-Users of the Cinchy platform are those that apply the functionalities created by the “Cinchy Builders” to their business objectives.

All Builders are also End-Users, but not all End-Users are Builders.

“End-Users” of the Cinchy Platform can:

  • Use Tables, Saved Queries, and Experiences created by “Builders”

What Builders do

Builders can leverage Cinchy as one platform to simplify solutions delivery as they:

  1. Connect

  2. Protect

  3. Collaborate

  4. Build

  5. Reuse

Connect

Create your new network of data

Data collaboration eliminates point-to-point integration, reducing cost and complexity (including real-time, batch, on-prem, cloud, legacy, and SaaS services) and allowing custom data-sync configurations. This drives faster time to market, lower costs and improved usability.

Connect data to make silos and integration obsolete

When you connect a new data source to your data network, you can use it with data from any other source on the network with no further integration efforts. The more sources you connect, the more powerful and efficient it becomes. You can extend data on the network with attributes and entirely new entities, including calculated data, derived data, AI models, and user-managed data.

Protect

Manage and protect data down to the individual cell

Data on Cinchy is protected by cellular-level access controls, data-driven entitlements, and superior data governance. This includes meta architecture, versioning, and write-specific business functions that restrict user views, such as a managed hierarchy. Owner-defined permissions are universally enforced, significantly reducing the effort of managing them at the enterprise level. You can use existing Active Directory and SSO access policies to set controls for an individual user, external system, or user-defined functions (such as approving updates row by row or using bulk approvals).

Collaborate

Track version history for full lifecycle of data

All data is automatically version-controlled and can be reverted to previous states. You can see changes made by users, systems, or external applications through Data Synchronization or by using a Collaboration Log.

View and manage data through a single UI

Use the universal Data Browser to view, change, analyze, and otherwise interact with ALL data on the Fabric. Non-technical business users can manage and update data, build models, and set controls, all through an easy and intuitive UI.

Build

Create and share queries

Cinchy’s data collaboration platform features an intuitive Drag and Drop Query Builder that allows Builders to create queries using the Cinchy Query Language (CQL), a proprietary language specific to Cinchy. All queries can be saved and shared, and query results automatically generate a full no-code API.

Consolidate legacy systems, create new solutions

By decoupling the data from the application, our Autonomous Data Network lets you consolidate legacy applications to increase operational agility and reduce overhead. You can create enterprise-grade solutions using the Application SDK as quickly as you would build Excel-based solutions and without the operational risk. Application SDK (React Native, Angular, and REST SDKs) lets you build custom applications for end users.

Add third-party visualization tools

For even more flexibility, connect your Data Network to third-party Data Visualization tools. You’ll be able to run cross-queries against all data on the Fabric while maintaining universal access policies for users, systems, and external applications.

Re-use

The more you use data collaboration, the more it’s capable of doing.

Any new data you add to the network will work in conjunction with any previously existing data instantly. This means you can re-use data in new ways, with no time-consuming integration efforts. Teams can collaborate directly in real-time, allowing it to act as a central data hub while simplifying integration. Unlike traditional data architecture projects, which grow more complicated as they involve more data sources, data collaboration delivers solutions faster and faster as more data is added to it.

Kubernetes architecture

This page details the deployment architecture of Cinchy v5 when running on Kubernetes.

Infrastructure configuration (on cluster)

The diagram below shows a high level overview of a possible Infrastructure diagram with components on the cluster, but your specific configuration may vary (Image 1).

Tip: Click on an image to enlarge it.

AWS infrastructure configuration (outside cluster)

When deploying Cinchy version 5 on Kubernetes, you may deploy via Amazon Web Services (AWS). The diagram below shows a high level overview of a possible AWS Infrastructure with components outside the cluster, but your specific configuration may vary (Image 2).

Tip: Click on an image to enlarge it.

Infrastructure component overview

Azure infrastructure configuration (outside cluster)

When deploying Cinchy v5 on Kubernetes, you may deploy via Microsoft Azure. The diagram below shows a high level overview of possible Azure Infrastructure with components outside the cluster, but your specific configuration may vary (Image 3).

Tip: Click on an image to enlarge it.

Infrastructure component overview

Cluster level component overview

The following highlighted area provides a high-level overview of cluster level components used when deploying Cinchy on Kubernetes and what versions they're running.

These are created once per cluster. Clients may choose to run these components outside of the cluster or replace with their own comparable components. This diagram shows them in the cluster (Image 4).

Tip: Click on an image to enlarge it.

Cluster level components

These are created once per cluster. Clients may choose to run these components outside of the cluster or replace with their own comparable components.

  • Service Mesh - Istio: Istio handles and routes all inbound traffic to your Cinchy instance, keeping it secure and managed.

  • Monitoring/Alerting - Prometheus & Grafana: Prometheus consumes metrics from the running components in your environment, which you can visualize into user friendly graphs and dashboards with Grafana. Prometheus can also connect to third party services to provide alerting capabilities. Both Prometheus and Grafana use persistent storage.

  • Logging - OpenSearch and Fluent Bit: OpenSearch captures and indexes all logs in a single, accessible location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components. These logging components take advantage of persistent storage.

  • Caching - Redis: Redis facilitates a distributed lock using RedLock, which guarantees lock synchronizations across Cinchy instances. It's also a storage location for the execution output when running batch data syncs.

  • Event Processing - Kafka: Kafka acts as the middleware for messaging between components through a queuing mechanism. Kafka features persistent storage.

Cluster configuration

Before you deploy Cinchy on Kubernetes, consider the following about your cluster configuration:

  • How many clusters will you need?

  • Will you be sharing from an existing cluster?

  • Will you be running multiple environments on a single cluster?

Instance component overview

Each Cinchy instance uses the following components to either provide an experience to users/applications or connect data in/out of Cinchy. You can deploy multiple Cinchy instances per cluster, so these components will repeat for each environment.

The following highlighted area provides a high-level overview of instance level components used when running Cinchy on Kubernetes (Image 5).

Tip: Click on an image to enlarge it.

  • Meta Experiences: Cinchy offers pre-packaged experiences that you can import into your Cinchy environment and use on your data network. This includes experiences like Meta-Forms and Meta-Reports.

  • Connections: Use the Cinchy Connections experience to create data syncs in/out of the platform. It features persistent storage.

  • Data Browser: Cinchy’s data collaboration platform features a Universal Data Browser that allows users to view, change, analyze, and otherwise interact with all data on the network. The Data Browser even enables non-technical business users to manage and update data, build models, and set controls, all through an easy and intuitive UI.

  • Identity Provider: An Identity Provider (IdP) creates and manages user credentials and associated identity attributes. Cinchy uses an IdP's authentication services to authenticate end-users.

  • Event Listener: The Event Listener picks up events from connected sources during a data sync. Review the Data Sync page for further information on the Event Listener. The Event Listener uses persistent storage.

  • Event Stream Worker: The Event Stream Worker processes data picked up by the Event Listener during data syncs. Review the Data Sync page for further information on the Event Stream Worker. The Event Worker uses persistent storage.

  • Maintenance (Batch Jobs): Cinchy performs maintenance tasks through the CLI. This includes the data erasure and data compression deletions.

GitOps

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that simplifies application deployment and lifecycle management. ArgoCD is highly recommended for deploying Cinchy, but you can also use another tool.

Once you set up the configurations, ArgoCD automates the deployment of the desired application states into your specified target environments. Implemented as a Kubernetes controller, it continuously monitors running applications and compares the current, live state against the desired target state (as specified in your repositories).

Support

Here's how to get help on using your Cinchy platform.

Contact support

If at any point you require help or clarification on this documentation or your Cinchy platform, you can reach out to our Support team:

  • Via email: [email protected]

  • Via phone: 1-888-792-6051

  • Through the Support Portal

Use Self-Signed SSL Certs (Kubernetes)

This page details the optional steps that you can take to use self-signed SSL Certificates in a Kubernetes Deployment of Cinchy.

Follow this process during your initial deployment after running the devops.automations script, and repeat it each additional time you run that script (such as when updating your Cinchy platform), since running the script wipes out all custom configurations you set up to use a self-signed certificate.

  1. Execute the following commands in any folder to generate the self-signed certificate:

openssl genrsa -des3 -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.crt
openssl genrsa -out mydomain.com.key 2048
openssl req -new -sha256 -key mydomain.com.key -subj "/C=US/ST=CA/O=MyOrg, Inc./CN=mydomain.com " -out mydomain.com.csr
  1. Create a YAML file located at cinchy.kubernetes/platform_components/base/self-signed-ssl-root-ca.yaml.

  2. Add the following to the YAML file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: self-signed-ca-pemstore
data:
  rootCA.crt: |
    <rootCA.crt>
  1. Add the self signed root CA cert file to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base folder.

  2. Add the yaml code snippet to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base/kustomization.yaml file, changing the files key value below to match your root CA cert file name:

configMapGenerator:
- name: self-signed-ca-pemstore
  behavior: replace
  files:
  - rootCA.crt
  1. Add the following line to the cinchy.kubernetes/platform_components/base/kustomization.yaml file

- self-signed-ssl-root-ca.yaml
  1. Add the below Deployment patchesJson6902 to each of your cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/ENV_NAME/PLATFORM_COMPONENT_NAME/kustomization.yaml files, except base.

  • Ensure that the rootCA.crt file name is matched with ConfigMap data, configMapGenerator files, and the patch subpath.

    - op: add
      path: /spec/template/spec/volumes/-
      value: 
        configMap:
          name: self-signed-ca-pemstore
        name: self-signed-ca-pemstore  
    - op: add
      path: /spec/template/spec/containers/0/volumeMounts/-
      value: 
        mountPath: /etc/ssl/certs/rootCA.crt
        name: self-signed-ca-pemstore
        subPath: rootCA.crt
  1. Once the changes are deployed, verify the root CA cert is available on the pod under /etc/ssl/certs with below command. Make sure to input your own POD_NAME and NAMESPACE:

 kubectl exec -it POD_NAME -n NAMESPACE -- openssl x509 -in /etc/ssl/certs/rootCA.crt -text

For further reference material, see the linked article on self-signed certificates in Kubernetes.

Release notes

This is the overview page for version 5 release notes.

v4.21 (IIS)

This page details the upgrade process for Cinchy v4.21 on IIS.

Upgrading on IIS

This process can be run when upgrading from a Cinchy version that's not v5.0+.

Prerequisites

  1. Follow this guide to take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade process

  1. Swap out the following configs with your current instance configs:

    1. Cinchy/web.config

    2. CinchySSO/appsettings.json

    3. Log4net.config

    4. Web.config

  2. Execute the following command:

iisreset -stop
  1. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  2. Execute the following command:

iisreset -start
  1. Start Cinchy in your browser.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

5.1 Release Notes

This page details the Cinchy v5.1 release notes

New Connector

  • A new connector has been added to the Cinchy Connections experience: you can now use Snowflake as a source and target connector when performing data syncs. You can review the documentation on connecting as a source here and as a target here.

We've added Snowflake's data cloud as a connector for data syncs

GraphQL API Beta Release

  • Our GraphQL API provides a complete and understandable description of your data and gives you the power to ask for exactly the data you need and nothing more, all in a single request, while leveraging the existing ecosystem of GraphQL developer tools.

  • This is a beta release that offers read-only queries. Future releases will include more query features and mutation support (writes).

Enhancements

Connections

  • Performing a data sync from Cinchy to Salesforce no longer requires write access to the sync key column. This means that you can maintain your Salesforce environment and security protocols without needing to modify them or create additional attributes for your sync to work.

  • We've introduced a new STRING_ESCAPE() function that escapes single quotes when wrapped around data sync parameters. It uses the following syntax to wrap around parameters or column references respectively: STRING_ESCAPE(@COLUMN('yourcolumn')) or STRING_ESCAPE(@yourparameter). This function is particularly useful when used in a post sync script's CQL.

Data Browser

  • We've added WCAG 2.1 AA Accessibility fixes to improve screen-reader performance and keyboard navigation accessibility.

  • We’ve implemented a new loading screen for when Cinchy is installing and initializing.

New loading screen for install and initialization.

Meta Forms

  • We've improved the performance of Meta Forms by reducing the rendering time and adding visual guides to help you see which form sections have completed loading.

  • Date fields with custom display formats will now render correctly, as opposed to showing up in mm/dd/yyyy format by default.

  • In forms that have a 1:1 parent/child hierarchy, we've added the option to render the child form as a flattened form, instead of in a table grid.

  • To improve UI consistency across Forms, the record selection drop down will now appear even if no records exist in the destination table.

CQL Updates

  • We've added a new function, GetLastModifiedBy([Column]), which will return the CinchyID of the user who last modified the specified column. For more information on this new function, review the documentation here.
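
As an illustrative sketch only (the [HR].[Employees] table and [Full Name] column are assumptions reused from other examples in this documentation), the function can be called directly in a query's SELECT list:

SELECT [Full Name], GetLastModifiedBy([Full Name]) AS 'Last Modified By'
FROM [HR].[Employees]
WHERE [Deleted] IS NULL

The returned value is the CinchyID of the modifying user, which you can cross-reference against the [Cinchy].[Users] table if you need more than the ID.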

Bug Fixes

  • We've fixed an error where scrolling in a table with a file column in certain situations prevents the UI from rendering all the data.

  • We've fixed an issue where sorting by columns with a ‘%’ in the column name caused the rows not to sort correctly in the UI.

  • We've fixed a bug in Meta Forms that prevented queries in child form filters from working as expected when using OR conditions.

Delete tables

This page outlines how to delete tables in the Cinchy platform.

Overview

You can delete tables on your Cinchy platform in three ways:

  1. Use the "Design Table" tab in the relevant table. This option is available to any user with the "Design Table" entitlement on the table.

  2. Use CQL. This option is available to any user with the "Design Table" entitlement on the table.

  3. Use the Tables table. This option is available to any user with the "Delete Row" entitlement on the table, which is usually an Administrator.

To ensure that the relevant user has the correct entitlements, you can navigate to the Data Controls > Entitlements tab of the relevant table. The "Design Table" column should be checked off (Image 1).

Deleting a table via any of the below methods results in your data becoming inaccessible; however, it will technically still be available in the underlying database. To fully remove deleted data, you must use the Data Erasure capability, outlined here.

Image 1: Ensuring the correct entitlements

Delete a table using Design Table

  1. Navigate to the table you wish to delete as a user with "Design Table" access on it.

  2. Select "Design Table" > "Info" > Delete (Image 2).

Erroneously deleted tables can be restored via the [Tables] table, as outlined here.

Image 2: Deleting using Design Table

Delete a table using CQL

  1. Navigate to the Query Builder as a user with "Design Access" on the table you wish to delete.

  2. Use the DROP TABLE statement, shown below, to delete your table.

Syntax

DROP TABLE [Domain].[Table_Name]

Example

DROP TABLE [HR].[Employees]

Erroneously deleted tables can be restored via the [Tables] table, as outlined here.

Delete a table using the Tables table

  1. Navigate to the Tables table as a user with "Delete Row" access, generally an Administrator.

  2. Find the row with the table that you want to delete.

  3. Right-click on the row > Delete

Erroneously deleted tables can be restored via the [Tables] table, as outlined here.

Commentary

Comments are used in Cinchy to provide context to your data along with providing a means of collaborating directly with and on the data.

Enter comments

Anyone who can view or edit a cell can comment on it. Any data that's read-only doesn't allow comments to be entered.

To add a comment:

  1. Locate the desired cell

  2. Right-click and select comment (Image 1).

  3. Enter the comment in the comment window

  4. Click "Comment" to finish

Edit comments

Comments can be modified only by the individual(s) who created them. To edit a comment, complete the following:

  1. Hover over the comment

  2. Click the pencil icon (Image 2).

  3. Make the appropriate edit

  4. Click the Submit button to save the change

Delete comments

Comments can be deleted only by the individual(s) who created them. To delete a comment, complete the following:

  1. Hover over the comment

  2. Click the garbage bin icon (Image 3).

Archive comments

  • A user can archive their own comments, regardless of Approve permissions.

  • A user with the Approve All permission can archive any cell comments.

  • A user with the Approve Select Cell permission can archive comments on that specific cell.

To archive all comments in a cell:

  1. Hover over the comment

  2. Click the Archive All button (Image 4).

You can also archive just one comment in a comment string by clicking the archive icon for the specific comment you wish to archive in the thread.

System Table comments

Comments are stored in the [Cinchy].[Comments] table.

v5.5 (IIS)

Upgrading on IIS

The following process can be run when upgrading any v5.x instance to v5.5 on IIS.

Warning: This release requires you to run the Cinchy Upgrade Utility. Please review and follow the directives for Upgrade 5.5 here.

Additionally, if you are upgrading from Cinchy v5.1 or lower to Cinchy v5.5, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue on with your 5.5 upgrade.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString in your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

Ex:

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

  3. Download .NET 6.0.

Upgrade process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. If you are upgrading to 5.5 on an SQL Server Database and didn't do so in any previous updates, you will need to make a change to your connectionString in both your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

    Ex:

    "SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

  3. Execute the following command:

iisreset -stop

  4. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  5. Execute the following command:

iisreset -start

  6. Open your Cinchy URL in your browser.

  7. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

Data controls

You can find the Data Controls menu on the left-hand navigation bar of a table.

From here you may select to:

  • Change your

v5.2 (Kubernetes)

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.

If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.2, please review and follow the directives for Upgrade 5.2 using the Cinchy Utility.

Configure to the newest version

  1. Navigate to your Cinchy devops.automation repository

    1. Navigate to your deployment.json (You may have renamed this during your original Kubernetes deployment)

    2. In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version that you wish to deploy (Ex: v5.1.0 > 5.2.0).

  // The component image tags are specified below to define which versions to deploy
  "connections_image_tag": "v5.2.0",
  "event_listener_image_tag": "v5.2.0",
  "idp_image_tag": "v5.2.0",
  "maintenance_cli_image_tag": "v5.2.0",
  "meta_forms_image_tag": "v5.2.0",
  "web_image_tag": "v5.2.0",
  "worker_image_tag": "v5.2.0"

  2. Rerun the deployment script by using the following command in the root directory of your devops.automations repository:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"

  3. Commit and push your changes.

Apply your configurations

If your environment isn't set up to automatically apply upon configuration, complete the following to apply the newest version:

  1. Navigate to the ArgoCD portal.

  2. Refresh your component(s). If that doesn't work, re-sync.

Deploying Cinchy

Introduction

This section guides you through the deployment process for Cinchy version 5: from planning all the way through to installation and upgrades.

  • If you are looking to deploy Cinchy v5, please start here and read through all the sub-pages: Deployment Planning Overview and Checklist.

    • Once you have familiarized yourself with the above documentation, you may move on to either of the below guides, depending on your preference: Kubernetes Deployment Installation or IIS Deployment Installation.

  • If you are a customer currently on v4 and want to upgrade to v5, start here: Upgrading from v4 to v5.

If you have any questions about the processes outlined in this section, please reach out to the Cinchy Support team:

  • Via email: [email protected]

  • Via phone: 1-888-792-6051

  • Through the support portal: Support Portal

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
iisreset -stop 
iisreset -start 
Please review and follow the directives for Upgrade 5.5 here.
using the Cinchy Utility
TrustServerCertificate=True
new build
Download .NET 6.0
TrustServerCertificate=True
Image 1: Commenting
Image 2: Editing Comments
Image 3: Deleting Comments
Image 4: Archiving

v5.6 (Kubernetes)

This page details the instructions for upgrading your Cinchy platform to v5.6 on Kubernetes

Upgrading on Kubernetes

When it comes time to upgrade your various components, you can do so by following the below instructions.

If you have made custom changes to your deployment file structure, please contact your Support team prior to upgrading your environments.

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2.

If you are upgrading from Cinchy v5.3 or lower to Cinchy v5.6 on an SQL Server Database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

For a Kubernetes deployment, you can add this value in your deployment.json file:

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"}

Warning: If you are upgrading from Cinchy v5.4 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.5) using the Cinchy Utility and deploy version 5.5.

Prerequisites

  • Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.6 k8s-template.zip file.

  • Review the template changes for this upgrade.

Configuring to the newest version

  1. Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.

  2. Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file.

If you have a cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, please keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  3. Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file.

  4. Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and your deployment.json.

  5. Open the new Cinchy v5.6 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.

  6. Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.

Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you swap up the name wherever it appears in this document.

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync (see the example after the list below).

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

Perform this step only if you are upgrading to 5.6 on an SQL Server Database and didn't already make this change in any previous updates.

Navigate to your cinchy_instance_configs section > database_connection_string, and add in the following value to the end of your string: TrustServerCertificate=True

"cinchy_instance_configs": {
          "database_connection_string": "User ID=cinchy;Password=<password>;Host=<db_hostname>;Port=5432;Database=development;Timeout=300;Keepalive=300;TrustServerCertificate=True"},
  7. Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  8. Commit all of your changes (if there were any) in each repository.

  9. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.

    1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

    2. Execute the following command to deploy ArgoCD:

bash deploy_argocd.sh
  10. If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:

bash deploy_cluster_components.sh
  11. If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:

bash deploy_cinchy_components.sh
  12. Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.

Appendix A

Template changes (Kubernetes 5.6)

  • The AWS EKS version has been upgraded to support up to v1.24.

  • We've added support for AWS EKS EBS volume encryption. By default EKS worker nodes will have gp3 storage class.

    • For current Cinchy environments, you must keep your eks_persistent_apps_storage_class set to gp2 in your DevOps automation aws.json file.

    • If you want to move to gp3 storage or gp3 storage and volume encryption, you will have to delete any existing volumes/pvc's for Kafka, Redis, OpenSearch, Logging Operator and Event Listener with statefulset. This ensures that ArgoCD will take care of recreation of resources.

    • If your Kafka cluster pods aren't coming back you must restart your Kafka operators.

    • You can verify the change by running the following command: "kubectl get pvc --all-namespaces".

  • The Connections app has changed from StatefulSet to Deployment. The persistence volume has changed to emptyDir.

  • We've modified the replica count from 1 to 2 for istiod and istio ingress.

  • We've disabled the ArgoCD namespace: istio injection.

    • If this is already enabled on your environment, you may keep it as is: keep the cinchy.kubernetes/cluster_components/servicemesh/istio/istio-injection/argocd-ns.yaml file as it is, without commenting out its content.

  • The Istio namespace injection has been removed.

    • If this is already enabled on your environment, please keep it as is; otherwise it will force you to redeploy all of your Kubernetes application components.

  • We've upgraded the AWS Secret Manager CSI Driver to the latest version due to crashing pods.

  • We've added support for the EKS EBS CSI driver in lieu of using in-tree EBS storage plugin.

  • We've changed the EKS Metrics server port number in order to support newer versions of Kubernetes.

  • We've set a fixed AWS Terraform provider version for all components.

  • We've installed the cluster autoscaler from local charts instead of remote charts.

  • The deprecated azurerm_sql_server Terraform resource has been changed to azurerm_mssql_server

  • The deprecated azurerm_sql_database resource has been changed to azurerm_mssql_database

  • The deprecated azurerm_sql_failover_group has been changed to azurerm_mssql_failover_group

  • The deprecated azurerm_sql_firewall_rule has been changed to azurerm_mssql_firewall_rule

Deployment architecture

This page provides an overview for the deployment architecture of Cinchy v5.

Kubernetes vs IIS

When choosing to deploy Cinchy version 5, you must decide whether to deploy via Kubernetes or on a VM (IIS).

Kubernetes is an open-source system that manages and automates the full lifecycle of container-based applications. You now have the ability to deploy Cinchy v5 on Kubernetes, which helps to simplify your deployment and enhance your scaling. Kubernetes can maximize your container capacity and scale up/down with your current operations.

  • Grafana, OpenSearch, OpenSearch Dashboard: Working together, these three applications provide a visual logging dashboard for all the information coming in from your database pods. This streamlines your search for information by putting the control into your hands and compiling your logs in one easy to access place — you can now write a query against all your logs, in all your environments. You will have access to a default configuration out of the box, but you can also customize your dashboards as well.

  • Prometheus: With the Kubernetes addition, you now have access to Prometheus for your metrics dashboard. Prometheus records real-time metrics in a time series database used for event monitoring and alerting. You can then create custom dashboards to display your data in an easy to use visual that makes reporting on your metrics easy, and even set up push alerts based on your custom needs.

You also have the option to run Cinchy on Microsoft IIS, which was the traditional deployment method before Cinchy v5. Internet Information Services (IIS) for Windows Server is a flexible, secure and manageable Web server for hosting anything on the Web.

We recommend using Kubernetes to deploy Cinchy v5, because of the robust features that you can leverage, such as improved logging and metrics. Kubernetes helps scale your Cinchy instances and lower your costs by using PostgreSQL.

Choose a database

Before deploying Cinchy v5, you must select which database you want to use.

The following list outlines the supported databases for Kubernetes Deployments.

For IIS Deployments please review the architecture requirements here.

Microsoft SQL Server

MS SQL Server

  • Microsoft SQL Server is a relational database management system. As a database server, it's a software product with the primary function of storing and retrieving data as requested by other software applications—which may run either on the same computer or on another computer across a network.

  • Resource limits

Azure SQL Database

  • Microsoft Azure SQL Database is a managed cloud database provided as part of Microsoft Azure. It runs on a cloud computing platform, and access to it is provided as a service. Managed database services take care of scalability, backup, and high availability of the database.

  • Differences between Azure SQL and Azure Managed SQL

  • Resource Limits

Azure Managed SQL Instance

  • SQL Managed Instance is a managed, cloud-based, always up-to-date SQL instance that combines broad SQL Server engine compatibility with the benefits of a fully managed PaaS.

  • Differences between Azure SQL and Azure Managed SQL

  • Resource Limits

Amazon Aurora

Amazon Aurora

  • Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora is part of the managed database service Amazon Relational Database Service (Amazon RDS). Amazon RDS is a cloud web service that makes it easier to set up, operate, and scale a relational database.

  • Resource Limits

PostgreSQL

PostgreSQL

  • PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance. PostgreSQL comes with features aimed to help developers build applications, administrators to protect data integrity and build fault-tolerant environments, and help you manage your data no matter how big or small the dataset.

AWS RDS for PostgreSQL

  • Amazon RDS makes it easy to set up, operate, and scale cloud-based PostgreSQL deployments. With Amazon RDS, you can deploy scalable PostgreSQL deployments with cost-efficient and resizable hardware capacity. Amazon RDS manages complex and time-consuming administrative tasks such as PostgreSQL software installation and upgrades; storage management; replication for high availability and read throughput; and backups for disaster recovery. Amazon RDS for PostgreSQL gives you access to the capabilities of the familiar PostgreSQL database engine.

  • Resource Limits

Azure PostgreSQL Database

  • This is a fully managed and intelligent Azure Database for PostgreSQL. Enjoy high availability with a service-level agreement (SLA) up to 99.99 percent, AI-powered performance optimization, and advanced security. A fully managed database that automates maintenance, patching, and updates. Provision in minutes and independently scale compute or storage in seconds.

  • Resource Limits

Sizing considerations and requirements

Before deploying Cinchy v5, you need to define your sizing requirements.

Sizing

Kubernetes sizing

Cluster sizing recommendations vary and are dependent on a number of deployment factors. We've provided the following general sizing recommendations, but encourage you to explore more personalized options.

CPU: 8 Cores

Memory: 32GB Ram

Number of Servers: 3

AWS: m5.2xlarge

Azure: D8 v3

IIS sizing

For sizing recommendations and prerequisites about an IIS deployment, please review the IIS deployment prerequisites.

Application storage requirements

If you are choosing to deploy Cinchy v5 on IIS, then you need to ensure that your VM disks have enough application storage to run your clusters.

Object storage requirements

Cinchy supports both Amazon S3 and Azure Blob Storage.

If you are using Terraform for your Kubernetes deployment, you will need to manually set up a new S3 compatible bucket to store your state file. You will also need a bucket for Connections, to store error files created during data syncs.

You will create your two S3 compatible buckets using either Amazon or Azure. Ensure that you use the following convention when naming your buckets so that the automation script runs correctly: <org>-<component>-<cluster>. These bucket names will be referenced in your configuration files when you deploy Cinchy on Kubernetes.

Example Terraform Bucket: cinchy-terraform-state

Example Connection Bucket: cinchy-connections-cinchy-nonprod

S3 provides unlimited scalability and it charges only for what you use/how much you store on it, so there are no sizing definitions.

  • How to set up an Amazon S3 bucket

  • How to set up Azure Blob Storage

Upgrading from v4 to v5

Upgrading to v5 on Kubernetes

When you upgrade to Cinchy v5 on Kubernetes you are creating a separate instance. You will need to plan the migration of your database, and then follow the deployment installation guide.

Upgrading to v5 on IIS

To upgrade from v4+ to v5+ on IIS, review the documentation here.

Compatibility issues between databases or versions

If you choose to upgrade to Cinchy v5 on PostgreSQL, please review the following features that aren't currently available in that deployment.

These may be implemented in future versions of the platform.

  • No Partitioning

  • No Geospatial features

  • No Column Indexing

  • No Full Text Indexing

  • No Column Level Encryption

You may also wish to review the list of CQL functions currently unsupported in PGSQL. Please note that this is a living document and that the development team is actively working on function translations between databases. Make sure to check back for the most up to date information.

  • CQL Functions

CinchyDXD

This page guides you through the Cinchy Data Experience Deployment Utility.

Introduction to CinchyDXD

CinchyDXD is a downloadable utility used to move Data Experiences (DX) from one environment to another. This includes any and all objects and components that have been built for, or are required in support of, the Data Experience.

The following sections in this document will outline the basics of how to build, export, and install a DX.

Items of note moving forward in this document:

  • Source Environment is the environment in which the DX is built.

All objects need to be created in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).

  • Target Environment is the environment in which the DX will be installed.

  • The example DX is a simple Currency Converter DX that consists of

    • One (1) table

    • One (1) query

  • This example doesn't include the following:

    • NO applets

    • NO integrated clients

    • NO Data Sync Configurations

    • NO Reference Data

    • NO Models

    • NO Groups

    • NO System Colours

    • NO Formatting Groups

    • NO Literal Groups

Future iterations of this document will add to this example's complexity level.

Steps

The general steps to deploying the CinchyDXD Utility are as follows:

  • Build the data experience

  • Package the data experience

  • Install the data experience

  • Update the data experience

  • Repackage the data experience

  • Reinstall the data experience

Data Entitlements
Erase your Data
Compress your Data

Build the data experience

This page outlines Step 1 of Deploying CinchyDXD: Building the Data Experience

Remember that you must create all objects in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).

Table Creation

Create your data experience (DX) in a virtual data network.

  1. Log in to Cinchy: URL: <Cinchy source URL> User ID: <source user id> Password: <source password>

  2. From the Homepage, select Create

  3. Select Table > From Scratch

  4. Create the table with the following properties (Image 1).

Table Details

Values

Table Name

Currency Exchange Rate

Icon + Colour

Choose your own icon

Domain

Sandbox (if the domain doesn't exist, create it)

To create a domain on the fly:

  1. Enter domain name in Domain field

  2. Hit enter on keyboard

  3. On the Confirm Domain window, click Yes

Description

This table is a test table for building and deploying a data experience for currency conversion

Image 1: Creating your Table
  5. Click Columns in the left hand navigation to create the columns for the table.

  6. Click the “Click Here to Add a column” tab to add a column.

Column Details
Values

Column 1

Column Name: Currency 1

Data Type: Text

Advanced Settings:

  • Select Mandatory

  • Leave all other defaults

Column 2

Column Name: Currency 2

Data Type: Text

Advanced Settings:

  • Select Mandatory

  • Leave all other defaults

Column 3

Column Name: Rate

Data Type: Number

Advanced Settings:

  • Set Decimal Places to 4

  • Select Mandatory

  • Leave all other defaults

8. Click the Save button to save your table

Enter sample data

  1. In your newly created table, enter the following sample data (Image 2).

Currency 1
Currency 2
Rate

CAD

USD

0.71

USD

CAD

1.40

Image 2: Filling in your table

Create the query

Create a simple query that pulls information from the Currency Exchange Rate table that will allow a simple currency exchange calculation.

  1. From the Homepage, select Create.

  2. Select Query.

  3. In the query builder locate the Currency Exchange Rate table and drag it to the “FROM” line (Image 3).

You will find the Currency Exchange Rate table in the “Sandbox” domain. To expand the “Sandbox” domain, click on the gray arrow (or double click)

Image 3: Creating your query
  4. In the “SELECT” line drag and drop the “Rate” column and enter in the following (Image 4): SELECT [Rate] * @Amount AS 'Converted Amount'

You will find the Rate column by expanding the Currency Exchange Rate table, similarly to expanding the “Sandbox” domain

Image 4: Creating your query, cont.
  5. Enter in the following for the WHERE clause (Image 5):

WHERE [Deleted] IS NULL AND [Currency 1] = @Currency_1 AND [Currency 2] = @Currency_2

Image 5: Creating your query, cont.
  6. Click the Execute (or play) icon to run the query (Image 6):

Image 6: Creating your query, cont.
  7. Test the query by entering in the following and clicking the submit button (Image 7):

@Amount: 100 @Currency_1: CAD @Currency_2: USD

Image 7: Testing your query
  8. Save the Query by clicking on the Info tab (Left Navigation).

  9. Enter in the following details for the query (Image 8):

Query Details
Values

Query Name

Currency Converter

Icon + Colour

Choose your own icon

Return

Query Results (Approved Data Only)

Domain

Sandbox

API Result Format

JSON

Description

This query is a test query for building and deploying a data experience for currency conversion

Image 8: Adding in info about your query
  10. Click the Save button (Image 9).

Image 9: Saving your query

v5.6 (IIS)

Upgrading on IIS

The following process can be run when upgrading any v5.x instance to v5.6 on IIS.

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2.

If you are upgrading from Cinchy v5.3 or lower to v5.5+ on an SQL Server Database, you will need to make a change to your connectionString in your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

Ex:

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

Warning: If you are upgrading from Cinchy v5.4 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.5) using the Cinchy Utility and deploy version 5.5.

The upgrade of any version to Cinchy v5.6 requires changes to be made to various App Setting files. See section 1.2, step 3, for further details.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

  3. Download .NET 6.0.

Upgrade process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. If you are upgrading to 5.6 on an SQL Server Database and didn't do so in any previous updates, you will need to make a change to your connectionString in both your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

    Ex:

    "SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

  3. When upgrading to 5.6, you are required to make the following changes to various appsettings.json files:

CinchySSO\appsettings.json

Navigate to your CinchySSO\appsettings.json file and make the following changes:

  • ADD the following value:

    • "StsPrivateOriginUri" - This should be the private base URL used by the .well-known discovery. If left blank will match the request URL. /cinchysso

Cinchy\appsettings.json

Navigate to your Cinchy\appsettings.json file and make the following changes:

  • REMOVE the following values:

    • "StsAuthorityUri"

    • "RequireHttpsMetadata"

  • ADD the following values:

    • "StsPrivateAuthorityUri" - This should match your private Cinchy SSO URL.

    • "StsPublicAuthorityUri" - This should match your public Cinchy SSO URL.

    • "CinchyPrivateUri" - This should match your private Cinchy URL.

    • "CinchyPublicUri" - This should match your public Cinchy URL.

Worker Directory appsettings.json

Navigate to your appsettings.json file within your Cinchy Worker directory and make the following changes:

  • ADD a new section titled CinchyClientSettings, following the below code snippet as a guide:

  • REMOVE the following:

    • "AuthServiceDomain"

    • "UseHttps"

Event Listener Directory appsettings.json

Navigate to your appsettings.json file within your Cinchy Listener directory and make the following changes:

  • ADD a new section titled CinchyClientSettings, following the below code snippet as a guide:

  • REMOVE the following:

    • "StateFileLocation"

    • "Path"

  4. Execute the following command:

iisreset -stop

  5. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  6. Execute the following command:

iisreset -start

  7. Open your Cinchy URL in your browser.

  8. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

Indexing & partitioning

This page outlines the indexing and partitioning options on your tables.

Indexing

Use indexing to improve query performance on frequently searched columns within large data sets. Without an index, Cinchy begins a data search with the first row of a table and then follows through the entire table sequentially to find all relevant rows. The larger the table(s), the slower the search.

If the table you are searching for has an index for its column(s), however, Cinchy is able to search much quicker.

In the below example, we will set up a query for a Full Name field. When you create an index for that field, an indexed version of your table is created that's sorted sequentially/alphabetically.

When you run your query on this index, that table will be searched using a binary search.

A binary search won't start from the top record. It will check the middle record against your search criteria for a match. If a match is not found, it will check whether the found value is larger or smaller than the desired value. If smaller, it reruns the data check with the top half of the data, finding the median record. If larger, it reruns the data check with the bottom half of the data, finding the median record. It repeats this until your data is found.

Set up an index

This example uses a table with employee names (Image 1). We want to search for John Smith, using the Full Name column.

  1. To set up your index, select Design Table from the left navigation tab.

  2. Click Indexes (Image 2).

  1. Select "Click Here to Add" and fill out the following information for a new index. Click save when done (Image 3):

  • Index Name.

  • Select the column(s) to add to your index. In this example, you want to select the Full Name column to be indexed.

    • You can select more than one column per index.

  • Select the Included column(s) to add to your index, if applicable.

    • The difference between regular columns and Included columns is that indexes with included columns provide the greatest benefit when covering your query because you can include all columns your query may reference, such as columns with data types, numbers, or sizes not allowed as index key columns.

    • For more on Included Columns, click here.

  4. You can now query the full name column for John Smith and receive the results quicker than if you hadn't set up the index (Image 4).

Note that there is no UI change in the query builder or your results when running a query on an indexed column. The difference will be in the speed of your returned results.
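
As a point of reference, a query against the indexed column might look like the following sketch; the [Sandbox].[Employees] domain and table name are assumptions used only for illustration:

SELECT [Full Name]
FROM [Sandbox].[Employees]
WHERE [Deleted] IS NULL
AND [Full Name] = 'John Smith'

The query text is identical with or without the index; only the execution plan, and therefore the response time, changes.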

Full-text indexing

A full-text index is a special index type that provides index access for full-text queries against character or binary column data. A full-text index breaks the column into tokens and these tokens make up the index data.

Set up a full-text index

  1. Click on Design Table > Full-text Index

  2. Add in the desired column(s) and click save when done (Image 5).

Columnar indexing

Columnar Indexing (also known as Columnstore indexing) is available when running SQL Server 2016+. It's not currently available on a PostgreSQL deployment of the Cinchy platform.

Columnar indexes are used for storing and querying large tables. This index uses column-based data storage and query processing to improve query performance. Instead of rowstore or b-tree indexes, where the data is logically and physically organized and stored as a table with rows and columns, the data in a columnstore index is physically stored in columns and logically organized in rows and columns.

You may want to use a columnar index when:

  • Your table has over 1 million records.

  • Your data is rarely modified. Having large numbers of deletes can cause fragmentation, which adversely affect compression rates. Updates are also processed as deletes followed by inserts, which will adversely affect the performance of your loading process.

Set up columnar indexing

  1. Click on Design Table > Columnar Index

  2. Add in the desired column(s) and click save when done (Image 6).

When using a Columnar Index, you won't be able to add any new columns to your table. You will need to delete the index, add your column(s), and then re-add the index.

Partitioning data

Partitioning data in a table is essentially organizing and dividing it into units that can then be spread across more than one file in a database. The benefits of this are:

  • Improved efficiency of accessing and transferring data while maintaining its integrity.

  • Maintenance operations can be performed on one or more partitions more efficiently.

  • Query performance is improved based on the types of queries most frequently run.

When creating a partition in Cinchy, you use the values of a specified column to map the rows of a table into partitions.

Set up a partition

This example sets up a partition that divides the employees based on a Years Active column (Image 7). You want to divide the data into two groups: those who have been active for two years or more, and those who have only been active for one year.

  1. Click on Design Table > Partition

  2. Fill in the following information and click save when done (Image 8):

  • Partitioning Column: this is the column value that will be used to map your rows. This example uses the Years Active column.

  • Type: Select either Range Left (which means that your boundary will be <=) or Range Right (where your boundary is only <). In this example we want our boundary to be Range Left.

  • Add Boundary: Add in your boundary value(s). Click the + key to add it to your boundary list. In this example we want to set our boundary to 2.

Once set up, this partition will organize the data into two groups, based on our boundary of those who have a Years Active value of two or above.

  3. You can now run a query on your partitioned table (Image 9).

Note that there is no UI change in the query builder or your results when running a query on a partitioned table. The difference will be in the speed of your returned results.
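
For illustration, a query that benefits from this partition might look like the following sketch; the [Sandbox].[Employees] domain and table name are assumptions used only for this example:

SELECT [Full Name], [Years Active]
FROM [Sandbox].[Employees]
WHERE [Deleted] IS NULL
AND [Years Active] >= 2

Because the boundary was defined on [Years Active], rows matching this filter can be read from a single partition rather than scanning the entire table.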

For more information on creating, modifying, or managing partitioning, please visit Microsoft's Partitioned table and Indexes documentation.

5.3 Release Notes

This page details the release notes for Cinchy v5.3


For instructions on how to upgrade to the latest version of Cinchy, see here.

New Connector

Kafka

We're continuing to improve our Connections offerings, and we now support Kafka as a data sync target in Connections.

Apache Kafka is an end-to-end event streaming platform that:

  • Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.

  • Stores these events durably and reliably for as long as you want.

  • Processes and reacts to the event streams in real-time and retrospectively.

Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time, for your key use cases.

For information on setting up data syncs with Kafka as a target, please review the documentation here.

New Inbound Data Format for Connections

Apache AVRO

We've also added support for Apache AVRO (inbound) as a data format and added integration with the Kafka Schema Registry, which helps enforce data governance within a Kafka architecture.

Avro is an open source data serialization system that helps with data exchange between systems, programming languages, and processing frameworks. Avro stores both the data definition and the data together in one message or file. Avro stores the data definition in JSON format making it easy to read and interpret; the data itself is stored in binary format making it compact and efficient.

Some of the benefits for using AVRO as a data format are:

  • It's compact;

  • It has a direct mapping to/from JSON;

  • It's fast;

  • It has bindings for a wide variety of programming languages.

For more about AVRO and Kafka, read the documentation here.

For information on configuring AVRO in your platform, review the documentation here.

Custom Results in the Network Map

Focus the results of your Network Map to show only the data that you really want to see with our new URL parameters.

You can now add Target Node, Depth Level, and Max Depth Level Parameters, if you choose.

Example: <base url>/apps/datanetworkvisualizer?targetNode=&maxDepth=&depthLevel=

  • Target Node: The Target Node parameter defines which of your nodes will be the central node from which all connections branch.

    • Target Node uses the TableID number, which you can find in the URL of any table.

    • Example: <base url>/apps/datanetworkvisualizer?targetNode=8 will show TableID 8 as the central node

  • Max Depths: This parameter defines how many levels of network hierarchy you want to display.

    • Example: <base url>/apps/datanetworkvisualizer?maxDepth=2 will only show you two levels of connections.

  • Depth Level: Depth Level is a UI parameter that will highlight/focus on a certain depth of connections.

    • Example: <base url>/apps/datanetworkvisualizer?DepthLevel=1 will highlight all first level network connections, while the rest will appear muted.

The below example visualizer uses the following URL: <base url>/apps/datanetworkvisualizer?targetNode=8&maxDepth=2&depthLevel=1

  • It shows Table ID 8 ("Groups") as the central node.

  • It only displays the Max Depth of 2 connections from the central node.

  • It highlights the nodes that have a Depth Level of 1 from the central node.

Enhancements

  • We've increased the length of the [Parameters] field in the [Cinchy].[Execution Log] to 100,000 characters.

  • Two new parameters are now available to use in real time syncs that have a Cinchy Table as the target. @InsertedRecordIds() and @UpdatedRecordIds() can be used in post sync scripts to reference the inserted and updated Record IDs respectively, provided as a comma separated list.
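
As a rough sketch of how these parameters might be used in a post sync script (the [Needs Review] column is a hypothetical addition to the Currency Exchange Rate example table, and the exact behaviour should be confirmed against your own sync configuration):

UPDATE [Sandbox].[Currency Exchange Rate]
SET [Needs Review] = 1
WHERE [Cinchy Id] IN (@InsertedRecordIds())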

Bug Fixes

  • We've fixed a bug that was preventing some new SSO users belonging to existing active directory groups from seeing tables that they should have access to.

  • We've fixed a bug where a Webhook would return a 400 error if a JSON body was provided, and the key was in the query parameter of the HTTP request.

GraphQL

  • We continue to optimize our GraphQL beta capabilities by improving memory utilization and performance.

Repackage the data experience

This page outlines Step 5 of Deploying CinchyDXD: Repackaging the Data Experience

Introduction

After you have made any applicable changes to your DX, you must re-export the package out of your source environment.

1. Update the data experience table

If you have added or removed any of the following while updating your DX, you will need to update the Data Experience Definition table:

  • Name

  • Tables

  • Views

  • Integrated Clients

  • Data Sync Configurations

  • Listener Configurations

  • Secrets

  • Reference Data

  • User Defined Functions

  • Models

  • Groups

  • System Colours

  • Saved Queries

  • Pre-Install Scripts

  • Post-Install Scripts

  • Applets

  • Literal Groups

  • Webhooks

  • Builders

  • Builder Groups

  • Sync GUID

2. Update reference data table

If you have added or removed any of the following while updating your DX, you will need to update the Data Experience Reference Data table:

  • Name

  • Ordinal

  • Filter

  • New Records

  • Changed Records

  • Dropped Records

  • Table

  • Sync Key

  • Expiration Timestamp Field

  • Sync GUID

3. Re-run CinchyDXD export

Using PowerShell you will now export the Data Experience you have defined within Cinchy.

  1. Launch PowerShell and navigate to your CinchyDXD folder

You can launch PowerShell right from your file explorer window in the CinchyDXD folder, saving you an extra step of navigating to the CinchyDXD folder manually in PowerShell.

2. In the PowerShell window, type in cin and hit tab on your keyboard.

3. In the PowerShell command line, next to .\CinchyDXD.ps1, type in export.

4. Hit Enter on your keyboard.

If you don't remember the mandatory parameters, you can hit Enter on your keyboard after typing in .\CinchyDXD.ps1 export; PowerShell will provide you with the required and optional components to export the data experience.

5. You must now enter your mandatory export parameters.

The parameters executed in PowerShell can exist on one line, but for legibility (below) the parameters are on separate lines. If you are putting your parameters on separate lines, you will be required to use the backtick character ` for the parameters to execute.

You will need to update your version number

Sample: .\CinchyDXD.ps1 export ` -s "<source Cinchy url>" ` -u "<source user id>" ` -p "<source password>" ` -c "C:\Cinchy CLI v4.0.2" ` -d "C:\CLI Output Logs" ` -g "8C4D08A1-C0ED-4FFC-A695-BBED068507E9" ` -v "2.0.0" ` -o "C:\CinchyDXD_Output" `\

  6. Enter the export parameters into the PowerShell window (Image 1).

  7. Hit Enter on your keyboard to run the export command.

PowerShell will begin to process the export. Once the export is complete, PowerShell will provide you with an export complete message (Image 2).

4. Validate the Export

  1. Ensure that the DXD Export Folder is populated (Image 3).

  2. Ensure that the Data Experience Release table is populated in the source environment (Image 4).

  3. Ensure that the Data Experience Release Artifacts table is populated in the source environment (Image 5).

IIS architecture

This page details the deployment architecture of Cinchy v5 when running on a VM.

Component diagram

The below diagram shows a high level overview of Cinchy's Infrastructure components when deploying on IIS.

Some components and configurations are dependent on the platform usage. The table below provides a description of each component.

Tip: Click on an image to enlarge it.

Component overview

Component
Description
Technology Stack
Dependencies

Table and column GUIDs

Overview

A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique identification. Both Cinchy Tables and Columns have a GUID.

This feature is particularly useful when deploying between Cinchy instances.

For example, in a model deployment, you must have matching GUIDs on your columns in order for them to properly load between environment A and environment B. There might be times when these GUIDs don’t automatically match, however, such as if you manually added a new column to environment B and also manually added it to environment A.

In this case, the two columns would have different GUIDs, and the model deployment would fail. With this new feature, however, you can match up conflicting GUIDs to properly load your model.

1. Viewing and Editing GUIDs

You have the ability to display and edit these table and column GUIDs within the Design Table screen.

Table GUIDs must be unique to your specified environment.

Column GUIDs must be unique to the table.

  1. Table GUIDs can be found under Design Table > Info > GUID (Image 1).

  2. Click on the pencil icon to edit the GUID.

GUIDs must adhere to supported formats for Cinchy.

  • 32 hexadecimal (base-16) digits.

  • Displayed in five groups, separated by hyphens.

  • The groups take the form of 8-4-4-4-12 characters, for a total of 36 characters (32 hexadecimal characters and 4 hyphens).

Example: 123e4567-e89b-12d3-a456-426614174000

Warning: Changing the value may have damaging effects, proceed with caution.

3. Column GUIDs can be found under Design Table > Columns > GUID (Image 2).

4. Click on the pencil icon to edit the GUID.

GUIDs must adhere to supported formats for Cinchy.

  • 32 hexadecimal (base-16) digits.

  • Displayed in five groups, separated by hyphens.

  • The groups take the form of 8-4-4-4-12 characters, for a total of 36 characters (32 hexadecimal characters and 4 hyphens).

Example: 123e4567-e89b-12d3-a456-426614174000

Warning: Changing the value may have damaging effects, proceed with caution.

v5.2 (IIS)

This page details the upgrade process for Cinchy v5.2 on IIS.

Upgrading on IIS

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.2 or higher, you must run a mandatory process (Upgrade 5.2) using the Cinchy Utility.

The following process can be run when upgrading any v5.x instance to v5.2 on IIS.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. Execute the following command:

iisreset -stop

  3. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  4. Execute the following command:

iisreset -start

  5. Open your Cinchy URL in your browser.

  6. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
    "AppSettings": {
      "CinchyUri": "http://localhost",
      "CertificatePath": "C:\\inetpub\\wwwroot\\cinchysso\\cinchyidentitysrv.pfx",
      "CertificatePassword": "",
      "SAMLClientEntityId": "",
      "SAMLIDPEntityId": "",
      "SAMLMetadataXmlPath": "",
      "SAMLSSOServiceURL": "",
      "SAMLEncryptedCertificatePath": "",
      "SAMLEncryptedCertificatePassword": "",
      "SAMLSignCertificatePath": "",
      "SAMLSignCertificatePassword": "",
      "HstsMaxAge": 2592000,
      "HstsIncludeSubDomains": false,
      "HstsPreload": false,
      "SAMLSignCertificateMinAlgorithm": "",
      "SAMLSignCertificateSigningBehaviour": "",
      "AcsURLModule": "",
      "StsPublicOriginUri": "",
      // Add in the below "StsPrivateOriginUri".
      //This should be the private base URL used by the .well-known discovery.
      // If left blank will match the request URL. /cinchysso
      "StsPrivateOriginUri": "",
      "MaxRequestHeadersTotalSize": 65536,
      "MaxRequestBufferSize": 65536,
      "MaxRequestBodySize": -1,
      "MachineKeyXml": "",
      "DpApiKeyRingPath": "",
      "TlsVersion": "",
      "CinchyAccessTokenLifetime": "7.00:00:00",
      "DataChangeCallbackTimeout": 7,
      "RefreshCacheTimeInMin": 10,
      "DefaultExpirationCacheTimeInMin": 360,
      "DBType": "PostgreSQL"
    "AppSettings": {
    // Add the below "StsPrivateAuthorityUri" value.
    // This should match your private Cinchy SSO URL.
      "StsPrivateAuthorityUri": "",
   // Add the below "StsPublicAuthorityUri" value.
   // This should match your public Cinchy SSO URL.
      "StsPublicAuthorityUri": "",
   // Add the below "CinchyPrivateUri" value.
   // This should match your private Cinchy URL.
      "CinchyPrivateUri": "",
   // Add the below "CinchyPublicUri" value.
   // This should match your public Cinchy URL.
      "CinchyPublicUri": "",
      "AllowLogFileDownload": false,
      "LogDirectoryPath": "C:\\CinchyLogs\\CinchyWeb",
      "SSOLogPath": "C:\\CinchyLogs\\CinchySSO\\log.json",
      "UseHttps": true,
      "HstsMaxAge": 2592000,
      "HstsIncludeSubDomains": false,
      "HstsPreload": false,
      "TlsVersion": "",
      "RouteDebuggerEnabled": false,
      "RefreshCacheTimeInMin": 10,
      "DefaultExpirationCacheTimeInMin": 360,
      "DBType": "PostgreSQL",
      "StorageType": "Local", // Local | S3 | AzureBlobStorage
      "MaxRequestBodySize": 1073741824 // 1gb
    },
{
  "CinchyClientSettings": {
    "Url": "",      // Cinchy Url
    "Username": "", // For Cinchy v4 only, remove otherwise
    "Password": ""  // For Cinchy v5, this should be the password for the user [email protected]. For v4 this will be the desired user's password.
  },
  "CinchyClientSettings": {
    "Url": "", // Cinchy Url
    "Username": "", // For Cinchy v4, remove otherwise
    "Password": "" // For Cinchy v5, this should be the password for the user [email protected]. For v4 this will be the desired user's password.
  }
Image 1: An example table
Image 2: Select Indexes from the list
Image 3: An example index
Image 4: An example query on an indexed column
Image 5: Full text indexing
Image 6: Columnar Indexing
Image 7: Partitioning
Image 8: Creating your Partition
Image 9: An example query on a partitioned table
Image 1: Step 6
Image 2: Step 7
Image 3: Step 1
Image 4: Step 2
Image 5: Step 3

Cinchy Web Application

This is the primary application for Cinchy, providing both the UI for end users as well as the REST APIs that serve application integration needs. The back-end holds the engine that powers Cinchy's data / metadata management functionality.

ASP.NET MVC 5

.NET Framework 4.7.2+, IIS 7.5+, Windows Server 2012 or later

Cinchy IdP

This is an OpenID Connect / OAuth 2.0 based Identity Provider that comes with Cinchy for authenticating users. Cinchy supports user group management directly on the platform, but can also connect into an existing IdP available in the organization if it can issue SAML tokens. Optionally, Active Directory groups may be integrated into the platform. When using SSO, this component federates authentication to the customer's SAML enabled IdP. This centralized IdP issues tokens to all integrated applications including the Cinchy web app as well as any components accessing the REST based APIs.

.Net Core 2.1

.NET Framework 4.7.2+, IIS 7.5+, Windows Server 2012 or later

Cinchy Database

All data managed on Cinchy is stored in a MS SQL Server database. This is the persistence layer.

MS SQL Server Database

Windows Server 2012 or later, MS SQL Server 2012 or later

Cinchy CLI

The CLI offers utilities to get data in and out of Cinchy. It has tools to sync data from a source into a table in Cinchy. It can operate on large datasets with built-in partitioning capability and performs a reconciliation to determine differences before applying changes. Another utility is the data export, which invokes a query against the Cinchy platform and dumps the results to a file for distribution to other systems requiring batch data feeds.

.NET Core 2.0

.NET Core Runtime 2.0.7+ (on Windows or Linux)

ADO.NET Driver

For .NET applications Cinchy provides an ADO.NET driver that can be used to connect into the platform and perform CRUD operations on data.

.NET Standard 2.0

See implementation support table here

JavaScript SDK

Cinchy's JavaScript SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle-tier and back end.

JavascriptJQuery

Angular SDK

Cinchy's Angular SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle-tier and back end.

Angular 5

Image 1: Table GUID
Image 2: Column GUID

Upgrade AWS EKS Kubernetes version

This page will guide you through how to update your AWS EKS Kubernetes version for your Cinchy v5 platform.


Prerequisites

  • Update your Cinchy platform to the latest version.

  • Confirm the latest Cinchy supported version of EKS. You can find the version number in the cinchy.devops.automations\aws-deployment.json as "cluster_version": "1.xx".

Considerations

  • You must upgrade your EKS sequentially. For example, if you are on EKS cluster version 1.22 and wish to upgrade to 1.24, you must upgrade from 1.22 > 1.23 > 1.24.

Instructions

  1. Navigate to your cinchy.devops.automations\aws-deployment.json file.

  2. Change the cluster_version key value to the EKS version you wish to upgrade to. (Example: "1.24")

  3. Open a shell/terminal and navigate to the cinchy.devops.automations directory location.

  4. Execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  5. Commit changes for all the repositories (cinchy.argocd, cinchy.kubernetes, cinchy.terraform and cinchy.devops.automation).

  6. Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory location.

  7. Execute the following command:

bash create.sh
  8. Verify the changes are as expected and then accept.

This process will first upgrade the managed master node version and then the worker nodes. During the upgrade, existing pods are automatically migrated to the new, upgraded worker nodes.

The below two commands can be used to verify that all pods are being migrated to new worker nodes.

  • To show both old and new nodes:

kubectl get nodes
  • To show all pods on both the new and old worker nodes:

kubectl get pods --all-namespaces -o wide

Reinstall the metrics server

For EKS version 1.24, the metrics server may go into a crash loop status. Should you encounter this during your upgrade, reinstalling the metrics server will fix it.

  1. In a code editor, open the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME\new-vpc.tf or existing-VPC.tf file.

  2. Find the enable_metrics_server key and set its value to false.

  3. Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory.

  4. Run the below command to remove the metrics server:

terraform apply
  5. Revert the enable_metrics_server key value from step 2 back to true.

  6. Run the below command within the same shell/terminal as step 3 to redeploy the metrics server:

 terraform apply

Single Sign-On (SSO) integration

This page walks through the integration of an Identity Provider with Cinchy via SAML Authentication

Overview

Cinchy supports integration with any Identity Provider that issues SAML tokens (such as Active Directory Federation Services) for authenticating users.

It follows an SP Initiated SSO pattern where the SP will Redirect to the IdP and the IdP must submit the SAML Response via an HTTP Post to the SP Assertion Consumer Service.

Below is a diagram outlining the flow when a non-authenticated user attempts to access a Cinchy resource (Image 1).

Image 1: Non-authenticated user access attempt

Configure SAML authentication - IIS deployments

You must register Cinchy with the Identity Provider. As part of that process you'll supply the Assertion Consumer Service URL, choose a client identifier for the Cinchy application, and generate a metadata XML file.

The Assertion Consumer Service URL of Cinchy is the base URL of the CinchySSO application followed by "{AcsURLModule}/Acs"

https://<CinchySSO URL>/Saml2/Acs

For example: https://myCinchyServer/Saml2/Acs

To enable SAML authentication within Cinchy, do the following:

  1. Retrieve the necessary metadata XML from the applicable identity provider and place the metadata file in the deployment directory of the CinchySSO web application.

If you are using Azure AD for this process, you can find your metadata XML by following these steps.

If you are using Google Workspace for this process, you can find your metadata XML by following steps 1-6 here.

If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.ad.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml

If you are using Okta for this process, you can find your metadata XML by following these steps.

If you are using Auth0 for this process, you can find your metadata XML by following these steps.

If you are using PingIdentity for this process, you can find your metadata XML by following these steps.

  2. Update the values of the below app settings in the CinchySSO appsettings.json file (a sketch of these settings follows this list).

  • SAMLClientEntityId - The client identifier chosen when registering with the Identity Provider

  • SAMLIDPEntityId - The entityID from the Identity Provider metadata XML

  • SAMLMetadataXmlPath - The full path to the metadata XML file

  • AcsURLModule - This parameter needs to be configured per your SAML ACS URL. For example, if your ACS URL looks like this: "https://<CinchySSO URL>/Saml2/Acs", then the value of this parameter should be "/Saml2"
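A minimal sketch of how these values might sit inside the AppSettings section of the CinchySSO appsettings.json (all values below are placeholders for your own environment):

"AppSettings": {
    "SAMLClientEntityId": "cinchy-prod",                        // client identifier chosen when registering with the IdP
    "SAMLIDPEntityId": "https://your-idp.example.com/metadata", // entityID from the IdP metadata XML
    "SAMLMetadataXmlPath": "C:\\CinchySSO\\metadata.xml",       // full path to the metadata XML file
    "AcsURLModule": "/Saml2"                                    // module portion of your ACS URL
}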

When configuring the Identity Provider, the only required claim is a user name identifier. If you plan to enable automatic user creation, then additional claims must be added to the configuration, see section 4 below for more details.

Once you enable SSO, the next time a user arrives at the Cinchy login screen they will see an additional button for "Single Sign-On".

Configure SAML authentication - Kubernetes deployments

  1. Retrieve your metadata.xml file from your identity provider.

If you are using Azure AD for this process, you can find your metadata XML by following these steps.

If you are using Google Workspace for this process, you can find your metadata XML by following steps 1-6 here.

If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.ad.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml

If you are using Okta for this process, you can find your metadata XML by following these steps.

If you are using Auth0 for this process, you can find your metadata XML by following these steps.

If you are using PingIdentity for this process, you can find your metadata XML by following these steps.

  2. Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\idp\kustomization.yaml file.

  3. Add your metadata.xml patch into your secrets where specified below as <<metadata.xml>>:

- target:
    version: v1
    kind: Secret
    name: idp-secret-appsettings
  patch: |-
    - op: replace
      path: /data/appsettings.json
      value: <<idp_appsettings_json>>
    - op: add
      path: /data/metadata.xml
      value: <<metadata.xml>>
  4. Navigate to your devops.automation > deployment.json in your Cinchy instance.

  5. Add the following fields into the .json and update them using values from the metadata.xml.

"sso": {
    "SAMLClientEntityId": "cinchy-dev", // Cinchy instance name
    "SAMLMetadataXmlPath": "/usr/share/appsettings/metadata.xml",
    // All below values get from metadata.xml and fill them here.
    "SAMLIDPEntityId": "",
    "SAMLSSOServiceURL": "",
    "AcsURLModule": "",
    "FirstNameExternalClaimName": "",
    "LastNameExternalClaimName": "",
    "EmailExternalClaimName": "",
    "MemberOfExternalClaimName": "",
    "metadata.xml": "BASE64_ENCODED_METADATA.XML"
}
  6. Navigate to your kubernetes\environment_kustomizations_template\instance_template_encoded_vars\idp_appsettings_json.

  7. Update the below code with your proper AppSettings and ExternalIdentityClaimSection details.

{
    "ConfigSettings": {
        "AppSettings": {
            "SAMLClientEntityId": "<<SAMLClientEntityId>>",
            "SAMLIDPEntityId": "<<SAMLIDPEntityId>>",
            "SAMLMetadataXmlPath": "<<SAMLMetadataXmlPath>>",
            "SAMLSSOServiceURL": "<<SAMLSSOServiceURL>>",
            "AcsURLModule": "<<AcsURLModule>>",
        },
        "ExternalIdentityClaimSection": {
            "FirstName": {
                "ExternalClaimName": "<<FirstNameExternalClaimName>>"
            },
            "LastName": {
                "ExternalClaimName": "<<LastNameExternalClaimName>>"
            },
            "Email": {
                "ExternalClaimName": "<<EmailExternalClaimName>>"
            },
            "MemberOf": {
                "ExternalClaimName": "<<MemberOfExternalClaimName>>"
            }
        }
    }
}
  8. Run the DevOps automation script, which will populate the updated outputs into the cinchy.kubernetes repository.

  9. Commit your changes and push to your source control system.

  10. Navigate to your ArgoCD dashboard and refresh the idp-app to pick up your changes. This will also delete your currently running pods so that they pick up the latest secrets.

  11. Once the pods are healthy, you can verify the changes by looking for the SSO tab on your Cinchy login page.

User management

Before a user is able to login through the SSO flow, the user must be set up in Cinchy with the appropriate authentication configuration.

Users in Cinchy are maintained within the Users table in the Cinchy domain. Each user in the system is configured with 1 of 3 Authentication Methods:

  • Cinchy User Account - These are users that are created and managed directly in the Cinchy application. They log into Cinchy by entering their username and password on the login screen.

  • Non Interactive - These accounts are intended for application use.

  • Single Sign-On - These users authenticate through the SSO Identity Provider (configured using the steps above). They log into Cinchy by clicking the "Login with Single Sign-On" link on the login screen.

Define a new SSO User

Create a new record within the Users table with the Authentication Method set to Single Sign-On.

The password field in the Users table is mandatory. For SSO users, the value entered is ignored. You can input n/a.

Convert an existing user to SSO User

Change the Authentication Method of the existing user to Single Sign-On.

Login with SSO

When a user is configured for SSO, they can select Login with Single Sign-On on the login page, which directs logins through the Identity Provider's authentication flow.

If a user successfully authenticates with the Identity Provider but hasn't been set up in the Users table, they will see the following error message: "You aren't a registered user in Cinchy. Please contact your Cinchy administrator." To avoid manually adding new users, consider enabling automatic user creation.

Automatic user creation - IIS deployments

On SSO enabled Cinchy instances, users that don't exist in the Cinchy Users table won't be able to login, regardless of whether they're authenticated by the Identity Provider.

If you enable Automatic User Creation, the Identity Provider authorizes the user and a user entry is automatically created in the Cinchy Users table if one doesn't already exist. This means that any SSO-authenticated user is guaranteed to be able to access the platform.

If AD Groups are configured within Cinchy, then the authenticated user is also automatically added to any Cinchy-mapped AD Groups where they're a member. See AD Group Integration for additional information on how to define AD Groups in Cinchy.

See below for details on how to enable Automatic User Creation.

Users that are automatically added won't be allowed to create or modify tables and queries. To provision this access, Can Design Tables and Can Design Queries must be checked on the User record in the Cinchy Users table.

Prerequisites for automatic user creation

The Identity Provider configuration must include the following additions to the base configuration in the SAML token response:

  • First Name

  • Last Name

  • Email

To enable automatic group assignment for newly created users, you must also include an attribute that captures the groups that the user is a member of, for example the memberOf field in AD. This is applicable if you plan on using AD Groups.

Configuration setup

To enable automatic user creation, make the following changes. For IIS deployments, these are done in the appsettings.json file of the CinchySSO web application.

  1. Add ExternalClaimName attribute values under "ExternalIdentityClaimSection" in the appsettings.json file. Don't add the value for MemberOf if you don't want to enable automatic group assignment.

  2. The ExternalClaimName value must be updated to create a mapping between the attribute name in the SAML response and the required field. For example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname is the name in the SAML response for the FirstName field.

ExternalIdentityClaimSection
"ExternalIdentityClaimSection": {
			"FirstName": {
				"ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"
			},
			"LastName": {
				"ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"
			},
			"Email": {
				"ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
			},
			"MemberOf": {
				"ExternalClaimName": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
			}
		}

Further Reading

  • Configuring ADFS

  • AD Group Integration

Configure ADFS

This document outlines the steps for configuring Active Directory Federation Services (ADFS) to facilitate Single Sign-On (SSO) with Cinchy.


Before You Begin

Before starting with the ADFS configuration, make sure you have the following information:

| Information Required | Description | Reference |
| --- | --- | --- |
| Cinchy SSO URL | The URL of your Cinchy SSO instance | {your.cinchysso.url} |
| Cinchy URL | The URL of your main Cinchy instance | {your.cinchy.url} |
| Cinchy SSO Installation Path | Directory where CinchySSO files are located | {Path/to/CinchySSO} |
| ADFS Server | The URL of your ADFS server | {your.ADFS.server} |

Having these details readily available will streamline the ADFS configuration process.

Configuration Steps in ADFS

  1. Navigate to AD FS Management on your ADFS server.

  2. Right-click on Relying Party Trusts and choose Add Relying Party Trust to open the Add Relying Party Trust Wizard.

  3. In the wizard, select Claims Aware > Start > Select Data Source.

  4. Select Enter Data About the Relying Party Manually > Next.

  5. Fill in a Display Name under Specify Display Name.

  6. Skip certificate configuration in Configure Certificates.

  7. In Configure URL, select Enable support for the SAML 2.0 WebSSO protocol.

  8. Input your login URL as follows:

    https://{your.cinchysso.url}/Saml2/Acs
  9. Under Configure Identifiers, add an Identifier and press Next to complete the setup.

Set up Claim Issuance Policy

  1. Right-click on the newly created Relying Party Trust (located by its Display Name) and select Edit Claim Issuance Policy.

  2. Select Add Rule > Claim Rule > Send LDAP Attributes as Claims.

  3. Input a Claim Rule Name.

  4. In the Attribute Store, select Active Directory. Map the LDAP attributes to the corresponding outgoing claim types as shown in the table below:

| LDAP Attribute | Outgoing Claim Type | Comments |
| --- | --- | --- |
| User-Principal-Name | Name ID | |
| SAM-Account-Name | sub | Type sub manually to avoid auto complete |
| Given-Name | Given Name | Required for Auto User Creation |
| Surname | Surname | Required for Auto User Creation |
| E-Mail-Address | E-Mail Address | Required for Auto User Creation |
| Is-Member-Of-DL | Role | Required for Auto User Creation |

Image 2: Add Transform Claim Rule Wizard
  5. Select Finish.

  6. Select Edit Rule > View Rule Language. Copy the Claim URLs for later use in configuring your Cinchy appsettings.json. It should look like the following:

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
      => issue(store = "Active Directory",
              types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
                        "sub",
                        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
                        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname",
                        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
                        "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"),
              query = ";userPrincipalName,sAMAccountName,givenName,sn,mail,memberOf;{0}",
              param = c.Value);
  7. Press OK to confirm and save.

  8. Right-click on Relying Party Trust > Properties. Move to the Advanced tab and select SHA-256 as the secure hash algorithm.

Configuration for Cinchy

Note: The configuration values below are case-sensitive; ensure they align exactly with those in your SAML IdP setup.

Initial setup

  1. Retrieve and save the Federation Metadata XML file from the following location: https://{your.ADFS.server}/FederationMetadata/2007-06/FederationMetadata.xml.

  2. If needed, use IIS Manager to establish an HTTPS connection for the Cinchy website.

  3. Also establish an HTTPS connection for the SSO site. Make sure the port number aligns with the one specified in the login URL.

Configuration for appsettings.json

App Settings Section

| Attribute | Value or Description |
| --- | --- |
| CinchyLoginRedirectUri | URL of the user login redirect: https://{your.cinchysso.url}/Account/LoginRedirect |
| CinchyPostLogoutRedirectUri | URL of the user post-logout redirect: https://{your.cinchy.url} |
| CertificatePath | Path to the Cinchy SSO certificate: {Path/to/CinchySSO}\cinchyidentitysrv.pfx |
| SAMLClientEntityId | Relying Party Identifier from the earlier-configured Relying Party Trust |
| SAMLIDPEntityId | Entity ID for the SAML IdP, found in FederationMetadata.xml: http://{your.AD.server}/adfs/services/trust |
| SAMLMetadataXmlPath | Location of the FederationMetadata.xml saved during the initial setup |
| SAMLSSOServiceURL | URL path in the Domain Controller's in-service endpoints: https://{your.AD.server}/Saml2/Acs |
| AcsURLModule | /Saml2 |
| MaxRequestHeadersTotalSize | Maximum header size in bytes; adjustable if the default is insufficient |
| MaxRequestBufferSize | Should be equal to or larger than MaxRequestHeadersTotalSize |
| MaxRequestBodySize | Maximum request body size in bytes (use -1 for default; usually no need to change) |

External identity claim section

You will need to refer to the Rule Language URLs you copied from the ADFS Configuration. Replace the placeholders below with your own URLs:

{
  "AppSettings": {
    // Replace placeholders below with URLS
  },
  "ExternalIdentityClaimSection": {
    "FirstName": {
      "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"
    },
    "LastName": {
      "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"
    },
    "Email": {
      "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
    },
    "MemberOf": {
      "ExternalClaimName": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
    }
  }
}

Edit web.config

Insert the following lines within the <appSettings> section of your web.config file. Make sure to replace the {your.cinchy.url} and {your.cinchysso.url} with your Cinchy and Cinchy SSO values.

<appSettings>
  <!-- Replace placeholders below with URLS -->
  <add key="UseHttps" value="true" />
  <add key="StsAuthorityUri" value="https://{your.cinchy.url}" />
  <add key="StsRedirectUri" value="https://{your.cinchysso.url}/Account/LoginRedirect" />
  <!--  -->
</appSettings>

Data entitlements

Data Control Entitlements allow you to set up permissions for who can view, edit, or approve data within a table. Note that this was formerly called "Design Controls".

Overview

Data Entitlements define who has access to do what on your Cinchy platform. These access controls are universally set at a cellular level, meaning that you can configure user access in the way that best supports your use case.

You can set entitlements such that specific users can view, edit, delete, or insert data you want them to access.

Cinchy supports user-based, role-based, and attribute-based access controls.

User-based controls

User-based controls are entitlements given to specific users. This is done via the Users column.

Defining access based on a user means that even if the user changes their role, team, group, etc., they will still maintain their data entitlements.

Role-Based Controls

Role-based controls are entitlements given to set(s) of users based on their role in your environment. For example, you are able to define that only the Product team has access to insert records into a Product Roadmap table. Instead of configuring the entitlements user by user, which takes time and can lead to incorrect data when/if employees shift teams, you can configure it such that any user within the Product team automatically maintains the same level of control across the board.

In Cinchy, this is done via the Groups column.

Attribute-based controls

Attribute-based controls are entitlements given to a user(s) based on a defined set of tags. This can include attributes such as their team, their role, their security clearance, their location, etc.

Defining entitlements based on attributes allows you to drill even deeper into the specificity of which users can do what on your tables.

In Cinchy, you can set up an infinite number of attributes based on your specific use case(s). This is done via Row Filters.

For example, if you have an Employee table that contains salary information visible only to certain people, you can configure a Row Filter such that the logged in user MUST have at least one of the following attributes to be able to see it:

  • The user to whom the salary belongs

  • Their manager

  • All VP level executives

  • The CEO

You are able to add as many attributes into your Row Filter as needed. For example you could only allow a user with the following set of tags to view a row: Located in Toronto, on the Marketing Team, and with a Security Clearance level of 2.

Change entitlements

  1. When viewing a table, click on Data Controls > Entitlements from the left navigation menu (Image 1).

Image 1: Step 1, Entitlements
  2. Currently, both the table creator and anyone in the Cinchy Administrators group have access to perform any action on any objects. You can give granular entitlements at a Group or a User level, for both viewing and editing access (Image 2).

Image 2: Step 2, An example of Entitlements
  3. In the above scenario, John Smith is part of the Developers group. They're able to view all columns via the entitlement to the Developers group, and they're able to edit both the First Name and Last Name columns through different entitlements.

Table-level entitlements

Table-level entitlements apply to the entire table.

Marketplace

Approving this entitlement enables users to see and search for the table in the Marketplace/Homepage.

Bulk Export

Approving this entitlement enables users to export data from the table via the Manage Data screen (Image 3).

Image 3: Step 2.2 Bulk Export

Direct Query

Approving this entitlement enables users to query the data from the table directly in the Query Builder (Image 4).

Image 4: Step 2.3 Direct Queries

Design Table

Approving this entitlement enables users to alter the structure of the table.

This is a builder/administrative function and shouldn't be granted to end users.

Design Controls

Approving this entitlement enables users to change the permissions on a table.

This is a builder/administrative function and shouldn't be granted to end users.

Column-level entitlements

Column-level entitlements apply only to columns.

View All Columns

Approving this entitlement enables users to view all columns within the table.

Note that this applies to any new columns that are added to the table after providing this permission as well.

View Specific Columns

This is a drop down where you can select the specific columns you want to grant view access to for users.

Edit All Columns

Approving this entitlement enables users to edit all columns within the table.

Note that this applies to any new columns that are added to the table after providing this permission as well.

Giving a user edit permission will also give them view permission.

Edit Specific Columns

This is a drop down where you can select the specific columns you want to grant edit access to for users.

Giving a user edit permission will also give them view permission.

Approve All Columns

Approving this entitlement enables users to approve all columns within the table. This also allows users to approve Create and Delete requests.

Note that this applies to any new columns that are added to the table after providing this permission as well.

Approve permissions only apply when Change Approvals are enabled.

Giving a user approve permission will also give them view permission.

Approve Specific Columns

This is a drop down where you can select the specific columns you want to grant approve access to for users.

Approve permissions only apply when Change Approvals are enabled.

Giving a user approve permission will also give them view permission.

Link Columns

Link columns require permission both to the column within the table and to the column referenced by the link itself.

Row-level entitlements

Row-level entitlements apply to specific rows. Used in conjunction with column-level entitlements, this allows for granular cell-level entitlements.

Insert Row

Approving this entitlement enables users to create new rows in the table.

Delete Row

Approving this entitlement enables users to delete rows in the table.

Viewable & Editable Row Filter

This is a CQL fragment that applies a filter to which rows will be viewable or editable. Think of the column entitlements and the fragment as a SQL statement applied to the table: SELECT {Edit Selected Columns} WHERE {Editable Row Filter}
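For example, granting Edit Specific Columns of First Name and Last Name with an Editable Row Filter of [End Date] IS NULL is conceptually equivalent to the statement below (the table name is illustrative only):

SELECT [First Name], [Last Name]
FROM [HR].[Employees]
WHERE [End Date] IS NULL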

Examples for Row Filter

Most of these examples will be with the editable row filter so it's easy to see the underlying data for comparison. However this can be done for viewable row data as well.

Sample data

(Image 5)

Image 5: Sample Data

Example

With the following entitlements (Image 6):

  • Edit Specific Columns: Age

  • Editable Row Filter: [Age] > 30

Image 6: Simple Example

Example with viewable data

(Image 7)

  • View Specific Columns: First Name, Last Name

  • Viewable Row Filter: [End Date] IS NULL OR [End Date] > GetDate()

Image 7: Example with Viewable Data

Layer on another entitlement

(Image 8)

  • View Specific Columns: All

  • Edit Specific Columns: First Name, Last Name, Age

  • Viewable Row Filter: [First Name] = 'John'

  • Editable Row Filter: [First Name] = 'John'

Image 8: Layer on Another Entitlement

Example for current user

(Image 9)

Image 9: Example for current user

For the All Users group:

(Image 10)

  • View All Columns: Check

  • Edit Selected Columns: First Name, Last Name

  • Editable Row Filter: [User Account].[Cinchy Id] = CurrentUserId()

Image 10: For the All Users Group

To allow a user to edit certain fields of their own data, you will need an association from a user to the [Cinchy].[Users] table. You can then use the following function to allow edit for that user, where [...] is the chain of link columns to get to the Users table.

[...].[Cinchy Id] = CurrentUserId()

Reinstall the data experience

This page outlines Step 6 of Deploying CinchyDXD: Reinstalling the Data Experience

Re-run CinchyDXD install

Using PowerShell, you must now install the Data Experience you have exported out of Cinchy.

  1. Open File Explorer and navigate to your exported folder (Image 1).

Image 1: Step 1
  2. In the folder path of the exported data experience, type PowerShell to launch PowerShell for that path.

  3. Hit Enter on your keyboard (Image 2).

Image 2: Step 3
  4. In the PowerShell window, type cin and hit Tab on your keyboard, then type install (Image 3).

Image 3: Step 4
  5. Enter the install parameters into the PowerShell window:

The parameters executed in PowerShell can exist on one line, but for legibility they're shown below on separate lines. If you put your parameters on separate lines, you must end each line with a backtick (`) for the parameters to execute.

Sample (Image 4):

.\CinchyDXD.ps1 install `
  -s "<target Cinchy url>" `
  -u "<target user id>" `
  -p "<target password>" `
  -c "C:\Cinchy CLI v4.0.2" `
  -d "C:\CLI Output Logs"

Image 4: Step 5
  6. Hit Enter on your keyboard to run the install command. Once the Data Experience has been installed, you will get a message in PowerShell that the install was complete (Image 5).

Image 5: Step 6

Validate Install

  1. Ensure that the Models Table is populated in the target environment with the model that was installed (Image 6).

Image 6: Step 1
  2. Ensure that the Currency Exchange Rate table exists in the target environment with the new column names (Image 7).

Image 7: Step 2
  3. Ensure that the Currency Converter query exists in the target environment with the new column names and labels (Image 8).

Image 8: Step 3
  4. Ensure that the Data Experience Definitions table hasn't changed, unless you have added or removed column details within this table (Image 9).

Image 9: Step 4
  5. Ensure that the Data Experience Releases table in the target environment is populated with the new release version number from the install (For example: 2.0.0) (Image 10).

Image 10: Step 5

Best practices

This page outlines a few common best practices when building in Cinchy.

Naming convention

Cinchy is a simple, business-user-friendly application. This means that you should use business-friendly terms to name your tables and columns. For example, you want to name a column Full Name rather than full_name, fullName, or fName.

Domains

Domains essentially act as folders that organize your data. Most users will want to split domains by business lines, such as Sales, Marketing, and Human Resources. The key thing is to keep it consistent so users have a general idea of where to go to find information.

Descriptions

You can add descriptions to your tables and columns. Descriptions let users access data in a more self-serve fashion, and also help prevent misunderstandings of the meaning of your data. Table descriptions are shown in the My Network screen, and will show up in search as well (Image 1).

Image 1: Table Description

Column descriptions show up when you hover on the column in the Manage Data screen (Image 2 and 3).

Image 2: Column Descriptions
Image 3: Column Descriptions

v5.7 (Kubernetes)

What's new

The major changes for the 5.7 Kubernetes upgrade are the following:

  • Azure AKS and AWS EKS versions are supported up to 1.27

  • Upgraded ArgoCD from 2.1.7 to v2.7.6

  • Upgraded Istio from 1.3.1 to 1.18.0

  • Upgraded OpenSearch from 1.2.0 to 2.13.1

  • Upgraded Logging Operator from 3.17.2 to 4.2.2

  • Upgraded Kube Prometheus Stack from 17.2.2 to 47.0.0

  • Upgraded Strimzi Kafka Operator from 0.1.0 to 0.34.0

  • New app Kafka UI 0.7.1

  • OpenSearch Index creation based on date format

Upgrading on Kubernetes

To upgrade your various components, follow the instructions below in the order presented.

Prerequisites

If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.

Upgrade from 5.1 or lower

If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.7, you must run Upgrade 5.2 and deploy version 5.2.

Upgrade from v5.2 or higher

If you are upgrading from 5.2 or higher, follow the 5.7 upgrade instructions below and deploy the target version using the -v "X.X" argument.

Configure to the newest version

Clean existing repositories

  1. Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.

  2. Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file.

If you have cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it's not commented, don't change it. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.

  1. Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git file.

  2. Go to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and your deployment.json.

Download k8s template

  1. Download and open the new Cinchy v5.7 k8s-template.zip file from the Cinchy Releases table and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.

  2. Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.

  3. Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.24 to 1.25, then from 1.25 to 1.26, and finally from 1.26 to 1.27.

You may have changed the name of the deployment.json file during your original platform deployment. If so, make sure you substitute that name wherever deployment.json appears in this document.

Remove components

In the 5.7 templates, the cluster-level components will upgrade to the latest version. You need to remove kube-prometheus-stack, logging-operator app and kafka-cluster from ArgoCD. This change deletes your recent metrics from Grafana and you will only see the latest metrics after you deploy the new kube-prometheus-stack. The older CRDs created by kube-prometheus-stack and logging-operator charts aren't removed by default during upgrade and should be manually cleaned up with the below commands:
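The exact cleanup commands aren't captured in this excerpt. The sketch below assumes the standard CRD names installed by the kube-prometheus-stack and logging-operator Helm charts; verify the names in your own cluster (kubectl get crd) before deleting:

# Assumed kube-prometheus-stack CRDs (monitoring.coreos.com group)
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com alertmanagers.monitoring.coreos.com \
  podmonitors.monitoring.coreos.com probes.monitoring.coreos.com prometheuses.monitoring.coreos.com \
  prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com thanosrulers.monitoring.coreos.com

# Assumed logging-operator CRDs (logging.banzaicloud.io group)
kubectl delete crd clusterflows.logging.banzaicloud.io clusteroutputs.logging.banzaicloud.io \
  flows.logging.banzaicloud.io loggings.logging.banzaicloud.io outputs.logging.banzaicloud.io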

Upgrade and redeploy components

  1. Open a shell/terminal from the cinchy.devops.automations directory and execute the DevOps automation command (see the sketch after this list for the commands referenced in steps 1, 4, and 6).

  2. Commit all of your changes (if there were any) in each repository.

  3. If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  4. Execute the following command to deploy ArgoCD:

  5. Validate that the ArgoCD pods are running and check that ArgoCD is upgraded to v2.7.6 by accessing the ArgoCD application console.

  6. Execute the following command to deploy cluster components and Cinchy components:

  7. You might see a couple of ArgoCD apps out of sync because you deleted the logging operator. Sync them manually. Redis will take a few minutes to recover.
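The command blocks for steps 1, 4, and 6 aren't captured in this excerpt. As a sketch: step 1 uses the same DevOps automation command shown elsewhere in this guide, while the script names for steps 4 and 6 are assumptions based on the standard cinchy.argocd template and should be confirmed against your own repository:

# Step 1 - regenerate manifests (run from cinchy.devops.automations)
dotnet Cinchy.DevOps.Automations.dll "deployment.json"

# Step 4 - deploy ArgoCD (run from the root of cinchy.argocd; script name assumed)
bash deploy_argocd.sh

# Step 6 - deploy cluster components and Cinchy components (script names assumed)
bash deploy_cluster_components.sh
bash deploy_cinchy_components.sh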

Upgrade AWS EKS and Azure AKS

To upgrade the AWS EKS and Azure AKS version from 1.24 up to 1.27.x, you have two methods; which one you use depends on the subnet CIDR range. The CIDR range is a blocker for Azure only. For AWS, export your credentials; for Azure, run the az login command, if required.

CIDR range higher than 10.10.0.0/22

If your AKS subnet CIDR range is larger than 10.10.0.0/22, you can upgrade to AKS 1.25.x and later without much downtime or AKS resource teardown.

  1. Go to your cinchy.devops.automations repository and change AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory.

  2. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
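The command itself isn't shown in this excerpt; based on the equivalent step in the EKS upgrade instructions earlier in this guide, it should be:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"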

AWS - Cinchy.terraform repository structure

The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution.

Azure - Cinchy.terraform repository structure

The Azure deployment updates a folder named aks_cluster within the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.

For AWS, export your credentials; for Azure, run the az login command, if required.

Run the command below to start the upgrade process. Make sure to verify the planned changes before you select yes to proceed. This shouldn't delete or destroy any data; it runs an in-place deployment that updates the Kubernetes version.
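The command isn't captured in this excerpt; as with the other Terraform operations in this guide, the in-place upgrade is applied from the cluster directory with:

terraform apply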

CIDR range lower than 10.10.0.0/22

This section is only applicable to Azure deployments.

If you have 10.10.0.0/22 CIDR range or smaller, then you won't be able to upgrade AKS version to 1.25.x. The 10.10.0.0/22 CIDR range gives you 1024 IP addresses, which aren't enough to run more than 4 worker nodes. Most customers already run 3 worker nodes and the upgrade process starts another 3 nodes, which will cause a failure.

The values below give a suggested IP address CIDR range; Cinchy recommends making your own choice based on your needs. Update these values in your deployment.json file:

Delete cluster components, Cinchy components, and ArgoCD apps

Make sure you are connected to the appropriate cluster. Before you start the AKS upgrade process, you must delete all your apps from ArgoCD; this removes the Cinchy apps and custom components, including the load balancer and kafka-cluster.

To delete Cinchy apps, cluster components and ArgoCD:

  1. From a terminal, change directory to cinchy.argocd and run the deletion commands sequentially, making sure to substitute your own cluster (directory) name and environment name.

  2. If the cluster components deletion takes longer than 10 minutes, run the command below and check that all pods are deleted except the Kubernetes default namespace (kube-system) pods.

  3. Verify the deletion of pods across all namespaces with the kubectl get pods -A command. If the namespaces and pods aren't deleted for some cluster components, delete the namespace manually with kubectl delete ns NAMESPACE (see the sketch after this list).
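The deletion commands themselves aren't captured in this excerpt. The verification and manual cleanup commands described in steps 2 and 3 look like the following (NAMESPACE is a placeholder for the namespace that wasn't cleaned up):

# Verify that pods in all namespaces have been removed
kubectl get pods -A

# Manually delete a namespace that wasn't cleaned up
kubectl delete ns NAMESPACE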

Deploy

  1. Change the AKS version in the DevOps automation tools deployment.json against the kubernetes_version and orchestrator_version key values. From a shell/terminal, go to the cinchy.devops.automations directory location and execute the DevOps automation command (see the sketch after this list).

  2. In the Terraform > Azure directory, the aks_cluster folder should be updated with the new AKS version. Within that directory is a subdirectory with the same name as the updated cluster.

  3. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository and run az login.

  4. Execute the following command to create the cluster:
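The commands for steps 1 and 4 aren't captured in this excerpt; based on the equivalent steps elsewhere in this guide, they should be:

# Step 1 - regenerate manifests (run from cinchy.devops.automations)
dotnet Cinchy.DevOps.Automations.dll "deployment.json"

# Step 4 - create/update the cluster (run from the cluster directory in cinchy.terraform)
bash create.sh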

Before accepting the change, verify that it meets your expectations and ensures the protection of your database and any other resources. This command will create, update, or destroy vnet, subnet, AKS cluster, and AKS node groups. Make sure to review the changes before proceeding.

5.4 Release Notes

This page contains the release notes for version 5.4 of the Cinchy platform.

Version 5.4 of the platform was released on January 18th, 2023.

For instructions on how to upgrade to the latest version of Cinchy, see the upgrade guide.

New Rich Text Editing Capabilities in Forms

Customize the appearance of your Form text with our new rich text editing capabilities. Enabling this on your text columns will give you access to exciting new formatting options previously unavailable in Forms such as:

  • Bold, Italic, Underlined text

  • Checklists

  • Headers

  • Hyperlinks

  • etc.

For more information on how to make a visual impact with our new rich text editing capabilities, please review the documentation.

Editable GUIDs

A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique ID. All Cinchy Tables and Columns have a GUID.

This feature is particularly useful when deploying between Cinchy instances.

For example, in a model deployment, you must have matching GUIDs on your columns in order for them to properly load between environment A and environment B. There might be times when these GUIDs don’t automatically match, however, such as if you manually added a new column to environment B and also manually added it to environment A.

In this case, the two columns would have different GUIDs, and the model deployment would fail. With this new capability, however, you can match up conflicting GUIDs to properly load your model.

Polling Event Data Sync

Version 5.4 of the Cinchy platform introduces data polling, which uses the Cinchy Event Listener to continuously monitor and sync data entries from your SQL Server or DB2 server into your Cinchy table. This capability makes data polling a much easier, effective, and streamlined process and avoids implementing the complex orchestration logic that was previously necessary to capture frequently changing data.

You can read more about setting up Data Polling in the documentation.

BigInt Upgrade Utility

A mandatory database upgrade script was introduced in v5.2 that increased the number of possible Cinchy IDs that can be generated. To streamline this process further, we've created a utility to deploy the changes. This should save you valuable time and resources when performing the upgrade, even on large databases.

New Platform Default Timeout

For new environments (or if your setting was previously left blank), we've changed the Cinchy default session timeout from 30 minutes to 7 days. This will keep you logged in and working without interruptions. You can further change or revert this session timeout value in your appsettings.json.

In an IIS deployment, you can find the value in your CinchySSO > appsettings.json

In a Kubernetes deployment, you can find the value in your deployment.json file.

Enhancements

  • We've upgraded our application components to .NET 6.0 to ensure official Microsoft support for another 2 years.

Because of the .NET update, if you are upgrading to 5.4+ on an SQL Server Database you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

In an IIS deployment, you must update your connectionString in both your web.config and appsettings.json files.

In a Kubernetes deployment, you must update your connectionString in your deployment.json file.
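For example, a SQL Server connection string with the new flag appended might look like the following (server, database, and credentials are placeholders):

Data Source=your-sql-server;Initial Catalog=Cinchy;User ID=cinchy_user;Password=<password>;TrustServerCertificate=True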

  • We've added a silent refresh to the Connections experience to keep your session active while you're on the UI and to keep you working without interruptions.

  • Real time data sync will now continue to retry if an "Out of Memory Exception" is thrown, avoiding unnecessary downtime.

  • You now have the ability to choose between Debian or Alpine based Docker images when using a Kubernetes deployment of the Cinchy platform, to be able to connect to a DB2 data source in Connections.

    • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections: Alpine: "5.x.x"; Debian: "5.x.x-debian"

  • You now have the option to update the default passwords for Grafana and OpenSearch in a Kubernetes deployment by configuring your deployment.json file. See the documentation for instructions on updating each password.

  • We've increased the average throughput for CDC subscriptions returning the Cinchy ID, so that it will now be able to process a greater number of events per second. Being able to reliably exceed 1000 events per second, based on the average use case, means that you can leverage the CDC capability for more demanding use cases.

  • Before this release, the Files API could only handle files up to 100mb. We've now upped the maximum default file size to 1GB and have added a configurable property to allow you to set your own upload size.

    • In an IIS deployment, you can find the value in your Cinchy > appsettings.json.

    • In a Kubernetes deployment, you can find the value in your deployment.json file.

When choosing your maximum upload size, keep in mind that large files may slow down your database if you are using it for file storage.

Bug Fixes

  • We've fixed an error that occurred when attempting a data sync with conflicting target and source data types in link columns, where the error message would read: Value must be specified from the available options

  • We've fixed an issue that was preventing new Connection jobs from starting when a previous job got stuck.

  • We've fixed an issue where data syncs would fail if your sync key used a Target Column with a Link Column property that's different from the Primary Linked Column in the table definition.

  • We've fixed a bug that was impacting write performance to tables on PostgreSQL with Data Change Notifications enabled.

  • We've fixed a "cell entitlements failed" error on Forms that would occur if a Form column contained a single quote in the column name.

  • We've fixed an issue on Forms where adding a [Created By] or [Modified By] field would return an error.

  • The /healthcheck no longer redirects to the initialization screen during a Cinchy startup, allowing you to properly hit the endpoint.

Data management

This page goes over the several ways to work with (enter, update, remove, load and extract) data from Cinchy tables.

Data entry

Users are only able to enter data into Cinchy based on their access. Users can also copy and paste data from external sources.

Insert/Delete data rows

Users are only able to insert or delete rows based on their access. If you have the ability to insert and/or delete a row of data it will be visible when right-clicking on a row of data (Image 1).

Import data

Importing data allows you to add new rows of data into a table. If you want to perform a sync instead, refer to the Connections documentation. Importing data acts as a smart copy-and-paste of new data into an existing table.

Importing the first row of your CSV as a header row will match the headers to the column names within your table. Columns that can't be matched are ignored, as well as any columns you don't have edit permissions for.

Users can import data from a CSV file to an existing table in Cinchy. Importing data into a Cinchy table only adds records to the table. This data import type doesn't update or append existing records.

To import data into a table, complete the following:

  1. From within the table, click the Import button on the top toolbar of the table (Image 2).

  2. Click Choose File to locate and import your file.

  3. Validate the imported columns and click Next (Image 3).

  4. Click the Import button.

  5. Click the OK button on the Import confirmation window.

Import errors

If there are import errors, click the download button next to Rejected Rows on the Import Succeeded with Errors window (Image 4).

You will get a file back with all the rejected rows, as well as the 2 columns added called ‘Cinchy Import Errors' and 'Cinchy Import Original Row Number’.

Cinchy original row number

This provides a reference to the row number in the original file you imported in case you need to check it.

You can fix the errors directly in the error log and re-import it, since successfully imported rows are omitted from the log.

Export data

You can export your data from a table view in CSV or TSV format. This export starts at the first record. Cinchy doesn't currently support pagination, so the maximum export is 250,000 records. To export a table of more than 250,000 records, you can use a data sync instead.

When data is exported out of the network, it's now just a copy and no longer connected to Cinchy.

To export data from a table, complete the following:

  1. From within the table, click the Export button in the table toolbar

  2. Select the Export file type (CSV or TSV) (Image 5).

  3. Open your file in Excel, or any other CSV software, to view.

Approve/Reject data

Cinchy can have data change approvals for when data is added or removed from a table view. A change approval process can be put into place for the addition or removal of specific data. If you have been identified as an "Approver" of data, you will have the ability to:

  • Approve a cell of data

  • Approve a row of data

  • Reject a row of data

To approve or reject a cell/row of data, complete the following:

  1. Right-click on the desired row/cell

  2. Select Approve row/cell or Reject row/cell

Collaboration log

The Collaboration log is accessible from every table within Cinchy (including metadata). It shows the version history of ALL changes that have been made to an individual row of data.

To access the Cinchy Collaboration Log:

  1. Open the desired table

  2. Locate the desired row > Right Click > View Collaboration Log (Image 6).

Once the Collaboration Log is open you have the ability to view ALL changes with a version history for the row selected within the table.

Users have the ability to revert to a prior version of the record. To do so, click the Revert button for the desired version (Image 7).

A version can have a disabled Revert button; this indicates that the version is identical to the current version of the record in the table. Hovering over the Revert button displays a tool-tip.

Data erasure and compression policies

By default, Cinchy doesn't delete any data or metadata from within the Data Fabric.

See the documentation for more information on Data Erasure and Data Compression Policies in Cinchy.

Audit for data synchronization

Data loaded into Cinchy via Data Synchronization (batch or real-time), or changed by any Saved Queries exposed as APIs to external clients, is audit logged the same way as if a user entered the data into Cinchy. All data synced into Cinchy will have corresponding line items in the Collaboration Log, similar to how data entered or modified in Cinchy by a user is handled.

Collaboration log performance considerations

The Collaboration Log data is also stored within Cinchy as data, allowing the logs to be available for use through a query or for any downstream consumers. The logs have no separate performance considerations needed, as it relies on the Cinchy platform’s performance measures.

Recycle bin

All data records that have been deleted are put into Cinchy’s Recycle Bin. Data that resides in the Recycle Bin can be restored if required.

To restore data from the recycle bin:

  1. From the left-hand navigation, click Recycle Bin (Image 8)

  2. Locate the row to restore.

  3. Right-click and select Restore Row.

The restored row will now be visible in your table.

If Change Approvals are turned on, that row will need to be approved.

5.2 Release Notes

This page captures the release notes for Cinchy version 5.2

Version 5.2 of the platform was released on September 16th, 2022.

For instructions on how to upgrade to the latest version of Cinchy, see the upgrade guide.

Another Move Toward Infinite Scalability

We've increased the number of possible Cinchy IDs that can be generated. This in turn allows the creation of more records within one table, so that you can create and manage larger data sets.

Previous Limit: 2,147,483,647 (2^31-1) Cinchy IDs per table

Updated Limit: 9,223,372,036,854,775,807 (2^63-1) Cinchy IDs per table

For backward compatibility with your database, you will need to manually run the below script against your TSQL or PGSQL databases. For instructions on how to run this upgrade, see the upgrade guide.

WARNING: This script is REQUIRED when upgrading from v5.1 or lower to v5.2 or higher, otherwise your platform will break.

Connections Enhancements

  • We've expanded our Connections capabilities to support binary file types as a data source.

  • We've improved the Connections experience by making it optional to input a username or password when starting a batch sync job, if you want to run the job as the currently logged-in user.

  • We've added new optimizations and quicker processing of Kafka messages for real time data syncs in Connections.

  • For added security, any logged password or sensitive parameters from the request details of a SOAP connector data sync is now redacted in the logs.

  • Dead messages in the event listener are now written out to the execution errors table for easy collection and querying.

  • The Connections experience now supports sourcing file based data sources from Azure Blob Storage and Amazon S3.

General Enhancements

  • You now have the option to free up database space by using S3 compatible or Azure Blob Storage for file storage. This is configured in your deployment.json for Kubernetes installations and the appsettings.json in an IIS deployment.

    • If you are upgrading from an earlier v5 version, you can update your previous configuration to take advantage of this. For further instructions, see Change your file storage configuration.

  • You are now able to run a data erasure on records that link to the Files table to delete the underlying referenced file.

  • We've made improvements to the Files API to avoid cache build up and optimize the API.

  • General security fixes and updates.

GraphQL (Beta)

  • We've added anonymous API access to GraphQL, no token required.

  • We've added write operations to our GraphQL beta, meaning that you can now insert and modify data.

Bug Fixes

  • We've fixed a UI instability bug that resulted in the inability to resize view panes in the query designer and difficulty in selecting any cell in the first row of a table. This bug was affecting Chromium users (Google Chrome, Brave, Microsoft Edge, etc.) who had recently updated their internet browsers.

  • We've fixed a bug that caused the “Created” column to incorrectly display as the last approved date of the column instead of the column created date.

  • You can now add/remove a column from a table that has a columnar index without needing to remove said index entirely. We've also fixed a bug that prevented users from reapplying their columnar index to a table once it had been removed.

  • We've fixed an issue where an “Unsupported Function Call” error was raised in certain situations when using the REPLACE function in conditional calculated columns in Connections.

  • We've fixed a bug that caused unnecessary updates to the Users table when a user’s Language and Region wasn't set.

  • We've fixed a bug that was causing some GUID calculated columns to appear as blank, such as in the Integrated Clients table. If you are experiencing this bug, a manual update on the affected rows, either through the UI or through an UPDATE query, will resolve it.

  • We've fixed an issue where the '&' in links was sometimes showing up as 'amp&' in the table view instead. This fix will only appear for customers on Postgres/Microsoft SQL servers 2017 or higher.

  • We've fixed an issue where certain Update statements on multi-select link columns were failing to properly update with the link values specified. This bug was affecting statements with long strings done via the API.

  • We've fixed a bug that was causing the Event Listener to pick up and process messages from deleted configs in the Listener Configs table.

  • We've fixed a bug that was causing an InvalidOperationException when executing a POST request to a Saved Query API.

  • We've fixed a bug that was throwing errors on reconciliation when Data Syncs compared Text Conditional Calculated Columns to Links (PGSQL).

  • FOR JSON PATH now works as expected in PostGres deployments.

Install the data experience

This page outlines Step 3 of Deploying CinchyDXD: Installing the Data Experience

Introduction

The install of a Data Experience is executed in a different environment than that of the export. Please ensure that before moving forward with the following instructions you have an environment to install the data experience into. The install of a data experience MUST be done in the same version. Your source and target environment version MUST be the same (For example, Source Version = 4.11 | Target Version = 4.11).

Below are the details that will be required for the installation environment:

  • Source: <Cinchy target url>

  • UserID: <target user id>

  • Password: <target password>

Install the data experience

Using PowerShell you will now install the Data Experience you have exported out of Cinchy.

  1. Open the File Explorer and navigate to your DX exported folder (Image 1).

  2. In the folder path of the exported data experience, type PowerShell to launch PowerShell for that path (Image 2).

  3. Hit Enter on your keyboard; the PowerShell window will appear (Image 3).

  4. In the PowerShell window, type cin and hit Tab on your keyboard (Image 4).

  5. In the PowerShell command line, type install (Image 5).

  6. Hit Enter on your keyboard (Image 6).

The PowerShell window will provide you with the required and optional components to install the DX.

  7. You must now set up your mandatory install parameters.

The parameters can be executed on a single line in PowerShell, but for legibility they're shown on separate lines below. If you put your parameters on separate lines, you must end each line with a backtick (`) so PowerShell continues the command on the next line.

Sample:

.\CinchyDXD.ps1 install `
 -s "<target Cinchy url>" `
 -u "<target user id>" `
 -p "<target password>" `
 -c "C:\Cinchy CLI v4.0.2" `
 -d "C:\CLI Output Logs"

Be sure that the user(s) and group(s) required to install a DX exist in your target environment. If they don't exist, PowerShell will generate an error message when you attempt to install the DX.

  8. Enter the install parameters into the PowerShell window (Image 7).

  9. Hit Enter on your keyboard to run the install command. Once the Data Experience has been installed you will get a message in PowerShell that the install was completed (Image 8).

Validate the install

  1. Ensure that the Models Table is populated in the target environment with the model that was installed (Image 9).

  2. Ensure that the Currency Exchange Rate table exists in the target environment (Image 10).

  3. Ensure that the Currency Converter query exists in the target environment (Image 11).

  4. Ensure that the Data Experience Definitions table is populated with the DX parameters that were set up in the source environment (Image 12).

  5. Ensure that the Data Experience Releases table in the target environment is populated (Image 13).

Change your file storage configuration

This page details how to change your File Storage configuration in Cinchy v5 to S3, Azure Blob, or Local.

Overview

In v5.2, Cinchy implemented the ability to free up database space by using S3 compatible or Azure Blob Storage for file storage. You can set this configuration in the deployment.json of a Kubernetes installation, or the appsettings.json of an IIS installation.

Kubernetes file storage

  1. If you are using a Kubernetes deployment, you will change your file storage config in the deployment.json.

  2. Navigate to the object storage section, where you will see either S3 or Azure Blob storage, depending on your deployment structure.

Azure Example

AWS Example

  3. To use Blob Storage or S3, update each line with your own parameters.

  4. To use Local storage, leave each line blank except for the Connections_Storage_Type, which you should set to Local:

  5. Run the deployment script by using the following command in the root directory of your devops.automations repository:

  6. Commit and push your changes.

IIS file storage

  1. If you are using an IIS deployment, you will change your file storage config in the Cinchy Web AppSettings file.

  2. Locate the StorageType section of the file and set it to either Local, AzureBlobStorage, or S3.

  3. If you selected AzureBlobStorage, fill out the following lines in the same file:

  4. If you selected S3, fill out the following lines in the same file:

        "object_storage": {
          // Cinchy requires a new Azure Blob Storage container for its file storage. Select a unique name, the template convention follows cinchy
          // Storage Account Names can only consist of lowercase letters and numbers, and must be between 3 and 24 characters long
          "storage_account_name": "cinchynonprod",
          // Two storage containers are created, one for the Connections component and one for the Platform. The default naming convention is -
          "connections_storage_container_name": "nonprod-connections",
          "platform_storage_container_name": "nonprod-platform",
          "connections_storage_type": "AzureBlobStorage",
          // The connection string to the Azure Blob Storage account, it can be retrieved by executing the following command after the terraform apply has completed
          // In the below command the  and  must be replaced with the values for those properties within this file
          // az storage account show-connection-string --name  --resource-group 
          "azure_blob_storage_conn_str": ""
        },
        "object_storage": {
          // Cinchy requires a new S3 bucket for its file storage. Select a unique name, the template convention follows -
          "cinchy_s3_bucket": "-cinchy-nonprod",
          // During the S3 bucket creation, a tag named "Environment" is added to the resource and populated with the following value
          "cinchy_s3_environment_tag": "cinchy-nonprod",
          "connections_storage_type": "S3",
          // IAM user credentials (access key and secret) for access to this bucket. Ensure that the user has the necessary privileges defined in IAM
          "connections_s3_access_key": "",
          "connections_s3_secret_access_key": "",
          // Optional - only set this value if you are using a third party S3 compatible service
          "connections_s3_service_url": ""
        },
          "connections_storage_type": "Local",
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  "AppSettings": {
   ...
    "StorageType": ""      // Set this to "Local" to store files within Cinchy's database. Set this to "AzureBlobStorage" to store files within Azure Blob Storage. Set this to "S3" to store files within S3.
  },
  "AzureBlobStorageSettings": {
    "ConnectionString": "",    // ConnectionString used to connected to the Azure Blob Storage
    "Container": "",       // The container for Cinchy's file storage
    "BasePath": ""         // The base directory path of where to store files within the container (eg. cinchy/files)
  },
  "S3Settings": {
    "AccessKey": "",       // Access Key for the IAM user. Ensure that the user has the necessary privileges defined in IAM
    "SecretAccessKey": "", // Secret for the IAM user. Ensure that the user has the necessary privileges defined in IAM
    "Region": "",          // Region of where the S3 bucket is located in
    "Bucket": "",          // S3 bucket for Cinchy's file storage
    "BasePath": "",            // The base directory path of where to store files within the bucket (eg. cinchy/files)
    "ServiceURL": ""       // (Optional) - only set this value if you are using a third party S3 compatible service
  }

Deployment prerequisites

This page details various prerequisites for deploying Cinchy v5.

General Kubernetes deployment prerequisites

Before deploying Cinchy v5 on Kubernetes, you must follow the steps listed below.

Download your tools

Install the following tools on the machine where the deployment will run:

  • Terraform

  • Kubectl (v1.23.0+)

  • .NET Core 6.0.x

  • Bash (You can also use Git Bash on Windows)

Create your domains

All your Cinchy environments will need a domain for each of the following:

  • ArgoCD

  • OpenSearch

  • Grafana

Do this through your specific domain registrar. For example, GoDaddy or Google Domains.

SSL certs

You must have valid SSL Certs ready when you deploy Cinchy v5. Cinchy recommends using a wildcard certificate if ArgoCD will be exposed via a subdomain. Without the wildcard certificate, you must create a port forward using kubectl on demand to access ArgoCD's portal.
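For example, an on-demand port forward can look like the following (a minimal sketch; it assumes ArgoCD's server service is named argocd-server and runs in the argocd namespace):

kubectl port-forward svc/argocd-server -n argocd 8080:443

The ArgoCD portal is then reachable at https://localhost:8080 for as long as the forward is running.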

You also have the option to use Self-Signed Certs in Kubernetes deployments. You can find more information in the self-signed certificate section of this guide.

Secrets management

Although optional, Cinchy strongly recommends secret management for storing and accessing secrets that you use in the deployment process. Cinchy currently supports Amazon Secrets Manager and Azure Key Vault.

Single sign-on

If you would like to set up single sign-on for use in your Cinchy v5 environments, please review the SSO integration page.

Docker images

You can use Cinchy Docker images or your own. If you would like to use Cinchy images, please follow the section below to access them.

Access Cinchy Docker images

You will pull Docker images from Cinchy's AWS Elastic Container Registry (ECR).

To gain access to Cinchy's Docker images, you need login credentials to the ECR. Contact Cinchy Support for access.

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source. Use this option if you plan on leveraging a DB2 data sync.

  • When installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

Create your repositories

You must create the following four Git repositories. You can use any source control platform that supports Git, such as GitLab, Azure DevOps, or GitHub.

  • cinchy.terraform: Contains all Terraform configurations.

  • cinchy.argocd: Contains all ArgoCD configurations.

  • cinchy.kubernetes: Contains cluster and application component deployment manifests.

  • cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.

You must have a service account with read/write permissions to the git repositories created above.

Access to Cinchy artifacts

You will need to access and download the Cinchy artifacts before deployment.

To access the Kubernetes artifacts:

  1. Access the Cinchy Releases table. Please contact Cinchy Support if you don't have the access credentials necessary.

  2. Navigate to the release you wish to deploy.

  3. Download the .zip file(s) listed under the Kubernetes Artifacts column.

  4. Check the contents of each of the directories into their respective repository.

Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.

Kubernetes Azure requirements

If you are deploying Cinchy v5 on Azure, you require the following:

Terraform requirements

  • A resource group that will contain the Azure Blob Storage with the terraform state.

  • A storage account and container (Azure Blob Storage) for persisting terraform state.

  • Install the Azure CLI on the deployment machine. It must be set to the correct profile/login.

The deployment template has two options available:

  • Use an existing resource group.

  • Create a new one.

Existing resource group

If you prefer an existing resource group, you must provision the following before the deployment:

  • The resource group.

  • A VNet within the resource group.

  • A single subnet. It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /22 to provide a range of 1024 IPs.

New resource group

  • If you prefer a new resource group, all resources will be automatically provisioned.

  • The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24).

  • An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).

Kubernetes AWS requirements

If you are deploying Cinchy v5 on AWS, you require the following:

Terraform requirements:

  • An S3 bucket that will contain the terraform state.

  • Install the AWS CLI on the deployment machine. It must be set to the correct profile/login.

The template has two options available:

  • Use an existing VPC.

  • Create a new one.

Existing VPC

  • If you prefer an existing VPC, you must provision the following before the deployment:

    • The VPC. It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /21 to provide a range of 2048 IPs.

    • 3 Subnets (one per AZ). It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /23 to provide a range of 512 IPs.

    • If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.

New VPC

  • If you prefer a new VPC, all resources will be automatically provisioned.

  • The limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).

  • An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).

  • You must import the SSL certificate into AWS Certificate Manager, or a new certificate can be requested via AWS Certificate Manager.

  • If you are importing an existing certificate, you will need the PEM-encoded certificate body and private key. You can get the PEM file from your chosen domain provider (GoDaddy, Google, etc.); see the sketch below.
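Importing an existing certificate can be done with a single AWS CLI call (a minimal sketch; the file names and region are placeholders to adapt):

aws acm import-certificate \
  --certificate fileb://certificate.pem \
  --private-key fileb://private_key.pem \
  --certificate-chain fileb://certificate_chain.pem \
  --region us-east-1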

IIS deployment prerequisites

Before deploying Cinchy v5 on IIS, you require the following:

Access the artifacts

You need to access and download the Cinchy binary before deployment:

  • Access the Cinchy Releases table. Please contact Cinchy Support if you don't have the access credentials necessary.

  • Navigate to the release you wish to deploy

  • Download the files listed under the Component Artifacts column. This should include zip files for:

    • Cinchy Platform

    • Optional components, such as Cinchy Connections, the Cinchy Event Listener, the Cinchy Maintenance CLI/CLI, and Cinchy Meta-Forms

Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.

General requirements

  1. An instance of SQL Server 2017+

  2. A Windows Server 2012+ machine with IIS 7.5+ installed

    • Specifically, install: ASP.NET Core/.NET Core Runtime & Hosting Bundle

Cinchy Platform 5.4+ uses .NET Core 6.0.

4.18.0+ used .NET Core 3.1 and earlier versions used .NET Core 2.1

System requirements

Minimum web server hardware recommendations

  • 2 × 2 GHz Processor

  • 8 GB RAM

  • 4 GB Hard Disk storage available

Minimum database server hardware recommendations

  • 4 × 2 GHz Processor

  • 12 GB RAM

  • Hard disk storage dependent upon use case. Note that Cinchy maintains historical versions of data and performs soft deletes which will add to the storage requirements.

Clustering

Clustering considerations are applicable to both the Web and Database tiers in the Cinchy deployment architecture.

The web tier can be clustered by introducing a load balancer and scaling web server instances horizontally. Each node within Cinchy uses an in-memory cache of metadata information, and expiration of cached elements is triggered upon data changes that would impact that metadata. Data changes processed by one node wouldn't be known to other nodes without establishing connectivity between them. The nodes must be able to communicate over either HTTP or HTTPS through an IP based binding on the IIS server that allows the broadcast of cache expiration messages. The port used for this communication is different from the standard port that's used by the application when a domain name is involved. Often for customers this means that a firewall port must be opened on these servers.

The database tier relies on standard MS SQL Server failover clustering capabilities.

Scaling considerations

The web application oversees all interactions with Cinchy, whether through the UI or connectivity from an application. It interprets and routes incoming requests; handles serialization/deserialization of data, data validation, and enforcement of access controls; and runs the query engine that transforms Cinchy queries into the physical representation for the database. The memory footprint for the application is low, as caching is limited to metadata, but CPU use grows with request volume and complexity (for example, insert/update operations are more complex than select operations). As the user population grows or request volume increases, there may be a need to add nodes.

The database tier relies on a persistence platform that scales vertically. As the user population grows and request volume increases, the system may require additional CPU / Memory. Cinchy recommends you start off in an environment that allows flexibility (such as a VM) until you can profile the real-world load and establish a configuration. On the storage side, Cinchy maintains historical versions of records when changes are made and performs soft deletes of data which will add to the storage requirements. The volume of updates occurring to records should be considered when estimating the storage size.

Backups

Outside of log files, there is no other data generated and stored on the web servers by the application, which means backups are centered around the database. Since the underlying persistence platform is MS SQL Server, backups rely on the standard procedures for that platform.
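For example, a routine full backup can be taken with standard T-SQL (a minimal sketch; the database name and backup path are assumptions to adapt to your environment):

BACKUP DATABASE [Cinchy]
TO DISK = N'D:\Backups\Cinchy_Full.bak'
WITH INIT, COMPRESSION, CHECKSUM;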

Plan your deployment

This page is your first stop when considering a deployment of Cinchy v5.

Overview

This section will guide you through the considerations and the prerequisites before deploying version 5 of the Cinchy platform.

The pages in this section include:

  • Deployment architecture overview: This page explores your two high-level options for deploying Cinchy, on Kubernetes or on VM, and why Cinchy recommends a Kubernetes deployment. It also walks you through selecting a database to run your deployment on and some sizing considerations.

    • Kubernetes deployment architecture: This page provides Infrastructure (for both Azure and AWS), Cluster, and Platform component overviews for Kubernetes deployments. It also guides you through considerations about your cluster configuration.

    • IIS deployment architecture: This page provides Infrastructure and Platform component overviews for IIS (VM) deployments.

  • Deployment prerequisites: This page details important prerequisites for deploying Cinchy v5.

Deployment planning checklist

Use the following checklist when planning for your Cinchy v5 deployment. Each item links to the appropriate documentation page.

The main differences between a Kubernetes based deployment and an IIS deployment are:

  • Kubernetes offers the ability to elastically scale.

  • IIS limits certain components to running single instances.

  • As all caching is in memory in an IIS deployment, if multiple instances are online for redundancy, point-to-point communication between them (HTTP requests on the server IPs) is required to maintain the cache.

  • Performance is better on Kubernetes because of Kafka/Redis

  • Prometheus/Grafana and OpenSearch aren't available in an IIS deployment

  • The Maintenance CLI runs as a CronJob in Kubernetes while this needs to be orchestrated using a scheduler for an IIS deployment.

  • Upgrades are simpler with the container images on Kubernetes.

Kubernetes checklist

If you will be running on Kubernetes, please review the following checklist:

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.

  • When installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

IIS checklist

If you will be running on IIS, please review the following checklist:

Saved queries

This page explores Saved Queries

Access a Saved Query

Saved queries allow you to query any data within Cinchy (respecting entitlements) and save those queries to be used as APIs by external systems.

You can access your Saved Query directly by either CinchyID or the domain and name of the Saved Query.

<baseurl>/Query/Execute?queryId=<cinchyid>

<baseurl>/Query/Execute/<domain>/<saved query name>

You can find this information in the Saved Queries table.
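For example, an external system could call the query's REST endpoint directly (a minimal sketch; it assumes the API endpoint form <baseurl>/API/<domain>/<query name> described in the integration guides, plus a personal access token, and the domain and query name are placeholders):

curl -H "Authorization: Bearer <your personal access token>" \
  "https://<baseurl>/API/YourQueryDomain/Your%20Saved%20Query"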

You can also search your Cinchy Homepage to find your Saved Query.

Create a Saved Query

  1. From the homepage, select Create > Query (Image 1)

  2. Fill out the following information:

The Info Tab

Under the Info tab, you can fill out information on the query if you wish to save it (Image 2):

  • Query Name: ‌Mandatory field. Must be unique within the Domain.‌

  • Icon: You can optionally pick a non-default icon, as well as a colour, for your query. This will be displayed in My Network.

  • Domain: ‌You need to select a Domain your query will reside in. As an admin, you can also create new domains in this screen.

  • Description: ‌You can give your query a description. This description will be displayed on the home screen to users browsing the marketplace. It will also be searchable.

  • Return Type: Queries have six different return types:

    • Query Results (Approved Data Only)

      This is the default return type, it returns a table from a select query with only approved data for tables with Change Approval enabled, or all data for tables without Change Approval. This is generally used for external APIs as you will want to query approved data, rather than drafts.

    • Query Results (Including Draft Data)

      This return type returns a table from a SELECT query (including draft data) for tables with Change Approval enabled. Use this return type when looking to display results of records that are pending approval.

    • Query Results (Including Version History)

      This return type returns a table from a SELECT query (including draft data) with historical data for all tables, as seen in the Collaboration Log of any record. This data includes all changes that happened to all records within the scope of the select query.

    • Number of Rows Affected

      This return type returns a single string response with the number of rows affected if the last statement in the query is an INSERT, UPDATE, or DELETE statement.

    • Execute DDL Script

      Use this return type when your query contains DDL commands that implement schema changes such as CREATE|ALTER|DROP TABLE, CREATE|ALTER|DROP VIEW, or CREATE|DROP INDEX.

    • Single Value (First Column of First Row)

      This return type returns a result of 1 row x 1 column, irrespective of the underlying result set.

The Query Tab

In the Query screen, you can modify and run your query (Image 3).

On the left hand side you have the Object tree, which shows you all the domains, tables, and columns you have access to query within Cinchy. You can search or simply navigate by expanding the domains and tables.

You can drag and drop the columns or table you're looking for into the Query Builder.
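For example, a simple saved query might look like the following (a sketch only; the domain, table, and column names are hypothetical):

SELECT [Name], [Status]
FROM [Sales].[Customers]
WHERE [Status] = 'Active'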

Once you are satisfied with your query, you can click save to keep a copy. You can then find your query in the "Saved Queries" table (Image 4):

Display a Saved Query

  1. Once you've set up your saved query, you can find it on your homepage (Image 5).

2. Clicking the query will allow you to "Execute Query" and show you the result set (if there is a SELECT at the end). Sometimes the query will have parameters you need to fill out first before executing (Image 6).

Pivot reports

  1. Once you execute a query, you can switch the Display to Pivot Mode to see different visualizations of the data (Image 7).

Pivot URLs

If you want to share the report, you can click the Pivot URL button on the top right to copy the URL to that pivoted report. Simply add it as an applet and bookmark it to return to the pivoted view!

Additional information

For more information and documentation on Cinchy Query Language (CQL), please see the CQL section of the documentation.


Integration guides

This page contains various Integration Guides

Excel

You can use various methods to establish a connection between Cinchy and Microsoft Excel, such as using Basic Auth, Personal Access Tokens, or Bearer Tokens.

Review each section below for further details.

Prerequisites

Excel connects to queries within Cinchy, so before you use any of the connection methods below you will need to create one that represents your dataset. Once created, you will need to copy down the REST API URL endpoint, located as a green button on the right-hand side of the Execute Query screen.

The structure of the URL endpoint is <your Cinchy instance URL>/API/<the name of your query>. You might optionally have querystring parameters at the end as well.

For example: http://your.cinchy.instance.domain/API/YourQueryDomain/API Test

Note that for Basic Authentication with a result format of CSV, we use a slightly different URL endpoint: for Basic Auth, /API/ becomes /BasicAuthAPI/, and for CSV results you add the querystring parameter ResultFormat=CSV.

Our example URL of a basic auth using CSV results would then become: http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV

Use basic auth

  1. Launch Excel and navigate to Data > Get Data > From Other Sources > Blank Query (Image 1).

Image 1: Blank Query
  2. In the expression box that appears, enter the below text to add in your query as your data source (Image 2):

=Csv.Document(Web.Contents("API ENDPOINT URL"))

Example:

=Csv.Document(Web.Contents("http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV"))

Image 2: Add the query as your source
  3. Once you've entered that text either click the check mark to the left of the input box or click away and it will automatically attempt to run the expression.

  4. The data may return in HTML format initially and not be what you're expecting. To correct this:

    1. Select the Data Source Settings.

    2. Select Basic and enter the credentials for a Cinchy User Account that has access to run this query.

    3. Select OK.

    4. Within the Edit Permissions dialogue, click OK.

    5. Within the Data Source Settings dialogue, click Close.

    6. Select Refresh Preview.

    7. Select Close & Load and your dataset will be displayed in the Excel worksheet.

Use a Personal Access Token (PAT)

  1. If needed, follow the documentation here to generate a new PAT.

  2. Launch Excel and navigate to Data > From Web.

  3. Select Advanced and input the following values (Image 3):

    1. URL Parts: This is the Query API URL that you created in the Prerequisites section.

    2. HTTP Request Header Parameters:

      1. In the first text box input Authorization

      2. In the second text box type Bearer + your PAT. For example: "Bearer BGFHFHOJDF76DFDFD777"

Image 3: Advanced Settings
  4. Select OK.

  5. Select Load to use the query data in Excel (Image 4).

Image 4: Load
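If you prefer to keep the header inside the query expression itself, Power Query's Web.Contents can pass it directly (a sketch; the URL and token are placeholders, and you would select Anonymous when prompted for credentials since the header already supplies authorization):

=Csv.Document(Web.Contents("http://your.cinchy.instance.domain/API/YourQueryDomain/API Test?ResultFormat=CSV", [Headers=[Authorization="Bearer <your PAT>"]]))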

Use a Bearer Token

  1. If needed, follow the documentation here to generate a Bearer Token.

  2. Launch Excel and navigate to Data > From Web.

  3. Select Advanced and input the following values (Image 5):

    1. URL Parts: This is the Query API URL that you created in the Prerequisites section.

    2. HTTP Request Header Parameters:

      1. In the first text box input Authorization

      2. In the second text box type Bearer + your token. For example: "Bearer eyUzI1NiIsImtpZCI6IkE4M0UwQTFEQTY1MzE0NkZENUQxOTFDMzRDNTQ0RDJDODYyMzMzMzkiLCJ0eXAiO"

Image 5: Advanced Settings
  4. Select OK.

  5. Select Load to use the query data in Excel (Image 6).

Image 6: Load

Power BI

You can use various methods to establish a connection between Cinchy and Power BI, such as using Basic Auth, Personal Access Tokens, or Bearer Tokens.

Review each section below for further details.

Prerequisites

Power BI connects to queries within Cinchy, so before you use any of the connection methods below you will need to create one that represents your dataset. Once created, you will need to copy down the REST API URL endpoint, located as a green button on the right-hand side of the Execute Query screen.

The structure of the URL endpoint is <your Cinchy instance URL>/API/<the name of your query>. You might optionally have querystring parameters at the end as well.

For example: http://your.cinchy.instance.domain/API/YourQueryDomain/API Test

Note that for Basic Authentication with a result format of CSV, we use a slightly different URL endpoint: for Basic Auth, /API/ becomes /BasicAuthAPI/, and for CSV results you add the querystring parameter ResultFormat=CSV.

Our example URL of a basic auth using CSV results would then become: http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV

Use basic auth

  1. Launch Power BI and navigate to Get Data > Web (Image 7).

Image 7: Get Data > Web
  2. In the window that launches, you will enter the below text, using your own URL endpoint where highlighted (Image 8): =Csv.Document(Web.Contents("http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV"))

Image 8: Enter your expression
  3. Click on the checkmark icon and Power BI will automatically attempt to run the expression (Image 9).

Image 9
  4. Select Edit Credentials > Basic (Image 10). Enter the credentials for a Cinchy User Account that has access to run this query and select the level at which to apply these settings. By default it's the root URL.

This process of entering your credentials won't occur with each query, it's just the first time and then they're saved locally.

Image 10
  5. Select Connect to see your data (Image 11).

Image 11
  6. You can now apply any transformations to the dataset.

In this example we also changed the name from Query1 to Product Roadmap and edited it to use the first row as a header (Image 12).

Image 12
  7. Select Close & Apply. The metadata now shows up on the right hand side and you can begin to use it to create your visualizations (Image 13).

Image 13

Use a Personal Access Token

  1. If needed, follow the documentation here to generate a new Personal Access Token (PAT).

  2. Launch Power BI and navigate to Get Data > Web.

  3. Select Advanced and input the following values (Image 14):

    1. URL Parts: This is the Query API URL that you created in the Prerequisites section.

    2. HTTP Request Header Parameters:

      1. In the first text box input Authorization

      2. In the second text box type Bearer + your PAT. For example: "Bearer BGFHFHOJDF76DFDFD777"

Image 14
  4. Select OK.

  5. Select Load to use the query data in Power BI.

  6. You can now apply any transformations to the dataset.

In this example we also changed the name from Query1 to Product Roadmap and edited it to use the first row as a header (Image 15).

Image 15
  7. Select Close & Apply. The metadata now shows up on the right hand side and you can begin to use it to create your visualizations (Image 16).

Image 16

Use a Bearer Token

  1. If needed, follow the documentation here to generate a Bearer Token.

  2. Launch Power BI and navigate to Get Data > Web.

  3. Select Advanced and input the following values (Image 17):

    1. URL Parts: This is the Query API URL that you created in the Prerequisites section.

    2. HTTP Request Header Parameters:

      1. In the first text box input Authorization

      2. In the second text box type Bearer + your token. For example: "Bearer eyUzI1NiIsImtpZCI6IkE4M0UwQTFEQTY1MzE0NkZENUQxOTFDMzRDNTQ0RDJDODYyMzMzMzkiLCJ0eXAiO"

Image 17
  4. Select OK.

  5. Select Load to use the query data in Power BI.

Tableau

Cinchy exposes a Tableau Web Data Connector that provides access to Cinchy Saved Queries as data sources in Tableau. Tableau versions 2019.2+ are supported.

You need an active internet connection to use the Web Data Connector.

Prerequisites

To get started, you must add a record into the Integrated Clients table in the Cinchy domain with the below values.

  • Client Id: tableau-connector

  • Client Name: Tableau

  • Grant Type: Implicit

  • Permitted Login Redirect URLs:

  • Permitted Logout Redirect URLs:

  • Permitted Scopes: Id, OpenId, Email, Profile, Roles

  • Access Token Lifetime (seconds): 3600

  • Show Cinchy Login Screen: Checked

  • Enabled: Checked

Connect from Tableau

  1. Launch Tableau.

  2. Under Connect -> To a Server select the Web Data Connector option.

  3. Enter the URL from the Permitted Login Redirect URLs field on the Integrated Clients record created under the Prerequisites section above.

  4. The Cinchy login screen will appear; enter your credentials.

  5. Select one or more queries to add to your data set. The result of each query will be available as a Table in Tableau. If a query has parameters, you will be prompted to provide the parameter values before you can add it to your collection.

  6. Select the Load button.

The Cinchy query results will now be accessible for you to create your visualization.

Create tables

This page guides you through creating tables in Cinchy.

Create a table from scratch

  1. Navigate to the Cinchy homepage. In the upper left-hand corner, click on Create to get started. (Image 1)

Image 1: Step 1, Getting Started
  2. Select either a Standard or a Spatial Table (Image 2), per the descriptions below.

  • Spatial Table: A spatial table allows you to create geography and geometry column types, as well as geospatial indexes. You won't be able to create partitions on a spatial table.

  • Standard Table: You can't create geography or geometry columns in a standard table.

You can't convert from one type to another and will have to either recreate your table or link to another table with geospatial columns.

Any existing tables created before installing Cinchy Platform v4.19.0 are standard tables.

Image 2: Step 2, select either a Standard or Spatial Table
  3. Select From Scratch (Image 3).

Image 3: Step 3, Select "From Scratch"
  4. A new page will open with the Table Info tab (Image 4). Input the following information:

  • Table Name: This is a mandatory field, and must be unique to the domain you select.

  • Icon: You can pick an icon and colour to differentiate your table on the home screen.

  • Domain: This is a mandatory field. Select the domain that this table will reside under. If you have administrative privileges, you can also create new domains from this screen.

  • Description: You can give your table a description, which will be displayed on the homepage.

Image 4: Step 4, Adding table info.
  5. When you are finished with the Info page, select Columns from the left-hand navigation bar (Image 5). See here for additional information about column types.

Image 5: Step 5, Selecting the Columns tab
  6. A new page will open with the Columns tab (Image 6). Every table in Cinchy must have at least one column. Input the following information:

  • Column Name: Input a unique column name.

  • Data Type: You can select from the following:

    • Text: All data in this column must be input as text

    • Number: All data in this column must be input numerically

    • Date: All data in this column must be in date format. The default is yyyy-mm-dd however you can change that.

    • Yes/No: All data in this column must be in Yes/No format

    • Calculated: Data in this column is calculated using a CQL expression

    • Choice: Data entered in this column must be selected from a set of choice answers that you provide

    • Link: Data in a link column is pulled from elsewhere on Cinchy

Watch the Cinchy TV episode on Data Types here.

  • Description: Enter a description of your column

  • Data Security Classification: You can select from Public, Internal, Confidential, or Restricted. Additional options can be created in the [Cinchy].[Data Security Classifications] table by an Administrator.

    • Restricted: Restricted data is the most sensitive data, so you would have to treat it extra carefully. If compromised or accessed without authorization, it could lead to criminal charges, massive legal fines, or cause irreparable damage to the company. Examples include intellectual property, proprietary information or data protected by state and federal regulations.

    • Confidential: Often, access to confidential data requires additional authorization and explanation of why access to the data is needed. Examples of confidential data include social security numbers, credit card details, phone numbers or medical records. Depending on the industry, confidential data is protected by laws like GDPR, HIPAA, CASL and others.

    • Internal: This type of data is strictly accessible to internal company personnel or employees who are granted access. This might include internal-only memos, business performance, customer surveys or website analytics.

    • Public: This type of data is freely accessible to all employees and company personnel. It can be freely used, reused, and redistributed without repercussions. An example might be job descriptions, press releases or links to articles.

Currently there is no functionality tied directly to Data Security Classification - the tagging is just for internal auditing purposes. Future security settings will be tied to Data Security Classifications, rather than simply done at a column level.

  • Advanced Settings: Select any checkboxes that pertain to your column. See here for more information about these parameters.

You may have further mandatory or optional data to input depending on your selected Data Type.

Image 6: Step 6, defining your column

7. Click on Design Controls > Entitlements in the left navigation pane to set your permissions (Image 7). You may set these as granular as you choose. You may also set permissions on a view by view basis. See here for more about data controls and entitlements.

Image 7: Step 7, setting your permissions
  8. Click Save to finalize your table.

  9. You may return to change the structure of the existing table (such as rename columns, add new columns, change data type) by clicking on the Design Table button on the left-hand navigation (Image 8).

Image 8: Step 9, changing an existing table structure

Import a CSV to create a table

  1. Navigate to the Cinchy homepage. In the upper left-hand corner, click on Create to get started (Image 9).

Image 9: Step 1, Getting Started
  2. Select either a Standard or a Spatial Table (Image 10), per the descriptions below.

  • Spatial Table: A spatial table allows you to create geography and geometry column types, as well as geospatial indexes. You won't be able to create partitions on a spatial table.

  • Standard Table: You can't create geography or geometry columns in a standard table.

You can't convert from one type to another and will have to either recreate your table or link to another table with geospatial columns.

Any existing tables created before installing Cinchy Platform v4.19.0 are standard tables.

Image 10: Step 2, select either a Standard or Spatial Table
  3. Select Import a CSV (Image 11).

Image 11: Step 3, Select "Import a CSV"
  4. Enter the following information:

  • Domain: Select the domain that your table will reside under. If you have administrative privileges, you can also create new domains from this screen.

  • File: To create the table, you must upload a .csv file.

The column names in your .csv file must not conflict with System Columns.

  5. When creating a table via Import a CSV, some settings will be set by default:

  • Default Table Name: The name of the file will be used as the name of the table (a number will be appended if there is a duplicate - for example, uploading Teams.csv will create a table named Teams 1, then Teams 2 if uploaded again). You can always rename the table after it has been created.

  • Default Table Icon: The icon defaults to a green paintbrush.

  • Default Column Types: Columns by default will be created as a text field, with a maximum length of the longest value in the column. If a column has only numeric values in it, it will be created as a numeric column.

  6. To update these settings, navigate to the Design Table tab on the left navigation bar (Image 12).

Image 12: Step 6, Design Table tab

Table views

  1. When you first create a table, a default view called All Data will be created for you, which you can find on the left navigation bar under Manage Data (Image 13).

Image 13: Step 1: The All Data view
  2. You can create additional views by clicking on "+Create View" (Image 14).

Image 14: Step 2, Select "Create View"
  3. You may choose to create a view From Scratch or by Copying an Existing view (Image 15).

Image 15: Step 3, Creating a View

From Scratch:

  1. Select From Scratch.

  2. The Columns tab will open. Create a Name for your View (Image 16).

  3. If you'd like this to become the default view, toggle the default setting to On (Image 16).

Image 16: Steps 2,3

4. Select the column(s) that you want to be visible in this view (Image 17). You may rearrange the column order using drag and drop.

Image 17: Step 4, Selecting Columns

5. Click on the Sort tab in the left navigation bar (Image 18).

Image 18: Step 5, Sorting

6. Use this screen to select which columns you'd like to sort your data by, and in which order. You may rearrange the columns using drag and drop (Image 19).

Image 19: Step 6, Sorting your view

7. Click on the Filter tab in the left navigation bar. Here, you may use query language to focus your view (Image 20).

Image 20: Step 7, Using the Filter tab
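The filter takes a query-language expression, much like the condition you would put in a WHERE clause. A sketch (the column name and value are hypothetical):

[Status] = 'Active'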

8. Click on the Permission tab in the left navigation bar. Here, you may set permissions for who can use this view. By default, it's set to All Users (Image 21).

Image 21: Step 8, setting permissions.

9. Select Save to finalize your view.

From Existing

  1. Select From Existing.

  2. Select which view you would like to copy (Image 22).

Image 22: Step 2, Importing a view

Updating a view

  1. To update any view, including the Add Data view, click on the pencil icon next to the view's name under Manage Data (Image 23).

Image 23: Step 1, Updating a View

Bookmarks and the Homepage

Once you create a table, it will be added to your bookmarks by default. Other users (or you, if you un-star the table from your bookmarks) will see it on the Homepage if they have permission to.

Cinchy Upgrade Utility

This page details information on the Cinchy Upgrade Utility.

Overview

The Cinchy Upgrade Utility was first introduced in v5.2 to ease a mandatory INT to BigInt upgrade. This tool has continued to be used in subsequent releases as an easy way to deploy necessary changes to your Cinchy platform.

Considerations

  • Upgrades will also be specified on the applicable Upgrade Guide page for each release.

  • Depending on your upgrade path, certain upgrades must be performed in sequential and/or specific order. This will be clearly marked in the "Overview and Considerations" section.

    • For example: To go from v5.1 to v5.5, you would first have to run the 5.2 upgrade utility and deploy the release. Once validated, you would then run the 5.5 upgrade and deploy that version.

  • Not all new releases will have changes that require the utility to be run. Review the table in the Upgrades section below for the full list.

Prerequisites

  • You will need to run this process as a user with admin/dbowner privileges to your database.

  • You will need to have .NET Core 6.0.x installed on the machine that you will run the utility on.

  • Retrieve the Upgrade Utility from the Cinchy Releases table.

Upgrades

The following releases require an upgrade via the utility. See the Kubernetes and IIS upgrade guides for the corresponding deployment steps.

  • 5.2

  • 5.5

Overview and considerations

v5.2: INT to BigInt

Overview

Cinchy v5.2 introduced the update from INT to BigInt data types to increase the number of possible Cinchy IDs that can be generated. This in turn allows the creation of more records within one table, so that you can create and manage larger data sets.

Previous Limit: 2,147,483,647 (2^31-1) Cinchy IDs per table

Updated Limit: 9,223,372,036,854,775,807 (2^63-1) Cinchy IDs per table

This upgrade is REQUIRED when upgrading from v5.1 or lower to v5.2 or higher.

Considerations

  • If you are upgrading from any non-5.x version (3.x or 4.x), we recommend first upgrading to v5.1.4 to process the major database change. Once v5.1.4 has been deployed, you may run the 5.2 utility upgrade.

  • To run the 5.2 upgrade, use the -v "5.2" flag in the upgrade utility. Remember to deploy the release once the upgrade is validated.

v5.5: 4000 Character Bug

Overview

To upgrade to Cinchy version 5.5, you must run the Upgrade Utility to fix a row-breaking issue that could be triggered on cells with over 4000 characters, which left you unable to update any column in the affected record.

This upgrade is REQUIRED when upgrading to Cinchy v5.5.

Considerations

  • If you are upgrading from any version lower than 5.2, you must first perform the v5.2 INT to BigInt upgrade and deploy that release.

  • To run the 5.5 upgrade, use the -v "5.5" flag in the upgrade utility. Remember to deploy the release once the upgrade is validated.

Upgrade instructions

We recommend you follow this process during off-peak hours.

  1. Turn off your Cinchy platform. (Note: This step is only required for the 5.2 upgrade)

    1. In a Kubernetes deployment, you can do so via ArgoCD.

    2. In an IIS Deployment:

      1. Open your Windows Services Panel.

      2. Select IIS Admin Service.

      3. Stop the service.

      4. Right-click IIS Admin Service and select Properties.

      5. Change 'Start Up Type' to 'Disabled'.

  2. Create a backup snapshot of your platform.

    1. In a Kubernetes deployment on AWS, you can follow the documentation here.

    2. In a Kubernetes deployment on Azure, you can follow the documentation here.

    3. In an IIS Deployment, you can follow the documentation here.

  3. Retrieve the Upgrade Utility from the Cinchy Releases table if you haven't already.

  4. Run the following command through a command window as an admin/dbowner, using the table below as a guide.

dotnet cinchy.upgrade-utility.dll -d "TSQL" -s "Server=LAPTOP-4SUPR0L6;Database=T6;User ID=cinchy;Password=cinchy;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;Encrypt=False" -v "5.2"
  • -d: Mandatory. Database type. This can be either TSQL or PGSQL.

  • -s: Mandatory. Connection string. You must provide the unencrypted connection string for your database.

  • -v: Mandatory. This specifies the upgrade version that you wish to deploy. For example, specifying "5.2" will run the 5.2 upgrade.

  • -c: Optional, and not recommended on your first run of the utility. This "clean up" flag will delete any extra metadata the application created on the database.

  5. You will see the below progress bar as your upgrade completes (Image 1). Once it's done, you will see a VALIDATION PASSED check.

Tip: Click on the image below to enlarge it.

Image 1: You will see the below progress bar as your upgrade completes

If there are any errors during execution or your validation fails, we suggest that you restore your database from the backup and contact Cinchy support.

  6. Deploy your Cinchy Upgrade.

Note: You must deploy whichever version of the platform you ran the upgrade utility for.

  7. If it was turned off in step 1, turn your Cinchy platform back on.

    1. In a Kubernetes deployment, you can do so via Argo CD

    2. In an IIS deployment:

      1. Open your Windows Services Panel.

      2. Select IIS Admin Service.

      3. Start the service.

      4. Right-click IIS Admin Service and select Properties.

      5. Change 'Start Up Type' to 'Automatic'.

v4.x to v5.x (IIS)

This page details the upgrade process for Cinchy v4.x to v5.x on IIS.

Upgrading on IIS (v4 to v5+)

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.2 or higher, please see the Cinchy Upgrade Utility page and follow the directives there. This process can be run when upgrading your IIS v4 instance to any v5+ instance.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString in steps 3.2.2 and 3.3.2. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

Ex:
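A sketch of what the amended connection string could look like (the server, database, and credential values are placeholders):

Server=<your server>;Database=<your database>;User ID=<user id>;Password=<password>;Trusted_Connection=False;TrustServerCertificate=True;Connection Timeout=30;Min Pool Size=10;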

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Update the CinchySSO appsettings.json

  1. Open the C:\CinchySSO\appsettings.json file in a text editor and update the values below.

AppSettings

  1. Under AppSettings section, update the values outlined in the table.

  2. Wherever you see <base url> in the value, replace this with the actual protocol (HTTP or HTTPS) and the domain name (or IP address) you plan to use.

For example, if you're using HTTPS with the domain app.cinchy.co, then <base url> should be replaced with https://app.cinchy.co

Key
Value

4.18.0+ includes session expiration based on the CinchyAccessTokenLifetime. For the default of 0.00:30:00, this means that if you have been inactive in Cinchy for 30 minutes, your session will expire and you will need to log in again.
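The value lives in the AppSettings block of appsettings.json; a sketch of the relevant fragment, showing the default 30-minute lifetime:

"AppSettings": {
  ...
  "CinchyAccessTokenLifetime": "0.00:30:00"
}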

SSO values

The values below are only required for SSO, otherwise leave them as blank.

Key
Value

Connection String

In order for the application to connect to the database, the "SqlServer" value needs to be set.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation, as in the example shown earlier on this page.

Ensure your database type is set to TSQL.

  1. Find and update the value under the "ConnectionStrings" section:

External identity claim section

Under the "ExternalIdentityClaimSection" section you'll see the following values.

These values are used for SAML SSO. If you aren't using SSO, keep these values blank.

Key
Value

Serilog

  1. "Serilog" has a property that allows you to configure where it logs to. In the below code, update the following:

    1. "Name" must be set to "File" so it writes to a physical file on the disk.

    2. "Path" must be set to the file path to where you want it to log.

Update the Cinchy appsettings.json

  1. Navigate to C:\Cinchy

  2. Delete the appsettings.Development.json

  3. Navigate to the appsettings.json file and update the following properties:

AppSettings

Key
Value

Connection String

In order for the application to connect to the database, the "SqlServer" value needs to be set.

If you are upgrading to 5.4+ on an SQL Server Database, you will need to make the same change to your connectionString here. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation, as in the example shown earlier on this page.

Ensure your database type is set to TSQL

  1. Find and update the value under the "ConnectionStrings" section:

Serilog

  1. "Serilog" has a property that allows you to configure where it logs to. In the below code, update the following:

    • "Name" must be set to "File" so it writes to a physical file on the disk.

    • "Path" must be set to the file path to where you want it to log.

You can also use an alternative setting if you want to have rolling log files with retention settings by adding in the following parameters:

  • Your full "Serilog" property, if you choose to use the alternative settings, would look like this, inputting your own variables as required:
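A minimal sketch of what the full property could look like with the standard Serilog file sink's rolling options (the path, retention count, and size limit here are assumptions to adapt):

"Serilog": {
  "MinimumLevel": {
    "Default": "Information"
  },
  "WriteTo": [
    {
      "Name": "File",
      "Args": {
        "path": "C:\\CinchyLogs\\Cinchy\\log.json",
        "rollingInterval": "Day",
        "retainedFileCountLimit": 7,
        "fileSizeLimitBytes": 100000000,
        "rollOnFileSizeLimit": true
      }
    }
  ]
}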

Configure the IIS Manager and run your upgrade

  1. Open your Internet Information Services (IIS) Manager.

  2. Navigate to Connections > Sites.

  3. Right click on the Cinchy site and select Manage Application > Advanced Settings.

  4. Change the Cinchy folder path to that of the version you're deploying.

  5. Right click on the CinchySSO site and select Manage Application > Advanced Settings

  6. Ensure that both Applications Pools for Cinchy and CinchySSO have their .NET CLR Versions set to No Managed Code.

  7. Change the Cinchy SSO folder path to that of the version you're deploying.

  8. Execute the following command:

  9. Execute the following command:

  10. Open your Cinchy URL in your browser.

Because Cinchy v5 creates new tables and assets in the background upon initialization, this first startup may take longer to fully load than usual.

  11. Ensure that you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

Grafana

This page is about the analytics and visualization application Grafana, one of our recommended components for Cinchy v5 on Kubernetes.

Grafana Overview

Grafana is an open source analytics and interactive visualization web application. When connected to your Cinchy platform, it provides charts, graphs, and alerting capabilities (Image 1).

Grafana, and its paired application Prometheus (which consumes metrics from the running components in your environment), is the recommended visualization setup for Cinchy v5 on Kubernetes.

Get started with Grafana

Grafana has a robust library of documentation and tutorials designed to help you learn the fundamentals of the application. We've listed some notable ones below:

When using the default configuration pairing of Grafana and Prometheus, Prometheus is already set up as a data source in your metrics dashboard.

Access your saved dashboards

Cinchy comes with some saved dashboards out of the box. These dashboards provide a great jumping-off point for your metrics monitoring, and you can always customize, manage, and add further dashboards at your leisure.

  1. Navigate to the left navigation pane, select the Dashboards icon > Manage (Image 2).

2. You will see a list of all of the Dashboards available to you (Image 3). Clicking on any of them will take you to a full metrics view (Image 4).

3. You can favourite any of your commonly used or most important dashboards by clicking on the star (Image 5).

4. Once you favourite a dashboard, you can easily find it by navigating to the left navigation pane and selecting the Dashboards icon > Home. This will open the Dashboards Home, where you can see both your favourite and recent dashboards (Image 6).

Recommended Dashboards

Your Cinchy v5 deployment comes with some out-of-the-box dashboards already made. You are able to customize these to suit your specifications. The following are a few notable ones:

Kubernetes/Compute Resources/Cluster

Purpose: This dashboard provides a general overview of your entire cluster including all of your environments and pods (Image 7).

Metrics:

The following are some example metrics that you could expect to see from this dashboard:

  • CPU Usage

  • CPU Quota

  • Memory Use

  • Memory Requests

  • Current Network Usage

  • Bandwidth (Transmitted and Received)

  • Average Container Bandwidth by Namespace

  • Rate of Packets

  • Rate of Packets Dropped

  • Storage IO & Distribution

Kubernetes/Compute Resources/Namespace (Workloads)

Purpose: This dashboard is useful for looking at environment specific details (Image 8). You can use the namespace drop down menu to select which environment you want to visualize (Image 9). This can be particularly helpful during load testing. You are also able to drill down to a specific workload by clicking on its name.

Metrics:

The following are some example metrics that you could expect to see from this dashboard:

  • CPU Usage

  • CPU Quota

  • Memory Use

  • Memory Quota

  • Current Network Usage

  • Bandwidth (Transmitted and Received)

  • Average Container Bandwidth by Workload

  • Rate of Packets

  • Rate of Packets Dropped

Set up alerts

Grafana lets you set up push alerts against your dashboards and queries. Once you have created your dashboard, you can follow the steps below to set up your alert.

Grafana doesn't have the capability to run alerts against queries with template variables.

To send emails out from Grafana, you need to configure your SMTP settings. This would have been done in the automation script run during your initial Cinchy v5 deployment. If you didn't input this information at that time, you must do so before setting up your email alerts.

Set up your notifications channel

Your notifications channel refers to who will be receiving your alert. To set one up:

  1. Click on the Alert icon on the left navigation tab (Image 10), and locate "Notifications Channel"

  1. Click the "Add a Channel" button.

  2. Add in the following parameters, including any optional checkboxes you wish to use (Image 11):

Name: The name of this channel.

Type: You have several options here, but email is the most common.

Addresses: Input all the email addresses you want to be notified of this alert, separated by a comma.

  1. Click Test to send out a test email, if desired.

  2. Save your Notification Channel.

Set up your alert

The following details how to set up alerts on your dashboards. You can also set up alerts upon creation of your dashboard from the same window.

  1. Navigate to the dashboard and dashboard panel that you want to set up an alert for. This example sets up an alert for CPU usage on our cluster.

  2. Click on the dashboard name > Edit

  3. Click on the Alert tab (Image 12).

  1. Input the following parameters to set up your alert (Image 13):

  • Alert Name: A title for your alert

  • Alert Timing: Choose how often to evaluate and for how long. In this example it's evaluated every minute for five minutes.

  • Conditions: Here you can set your threshold conditions for when an alert will be sent out. In this example, it's sent when the average of query A is above 75.

  • Set what happens if there's no data, or an error in your data

  • Add in your notification channel (who will be sent this notification)

  • Add a message to accompany the alert.

  • Click Apply > Save to finalize your alert.

Click on an image to enlarge it.

Recommended alerts

Below are a few alerts we recommend setting up on your Grafana.

CPU usage

Set up this alert to notify you when the CPU Usage on your nodes exceeds a specified limit.

Dashboard Query

You can use the following example queries to set up a dashboard that will capture CPU Usage by Node (Image 14).

Alert:

Set up your alert. This example uses a threshold limit of 75 (Image 15).

Memory Usage

Set up this alert to notify you when the Memory Usage on your nodes exceeds a specified limit.

Dashboard Query

You can use the following example queries to set up a dashboard that will capture Memory Usage by Node (Image 16).

Alert:

Set up your alert. This example uses a threshold limit of 85 (Image 17).

Disk Usage

Set up this alert to notify you when the Disk Usage on your nodes exceeds a specified limit.

Dashboard Query

You can use the following example queries to set up a dashboard that will capture Disk Usage by Node (Image 18)

Alert

Set up your alert. This example uses a threshold limit of 80 (Image 17).

I/O wait

Set up this alert to check the amount of iowait from the CPU. A high value usually indicates a slow/overloaded HDD or Network.

Dashboard Query

You can use the following example queries to set up a dashboard that will capture the CPU I/O wait (Image 19).

Alert

Set up your alert. This example uses a threshold limit of 60 (Image 19).

Update your Grafana Password

This capability was added in Cinchy v5.4.

Your Grafana password can be updated in your deployment.json file (you may have renamed this during your original deployment).

  1. Navigate to cluster_component_config > grafana.

  2. The default password is set to prom-operator; update this with your preferred new password, written in clear text.

  3. Run the below command in the root directory of your devops.automations repository to update your configurations. If you have changed the name of your deployment.json file, make sure to update the command accordingly.

  1. Commit and push your changes.

  2. If your environment isn't set up to automatically apply configuration changes, navigate to the ArgoCD portal and refresh your component(s). If that doesn't work, re-sync.

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"

CinchyUri

<base url>

CertificatePath

Adjust the certificate path to point to the CinchySSO v5 folder. C:\CinchySSO\cinchyidentitysrv.pfx

StsPublicOriginUri

Base URL used by the .well-known discovery. If left blank, it will match the request URL.

/cinchysso

CinchyAccessTokenLifetime

Duration for the Cinchy Access Token. This determines how long a user can be inactive until they need to re-enter their credentials. It defaults to 0.00:30:00

DB Type

Set this to "TSQL"

SAMLClientEntityId

Client Entity Id

SAMLIDPEntityId

Identity Provider Entity Id

SAMLMetadataXmlPath

Identity Provider metadata XML file path

SAMLSSOServiceURL

Configure service endpoint for SAML authentication

AcsURLModule

This parameter needs to be configured as per your SAML ACS URL. For example, if your ACS URL looks like this - https://<your domain>/CinchySSO/identity/AuthServices/Acs, then the value of this parameter should be "/identity/AuthServices"

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;"
"SqlServer" : "Server=MyServer;Database=Cinchy;Trusted_Connection=True;Connection Timeout=30;Min Pool Size=10;"

ExternalIdentityClaim > FirstName > ExternalClaimName

ExternalIdentityClaim > LastName > ExternalClaimName

ExternalIdentityClaim > Email > ExternalClaimName

ExternalIdentityClaim -> MemberOf -> ExternalClaimName

  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System.Net": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\CinchyLogs\\Cinchy\\log.json",
          "formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
        }
      }
    ]
  }

StsAuthorityUri

This should match your Cinchy SSO URL

UseHttps

This is "false" by default.

DB Type

Set this to "TSQL"

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;"
"SqlServer" : "Server=MyServer;Database=Cinchy;Trusted_Connection=True;Connection Timeout=30;
  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System.Net": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\CinchyLogs\\Cinchy\\log.json",
          "formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
        }
      }
    ]
  }
          "preserveLogFilename": true,
          "shared": "true",
          "rollingInterval": "Day",
          "rollOnFileSizeLimit": true,
          "fileSizeLimitBytes": 100000000,
          "retainedFileCountLimit": 30,
"Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System.Net": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\CinchyLogs\\Cinchy\\log.txt",
          "preserveLogFilename": true,
          "shared": "true",
          "rollingInterval": "Day",
          "rollOnFileSizeLimit": true,
          "fileSizeLimitBytes": 100000000,
          "retainedFileCountLimit": 30,
          "formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
        }
      }
    ]
  }
iisreset -stop 
iisreset -start 
Cinchy Upgrade Utility
TrustServerCertificate=True
Follow this guide
new build
TrustServerCertificate=True
SQL Server Authentication Example:
SQL Server Windows Authentication Example:
TrustServerCertificate=True
SQL Server Authentication Example:
SQL Server Windows Authentication Example:
avg by (node_name) (100 - ((avg by (cpu,node_name) (irate(node_cpu_seconds_total{mode="idle"}[1m]))) * 100))
100 - ((avg by (cpu,node_name) (irate(node_cpu_seconds_total{mode="idle"}[1m]))) * 100)
((node_memory_MemTotal_bytes-node_memory_MemAvailable_bytes) / (node_memory_MemTotal_bytes))*100
(sum((node_filesystem_size_bytes))by(node_name) - sum((node_filesystem_free_bytes))by(node_name)) *100/(sum((node_filesystem_avail_bytes))by(node_name)+(sum((node_filesystem_size_bytes))by(node_name) - sum((node_filesystem_free_bytes))by(node_name)))
(sum(irate(node_cpu_seconds_total{mode="iowait"}[1m]))by(node_name) * 100 / 4)
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
Grafana
Prometheus
Exploring your Metrics
Best Practices for Creating Dashboards
Building a Dashboard
Guide to Dashboard Types and Capabilities
Creating a Managed Alert
All Documentation
Image 1: An example visualization from Grafana
Image 2: Accessing your Grafana Dashboards
Image 3: Step 2, Viewing all dashboards
Image 4: Step 3, an example visualization
Image 5: Step 3, favorite a dashboard
Image 6: Step 4, favourite and recent dashboards
Image 7: Kubernetes/Compute Resources/Cluster dashboard
Image 8: The Kubernetes/Compute Resources/Namespace (Workloads) dashboard
Image 9: Use the namespace drop down menu to select which environment you want to view details for
Image 10: The Alert icon
Image 11: Setting up your Notification Channel
Image 12: The alert tab
Image 13: Your alert parameters
Image 14: CPU Usage Dashboard Query
Image 15: CPU Threshold Alert
Image 16: Memory Usage Query
Image 17: Memory Usage Alert
Image 18: Disk Usage Query
Image 17: Disk Usage Alert
Image 18: Iowait Query
Image 19: Iowait Alert
INT to BigInt
Upgrade Guide
Upgrade Guide
4000 Character Bug
Upgrade Guide
Upgrade Guide
https://<your Cinchy URL>/Tableau/Connector
https://<your Cinchy URL>/Tableau/Connector

Set up Alerts

Monitoring and alerting

OpenSearch comes with the ability to set up alerts based on any number of monitors. You can then push these alerts via email, should you desire.

Before you set up a monitor or alert, ensure that you have added your data source as an index pattern.

Definitions:

Create your destination

Your destination will be where you want your alerts to be pushed to. OpenSearch supports various options, but this guide focuses on email.

  1. From the left navigation pane, click Alerting (Image 1).

  1. Click on the Destinations Tab > Add Destination

  2. Add a name to label your destination and select Email as type (Image 2)

  1. You will need to assign a Sender. This is the email address that the alert will send from when you specify this specific destination. To add a new Sender, click Manage Senders (Image 3).

  1. Click Add Sender

  2. Add in the following information (Image 4):

  • Sender Name

  • Email Address

  • Host (this is the host address for the email provider)

  • Port (this is the Port of the email provider)

  • Encryption

Ensure that you authenticate the Sender, or your alert won't work.

  1. You will need to assign your Recipients. This is the email address(es) that will receive the alert when you specify this specific destination. To add a new Recipient, you can either type their email(s) into the box, or click Manage Senders to create an email group (Image 5).

  1. Click Update to finish your Destination.

Authenticate your sender

You will need to authenticate your sender for emails to come through. Please contact Cinchy Customer Support to help you with this step.

  • Via email: [email protected]

  • Via phone: 1-888-792-6051

  • Through the support portal:

Create your monitor

Your monitor is a job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.

  1. From the Alerting dashboard, select Monitors > Create Monitor (Image 6).

  1. Under Monitor Details, add in the following information (Image 7).

  • Monitor Name

  • Monitor Type (This example uses Per Bucket)

    • Whereas query-level monitors run your specified query and then check whether the query’s results trigger any alerts, bucket-level monitors let you select fields to create buckets and categorize your results into those buckets.

    • The alerting plugin runs each bucket’s unique results against a script you define later, so you have finer control over which results should trigger alerts. Each of those buckets can trigger an alert, but query-level monitors can only trigger one alert at a time.

  • Monitor Defining Method: the way you want to define your query and triggers. (This example uses Visual Editor)

    • Visual definition works well for monitors that you can define as “some value is above or below some threshold for some amount of time.”

    • Query definition gives you flexibility in terms of what you query for (using the OpenSearch query DSL) and how you evaluate the results of that query (Painless scripting).

  • Schedule: Choose a frequency and time zone for your monitor.

  1. Under Data Source add in the following information (Image 8):

  • Index: Define the index you want to use as a source for this monitor

  • Time Field: Select the time field that will be used for the x-axis of your monitor

  1. The Query section will appear differently depending on the Monitor Defining Method selected in step 2 (Image 9). This example is using the visual editor.

To define a monitor visually, select an aggregation (for example, count() or average()), a data filter if you want to monitor a subset of your source index, and a group-by field if you want to include an aggregation field in your query. At least one group-by field is required if you’re defining a bucket-level monitor. Visual definition works well for most monitors.

Add a trigger

A trigger is a condition that, if met, will generate an alert.

  1. To add a trigger, click the Add a Trigger button (Image 10).

  1. Under New Trigger, define your trigger name and severity level (with 1 being the highest) (Image 11).

  1. Under Trigger Conditions, you will specify the thresholds for the query metrics you set up previously (Image 12). In the below example, our trigger will be met if our COUNT threshold goes ABOVE 10000.

You can also use keyword filters to drill down into a more specific subset of data from your data source.

  1. In the Action section you will define what happens if the trigger condition is met (Image 13). Enter the following information to set up your Action:

  • Action Name

  • Message Subject: In the case of an email alert, this will be the email subject line.

  • Message: In the case of an email alert, this will be the email body.

  • Perform Action: If you’re using a bucket-level monitor, decide whether the action is performed per execution or per alert.

  • Throttling: Enable action throttling if you wish. Use action throttling to limit the number of notifications you receive within a given span of time.

  1. Click Send Test Message, if you want to test that the alert functions correctly.

  2. Click Save.

Example alerts

Alerting on Stream Errors

This example pushes an alert based on errors. We will monitor our Connections stream for any instance of 'error', and push out an alert when our trigger threshold is hit.

  1. First we create our Monitor by defining the following (Image 14):

  • Index: This example looks at Connections.

  • Time Field

  • Time Range: Define how far back you want to monitor

  • Data Filter: We want to monitor specifically whenever the Stream field of our index is stderr (standard error).

This is how our example monitor will appear; it shows when in the last 15 days our Connections app had errors in the log (Image 15).

  1. Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example we want to be alerted when there is more than one stderr in our Connections stream (Image 16). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and the threshold of 1.

The trigger threshold will be visible on your monitoring graph as a red line.

Alerting on Kubernetes restarts

This example pushes an alert based on the kubectl.kubernetes.io/restartedAt annotation, which updates whenever your pod restarts. We will monitor this annotation across our entire product-mssql instance, and push out an alert when our trigger threshold is hit.

  1. First we create our Monitor by defining the following (Image 17):

  • Index: This example looks at the entire product-mssql instance.

  • Time Field

  • Query: This example is using the total count of the kubectl.kubernetes.io/restartedAt annotation.

  • Time Range: Define how far back you want to monitor. This example goes back 30 days.

This is how our example monitor will appear; it shows when in the last 30 days our instance had restarts (Image 18).

2. Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example we want to be alerted when there are more than 100 restarts across our instance (Image 19). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and the threshold of 100.

The trigger threshold will be visible on your monitoring graph as a red line.

Alerting on status codes

This example pushes an alert based on status codes. We will monitor our entire instance for 400 status codes and push out an alert when our trigger threshold is hit.

  1. First we create our Monitor by defining the following (Image 20):

  • Index: This example looks across the entire product-mssql-1 instance.

  • Time Field

  • Time Range: Define how far back you want to monitor. The time range for this example is the past day.

  • Data Filter: We want to monitor specifically whenever the Status Code is 400 (bad request).

This is how our example monitor will appear (note that there are no instances of a 400 status code in this graph) (Image 21).

  1. Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to the defined Recipient(s). In this example we want to be alerted when there is at least one 400 status code across our instance (Image 22). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and the threshold of 0.

The trigger threshold will be visible on your monitoring graph as a red line.

OpenSearch dashboards

Overview

When deploying Cinchy v5 on Kubernetes, Cinchy recommends using OpenSearch Dashboards for your logging. OpenSearch is a community-driven fork of Elasticsearch created by Amazon, and it captures and indexes all your logs into a single, accessible dashboard location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components. These logging components take advantage of persistent storage.

You can view OpenSearch documentation here:

Get started with OpenSearch Dashboards

These sections guide you through setting up your first Index, Visualization, Dashboard, and Alert.

OpenSearch comes with sample data that you can use to get a feel of the various capabilities. You will find this on the main page upon logging in.

Define your log level

  1. Navigate to your cinchy.kubernetes/environment_kustomizations/instance_template/worker/kustomization.yaml file.

  2. In the below code, copy the Base64 encoded string in the value parameter.

  1. Decode the value to retrieve your AppSettings.

  2. Navigate to the below Serilog section of the code and update the "Default" parameter as needed to set your log level. The options are:

  1. Ensure that you commit your changes.

  2. Navigate to ArgoCD > Worker Application and refresh.

Common log search patterns

The following are some common search patterns when looking through your OpenSearch Logs.

  • If an HTTP request to Cinchy Web/IDP fails, check the page's requests and the relevant response headers to find the "x-correlation-id" header. That header value can be used to search and find all logs associated with the HTTP request.

  • When debugging batch syncs, filter the "ExecutionId" field in the logs for your batch sync execution ID to narrow down your search.

  • When debugging real time syncs, search for your data sync config name in the Event Listener or Workers logs to find all the associated logging information.

Set up an index

The first step to utilizing the power of OpenSearch Dashboards is to set up an index to pull data from your sources. An Index Pattern identifies which indices you want to explore. An index pattern can point to a specific index, for example, your log data from yesterday, or all indices that contain your log data.

  1. Log in to OpenSearch. You would have configured the access point during your deployment installation; traditionally it will be found at <baseurl>/dashboard.

If this is your first time logging in, the username and password will be set to admin/admin.

We highly recommend you update the password as soon as possible.

  1. Navigate to the Stack Management tab in the left navigation menu (Image 1).

  1. From the left navigation, click on Index Patterns (Image 2).

  1. Click on the Create Index Pattern button.

  2. To set up your index pattern, you must define the source. OpenSearch will list the sources available to you on the screen below. Input your desired source(s) in the text box (Image 3).

You can use the asterisk (*) to match multiple sources.

  1. Configure your index pattern settings (Image 4).

  • Time field: Select a primary time field to use with the global time filter

  • Custom index pattern ID: By default, OpenSearch gives a unique identifier to each index pattern. You can use this field to optionally override the default ID with a custom one.

  1. Once created, you can review your Index Patterns from the Index Patterns page (Image 5).

  1. Click on your Index Pattern to review your fields (Image 6).

Create a visualization

You can pull out any data from your index sources and view them in a variety of visualizations.

  1. From the left navigation pane, click Visualize (Image 7).

  1. If you have any Visualizations, they will appear on this page. To create a new one, click the Create Visualization button (Image 8).

  1. Select your visualization type from the populated list (Image 9).

  1. Choose your source (Image 10). If the source you want to pull data from isn't listed, you will need to set it up as an index first.

  1. Configure the data parameters that appear in the right hand pane of the Create screen. These options will vary depending on what type of visualization you choose in step 3. The following example uses a pie chart visualization (Image 11):

  • Metrics

    • Aggregation: Choose how you want your data aggregated. This example uses Count.

    • Custom Label: You can use this optional field for custom labelling.

  • Buckets

    • Aggregation: Choose how you want your data aggregated. This example uses Split Slices > Terms.

    • Field: This drop down is populated based on the index source you chose. Select which field you want to use in your visualization. This example uses machine.os.keyword.

    • Order By: Define how you want your data to be ordered. This example uses Metric: Count, in descending order of size 10.

    • Choose whether to group other values in a separate bucket. If you toggle this on, you will need to label the new bucket.

    • Choose whether to show missing values.

  • Advanced

    • You can optionally choose a JSON input. These will be merged with the OpenSearch aggregation definition.

  • Options

    • The variables in the options tab can be used to configure the UI of the visualization itself.

  1. You can also further focus your visualization:

  • Use DQL to search your index data (Image 12). You can also save any queries you write for easy access by clicking on the save icon.

  • Add a filter on any of your fields (Image 13).

  • Update your date filter (Image 14).

  1. Click save when finished with your visualization.

Create a dashboard

Once you have created your visualizations, you can combine them together on one Dashboard for easy access.

You can also create new visualizations from the Dashboard screen.

  1. From the left navigation pane, click on Dashboards (Image 15).

  1. If you have any Dashboards, they will appear on this page. To create a new one, click the Create Dashboard button (Image 16).

  1. The "Editing New Dashboard" screen will appear. Click on Add an Existing object (Image 17).

  1. Select any of the visualizations you created and it will automatically add to your Dashboard (Image 18). Repeat this step for as many visualizations as you'd like to appear.

  1. Click Save to finish (Image 19).

Update your OpenSearch password

This capability was added in Cinchy v5.4.

Your OpenSearch password can be updated in your deployment.json file (you may have renamed this during your original deployment).

  1. Navigate to cluster_component_config > OpenSearch.

  2. OpenSearch has two users that you can configure the passwords for: Admin and Kibana Server. Kibana Server is used for communication between the OpenSearch dashboard and the OpenSearch server. The default password for both is set to "password". To update this, you will need to use a machine with Docker available.

  3. Update your Admin password:

    1. Your password must be hashed. You can do so by running the following command on a machine with docker available, inputting your new password where noted:

    1. Navigate to "opensearch_admin_user_hashed_password" and input your hashed password.

    2. You must also provide your password in a base64 encoded format; encode your cleartext password with a base64 encoder to get this value.

    3. Navigate to "opensearch_admin_user_password_base64" and input your encoded password.

  4. Update your Kibana Server password:

    1. Your password must be hashed. You can do so by running the following command on a machine with docker available, inputting your new password where noted:

    1. Navigate to "opensearch_kibanaserver_user_hashed_password" and input your hashed password.

    2. You must also provide your new password in cleartext. Navigate to "opensearch_kibanaserver_user_password" and input your cleartext password.

  5. Run the below command in the root directory of your devops.automations repo to update your configurations. If you have changed the name of your deployment.json file, make sure to update the command accordingly.

  6. Commit and push your changes.

  7. If your environment isn't set up to automatically apply configuration changes, navigate to the ArgoCD portal and refresh your component(s). If that doesn't work, re-sync.

Configure AWS IAM for Connections

Overview

In Cinchy v5.6, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up AWS IAM role authentication, please review the procedure below.

AWS IAM role authentication

  1. To check that you have an OpenID Connect provider set up with the cluster (the default for deployments made using the Cinchy automation process), run the below command within a terminal:

  1. The output should appear like the below. Make sure to note this down for later use.

  1. Log in to your AWS account and create an IAM Role policy through the AWS UI. Ensure that it has S3 access.

  2. Run the below command in a terminal to create a service account with the role created in the previous step. If your cluster name has a special character, like an underscore, skip to the next section.

Cluster names with special characters

If your cluster name has a special character, like an underscore, you will need to create and apply the YAML. Follow section 1 up until step 4, and then follow the below procedure.

  1. In an IDE (Visual Studio, VsCode), create a new file titled my-service-account.yaml in your working directory. It should contain the below content.

  1. In a terminal, run the below command:

  1. In an IDE (Visual Studio, VsCode), create a new file titled trust-relationship.json in your working directory. It should contain the below content.

For example:

  1. Execute the following command to create the role, referencing the above .json file:

For example:

  1. Execute the following command to attach the IAM policy to your role:

For example:

  1. Execute the following command to annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume:

For example:

  1. Confirm that the role and service account are correctly configured by verifying the output of the following commands:

Authorize for Data Syncs

To ensure that the Connections pod's role has the correct permissions, the role specified by the user in AWS must have its Trusted Relationships configured as such:

Confirmation

To confirm that the Connections app is using the service account:

  1. Navigate to the cinchy.kubernetes repository > connections/kustomization.yaml file

  2. Execute the following:

  1. From a terminal, run the below command:

  1. The output should look like the following:

Monitor

A job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.

Trigger

Conditions that, if met, generate alerts.

Alert

An event associated with a trigger. When an alert is created, the trigger performs actions, which can include sending a notification.

Action

The information that you want the monitor to send out after being triggered. Actions have a destination, a message subject, and a message body.

Destination

A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook.

added your data source as an index pattern
authenticate the Sender
Support Portal
the OpenSearch query DSL
Destination
Monitor Cluster Metrics
Monitor
trigger condition
Recipient(s).
Monitor
trigger condition
Recipient(s).
Monitor
trigger condition
Recipient(s).
Image 1: Click on Alerting
Image 2: Update your destination
Image 3: Manage your Senders
Image 4: Configure your Sender
Image 5: Configure your Recipients
Image 6: Create your Monitor
Image 7: Define your Monitor details
Image 8: Configure your Data Source
Image 9: Define your Query
Image 10: Add a Trigger
Image 11: Define your Trigger.
Image 12: Trigger Conditions
Image 13: Define your Actions
Image 14: Define your Data Source and Query
Image 15: Example monitor
Image 16: Example Trigger
Image 17: Define your Query and Data Source
Image 18: Example Monitor
Image 19: Trigger Conditions
Image 20: Define your Query and Data Source
Image 21: Example Monitor
Image 22: Trigger Conditions
patch: |-
  - op: replace
    path: /data/appsettings.json
    value: wcxJItEmCWQJQPZidpLUuV6Ll79ZUr8BimlMJysLwcxJItEmCWQJQPZidpLUuV6Ll79ZUr8BimlMJysL

Verbose

Verbose is the noisiest level, rarely (if ever) enabled for a production app.

Debug

Debug is used for internal system events that aren't necessarily observable from the outside, but useful when determining how something happened. This is the default setting for Cinchy.

Information

Information events describe things happening in the system that correspond to its responsibilities and functions. Generally these are the observable actions the system can perform.

Warning

When service is degraded, endangered, or may be behaving outside of its expected parameters, Warning level events are used.

Error

When functionality is unavailable or expectations broken, an Error event is used.

Fatal

The most critical level, Fatal events demand immediate attention.

"Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
docker run -it opensearchproject/opensearch /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <<newpassword>>
docker run -it opensearchproject/opensearch /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <<newpassword>>
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
Introducing OpenSearch
General OpenSearch Documentation
Using DQL (Dashboards Query Language)
Troubleshooting and Common Errors
Alerts
Anomaly Detection
Decode the value
deployment installation
update the password as soon as possible.
set it up as an index first.
Use DQL
here
Image 1: Select Stack Management
Image 2: Select Index Patterns
Image 3: Define your sources
Image 4: Configure your index pattern settings
Image 5: Review your Index Patterns
Image 6: Reviewing your Index Pattern fields
Image 7: Click Visualize
Image 8: Click Create New
Image 9: Select your Visualization type
Image 10: Select your Source
Image 11: Creating your Visualization
Image 12: Use a query on your Visualization
Image 13: Add a filter on any of your fields
Image 14: Update your date filter
Image 15: Click Dashboards
Image 16: Click Create Dashboard
Image 17: Click Add An Existing
Image 18: Add as many visualizations as you'd like
Image 19: Click Save.
aws eks describe-cluster --name <CLUSTER_NAME> --query "cluster.identity.oidc.issuer"
https://oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>
eksctl create iamserviceaccount --name my-service-account --namespace default --cluster my-cluster --role-name "my-role" --attach-policy-arn arn:aws:iam::111122223333:policy/my-policy --approve
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
kubectl apply -f my-service-account.yaml -n <namespace>
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::204393242335:oidc-provider/oidc.eks.ca-central-1.amazonaws.com/id/8A024CF24ADD3925BEFA224C4BDD005B"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.ca-central-1.amazonaws.com/id/8A024CF24ADD3925BEFA224C4BDD005B:aud": "sts.amazonaws.com",
          "oidc.eks.ca-central-1.amazonaws.com/id/8A024CF24ADD3925BEFA224C4BDD005B:sub": "system:serviceaccount:devops-aurora-1:connections-serviceaccount-s3"
        }
      }
    }
  ]
}
aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"
aws iam create-role --role-name connections-role-test --assume-role-policy-document file://trust-relationship.json --description "testing sa role for pod"
aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::$account_id:policy/my-policy
aws iam attach-role-policy --role-name connections-role-test --policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess
kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/my-role
kubectl annotate serviceaccount -n devops-aurora-1 connections-serviceaccount-s3 eks.amazonaws.com/role-arn=arn:aws:iam::204393242335:role/connections-role-test
aws iam get-role --role-name my-role --query Role.AssumeRolePolicyDocument
aws iam get-role --role-name connections-role-test --query Role.AssumeRolePolicyDocument
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<role-arn-created-above>",
        "Service": "s3.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
- op: replace
      path: /spec/template/spec/serviceAccountName
      value: connections-serviceaccount-s3
kubectl describe deployment connections-app -n <namespace>
create an IAM Role policy

IIS

This guide serves as a walkthrough of how to deploy v5 on IIS.

Overview

Cinchy version 5 on IIS comes bundled with common components such as Connections, Meta Forms, and the Event Listener. This page details the configuration and deployment instructions for the Cinchy Platform, including SSO.

Prerequisites

System Requirements

  • SQL SERVER 2017+

  • SSMS (optional)

  • Install IIS 7.5+ / enable IIS from Windows features

  • Dotnet 6

DotNet 6 Installation

  • DotNet Core 6 SDK which includes ASP.NET Core /.NET Core Runtime

  • DotNet Core 6 Hosting Bundle

Dotnet 7 isn't supported with Cinchy 5.x

Minimum Hardware Requirements

  • 2 × 2 GHz Processor

  • 8 GB RAM

  • 4 GB Hard Disk storage available

Minimum Database Server Hardware Recommendations

  • 4 × 2 GHz Processor

  • 12 GB RAM

  • Hard disk storage dependent upon use case. Cinchy maintains historical versions of data and performs soft deletes which will add to the storage requirements.

Get Access to Cinchy.net (Cinchy Prod Access)

  • Access to Cinchy.net (Cinchy Prod) can be obtained during onboarding.

  • Alternatively, users can request access by sending an email to [email protected].

Access Cinchy Releases Table from Cinchy UI

Navigate to the Cinchy Releases table from the Cinchy user interface.

Download Release Artifacts

Download the following items from the "Release Artifacts" column:

  • Cinchy VX.X.zip

  • Cinchy Connection

  • Cinchy Event Listener

  • Cinchy Meta-Forms (optional)

  • Cinchy Maintenance CLI (optional)

Create a Database

For more information about creating a database in SQL server, see the Microsoft Create a database page.

  1. On your SQL Server 2017+ instance, create a new database and name it Cinchy.

If you choose an alternate name, use that name in the rest of the instructions instead of Cinchy.

  1. Create a single user account with db_owner privileges for Cinchy to connect to the database. If you choose to use Windows Authentication instead of SQL Server Authentication, the authorized account must be the same account that runs the IIS Application Pool.
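As a rough sketch, the database user can also be created with T-SQL; the login name cinchy and the placeholder password below are examples only:

-- Create a SQL login for Cinchy (SQL Server Authentication) and grant it db_owner on the Cinchy database
CREATE LOGIN [cinchy] WITH PASSWORD = '<strong password>';
USE [Cinchy];
CREATE USER [cinchy] FOR LOGIN [cinchy];
ALTER ROLE [db_owner] ADD MEMBER [cinchy];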

Create an IIS application pool

  1. On the Windows Server machine, launch an instance of PowerShell as Administrator.

  2. Copy and run the PowerShell snippet below to create the application pool and set its priorities. You can also manually create the app pool via the IIS Manager.

  3. Verify the user's membership under Db_name → Security → Users → (select the user) → Properties → Membership.

Import-Module WebAdministration
$applicationPoolNameSSO="CinchySSO"
$applicationPoolNameWeb="CinchyWeb"
New-WebAppPool -Name $applicationPoolNameSSO
$appPath = "IIS:\AppPools\"+ $applicationPoolNameSSO
$appPool = Get-IISAppPool $applicationPoolNameSSO
$appPool.managedRuntimeVersion = ""
Set-ItemProperty -Path $appPath -Name managedRuntimeVersion $appPool.managedRuntimeVersion
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -Name Recycling.periodicRestart.time -Value 0.00:00:00
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -Name ProcessModel.idleTimeout -Value 1.05:00:00
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -Name Recycling.periodicRestart.privateMemory -Value 0
New-WebAppPool -Name $applicationPoolNameWeb
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -Name Recycling.periodicRestart.time -Value 0.00:00:00
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -Name ProcessModel.idleTimeout -Value 1.05:00:00
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -Name Recycling.periodicRestart.privateMemory -Value 0
  1. If you use Windows Authentication in the database or want to run the application under a different user account, execute the commands below to change the application pool identity.

You can also use an alternate name in the application pool.

$credentials = (Get-Credential -Message "Please enter the Login credentials including your Domain Name").GetNetworkCredential()
$userName = $credentials.Domain + '\' + $credentials.UserName
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -name processModel.identityType -Value SpecificUser
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -name processModel.userName -Value $username
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameWeb" -name processModel.password -Value $credentials.Password
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -name processModel.identityType -Value SpecificUser
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -name processModel.userName -Value $username
Set-ItemProperty "IIS:\AppPools\$applicationPoolNameSSO" -name processModel.password -Value $credentials.Password

Create the application directories

  1. Download and unzip the "Cinchy vX.X" application package from the Releases Table. This will create two directories: Cinchy and CinchySSO. For example, if you unzip at the root of your C drive, the two directories will be C:\Cinchy and C:\CinchySSO.

  2. Make sure your application pool accounts have read and execute access to these directories.

  3. Run the below commands in the Administrator instance of PowerShell to create separate directories for Errorlogs and Logs.

md C:\CinchyLogs\Cinchy
md C:\CinchyLogs\CinchySSO
md C:\CinchyErrors

You can create these directories under a single folder as well. For example, md C:\your_folder_name\CinchyLogs\Cinchy. If you do, make sure to replace any related directory references in these instructions with your folder path.

Update the CinchySSO appsettings.json

  1. Open the C:\CinchySSO\appsettings.json file in a text editor and update the values below.

App Settings

  1. Under the AppSettings section, update the values outlined in the table.

Replace <base url> with your chosen protocol and domain. For example, if using HTTPS on app.cinchy.co, substitute <base url> with https://app.cinchy.co. For localhost, use http://localhost/Cinchy.

Parameter
Description
Example

CinchyUri

The base URL appended with /Cinchy.

http://localhost/Cinchy, {base_cinchy_url}/Cinchy

CertificatePath

Path to the CinchySSO v5 folder for the certificate.

C:\\CinchySSO\\cinchyidentitysrv.pfx

StsPublicOriginUri

Base URL of the .well-known discovery.

http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO

StsPrivateOriginUri

Private Base URL of the .well-known discovery.

http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO

CinchyAccessTokenLifetime

Duration for the Cinchy Access Token in v5.4+. Defaults to 7.00:00:00 (7 days).

7.00:00:00

DB Type

Database type. Either PostgreSQL or TSQL.

For SQLSERVER installation:TSQL

SSO installation

For more information on the SSO installation, please see the SSO installation page.

Connection string

To connect the application to the database, you must set the SqlServer value.

  1. Find and update the value under the "ConnectionStrings" section:

    "SqlServer" : ""

SQL Server Authentication example

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True;"

SQL Server Windows Authentication example

"SqlServer" : "Server=MyServer;Database=Cinchy;Trusted_Connection=True;Connection Timeout=30;Min Pool Size=10;"

Serilog

Cinchy has a Serilog property that configures where the logs are located. In the below code, update the following:

  • "Name" must be set to "File" so it writes to a physical file on the disk.

  • Set "path" to the file path to where you want it to log.

  • Replace the "WriteTo" section with the following:

"WriteTo": [
      {
        "Name": "File",
        "Args": {
// For the "path" variable, please refer to the original path in your system where these log folders were created.
          "path": "C:\\CinchyLogs\\CinchySSO\\log.json",
          "preserveLogFilename": true,
          "shared": "true",
          "rollingInterval": "Day",
          "rollOnFileSizeLimit": true,
          "fileSizeLimitBytes": 100000000,
          "retainedFileCountLimit": 30,
          "formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
        }
      }
    ]

Update appsettings.json

  1. Navigate to the installation folder for Cinchy (C:\Cinchy).

  2. Open the appsettings.json file and update the following properties:

AppSettings

Key
Description
Example

StsPrivateAuthorityUri

Match your private Cinchy SSO URL.

http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO

StsPublicAuthorityUri

Match your public Cinchy SSO URL.

http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO

CinchyPrivateUri

Match your private Cinchy URL.

http://localhost/Cinchy, {base_cinchy_url}/Cinchy

CinchyPublicUri

Match your public Cinchy URL.

http://localhost/Cinchy, {base_cinchy_url}/Cinchy

UseHttps

Use HTTPS.

false

DB Type

Database type.

TSQL

MaxRequestBodySize

Introduced in Cinchy v5.4. Sets file upload size for the Files API. Defaults to 1G.

1073741824 // 1g

LogDirectoryPath

Match your Web/IDP logs folder path.

C:\\CinchyLogs\\CinchyWeb

SSOLogPath

Match your SSO log folder path.

C:\\CinchyLogs\\CinchySSO\\log.json

Setup the connection string

To connect the application to the database, the SqlServer value needs to be set.

SQL Server Authentication example

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True;"

SQL Server Windows Authentication example

"SqlServer" : "Server=MyServer;Database=Cinchy;Trusted_Connection=True;Connection Timeout=30;Min Pool Size=10;"

Create the IIS applications

  1. Open an administrator instance of PowerShell.

  2. Execute the below commands to create the IIS applications and enable anonymous authentication. (This is required to allow authentication to be handled by the application).

New-WebApplication -Name Cinchy -Site 'Default Web Site' -PhysicalPath C:\Cinchy -ApplicationPool CinchyWeb
New-WebApplication -Name CinchySSO -Site 'Default Web Site' -PhysicalPath C:\CinchySSO -ApplicationPool CinchySSO
Set-WebConfigurationProperty -Filter "/system.webServer/security/authentication/anonymousAuthentication" -Name Enabled -Value True -PSPath IIS:\ -Location "Default Web Site"

To enable HTTPS, you must load the server certificate and complete the standard IIS configuration at the Web Site level to add the HTTPS binding.
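As a rough sketch only (assuming the WebAdministration module and a server certificate already imported into the Local Machine certificate store), the HTTPS binding can also be added from PowerShell; the thumbprint below is a placeholder:

Import-Module WebAdministration
# Add an HTTPS binding on port 443 to the Default Web Site
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
# Attach your server certificate to the new binding (replace the placeholder with your certificate's thumbprint)
$binding = Get-WebBinding -Name "Default Web Site" -Protocol https
$binding.AddSslCertificate("<certificate thumbprint>", "My")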

Test the application

  1. Access the <base url>/Cinchy (http://app.cinchy.co/Cinchy) through a web browser.

  2. Once the login screen appears, enter the credentials:

    • The default username is admin and the password is cinchy.

    • You will be prompted to change your password the first time you log in.

Next steps

Navigate to the following sub-pages to deploy the following bundled v5 components:

  • Connections Deployment

  • Event Listener/Worker Deployment

  • Maintenance CLI

FAQ

A list of Frequently Asked Questions. Please use the search function or the Table of Contents to find specific queries.

Can I get a record count from a delimited file before running the CLI?

You can use PowerShell to count the lines in a delimited file and, based on the result, decide whether to run the CLI.
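For example, a minimal PowerShell sketch; the file path, the header-row assumption, and the CLI invocation are placeholders:

# Count the data rows in the delimited file (subtracting 1 for the header row)
$recordCount = (Get-Content "C:\Data\myfile.csv" | Measure-Object -Line).Lines - 1
if ($recordCount -gt 0) {
    Write-Output "$recordCount records found - running the sync."
    # <your usual CLI invocation goes here>
} else {
    Write-Output "No records found - skipping the sync."
}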

If you run a CLI without performing the sync, there is currently no way for you to find out how many records will be inserted/updated/deleted.

Can I restore my deleted data?

If the record is still in the table, but has been overwritten by mistake, access your Collaboration Log for the row, and restore back to the correct version.

If your row has been deleted by mistake, access your Recycling Bin, locate the row and restore it.

The only way to truly delete data on the platform is through data erasure and data compaction.

Can I send multiple comma-delimited values to a query parameter? [234,233,365 to be used in WHERE [Id] IN (@param)]

For example: 4,10,15 to be used in WHERE [Id] IN (@param)

This can be done by using parameters in {}, such as {0},{1}.

These will be replaced with the exact text when running the query.

For example: ​ SELECT * FROM [HR].[Employees] WHERE [Deleted] IS NULL AND [Employee ID] IN ({0}) (Image 1).

Cinchy dates are being saved as 1900-01-01 when updating using a variable

When updating a date field using a variable, and no value is entered for that variable, the date field will be 1900-01-01. To avoid this, use a case statement to replace the empty string with NULL, as shown in the following example:
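A minimal sketch of such a query, using the hypothetical [HR].[Employees] table and hypothetical @hiredate and @employeeid parameters:

UPDATE [HR].[Employees]
SET [Hire Date] = CASE WHEN @hiredate = '' THEN NULL ELSE @hiredate END
WHERE [Deleted] IS NULL
  AND [Employee ID] = @employeeid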

Does a row filter restrict access for a Cinchy administrator?

Cinchy Administrators have access to view/edit/approve all data in the platform, so there's no way to restrict access for Cinchy administrators.

A workaround is to create a separate "administrators" group which has edit access to all Cinchy system tables, and just leave the "admin" user account or super admins as "Cinchy administrators."

How can I automatically check if a CLI data sync was successful or failed?

You can check if a data sync was successful by its exit code. Below is sample code in PowerShell to check the exit code and act on what it means.
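A minimal sketch, run in the same PowerShell session immediately after your CLI invocation (the invocation itself is a placeholder, and any non-zero exit code is treated as a failure):

# Run your usual CLI invocation first, for example:
# & <path to the CLI executable> syncdata <your arguments>
# $LASTEXITCODE then holds the exit code of that invocation
if ($LASTEXITCODE -eq 0) {
    Write-Output "Data sync succeeded."
} else {
    Write-Output "Data sync failed with exit code $LASTEXITCODE - check the execution logs."
}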

From the command prompt the following will also return the error code:
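For example, immediately after the CLI has run, the standard Windows environment variable holds the exit code:

echo %ERRORLEVEL%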

How can I check for platform errors?

In Version 4:

First check <base URL>/elmah, which stores web-related induced errors.

Then check the logs, which can be accessed from <base URL>/admin/index.

Cinchy logs will contain all exceptions thrown by the Cinchy Web application. This includes failed queries, stack overflows and much more.

CinchySSO logs will contain IDP errors.

In Version 5: Errors and logs can be found through your chosen logging application, such as OpenSearch Dashboards.

How can I enter a new line into a field in Manage Data?

You can add line breaks in a cell on the UI, the same way as in Excel, by typing Alt+Enter. If you use the expanded row heights option, or manually expand the row, it will show the line breaks.

How can I prevent wrong data loading in from external applications?

The best way to load data from external sources into Cinchy is by using a data sync.

You can do the following to preview your changes:

  • Create staging tables to validate the data first.

  • Use formatting rules in Cinchy, to highlight data that's not valid.

  • Configure a CLI using a Cinchy Query source to move the data from the staging tables to the permanent tables.

How can I see who has modified my data?

Right click on the row you want additional information about and select the Collaboration Log.

You can also add the "Modified By" and "Modified" columns into the current view/to your query if you want to see it for multiple rows at once.

How do I create a Cinchy user with a set password?

One Time setup:

  1. Open the Users table

  2. For the password of this user, copy the admin user password and paste it into the Password field of the new user.

  3. Set the Password Expiration Timestamp to today.

  4. In an Incognito browser, navigate to the Cinchy website.

  5. Sign in as the new user with the admin user password.

  6. Cinchy will ask you to change the password for the new user, change it to a default password you will give out every time you create an account.

  7. In the original session window, refresh the Users table and remove the Password Expiration Timestamp for the new user.

Each time, for new users:

  1. Open the Users table.

  2. Create the new user, for example sandip.

  3. For the password of this user, copy the new user password and paste it into the Password field of sandip.

  4. Set the Password Expiration Timestamp to today.

  5. Give the user their username and friendly password created in step 7 above. They will be asked to change their password on first sign in.

How do I get the change history through CQL?

You write the query for the records for which you want the change history, including system columns like [Version] and [Created], and the columns for which you'd like to see the changes.

You can add an ORDER BY [Version] (either ASC or DESC)

Then you change the query return type to "Query Results (Including Version History)".
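A minimal sketch of such a query, reusing the hypothetical [HR].[Employees] table from the earlier example (the column names are placeholders):

SELECT [Employee ID], [Version], [Created], [Modified By]
FROM [HR].[Employees]
WHERE [Deleted] IS NULL
  AND [Employee ID] = @employeeid
ORDER BY [Version] ASC

With the return type set to Query Results (Including Version History), every version of each matching row is returned rather than only the current one.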

A query with this return type against the relevant Cinchy system table can also show, for example, when the Cinchy instance was upgraded.

How do I insert, update, and delete links in a multi-select field using CQL?

Removing and updating a multi-select link is the same as setting the link field. The field needs to be updated with the list of values.

The value is a concatenated string of '[Cinchy Id],[Version],[Cinchy Id],[Version],[Cinchy Id],[Version]' from the lookup values
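As a minimal sketch, assuming a hypothetical [HR].[Employees] table with a multi-select link column named [Multi-Link Field Name], and target link rows with Cinchy Ids 1, 2, and 3, each at version 1:

UPDATE [HR].[Employees]
SET [Multi-Link Field Name] = '1,1,2,1,3,1'
WHERE [Deleted] IS NULL
  AND [Employee ID] = @employeeid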

In this example it would set [Multi-Link Field Name] to values with [Cinchy Id] 1, 2, and 3. The version after each Cinchy Id should be 1.

You must provide the full list of multi-select values. If your field was '1,2' and you update it with '3,1' it will end up as '3', not '1,2,3'.

How do I know which version of Cinchy I am running?

Navigate to <baseURL>/healthcheck

(ex. if your current URL is https://cinchy.mycompany.com/Tables/123?viewId=0 then you would navigate to https://cinchy.mycompany.com/healthcheck)

The response will include your Cinchy version. In this example, the Cinchy version is 4.14.0.0.

If you would like to use the health check link for monitoring of the Cinchy application, you can add "return503OnFailure=true" to the URL.

How do I map a parameter's value to one of my target columns?

Use the model loader to load it back in the system (/apps/modelloader).

You create a calculated column in the source and give it the value of the parameter.

For each table, export and import the data via the UI.

Then map the calculated source column to the target. The order of the columns in the source is important. If your source is a file, put the calculated columns at the end in the source, after all the actual columns in the file.

How do I parse a pipe delimited file using the CLI?

Set the delimiter to |.
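For example, in the delimited data source element of the sync configuration:

<DelimitedDataSource delimiter="|" textQualifier="&quot;" headerRowsToIgnore="2" path="@filePath" encoding="UTF8">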

How do I remove the leading 0 from an incoming field using the CLI?

This can be done by using Transformations in the sync configuration of a column. Here is an example:
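<Column name="Value 2" dataType="Text">
  <Transformations>
    <StringReplacement pattern="^0*" replacement="" />
  </Transformations>
</Column>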

The pattern contains a regular expression:

^ - anchor for the beginning of the string

0 - the string to replace

* - quantifier to be applied to 0 or more occurrences

How do I clone a Cinchy table?

  1. Export the Model to XML from the Design Table info tab

  2. Open the exported model in an editor and change the name of the model

  3. Change the name of the table

  4. remove the guids from the table in the model and save the file

  5. Use the modelloader at <cinchy base URL>/apps/modelloader to upload the modified model

  6. Export the data from the Manage Data screen of the initial table and import it in the new table

How do I clone a domain?

If you just have a group of tables, see the instructions below. If you have tables, queries, want to port the permissions, etc., you can use the Data Experience Deployment utility (see Data Experience Deployment - Cinchy Platform Documentation).

Table only instructions:

1. Create a dummy Data Experience and add all your tables from your domain to it (Image 2).

2. Hit the following endpoint with the GUID in your row:

<CinchyURL>/api/createdxversion?guid=<GUID>

I am unable to use COALESCE with a link column in a calculated column

If [Person 1] and [Person 2] are Link columns and [Member] is a Text column, a calculated column with the following expression will fail to save:

COALESCE([Person 1],[Person 2],[Member])

Please cast the link columns to VARCHAR:

COALESCE(CAST([Person 1] AS VARCHAR(50)),CAST([Person 2] AS VARCHAR(50)),[Member])

I can't disable Change Approvals

This is caused by records in Draft status. To retrieve these records, run a query with return type Query Results (Including Draft Data).
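For example:

SELECT T.*
FROM [Your Domain].[Your Table] T
WHERE T.[Approval State] <> 'Approved'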

After approving these records, you will be able to disable change approval.

You may have to restore cancelled records, approve them, and delete them so that everything is approved.

I can't find the [Cinchy].[Table Access Control] table

The [Cinchy].[Table Access Control] table doesn't show in the Market Place, but you can query for the data in the table.
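For example:

SELECT *
FROM [Cinchy].[Table Access Control]
WHERE [Deleted] IS NULL AND [Table]='HR.Employees'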

I can't find the column I want to link to even though the column is present in the table


Columns don't "Allow Linking" by default. Check the properties of the column in the original table and make sure that in “Show Advanced” the “Allow Linking” checkmark box is selected. If you don't have Design Table access to that table, you will need to ask someone who does to do it.

I have access to a table but I can't see any rows

You may not be able to see any rows because of the following reasons:

  • View Filter

  • Data access controls

  • Error with the View or Table

View Filter

Check the All Data view and see if there is data there, if that's the case but a particular view has no rows, there could be a filter on the view. For example, if there is a "Due Soon" or "My Actions" view, it could just be that there are no records assigned to you that require action.

Data Access Controls

Access controls set on the table could mean you have access to 0 records. Since you can set row-level filters in Cinchy, it may be the case that the permissions of the table haven't changed, but the data has changed such that you no longer have permission, or vice versa.

Error

There may be an error on the view. If the bottom of the page doesn't show 0 records then there may be an error on the page (Image 3).

Is it possible to correct/replace a table or column's GUID?

It can be done. It's unlikely that the GUID you want to change to is already allocated, but you should still check. Filter the [Cinchy].[Table Columns] for the new GUID. You shouldn't find it. Then replace it in two places:

  • the JSON field in [Cinchy].[Tables] - replace it in the column definition

  • the GUID field in [Cinchy].[Table Columns]

To replace the table GUID, replace it in the JSON in [Cinchy].[Tables] and in the GUID field in [Cinchy].[Tables].

When you are done, restart the Cinchy UI.

My Insert/Update statement is making multiple changes instead of just one

A query like the following will cause multiple inserts instead of one if your result type is set to Query Results instead of # of Rows Affected.
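INSERT INTO [Customer].[Tickets] ( [Ticket Id], [Subject] )
VALUES ( 1900, 'This is a Test' );
SELECT [Cinchy Id],
       [Ticket Id],
       [Subject]
FROM [Customer].[Tickets]
WHERE [Deleted] IS NULL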

The same applies to UPDATE statements.

If you need to perform inserts and updates in a query and want to return data at the end, another option is to use the "Single value (First Column of First Row)" return type, which will only be able to return a single value.

My query parameter isn't working

When I pass a value to the following query, the result is empty.
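DECLARE @nbdays AS INT;
SELECT @nbdays;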

The query works without the DECLARE statement. When the DECLARE statement is present, the input value is ignored and the variable needs to be SET. To still get the value from the input, a second variable is needed.
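For example, where @inputDays is the parameter supplied by the caller:

DECLARE @nbdays AS INT;
SET @nbdays = @inputDays;
SELECT @nbdays;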

Null values aren't updating correctly in Salesforce using the data sync

When performing a data sync with a Salesforce target, you need to replace nulls with '#N/A' in the source. You can use ISNULL([Column],'#N/A') in the source query. The following is a link to the Salesforce documentation related to this topic: https://help.salesforce.com/articleView?id=000328822&language=en_US&type=1&mode=1

Passing Parameters to a query called With Exec

Declare and set the parameters before invoking the query:
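DECLARE @dep AS VARCHAR ( 500 );
SET @dep = 'Accounting';
EXEC [HR].[Employees and Departments]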

[HR].[Employees and Departments] is:
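SELECT [Employee ID],
       [Full Name],
       [Date Hired],
       [Department]
FROM [HR].[Employees]
WHERE [Deleted] IS NULL
      AND [Department] = @dep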

Some of the columns have been rearranged under the default "All Data" view

The default All Data view displays the columns in the same order as in Design Table. But you can create a view and change the columns displayed and their order.

The multi-select option on the Link Column is disabled

Once a link column is added to a table and saved, the multi-select checkbox is disabled. If you need to change the option, you need to rename the column and create a new link column with the correct option.

What permissions are needed for a user to be able to create and edit views?

The user needs to have "Design Table" permissions granted for the table where they will create or edit views and also needs to have the "Can Design Tables" checked in the [Cinchy].[Users] table.

Columns

This page guides you through the various column types available on Cinchy.

System Columns

Cinchy has system columns used to perform various functionality. These columns can't be modified directly by a user.

You can't create a new column with the same name as a system column.

Cinchy Id

Cinchy Id is a unique identifier assigned automatically to all records within a normal table. The Cinchy Id is associated with the record permanently and is never reassigned even if the record is deleted.

Version & Draft Version

Version and Draft Version are used to track changes and versions.

Without Change Approvals Enabled

Any change made to a record increments the Version. Draft Version is always 0.

With Change Approvals Enabled

Any data approval increments the Version and resets the Draft Version to 0. Any proposed change increments the Draft Version.

Approval State

This is a legacy column. It's always blank.

Created By & Created

Created By is a linked column to the [Cinchy].[Users] table, of the user who created the record.

Created is the time when the record was created, per the logged-in user's timezone.

Without Change Approvals Enabled

Created By and Created will be the same for all records with the same Cinchy Id.

With Change Approvals Enabled

Created By and Created is based on the first user to make changes on an approved record.

Modified By and Modified

Modified By is a linked column to the [Cinchy].[Users] table, of the user who last modified the record.

Without Change Approvals Enabled

The last user to modify the record, and when it happened, per the logged-in user's timezone.

With Change Approvals Enabled

The last user to either modify the record (Draft Version != 0) or approve the record (Draft Version = 0). The timestamp for when that version was generated.

Deleted By and Deleted

If a record is deleted, it will show up in the Recycle Bin.

Without Change Approvals Enabled

A deleted record will have Deleted By and Deleted filled in, with the timezone set to the logged-in user's.

With Change Approvals Enabled

Deleted By and Deleted are based on the user/time when the Delete Request was created, per the logged-in user's timezone, not when it was approved.

Replaced

  • Cinchy always has one latest/up to date record at a time. Anytime changes are made to a record, a new version (normal or draft) is created, and the previous version is updated with a Replaced timestamp.

  • Any record where Replaced is empty is the current version of that record.

Common Fields

Name

Each column must have a unique name. They must also not conflict with system columns (even if you aren't using Change Approvals on the table).

Data security classification

Each column has a data security classification. This defaults to blank, and can be set to one of 4 pre-configured settings (Public, Internal, Restricted, Confidential) or additional options can be created in the [Cinchy].[Data Security Classifications] table by an administrator.

Currently there is no functionality tied directly to Data Security Classification - the tagging is just for internal auditing purposes. Future security settings will be tied to Data Security Classifications, rather than simply done at a column level.

  • Public: This type of data is accessible to all employees and company personnel. It can be used, reused, and redistributed without repercussions. An example might be job descriptions, press releases or links to articles.

  • Internal: This type of data is strictly accessible to internal company personnel or employees who are granted access. This might include internal-only memos, business performance, customer surveys or website analytics.

  • Confidential: Often, access to confidential data requires additional authorization and explanation of why access to the data is needed. Examples of confidential data include social security numbers, credit card details, phone numbers or medical records. Depending on the industry, confidential data is protected by laws like GDPR, HIPAA, CASL and others.

  • Restricted: Restricted data is the most sensitive data, so you would have to treat it extra carefully. If compromised or accessed without authorization, it could lead to criminal charges, massive legal fines, or cause irreparable damage to the company. Examples include intellectual property, proprietary information or data protected by state and federal regulations.

Description

Each column can optionally have a description. The description is displayed when you hover on the column header in Data Management.

GUID

A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique identification. Every column in Cinchy is automatically assigned one. For more information, see the Table and column GUID page.

Be careful when editing a GUID, as you can have unintended consequences.

Common Parameters

Add to Default View

Checked by default. After saving your changes, this will add the column to be displayed in the default view (All Data by default). Generally it makes sense to leave this checked, since there should be a view where all columns are displayed.

If you need to hide a column from certain users or groups you can do so in table controls. It's usually best to have a view where all table columns are displayed.

Mandatory

Makes the column a mandatory field. You won't be able to save or alter a record in a state where a mandatory field is blank.

Unique

Requires all values in the column to be unique. Adding a new record or modifying a previous record into a state where it's a duplicate of another record will cause an error and can't be saved.

If you need uniqueness across multiple columns instead, you can create an index in Design Table, add those columns and set the index to unique. If it needs to be more complicated, you can also create a calculated column and set that column to unique. For example, First Name doesn't need to be unique, but First Name + Last Name needs to be unique.

Multi-Select

Some fields can also be set to multi-select.

For example, the column Players in [Football].[Teams] can be a multi-select field since each team will have multiple players.

Allow Linking

Checked by default. This allows other tables to use the column as a link/relationship.

See Linking data to get more context on how they're used.

You want to pick identifying columns for linking, such as IDs or Name. Generally you want to use unique columns, but in some cases it's a better user experience to pick an almost unique field for readability.

For example, Full name may not be unique, but it's much easier to understand than Employee ID.

Allow Display in Linked Views

Checked by default. Some columns may not make sense for linking but can be useful to display when someone is choosing an option.

See Linking Data to get more context and tips.

Encrypt

If Data At Rest Encryption is enabled, you will see the option of Encrypt for columns. If this is checked, the column will be encrypted within the database. This is useful for hiding sensitive information so that people with direct access to the database don't see these fields.

Selecting encryption makes no difference to the user experience within the Cinchy platform. The data is displayed in plain text on the UI or via the query APIs.

Regular Columns

Text

Text columns have a maximum length, set to 500 by default.

These are equivalent to VARCHAR(n) data type in SQL.

Number

You can choose from 3 display formats for number - regular, currency, and percentage. You can also decide how many decimal places to display (0 by default). Note that these are both display settings, and won't affect how the number is stored.

These are equivalent to FLOAT(53) data type in SQL (default, 8-byte field).

Date

Cinchy has several Date column type display format options available:

  • MMM DD, YYYY (Oct 31, 2016)

  • YY-MM-DD (16-10-31)

  • DD-MM-YYYY (31-10-2016)

  • DD-MMM-YY (31-Oct-16)

  • Custom Format

The "Default Value" field isn't mandatory and should be left blank (best practice). However, if populated you won't be able to clear the default date value provided to a "blank" data (no date). You will only be able to overwrite it with another date value.

These are equivalent to DATE() data type in SQL.

Yes/No

You must select a default value of yes (1) or no (0) for yes/no fields.

These are equivalent to the bit data type in SQL.

Choice

You can create a choice column (single or multi-select) in Cinchy. In this scenario, you specify all your choices (1 per newline) in the table design. A user will only be able to select from the options provided.

Calculated Columns

A calculated column uses values from other fields in the record for its evaluation. These columns also have a specified result type, which dictates the format of the calculated output.

Example:

A [Full Name] column can be calculated as CONCAT([First Name], ' ', [Last Name]).

These columns are similar to computed columns in SQL databases.

Live vs Cached Calculated Columns

Choose Your Calculation Type

When creating a calculated column, you have two types to choose from: cached and live. This feature is accessible via the Advanced Settings and was a part of the 4.0 version update.

Cached Calculated Columns

  • What It Does: Speeds up data retrieval.

  • How It's Stored: As an actual column based on a CQL formula.

  • When It Updates: Updates only if the data in the same row changes.

Example:

Changing a name in a single row only triggers a recalculation for that row's "Label" column.

Limitations

If a cached column relies on a column from another table, changes in the other table's column won't automatically update the cached column. Make sure to account for this when using cached columns that depend on external data.

Live Calculated Columns

  • What It Does: A live calculated column is a non-cached calculated column that provides real-time data.

  • How It's Stored: As a formula executed on-the-fly during read or query.

  • When It Updates: Refreshes automatically upon every query or screen refresh.

  • When to use:

    • Your calculated column depends on a value from a linked table and you need the latest value from the linked table.

    • Your table doesn't contain many records.

Example:

A live "Label" column will update instantly if any referenced data changes, affecting all rows and tables.

Limitations

  • Live columns consume more system resources.

  • Using user-defined functions in live calculated columns can cause errors if they reference other live calculated columns. Only use inbuilt functions in live columns if they reference other live columns.

Geospatial Columns

If you created a spatial table, you will have access to the geography and geometry column types. These columns also have the option to be indexed via Index in the advanced settings on the column itself.

Geometry

In the UI, this takes a well-known text (WKT) representation of a geometry object. You can modify or paste the WKT representation directly in the editor on the UI. Geometric functions can be performed on this column through CQL and calculated columns.

Geography

In the UI, this takes a well-known text (WKT) representation of a geography object. You can modify or paste the WKT representation directly in the editor on the UI. Geographic functions can be performed on this column through CQL and calculated columns.
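As a minimal illustration (the coordinates here are arbitrary), a WKT value you could paste into a geometry or geography cell looks like this:

POINT(-79.3832 43.6532)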

Link Columns

Link columns allow you to establish inherent relationships with other records in other tables. See Linking Data for more details.

Hierarchy Columns

Hierarchy columns are link columns referencing the current table. For example, the below table contains a list of documentation pages, some of which also have sub-level pages (or even sub-sub-level pages). Using a Hierarchy Column shows the relationships between data.

Example 1: API Overview is the parent page. It has four sub-pages: API Authentication, API Saved Queries, ExecuteCQL, and Webhook Ingestion. Clicking on any of the links within the Sub-Pages column would return you to the row for that specific data set.

Example 2: Builder Guides is the parent page. It has five sub-pages: Best Practices, Creating Tables, Saved Queries, Integration Guides, and Cinchy DXD Utility. In this example, we also have another level of hierarchy, wherein Best Practices is also a parent page, and Multilingual Support is its sub-page.

Another common use of Hierarchy columns is to show Manager/Employee relationships.

v5.7 (IIS)

Upgrading on IIS

The following process can be run when upgrading any v5.x instance to v5.7 on IIS.

Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2.

If you are upgrading from Cinchy v5.3 or lower to v5.5+ on an SQL Server Database, you will need to make a change to your connectionString in your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

Ex:
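"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"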

Warning: If you are upgrading from Cinchy v5.4 or lower to Cinchy v5.7, you must first run a mandatory process (Upgrade 5.5) using the Cinchy Utility and deploy version 5.5.

The upgrade of any version to Cinchy v5.7 requires changes to be made to various App Setting files. See section 1.2, step 3, for further details.

Prerequisites

  1. Take a backup of your database.

  2. Extract the new build for the version you wish to upgrade to.

Upgrade process

  1. Merge the following configs with your current instance configs:

    • Cinchy/web.config

    • Cinchy/appsettings.json

    • CinchySSO/appsettings.json

    • CinchySSO/web.config

  2. If you are upgrading to 5.7 on an SQL Server Database and didn't do so in any previous updates, you will need to make a change to your connectionString in both your SSO and Cinchy appsettings.json. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.

    Ex:
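"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"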

  3. When upgrading to 5.7, you are required to make the following changes to various appsettings.json files:

CinchySSO\appsettings.json

Navigate to your CinchySSO\appsettings.json file and make the following changes:

  • ADD the following value:

    • "StsPrivateOriginUri" - This should be the private base URL used by the .well-known discovery. If left blank will match the request URL. /cinchysso

Cinchy\appsettings.json

Navigate to your Cinchy\appsettings.json file and make the following changes:

  • REMOVE the following values:

    • "StsAuthorityUri"

    • "RequireHttpsMetadata"

  • ADD the following values:

    • "StsPrivateAuthorityUri" - This should match your private Cinchy SSO URL.

    • "StsPublicAuthorityUri" - This should match your public Cinchy SSO URL.

    • "CinchyPrivateUri" - This should match your private Cinchy URL.

    • "CinchyPublicUri" - This should match your public Cinchy URL.

Worker Directory appsettings.json

Navigate to your appsettings.json file within your Cinchy Worker directory and make the following changes:

  • ADD a new section titled CinchyClientSettings, following the below code snippet as a guide:

  • REMOVE the following:

    • "AuthServiceDomain"

    • "UseHttps"

Event Listener Directory appsettings.json

Navigate to your appsettings.json file within your Cinchy Listener directory and make the following changes:

  • ADD a new section titled CinchyClientSettings, following the below code snippet as a guide:

  • REMOVE the following:

    • "StateFileLocation"

    • "Path"

  4. Execute the following command to stop IIS:
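iisreset -stop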

  5. Replace the Cinchy and CinchySSO folders with the new build and your merged configs.

  6. Execute the following command to start IIS again:
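iisreset -start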

  7. Open your Cinchy URL in your browser.

  8. Ensure you can log in.

If you encounter an error during this process, restore your database backup and contact Cinchy Support.

"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
"SqlServer" : "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=password;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
    "AppSettings": {
      "CinchyUri": "http://localhost",
      "CertificatePath": "C:\\inetpub\\wwwroot\\cinchysso\\cinchyidentitysrv.pfx",
      "CertificatePassword": "",
      "SAMLClientEntityId": "",
      "SAMLIDPEntityId": "",
      "SAMLMetadataXmlPath": "",
      "SAMLSSOServiceURL": "",
      "SAMLEncryptedCertificatePath": "",
      "SAMLEncryptedCertificatePassword": "",
      "SAMLSignCertificatePath": "",
      "SAMLSignCertificatePassword": "",
      "HstsMaxAge": 2592000,
      "HstsIncludeSubDomains": false,
      "HstsPreload": false,
      "SAMLSignCertificateMinAlgorithm": "",
      "SAMLSignCertificateSigningBehaviour": "",
      "AcsURLModule": "",
      "StsPublicOriginUri": "",
      // Add in the below "StsPrivateOriginUri".
      //This should be the private base URL used by the .well-known discovery.
      // If left blank will match the request URL. /cinchysso
      "StsPrivateOriginUri": "",
      "MaxRequestHeadersTotalSize": 65536,
      "MaxRequestBufferSize": 65536,
      "MaxRequestBodySize": -1,
      "MachineKeyXml": "",
      "DpApiKeyRingPath": "",
      "TlsVersion": "",
      "CinchyAccessTokenLifetime": "7.00:00:00",
      "DataChangeCallbackTimeout": 7,
      "RefreshCacheTimeInMin": 10,
      "DefaultExpirationCacheTimeInMin": 360,
      "DBType": "PostgreSQL"
    "AppSettings": {
    // Add the below "StsPrivateAuthorityUri" value.
    // This should match your private Cinchy SSO URL.
      "StsPrivateAuthorityUri": "",
   // Add the below "StsPublicAuthorityUri" value.
   // This should match your public Cinchy SSO URL.
      "StsPublicAuthorityUri": "",
   // Add the below "CinchyPrivateUri" value.
   // This should match your private Cinchy URL.
      "CinchyPrivateUri": "",
   // Add the below "CinchyPublicUri" value.
   // This should match your public Cinchy URL.
      "CinchyPublicUri": "",
      "AllowLogFileDownload": false,
      "LogDirectoryPath": "C:\\CinchyLogs\\CinchyWeb",
      "SSOLogPath": "C:\\CinchyLogs\\CinchySSO\\log.json",
      "UseHttps": true,
      "HstsMaxAge": 2592000,
      "HstsIncludeSubDomains": false,
      "HstsPreload": false,
      "TlsVersion": "",
      "RouteDebuggerEnabled": false,
      "RefreshCacheTimeInMin": 10,
      "DefaultExpirationCacheTimeInMin": 360,
      "DBType": "PostgreSQL",
      "StorageType": "Local", // Local | S3 | AzureBlobStorage
      "MaxRequestBodySize": 1073741824 // 1gb
    },
{
  "CinchyClientSettings": {
    "Url": "",      // Cinchy Url
    "Username": "", // For Cinchy v4 only, remove otherwise
    "Password": ""  // For Cinchy v5, this should be the password for the user [email protected]. For v4 this will be the desired user's password.
  },
  "CinchyClientSettings": {
    "Url": "", // Cinchy Url
    "Username": "", // For Cinchy v4, remove otherwise
    "Password": "" // For Cinchy v5, this should be the password for the user [email protected]. For v4 this will be the desired user's password.
  }
iisreset -stop
iisreset -start
using the Cinchy Utility
TrustServerCertificate=True
using the Cinchy Utility
new build
Download .NET 6.0
TrustServerCertificate=True
$row_count=(get-content sample_150k.csv).length
write-host $row_count

If ($row_count -lt 50000)
{
     exit
}
else {
      Write-host "run CLI"
}
UPDATE E
SET E.[Date Hired]=CASE WHEN @dhired<>'' THEN @dhired ELSE NULL END
FROM [HR].[Employees] E
WHERE E.[Deleted] IS NULL AND [Employee ID]=@empid
Invoke-Expression $CLICommand
switch ($LASTEXITCODE) {
  0 { Write-Host "Completed without errors" }
  1 { Write-Host "Execution failed" }
  2 { Write-Host "Completed with validation errors" }
}
echo %ErrorLevel%
SELECT [Version], [Modified], [Model Version]
FROM [Cinchy].[Models]
WHERE [Deleted] IS NULL AND [Name]='Cinchy'
ORDER BY [Version] DESC
UPDATE T
SET T.[Multi-Link Field Name] = '1,1,2,1,3,1'
FROM [Domain].[Table Name] T
WHERE T.[Deleted] IS NULL AND ...
{
  "component": "Cinchy",
  "version": "4.14.0.0",
  "ipAddress": ["172.31.14.171", "172.19.64.1"],
  "systemTime": "2020-06-18T19:43:54.1692859Z",
  "status": "Green",
  "healthChecks": [
    {
      "name": "Database Connectivity",
      "description": "Validates that the application can connect to the database",
      "status": "Green"
    }
  ]
}
<Parameters>
    <Parameter name="snapshotDate" />
</Parameters>
...
<Schema>
...
    <CalculatedColumn name="Snapshot Date" formula="@snapshotDate" dataType="Date" />
<Schema>
<DelimitedDataSource delimiter="|" textQualifier="&quot;"  headerRowsToIgnore="2" path="@filePath" encoding="UTF8">
<Column name="Value 2" dataType="Text">
  <Transformations>
    <StringReplacement pattern="^0*" replacement="" />
  </Transformations>
</Column>
SELECT T.*
FROM [Your Domain].[Your Table] T
WHERE T.[Approval State] <> 'Approved'
SELECT *
FROM [Cinchy].[Table Access Control]
WHERE [Deleted] IS NULL AND [Table]='HR.Employees'
INSERT INTO [Customer].[Tickets] ( [Ticket Id], [Subject] )
VALUES ( 1900, 'This is a Test' );
SELECT [Cinchy Id],
       [Ticket Id],
       [Subject]
FROM [Customer].[Tickets]
WHERE [Deleted] IS NULL
DECLARE @nbdays AS INT;
SELECT @nbdays;
DECLARE @nbdays AS INT;
SET @nbdays = @inputDays;
SELECT @nbdays;
DECLARE @dep AS VARCHAR ( 500 );
SET @dep = 'Accounting';
EXEC [HR].[Employees and Departments]
SELECT [Employee ID],
       [Full Name],
       [Date Hired],
       [Department]
FROM [HR].[Employees]
WHERE [Deleted] IS NULL
      AND [Department] = @dep
Data Erasure
Data Compression.
OpenSearch Dashboard.
data sync.
Data Experience Deployment - Cinchy Platform Documentation
https://help.salesforce.com/articleView?id=000328822&language=en_US&type=1&mode=1
Image 1: Sending multiple comma-delimited values to a query parameter
Image 2: How to clone a domain
Image 3: 0 records

AD Group Integration

This page contains information on how to leverage Active Directory groups within Cinchy.

Group management

This section defines how to manage Groups.

Cinchy Groups

Cinchy Groups are containers that have Users and other Groups within them as members. Use Groups to provision access controls throughout the platform. Cinchy Groups enable centralized administration for access controls.

Groups are defined in the Groups table within the Cinchy domain. By default, only members of the Cinchy Administrators group can manage this table. Each group has the following attributes:

Attribute
Definition

Name

The Group Name. This must be unique across all groups within the system.

Users

The Users which are members of the group

User Groups

The Groups which are members of the group

Owners

Users who are able to administer memberships to the group. By default, Owners are also members of the group and thus don't need to also be added into the Users category.

Owner Groups

Groups whose members are able to administer the membership of the group. By default, members of Owner Groups are also members of the group itself, and thus don't need to also be added into the User or User Groups category.

Group Type

This will be either "Cinchy Group" or "AD Group". "Cinchy Group": The membership is maintained directly in Cinchy. "AD Group": A sync process will be leveraged to maintain the membership and overwrite the Users.

Define a new AD Group

  1. To define a new AD Group, create a new record within the Groups Table with the same name as the AD Group (using the cn attribute).

  2. Set the Group Type to AD Group.

Convert an existing Group to sync with AD

  1. To convert an existing group, update the Name attribute of the existing group record to match the AD Group (using the cn attribute).

  2. Set the Group Type to AD Group.

Group membership sync

AD Groups defined in Cinchy have their members synced from AD through a batch process that leverages the Cinchy Command Line Interface (CLI).

Execution flow

The sync operation performs the following high-level steps:

  1. Fetches all Cinchy registered AD Groups using a Saved Query.

  2. Retrieves the usernames of all members of each AD Group. The default attribute retrieved for the username is userPrincipalName, but this is configurable as part of the sync process.

  3. For each AD Group, it loads the users that are both a member in AD and exist in the Cinchy Users table (matched on the Username) into the "Users" attribute of the Cinchy Groups table.

Dependencies

  1. You must install the Cinchy CLI Model in your instance of Cinchy. See the CLI installation page for more details.

  2. An instance of the Cinchy CLI must be available to execute the sync.

  3. You must have a task scheduler to perform the sync on a regular basis (For example, AutoSys).

Configuration steps

Create a saved Query to retrieve AD Groups from Cinchy

  1. Create a new query within Cinchy with the below CQL to fetch all AD Groups from the Groups table. The domain and name assigned to the query will be referenced in the next step.

SavedQuery
SELECT [Name]
FROM [Cinchy].[Cinchy].[Groups]
WHERE [Group Type] = 'AD Group'
AND [Deleted] IS NULL

Create the sync config

  1. Copy the below XML into a text editor of your choice and update the attributes listed in the table below the XML to align to your environment specific settings.

  2. Create an entry with the config in your Data Sync Configurations table (part of the Cinchy CLI model).

ConfigXML
<?xml version="1.0" encoding="utf-16" ?>
<BatchDataSyncConfig name="AD_GROUP_SYNC" version="1.0.0" xmlns="http://www.cinchy.co">
  <Parameters />
  <LDAPDataSource objectCategory="group" ldapserver="LDAP:\\activedirectoryserver.domain.com" username="encryptedUsername" password="encryptedPassword" >
    <Schema>
      <Column name="cn" ordinal="1" dataType="Text" maxLength="5000" isMandatory="false" validateData="false" trimWhitespace="true" description=""/>
      <Column name="member.userPrincipalName" ordinal="2" dataType="Text" maxLength="200" isMandatory="false" validateData="false" trimWhitespace="true" description=""/>
    </Schema>
    <Filter>
      lookup('Domain Name','Query Name')
    </Filter>
  </LDAPDataSource>
  <CinchyTableTarget model="" domain="Cinchy" table="Groups" suppressDuplicateErrors="false">
    <ColumnMappings>
      <ColumnMapping sourceColumn="cn" targetColumn="name" />
      <ColumnMapping sourceColumn="member.userPrincipalName" targetColumn="Users" linkColumn="Username" />
    </ColumnMappings>
    <SyncKey>
      <SyncKeyColumnReference name="name" />
    </SyncKey>
	<ChangedRecordBehaviour type="UPDATE" />
    <DroppedRecordBehaviour type="IGNORE" />
  </CinchyTableTarget>
</BatchDataSyncConfig>
XML Tag
Attribute
Content

LDAPDataSource

ldapserver

The LDAP server URL

LDAP:\activedirectoryserver.domain.com

LDAPDataSource

username

The encrypted username to authenticate with the AD server

(generated using the CLI's encrypt command)

dotnet Cinchy.CLI.dll encrypt -t "Domain/username"

LDAPDataSource

password

The encrypted password to authenticate with the AD server

(generated using the CLI's encrypt command)

dotnet Cinchy.CLI.dll encrypt -t "password".

LDAPDataSource -> Filter

Domain Name

The domain of the Saved Query that retrieves AD Groups

LDAPDataSource -> Filter

Query Name

The name of the Saved Query that retrieves AD Groups

If the userPrincipalName attribute in Active Directory doesn't match what you expect to have as the Username in the Cinchy Users table (for example, if the SAML token as part of your SSO integration returns a different ID), then you must replace userPrincipalName in the XML config with the expected attribute.

The userPrincipalName appears twice in the XML, once in the LDAPDataSource Columns and once in the CinchyTableTarget ColumnMappings.

Sync execution & scheduling

  1. The below CLI command (see here for additional information on the syncdata command) should be used to execute the sync.

  2. Update the command parameters (described in the table below) with your environment specific settings.

  3. Execution of this command can be scheduled at your desired frequency using your scheduler of choice.

dotnet Cinchy.CLI.dll syncdata -s cinchyAppServer -u username -p "encryptedPassword" -m "Model" -f "AD_GROUP_SYNC" -d "TempDirectory"

The user account credentials provided in the above CLI syncdata command must have View/Edit access to the Cinchy Groups table.

Parameters

SyncData Parameters
  • -h, -HTTPS: Flag indicating connections to Cinchy should be over https.

  • -s, --server: Required. The full path to the Cinchy server without the protocol (cinchy.co/Cinchy).

  • -u, --userid: Required. The user id to login to Cinchy.

  • -p, --password: Required. The password of the specified user. This can be optionally encrypted using the CLI's encrypt command.

  • -f, --feed: Required. The name of the feed configuration as defined in Cinchy.

  • -d, --tempdirectory: Only applies to Cinchy v4. Required. The path to a directory that the CLI can use for storing temporary files to support the sync (such as error files).

  • -b, --batchsize: (Default: 5000) The number of rows to sync per batch (within a partition) when executing inserts/updates.

  • -z, --retrievalbatchsize: (Default: 5000) The max number of rows to retrieve in a single batch from Cinchy when downloading data.

  • -v, --param-values: Job parameter values defined as one or more name value pairs delimited by a colon (-v name1:value1 name2:value2).

  • --file: Works exactly as -v but it's for parameters that are files.

  • --help: Displays the help screen with the options.

  • -w, --writetofile: Write the data from Cinchy to disk, required for large data sets exceeding 2GB.

High number of Groups in ADFS

If you are syncing someone with a lot of ADFS groups, the server may reject the request because the header is too large. If you are able to log in as a user with a few groups in ADFS but run into an error with users who have a lot of ADFS groups (regardless of whether those ADFS groups are in Cinchy), you will need to make the following changes:

Update the server max Request Header size

  1. Follow the instructions outlined in this document.

CinchySSO AppSettings

In your CinchySSO app settings, you will also need to increase the max size of the request, as follows:

"AppSettings": {
  ...
  "MaxRequestHeadersTotalSize": {max size in bytes},
  "MaxRequestRequestBufferSize": {max size in bytes, use same as above},
  "MaxRequestBodySize": -1
}

Glossary

Overview

Becoming comfortable with Cinchy terms and verbiage is an essential step to broaden your understanding of the platform and data collaboration as a whole. We've created this handy guide to help you better understand the words and phrases you may see appear throughout this wiki space.

B

Batch Sync

A Batch Sync is one of the two types of Data Syncs that you can perform using Cinchy. Batch syncs work by processing a group or a ‘batch’ of data all together rather than each piece of data individually. When the data sync is triggered, it will compare the contents of the source to the target. The Cinchy Worker will decide if data needs to be added, deleted or updated. Batch sync can either be run as a one-time data load operation, or it can be scheduled to run periodically using an external Enterprise Scheduler.

C

Cinchy Administrator

The Cinchy Administrator is a user or user group with special Entitlements within the Cinchy platform. An administrator can be either an End-User or a Builder.

A Builder Admin can:

  • Modify all table data (including system tables), all schema, and all data controls

    • This includes setting up and configuring users, assigning them to groups, and assigning which users have builder access

  • View all tables (including system tables) and queries in the platform

A Non-Builder Admin can:

  • View all tables (including system tables) and queries in the platform

  • Modify data controls for tables

Cinchy Builder

“Cinchy Builders” use the Cinchy platform to build an unlimited number of business capabilities and use-cases.

The “Cinchy Builder” has access to perform the following capabilities:

  • Change Table Schema (use Cinchy’s “Design Table” functionality)

  • Grant access (use Cinchy’s “Design Controls” functionality)

  • Edit Cinchy Data in Cinchy System Tables

  • Create, Save, and Share Cinchy Queries

  • Perform specific Cinchy Queries on the Cinchy data network

  • Import/export packaged business capabilities (such as deployment packages)

  • Build Cinchy Experiences

  • Perform integration with Cinchy (Cinchy Command Line Interface [CLI] operations)

  • Create and Deliver an unlimited number of Customer Use Cases within Cinchy

  • A builder can be part of the Administrators group

Cinchy CLI

The Cinchy Command Line Interface (CLI) is a tool that offers utilities for syncing data in and out of Cinchy via various commands.

CinchyDXD

CinchyDXD is a downloadable utility used to move Data Experiences (DX) from one environment to another. This includes any and all objects and components that have been built for, or are required in support of, the Data Experience.

Cinchy ID

Cinchy ID is a unique identifier assigned automatically to all records within a normal table. The Cinchy ID is associated with the record permanently and is never reassigned even if the record is deleted.

Cinchy Query Language (CQL)

Cinchy Query Language (CQL) is the data collaboration version of common query languages such as SQL. With a robust set of functions and commands, you can use CQL to retrieve and manage/modify data and metadata from tables in your network. While data can reside across many tables, a query can isolate it to a single output, making the possibilities of CQL endlessly powerful.

Cinchy Query Language can be used in many ways, including but not limited to:

  • Building queries through the query editor that can return, insert, delete, and otherwise manage your data.

  • Creating, altering or dropping views for tables

  • Creating, altering or dropping indexes

Cinchy Upgrade Utility

The Cinchy Upgrade Utility is an easy-to-use tool that can help you deploy important changes and upgrades to your Cinchy environment when upgrading versions. Note that not every major or minor release version will require the Utility to be run; however, when it's required, this will be clearly noted in the release notes and upgrade guide.

Cinchy End-User

The “End-Users” of the Cinchy platform are those that apply the functionalities created by the “Cinchy Builders” to their business objectives. This can be employees, customers, partners, or systems. Cinchy has two types of end-user: direct and indirect.

  • Direct Users log into Cinchy via the data browser

  • Indirect Users (also commonly referred to as "external users") view/edit data via a third-party application/page that connects to Cinchy via API

Cinchy End-Users are able to:

  • Create and save personal queries. Unlike traditional saved queries made by builders, personal saved queries can't be shared and aren't auto exposed as APIs.

  • Use Tables, Saved Queries, and Experiences created by “Builders"

  • Track version history for the full lifecycle of data

  • Bookmark and manage data

  • Access data through application experiences

  • An end-user can be part of the Administrators group

Connections Experience

The Connections Experience is an integral part of Cinchy data collaboration. Serving as the front-end UI for performing Data Syncs, the Connections Experience can be accessed natively in your Cinchy browser to create, configure, and manage the data being synced in and out of the platform. The user friendly interface makes synchronizing your data across a range of apps easy.

D

Data Browser

The Data Browser is the homepage of your Cinchy platform. From here you can view any Tables, Queries, or Experiences that you have access to. You can also use it to search and review any bookmarks that you have saved.

Data Collaboration

Data collaboration entails the collection, exchange and use of data from different origins within an organization. This interaction creates data products with data owners acting as experts for their specific data domains. The interaction of these various data set domains also results in dramatically informed insights that are solely driven by the interactions of the data itself.

Data Destination(s)

When setting up a Data Sync, you must define which destination to push data to. Cinchy maintains a robust list of Data Destination connectors and is always working on adding more into the Connections Experience.

Data Source(s)

When setting up a Data Sync, you must define which source to pull data from. Cinchy maintains a robust list of Data Source connectors and is always working on adding more into the Connections Experience.

Data Synchronizations

Data Synchronizations ("Data Syncs") are a powerful and important aspect of the Cinchy platform. Using a Data Sync allows you to bidirectional push and pull data between various Data Sources and Data Destinations, including Salesforce, Snowflake, REST APIs, and more. Built with keeping the core tenants of data collaboration in mind, Data Syncs adhere to your set Entitlements and can be configured using the intuitive Connections Experience to enable a myriad of use-cases for your business.

E

Entitlements

Entitlements refers to the set of permissions that you are granted for any piece of data. Cinchy allows entitlements to be set at a granular level, meaning you can give individual users or user groups access to things like:

  • Viewing a data sync

  • Running a job

  • Viewing a specific table row or column

  • Editing a specific table cell

  • Etc.

Entitlements can persist across the platform when using features such as link columns or the Network Map.

Experiences

Rather than traditional code-centric applications, which create data silos, you can build metadata-driven application experiences directly on the Cinchy platform. These look and feel like regular applications, but persist their data on the data network autonomously, rather than managing their own persistence. These experiences automatically adapt as your data evolves. An Org Chart is an example of a metadata-driven Experience you can create using the power of data collaboration on Cinchy.

L

Listener configuration

Real-time data syncs need a Cinchy listener configuration to be set up for event stream sources so that the platform knows what data to pull/push through your sync. As of 5.7, this can be done under the Sources section in the Listener tab. Each stream source has different variables and parameters that you can use to refine your sync. The Listener only needs to be configured for real-time syncs, not for batch syncs.

M

MDQE

MDQE, which stands for Metadata Quality Exceptions, can send out notifications based on a set of rules and the “exceptions” that break them. This powerful tool can be used to send notifications for exceptions such as:

  • Healthchecks returning a critical status

  • Upcoming Project Due Dates/Timelines

  • Client Risk Ratings reaching a high threshold

  • Tracking Ticket Urgency or Status markers

  • Unfulfilled and Pending Tasks/Deliverables

  • Etc.

MDQE monitors for specific changes in data, and then pushes out notifications when that change occurs.

Meta Forms

The Meta-Forms experience is a combination of Angular code packaged as an App Experience and a Data Model packaged as a Data Experience that enables users to interact with Cinchy data in a User-Friendly manner.

It lives on top of the data collaboration platform, allowing builders to create forms with custom configurations and users to access this data outside of the tabular view that's native to Cinchy.

N

Network Map

Cinchy comes out of the box with a system applet called Network Map, which is a visualization of your data on the platform and how everything interconnects; it's another way to view and navigate the data you have access to within Cinchy.

Each node represents a table you have access to within Cinchy, and each edge is one link between two tables. The size of the table is determined by the number of links referencing that table. The timeline on the bottom allows you to check out your data network at a point in the past and look at the evolution of your network.

It uses your Entitlements for viewable tables and linked columns.

Q

Queries/Saved Queries

Queries are the fundamental calculations that allow data to be pushed, pulled, and manipulated across a range of Cinchy features. A query can be used to call data, change data, delete data, or whatever functionality your use case requires. Cinchy comes out of the box with a native Query Builder and works with our proprietary Cinchy Query Language.

A Saved Query is merely a query that has been saved for reuse. These can be accessed via the Saved Queries table in Cinchy, and you have access to view or run them based on your entitlements.

R

Real Time Sync

A Real-Time Sync is one of the two types of Data Syncs that you can perform using Cinchy. In real-time syncs, the Cinchy Listener picks up changes in the source immediately as they occur. These syncs don't need to be manually triggered or scheduled using an external scheduler. Setting up a real-time sync does require an extra step of defining a listener configuration to execute properly.


Package the data experience

This page outlines Step 2 of Deploying CinchyDXD: Packaging the Data Experience

Download the CinchyDXD utility

The CinchyDXD utility takes all the components (tables, queries, views, formatting rules) of a DX and packages them up so they can be moved from one environment to another.

Remember that all objects need to be created in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).

The CinchyDXD utility is only required (made accessible) for the environment that's packing up the data experience. It's not required for the destination (or target) environment.

For CinchyDXD to work, you must have CinchyCLI installed. For further installation instructions, please refer to the CLI documentation (https://cli.docs.cinchy.com/).

To access the Data Experience Deployment utility please contact Cinchy support ([email protected]).

To download the Utility:

  1. Login to Cinchy

  2. Navigate to the Releases Table

  3. Select the Experience Deployment Utility View

  4. Locate and download the utility (Cinchy DXD v1.7.0.zip)

The CinchyDXD utility is only upwards compatible with Cinchy version 4.6+

  1. Unzip the utility and place the folder at any location on a computer that also has CinchyCLI installed

  2. Create a new folder in the same directory that will hold all of the DX exports generated (CinchyDXD_Output) (Image 1).

This folder will then hold all your deployment packages.

  1. Launch a PowerShell console window

  2. From the console, navigate to the CinchyDXD directory (Image 2 and 3).

From within your file explorer window, type “PowerShell” into the file path. It will launch a PowerShell window already at the folder path

Initial setup: PowerShell

PowerShell requires an initial setup when using CinchyDXD.

  1. From your PowerShell window type cin

  2. Hit Tab on your keyboard (Image 4).

  1. Hit Enter on your keyboard (Image 5).

You will get an error message (above) that CinchyDXD.ps1 can't be loaded because running scripts is disabled.

To resolve this error:

  1. From your start menu, search for PowerShell and select Run as Administrator (Image 6).

  1. When prompted if you want to allow this app to make changes on your device, select Yes.

  2. In your PowerShell Administrator window enter Set-ExecutionPolicy RemoteSigned (Image 7).

  1. Hit Enter on your keyboard (Image 8).

  1. When prompted with the Execution Policy Changes, enter A for “Yes to All”

  2. Close the PowerShell Administrator window

  3. Navigate back to your CinchyDXD PowerShell window

  4. From your PowerShell window type cin

  5. Hit Tab and then Enter on your keyboard (Image 9).

The basic CinchyDXD instructions will be displayed. You will be able to execute commands such as exporting and installing a Data Experience.

Cinchy DXD tables overview

Cinchy uses four tables for packing up and deploying a Data Experience (Image 10).

The Data Experience is defined and packaged in what will be referred to, moving forward, as the Source Environment, while the environment that the Data Experience will be deployed to will be referred to as the Target Environment.

  1. Data Experience Definition Table: Where the data experience is defined (tables, queries, views, formatting rules, UDF’s etc.)

  2. Data Experience Reference Data Table: Where we define any data that needs to move with the Data Experience for the experience to work (lookup values, static values that may need to exist in tables - it typically would not be the physical data itself)

  3. Data Experience Releases Table: Once a Data Experience is exported, an entry is created in this table for the export containing:

    • Version Number

    • Release Binary is the location where you can archive/back up your release history in Cinchy. Please note: if you have your own release management system, you have the option to opt out of archiving the releases in Cinchy and check the release into your own source control instead

    • Release Name

    • Data Experience

  4. Data Experience Release Artifact Table: Stores all of the files that are part of the Data Experience package as individual records along with all of the binary for each record

Define the data experience

When setting up a Data Experience definition, you will need one definition for each Data Experience you wish to package and deploy to a given number of Target Environments.

  1. Locate and open the Data Experience Definitions table (Image 11).

Column
Definition

2. Complete the following (Image 12):

Column
Value

If you make changes to the DX in the future, you aren't required to build a new Data Experience Definition in this table; instead, update the existing definition. If you need to review what the definition looked like historically, you can view it via the Collaboration log.

Define the reference data

When setting up a Data Experience Reference Data definition, you will need one (1) definition for each Reference Data table you wish to package and deploy with your Data Experience to the Target Environment.

This table setup is similar to setting up a CLI.

  1. Locate and open the Data Experience Reference Data table (Image 13).

Column
Definition

Based on the configuration set up in this table, Cinchy will export the data and create CSV and CLI files.

This example doesn't have Reference Data as part of the Data Experience.

Export the data experience

Using PowerShell you will now export the Data Experience you have defined within Cinchy.

  1. Launch PowerShell and navigate to your CinchyDXD folder (Image 14).

Reminder: you can launch PowerShell right from your file explorer window in the CinchyDXD folder by typing “PowerShell” into the folder path and hitting Enter on your keyboard, saving you the extra step of navigating to the CinchyDXD folder manually in PowerShell (Image 15).

  1. In the PowerShell window type in cin and hit Tab on your keyboard (Image 16).

  1. Hit Enter on your keyboard, you will see a list of commands that are available to execute (Image 17).

  1. In the PowerShell command line hit your “up” arrow key to bring back the last command and type export next to it (Image 18).

  1. Hit Enter on your keyboard (Image 19).

The PowerShell window will provide you with the required and optional components to export the data experience.

  1. You must now set up any mandatory export parameters

The parameters executed in PowerShell can exist on one line; however, for legibility, the parameters below have been put on separate lines. If you are putting your parameters on separate lines, you will be required to use the backtick character ` for the parameters to execute.

Please note that the command below is only a sample. You will be required to provide values that correspond to:

  • the URL of the source environment

  • the User ID for the user who is performing the export

  • the Password for the user who is performing the export

  • your folder path for where CLI is stored

  • your folder path for where the CLI output files are written to

  • the GUID for the Data Experience that's generated in the Data Experience Definition table

  • your own version naming convention

  • your folder path for where your CinchyDXD output files are written to

Sample:

.\CinchyDXD.ps1 export `
-s "<cinchy source url>" `
-u "<source user id>" `
-p "<source password>" `
-c "C:\Cinchy CLI v4.0.2" `
-d "C:\CLI Output Logs" `
-g "8C4D08A1-C0ED-4FFC-A695-BBED068507E9" `
-v "1.0.0" `
-o "C:\CinchyDXD_Output"

  1. Enter the export parameters into the PowerShell window (Image 20).

  1. Hit Enter on your keyboard to run the export command.

PowerShell will begin to process the export. Once the export is complete, PowerShell will provide you with an export complete message (Image 21).

Validate export

  1. Ensure that the DXD Export Folder is populated (Image 22).

2. Ensure that the Data Experience Release table is populated in the source environment (Image 23).

3. Ensure that the Data Experience Release Artifacts table is populated in the source environment (Image 24).

GUID

This value is calculated, please note this value will be required as one of your export parameters in PowerShell

Name

This is the Name of your Data Experience

Tables

Select all tables that are part of the Data Experience

Views

Select all views (in the data browser) that are a part of the Data Experience

Integrated Clients

Select any integrated clients (For example: Tableau, PowerBI, custom integrations) that are part of the Data Experience

Data Sync Configurations

Select any data syncs (CLIs that the experience needs in order to work) that are part of the Data Experience

Listener Configurations

Select any Listener Config rows that refer to a Data Sync Configuration which is a part of the Data Experience

Reference Data

Select any reference data that's part of the Data Experience. Please note that the setup of the reference data is done in the table called Data Experience Reference Data (see step 2 below for setup details)

Secrets

Select any Secrets you'd like to include that are used in Data Sync Configurations or Listener Configs which are a part of this Data Experience.

Webhooks

Select any Webhooks that are a part of this data experience

User Defined Functions

Select any user defined functions (For example: validate phone, validate email) that are part of the Data Experience

Models

Select any custom models that override columns or tables in your Data Experience, if there are none - leave blank

Groups

Select any groups that are part of the Data Experience (when moving groups, it will also move all table access [design] controls)

System Colours

Select a system colour (if defined) for the Data Experience

Saved Queries

Select any queries that are part of the Data Experience

Applets

Select any applets that are part of the Data Experience

Pre-install Scripts

Select any Pre-install Scripts (Saved Queries) that should run before the installation of this Data Experience.

Post-install Scripts

Select any Post-install Scripts (Saved Queries) that should run after the installation of this Data Experience. A common use-case is to rectify data that may be different between environments.

Formatting Rules

Select any formatting rules that are part of the Data Experience

Literal Groups

Select any literals associated to the Data Experience (For example: key values with English and French definitions)

Builders

Select the builder(s) who have permission to export the Data Experience

Builder Groups

Select the builder group(s) that have permission to export the Data Experience

Note: Best Practice is to use a Group over a User. Users within groups can fluctuate, whereas the Group (or Role) will remain. This will require less maintenance moving forward.

Sync GUID

Leave this column blank

Name

Currency Converter

Tables

Currency Exchange Rate (Sandbox)

Saved Queries

Currency Converter

Builder Groups

Currency Converters

Name

This is the Name of your Reference Data Table. Note that this name can be anything and doesn't have to replicate the actual table name.

Ordinal

The ordinal number assigned identifies the order in which the data is loaded, which matters when there are dependencies within the data experience. For example, if you have tables that contain hierarchies, you will need to load the parent records first and then load your child records, which then resolves any links in the table.

Filter

This is where a WHERE clause would be required. For example, if you have a table that has hierarchies, you would require two rows within the Data Experience Reference Data table: one to load the parent data and one to load the children data. In the parent record, the Filter column would hold a WHERE clause that selects only the parent records. In the child record, the Filter column would hold a second WHERE clause that selects only the children records (see the illustrative sketch after this list of fields).

New Records

Identify the behaviour of a new record (INSERT, UPDATE, DELETE, IGNORE)

Change Records

Identify the behaviour of a changed record (INSERT, UPDATE, DELETE, IGNORE)

Dropped Records

Identify the behaviour of a dropped record (INSERT, UPDATE, DELETE, IGNORE)

Table

Identify the table that you are exporting data from

Sync Key

Required. The sync key is the column (or columns) used to uniquely identify records when loading the reference data, so that existing records can be matched and updated rather than duplicated.

Expiration Timestamp Field

If Dropped Records is set to “Expire” then a timestamp column is required
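As an illustrative sketch only (the table and column names here are hypothetical), the two Filter values for a self-referencing hierarchy might look like this:

-- Row 1 (lower Ordinal): load the parent records first
WHERE [Parent Category] IS NULL

-- Row 2 (higher Ordinal): load the child records that link to a parent
WHERE [Parent Category] IS NOT NULL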

https://cli.docs.cinchy.com/
Image 1: Creating your new folder
Image 2: Navigate to your directory
Image 3: Navigate to your directory
Image 4: Setting up
Image 5: Setting up, cont.
Image 6: Run as administrator
Image 7: Set-ExecutionPolicy RemoteSigned
Image 8: Hit Enter
Image 9: Finishing your set up
Image 10: Data Experience tables
Image 11: Data Experience Definitions table
Image 12: Enter in information
Image 13: Data Experience Reference Data table
Image 14: Launch PowerShell
Image 15: Launch PowerShell
Image 16: Type cin
Image 17: List of commands
Image 18
Image 19
Image 20: Enter your export parameters
Image 21: Wait for the export to complete
Image 22: Validate that your DXD Export Folder is populated
Image 23: Validate that the Data Experience Release Table is populated
Image 24: Validate that the Data Experience Release Artifacts table is populated

5.7 Release Notes

Cinchy version 5.7 was released on October 3rd, 2023

For instructions on how to upgrade your platform to the latest version, please review the documentation here.

New Capabilities

Connections

Test connections

We made it simpler to debug invalid credentials in data syncs by adding a "Test Connection" button to the UI for the following sources and destinations:

| Name | Supported source | Supported destination |
|---|---|---|
| Amazon Marketplace | ✅ Yes | 🚫 No |
| Binary Files | ✅ Yes | N/A |
| Copper | ✅ Yes | N/A |
| DB2 | ✅ Yes | ✅ Yes |
| Delimited File | ✅ Yes | N/A |
| Dynamics | ✅ Yes | 🚫 No |
| Excel | ✅ Yes | N/A |
| Fixed Width File | ✅ Yes | N/A |
| Kafka Topic | 🚫 No | ✅ Yes |
| ODBC | ✅ Yes | N/A |
| Oracle | ✅ Yes | ✅ Yes |
| Parquet | ✅ Yes | N/A |
| REST | 🚫 No | 🚫 No |
| Salesforce Object | ✅ Yes | ✅ Yes |
| Snowflake | ✅ Yes | ✅ Yes |
| SOAP | 🚫 No | 🚫 No |
| MS SQL Server | ✅ Yes | ✅ Yes |

Selecting this button will validate whether your username/password/connection string/etc. are able to connect to your source or destination. If successful, a "Connection Succeeded" popup will appear. If unsuccessful, a "Connection Failed" message will appear, along with the ability to review the associated troubleshooting logs. With this change, you are able to debug access-related data sync issues at a more granular level.

Listener config integration

As we continue to enhance our Connections Experience offerings, you can now configure your listener for real-time syncs directly in the UI without having to navigate to a separate table. For any event-triggered sync source (CDC, REST API, Kafka Topic, MongoDB Event, Polling Event, Salesforce Platform Event, and Salesforce Push Topic), there is now the option to input your configurations directly from the Source tab in the Connections Experience. Any configuration you populate via the UI will be automatically reflected back into the Listener Config table of your platform.

You are able to set the:

  • Topic JSON

  • Connections Attributes

  • Auto Offset Reset

  • Listener Status (Enabled/Disabled)

Information on the parameters and configurations for the above settings can be found here and here.

For ease of use, we also added help tips to the UI, as well as examples where necessary.

If there is more than one listener associated with your data sync, you still need to configure it via the Listener Configuration table.

New Source: Oracle Polling Connector

We added Oracle as a new database type for Polling Events in Connections. Data Polling is a source option first featured in Cinchy v5.4 which uses the Cinchy Event Listener to continuously monitor and sync data entries from your Oracle, SQL Server, or DB2 server into your Cinchy table. This capability makes data polling a much easier, more effective, and streamlined process, and avoids the complex orchestration logic that was previously necessary.

Source filter additions

For REST API, SOAP 1.2, Kafka Topic, Platform Event, and Parquet sources, we added a new "Conditional" option for source filters in the Connections UI. Similar to how the "Conditional Changed Record Behaviour" capability works, once selected you will be able to define the conditions upon which data is pulled into your source via the filter. After data is pulled from the source, the new conditional filter narrows the set of returned records down to those that match the defined conditions.

Cinchy Secrets table

The Cinchy platform now comes with a new way to store secrets — the Cinchy Secrets Table. Adhering to Cinchy’s Universal Access Controls, you can use this table as a key vault (such as Azure Key Vault or AWS Secrets Manager) to store sensitive data only accessible to the users or user groups that you give access to.

You can use secrets stored in this table anywhere a regular variable can go when configuring data syncs, including but not limited to:

  • As part of a connection string;

  • Within a REST Header, URL, or Body;

  • As an Access Key ID.

You can also use it in a Listener Configuration.

Additionally, we've implemented a new API endpoint for the retrieval of your secrets. Using the below endpoint, fill in your <base-url>, <secret-name>, and the <domain-name> to retrieve the referenced secret.

This endpoint works with Cinchy’s Personal Access Token capability, as well as Access Tokens retrieved from your IDP.

Blank Example:

<base-url>/api/v1.0/secrets-manager/secret?secretName=<secret-name>&domain=<domain-name>

Populated Example:

Cinchy.net/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox

The API will return an object in the below format:

{
    "secretValue": "password123"
}
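As a minimal sketch (assuming a Personal Access Token stored in a PAT environment variable and the populated example URL above), the secret can be retrieved with any standard HTTP client, such as curl:

curl -H "Authorization: Bearer $PAT" \
  "https://cinchy.net/api/v1.0/secrets-manager/secret?secretName=ExampleSecret&domain=Sandbox"

The response body should contain the secretValue object shown above.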

Polling listener optimization

To improve your Connections experience, we made various optimizations to our Polling Event Listener.

  • We added a new configurable property, DataPollingConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to run queries against the source database, which works to reduce the load against the database. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "DataPollingConcurrencyIndex": <numberOfThreads>

  • We added a new configurable property, QueueWriteConcurrencyIndex, to the Data Polling Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which works to provide more consistent batching by the worker and reduce your batching errors. The default number of threads is set to 12. To configure this property, navigate to your appSettings.json deployment file > "QueueWriteConcurrencyIndex": <numberOfThreads>. Note that this index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.

  • We added a new mandatory property, CursorConfiguration.CursorColumnDataType, to the Listener Topic for the Data Polling Event. This change was made in tandem with an update that ensures the database query always moves the offset, regardless of whether the query returned any records. This helps ensure that the performance of the source database isn't weighed down by constantly running heavy queries on a wide range of records when the queries return no data. The value of this mandatory property must match the column type of the source database system for proper casting of parameters.

  • We added a new configurable property, CursorConfiguration.Distinct, to the Listener Topic for the Data Polling Event. This property is a true/false Boolean type that, when set to true, applies a distinct clause on your query to avoid any duplicate records.

// App Settings JSON Example
// Example of the new configurable properties: DataPollingConcurrencyIndex (set to "1") and QueueWriteConcurrencyIndex (set to "1")
"AppSettings": {
    "GetNewListenerConfigsInterval": "",
    "StateFileWriteDelaySeconds": "",
    "KafkaClientConfig": {
      "BootstrapServers": ""
    },
    "KafkaRealtimeDatasyncTopic": "",
    "KafkaJobCancellationTopic": "",
    "DataPollingConcurrencyIndex": 1,
    "QueueWriteConcurrencyIndex": 1
  }
// Listener Config Topic Example
// Example of the new mandatory CursorColumnDataType property, which below is set to "int", and "Distinct", below set to "true".
{
    "CursorConfiguration": {
        "FromClause": "",
        "CursorColumn": "",
        "BatchSize": "",
        "FilterCondition": "",
        "Columns": [],
        "Distinct": "true",
        "CursorColumnDataType": "int"
    },
    "Delay": ""
}

Enhancements

Connections

We made various enhancements to the Connections Experience which should help to simplify and streamline your ability to create and maintain data synchronizations across the platform. Examples of these changes can be found in our Data Sync documentation.

Radio buttons for selection

  • Replaced drop-down menus with radio buttons for the following options:

    • Sync Strategy

    • Source Schema Data Types

    • Source Schema "Add Column"

Improved visibility

  • Expanded the width and height of source, destination, and connections drop-down menus to ensure visibility, even on screens with varying sizes.

Streamlined file-based source fields

  • Streamlined the organization of file-based source fields for greater efficiency.

Simplified options

  • Eliminated the following fields for a more focused interface:

    • Source > Cinchy Table > Model

    • Info > Version

  • The API Response Format field has been removed from the REST Source configuration. This change reflects that the only supported response format is JSON.

Refined order of operations

  • Reorganized the process steps, moving the "Permissions" step within the "Info" tab.

Clearer terminology

  • Adjusted terminology for clarity and consistency:

    • Renamed Sync Behaviour tab to Sync Actions.

    • Replaced Parameters with Variables.

    • Changed "Sync Pattern" to Sync Strategy in the Sync Actions tab.

    • Updated Column Mappings to Mappings in the Destination tab.

    • Substituted Access Token with API Key in the Copper Source, aligning with Copper's documentation language.

Enhanced guidance

  • Included descriptive explanations in various sections, such as Mapping, Schema, and Sync Behaviour, to provide comprehensive guidance during data sync configuration.

Unified language

  • Standardized language used in file-based connectors across all Sources.

Improved clarity

  • Added clarifying text throughout the interface for smoother navigation and configuration, fostering a more user-friendly experience.

Organizational enhancements

  • Grouped Sources by type, distinguishing between Batch and Event categories.

  • Implemented alphabetical sorting for improved accessibility and ease of locating connections.

Simplified Destination setup

We've streamlined the destination setup process for data syncs. When selecting a Source other than Cinchy, the destination is now automatically set as Cinchy Table. This enhancement speeds up the creation of data syncs.

Unique identifiers for saved connections

To assist with sharing and collaboration on connections, we've introduced unique URLs for all saved connections. Each connection now possesses a unique URL that can be shared with other platform users. This URL links directly to the saved configuration.

Enhanced Load Metadata process

We've made significant improvements to the Load Metadata sources and destinations, enhancing user experience:

  • The Load Metadata modal no longer appears automatically when selecting a relevant source or destination.

  • The availability of the Load Metadata button is conditional on filling out parameters in the Connection section.

  • Clicking the Load Metadata button now directly takes you to metadata columns, skipping the interstitial modal.

  • In the Schema section, all columns are now collapsed by default. Manually added columns maintain an expanded view.

Redesigned UI for Listener Configurations

For simpler real-time sync setups, the Cinchy Event Broker has a new Listener section. This section assists in creating topic JSON for listener configurations, eliminating the need to manually set up topic JSON in the Listener Config table. Refer to the Cinchy Broker Event source page for details on topic JSON fields.

Modal dismissal

We've introduced the ability to dismiss most modals using the Escape key. This enhancement provides a more convenient and user-friendly interaction experience.

Logging

Log outputs

To help simplify and streamline the Connections experience, you are now able to view the output for each job by clicking on the Output button located in the Jobs tab of the UI after you run a sync.

This links to the Execution Log table with a filter set for your specific sync, which can help you reach your execution-related data more quickly and easily than before.

Log full REST Target HTTP response

We now log the full REST Target HTTP response in the data sync Execution Errors table to provide you with more detailed information about your job. This replaces the original log that only contained the HTTP response status code.

MongoDB update

We continue to provide optimization updates to our Connections capabilities. v5.7 of the Cinchy platform has the following updates for the MongoDB Event Stream:

  • A new configurable property, QueueWriteConcurrencyIndex, was added to the MongoDB Event Listener. This property allows only a certain number of threads to concurrently send messages to the queue, which works to provide more consistent batching by the worker and reduce your batching errors. The default number of threads is set to 12. To configure this property, navigate to the appSettings.json > QueueWriteConcurrencyIndex: <numberOfThreads>. This index is shared across all listener configs, meaning that if it's set to 1, only one listener config will be pushing messages to the queue at a single moment in time.

  • We also added a new optional property to the MongoDB Listener Topic, changeStreamSettings.batchSize, which is a configurable way to set your own batch size on the MongoDB Change Stream Listener.

{
  "database": "",
  "collection": "",
  "changeStreamSettings": {
    "pipelineStages": [],
    "batchSize": "1000"
  }
}

Faster query performance for PostgreSQL multi-select column joins

We optimized PostgreSQL query performance when referencing multi-select columns.

Improved query performance using CASE statements

We improved query performance when using a CASE statement on a Link reference.

Meta-Forms

UI changes

  • We consolidated all actions into a single menu for easier navigation.

  • We moved Create new record into the single menu and renamed it to Create.

  • We added an option to copy the record link (URL) to the clipboard.

  • We changed Back to Table View to View Record in Table.

Forms new dropdown menu

Forms action bar

To improve the user experience and make interacting with forms easier, we made the Forms action bar always visible when you scroll through a form.

The Forms action bar

URL sync with record selection

We updated the URL to accurately match the record currently displayed when you switch records from the records dropdown menu.

Unsaved changes prompt in forms

You'll now get a prompt to save if you have unsaved changes in a form.

Required fields alert for child forms

We added a warning message in child forms when essential columns like "Child Form Link Field" or both "Child Form Parent ID" and "Child Form Link ID" are missing, as they're needed for proper functionality.

Platform

General security enhancements

We made several updates and enhancements to packages across Cinchy to improve our platform security.

Link column enhancements

We updated the dropdown menus for Link columns to display selected and deleted values at the top of the list so that you don't need to scroll through long lists just to find the ones you've selected.

IdentityServer4 to IdentityServer6 upgrade

We upgraded our IDP from IdentityServer4 to IdentityServer6 to ensure we're maintaining the highest standard of security for your platform.

Add Execute function to UDF extensions

We added execute, a new method for UDF extensions. This new query call returns a queryResult object that contains additional information about your result. For more information, see the Cinchy User Defined Functions page.

Expand platform support for DXD

We added additional system columns to extend the number of core Cinchy objects that can be managed through DXD 1.7 and higher.

The newly supported Cinchy objects are:

  • Views (Data Browser)

  • Listener Config

  • Secrets

  • Pre-install Scripts

  • Post-install Scripts

  • Webhooks

mTLS support

We implemented Istio mTLS support to ensure secure/TLS in-cluster communication of Cinchy components.

Bugs

Platform

  • We fixed a bug in the Cinchy Upgrade Utility that was causing the use of the -c flag, which is meant to delete extra metadata created on the database, to instead run (or rerun) the entire upgrade process.

  • We fixed a bug that was stripping query parameters from Relative URLs if they were being used as the Application URL of applets. For example, the bug would have stripped out a "q=1" parameter, leaving only an Absolute URL in lieu of a Relative one.

  • We fixed an issue with the behaviour of cached calculated columns when using multi-select data types (Link, Choice, and Hierarchy) with Change Approval enabled. These data types should now work as expected.

  • We resolved an issue that prevented view exports from reaching the maximum limit of 250,000 records.

Connections

  • We fixed a bug where the UUID/ObjectId in a MongoDB Change Stream Sourced data sync wasn't being serialized into text format. If you have any MongoDB Stream Sourced syncs currently utilizing the UUID/ObjectId, you may need to adjust accordingly when referencing the columns with those data types.

// Previous UUID/ObjectIDs would have been serialized as the below:
{
  "_id": ObjectId('644054f5f88104157fa9428e'),
  "uuid": UUID('ca8a3df8-b029-43ed-a691-634f7f0605f6')
}

// They will now serialize into text format like this:
{
  "_id": "644054f5f88104157fa9428e",
  "uuid": "ca8a3df8-b029-43ed-a691-634f7f0605f6"
}
  • We fixed a bug where setting a user’s time zone to UTC (Coordinated Universal Time) would result in no data being returned in any tables.

  • We fixed a bug where the Sync GUID of Saved Queries transferred over via DXD would null out.

  • We fixed a bug affecting the MongoDB Event Listener wherein the “auto offset reset” functionality would not work as anticipated when set to earliest.

  • We fixed a bug where failed jobs would return errors for logs that haven't yet been created. Log files now correctly search for only the relevant logs for the failed job.

  • We fixed an issue in the data configuration table where the IF field for the Delimited File > Conditional Calculated Column wasn't displaying correctly.

  • We resolved an issue where using multiple parameters while configuring data syncs could result in parsing and execution errors.

  • We fixed a bug preventing calculated columns from working in MongoDB targets for data syncs.

  • We fixed a bug where users were prompted to restore unsaved changes for a new connection when no configuration changes to a data sync were made.

  • We fixed a bug that was causing the platform to fail upon initializing when a System User had been added to any user group (such as the Connections or Admin groups).

  • We fixed a bug where passing an encrypted value to a variable used in a field encrypted by the connections UI would cause the sync to fail. You can now use variables with either encrypted or plaintext values.

  • We fixed a bug where using the "Delta" sync strategy led to duplicating existing records in some destinations before inserting the new rows of data.

Meta-Forms

  • We fixed a bug where child record tables within a form would display data differently when exported to a PDF.

  • We fixed an issue where the first load of an applet wouldn't render sections that require Cinchy data until you refreshed the page.

  • We fixed an issue where raw HTML was being displayed instead of HTML hyperlinks.

  • We fixed a bug that prevented a form from loading if you deleted an associated child form.

  • We fixed an issue with the record dropdown search where inputs of more than 30 characters caused a failure to match.

The Listener section in Cinchy Event Broker

Kubernetes

This page details the installation instructions for deploying Cinchy v5 on Kubernetes

Introduction

This page details the instructions for deployment of Cinchy v5 on Kubernetes. We recommend, and have documented below, that this is done via Terraform and ArgoCD. This setup involves a utility to centralize and streamline your configurations.

The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.

Deployment prerequisites

To install Cinchy v5 on Kubernetes, you need to follow the requirements below. Some requirements depend on whether you deploy on Azure or on AWS.

All platforms

These prerequisites apply whether you are installing on Azure or on AWS.

  • You must create the following four Git repositories. You can use any source control platform that supports Git, such as GitLab, Azure DevOps, and GitHub.

    • cinchy.terraform: Contains all Terraform configurations.

    • cinchy.argocd: Contains all ArgoCD configurations.

    • cinchy.kubernetes: Contains cluster and application component deployment manifests.

    • cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.

  • Download the artifacts for the four Git repositories. See here for information on accessing these. Check the contents of each of the directories into the respective repository.

  • You must have a service account with read/write permissions to the git repositories created above.

  • Install the following tools on the deployment machine:

    • Terraform

      • For an introduction to Terraform + AWS, see this Get started Guide.

      • For an introduction to Terraform + Azure, see this Get started Guide

    • kubectl (v1.23.0+)

    • .NET Core 6 (required for Cinchy v5.8 and higher).

    • Bash (Git Bash may be used on Windows)

  • If you are using Cinchy docker images, pull them.

Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

  • You will need a single domain for accessing ArgoCD, Grafana, OpenSearch Dashboard, and any deployed Cinchy instances. You have two routing options for accessing these applications - path based or subdomains. See below for an example with multiple Cinchy instances:

| Application | Path Based Routing | Subdomain Based Routing |
|---|---|---|
| Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
| Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
| Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
| ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
| Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
| OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |

  • You will need an SSL certificate for the cluster. This should be a wildcard certificate if you will use subdomain based routing. You can also use Self-Signed SSL.

Azure requirements

If you are deploying Cinchy v5 on Azure, you require the following:

Terraform requirements

  • A resource group that will contain the Azure Blob Storage with the terraform state.

  • A storage account and container (Azure Blob Storage) for persisting terraform state.

  • Install the Azure CLI on the deployment machine. It must be set to the correct profile/login

The deployment template has two options available:

  • Use an existing resource group.

  • Create a new one.

Existing resource group

If you prefer an existing resource group, you must provision the following before the deployment:

  • The resource group.

  • A virtual network (VNet) within the resource group.

  • A single subnet. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /22 to provide a range of 1024 addresses.

New resource group

  • If you prefer a new resource group, all resources will be automatically provisioned.

  • The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24).

  • An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).

Kubernetes AWS requirements

If you are deploying Cinchy v5 on AWS, you require the following:

Terraform requirements:

  • An S3 bucket that will contain the terraform state.

  • Install the AWS CLI on the deployment machine. It must be set to the correct profile/login

The template has two options available:

  • Use an existing VPC

  • Create a new one.

Existing VPC

  • If you prefer an existing VPC, you must provision the following before the deployment:

    • The VPC. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /21 to provide a range of 2048 IP addresses.

    • 3 Subnets (one per AZ). It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /23 to provide a range of 512 IP addresses.

    • If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.

New VPC

  • If you prefer a new VPC, all resources will be automatically provisioned.

  • The limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).

  • An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).

  • You must import the SSL certificate into AWS Certificate Manager (or a new certificate can be requested via AWS Certificate Manager).

  • If you are importing it, you will need the PEM-encoded certificate body and private key. You can get the PEM file from your chosen domain provider (GoDaddy, Google, etc.). Read more on this here.

Tips for Success:

  • Ensure you have the same region configuration across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.

Initial configuration

The following steps detail the instructions for setting up the initial configurations.

Configure the deployment.json file

  1. Navigate to your cinchy.devops.automations repository where you will see an aws.json and azure.json.

  2. Depending on the platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.

  3. This file will contain the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has comments in-line describing its purpose along with instructions on how to populate it.

  4. Follow the guidance within the file to configure the properties.

  5. Commit and push your changes.

Tips for Success:

  • You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.

  • The deployment.json will ask for your repository username and password, but ArgoCD may have errors when retrieving your credentials in certain situations (ex: if using GitHub). To verify if your credentials are working, navigate to the ArgoCD Settings after you have deployed Argo in this guide. To avoid errors, Cinchy recommends using a Personal Access Token instead.

    [Find more information here.](https://argo-cd.readthedocs.io/en/release-1.8/user-guide/private-repositories/)

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  1. If the file created in "Configuring the Deployment.json" step 2 has a name other than deployment.json, the reference in the command will will need to be replaced with the correct name of the file.

  2. The console output should have the following message:

Completed successfully

Terraform deployment

The following steps detail how to deploy Terraform.

Cinchy.terraform repository structure - AWS

If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.

Cinchy.terraform repository structure - Azure

If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution.

Cloud provider authentication

  1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.

  2. If you are using AWS, run the following commands to authenticate the session:

export AWS_DEFAULT_REGION=REGION
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY
  1. For Azure, run the following command and follow the on screen instructions to authenticate the session:

az login

Deploy the infrastructure

  1. Execute the following command to create the cluster:

bash create.sh
  1. Type yes when prompted to apply the terraform changes.

The resource creation process can take about 15 to 20 minutes. At the end of the execution there will be a section with the following header

Output variables

If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password

If deploying on Azure, this section will contain a single value: Azure SQL Database Password

These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.

Retrieve the SSH keys

The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.

SSH keys should be saved for future reference if a connection needs to be established directly to a worker node in the Kubernetes cluster.

AWS SSH keys

  1. The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:

terraform output -raw private_key
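As a minimal sketch (the file name is arbitrary), you can save the key to a file with restricted permissions so it can be used with ssh later:

terraform output -raw private_key > cinchy-cluster-key.pem
chmod 600 cinchy-cluster-key.pem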

Azure SSH keys

  1. The SSH key is output to the directory containing the cluster terraform configurations.

Update the deployment.json

The following section pertains to updating the Deployment.json file.

Update the database connection string

  1. Navigate to the deployment.json (created in step 3.1) > cinchy_instance_configs section.

  2. Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.

For Azure deployments, the host name isn't available as part of the terraform output and instead must be sourced from the Azure Portal.
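For illustration only, the sketch below shows the shape of the substitution with hypothetical placeholders; the exact connection string format depends on your database type and on the template already present in your deployment.json, so only replace the host and password placeholders with the values output in the previous section:

"cinchy_instance_configs": [
  {
    "database_connection_string": "...;Host=<host-from-terraform-output-or-azure-portal>;...;Password=<password-from-terraform-output>;..."
  }
]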

Create the IAM user for S3 (AWS)

The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.

To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.

The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json

Update blob storage connection details (Azure)

  1. Within the deployment.json, the azure_blob_storage_conn_str must be set.

  2. The in-line comments outline the commands required to source this value from the Azure CLI.

Enable Azure Key Vault secrets

If you have the key_vault_secrets_provider_enabled=true value in the azure.json, then the secrets files listed below would have been created during the execution of step 3.2.

You will need to add the following secrets to your Azure Key Vault:

  • worker-secret-appsettings-<cinchy_instance_name>

  • web-secret-appsettings-<cinchy_instance_name>

  • maintenance-cli-secret-appsettings-<cinchy_instance_name>

  • idp-secret-appsettings-<cinchy_instance_name>

  • forms-secret-config-<cinchy_instance_name>

  • event-listener-secret-appsettings-<cinchy_instance_name>

  • connections-secret-config-<cinchy_instance_name>

  • connections-secret-appsettings-<cinchy_instance_name>

To create your new secrets:

  1. Navigate to your key vault in the Azure portal.

  2. Open your Key Vault Settings and select Secrets.

  3. Select Generate/Import.

  4. On the Create a Secret screen, choose the following values:

    1. Upload options: Manual.

    2. Name: Choose the secret name from the above list. They will all follow the format of: <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>

    3. Value: The value for the secret will be the content of the corresponding app JSON located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder.

    4. Content type: JSON

  5. Leave the other values to their defaults.

  6. Select Create.

Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. You must create a total of 8 secrets, as shown in the above list of secret names.
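If you prefer the command line over the portal, a minimal sketch using the Azure CLI would be the following (the vault name and file path are placeholders; repeat the command for each of the 8 secret names):

az keyvault secret set \
  --vault-name <your-key-vault-name> \
  --name "worker-secret-appsettings-<cinchy_instance_name>" \
  --file <path-to-the-corresponding-secret-json>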

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
  1. If the file created in section 3 has a name other than deployment.json, the reference in the command will need to be replaced with the correct name of the file.

  2. The console output should end with the following message:

Completed successfully
  1. The updates must be committed to Git before proceeding to the next step.

Connect with kubectl

Update the Kubeconfig

AWS

  1. From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:

aws eks update-kubeconfig --region <region> --name <cluster_name>

Azure

  1. From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.

These commands with the values pre-populated can also be found from the Connect panel of the AKS Cluster in the Azure Portal.

az account set --subscription <subscription_id>
az aks get-credentials --admin --resource-group <deployment_resource_group> --name <cluster_name>

Verify the connection

  1. Verify that the connection has been established and the context is the correct cluster by running the following command:

kubectl config get-contexts
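If the expected cluster isn't marked as the current context, you can switch to it (the context name may differ slightly from the raw cluster name depending on how the kubeconfig entry was generated):

kubectl config use-context <context_name>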

Deploy and access ArgoCD

In this step, you will deploy and access ArgoCD.

Deploy ArgoCD

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy ArgoCD:

bash deploy_argocd.sh
  1. Monitor the pods within the ArgoCD namespace by running the following command every 30 seconds until they all move into a healthy state:

kubectl get pods -n argocd

Access ArgoCD

  1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to access ArgoCD:

bash access_argocd.sh

This script creates a port forward using kubectl to enable ArgoCD to be accessed at http://localhost:9090

The credentials for ArgoCD's portal are output at the start of the access_argocd script execution in Base64. The Base64 value must be decoded to get the login credentials to use for the http://localhost:9090 endpoint.
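As a small illustration (the value below is a placeholder), the Base64 credentials can be decoded from any shell:

echo "<base64-value-from-script-output>" | base64 --decode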

Deploy cluster components

In this step, you will deploy your cluster components.

Deploy ArgoCD applications

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy the cluster components using ArgoCD:

bash deploy_cluster_components.sh
  1. Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (this may take a few minutes).

Tips for Success:

  • If your pods are degraded or fail to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.

  • Check that ArgoCD is pulling from your git repository by navigating to your Settings

  • If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.

Update the DNS

  1. Execute the following command to get the External IP used by the Istio ingress gateway.

kubectl get svc -n istio-system
  1. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
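For example, assuming the default Istio ingress gateway service name, you can pull just the external address field (AWS typically exposes a hostname, Azure an IP):

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0]}'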

Access OpenSearch

The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard

The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.

To change the default credentials for Cinchy v5.4+, follow the documentation here.

To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.

Access Grafana

The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana

The default username is admin. The default password for accessing Grafana can be found by doing a search of adminPassword within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml

We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.

Deploy Cinchy components

In this step, you will deploy your Cinchy components.

Deploy ArgoCD application

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy the Cinchy application components using ArgoCD:

bash deploy_cinchy_components.sh
  1. Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (may take a few minutes)

  2. You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in step 8.2.

You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can login using the default username (admin) and password (cinchy).

Troubleshooting

  • If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.

kubectl rollout restart sts argocd-application-controller -n argocd