This page is your first stop when considering a deployment of Cinchy v5.
This section will guide you through the considerations and the prerequisites before deploying version 5 of the Cinchy platform.
The pages in this section include:
Deployment Architecture Overview: This page explores your two high-level options for deploying Cinchy, on Kubernetes or on a VM, and why Cinchy recommends a Kubernetes deployment. It also walks you through selecting a database to run your deployment on and some sizing considerations.
Kubernetes Deployment Architecture: This page provides Infrastructure (for both Azure and AWS), Cluster, and Platform component overviews for Kubernetes deployments. It also guides you through considerations about your cluster configuration.
IIS Deployment Architecture: This page provides Infrastructure and Platform component overviews for IIS (VM) deployments.
Deployment Prerequisites: This page details important prerequisites for deploying Cinchy v5.
Use the following checklist when planning for your Cinchy v5 deployment. Each item links to the appropriate documentation page.
The main differences between a Kubernetes based deployment and an IIS deployment are:
Kubernetes offers the ability to elastically scale.
IIS limits certain components to running single instances.
As all caching is in memory in an IIS deployment, if multiple instances are online for redundancy, point-to-point communication between them (HTTP requests on the server IPs) is required to maintain the cache.
Performance is better on Kubernetes because of Kafka/Redis.
Prometheus/Grafana and OpenSearch aren't available in an IIS deployment.
The Maintenance CLI runs as a CronJob in Kubernetes, while in an IIS deployment it must be orchestrated using a scheduler.
Upgrades are simpler with the container images on Kubernetes.
If you will be running on Kubernetes, please review the following checklist:
Define your object storage requirements.
Create an S3 compatible bucket.
Create your SSL Certs (With the option to use Self-Signed).
Define your Secrets Management, if desired.
Define whether you will use Cinchy's Docker Images or your own.
If using Cinchy’s, pull the images.
Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.
When installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Access the deployment repositories and copy them into your own repository (GitHub or similar).
If you will be running on IIS, please review the following checklist:
Ensure that you have an instance of SQL Server 2017+
Ensure that you have a Windows Server 2012+ machine with IIS 7.5+ installed
Install the .NET Core Hosting Bundle version 6.0.
Specifically, install: ASP.NET Core/.NET Core Runtime & Hosting Bundle.
Ensure that you review the minimum web server hardware recommendations
Ensure that you review the minimum database server hardware recommendations
Define your application storage requirements.
Ensure you have access to the release binary.
This page details the deployment architecture of Cinchy v5 when running on a VM.
The below diagram shows a high level overview of Cinchy's Infrastructure components when deploying on IIS.
Some components and configurations are dependent on the platform usage. The table below provides a description of each component.
Tip: Click on an image to enlarge it.
Cinchy Web Application
This is the primary application for Cinchy, providing both the UI for end users as well as the REST APIs that serve application integration needs. The back-end holds the engine that powers Cinchy's data / metadata management functionality.
ASP.NET MVC 5
.NET Framework 4.7.2+, IIS 7.5+, Windows Server 2012 or later
Cinchy IdP
This is an OpenID Connect / OAuth 2.0 based Identity Provider that comes with Cinchy for authenticating users. Cinchy supports user group management directly on the platform, but can also connect into an existing IdP available in the organization if it can issue SAML tokens. Optionally, Active Directory groups may be integrated into the platform. When using SSO, this component federates authentication to the customer's SAML enabled IdP. This centralized IdP issues tokens to all integrated applications including the Cinchy web app as well as any components accessing the REST based APIs.
.Net Core 2.1
.NET Framework 4.7.2+, IIS 7.5+, Windows Server 2012 or later
Cinchy Database
All data managed on Cinchy is stored in a MS SQL Server database. This is the persistence layer.
MS SQL Server Database
Windows Server 2012 or later, MS SQL Server 2012 or later
Cinchy CLI
The CLI offers utilities to get data in and out of Cinchy. It has tools to sync data from a source into a table in Cinchy. It can operate on large datasets with built-in partitioning capability and performs a reconciliation to determine differences before applying changes. Another utility is the data export, which invokes a query against the Cinchy platform and dumps the results to a file for distribution to other systems requiring batch data feeds.
.NET Core 2.0
.NET Core Runtime 2.0.7+ (on Windows or Linux)
ADO.NET Driver
For .NET applications Cinchy provides an ADO.NET driver that can be used to connect into the platform and perform CRUD operations on data.
.NET Standard 2.0
See implementation support table here
JavaScript SDK
Cinchy's JavaScript SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle tier and back end.
JavaScript, jQuery
Angular SDK
Cinchy's Angular SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle tier and back end.
Angular 5
This page details the deployment architecture of Cinchy v5 when running on Kubernetes.
The diagram below shows a high level overview of a possible Infrastructure diagram with components on the cluster, but your specific configuration may vary (Image 1).
Tip: Click on an image to enlarge it.
When deploying Cinchy version 5 on Kubernetes, you may deploy via Amazon Web Services (AWS). The diagram below shows a high level overview of a possible AWS Infrastructure with components outside the cluster, but your specific configuration may vary (Image 2).
Tip: Click on an image to enlarge it.
When deploying Cinchy v5 on Kubernetes, you may deploy via Microsoft Azure. The diagram below shows a high level overview of possible Azure Infrastructure with components outside the cluster, but your specific configuration may vary (Image 3).
Tip: Click on an image to enlarge it.
The following highlighted area provides a high-level overview of cluster level components used when deploying Cinchy on Kubernetes and what versions they're running.
These are created once per cluster. Clients may choose to run these components outside of the cluster or replace with their own comparable components. This diagram shows them in the cluster (Image 4).
Tip: Click on an image to enlarge it.
These are created once per cluster. Clients may choose to run these components outside of the cluster or replace with their own comparable components.
Service Mesh - Istio: Istio handles and routes all inbound traffic to your Cinchy instance, keeping it secure and managed.
Monitoring/Alerting - Prometheus & Grafana: Prometheus consumes metrics from the running components in your environment, which you can visualize into user friendly graphs and dashboards by Grafana. Prometheus can also connect to third party services to provide alerting capabilities. Both Prometheus and Grafana use persistent storage.
Logging - OpenSearch and Fluent Bit: OpenSearch captures and indexes all logs in a single, accessible location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components. These logging components take advantage of persistent storage.
Caching - Redis: Redis facilitates a distributed lock using RedLock, which guarantees lock synchronizations across Cinchy instances. It's also a storage location for the execution output when running batch data syncs.
Event Processing - Kafka: This acts as the middleware for messaging between components through a queuing mechanism. Kafka features persistent storage.
Before you deploy Cinchy on Kubernetes, consider the following about your cluster configuration:
How many clusters will you need?
Will you be sharing from an existing cluster?
Will you be running multiple environments on a single cluster?
Each Cinchy instance uses the following components to either provide an experience to users/applications or connect data in/out of Cinchy. You can deploy multiple Cinchy instances per cluster, so these components will repeat for each environment.
The following highlighted area provides a high-level overview of instance level components used when running Cinchy on Kubernetes (Image 5).
Tip: Click on an image to enlarge it.
Meta Experiences: Cinchy offers pre-packaged experiences that you can import into your Cinchy environment and use on your data network. This includes experiences like Meta-Forms and Meta-Reports.
Connections: Use the Cinchy Connections experience to create data syncs in/out of the platform. It features persistent storage.
Data Browser: Cinchy’s data collaboration platform features a Universal Data Browser that allows users to view, change, analyze, and otherwise interact with all data on the network. The Data Browser even enables non-technical business users to manage and update data, build models, and set controls, all through an easy and intuitive UI.
Identity Provider: An Identity Provider (IdP) creates and manages user credentials and associated identity attributes. Cinchy uses an IdP's authentication services to authenticate end users.
Event Listener: The Event Listener picks up events from connected sources during a data sync. Review the Data Sync page for further information on the Event Listener. The Event Listener uses persistent storage.
Event Stream Worker: The Event Stream Worker processes data picked up by the Event Listener during data syncs. Review the Data Sync page for further information on the Event Stream Worker. The Event Worker uses persistent storage.
Maintenance (Batch Jobs): Cinchy performs maintenance tasks through the CLI. This includes the data erasure and data compression deletions.
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that simplifies the application deployment and lifecycle management. ArgoCD is highly recommended for deploying Cinchy, but you can also use another tool.
Once you set up the configurations, ArgoCD automates the deployment of the desired application states into your specified target environments. Implemented as a Kubernetes controller, it continuously monitors running applications and compares the current, live state against the desired target state (as specified in your repositories).
This section guides you through the deployment process for Cinchy version 5: from planning all the way through to installation and upgrades.
If you are looking to deploy Cinchy v5, please start here and read through all the sub-pages:
Once you have familiarized yourself with the above documentation, you may move on to either of the below guides, depending on your preference:
If you are a customer currently on v4 and want to upgrade to v5, start here:
If you have any questions about the processes outlined in this section, please reach out to the Cinchy Support team:
Via email: support@cinchy.com
Via phone: 1-888-792-6051
Through the support portal:
This page provides an overview for the deployment architecture of Cinchy v5.
When choosing to deploy Cinchy version 5, you must decide whether to deploy via Kubernetes or on a VM (IIS).
Kubernetes is an open-source system that manages and automates the full lifecycle of container-based applications. You now have the ability to deploy Cinchy v5 on Kubernetes, which helps to simplify your deployment and enhance your scaling. Kubernetes can maximize your container capacity and scale up/down with your current operations.
Grafana, OpenSearch, OpenSearch Dashboard: Working together, these three applications provide a visual logging dashboard for all the information coming in from your database pods. This streamlines your search for information by putting the control into your hands and compiling your logs in one easy to access place — you can now write a query against all your logs, in all your environments. You will have access to a default configuration out of the box, but you can also customize your dashboards as well.
Prometheus: With the Kubernetes addition, you now have access to Prometheus for your metrics dashboard. Prometheus records real-time metrics in a time series database used for event monitoring and alerting. You can then create custom dashboards to display your data in an easy to use visual that makes reporting on your metrics easy, and even set up push alerts based on your custom needs.
You also have the option to run Cinchy on a VM using IIS, which was the traditional deployment method before Cinchy v5. Internet Information Services (IIS) for Windows Server is a flexible, secure and manageable Web server for hosting anything on the Web.
We recommend using Kubernetes to deploy Cinchy v5, because of the robust features that you can leverage, such as improved logging and metrics. Kubernetes helps scale your Cinchy instances and lower your costs by using PostgreSQL.
Before deploying Cinchy v5, you must select which database you want to use.
The following list outlines the supported databases for Kubernetes deployments. For IIS deployments, Microsoft SQL Server is the supported database.
Microsoft SQL Server is a relational database management system. As a database server, it's a software product with the primary function of storing and retrieving data as requested by other software applications—which may run either on the same computer or on another computer across a network.
Microsoft Azure SQL Database is a managed cloud database provided as part of Microsoft Azure. It runs on a cloud computing platform, and access to it is provided as a service. Managed database services take care of scalability, backup, and high availability of the database.
SQL Managed Instance is a managed, cloud-based, always up-to-date SQL instance that combines broad SQL Server engine compatibility with the benefits of a fully managed PaaS.
Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora is part of the managed database service Amazon Relational Database Service (Amazon RDS). Amazon RDS is a cloud web service that makes it easier to set up, operate, and scale a relational database.
PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance. PostgreSQL comes with features aimed to help developers build applications, administrators to protect data integrity and build fault-tolerant environments, and help you manage your data no matter how big or small the dataset.
Amazon RDS makes it easy to set up, operate, and scale cloud-based PostgreSQL deployments. With Amazon RDS, you can deploy scalable PostgreSQL deployments with cost-efficient and resizable hardware capacity. Amazon RDS manages complex and time-consuming administrative tasks such as PostgreSQL software installation and upgrades; storage management; replication for high availability and read throughput; and backups for disaster recovery. Amazon RDS for PostgreSQL gives you access to the capabilities of the familiar PostgreSQL database engine.
This is a fully managed and intelligent Azure Database for PostgreSQL. Enjoy high availability with a service-level agreement (SLA) up to 99.99 percent, AI-powered performance optimization, and advanced security. A fully managed database that automates maintenance, patching, and updates. Provision in minutes and independently scale compute or storage in seconds.
Before deploying Cinchy v5, you need to define your sizing requirements.
Cluster sizing recommendations vary and are dependent on a number of deployment factors. We've provided the following general sizing recommendations, but encourage you to explore more personalized options.
CPU: 8 Cores
Memory: 32GB Ram
Number of Servers: 3
AWS: m5.2xlarge
Azure: D8 v3
If you are choosing to deploy Cinchy v5 on IIS, then you need to ensure that your VM disks have enough application storage to run your clusters.
If you are using Terraform for your Kubernetes deployment, you will need to manually set up a new S3 compatible bucket to store your state file. You will also need a bucket for Connections, to store error files created during data syncs.
Example Terraform Bucket: cinchy-terraform-state
Example Connection Bucket: cinchy-connections-cinchy-nonprod
S3 provides unlimited scalability and it charges only for what you use/how much you store on it, so there are no sizing definitions.
This document outlines the steps for configuring Active Directory Federation Services (ADFS) to facilitate Single Sign-On (SSO) with Cinchy.
Before starting with the ADFS configuration, make sure you have the following information available:
Having these details readily available will streamline the ADFS configuration process.
Navigate to AD FS Management on your ADFS server.
Right-click on Relying Party Trusts and choose Add Relying Party Trust to open the Add Relying Party Trust Wizard.
In the wizard, select Claims Aware > Start > Select Data Source.
Select Enter Data About the Relying Party Manually > Next.
Fill in a Display Name under Specify Display Name.
Skip certificate configuration in Configure Certificates.
In Configure URL, select Enable support for the SAML 2.0 WebSSO protocol.
Input your login URL as follows:
Under Configure Identifiers, add an Identifier and press Next to complete the setup.
Right-click on the newly created Relying Party Trust (located by its Display Name) and select Edit Claim Issuance Policy.
Select Add Rule > Claim Rule > Send LDAP Attributes as Claims.
Input a Claim Rule Name.
In the Attribute Store, select Active Directory. Map the LDAP attributes to the corresponding outgoing claim types as shown in the table below:
Select Finish.
Select Edit Rule > View Rule Language. Copy the Claim URLs for later use in configuring your Cinchy appsettings.json. It should look like the following:
Press OK to confirm and save.
Note: Please ensure that the configurations below are case-sensitive and align exactly with those in your SAML IdP setup.
Retrieve and save the Federation Metadata XML file from the following location: https://{your.ADFS.server}/FederationMetadata/2007-06/FederationMetadata.xml.
If needed, use IIS Manager to establish an HTTPS connection for the Cinchy website.
Also establish an HTTPS connection for the SSO site. Make sure the port number aligns with the one specified in the login URL.
You will need to refer to the Rule Language URLs you copied from the ADFS Configuration. Replace the placeholders below with your own URLs:
Insert the following lines within the <appSettings> section of your web.config file. Make sure to replace the {your.cinchy.url} and {your.cinchysso.url} placeholders with your Cinchy and Cinchy SSO values.
This page contains information on how to leverage Active Directory groups within Cinchy.
This section defines how to manage Groups.
Cinchy Groups are containers that have Users and other Groups within them as members. Use Groups to provision access controls throughout the platform. Cinchy Groups enable centralized administration for access controls.
Groups are defined in the Groups table within the Cinchy domain. By default, only members of the Cinchy Administrators group can manage this table. Each group has the following attributes:
To define a new AD Group, create a new record within the Groups Table with the same name as the AD Group (using the cn attribute).
Set the Group Type to AD Group.
To convert an existing group, update the Name attribute of the existing group record to match the AD Group (using the cn attribute).
Set the Group Type to AD Group.
The sync operation performs the following high-level steps:
Fetches all Cinchy registered AD Groups using a Saved Query.
Retrieves the usernames of all members for each AD Group. The default attribute retrieved for the username is userPrincipalName, but this is configurable as part of the sync process.
For each AD Group, it loads the users that are both a member in AD and exist in the Cinchy Users table (matched on the Username) into the "Users" attribute of the Cinchy Groups table.
An instance of the Cinchy CLI must be available to execute the sync.
You must have a task scheduler to perform the sync on a regular basis (For example, AutoSys).
Create a new query within Cinchy with the below CQL to fetch all AD Groups from the Groups table. The domain and name assigned to the query will be referenced in the next step.
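The exact query depends on your environment; the following is a minimal sketch only, assuming the default Cinchy domain Groups table and the column names described earlier on this page:

```sql
-- Minimal sketch (assumed table/column names):
-- return the names of all AD Groups registered in Cinchy
SELECT [Name]
FROM [Cinchy].[Groups]
WHERE [Group Type] = 'AD Group'
```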
Copy the below XML into a text editor of your choice and update the attributes listed in the table below the XML to align to your environment specific settings.
Create an entry with the config in your Data Sync Configurations table (part of the Cinchy CLI model).
If the userPrincipalName attribute in Active Directory doesn't match what you expect to have as the Username in the Cinchy Users table (for example, if the SAML token as part of your SSO integration returns a different ID), then you must replace userPrincipalName in the XML config with the expected attribute. The userPrincipalName appears twice in the XML, once in the LDAPDataSource Columns and once in the CinchyTableTarget ColumnMappings.
Update the command parameters (described in the table below) with your environment specific settings.
Execution of this command can be scheduled at your desired frequency using your scheduler of choice.
The user account credentials provided in the above CLI syncdata command must have View/Edit access to the Cinchy Groups table.
If you are syncing a user with many ADFS groups, the server may reject the request because the header is too large. If you can log in as a user with a few ADFS groups but run into an error with users that have many ADFS groups (regardless of whether those ADFS groups are in Cinchy), you will need to make the following changes:
In your CinchySSO app settings, you will also need to increase the max size of the request, as follows:
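The exact location of these keys in your CinchySSO appsettings.json can vary by version, so treat the following as a hedged sketch that uses the request-size settings described on this page, with illustrative values only:

```json
{
  "MaxRequestHeadersTotalSize": 1048576,
  "MaxRequestBufferSize": 1048576,
  "MaxRequestBodySize": -1
}
```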
For sizing recommendations and prerequisites about an IIS deployment, see the IIS Deployment Architecture page.
Cinchy supports both Amazon S3 and Azure Blob Storage.
You will create your two S3 compatible buckets using either Amazon or Azure. Ensure that you use the following convention when naming your buckets so that the automation script runs correctly: <org>-<component>-<cluster>. These bucket names will be referenced in your configuration files when you deploy the platform.
Right-click on Relying Party Trust > Properties. Move to the Advanced tab and select SHA-256 as the secure hash algorithm.
AD Groups defined in Cinchy have their members synced from AD through a batch process that leverages the Cinchy CLI.
You must install the Cinchy CLI Model in your instance of Cinchy. See the Cinchy CLI documentation for more details.
The below CLI command should be used to execute the sync (see the Cinchy CLI documentation for additional information on the syncdata command).
Follow the mapping in the table below:

| LDAP Attribute | Outgoing Claim Type | Notes |
| --- | --- | --- |
| User-Principal-Name | Name ID | |
| SAM-Account-Name | sub | Type sub manually to avoid auto complete |
| Given-Name | Given Name | Required for Auto User Creation |
| Surname | Surname | Required for Auto User Creation |
| E-Mail-Address | E-Mail Address | Required for Auto User Creation |
| Is-Member-Of-DL | Role | Required for Auto User Creation |
| Key | Description | Example |
| --- | --- | --- |
| CinchyLoginRedirectUri | URL of the user login redirect | https://{your.cinchysso.url}/Account/LoginRedirect |
| CinchyPostLogoutRedirectUri | URL of the user post-logout redirect | https://{your.cinchy.url} |
| CertificatePath | Path to Cinchy SSO certificate | {Path/to/CinchySSO}\cinchyidentitysrv.pfx |
| SAMLClientEntityId | Relying Party Identifier from earlier-configured Relying Party Trust | |
| SAMLIDPEntityId | Entity ID for SAML IdP, found in FederationMetadata.xml | http://{your.AD.server}/adfs/services/trust |
| SAMLMetadataXmlPath | Location of saved FederationMetadata.xml from Initial setup | |
| SAMLSSOServiceURL | URL path in Domain Controller's in-service endpoints | https://{your.AD.server}/Saml2/Acs |
| AcsURLModule | | /Saml2 |
| Key | Description |
| --- | --- |
| MaxRequestHeadersTotalSize | Maximum header size in bytes; adjustable if the default is insufficient |
| MaxRequestBufferSize | Should be equal to or larger than MaxRequestHeadersTotalSize |
| MaxRequestBodySize | Maximum request body size in bytes (use -1 for the default; usually no need to change) |
| Attribute | Description |
| --- | --- |
| Name | The Group Name. This must be unique across all groups within the system. |
| Users | The Users which are members of the group. |
| User Groups | The Groups which are members of the group. |
| Owners | Users who are able to administer memberships to the group. By default, Owners are also members of the group and thus don't need to also be added into the Users category. |
| Owner Groups | Groups whose members are able to administer the membership of the group. By default, members of Owner Groups are also members of the group itself, and thus don't need to also be added into the User or User Groups category. |
| Group Type | Either "Cinchy Group" or "AD Group". "Cinchy Group": the membership is maintained directly in Cinchy. "AD Group": a sync process maintains the membership and overwrites the Users. |
| Section | Attribute | Description | Example |
| --- | --- | --- | --- |
| LDAPDataSource | ldapserver | The LDAP server URL | LDAP:\activedirectoryserver.domain.com |
| LDAPDataSource | username | The encrypted username to authenticate with the AD server (generated using the CLI's encrypt command) | dotnet Cinchy.CLI.dll encrypt -t "Domain/username" |
| LDAPDataSource | password | The encrypted password to authenticate with the AD server (generated using the CLI's encrypt command) | dotnet Cinchy.CLI.dll encrypt -t "password" |
| LDAPDataSource -> Filter | Domain Name | The domain of the Saved Query that retrieves AD Groups | |
| LDAPDataSource -> Filter | Query Name | The name of the Saved Query that retrieves AD Groups | |
| Setting | Description | Example |
| --- | --- | --- |
| Cinchy SSO URL | The URL of your Cinchy SSO instance | {your.cinchysso.url} |
| Cinchy URL | The URL of your main Cinchy instance | {your.cinchy.url} |
| Cinchy SSO Installation Path | Directory where CinchySSO files are located | {Path/to/CinchySSO} |
| ADFS Server | The URL of your ADFS server | {your.ADFS.server} |
This page details various prerequisites for deploying Cinchy v5.
Before deploying Cinchy v5 on Kubernetes, you must follow the steps listed below.
Install the following tools on the machine where the deployment will run:
Kubectl (v1.23.0+)
All your Cinchy environments will need a domain for each of the following:
ArgoCD
OpenSearch
Grafana
Do this through your specific domain registrar. For example, GoDaddy or Google Domains.
You must have valid SSL Certs ready when you deploy Cinchy v5. Cinchy recommends using a wildcard certificate if ArgoCD will be exposed via a subdomain. Without the wildcard certificate, you must create a port forward using kubectl on demand to access ArgoCD's portal.
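As an illustration (assuming ArgoCD runs in the argocd namespace with the default argocd-server service), an on-demand port forward might look like this:

```bash
# Forward local port 9090 to the ArgoCD server service inside the cluster
kubectl port-forward svc/argocd-server -n argocd 9090:443
```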
You also have the option to use Self-Signed Certs in Kubernetes deployments. Find more information here.
Although optional, Cinchy strongly recommends secret management for storing and accessing secrets that you use in the deployment process. Cinchy currently supports:
If you would like to set up single sign-on for use in your Cinchy v5 environments, please review the SSO integration page.
You can use Cinchy Docker images or your own. If you would like to use Cinchy images, please follow the section below to access them.
You will pull Docker images from Cinchy's AWS Elastic Container Registry (ECR).
To gain access to Cinchy's Docker images, you need login credentials to the ECR. Contact Cinchy Support for access.
Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source. Use this option if you plan on leveraging a DB2 data sync.
When installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
You must create the following four Git repositories. You can use any source control platform that supports Git, such as Gitlab, Azure DevOps, or GitHub.
cinchy.terraform: Contains all Terraform configurations.
cinchy.argocd: Contains all ArgoCD configurations.
cinchy.kubernetes: Contains cluster and application component deployment manifests.
cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.
You must have a service account with read/write permissions to the git repositories created above.
You will need to access and download the Cinchy artifacts before deployment.
To access the Kubernetes artifacts:
Access the Cinchy Releases table. Please contact Cinchy Support if you don't have the access credentials necessary.
Navigate to the release you wish to deploy.
Download the .zip file(s) listed under the Kubernetes Artifacts column.
Check the contents of each of the directories into their respective repository.
Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.
If you are deploying Cinchy v5 on Azure, you require the following:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
Install the Azure CLI on the deployment machine. It must be set to the correct profile/login
The deployment template has two options available:
Use an existing resource group.
Create a new one.
If you prefer an existing resource group, you must provision the following before the deployment:
The resource group.
A VNet within the resource group.
A single subnet. It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /22 to provide a range of 1024 IPs.
If you prefer a new resource group, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
If you are deploying Cinchy v5 on AWS, you require the following:
An S3 bucket that will contain the terraform state.
Install the AWS CLI on the deployment machine. It must be set to the correct profile/login
The template has two options available:
Use an existing VPC.
Create a new one.
If you prefer an existing VPC, you must provision the following before the deployment:
The VPC. It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /21 to provide a range of 2048 IPs.
3 Subnets (one per AZ). It's important that the address range be enough for all executing processes within the cluster, such as a CIDR ending with /23 to provide a range of 512 IPs.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If you prefer a new VPC, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
You must import the SSL certificate into AWS Certificate Manager (or a new certificate can be requested via AWS Certificate Manager).
If you are importing it, you will need the PEM-encoded certificate body and private key, which you can get from your chosen domain provider (GoDaddy, Google, etc.).
Before deploying Cinchy v5 on IIS, you require the following:
You need to access and download the Cinchy binary before deployment:
Access the Cinchy Releases table. Please contact Cinchy Support if you don't have the access credentials necessary.
Navigate to the release you wish to deploy
Download the files listed under the Component Artifacts column. This should include zip files for:
Cinchy Platform
Cinchy Maintenance CLI and CLI (optional)
Cinchy Meta-Forms (optional)
Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.
An instance of SQL Server 2017+
A Windows Server 2012+ machine with IIS 7.5+ installed
Install the .NET Core Hosting Bundle version 6.0.
Specifically, install: ASP.NET Core/.NET Core Runtime & Hosting Bundle.
Cinchy Platform 5.4+ uses .NET Core 6.0. Cinchy 4.18.0+ used .NET Core 3.1, and earlier versions used .NET Core 2.1.
Minimum web server hardware recommendations:
2 × 2 GHz Processor
8 GB RAM
4 GB Hard Disk storage available
Minimum database server hardware recommendations:
4 × 2 GHz Processor
12 GB RAM
Hard disk storage dependent upon use case. Note that Cinchy maintains historical versions of data and performs soft deletes, which will add to the storage requirements.
Clustering considerations are applicable to both the Web and Database tiers in the Cinchy deployment architecture.
The web tier can be clustered by introducing a load balancer and scaling web server instances horizontally. Each node within Cinchy uses an in-memory cache of metadata information, and expiration of cached elements is triggered upon data changes that would impact that metadata. Data changes processed by one node wouldn't be known to other nodes without establishing connectivity between them. The nodes must be able to communicate over either HTTP or HTTPS through an IP based binding on the IIS server that allows the broadcast of cache expiration messages. The port used for this communication is different from the standard port that's used by the application when a domain name is involved. Often for customers this means that a firewall port must be opened on these servers.
The database tier relies on standard MS SQL Server failover clustering capabilities.
The web application oversees all interactions with Cinchy, be it through the UI or connectivity from an application. It interprets/routes incoming requests, handles serialization/deserialization of data, data validation, enforcement of access controls, and the query engine to transform Cinchy queries into the physical representation for the database. The memory footprint for the application is low, as caching is limited to metadata, but CPU use grows with request volume and complexity (for example, insert/update operations are more complex than select operations). As the user population grows or request volume increases, there may be a need to add nodes.
The database tier relies on a persistence platform that scales vertically. As the user population grows and request volume increases, the system may require additional CPU / Memory. Cinchy recommends you start off in an environment that allows flexibility (such as a VM) until you can profile the real-world load and establish a configuration. On the storage side, Cinchy maintains historical versions of records when changes are made and performs soft deletes of data which will add to the storage requirements. The volume of updates occurring to records should be considered when estimating the storage size.
Outside of log files there is no other data generated & stored on the web servers by the application, which means backups are centered around the database. Since the underlying persistence platform is a MS SQL Server, this relies on standard procedures for this platform.
This page details how to enable TLS 1.2 on Cinchy v5.
Navigate to the CinchySSO Folder > appsettings.json file.
Find the following line:
Replace the above line with the following:
Navigate to the Cinchy Folder > web.config file.
Find the following line:
Replace the above line with the following:
Restart the application pools in IIS for the changes to take effect.
This page details how to change your File Storage configuration in Cinchy v5 to S3, Azure Blob, or Local.
In v5.2, Cinchy implemented the ability to free up database space by using S3 compatible or Azure Blob Storage for file storage. You can set this configuration in the deployment.json of a Kubernetes installation, or the appsettings.json of an IIS installation.
If you are using a Kubernetes deployment, you will change your file storage config in the deployment.json.
Navigate to the object storage section, where you will see either S3 or Azure Blob storage, depending on your deployment structure.
To use Blob Storage or S3, update each line with your own parameters.
To use Local storage, leave each line blank except for the Connections_Storage_Type, which you should set to Local:
Run the deployment script by using the following command in the root directory of your devops.automations repository:
Commit and push your changes.
If you are using an IIS deployment, you will change your file storage config in the Cinchy Web AppSettings file.
Locate the StorageType section of the file and set it to either Local, AzureBlobStorage, or S3.
If you selected AzureBlobStorage, fill out the following lines in the same file:
If you selected S3, fill out the following lines in the same file:
This guide serves as a walkthrough of how to deploy v5 on IIS.
Cinchy version 5 on IIS comes bundled with common components such as Connections, Meta Forms, and the Event Listener. This page details the configuration and deployment instructions for the Cinchy Platform, including SSO.
SQL SERVER 2017+
SSMS (optional)
Install IIS 7.5+ / enable IIS from Windows features
Dotnet 6
Dotnet 7 isn't supported with Cinchy 5.x
Minimum web server hardware recommendations:
2 × 2 GHz Processor
8 GB RAM
4 GB Hard Disk storage available
Minimum database server hardware recommendations:
4 × 2 GHz Processor
12 GB RAM
Hard disk storage dependent upon use case. Cinchy maintains historical versions of data and performs soft deletes, which will add to the storage requirements.
Access to Cinchy.net (Cinchy Prod) can be obtained during onboarding.
Alternatively, users can request access by sending an email to support@cinchy.com.
Navigate to the Cinchy Releases table from the Cinchy user interface.
Download the following items from the "Release Artifacts" column:
Cinchy VX.X.zip
Cinchy Connection
Cinchy Event Listener
Cinchy Meta-Forms (optional)
Cinchy Maintenance CLI (optional)
For more information about creating a database in SQL server, see the Microsoft Create a database page.
On your SQL Server 2017+ instance, create a new database and name it Cinchy.
If you choose an alternate name, use that name in the rest of the instructions instead of Cinchy.
Create a single user account with db_owner privileges for Cinchy to connect to the database. If you choose to use Windows Authentication instead of SQL Server Authentication, the authorized account must be the same account that runs the IIS Application Pool.
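As a hedged example (the login name and password below are placeholders, not prescribed values), creating a SQL Server authenticated account with db_owner on the Cinchy database might look like this:

```sql
-- Example only: create a SQL login, map it into the Cinchy database, and grant db_owner
CREATE LOGIN [cinchy_app] WITH PASSWORD = '<strong-password>';
GO
USE [Cinchy];
GO
CREATE USER [cinchy_app] FOR LOGIN [cinchy_app];
ALTER ROLE [db_owner] ADD MEMBER [cinchy_app];
GO
```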
On the Windows Server machine, launch an instance of PowerShell as Administrator.
Copy and run the PowerShell snippet below to create the application pool and set its priorities. You can also manually create the app pool via the IIS Manager.
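If you don't have the release snippet at hand, a minimal sketch (the pool names here are examples only) using the WebAdministration module looks like this:

```powershell
# Example only: create app pools for Cinchy and CinchySSO with no managed runtime (.NET Core)
# and an AlwaysRunning start mode
Import-Module WebAdministration

foreach ($pool in @("Cinchy", "CinchySSO")) {
    New-WebAppPool -Name $pool
    Set-ItemProperty "IIS:\AppPools\$pool" -Name managedRuntimeVersion -Value ""
    Set-ItemProperty "IIS:\AppPools\$pool" -Name startMode -Value "AlwaysRunning"
}
```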
Verify the membership under Db_name → Security → Users → select the user → Properties → Membership.
If you use Windows Authentication in the database or want to run the application under a different user account, execute the commands below to change the application pool identity.
You can also use an alternate name in the application pool.
Download and unzip the "Cinchy vX.X" application package from the Releases Table. This will create two directories: Cinchy and CinchySSO. For example, if you unzip at the root of your C drive, the two directories will be C:\Cinchy and C:\CinchySSO.
Make sure your application pool accounts have read and execute access to these directories.
Run the below commands in the Administrator instance of PowerShell to create separate directories for Errorlogs and Logs.
You can create them under a single folder as well, for example md C:\your_folder_name\CinchyLogs\Cinchy. If you do, make sure to replace any related directory references with your folder path.
Open the C:\CinchySSO\appsettings.json file in a text editor and update the values below. Under the AppSettings section, update the values outlined in the table.
Replace <base url> with your chosen protocol and domain. For example, if using HTTPS on app.cinchy.co, substitute <base url> with https://app.cinchy.co. For localhost, use http://localhost/Cinchy.
| Key | Description | Example |
| --- | --- | --- |
| CinchyUri | The base URL appended with /Cinchy. | http://localhost/Cinchy, {base_cinchy_url}/Cinchy |
| CertificatePath | Path to the CinchySSO v5 folder for the certificate. | C:\\CinchySSO\\cinchyidentitysrv.pfx |
| StsPublicOriginUri | Base URL of the .well-known discovery. | http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO |
| StsPrivateOriginUri | Private base URL of the .well-known discovery. | http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO |
| CinchyAccessTokenLifetime | Duration for the Cinchy Access Token in v5.4+. Defaults to 7.00:00:00 (7 days). | 7.00:00:00 |
| DB Type | Database type, either PostgreSQL or TSQL. | For SQL Server installations: TSQL |
For more information on the SSO installation, please see the SSO installation page.
To connect the application to the database, you must set the SqlServer value.
Find and update the value under the "ConnectionStrings" section:
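The connection string itself follows the standard SQL Server format; a sketch (server, user, and password are placeholders) could look like this:

```json
{
  "ConnectionStrings": {
    "SqlServer": "Server=<your-sql-server>;Database=Cinchy;User ID=<user>;Password=<password>;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;"
  }
}
```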
Cinchy has a serilog property that configures where the logs are located. In the below code, update the following:
"Name" must be set to "File" so it writes to a physical file on the disk.
Set "path" to the file path where you want it to log.
Replace the "WriteTo" section with the following:
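A minimal sketch of that section, assuming a log path like the one created earlier in this guide:

```json
{
  "Serilog": {
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\CinchyLogs\\Cinchy\\log.json"
        }
      }
    ]
  }
}
```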
Navigate to the installation folder for Cinchy (C:\Cinchy).
Open the appsettings.json file and update the following properties:
| Key | Description | Example |
| --- | --- | --- |
| StsPrivateAuthorityUri | Match your private Cinchy SSO URL. | http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO |
| StsPublicAuthorityUri | Match your public Cinchy SSO URL. | http://localhost/CinchySSO, {base_cinchy_url}/CinchySSO |
| CinchyPrivateUri | Match your private Cinchy URL. | http://localhost/Cinchy, {base_cinchy_url}/Cinchy |
| CinchyPublicUri | Match your public Cinchy URL. | http://localhost/Cinchy, {base_cinchy_url}/Cinchy |
| UseHttps | Use HTTPS. | false |
| DB Type | Database type. | TSQL |
| MaxRequestBodySize | Introduced in Cinchy v5.4. Sets the file upload size for the Files API. Defaults to 1 GB. | 1073741824 // 1g |
| LogDirectoryPath | Match your Web/IDP logs folder path. | C:\\CinchyLogs\\CinchyWeb |
| SSOLogPath | Match your SSO log folder path. | C:\\CinchyLogs\\CinchySSO\\log.json |
To connect the application to the database, the SqlServer value needs to be set.
Open an administrator instance of PowerShell.
Execute the below commands to create the IIS applications and enable anonymous authentication. (This is required to allow authentication to be handled by the application).
To enable HTTPS, you must load the server certificate and complete the standard IIS configuration at the Web Site level to add the binding.
Access <base url>/Cinchy (for example, http://app.cinchy.co/Cinchy) through a web browser.
Once the login screen appears, enter the credentials:
The default username is admin and the password is cinchy.
You will be prompted to change your password the first time you log in.
Navigate to the following sub-pages to deploy the following bundled v5 components:
This page details the optional steps that you can take to use self-signed SSL Certificates in a Kubernetes Deployment of Cinchy.
Follow this process after running the devops.automations script during your initial deployment, and again each additional time you run the script (such as when updating your Cinchy platform), as running it wipes out all custom configurations you set up to use a self-signed certificate.
Execute the following commands in any folder to generate the self-signed certificate:
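The exact commands are environment-specific; a typical OpenSSL sketch for generating a self-signed root CA (file names and subject are placeholders) is:

```bash
# Generate a private key and a self-signed root CA certificate valid for one year
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout rootCA.key -out rootCA.crt \
  -subj "/CN=self-signed-root-ca"
```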
Create a YAML file located at cinchy.kubernetes/platform_components/base/self-signed-ssl-root-ca.yaml.
Add the following to the YAML file:
Add the self signed root CA cert file to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base folder.
Add the yaml code snippet to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base/kustomization.yaml file, changing the files key value below to match your root CA cert file name:
Add the following line to the cinchy.kubernetes/platform_components/base/kustomization.yaml file
Add the below Deployment patchesJson6902 to each of your cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/ENV_NAME/PLATFORM_COMPONENT_NAME/kustomization.yaml files, except base.
Ensure that the rootCA.crt file name is matched with ConfigMap data, configMapGenerator files, and the patch subpath.
Once the changes are deployed, verify the root CA cert is available on the pod under /etc/ssl/certs with the below command. Make sure to input your own POD_NAME and NAMESPACE:
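For example (pod name and namespace are placeholders), you can list the certificate directory on the pod:

```bash
# Check that the root CA certificate is present on the pod
kubectl exec -n <NAMESPACE> <POD_NAME> -- ls /etc/ssl/certs | grep -i rootCA
```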
For further reference material, see the linked article on self-signed certificates in Kubernetes.
This page details the installation instructions for deploying Cinchy v5 on Kubernetes
This page details the instructions for deployment of Cinchy v5 on Kubernetes. We recommend, and have documented below, that this is done via Terraform and ArgoCD. This setup involves a utility to centralize and streamline your configurations.
The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.
To install Cinchy v5 on Kubernetes, you need to follow the requirements below. Some requirements depend on whether you deploy on Azure or on AWS.
These prerequisites apply whether you are installing on Azure or on AWS.
You must create the following four Git repositories. You can use any source control platform that supports Git, such as GitLab, Azure DevOps, and GitHub.
cinchy.terraform: Contains all Terraform configurations.
cinchy.argocd: Contains all ArgoCD configurations.
cinchy.kubernetes: Contains cluster and application component deployment manifests.
cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.
Download the artifacts for the four Git repositories. See here for information on accessing these. Check the contents of each of the directories into the respective repository.
You must have a service account with read/write permissions to the git repositories created above.
Install the following tools on the deployment machine:
For an introduction to Terraform + AWS, see this Get started Guide.
For an introduction to Terraform + Azure, see this Get started Guide.
kubectl (v1.23.0+)
.NET Core 6 is required for Cinchy v5.8 and higher.
If you are using Cinchy docker images, pull them.
Starting in Cinchy v5.4, you will have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to be able to connect to a DB2 data source, and that option should be selected if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
You will need a single domain for accessing ArgoCD, Grafana, OpenSearch Dashboard, and any deployed Cinchy instances. You have two routing options for accessing these applications - path based or subdomains. See below for an example with multiple Cinchy instances:
| Application | Path based routing | Subdomain based routing |
| --- | --- | --- |
| Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
| Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
| Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
| ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
| Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
| OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |
You will need an SSL certificate for the cluster. This should be a wildcard certificate if you will use subdomain based routing. You can also use Self-Signed SSL.
If you are deploying Cinchy v5 on Azure, you require the following:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
Install the Azure CLI on the deployment machine. It must be set to the correct profile/login
The deployment template has two options available:
Use an existing resource group.
Create a new one.
If you prefer an existing resource group, you must provision the following before the deployment:
The resource group.
A virtual network (VNet) within the resource group.
A single subnet. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /22 to provide a range of 1024 addresses.
If you prefer a new resource group, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
If you are deploying Cinchy v5 on AWS, you require the following:
An S3 bucket that will contain the terraform state.
Install the AWS CLI on the deployment machine. It must be set to the correct profile/login
The template has two options available:
Use an existing VPC
Create a new one.
If you prefer an existing VPC, you must provision the following before the deployment:
The VPC. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /21 to provide a range of 2048 IP addresses.
3 Subnets (one per AZ). It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /23 to provide a range of 512 IP addresses.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If you prefer a new VPC, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
You must import the SSL certificate into AWS Certificate Manager (or a new certificate can be requested via AWS Certificate Manager).
If you are importing it, you will need the PEM-encoded certificate body and private key, which you can get from your chosen domain provider (GoDaddy, Google, etc.).
Tips for Success:
Ensure you have the same region configuration across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.
The following steps detail the instructions for setting up the initial configurations.
Navigate to your cinchy.devops.automations repository where you will see an aws.json and azure.json.
Depending on the platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.
This file will contain the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has comments in-line describing its purpose along with instructions on how to populate it.
Follow the guidance within the file to configure the properties.
Commit and push your changes.
Tips for Success:
You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.
The deployment.json will ask for your repository username and password, but ArgoCD may have errors when retrieving your credentials in certain situations (ex: if using GitHub). To verify if your credentials are working, navigate to the ArgoCD Settings after you have deployed Argo in this guide. To avoid errors, Cinchy recommends using a Personal Access Token instead.
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
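The utility is invoked with the dotnet CLI against your configuration file. The binary name below is an assumption based on the repository name; substitute the DLL that ships in your cinchy.devops.automations artifact:

```bash
# Assumed binary name - run the automation utility against your configuration file
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```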
If the file created in "Configuring the deployment.json" step 2 has a name other than deployment.json, the reference in the command will need to be replaced with the correct name of the file.
The console output should have the following message:
The following steps detail how to deploy Terraform.
If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.
If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.
If you are using AWS, run the following commands to authenticate the session:
For Azure, run the following command and follow the on screen instructions to authenticate the session:
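As an illustration, authentication typically uses the providers' standard CLI flows (profile names and credentials are placeholders):

```bash
# AWS: configure credentials (or export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN)
aws configure --profile <your-profile>

# Azure: interactive login, following the on-screen instructions
az login
```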
Execute the following command to create the cluster:
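Cluster creation follows the standard Terraform workflow from the cluster directory; a minimal sketch:

```bash
# Initialize the working directory (downloads providers, configures the state backend)
terraform init

# Review and apply the plan; type "yes" when prompted
terraform apply
```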
Type yes when prompted to apply the terraform changes.
The resource creation process can take about 15 to 20 minutes. At the end of the execution there will be a section with the following header:
If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password
If deploying on Azure, this section will contain a single value: Azure SQL Database Password
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.
The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.
SSH keys should be saved for future reference if a connection needs to be established directly to a worker node in the Kubernetes cluster.
The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
The SSH key is output to the directory containing the cluster terraform configurations.
The following section pertains to updating the Deployment.json file.
Navigate to the deployment.json (created in step 3.1) > cinchy_instance_configs section.
Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.
For Azure deployments, the host name isn't available as part of the terraform output and instead must be sourced from the Azure Portal.
The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.
To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.
The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json.
Within the deployment.json, the azure_blob_storage_conn_str must be set.
The in-line comments outline the commands required to source this value from the Azure CLI.
If you have the key_vault_secrets_provider_enabled=true value in the azure.json, then the below secrets files would have been created during the execution of step 3.2:
You will need to add the following secrets to your Azure Key Vault:
worker-secret-appsettings-<cinchy_instance_name>
web-secret-appsettings-<cinchy_instance_name>
maintenance-cli-secret-appsettings-<cinchy_instance_name>
idp-secret-appsettings-<cinchy_instance_name>
forms-secret-config-<cinchy_instance_name>
event-listener-secret-appsettings-<cinchy_instance_name>
connections-secret-config-<cinchy_instance_name>
connections-secret-appsettings-<cinchy_instance_name>
To create your new secrets:
Navigate to your key vault in the Azure portal.
Open your Key Vault Settings and select Secrets.
Select Generate/Import.
On the Create a Secret screen, choose the following values:
Upload options: Manual.
Name: Choose the secret name from the above list. They will all follow the format of: <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>
Value: The value for the secret is the content of the corresponding app JSON file located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder, as shown in the image above.
Content type: JSON
Leave the other values at their defaults.
Select Create.
Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. You must create a total of 8 secrets, as shown in the above list of secret names.
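If you prefer the CLI over the portal, an equivalent sketch using the Azure CLI (vault name and file path are placeholders) is:

```bash
# Load one secret from its generated JSON file; repeat for each of the 8 secret names above.
az keyvault secret set \
  --vault-name <key_vault_name> \
  --name "worker-secret-appsettings-<cinchy_instance_name>" \
  --file "<path to the corresponding app JSON in the secrets folder>"
```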
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
If the file created in section 3 has a name other than deployment.json, replace the reference in the command with the correct file name.
The console output should end with the following message:
The updates must be committed to Git before proceeding to the next step.
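A sketch of that step, assuming the three repositories are cloned side by side in the current directory:

```bash
for repo in cinchy.terraform cinchy.argocd cinchy.kubernetes; do
  (cd "$repo" && git add -A && git commit -m "Update cluster configuration" && git push)
done
```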
From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.
These commands with the values pre-populated can also be found from the Connect panel of the AKS Cluster in the Azure Portal.
Verify that the connection has been established and the context is the correct cluster by running the following command:
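For reference, the commands typically look like the following; values in angle brackets are placeholders:

```bash
# AWS: point kubectl at the new EKS cluster.
aws eks update-kubeconfig --region <region> --name <cluster_name>

# Azure: point kubectl at the new AKS cluster.
az account set --subscription <subscription_id>
az aks get-credentials --resource-group <deployment_resource_group> --name <cluster_name>

# Verify the active context.
kubectl config current-context
```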
In this step, you will deploy and access ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
Monitor the pods within the ArgoCD namespace
by running the following command every 30 seconds until they all move into a healthy state:
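A sketch of that check, assuming ArgoCD was installed into the argocd namespace:

```bash
# List the ArgoCD pods; repeat (or use -w to watch) until every pod is Running and Ready.
kubectl get pods -n argocd
kubectl get pods -n argocd -w
```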
Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to access ArgoCD:
This script creates a port forward using kubectl to enable ArgoCD to be accessed at http://localhost:9090
The credentials for ArgoCD's portal are output at the start of the access_argocd
script execution in Base64. The Base64 value must be decoded to get the login credentials to use for the http://localhost:9090 endpoint.
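One way to reproduce what the access script does, assuming ArgoCD was installed into the argocd namespace:

```bash
# Forward the ArgoCD UI to localhost:9090 (forward port 443 instead if the UI does not load over plain HTTP).
kubectl port-forward svc/argocd-server -n argocd 9090:80 &

# Decode the initial admin password.
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```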
In this step, you will deploy your cluster components.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy the cluster components using ArgoCD:
Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).
Tips for Success:
If your pods are degraded or failed to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
Check that ArgoCD is pulling from your Git repository by navigating to your Settings.
If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
Execute the following command to get the External IP used by the Istio ingress gateway.
DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
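A sketch of retrieving that value, assuming the ingress gateway lives in the istio-system namespace (the usual default):

```bash
# The EXTERNAL-IP column shows a load balancer hostname on AWS and an IP address on Azure.
kubectl get svc istio-ingressgateway -n istio-system
```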
The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard
The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.
To change the default credentials for Cinchy v5.4+, follow the documentation here.
To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.
The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana
The default username is admin. The default password for accessing Grafana can be found by doing a search of adminPassword
within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
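A quick way to surface that value from the repository root:

```bash
grep -n "adminPassword" cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
```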
We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
In this step, you will deploy your Cinchy components.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy the Cinchy application components using ArgoCD:
Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).
You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in step 8.2.
You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can log in using the default username (admin) and password (cinchy).
## Troubleshooting
If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
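A sketch of that restart, assuming ArgoCD runs in the argocd namespace and the application controller is deployed as a StatefulSet (the default in recent ArgoCD versions):

```bash
kubectl rollout restart statefulset argocd-application-controller -n argocd
```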
In Cinchy v5.6, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up AWS IAM role authentication, please review the procedure below.
To check that you have an OpenID Connect provider set up with the cluster (the default for deployments made using the Cinchy automation process), run the below command in a terminal:
The output should appear like the below. Make sure to note this down for later use.
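A sketch of that check (cluster name and region are placeholders):

```bash
aws eks describe-cluster --name <cluster_name> --region <region> \
  --query "cluster.identity.oidc.issuer" --output text
# Expected shape of the output:
# https://oidc.eks.<region>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
```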
Log in to your AWS account and create an IAM Role policy through the AWS UI. Ensure that it has S3 access.
Run the below command in a terminal to create a service account with the role created in step 3. If your cluster name has a special character, like an underscore, skip to the next section.
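One way to do this is with eksctl; the names, namespace, and policy ARN below are placeholders for your environment:

```bash
eksctl create iamserviceaccount \
  --name <service_account_name> \
  --namespace <cinchy_namespace> \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::<account_id>:policy/<s3_access_policy> \
  --approve
```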
If your cluster name has a special character, like an underscore, you will need to create and apply the YAML. Follow section 1 up until step 4, and then follow the below procedure.
In an IDE (Visual Studio, VS Code), create a new file titled my-service-account.yaml in your working directory. It should contain the below content.
In a terminal, run the below command:
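A minimal sketch of the manifest and the apply step; the name and namespace are placeholders and must match the values used throughout this procedure:

```bash
cat <<'EOF' > my-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <service_account_name>
  namespace: <cinchy_namespace>
EOF
kubectl apply -f my-service-account.yaml
```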
In an IDE (Visual Studio, VS Code), create a new file titled trust-relationship.json in your working directory. It should contain the below content.
For example:
Execute the following command to create the role, referencing the above .json file:
For example:
Execute the following command to attach the IAM policy to your role:
For example:
Execute the following command to annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume:
For example:
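Illustrative versions of the three commands above; the role, policy, account ID, namespace, and service account names are all placeholders:

```bash
# 1. Create the IAM role from the trust policy file.
aws iam create-role \
  --role-name <iam_role_name> \
  --assume-role-policy-document file://trust-relationship.json \
  --description "Role assumed by the Cinchy Connections service account"

# 2. Attach the S3 access policy to the role.
aws iam attach-role-policy \
  --role-name <iam_role_name> \
  --policy-arn arn:aws:iam::<account_id>:policy/<s3_access_policy>

# 3. Annotate the service account with the role ARN.
kubectl annotate serviceaccount -n <cinchy_namespace> <service_account_name> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account_id>:role/<iam_role_name>
```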
Confirm that the role and service account are correctly configured by verifying the output of the following commands:
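A sketch of that verification:

```bash
aws iam get-role --role-name <iam_role_name> --query Role.AssumeRolePolicyDocument
kubectl describe serviceaccount <service_account_name> -n <cinchy_namespace>
```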
To ensure that the Connections pod's role has the correct permissions, the role specified by the user in AWS must have its trust relationship configured as follows:
To confirm that the Connections app is using the service account:
Navigate to the cinchy.kubernetes repository > connections/kustomization.yaml file.
Execute the following:
From a terminal, run the below command:
The output should look like the following:
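One way to make this check, with the kind of output to expect (pod name and namespace are placeholders):

```bash
kubectl exec -n <cinchy_namespace> <connections_pod_name> -- env | grep AWS
# Typical output includes:
#   AWS_ROLE_ARN=arn:aws:iam::<account_id>:role/<iam_role_name>
#   AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```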
This page walks through the integration of an Identity Provider with Cinchy via SAML authentication.
Cinchy supports integration with any Identity Provider that issues SAML tokens (such as Active Directory Federation Services) for authenticating users.
It follows an SP-initiated SSO pattern: the SP redirects the user to the IdP, and the IdP must submit the SAML response via an HTTP POST to the SP's Assertion Consumer Service.
Below is a diagram outlining the flow when a non-authenticated user attempts to access a Cinchy resource (Image 1).
You must register Cinchy with the Identity Provider. As part of that process you'll supply the Assertion Consumer Service URL, choose a client identifier for the Cinchy application, and generate a metadata XML file.
The Assertion Consumer Service URL of Cinchy is the base URL of the CinchySSO application followed by "{AcsURLModule}/Acs"
https://<CinchySSO URL>/Saml2/Acs
https://myCinchyServer/Saml2/Acs
To enable SAML authentication within Cinchy, do the following:
You can find the necessary metadata XML from the applicable identity provider. Place the metadata file in the deployment directory of the CinchySSO web application.
If you are using Azure AD for this process, you can find your metadata XML by following these steps.
If you are using Google Workspace for this process, you can find your metadata XML by following steps 1-6 here.
If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.AD.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml
If you are using Okta for this process, you can find your metadata XML by following these steps.
If you are using Auth0 for this process, you can find your metadata XML by following these steps.
If you are using PingIdentity for this process, you can find your metadata XML by following these steps.
Update the values of the below app settings in the CinchySSO appsettings.json file.
SAMLClientEntityId - The client identifier chosen when registering with the Identity Provider
SAMLIDPEntityId - The entityID from the Identity Provider metadata XML
SAMLMetadataXmlPath - The full path to the metadata XML file
AcsURLModule - This parameter needs to be configured per your SAML ACS URL. For example, if your ACS URL looks like "https://<CinchySSO URL>/Saml2/Acs", then the value of this parameter should be "/Saml2"
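For illustration only, the four settings might be filled in as follows; the values are placeholders, and the surrounding structure of your appsettings.json should be taken from the CinchySSO template, not from this sketch:

```bash
# Shown as a shell heredoc purely to keep the excerpt self-contained.
cat <<'EOF'
"SAMLClientEntityId": "cinchy-sso",
"SAMLIDPEntityId": "https://idp.example.com/metadata",
"SAMLMetadataXmlPath": "C:\\CinchySSO\\idp-metadata.xml",
"AcsURLModule": "/Saml2"
EOF
```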
When configuring the Identity Provider, the only required claim is a user name identifier. If you plan to enable automatic user creation, additional claims must be added to the configuration; see section 4 below for more details.
Once you enable SSO, the next time a user arrives at the Cinchy login screen they will see an additional button for "Single Sign-On".
Retrieve your metadata.xml file from your identity provider.
If you are using Azure AD for this process, you can find your metadata XML by following these steps.
If you are using Google Workspace for this process, you can find your metadata XML by following steps 1-6 here.
If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.AD.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml
If you are using Okta for this process, you can find your metadata XML by following these steps.
If you are using Auth0 for this process, you can find your metadata XML by following these steps.
If you are using PingIdentity for this process, you can find your metadata XML by following these steps.
Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\idp\kustomization.yaml file.
Add your metadata.xml patch into your secrets where specified below as <<metadata.xml>>
Navigate to your devops.automation > deployment.json in your Cinchy instance.
Add the following fields to the .json and update their values using your metadata.xml.
Navigate to your kubernetes\environment_kustomizations_template\instance_template_encoded_vars\idp_appsettings_json.
Update the below code with your proper AppSettings and ExternalIdentityClaimSection details.
Run the DevOps automation script, which will populate the updated outputs into the cinchy.kubernetes repository.
Commit your changes and push to your source control system.
Navigate to your ArgoCD dashboard and refresh the idp-app to pick up your changes. It will also delete your currently running pods to pick up the latest secrets.
Once the pods are healthy, you can verify the changes by looking for the SSO Tab on your Cinchy login page.
Before a user is able to login through the SSO flow, the user must be set up in Cinchy with the appropriate authentication configuration.
Users in Cinchy are maintained within the Users table in the Cinchy domain. Each user in the system is configured with 1 of 3 Authentication Methods:
Cinchy User Account - These are users that are created and managed directly in the Cinchy application. They log into Cinchy by entering their username and password on the login screen.
Non Interactive - These accounts are intended for application use.
Single Sign-On - These users authenticate through the SSO Identity Provider (configured using the steps above). They log into Cinchy by clicking the "Login with Single Sign-On" link on the login screen.
Create a new record within the Users table with the Authentication Method set to Single Sign-On.
The password field in the Users table is mandatory. For SSO users, the value entered is ignored. You can input n/a.
Change the Authentication Method of the existing user to Single Sign-On.
When a user is configured for SSO, they can select Login with Single Sign-On on the login page, which directs logins through the Identity Provider's authentication flow.
If a user successfully authenticates with the Identity Provider but hasn't been set up in the Users table, they will see the following error message: "You aren't a registered user in Cinchy. Please contact your Cinchy administrator." To avoid having to add new users manually, consider enabling automatic user creation.
On SSO-enabled Cinchy instances, users that don't exist in the Cinchy Users table won't be able to log in, regardless of whether they're authenticated by the Identity Provider.
If you enable automatic user creation, any user authenticated by the Identity Provider automatically has a user entry created in the Cinchy Users table if one doesn't already exist. This means that any SSO-authenticated user is guaranteed to be able to access the platform.
If AD Groups are configured within Cinchy, the authenticated user is also automatically added to any Cinchy-mapped AD Groups where they're a member. See AD Group Integration for additional information on how to define AD Groups in Cinchy.
See below for details on how to enable Automatic User Creation.
Users that are automatically added won't be allowed to create or modify tables and queries. To provision this access, Can Design Tables and Can Design Queries must be checked on the User record in the Cinchy Users table.
The Identity Provider configuration must include the following additions to the base configuration in the SAML token response:
First Name
Last Name
To enable automatic group assignment for newly created users, you must also include an attribute that captures the groups that the user is a member of, for example the memberOf field in AD. This is applicable if you plan on using AD Groups.
To enable automatic user creation, the following changes are required. For IIS deployments, these changes are made to the appsettings.json file in the CinchySSO web application.
Add ExternalClaimName attribute values under "ExternalIdentityClaimSection" in the appsettings.json file. Don't add the value for MemberOf if you don't want to enable automatic group assignment.
The ExternalClaimName value must be updated to create a mapping between the attribute name in the SAML response and the required field. For example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname is the name in the SAML response for the FirstName field.
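For illustration only, a mapping might look like the sketch below; the exact nesting of the section is an assumption (follow your appsettings.json template), and the claim URIs must match what your Identity Provider actually emits; only the givenname URI is taken from this page:

```bash
# Shown as a shell heredoc purely to keep the excerpt self-contained.
cat <<'EOF'
"ExternalIdentityClaimSection": {
  "FirstName": { "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" },
  "LastName":  { "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" },
  "MemberOf":  { "ExternalClaimName": "http://schemas.xmlsoap.org/claims/Group" }
}
EOF
```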
## 6. Further Reading
There might be times when you want to temporarily disable your Kubernetes pods to perform maintenance or upgrades. You can do so through the following steps:
Access your ArgoCD.
Navigate to the application directory for the namespace
you wish to disable, in this case development-cinchy (Image 1). You should see your cluster component applications.
Select the main application (development-cinchy) (Image 2).
Navigate to Summary > Sync Policy > Automated, then select Disable Auto-Sync > OK (Image 3).
For each of the cluster applications that you wish to disable, select the "..." > Delete (Image 5).
Your apps should all appear as "out of sync" (Image 6).
To re-enable your applications, return to the application directory for your disabled namespace
(Image 7).
Select the main application (i.e. development-cinchy) (Image 8).
Navigate to Summary > Sync Policy, then select Enable Auto-Sync > OK (Image 9).