This page is your first stop when considering a deployment of Cinchy v5.
There are various things to consider before deploying Cinchy v5.
The pages in this Deployment Planning section will guide you through the considerations you should think about and the prerequisites that you should implement before deploying version 5 of the Cinchy platform.
The pages in this section include:
Deployment Architecture Overview: This page explores your two high-level options for deploying Cinchy, on Kubernetes or on VM, and why we recommend a Kubernetes deployment. It also walks you through the important decision of selecting a database to run your deployment on, as well as some sizing considerations.
Kubernetes Deployment Architecture: This page provides Infrastructure (for both Azure and AWS), Cluster, and Platform component overviews for Kubernetes deployments. It also guides you through considerations concerning your cluster configuration.
IIS Deployment Architecture: This page provides Infrastructure and Platform component overviews for IIS (VM) deployments.
Deployment Prerequisites: This page details important prerequisites for deploying Cinchy v5.
We have provided the following checklist for you to use when planning for your Cinchy v5 deployment. Each item is linked to the appropriate documentation page to provide more insight and clarity.
Kubernetes is an open-source system that manages and automates the full lifecycle of container-based applications. You now have the ability to deploy Cinchy v5 on Kubernetes, and with it comes a myriad of features that help to simplify your deployment and enhance your scaling. Kubernetes can maximize your container capacity and easily scale up/down with your current operations.
You also have the option to run Cinchy on Microsoft IIS, which was the traditional deployment method prior to Cinchy v5. Internet Information Services (IIS) for Windows Server is a flexible, secure and manageable Web server for hosting anything on the Web.
We recommend using Kubernetes to deploy Cinchy v5, because of the robust features that you can leverage, such as improved logging and metrics. Using Kubernetes allows for a greater ability to scale your Cinchy instances as well as the ability to lower your costs by using PostgreSQL.
The main differences between a Kubernetes based deployment and an IIS deployment are:
If you will be running on Kubernetes, please review the following checklist:
Define your object storage requirements.
Create an S3 compatible bucket.
Create your SSL Certs (With the option to use Self-Signed).
Define your Secrets Management, if desired.
Define whether you will use Cinchy's Docker Images or your own.
If using Cinchy’s, pull the images.
Starting in Cinchy v5.4, you have the option of Alpine or Debian based image tags for the listener, worker, and connections. Debian tags allow a Kubernetes deployment to connect to a DB2 data source; select them if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Access the deployment repositories and copy them into your own repo (Github or similar).
If you will be running on IIS, please review the following checklist:
Ensure that you have an instance of SQL Server 2017+.
Ensure that you have a Windows Server 2012+ machine with IIS 7.5+ installed.
Install the .NET 6.0 Hosting Bundle. Specifically, install the ASP.NET Core Runtime & Hosting Bundle.
Ensure that you review the minimum web server hardware recommendations.
Ensure that you review the minimum database server hardware recommendations.
Define your application storage requirements.
Ensure you have access to the release binary.
This page details how to enable TLS 1.2 on Cinchy v5.
1. Navigate to the CinchySSO Folder > appsettings.json file.
2. Find the following line:
3. Replace the above line with the following:
4. Navigate to the Cinchy Folder > web.config file.
5. Find the following line:
6. Replace the above line with the following:
7. You may need to restart the application pools in IIS for the changes to take effect.
This page walks through the integration of an Identity Provider with Cinchy via SAML Authentication
Cinchy supports integration with any Identity Provider that issues SAML tokens (e.g. Active Directory Federation Services) for authenticating users.
It follows an SP Initiated SSO pattern where the SP will Redirect to the IdP and the IdP must submit the SAML Response via an HTTP Post to the SP Assertion Consumer Service.
Below is a diagram outlining the flow when a non-authenticated user attempts to access a Cinchy resource (Image 1).
Cinchy must be registered with the Identity Provider. As part of that process you'll supply the Assertion Consumer Service URL, choose a client identifier for the Cinchy application, and generate a metadata XML file.
The Assertion Consumer Service URL for Cinchy is the base URL for the CinchySSO application followed by "{AcsURLModule}/Acs"
e.g. https://<CinchySSO URL>/Saml2/Acs
e.g. https://myCinchyServer/Saml2/Acs
To enable SAML authentication within Cinchy, follow the below steps:
1. You can find the necessary metadata XML from the applicable identity provider you're using to log in. Place the metadata file in the deployment directory of the CinchySSO web application.
If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.AD.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml
2. Update the values of the below app settings in the CinchySSO appsettings.json file.
SAMLClientEntityId - The client identifier chosen when registering with the Identity Provider
SAMLIDPEntityId - The entityID from the Identity Provider metadata XML
SAMLMetadataXmlPath - The full path to the metadata XML file
AcsURLModule - This parameter needs to be configured as per your SAML ACS URL. For example, if your ACS URL looks like this: "https://<CinchySSO URL>/Saml2/Acs", then the value of this parameter should be "/Saml2"
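As a reference, below is a minimal sketch of how these settings might sit in CinchySSO's appsettings.json. All values are placeholders, and the exact nesting can vary between Cinchy versions:

```json
{
  "AppSettings": {
    "SAMLClientEntityId": "cinchy-sso-client",
    "SAMLIDPEntityId": "http://adfs.example.com/adfs/services/trust",
    "SAMLMetadataXmlPath": "C:\\CinchySSO\\FederationMetadata.xml",
    "AcsURLModule": "/Saml2"
  }
}
```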
When configuring the Identity Provider, the only required claim is a user name identifier. If you plan to enable automatic user creation, then additional claims must be added to the configuration, see section 4 below for more details.
Once SSO is enabled, the next time a user arrives at the Cinchy login screen they will see an additional button for "Single Sign-On".
1. Retrieve your metadata.xml file from the identity provider you're using to log in.
If you are using ADFS for this process, you can find your metadata XML at the following link, inputting your own information for <your.AD.server>: https://<your.AD.server>/FederationMetadata/2007-06/FederationMetadata.xml
2. Navigate to your cinchy.kubernetes\environment_kustomizations_template\instance_template\idp\kustomization.yaml file.
3. Add your metadata.xml patch into your secrets where specified below as <<metadata.xml>>
4. Navigate to your devops.automation > deployment.json in your Cinchy instance.
5. Add the following fields into the .json and update them below using the metadata.xml.
6. Navigate to your kubernetes\environment_kustomizations_template\instance_template_encoded_vars\idp_appsettings_json.
7. Update the below code with your proper AppSettings and ExternalIdentityClaimSection details.
8. Run devops automation script which will populate the updated outputs into the cinchy.kubernetes repo.
9. Commit your changes and push to your source control system.
10. Navigate to your ArgoCD dashboard and refresh the idp-app to pick up your changes. It will also delete your currently running pods in order to pick up the latest secrets.
11. Once the pods are healthy, you can verify the changes by looking for the SSO Tab on your Cinchy login page.
Before a user is able to login through the SSO flow, the user must be set up in Cinchy with the appropriate authentication configuration.
Users in Cinchy are maintained within the Users table in the Cinchy domain. Each user in the system is configured with 1 of 3 Authentication Methods:
Cinchy User Account - These are users that are created and managed directly in the Cinchy application. They log into Cinchy by entering their username and password on the login screen.
Non Interactive - These accounts are intended for application use.
Single Sign-On - These users authenticate through the SSO Identity Provider (configured using the steps above). They log into Cinchy by clicking the "Login with Single Sign-On" link on the login screen.
Create a new record within the Users table with the Authentication Method set to "Single Sign-On".
The password field in the Users table is mandatory. For Single Sign-On users, the value entered is ignored. You can input "n/a".
Change the Authentication Method of the existing user to "Single Sign-On".
Once a user is configured for SSO, they can then click the "Login with Single Sign-On" link on the login page and that will then take them through the Identity Provider's authentication flow and bring them into Cinchy.
Once SSO has been enabled on your instance of Cinchy, by default, any user that does not exist in the Cinchy Users table will not be able to login, regardless of whether they are authenticated by the Identity Provider.
Enabling Automatic User Creation means that upon login, if the Identity Provider authorizes the user, an entry for this user will automatically be created in the Cinchy Users table if one does not already exist. This means that any SSO authenticated user is guaranteed to be able to access the platform.
See below for details on how to enable Automatic User Creation.
Users that are automatically added will not be allowed to create or modify tables and queries. To provision this access, Can Design Tables and Can Design Queries must be checked on the User record in the Cinchy Users table.
The Identity Provider configuration must include the following claims in addition to the base configuration in the SAML token response:
First Name
Last Name
To enable automatic group assignment for newly created users (applicable if you plan on using AD Groups), you must also include an attribute that captures the groups this user is a member of (e.g. the memberOf field in AD).
Enabling automatic user creation requires the following changes. For IIS deployments, these are made in the appsettings.json file of the CinchySSO web application.
Add ExternalClaimName attribute values under "ExternalIdentityClaimSection" in the appsettings.json file. Do not add the value for "MemberOf" if you don't want to enable automatic group assignment.
The ExternalClaimName value must be updated to create a mapping between the attribute name in the SAML response and the required field. (e.g. http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname is the name in the SAML response for the FirstName field)
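A hedged sketch of what this section might look like follows. The claim URLs shown are the standard xmlsoap identity claim names and must match your IdP's actual SAML response, and the exact property shape may differ between Cinchy versions:

```json
"ExternalIdentityClaimSection": {
  "FirstName": { "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" },
  "LastName": { "ExternalClaimName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" },
  "MemberOf": { "ExternalClaimName": "http://schemas.xmlsoap.org/claims/Group" }
}
```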
This page details the deployment architecture of Cinchy v5 when running on a VM.
The below diagram shows a high level overview of Cinchy's Infrastructure components when deploying on IIS.
Certain components and configurations are optional and depend upon the usage pattern of the platform; these are called out in the table below the diagram, which provides a description of each component.
This section of Cinchy's documentation will guide you through the deployment process for Cinchy version 5: from planning all the way through to installation and upgrades.
If you are looking to deploy Cinchy v5, please start here and read through all the sub-pages:
Once you have familiarized yourself with the above documentation, you may move on to either of the below guides, depending on your preference:
If you are a customer currently on v4 and want to upgrade to v5, start here:
If you have any questions about the processes outlined in this section, please reach out to our Support team:
Via email: support@cinchy.com
Via phone: 1-888-792-6051
Through the support portal:
This page details the deployment architecture of Cinchy v5 when running on Kubernetes.
The below diagram shows a high-level overview of a possible infrastructure setup with components on the cluster; however, your specific configuration may vary (Image 1).
When deploying Cinchy version 5 on Kubernetes, you may deploy via Amazon Web Services (AWS). The below diagram shows a high-level overview of a possible AWS infrastructure with components outside the cluster; however, your specific configuration may vary (Image 2).
When deploying Cinchy version 5 on Kubernetes, you may deploy via Microsoft Azure. The below diagram shows a high-level overview of a possible Azure infrastructure with components outside the cluster; however, your specific configuration may vary (Image 3).
The following highlighted area provides a high-level overview of cluster level components used when deploying Cinchy on Kubernetes, as well as what versions they are running.
These are created once per cluster. Clients may choose to run these components outside of the cluster or replace with their own comparable components. This diagram shows them in the cluster (Image 4).
There are a few things to consider about your cluster configuration before you deploy Cinchy on Kubernetes:
How many clusters will you need?
Will you be sharing from an existing cluster?
Will you be running multiple environments on a single cluster?
Each instance of Cinchy has the following components, which are used to either provide an experience to users/applications or connect data in/out of Cinchy. Since multiple Cinchy instances can be deployed per cluster, these components will repeat for each environment.
The following highlighted area provides a high-level overview of instance level components used when running Cinchy on Kubernetes (Image 5).
Connections: The Cinchy Connections experience is used to help easily create data syncs in/out of the platform. It features persistent storage.
Data Browser: Cinchy’s Dataware platform features a Universal Data Browser that allows users to view, change, analyze, and otherwise interact with all data on the network. The Data Browser even enables non-technical business users to manage and update data, build models, and set controls, all through an easy and intuitive UI.
Identity Provider: An Identity Provider (IdP) creates and manages user credentials and associated identity attributes. An IdP's authentication services are used in Cinchy to authenticate end-users.
Once your configurations are set, ArgoCD automates the deployment of the desired application states into your specified target environments. Implemented as a Kubernetes controller, it continuously monitors running applications and compares the current, live state against the desired target state (as specified in your repositories).
If you are using Azure AD for this process, you can find your metadata XML by
If you are using GSuite for this process, you can find your metadata XML by
If you are using Okta for this process, you can find your metadata XML by
If you are using Auth0 for this process, you can find your metadata XML by
If you are using PingIdentity for this process, you can find your metadata XML by
If a user successfully authenticates with the Identity Provider but has not been set up in the Users table, then they will see the following error message: "You are not a registered user in Cinchy. Please contact your Cinchy administrator." To avoid the manual step of adding new users, you can consider enabling Automatic User Creation.
In addition to creating a user record, if AD Groups are configured within Cinchy, then the authenticated user will automatically be added to any Cinchy mapped AD Groups where they are a member. See the AD Groups documentation for additional information on how to define AD Groups in Cinchy.
Service Mesh - Istio: All inbound traffic to your Cinchy instance is routed and handled through this component, keeping it secure and managed.
Monitoring/Alerting - Prometheus & Grafana: Prometheus consumes metrics from the running components in your environment, which can then be visualized into user friendly graphs and dashboards by Grafana. Prometheus can also connect to third party services to provide alerting capabilities. Both Prometheus and Grafana use persistent storage.
Logging - Opensearch: All logs are captured and indexed by Opensearch in a single, easily accessible location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components. These logging components take advantage of persistent storage.
Caching - Redis: Redis is currently used to facilitate a distributed lock using RedLock, which guarantees lock synchronization across Cinchy instances. It is also a storage location for the execution output when running batch data syncs.
Event Processing - Kafka: This is designed to act as the middleware that allows for messaging between components through a queuing mechanism. Kafka features persistent storage.
Meta Experiences: Cinchy offers pre-packaged experiences, such as Meta-Forms, that you can import into your Cinchy environment and use on your data network.
Event Listener: The Event Listener picks up events from connected sources during a data sync. The Event Listener uses persistent storage.
Event Stream Worker: The Event Stream Worker processes data picked up by the Event Listener during data syncs. The Event Stream Worker uses persistent storage.
Maintenance (Batch Jobs): Cinchy performs maintenance jobs through the CLI. This currently includes the data erasure and data compression deletions.
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that simplifies application deployment and lifecycle management. ArgoCD is highly recommended for deploying Cinchy; however, you may also choose to use another tool.
The main differences between a Kubernetes-based deployment and an IIS deployment are:

| Kubernetes | Windows IIS |
|---|---|
| The ability to elastically scale with your business needs. | Limits certain components to running single instances. |
| Better performance due to Kafka/Redis components. | Kafka/Redis not available as part of the deployment. |
| Ability to run the Maintenance CLI as a cron job. | Maintenance CLI must be orchestrated using a scheduler. |
| Detailed logging and metrics capabilities available via Prometheus/Grafana and Opensearch components. | Prometheus/Grafana and Opensearch not available as part of the deployment. |
| Point to point communication not required to maintain multiple instance caching. | Point to point communication (HTTP requests on the server IPs) is required to maintain caching when multiple instances are stood up. |
| The use of container images allows for simpler version upgrades. | Does not use container images for upgrades. |
| Ability to use a PostgreSQL database to lower infrastructure costs. | Locked into a TSQL database. |
| You need to manually manage your cluster(s). | You do not need to manually manage your cluster(s). |
| Component | Description | Technology |
|---|---|---|
| 1. Cinchy Web Application | This is the primary application for Cinchy, providing both the UI for end users as well as the REST APIs that serve application integration needs. The back-end holds the engine that powers Cinchy's data / metadata management functionality. | ASP.NET MVC 5 |
| 2. Cinchy IdP | This is an OpenID Connect / OAuth 2.0 based Identity Provider that comes with Cinchy for authenticating users. Cinchy supports user & group management directly on the platform, but can also connect into an existing IdP available in the organization if it can issue SAML tokens. Optionally, Active Directory groups may be integrated into the platform. When SSO is turned on, this component is responsible for federating authentication to the customer's SAML enabled IdP. This centralized IdP issues tokens to all integrated applications including the Cinchy web app as well as any components accessing the REST based APIs. | .NET Core 2.1 |
| 3. Cinchy Database | All data managed on Cinchy is stored in a MS SQL Server database. This is the persistence layer. | MS SQL Server Database |
| 4. Cinchy CLI | This is Cinchy's Command Line Interface that offers utilities to get data in and out of Cinchy. One of these utilities is a tool to sync data from a source into a table in Cinchy. This is able to operate on large datasets by leveraging an in-built partitioning capability and performs a reconciliation to determine differences before applying changes. Another commonly used utility is the data export, which allows customers to invoke a query against the Cinchy platform and dump the results to a file for distribution to other systems requiring batch data feeds. | .NET Core 2.0 |
| 5. ADO.NET Driver | For .NET applications Cinchy provides an ADO.NET driver that can be used to connect into the platform and perform CRUD operations on data. | .NET Standard 2.0 |
| 6. Javascript SDK | Cinchy's Javascript SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle-tier and backend. | Javascript, JQuery |
| 7. Angular SDK | Cinchy's Angular SDK for front-end developers looking to create an application that can integrate with the Cinchy platform to act as its middle-tier and backend. | Angular 5 |
Beginning in Cinchy v5.6, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up AWS IAM role authentication, please review the procedure below.
To check that you have an OpenID Connect provider set up with the cluster (the default for deployments made using the Cinchy automation process), run the below command within a terminal:
The output should appear like the below. Make sure to note this down for later use.
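The standard AWS CLI command for this check is shown below; the trailing ID in the output is the OIDC provider ID to note down (the output values here are illustrative):

```sh
aws eks describe-cluster --name <cluster_name> --query "cluster.identity.oidc.issuer" --output text
# Example output:
# https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
```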
Log in to your AWS account and create an IAM Role policy through the AWS UI. Ensure that it has S3 access.
Run the below command in a terminal to create a service account with the role created in step 3. If your cluster name has a special character, like an underscore, skip to section 1.1.
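A typical form of that command using eksctl is sketched below. The service account, namespace, and policy names are placeholders; eksctl creates the IAM role and binds the policy for you:

```sh
eksctl create iamserviceaccount \
  --name <service_account_name> \
  --namespace <cinchy_namespace> \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::<account_id>:policy/<s3_policy_name> \
  --approve
```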
If your cluster name has a special character, like an underscore, you will need to create and apply the yaml. Follow section 1 up until step 4, and then follow the below procedure.
In an editor like Visual Code or similar, create a new file titled "my-service-account.yaml" in your working directory. It should contain the below content.
In a terminal, run the below command:
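Assuming the file name above, this applies the manifest to the cluster:

```sh
kubectl apply -f my-service-account.yaml
```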
In an editor like Visual Code or similar, create a new file titled "trust-relationship.json" in your working directory. It should contain the below content.
For example,
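A sketch of the standard IRSA trust policy follows; substitute your account ID, region, OIDC provider ID, namespace, and service account name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account_id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc_provider_id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc_provider_id>:sub": "system:serviceaccount:<cinchy_namespace>:<service_account_name>"
        }
      }
    }
  ]
}
```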
Execute the following command to create the role, referencing the above .json file:
For example,
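The sketch below assumes the trust-relationship.json created above; the role name is a placeholder:

```sh
aws iam create-role \
  --role-name <role_name> \
  --assume-role-policy-document file://trust-relationship.json \
  --description "Role assumed by the Cinchy Connections service account"
```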
Execute the following command to attach the IAM policy to your role:
For example,
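The sketch below assumes the policy created earlier and the role from the previous step:

```sh
aws iam attach-role-policy \
  --role-name <role_name> \
  --policy-arn arn:aws:iam::<account_id>:policy/<s3_policy_name>
```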
Execute the following command to annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume:
For example,
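A sketch using kubectl; the namespace, service account, and role names are placeholders:

```sh
kubectl annotate serviceaccount \
  -n <cinchy_namespace> <service_account_name> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account_id>:role/<role_name>
```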
Confirm that the role and service account are configured correctly by verifying the output of the following commands:
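For instance, the following standard commands surface the trust policy and the annotation (names are placeholders):

```sh
# Inspect the role's trust policy:
aws iam get-role --role-name <role_name> --query Role.AssumeRolePolicyDocument

# Inspect the service account's annotations:
kubectl describe serviceaccount <service_account_name> -n <cinchy_namespace>
```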
To ensure that the Connections pod's role has the correct permissions, the role specified by the user in AWS must have its Trusted Relationships configured as such:
To confirm that the Connections app is using the service account:
Navigate to the cinchy.kubernetes repo > connections/kustomization.yaml file
Execute the following:
From a terminal, run the below command:
The output should look like the following:
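One common way to verify this (pod and namespace names are environment specific) is to check that the IRSA environment variables were injected into the Connections pod:

```sh
kubectl describe pod <connections_pod_name> -n <cinchy_namespace> | grep AWS_
# Expected output resembles:
# AWS_ROLE_ARN=arn:aws:iam::<account_id>:role/<role_name>
# AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```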
This page contains information on how to leverage Active Directory groups within Cinchy.
This section defines how to manage Groups.
Cinchy Groups are containers that have Users and other Groups within them as members. They are used to provision access controls throughout the platform. Cinchy Groups enable centralized administration for access controls.
Groups are defined in the "Groups" table within the Cinchy domain. By default this table can only be managed by members of the Cinchy Administrators group. Each group has the following attributes:
Attribute | Definition |
---|---|
To define a new AD Group, create a new record within the Groups Table with the same name as the AD Group (using the cn attribute).
Set the Group Type to "AD Group".
To convert an existing group, update the Name attribute of the existing group record to match the AD Group (using the cn attribute).
Set the Group Type to "AD Group".
AD Groups defined in Cinchy have their members synced from AD through a batch process that leverages the Cinchy Command Line Interface (CLI).
The sync operation performs the following high-level steps:
Fetches all Cinchy registered AD Groups using a Saved Query.
Retrieves the usernames of all members of each AD Group. The default attribute retrieved for the username is "userPrincipalName", but this is configurable as part of the sync process.
For each AD Group, it loads the users that are both a member in AD and exist in the Cinchy Users table (matched on the Username) into the "Users" attribute of the Cinchy Groups table.
The Cinchy CLI Model must be installed in your instance of Cinchy. See here for more details.
An instance of the Cinchy CLI must be available to execute the sync.
A task scheduler is required to perform the sync on a regular basis (e.g. Autosys).
Create a new query within Cinchy with the below CQL to fetch all AD Groups from the Groups table. The domain & name assigned to the query will be referenced in the subsequent step.
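The exact CQL is environment specific, but a minimal sketch, assuming the default Groups table and column names, could look like:

```sql
SELECT [Name]
FROM [Cinchy].[Groups]
WHERE [Group Type] = 'AD Group'
AND [Deleted] IS NULL
```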
Copy the below XML into a text editor of your choice and update the attributes listed in the table below the XML to align to your environment specific settings.
Once complete, create an entry with the config in your Data Sync Configurations table (part of the Cinchy CLI model).
If the userPrincipalName attribute in Active Directory does not match what you expect to have as the Username in the Cinchy Users table (e.g. if the SAML token as part of your SSO integration returns a different ID), then you must replace userPrincipalName in the XML config with the expected attribute.
The userPrincipalName appears twice in the XML, once in the LDAPDataSource Columns and once in the CinchyTableTarget ColumnMappings.
The below CLI command (see here for additional information on the syncdata command) should be used to execute the sync.
Update the command parameters (described in the table below) with your environment specific settings.
Execution of this command can be scheduled at your desired frequency using your scheduler of choice.
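A hedged sketch of the command's shape (flags vary between CLI versions; consult the syncdata documentation for the authoritative syntax):

```sh
dotnet Cinchy.CLI.dll syncdata \
  -s <cinchy_server_url> \
  -u <service_account_username> \
  -p <service_account_password> \
  -f "<data_sync_config_name>" \
  -d <temp_directory>
```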
The user account credentials provided in the above CLI syncdata command must have View/Edit access to the Cinchy Groups table.
If you are syncing a user with many ADFS groups, the server may reject the request because the header is too large. If you can log in as a user with a few ADFS groups but encounter an error with users that have many ADFS groups (regardless of whether those groups are in Cinchy), you will need to make the following changes:
Follow the instructions outlined in this document.
In your CinchySSO app settings, you will also need to increase the max size of the request, as follows:
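For a CinchySSO instance hosted on Kestrel, this is typically done by raising the header size limit in appsettings.json. The property below is a standard Kestrel setting; the value shown is illustrative:

```json
{
  "Kestrel": {
    "Limits": {
      "MaxRequestHeadersTotalSize": 65536
    }
  }
}
```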
This page details the installation instructions for deploying Cinchy v5 on Kubernetes
This page details the instructions for deployment of Cinchy v5 on Kubernetes. We recommend, and have documented below, that this is done via Terraform and ArgoCD. This setup involves a utility to centralize and streamline your configurations.
The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.
The following lists are required prerequisites for installing Cinchy v5 on Kubernetes.
Note that some prerequisites will depend on whether you are deploying on Azure or on AWS.
These prerequisites apply whether you are installing on Azure or on AWS.
The following four Git repositories must be created. Any source control platform supporting Git may be used, e.g. Gitlab, Azure DevOps, Github
cinchy.terraform: This repo contains all Terraform configurations.
cinchy.argocd: This repo contains all ArgoCD configurations.
cinchy.kubernetes: This repo contains cluster and application component deployment manifests.
cinchy.devops.automations: This repo contains the single configuration file and binary utility that maintains the contents of the above three repositories.
Download the artifacts for the four Git repositories. See here for information on accessing these. Check the contents of each of the directories into the respective repository.
You must have a service account with read/write permissions to the git repos created above.
If you are using Cinchy's docker images, pull them.
Starting in Cinchy v5.4, you have the option of Alpine or Debian based image tags for the listener, worker, and connections. Debian tags allow a Kubernetes deployment to connect to a DB2 data source; select them if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
You will need a single domain for accessing ArgoCD, Grafana, Opensearch Dashboard, and any deployed Cinchy instances. There are two routing options for accessing these applications: path based or subdomains. See below for an example with multiple Cinchy instances:
You will need an SSL certificate for the cluster. This should be a wildcard certificate if you will use subdomain based routing. You can also use Self-Signed SSL.
The following prerequisites are required if you are deploying Cinchy v5 on Azure.
Terraform Backend Requirements:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
The Azure CLI should be installed on the machine where the deployment will be run. It must be set to the correct profile/login
The deployment template has the option of either leveraging an existing resource group or creating a new one:
If an existing resource group is preferred, the prerequisite requires the following be provisioned in advance of the deployment:
The resource group.
A VNet within the resource group.
A single subnet. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /22 to provide a range of 1024 IPs.
If a new resource group is preferred, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must provide sufficient availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
The following prerequisites are required if you are deploying Cinchy v5 on AWS.
Terraform Backend Requirements:
An S3 bucket that will contain the terraform state.
The AWS CLI should be installed on the machine where the deployment will be run. It must be set to the correct profile/login
The template has the option of either leveraging an existing VPC or creating a new one:
If an existing VPC is preferred, the prerequisite requires the following be provisioned in advance of the deployment:
The VPC. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /21 to provide a range of 2048 IPs.
3 Subnets (one per AZ). It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /23 to provide a range of 512 IPs.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If a new VPC is preferred, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must provide sufficient availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
The SSL certificate must be imported into AWS Certificate Manager, or a new certificate can be requested via AWS Certificate Manager.
If you are importing it, you will need the PEM-encoded certificate body and private key. You can get the PEM file from your chosen domain provider (GoDaddy, Google, etc.). Read more on this here.
Tips for Success:
Ensure that your region is configured the same across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.
The following steps detail the instructions for setting up the initial configurations.
1. Navigate to your cinchy.devops.automations repository where you will see an aws.json and azure.json.
2. Depending on the cloud platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.
3. This file will contain the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has comments in-line describing its purpose along with instructions on how to populate it.
4. Follow the guidance within the file to configure the properties.
5. Commit and push your changes.
Tips for Success:
You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.
The deployment.json will ask for your repo username and password; however, ArgoCD may encounter errors when retrieving your credentials in certain situations (e.g. if using Github). You can verify whether your credentials have been picked up by navigating to the ArgoCD Settings once you have deployed Argo in step 7 of this guide. To avoid errors, we recommend using a Personal Access Token instead.
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
1. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
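The utility takes the configuration file name as its argument; assuming the binary name shipped in the cinchy.devops.automations repo, the command looks like:

```sh
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```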
2. If the file created in "Configuring the Deployment.json" step 2 has a name other than "deployment.json", the reference in the command will need to be replaced with the correct name of the file.
3. The console output should terminate with a "Completed successfully".
The following steps detail how to deploy Terraform.
If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within that is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.
If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within that is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repo.
2. If you are using AWS, run the following commands to authenticate the session:
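For example, by exporting credentials for the session (or by selecting a configured profile; values are placeholders):

```sh
export AWS_ACCESS_KEY_ID=<your_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
export AWS_SESSION_TOKEN=<your_session_token>  # only needed for temporary credentials
```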
3. If using Azure, run the following command and follow the on screen instructions to authenticate the session:
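The Azure CLI login flow:

```sh
az login
```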
1. Execute the following command to create the cluster:
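If the repository does not supply a wrapper script, the standard Terraform workflow applies:

```sh
terraform init
terraform apply
```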
2. Type yes when prompted to apply the terraform changes.
The resource creation process can take approx. 15-20 minutes. At the end of the execution, there will be a section with the following header:
======= Output Variables =======
If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password
If deploying on Azure, this section will contain a single value: Azure SQL Database Password
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repo.
The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.
SSH keys should be saved for future reference in the event that a connection needs to be established directly to a worker node in the Kubernetes cluster.
The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
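A sketch, assuming the configuration exposes a Terraform output named private_key:

```sh
terraform output -raw private_key > <cluster_name>.pem
```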
The SSH key is output to the directory containing the cluster terraform configurations.
The following section pertains to updating the Deployment.json file.
Navigate to the deployment.json (created in step 3.1) > cinchy_instance_configs section.
Each object within represents an instance that will be deployed on the cluster. Each instance configuration contains a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.
Note that in the case of an Azure deployment, the host name is not available as part of the terraform output and instead must be sourced from the Azure Portal.
The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.
To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.
The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json
Within the deployment.json, the azure_blob_storage_conn_str must be set.
The in-line comments outline the commands required to source this value from the Azure CLI.
If you have set the key_vault_secrets_provider_enabled=true value in the azure.json, then the below secrets files will have been created during the execution of step 3.2:
You will need to add the following secrets to your azure key vault:
worker-secret-appsettings-<cinchy_instance_name>
web-secret-appsettings-<cinchy_instance_name>
maintenance-cli-secret-appsettings-<cinchy_instance_name>
idp-secret-appsettings-<cinchy_instance_name>
forms-secret-config-<cinchy_instance_name>
event-listener-secret-appsettings-<cinchy_instance_name>
connections-secret-config-<cinchy_instance_name>
connections-secret-appsettings-<cinchy_instance_name>
To create your new secrets:
Navigate to your key vault in the Azure portal.
Open your Key Vault Settings and select Secrets.
Select Generate/Import.
On the Create a Secret screen, choose the following values:
Upload options: Manual.
Name: Choose the secret name from the above list. They will all follow the format of: <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>
Value: The value for the secret will be the content of each app JSON located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder.
Content type: JSON
Leave the other values at their defaults.
Select Create.
Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. There are a total of 8 secrets to create as shown in the above list of secret names.
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
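As before, assuming the utility's binary name:

```sh
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```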
2. If the file created in section 3 has a name other than "deployment.json", the reference in the command will need to be replaced with the correct name of the file.
3. The console output should terminate with a "Completed successfully".
4. The updates must be committed to Git before proceeding to the next step.
From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
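The standard AWS CLI command for this is:

```sh
aws eks update-kubeconfig --region <region> --name <cluster_name>
```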
From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.
These commands with the values pre-populated can also be found from the Connect panel of the AKS Cluster in the Azure Portal.
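The standard Azure CLI sequence is:

```sh
az account set --subscription <subscription_id>
az aks get-credentials --resource-group <deployment_resource_group> --name <cluster_name>
```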
Verify that the connection has been established and the context is the correct cluster by running the following command:
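For example:

```sh
kubectl config current-context
```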
In this step, we will deploy and access ArgoCD.
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy ArgoCD:
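Assuming the deployment script name shipped in the cinchy.argocd repository (verify against your artifact contents):

```sh
bash deploy_argocd.sh
```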
3. Monitor the pods within the argocd namespace by running the following command every 30 seconds until they all move into a healthy state:
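For example:

```sh
kubectl get pods -n argocd
```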
1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to access ArgoCD:
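The script referenced later in this section is access_argocd; assuming it's a shell script at the repository root:

```sh
bash access_argocd.sh
```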
This script creates a port forward using kubectl to enable ArgoCD to be accessed at http://localhost:9090
The credentials for ArgoCD's portal are output in Base64 at the start of the access_argocd script's execution. The Base64 value must be decoded to get the login credentials to use for the http://localhost:9090 endpoint.
In this step, you will deploy your cluster components.
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy the cluster components using ArgoCD:
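Assuming the script name follows the same convention as the other cinchy.argocd scripts (verify against your artifact contents):

```sh
bash deploy_cluster_components.sh
```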
3. Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (this may take a few minutes).
Tips for Success:
If your pods are degraded or have failed to sync, refresh or resync your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
Check that ArgoCD is pulling from your git repo by navigating to your Settings
If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
1. Execute the following command to get the External IP used by the istio ingress gateway:
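Assuming the gateway runs in the conventional istio-system namespace, the External IP appears in the EXTERNAL-IP column of:

```sh
kubectl get svc istio-ingressgateway -n istio-system
```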
2. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including Opensearch, Grafana, and ArgoCD.
The default path to access Opensearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard
The default credentials for accessing Opensearch are admin/admin. We recommend that you change these credentials the first time you log in to Opensearch.
To change the default credentials for Cinchy v5.4+, follow the documentation here.
To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in Opensearch.
The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana
The default username is admin. The default password for accessing Grafana can be found by doing a search of "adminPassword" within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
In this step, you will deploy your Cinchy components.
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy the Cinchy application components using ArgoCD:
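Assuming the same script naming convention as the earlier steps (verify against your artifact contents):

```sh
bash deploy_cinchy_components.sh
```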
3. Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (this may take a few minutes).
4. You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in step 8.2.
You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can login using the default username (admin) and password (cinchy).
If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
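A sketch of that restart, assuming a recent ArgoCD version where the application controller runs as a StatefulSet in the argocd namespace:

```sh
kubectl rollout restart statefulset argocd-application-controller -n argocd
```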
This page details various prerequisites for deploying Cinchy v5.
The following is a list of steps that are required prior to deploying Cinchy v5 on Kubernetes.
The following tools should be installed on the machine where the deployment will run:
Kubectl (v1.23.0+)
All of your Cinchy environments will need a domain for each of the following:
ArgoCD
Opensearch
Grafana
This is done through your specific domain registrar (for example: GoDaddy, Google Domains, etc.)
You will need to have valid SSL Certs ready when you deploy Cinchy v5. This should be a wildcard certificate if ArgoCD will be exposed via a subdomain (preferred). Without the wildcard certificate, accessing ArgoCD's portal requires a port forward to be created using kubectl on demand.
You also have the option to use Self-Signed Certs in Kubernetes deployments. Find more information here.
Secrets management is optional, however Cinchy recommends it for securely storing and accessing secrets that will be used in the deployment process. Cinchy currently supports:
This is optional, but if you would like to set up single sign-on for use in your Cinchy v5 environments, please review the documentation here.
You have the option to either use Cinchy's Docker images or your own. If you would like to use Cinchy's, please follow the section below to access them.
You will pull Docker images from Cinchy's AWS Elastic Container Registry (ECR).
To gain access to Cinchy's Docker images, you will need login credentials to the ECR. You can gain these by contacting Cinchy Support.
Starting in Cinchy v5.4, you have the option of Alpine or Debian based image tags for the listener, worker, and connections. Debian tags allow a Kubernetes deployment to connect to a DB2 data source; select them if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
The following four Git repositories must be created. Any source control platform supporting Git may be used, e.g. Gitlab, Azure DevOps, Github
cinchy.terraform: This repo contains all Terraform configurations.
cinchy.argocd: This repo contains all ArgoCD configurations.
cinchy.kubernetes: This repo contains cluster and application component deployment manifests.
cinchy.devops.automations: This repo contains the single configuration file and binary utility that maintains the contents of the above three repositories.
You must have a service account with read/write permissions to the git repos created above.
You will need to access and download the Cinchy artifacts prior to deployment.
To access the Kubernetes artifacts:
Access the Cinchy Releases table. Please contact Cinchy Support if you do not have the access credentials necessary.
Navigate to the release you wish to deploy
Download the .zip file(s) listed under the Kubernetes Artifacts column
Check the contents of each of the directories into their respective repository.
Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.
The following prerequisites are required if you are deploying Cinchy v5 on Azure.
Terraform Backend Requirements:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
The Azure CLI should be installed on the machine where the deployment will be run. It must be set to the correct profile/login
The deployment template has the option of either leveraging an existing resource group or creating a new one:
If an existing resource group is preferred, the prerequisite requires the following be provisioned in advance of the deployment:
The resource group.
A VNet within the resource group.
A single subnet. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /22 to provide a range of 1024 IPs.
If a new resource group is preferred, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must provide sufficient availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
The following prerequisites are required if you are deploying Cinchy v5 on AWS.
Terraform Backend Requirements:
An S3 bucket that will contain the terraform state.
The AWS CLI should be installed on the machine where the deployment will be run. It must be set to the correct profile/login
The template has the option of either leveraging an existing VPC or creating a new one:
If an existing VPC is preferred, the prerequisite requires the following be provisioned in advance of the deployment:
The VPC. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /21 to provide a range of 2048 IPs.
3 Subnets (one per AZ). It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /23 to provide a range of 512 IPs.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If a new VPC is preferred, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must provide sufficient availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
The SSL certificate must be imported into AWS Certificate Manager (or a new certificate can be requested via AWS Certificate Manager).
The following is a list of steps that are required prior to deploying Cinchy v5 on IIS
You will need to access and download the Cinchy binary prior to deployment:
Access the Cinchy Releases table. Please contact Cinchy Support if you do not have the access credentials necessary.
Navigate to the release you wish to deploy
Download the files listed under the Component Artifacts column. This should include zip files for:
Cinchy Platform
Cinchy Maintenance CLI and CLI (optional)
Cinchy Meta-Forms (optional)
Please contact Cinchy Support if you are encountering issues accessing the table or the artifacts.
An instance of SQL Server 2017+
A Windows Server 2012+ machine with IIS 7.5+ installed
Install the .NET 6.0 Hosting Bundle. Specifically, install the ASP.NET Core Runtime & Hosting Bundle.
Cinchy Platform 5.4+ uses .NET Core 6.0.
Versions 4.18.0+ used .NET Core 3.1, and previous versions used .NET Core 2.1.
Minimum Web Server Hardware Recommendations
2 x 2 GHz Processor
8 GB RAM
4 GB Hard Disk storage available
Minimum Database Server Hardware Recommendations
4 x 2 GHz Processor
12 GB RAM
Hard disk storage dependent upon use case. Note that Cinchy maintains historical versions of data and performs soft deletes which will add to the overall storage requirements.
Clustering considerations are applicable to both the Web and Database tiers in the Cinchy deployment architecture.
The web tier can be clustered by introducing a load balancer and scaling web server instances horizontally. Each node within Cinchy uses an in-memory cache of metadata information, and expiration of cached elements is triggered upon data changes that would impact that metadata. Data changes processed by one node wouldn't immediately be known to other nodes without establishing connectivity between them. For this reason the nodes must be able to communicate over either http or https through an IP based binding on the IIS server that allows cache expiration messages to be broadcast. The port used for this communication is different from the standard port that is used by the application when a domain name is involved. Often for customers this means that a firewall port must be opened on these servers.
The database tier relies on standard MS SQL Server failover clustering capabilities.
The web application is responsible for all interactions with Cinchy be it through the UI or connectivity from an application. It interprets/routes incoming requests, handles serialization/deserialization of data, data validation, enforcement of access controls, and the query engine to transform Cinchy queries into the physical representation for the database. The memory footprint for the application is fairly low as caching is limited to metadata, but the CPU utilization grows with request volume and complexity (e.g. insert / update operations are more complex than select operations). As the user population grows or request volume increases from batch processes / upstream system API calls there may be a need to add nodes.
The database tier relies on a persistence platform that scales vertically. As the user population grows and request volume increases from batch processes / upstream system API calls the system may require additional CPU / Memory. Starting off in an environment that allows flexibility (e.g. a VM) would be advised until the real world load can be profiled and a configuration established. On the storage side, Cinchy maintains historical versions of records when changes are made and performs soft deletes of data which will add to the storage requirements. The volume of updates occurring to records should be considered when estimating the storage size.
Outside of log files there is no other data generated & stored on the web servers by the application, which means backups are generally centered around the database. Since the underlying persistence platform is a MS SQL Server, this relies on standard procedures for this platform.
This document outlines the steps for configuring Active Directory Federation Services (ADFS) to facilitate Single Sign-On (SSO) with Cinchy.
Before starting with the ADFS configuration, make sure to have the following information:
Information Required | Description | Reference |
---|---|---|
Having these details readily available will streamline the ADFS configuration process.
Navigate to AD FS Management on your ADFS server.
Right-click on Relying Party Trusts and choose Add Relying Party Trust to open the Add Relying Party Trust Wizard.
In the wizard, select Claims Aware > Start > Select Data Source.
Select Enter Data About the Relying Party Manually > Next.
Fill in a Display Name under Specify Display Name.
Skip certificate configuration in Configure Certificates.
In Configure URL, select Enable support for the SAML 2.0 WebSSO protocol.
Input your login URL as follows:
Under Configure Identifiers, add an Identifier and press Next to complete the setup.
Right-click on the newly created Relying Party Trust (located by its Display Name) and select Edit Claim Issuance Policy.
Select Add Rule > Claim Rule > Send LDAP Attributes as Claims.
Input a Claim Rule Name.
In the Attribute Store, select Active Directory. Map the LDAP attributes to the corresponding outgoing claim types as shown in the table below:
Select Finish.
Select Edit Rule > View Rule Language. Copy the Claim URLs for later use when configuring your Cinchy appsettings.json. It should look like the following:
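The copied rule language isn't reproduced in this export. A representative "Send LDAP Attributes as Claims" rule, using standard ADFS claim type URLs, is sketched below; your generated rule will differ in its exact attributes and order:

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(
      store = "Active Directory",
      types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
               "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
               "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname",
               "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
               "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"),
      query = ";userPrincipalName,givenName,sn,mail,memberOf;{0}",
      param = c.Value);
```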
Press OK to confirm and save.
Note: Please ensure that the configurations below are case-sensitive and align exactly with those in your SAML IdP setup.
Retrieve and save the Federation Metadata XML file from the following location: https://{your.ADFS.server}/FederationMetadata/2007-06/FederationMetadata.xml.
If needed, use IIS Manager to establish an HTTPS connection for the Cinchy website.
Also establish an HTTPS connection for the SSO site. Make sure the port number aligns with the one specified in the login URL.
You will need to refer to the Rule Language URLs you copied from the ADFS Configuration. Replace the placeholders below with your own URLs:
Insert the following lines within the <appSettings> section of your web.config file, making sure to replace {your.cinchy.url} and {your.cinchysso.url} with your Cinchy and Cinchy SSO values:

Attribute | Value or Description
---|---
CinchyLoginRedirectUri | URL of the user login redirect: https://{your.cinchysso.url}/Account/LoginRedirect
CinchyPostLogoutRedirectUri | URL of the user post-logout redirect: https://{your.cinchy.url}
CertificatePath | Path to the Cinchy SSO certificate: {Path/to/CinchySSO}\cinchyidentitysrv.pfx
SAMLClientEntityId | Relying Party Identifier from the earlier-configured Relying Party Trust
SAMLIDPEntityId | Entity ID for the SAML IdP, found in FederationMetadata.xml: http://{your.AD.server}/adfs/services/trust
SAMLMetadataXmlPath | Location of the FederationMetadata.xml file saved during initial setup
SAMLSSOServiceURL | URL path of the Domain Controller's in-service endpoints: https://{your.AD.server}/Saml2/Acs
AcsURLModule | /Saml2
MaxRequestHeadersTotalSize | Maximum header size in bytes; adjustable if the default is insufficient
MaxRequestBufferSize | Should be equal to or larger than MaxRequestHeadersTotalSize
MaxRequestBodySize | Maximum request body size in bytes (use -1 for the default; usually no need to change)
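As a sketch of the resulting entries, assuming the keys from the table above (all values are placeholders to replace with your own):

```xml
<appSettings>
  <add key="CinchyLoginRedirectUri" value="https://{your.cinchysso.url}/Account/LoginRedirect" />
  <add key="CinchyPostLogoutRedirectUri" value="https://{your.cinchy.url}" />
  <add key="CertificatePath" value="{Path/to/CinchySSO}\cinchyidentitysrv.pfx" />
  <add key="SAMLClientEntityId" value="{your-relying-party-identifier}" />
  <add key="SAMLIDPEntityId" value="http://{your.AD.server}/adfs/services/trust" />
  <add key="SAMLMetadataXmlPath" value="{Path/to/FederationMetadata.xml}" />
  <add key="SAMLSSOServiceURL" value="https://{your.AD.server}/Saml2/Acs" />
  <add key="AcsURLModule" value="/Saml2" />
</appSettings>
```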
This page provides an overview for the deployment architecture of Cinchy v5.
When choosing to deploy Cinchy version 5, you must decide whether to deploy via Kubernetes or on a VM (IIS).
Kubernetes is an open-source system that manages and automates the full lifecycle of container-based applications. You now have the ability to deploy Cinchy v5 on Kubernetes, and with it comes a myriad of features that help to simplify your deployment and enhance your scaling. Kubernetes can maximize your container capacity and easily scale up/down with your current operations.
Grafana, Opensearch, Opensearch Dashboard: Working together, these three applications provide a visual logging dashboard for all of the information coming in from your database pods. This streamlines your search for information by putting the control into your hands and compiling your logs in one easy-to-access place: you can now easily write a query against all of your logs, in all of your environments. You will have access to a default configuration out of the box, but you can also customize your dashboards.
Prometheus: With the Kubernetes addition, you now have access to Prometheus for your metrics dashboard. Prometheus records real-time metrics in a time series database used for event monitoring and alerting. You can then create custom dashboards that present your data in clear visuals, making it simple to report on your metrics, and even set up push alerts based on your custom needs.
You also have the option to run Cinchy on Microsoft IIS, which was the traditional deployment method prior to Cinchy v5. Internet Information Services (IIS) for Windows Server is a flexible, secure and manageable Web server for hosting anything on the Web.
We recommend using Kubernetes to deploy Cinchy v5, because of the robust features that you can leverage, such as improved logging and metrics. Using Kubernetes allows for a greater ability to scale your Cinchy instances as well as the ability to lower your costs by using PostgreSQL.
Before deploying Cinchy v5, you must select which database you want to use.
The following list outlines which databases we support for Kubernetes Deployments.
For IIS Deployments please review the architecture requirements here.
Microsoft SQL Server is a relational database management system. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications—which may run either on the same computer or on another computer across a network.
Microsoft Azure SQL Database is a managed cloud database provided as part of Microsoft Azure. It runs on a cloud computing platform, and access to it is provided as a service. Managed database services take care of scalability, backup, and high availability of the database.
SQL Managed Instance is a managed, always up-to-date SQL instance in the cloud that combines broad SQL Server engine compatibility with the benefits of a fully managed PaaS.
Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora is part of the managed database service Amazon Relational Database Service (Amazon RDS). Amazon RDS is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.
PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance. PostgreSQL comes with features that help developers build applications and help administrators protect data integrity and build fault-tolerant environments, and it helps you manage your data no matter how big or small the dataset.
Amazon RDS makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud. With Amazon RDS, you can deploy scalable PostgreSQL deployments with cost-efficient and resizable hardware capacity. Amazon RDS manages complex and time-consuming administrative tasks such as PostgreSQL software installation and upgrades; storage management; replication for high availability and read throughput; and backups for disaster recovery. Amazon RDS for PostgreSQL gives you access to the capabilities of the familiar PostgreSQL database engine.
Azure Database for PostgreSQL is a fully managed and intelligent database service. Enjoy high availability with a service-level agreement (SLA) of up to 99.99 percent, AI-powered performance optimization, and advanced security. As a fully managed database, it automates maintenance, patching, and updates. Provision in minutes and independently scale compute or storage in seconds.
Prior to deploying Cinchy version 5, you need to define your sizing requirements.
Cluster sizing recommendations vary and are dependent on a myriad of deployment factors. We have provided the following very general sizing recommendations, but encourage you to explore more personalized options.
CPU: 8 Cores
Memory: 32GB Ram
Number of Servers: 3
AWS: m5.2xlarge
Azure: D8 v3
For sizing recommendations and prerequisites concerning an IIS deployment, please review the documentation found here.
If you are choosing to deploy Cinchy v5 on IIS, then you need to ensure that your VM disks have enough application storage to run your clusters.
Cinchy supports both Amazon S3 and Azure Blob Storage.
If you are using Terraform for your Kubernetes deployment, you will need to manually set up a new S3 compatible bucket to store your state file. You will also need a bucket for Connections, to store error files created during data syncs.
You will create your two S3 compatible buckets using either Amazon or Azure. Ensure that you use the following convention when naming your buckets so that the automation script runs correctly: <org>-<component>-<cluster>. These bucket names will be referenced in your configuration files when you deploy Cinchy on Kubernetes.
Example Terraform Bucket: cinchy-terraform-state
Example Connection Bucket: cinchy-connections-cinchy-nonprod
There are no sizing definitions, as S3 provides unlimited scalability and charges only for what you use/how much you store on it.
There may be times when you want to temporarily disable your Kubernetes pods in order to perform maintenance or certain upgrades. You can do so through the following steps:
1. Access your ArgoCD.
2. Navigate to the application directory for the namespace you wish to disable, in this case development-cinchy (Image 1). You should see your cluster component applications.
3. Click on the main application (i.e. development-cinchy) (Image 2).
4. Navigate to Summary > Sync Policy > Automated. Click on Disable Auto-Sync > OK (Image 3).
5. For each of the cluster applications that you wish to disable, click on "..." > Delete (Image 5).
6. Your apps should all appear as "out of sync" (Image 6).
1. To re-enable your applications, return to the application directory for your disabled namespace (Image 7).
2. Click on the main application (i.e. development-cinchy) (Image 8).
3. Navigate to Summary > Sync Policy. Click on Enable Auto-Sync > OK (Image 9).
This page details how to change your File Storage configuration in Cinchy v5 to S3, Azure Blob, or Local.
In Cinchy v5.2, we implemented the ability to free up database space by using S3 compatible or Azure Blob Storage for file storage. This configuration is set in the deployment.json of a Kubernetes installation, or the appsettings.json of an IIS installation.
If you are using a Kubernetes deployment, you will change your file storage config in the deployment.json.

1. Open your deployment.json file.
2. Navigate to the object storage section, where you will see either S3 or Azure Blob storage, depending on your deployment structure.
3. To use Blob Storage or S3, update each line with your own parameters.
4. To use Local storage, leave each line blank with the exception of Connections_Storage_Type, which should be set to Local:
5. Run the deployment script by using the following command in the root directory of your devops.automations repo (a hedged example follows this list):
6. Commit and push your changes.
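The command itself isn't reproduced in this export. Assuming the standard Cinchy devops.automations tooling, the invocation typically looks like the following; the DLL name and the deployment.json file name are assumptions to verify against your repo:

```powershell
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```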
If you are using an IIS deployment, you will change your file storage config in the Cinchy Web appsettings file.

1. Open the Cinchy Web appsettings.json file.
2. Locate the StorageType section of the file and set it to either "Local", "AzureBlobStorage" or "S3".
3. If you selected "AzureBlobStorage", fill out the following lines in the same file:
4. If you selected "S3", fill out the following lines in the same file:
This page details the optional steps that you can take to use self-signed SSL Certificates in a Kubernetes Deployment of Cinchy.
This process needs to be followed after running the devops.automations script during your initial deployment, and again each additional time that you run the script (for example, when updating your Cinchy platform), since running it will wipe out all of the custom configuration you set up to use a self-signed certificate.
1. Generate the self-signed certificate by executing the following commands in any folder:
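The original command block isn't included in this export. A typical OpenSSL sequence for generating a self-signed root CA is sketched below; the file names are assumptions, chosen to match the rootCA.crt referenced in the later steps:

```bash
# Generate a private key for the root CA
openssl genrsa -out rootCA.key 4096

# Create a self-signed root certificate (validity period is an assumption)
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 3650 -out rootCA.crt
```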
2. Create a yaml file located at cinchy.kubernetes/platform_components/base/self-signed-ssl-root-ca.yaml.
3. Add the following to the yaml file:
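The yaml body isn't included in this export. A representative ConfigMap for a root CA is sketched below; the metadata name is an assumption, though the rootCA.crt key matches the file referenced in the note under step 7:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: self-signed-ssl-root-ca
data:
  rootCA.crt: |
    -----BEGIN CERTIFICATE-----
    (your certificate contents)
    -----END CERTIFICATE-----
```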
4. Add the self signed root CA cert file to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base folder.
5. Add the yaml code snippet to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base/kustomization.yaml file, changing the files key value below to match your root CA cert file name:
6. Add the following line to the cinchy.kubernetes/platform_components/base/kustomization.yaml file:
7. Add the below Deployment patchesJson6902 to each of your cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/ENV_NAME/PLATFORM_COMPONENT_NAME/kustomization.yaml files, except "base".
Ensure that the rootCA.crt file name is matched with ConfigMap data, configMapGenerator files, and the patch subpath.
8. Once the changes are deployed, verify that the root CA cert is available on the pod under /etc/ssl/certs with the below command, inputting your own POD_NAME and NAMESPACE where noted:
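The command isn't reproduced in this export; a minimal sketch of such a check, with POD_NAME and NAMESPACE as placeholders:

```bash
# List the pod's trusted certs and look for the mounted root CA
kubectl exec -it POD_NAME -n NAMESPACE -- ls /etc/ssl/certs | grep -i rootCA
```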
This guide serves as a walkthrough of how to deploy v5 on IIS.
Cinchy version 5 on IIS comes bundled with common components such as Connections, Meta Forms, and the Event Listener. This page details the configuration and deployment instructions for the Cinchy Platform, including SSO. Click on the links below to be taken to the appropriate pages for other components:
Ensure that you review the Deployment Prerequisites prior to performing an IIS Deployment, including downloading all necessary artifacts from the artifacts table.
Please contact Cinchy Support if you do not have the credentials required to access the artifacts table.
Step 1 of this guide refers to the SQL Server. Step 2 onwards refers to the Web Server.
On your SQL Server 2017+ instance, create a new database named Cinchy (or any other name you prefer).
If you choose an alternate name, in the remaining instructions wherever the database name is referenced, replace the word Cinchy with the name you chose.
A single user account with db_owner privileges is required for the Cinchy application to connect to the database. If you choose to use Windows Authentication instead of SQL Server Authentication, the account that is granted access must be the same account under which the IIS Application Pool runs.
1. On the Windows Server machine, launch an instance of PowerShell as Administrator.
2. Run the below commands to create the application pool and set its properties (a hedged sketch follows the note below).
3. If you chose to use Windows Authentication in the database, or want to run the application under a different user account, execute the below commands to change the application pool identity.
You may use an alternate application pool name (i.e. instead of Cinchy) if you prefer.
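The original command blocks for steps 2 and 3 aren't included in this export. A minimal PowerShell sketch of what they typically involve is below; the pool name CinchyWeb, the account DOMAIN\svc_cinchy, and the password placeholder are assumptions, not values prescribed by this guide:

```powershell
Import-Module WebAdministration

# Step 2: create the application pool; .NET Core apps run with no managed runtime
New-WebAppPool -Name "CinchyWeb"
Set-ItemProperty "IIS:\AppPools\CinchyWeb" -Name managedRuntimeVersion -Value ""

# Step 3 (optional): run the pool under a specific account for Windows Authentication
Set-ItemProperty "IIS:\AppPools\CinchyWeb" -Name processModel.identityType -Value SpecificUser
Set-ItemProperty "IIS:\AppPools\CinchyWeb" -Name processModel.userName -Value "DOMAIN\svc_cinchy"
Set-ItemProperty "IIS:\AppPools\CinchyWeb" -Name processModel.password -Value "<password>"
```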
Run the below commands in the Administrator instance of PowerShell to create directories for the application logs. Ensure your application pool account has write access to these directories.
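The directory-creation commands themselves aren't reproduced here; a minimal sketch, where the C:\CinchyLogs paths are assumptions that should match whatever log paths you configure for Serilog later in this guide:

```powershell
# Create log directories for each application (paths are illustrative placeholders)
New-Item -ItemType Directory -Force -Path "C:\CinchyLogs\Cinchy"
New-Item -ItemType Directory -Force -Path "C:\CinchyLogs\CinchySSO"
```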
Unzip the "Cinchy vX.X" application package (for the Cinchy Platform) that you downloaded into your C drive. This will create 2 directories, C:\Cinchy and C:\CinchySSO. Ensure your application pool accounts have read and execute access to these directories (default accounts are IIS AppPool\CinchyWeb and IIS AppPool\CinchySSO).

Open the C:\CinchySSO\appsettings.json file in a text editor and update the values below.
1. Under the AppSettings section, update the values outlined in the table below.
2. Wherever you see <base url> in the value, replace this with the actual protocol (i.e. http or https) and the domain name (or IP address) you plan to use. For example, if you're using https with the domain app.cinchy.co, then <base url> should be replaced with https://app.cinchy.co.
Key | Value
---|---
CinchyUri | <base url>/Cinchy
CertificatePath | Adjust the certificate path to point to the CinchySSO v5 folder. Ex: C:\CinchySSO\cinchyidentitysrv.pfx
StsPublicOriginUri | The Base URL used by the .well-known discovery. Ex: <base url>/cinchysso
StsPrivateOriginUri | The Private Base URL used by the .well-known discovery. Ex: <base url>/cinchysso
CinchyAccessTokenLifetime | The duration of the Cinchy Access Token. This determines how long a user can be inactive until they need to re-enter their credentials. In Cinchy v5.4+ it defaults to "7.00:00:00", i.e. 7 days.
DB Type | Either "PostgreSQL" or "TSQL"
SAMLClientEntityId | Client Entity Id
SAMLIDPEntityId | Identity Provider Entity Id
SAMLMetadataXmlPath | Identity Provider metadata XML file path
SAMLSSOServiceURL | Service endpoint for SAML authentication
AcsURLModule | This parameter needs to be configured as per your SAML ACS URL. For example, if your ACS URL looks like "https://{domain}/CinchySSO/identity/AuthServices/Acs", the value of this parameter should be "/identity/AuthServices"

Cinchy v4.18.0+ includes session expiration based on the CinchyAccessTokenLifetime value. With the default of "7.00:00:00", if you have been inactive in Cinchy for 7 days, your session will expire and you will need to log in again.
In order for the application to connect to the database, the "SqlServer" value needs to be set.
Find and update the value under the "ConnectionStrings" section:

If you are deploying Cinchy v5.4+ on an SQL Server Database, you will need to make an addition to your connectionString: adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.
Example:
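The original example isn't included in this export. A representative SQL Server connection string for this setting is sketched below; the server name, database name, and credentials are placeholders, not values prescribed by this guide:

```json
"ConnectionStrings": {
  "SqlServer": "Server=MyServer;Database=Cinchy;User ID=cinchy;Password=<password>;Trusted_Connection=False;Connection Timeout=30;Min Pool Size=10;TrustServerCertificate=True"
}
```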
Under the "ExternalIdentityClaimSection" section you'll see the following values.
These values are used for SAML SSO. If you are not using SSO, keep these values blank.

Key | Value
---|---
ExternalIdentityClaim > FirstName > ExternalClaimName |
ExternalIdentityClaim > LastName > ExternalClaimName |
ExternalIdentityClaim > Email > ExternalClaimName |
ExternalIdentityClaim > MemberOf > ExternalClaimName |
There is a "serilog" property that allows you to configure where it logs to. In the below code, update the following:
"Name" must be set to "File" so it writes to a physical file on the disk.
Set "path" to the file path to where you want it to log.
This configuration makes a log every day (defined by the "rollingInterval" value) and keeps your file count to 30 (defined by the "retainedFileCountLimit" value).
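The snippet referenced above isn't reproduced in this export; a representative Serilog file-sink configuration is sketched below, where the log path is an assumption that should point at the log directory you created earlier:

```json
"Serilog": {
  "MinimumLevel": { "Default": "Information" },
  "WriteTo": [
    {
      "Name": "File",
      "Args": {
        "path": "C:\\CinchyLogs\\CinchySSO\\log.txt",
        "rollingInterval": "Day",
        "retainedFileCountLimit": 30
      }
    }
  ]
}
```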
Navigate to C:\Cinchy
Navigate to the appsettings.json file and update the following properties:

Key | Value
---|---
StsPrivateAuthorityUri | This should match your private Cinchy SSO URL. Ex: <base url>/CinchySSO
StsPublicAuthorityUri | This should match your public Cinchy SSO URL. Ex: <base url>/CinchySSO
CinchyPrivateUri | This should match your private Cinchy URL. Ex: <base url>/Cinchy
CinchyPublicUri | This should match your public Cinchy URL. Ex: <base url>/Cinchy
UseHttps | This is "true" by default.
DB Type | Either "PostgreSQL" or "TSQL"
MaxRequestBodySize | Introduced in Cinchy v5.4, this configurable property allows you to set your own file upload size for the Files API, should you wish. It defaults to 1 GB.
In order for the application to connect to the database, the "SqlServer" value needs to be set.
Find and update the value under the "ConnectionStrings" section:

If you are deploying Cinchy v5.4+ on an SQL Server Database, you will need to make an addition to your connectionString: adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.
Example:
There is a "serilog" property that allows you to configure where it logs to. In the below code, update the following:
"Name" must be set to "File" so it writes to a physical file on the disk.
Set "path" to the file path to where you want it to log.
This configuration makes a log every day (defined by the "rollingInterval" value) and keeps your file count to 30 (defined by the "retainedFileCountLimit" value).
Open an administrator instance of PowerShell
Execute the below commands to create the IIS applications and enable anonymous authentication. (This is required in order to allow authentication to be handled by the application)
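The command block isn't included in this export. A minimal PowerShell sketch of what this step typically involves is below; the site name "Default Web Site" and the pool names CinchyWeb/CinchySSO are assumptions to adjust to your environment:

```powershell
Import-Module WebAdministration

# Create the two IIS applications, pointing at the directories created earlier
New-WebApplication -Site "Default Web Site" -Name "Cinchy" -PhysicalPath "C:\Cinchy" -ApplicationPool "CinchyWeb"
New-WebApplication -Site "Default Web Site" -Name "CinchySSO" -PhysicalPath "C:\CinchySSO" -ApplicationPool "CinchySSO"

# Enable anonymous authentication so that authentication is handled by the application
Set-WebConfigurationProperty -Filter "/system.webServer/security/authentication/anonymousAuthentication" `
    -Name enabled -Value true -PSPath "IIS:\" -Location "Default Web Site/Cinchy"
Set-WebConfigurationProperty -Filter "/system.webServer/security/authentication/anonymousAuthentication" `
    -Name enabled -Value true -PSPath "IIS:\" -Location "Default Web Site/CinchySSO"
```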
To enable HTTPS, the server certificate must be loaded and the standard IIS configuration completed at the Web Site level to add the binding.
Access the <base url>/Cinchy (e.g. http://app.cinchy.co/Cinchy) through Google Chrome.
Once the login screen appears, enter the credentials:
The default username is admin and the password is cinchy.
You will be prompted to change your password the first time you log in.
Navigate to the following sub-pages to deploy the following bundled v5 components:
The following table shows example path-based and subdomain-based routing conventions for the cluster applications:

Application | Path Based Routing | Subdomain Based Routing
---|---|---
Cinchy 1 (Dev) | domain.com/dev | dev.mydomain.com
Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com
Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com
ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd
Grafana | domain.com/grafana | cluster.mydomain.com/grafana
Opensearch | domain.com/dashboard | cluster.mydomain.com/dashboard
To avoid users having to access the application at a URL that contains /Cinchy, you can use a downloadable IIS extension called URL Rewrite to remap requests hitting the <base url> to <base url>/Cinchy. The extension is available for download from Microsoft.
The following table describes the columns used to define a group:

Column | Description
---|---
Name | The Group Name. This must be unique across all groups within the system.
Users | The Users which are members of the group.
User Groups | The Groups which are members of the group.
Owners | Users who are able to administer memberships to the group. By default, Owners are also members of the group and thus do not need to also be added into the Users category.
Owner Groups | Groups whose members are able to administer the membership of the group. By default, members of Owner Groups are also members of the group itself, and thus do not need to also be added into the User or User Groups category.
Group Type | This will be either "Cinchy Group" or "AD Group". "Cinchy Group": the membership is maintained directly in Cinchy. "AD Group": a sync process will be leveraged to maintain the membership and overwrite the Users.
The following table describes the LDAPDataSource XML configuration used for the AD Group sync:

XML Tag | Attribute | Content
---|---|---
LDAPDataSource | ldapserver | The LDAP server URL (e.g. LDAP:\\activedirectoryserver.domain.com)
LDAPDataSource | username | The encrypted username to authenticate with the AD server (generated using the CLI's encrypt command: dotnet Cinchy.CLI.dll encrypt -t "Domain/username")
LDAPDataSource | password | The encrypted password to authenticate with the AD server (generated using the CLI's encrypt command: dotnet Cinchy.CLI.dll encrypt -t "password")
LDAPDataSource -> Filter | Domain Name | The domain of the Saved Query that retrieves AD Groups
LDAPDataSource -> Filter | Query Name | The name of the Saved Query that retrieves AD Groups