Kubernetes

This page details the installation instructions for deploying Cinchy v5 on Kubernetes


Introduction

This page details the instructions for deploying Cinchy v5 on Kubernetes. We recommend, and have documented below, doing this via Terraform and ArgoCD. This setup uses a utility to centralize and streamline your configurations.

The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.

Deployment prerequisites

To install Cinchy v5 on Kubernetes, you must meet the requirements below. Some requirements depend on whether you deploy on Azure or on AWS.

All platforms

These prerequisites apply whether you are installing on Azure or on AWS.

  • You must create the following four Git repositories. You can use any source control platform that supports Git, such as GitLab, Azure DevOps, and GitHub.

    • cinchy.terraform: Contains all Terraform configurations.

    • cinchy.argocd: Contains all ArgoCD configurations.

    • cinchy.kubernetes: Contains cluster and application component deployment manifests.

    • cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.

  • Download the artifacts for the four Git repositories. Check the contents of each of the directories into the respective repository.

  • You must have a service account with read/write permissions to the git repositories created above.

  • Install the following tools on the deployment machine:

    • Terraform

      • For an introduction to Terraform + AWS, see this Get started Guide.

      • For an introduction to Terraform + Azure, see this Get started Guide.

    • kubectl (v1.23.0+)

    • .NET Core 3.1.x

    • Bash (Git Bash may be used on Windows)

  • If you are using Cinchy Docker images, pull them.

Starting in Cinchy v5.4, you have the option of Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags allows a Kubernetes deployment to connect to a DB2 data source; select that option if you plan on leveraging a DB2 data sync.

  • When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:

    • "5.x.x" - Alpine

    • "5.x.x-debian" - Debian

  • You will need a single domain for accessing ArgoCD, Grafana, OpenSearch Dashboard, and any deployed Cinchy instances. You have two routing options for accessing these applications: path-based or subdomain-based. See below for an example with multiple Cinchy instances:

| Application | Path Based Routing | Subdomain Based Routing |
| --- | --- | --- |
| Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
| Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
| Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
| ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
| Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
| OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |

  • You will need an SSL certificate for the cluster. This should be a wildcard certificate if you will use subdomain based routing. You can also use Self-Signed SSL.

Azure requirements

If you are deploying Cinchy v5 on Azure, you require the following:

Terraform requirements

  • A resource group that will contain the Azure Blob Storage with the terraform state.

  • A storage account and container (Azure Blob Storage) for persisting terraform state.

  • Install the Azure CLI on the deployment machine. It must be set to the correct profile/login.
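If you don't already have the storage resources for the Terraform state, the following Azure CLI commands are one way to create them. This is a minimal sketch: the resource group, storage account, and container names are placeholders, not values defined by this guide.

az group create --name <resource_group> --location <region>
az storage account create --name <storage_account> --resource-group <resource_group> --sku Standard_LRS
az storage container create --name <container> --account-name <storage_account>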

The deployment template has two options available:

  • Use an existing resource group.

  • Create a new one.

Existing resource group

If you prefer an existing resource group, you must provision the following before the deployment:

  • The resource group.

  • A virtual network (VNet) within the resource group.

  • A single subnet. It's important that the range is large enough for all processes executing within the cluster, for example a CIDR ending with /22 to provide a range of 1024 addresses.

New resource group

  • If you prefer a new resource group, all resources will be automatically provisioned.

  • The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24). See the example command after this list to check your current usage.

  • An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
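If you want to verify the available vCPU quota before deploying, the Azure CLI can list current usage against limits for a region. This is an illustrative command, not a step mandated by this guide; the region is a placeholder.

az vm list-usage --location <region> --output table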

Kubernetes AWS requirements

If you are deploying Cinchy v5 on AWS, you require the following:

Terraform requirements

  • An S3 bucket that will contain the terraform state.

  • Install the AWS CLI on the deployment machine. It must be set to the correct profile/login.
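If the Terraform state bucket doesn't exist yet, it can be created with the AWS CLI. This is a sketch: the bucket name is a placeholder, and the --create-bucket-configuration flag should be omitted when the region is us-east-1.

aws s3api create-bucket --bucket <bucket_name> --region <region> --create-bucket-configuration LocationConstraint=<region>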

The template has two options available:

  • Use an existing VPC.

  • Create a new one.

Existing VPC

  • If you prefer an existing VPC, you must provision the following before the deployment:

    • The VPC. It's important that the range is large enough for all processes executing within the cluster, for example a CIDR ending with /21 to provide a range of 2048 IP addresses.

    • 3 Subnets (one per AZ). It's important that the range is large enough for all processes executing within the cluster, for example a CIDR ending with /23 to provide a range of 512 IP addresses.

    • If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.

New VPC

  • If you prefer a new VPC, all resources will be automatically provisioned.

  • The quota limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).

  • An IAM user account to connect to AWS, which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).

  • You must import the SSL certificate into AWS Certificate Manager, or a new certificate can be requested via AWS Certificate Manager. If you are importing it, you will need the PEM-encoded certificate body and private key, which you can get from your chosen domain provider (GoDaddy, Google, etc.).
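As an illustration, the certificate import and the vCPU quota check can both be done from the AWS CLI. The file names and region are placeholders, and L-1216C47A is the quota code for Running On-Demand Standard instances at the time of writing.

aws acm import-certificate --certificate fileb://certificate.pem --private-key fileb://private-key.pem --certificate-chain fileb://certificate-chain.pem --region <region>
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A --region <region>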

Tips for Success:

  • Ensure you have the same region configuration across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.

Initial configuration

The following steps detail the instructions for setting up the initial configurations.

Configure the deployment.json file

  1. Navigate to your cinchy.devops.automations repository where you will see an aws.json and azure.json.

  2. Depending on the platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.

  3. This file will contain the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has comments in-line describing its purpose along with instructions on how to populate it.

  4. Follow the guidance within the file to configure the properties.

  5. Commit and push your changes.

Tips for Success:

  • You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.

  • The deployment.json will ask for your repository username and password, but ArgoCD may have errors when retrieving your credentials in certain situations (for example, if using GitHub). To verify that your credentials are working, navigate to the ArgoCD Settings after you have deployed Argo in this guide. To avoid errors, Cinchy recommends using a Personal Access Token instead. [Find more information here.](https://argo-cd.readthedocs.io/en/release-1.8/user-guide/private-repositories/)

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
If the file created in the Configure the deployment.json file step has a name other than deployment.json, replace the reference in the command with the correct name of the file.

  2. The console output should end with the following message:

Completed successfully

Terraform deployment

The following steps detail how to deploy Terraform.

Cinchy.terraform repository structure - AWS

If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.

Cinchy.terraform repository structure - Azure

If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.

To perform terraform operations, the cluster directory must be the working directory during execution.

Cloud provider authentication

  1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.

  2. If you are using AWS, run the following commands to authenticate the session:

export AWS_DEFAULT_REGION=REGION
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY
  3. For Azure, run the following command and follow the on-screen instructions to authenticate the session:

az login

Deploy the infrastructure

  1. Execute the following command to create the cluster:

bash create.sh
  2. Type yes when prompted to apply the terraform changes.

The resource creation process can take about 15 to 20 minutes. At the end of the execution, there will be a section with the following header:

Output variables

If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password

If deploying on Azure, this section will contain a single value: Azure SQL Database Password

These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.
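If you need to see these output values again later, Terraform can re-display them from the cluster directory. Treat this as a convenience command rather than a required step.

terraform output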

Retrieve the SSH keys

The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.

SSH keys should be saved for future reference if a connection needs to be established directly to a worker node in the Kubernetes cluster.

AWS SSH keys

  1. The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:

terraform output -raw private_key
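If you want to keep the key for later use, one option is to write it to a file and restrict its permissions; the file name here is arbitrary.

terraform output -raw private_key > cluster_ssh_key.pem
chmod 600 cluster_ssh_key.pem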

Azure SSH keys

  1. The SSH key is output to the directory containing the cluster terraform configurations.

Update the deployment.json

The following section pertains to updating the Deployment.json file.

Update the database connection string

  1. Navigate to the cinchy_instance_configs section of the deployment.json (created in the Configure the deployment.json file step). Each object within it represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.

For Azure deployments, the host name isn't available as part of the terraform output and instead must be sourced from the Azure Portal.

Create the IAM user for S3 (AWS)

The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.

To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.

The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json
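If you need to generate programmatic credentials for that IAM user, the AWS CLI can create an access key pair as shown below; granting the user read/write permissions on the S3 bucket is a separate step not shown here, and the user name is a placeholder.

aws iam create-access-key --user-name <iam_user_name>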

Update blob storage connection details (Azure)

  1. Within the deployment.json, the azure_blob_storage_conn_str must be set.

  2. The in-line comments outline the commands required to source this value from the Azure CLI.
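The in-line comments in the deployment.json are the authoritative instructions, but sourcing a storage account connection string from the Azure CLI typically looks like the following; the storage account and resource group names are placeholders for your environment.

az storage account show-connection-string --name <storage_account> --resource-group <resource_group> --output tsv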

Enable Azure Key Vault secrets

If you have set the key_vault_secrets_provider_enabled=true value in the azure.json, then the secrets files below would have been created during the execution of the cinchy.devops.automations utility.

You will need to add the following secrets to your Azure Key Vault:

  • worker-secret-appsettings-<cinchy_instance_name>

  • web-secret-appsettings-<cinchy_instance_name>

  • maintenance-cli-secret-appsettings-<cinchy_instance_name>

  • idp-secret-appsettings-<cinchy_instance_name>

  • forms-secret-config-<cinchy_instance_name>

  • event-listener-secret-appsettings-<cinchy_instance_name>

  • connections-secret-config-<cinchy_instance_name>

  • connections-secret-appsettings-<cinchy_instance_name>

To create your new secrets:

  1. Navigate to your key vault in the Azure portal.

  2. Open your Key Vault Settings and select Secrets.

  3. Select Generate/Import.

  4. On the Create a Secret screen, choose the following values:

    1. Upload options: Manual.

    2. Name: Choose the secret name from the above list. They will all follow the format of: <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>

    3. Value: The value for the secret will be the content of each app JSON located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder.

    4. Content type: JSON

  5. Leave the other values at their defaults.

  6. Select Create.

Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. You must create a total of 8 secrets, as shown in the above list of secret names.
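As an alternative to the portal, each secret can also be created from the Azure CLI. This is a sketch: the vault name and file path are placeholders for your environment, and the secret name follows the list above.

az keyvault secret set --vault-name <key_vault_name> --name worker-secret-appsettings-<cinchy_instance_name> --file <path_to_the_corresponding_app_json>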

Execute cinchy.devops.automations

This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.

  1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:

dotnet Cinchy.DevOps.Automations.dll "deployment.json"
If the file has a name other than deployment.json, replace the reference in the command with the correct name of the file.

  2. The console output should end with the following message:

Completed successfully

  3. The updates must be committed to Git before proceeding to the next step.

Connect with kubectl

Update the Kubeconfig

AWS

  1. From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:

aws eks update-kubeconfig --region <region> --name <cluster_name>

Azure

  1. From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.

These commands with the values pre-populated can also be found from the Connect panel of the AKS Cluster in the Azure Portal.

az account set --subscription <subscription_id>
az aks get-credentials --admin --resource-group <deployment_resource_group> --name <cluster_name>

Verify the connection

  1. Verify that the connection has been established and the context is the correct cluster by running the following command:

kubectl config get-contexts
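To further confirm connectivity, you can also list the worker nodes and check that they report a Ready status:

kubectl get nodes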

Deploy and access ArgoCD

In this step, you will deploy and access ArgoCD.

Deploy ArgoCD

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy ArgoCD:

bash deploy_argocd.sh
  3. Monitor the pods within the ArgoCD namespace by running the following command every 30 seconds until they all move into a healthy state:

kubectl get pods -n argocd

Access ArgoCD

  1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to access ArgoCD:

bash access_argocd.sh

This script creates a port forward using kubectl so that ArgoCD can be accessed at http://localhost:9090.

The credentials for ArgoCD's portal are output in Base64 at the start of the access_argocd script execution. The Base64 value must be decoded to get the login credentials for the http://localhost:9090 endpoint.

You will also be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in the Update the DNS step later in this guide.
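One common way to decode the Base64 credentials is shown below; the encoded value is a placeholder for what the script prints, and the exact flag may vary by OS (GNU coreutils and Git Bash accept --decode).

echo "<base64_encoded_value>" | base64 --decode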

Deploy cluster components

In this step, you will deploy your cluster components.

Deploy ArgoCD applications

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy the cluster components using ArgoCD:

bash deploy_cluster_components.sh
  3. Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).

Tips for Success:

  • If your pods are degraded or fail to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.

  • Check that ArgoCD is pulling from your Git repository by navigating to the ArgoCD Settings.

  • If your components fail when attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
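If a component stays degraded and the reason isn't obvious from the ArgoCD UI, inspecting the pod events often surfaces the underlying error (for example, an image pull failure). The pod and namespace names below are placeholders.

kubectl get pods -n <namespace>
kubectl describe pod <pod_name> -n <namespace>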

Update the DNS

  1. Execute the following command to get the External IP used by the Istio ingress gateway.

kubectl get svc -n istio-system
  2. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
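If you prefer to pull just the external address rather than scan the full service list, a jsonpath query is one option. This assumes the ingress gateway service uses the default istio-ingressgateway name; AWS typically reports a hostname and Azure an IP.

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0]}'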

Access OpenSearch

The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard

The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.

To change the default credentials for Cinchy v5.4+, follow the documentation here.

To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.

Access Grafana

The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana

The default username is admin. The default password for accessing Grafana can be found by doing a search of adminPassword within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
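One quick way to surface that value from a checkout of the repository is a simple grep against the file mentioned above:

grep adminPassword cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml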

We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.

Deploy Cinchy components

In this step, you will deploy your Cinchy components.

Deploy ArgoCD application

  1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.

  2. Execute the following command to deploy the Cinchy application components using ArgoCD:

bash deploy_cinchy_components.sh
  3. Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).

You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can log in using the default username (admin) and password (cinchy).

Troubleshooting

  • If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.

kubectl rollout restart sts argocd-application-controller -n argocd
