Kubernetes Deployment Installation
This page details the installation instructions for deploying Cinchy v5 on Kubernetes

1. Introduction

This page details the instructions for deploying Cinchy v5 on Kubernetes. We recommend, and have documented below, doing so via Terraform and ArgoCD. This setup also involves an automation script that streamlines your configurations.

1.1 Resource Repositories

There are four repositories that you will need to access and copy in order to deploy Cinchy v5. See here for information on accessing these.
  1. Cinchy.Terraform
  2. Cinchy.ArgoCD
  3. Cinchy.Kubernetes
  4. Cinchy.Automations: This repo houses the automation script used to streamline your configurations. See: Template Configuration Utility for more information.

1.2 Template Configuration Utility

In order to streamline the configuration of your repository files, we have created the Template Configuration Utility, which is a script that can be found in the Cinchy.Automations repo.
By using this utility, you only need to set your configurations once, and they will be pushed out to all the necessary code files.
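Conceptually, the utility takes the values you set in environments.json and substitutes them into the placeholder tokens spread across the repo files. The sketch below is illustrative only (the real behavior is defined by the script in the Cinchy.Automations repo; the function name and placeholder handling here are assumptions, not the utility's actual implementation):

```python
def apply_config(template_text: str, config: dict) -> str:
    """Replace each uppercase placeholder token in a template with its
    configured value. Illustrative stand-in for what the Template
    Configuration Utility does across the repo files."""
    for placeholder, value in config.items():
        template_text = template_text.replace(placeholder, value)
    return template_text

# Hypothetical example using placeholders from the terraform_backend section:
config = {
    "BUCKET": "cinchy-terraform-state",
    "KEY": "cinchy_nonprod/terraform.tfstate",
}
snippet = '"terraform_backend_s3_bucket": "BUCKET",\n"terraform_backend_s3_key": "KEY",'
print(apply_config(snippet, config))
```

The point is that each placeholder is set once, centrally, rather than edited by hand in every file.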

2. Resource Prerequisites

  1. Access the cinchy.terraform, cinchy.argocd, cinchy.kubernetes, and cinchy.automations repos and copy the files into your own repository. (If you are using ArgoCD, you must store them in GitHub.)
  2. If you are not using Terraform, create a new S3-compatible bucket manually to store your state file. The convention for the bucket name should be <org>-<component>-<cluster>
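The same <org>-<component>-<cluster> naming convention recurs for every bucket in this guide, so it is worth pinning down with a trivial helper (illustrative only, not part of the Cinchy tooling):

```python
def bucket_name(org: str, component: str, cluster: str) -> str:
    """Build a bucket name following the <org>-<component>-<cluster> convention."""
    return f"{org}-{component}-{cluster}"

# e.g. the connections bucket for a nonprod cluster:
print(bucket_name("cinchy", "connections", "cinchy-nonprod"))  # cinchy-connections-cinchy-nonprod
```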

3. Configuring the Environments.json

In this step, we will configure the environments.json file with our specific parameters.

3.1 Terraform Configurations

  1. Within the cinchy.automations repo, navigate to Cinchy.DevOps.Automations > environments.json.
  2. Update your repo path to point to the correct location:

"terraform_repo_path": "YOUR_REPO\\cinchy.terraform",
3. Under "terraform_backend", update the following parameters:
  • BUCKET: Update this with your own bucket name. The convention for the bucket name is <org>-<component>-<cluster>.
    • Ex: cinchy-terraform-state
  • KEY: Update this parameter with your own cluster name.
    • Ex: cinchy_nonprod/terraform.tfstate

"terraform_backend": {
  "terraform_backend_s3_bucket": "BUCKET",
  "terraform_backend_s3_key": "KEY",
  "terraform_backend_s3_encrypt": "true"
},

3.2 Connections S3 Configuration

1. Under "connections_s3", update the following parameters:
  • BUCKET: Update this with your own bucket name. The convention for the bucket name is <org>-<component>-<cluster>.
    • Ex: cinchy-connections-cinchy-nonprod
  • ENV: Update this with your environment tag.
    • Ex: cinchy_nonprod
"connections_s3": {
  // convention for the bucket name is <org>-<component>-<cluster>
  "connections_s3_bucket": "BUCKET",
  "connections_s3_acl": "private",
  "connections_s3_environment_tag": "ENV"
},

3.3 EKS Configuration

1. Under "eks", update the following parameters to match your configuration:
  • CLUSTER_NAME
  • REGION
  • VPC_ID: Which VPC will run the cluster
  • VPN_SECURITY_GROUP
  • SUBNETS: A list of subnets that need to be created
  • INSTANCE_TYPE
  • DISK_SIZE
  • USERMAPPING: Map the users who will have access to the cluster

"eks": {
  "cluster_name": "CLUSTER_NAME",
  "aws_region": "REGION",
  "vpc_id": "VPC_ID",
  "vpnsecuritygroup": "VPN_SECURITY_GROUP",
  "subnet": "SUBNETS",
  "instancetype": "INSTANCE_TYPE",
  "disk_size": "DISK_SIZE",
  "usermapping": "<<USERMAPPING\r\n mapUsers: |\r\n - userarn: arn:aws:iam::2043409424924335:user\/user_name\r\n username: user_name\r\n groups:\r\n - system:masters\r\n USERMAPPING"
}
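The usermapping value is a single JSON string with escaped line breaks, which makes it hard to read. Resolving the \r\n escapes shows the mapUsers block it represents (the account ID and user_name below are the template's placeholder values, to be replaced with your own):

```python
# The template's usermapping string, as it appears in environments.json
# (with JSON escapes resolved into Python escapes):
usermapping = ("<<USERMAPPING\r\n mapUsers: |\r\n - userarn: "
               "arn:aws:iam::2043409424924335:user/user_name\r\n "
               "username: user_name\r\n groups:\r\n"
               " - system:masters\r\n USERMAPPING")

# Print it with the \r\n escapes turned into real line breaks to see
# the structure of the user mapping:
print(usermapping.replace("\r\n", "\n"))
```

Each user you map gets a userarn, a username, and a list of groups; the template grants the example user the system:masters group.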

3.4 ECR Configuration

1. Under "ecr_repo_secrets", update the following with a base64 encoded version of your:
  • SECRETS_ID
  • SECRET_ACCESS_KEY
"ecr_repo_secrets": {
  "base64_encoded_aws_id": "SECRETS_ID",
  "base64_encoded_aws_region": "Y2EtY2VudHJhbC0x",
  "base64_encoded_aws_secret_access_key": "SECRET_ACCESS_KEY"
}
Do not update the region ID. It points to the Cinchy ECR, which is needed to pull images, and should not be changed.
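The SECRETS_ID and SECRET_ACCESS_KEY values must be base64 encoded before they are placed in the file. One way to produce them is shown below (a shell one-liner such as `echo -n '<value>' | base64` works equally well); as a sanity check, encoding the region string reproduces the value shipped in the template:

```python
import base64

def b64(value: str) -> str:
    """Return the base64 encoding of a UTF-8 string."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Encoding the ECR region reproduces the template's pre-filled value:
print(b64("ca-central-1"))  # Y2EtY2VudHJhbC0x
```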

3.5 ArgoCD Configurations

  1. Within the cinchy.automations repo, navigate to Cinchy.DevOps.Automations > environments.json.
  2. Update your repo path to point to the correct location:

"argocd_repo_path": "YOUR_REPO\\cinchy.argocd",
3. Under "argocd_config", update the following parameters:
  • KUBERNETES_REPO: Point this to your own Git URL where you stored the Kubernetes repo.
  • KUBERNETES_REPO_REVISION: Input the name of your Kubernetes repo revision.
  • ARGOCD_REPO_URL: Point this to your own Git URL where you stored the ArgoCD repo.
  • ARGOCD_REPO_REVISION: Input the name of your ArgoCD repo revision.
  • Your Git credentials:
    • GIT_URL: Input the base URL for your Git Repo.
    • GIT_USERNAME
    • GIT_PASSWORD
"argocd_config": {
  "cinchy_kubernetes_repo_url": "KUBERNETES_REPO",
  "cinchy_kubernetes_repo_revision": "KUBERNETES_REPO_REVISION",
  "cinchy_argocd_repo_url": "ARGOCD_REPO_URL",
  "cinchy_argocd_repo_revision": "ARGOCD_REPO_REVISION",
  "git_credentials": {
    "base_repo_url": "GIT_URL",
    "git_username": "GIT_USERNAME",
    "git_password": "GIT_PASSWORD"
  }
},

3.6 Cluster Component Configurations

1. Configure the following parameters in the Cinchy.Automations > environments.json file.
2. Update your repo path to point to the correct location:

"kubernetes_repo_path": "YOUR_REPO\\cinchy.kubernetes"
3. Under "istio", update the following with your own information:
  • Update the istio_ingress_gateway_host to point to your own URL.
    • Ex: "*.cinchy.net"
  • SSL_TLS_CRT: Update this with a base64 encoded version of your SSL cert.
  • SSL_TLS_KEY: Update this with a base64 encoded version of your SSL key.
  • Update the grafana_host_name to point to your own BASE_URL.
  • Update the opensearch_host_name to point to your own BASE_URL.
"istio": {
  "istio_ingress_gateway_host": "",
  "ssl_tls_crt": "SSL_TLS_CRT",
  "ssl_tls_key": "SSL_TLS_KEY",
  "grafana_host_name": "BASE_URL",
  "opensearch_host_name": "BASE_URL"
},

3.7 Instance Component Configurations

  1. Configure the following parameters in the Cinchy.Automations > environments.json file.
  2. Under "cinchy_instance_configs", for each of your Cinchy instances, update the following parameters:
  • NAMESPACE
  • HOST_NAME
  • APPLICATION_PATH
  • DATABASE_TYPE: "TSQL" or "PostgreSQL"
"cinchy_instance_configs": {
  // key is automatically added to "instance_name" parameter
  "NAMESPACE": {
    "namespace": "NAMESPACE",
    "protocol": "https",
    "host_name": "HOST_NAME",
    "application_path": "/APPLICATION_PATH",
    "use_https_flag": "true",
    "dbtype": "DATABASE_TYPE",
    "database_connection_string": "User ID=postgres;Password=password;Host=<HOST>;Port=5432;Database=DATABASE;Timeout=300;Keepalive=300;",
    "connections_image_tag": "IMAGE_TAG",
    "event_listener_image_tag": "IMAGE_TAG",
    "idp_image_tag": "IMAGE_TAG",
    "maintenance_cli_image_tag": "IMAGE_TAG",
    "meta_forms_image_tag": "IMAGE_TAG",
    "web_image_tag": "IMAGE_TAG",
    "worker_image_tag": "IMAGE_TAG"
  }
}
3. Under "cinchy_instance_configs", for each of your Cinchy instances, update the IMAGE_TAG for each of the following components.
Example image tag syntax: "v5.0.0"
  • Connections
  • Event Listener
  • IDP
  • Maintenance CLI
  • Meta Forms
  • Web
  • Worker
See the code block in step 2 for the template example of where to find these parameters.
Review the documentation here if you are using Cinchy's Docker images.
4. Under "cinchy_instance_configs", for each of your Cinchy instances, update the following parameters of the database_connection_string (see code block in step 2 for the template example):
  • HOST: Update this with the host server
  • DATABASE
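A small helper like the following (illustrative only, not part of the Cinchy tooling; the host and database names are made up) shows how HOST and DATABASE slot into the template's PostgreSQL connection string:

```python
def pg_connection_string(host: str, database: str,
                         user: str = "postgres", password: str = "password",
                         port: int = 5432) -> str:
    """Fill in the template's PostgreSQL connection string.

    The user/password defaults mirror the template's placeholder values
    and must be replaced with your real credentials."""
    return (f"User ID={user};Password={password};Host={host};Port={port};"
            f"Database={database};Timeout=300;Keepalive=300;")

# Hypothetical host and database names:
print(pg_connection_string("db.internal.example.com", "cinchy_dev"))
```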

4. Run the Automation Script

1. Run the Cinchy.Devops.Automation.exe script to push out your configurations.

5. Deployment

In this step, you will deploy Terraform, ArgoCD, your Cluster Components, and your Platform Components.

5.1 Deploy your Infrastructure (Terraform)

  1. Once your configurations are set, run the following commands from the cinchy.terraform repository to apply the changes and create the resources:

$ cd ../cinchy.terraform/
$ terraform.exe init
$ terraform.exe plan
$ terraform.exe apply

5.2 Deploy ArgoCD

1. Install ArgoCD by executing the following command in the root directory of your cinchy.argocd repo. This sources the ArgoCD manifests from GitHub and deploys version 2.1.7.

kubectl apply -k argocd

2. The default username is admin. The password is stored in a secret in the K8s cluster and can be retrieved using the below command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

3. You must set up a port forward from the K8s cluster in order to access the ArgoCD portal. Do so by running the following command:

kubectl port-forward svc/argocd-server -n argocd 9090:80 --address 0.0.0.0

5.2.1 Accessing ArgoCD

Once the port forward is established, the ArgoCD portal can be accessed in a local browser at: http://0.0.0.0:9090

5.3 Deploy your Cluster Components

  1. If the common components have not been installed (i.e., this is a new cluster), execute the below command in the root directory of the repository to deploy them via ArgoCD:

kubectl apply -k environment_kustomizations/<cluster name>/cluster_components

5.3.1 Accessing Opensearch

The default path to access Opensearch, unless you have configured it otherwise, is <baseurl>/dashboard
The default credentials for accessing Opensearch can be found by searching for "opensearch.username" and "opensearch.password" within the following path: cinchy.kubernetes/cluster_components/logging/opensearch-dashboards/values.yaml
You can change the default credentials within Opensearch itself. Ensure that you update the respective secrets as well.

5.3.2 Accessing Grafana

The default path to access Grafana, unless you have configured it otherwise, is <baseurl>/grafana
The default username is admin. The default password for accessing Grafana can be found by searching for "adminPassword" within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
You can change the default credentials within Grafana itself. Ensure that you update the respective secrets as well.

5.4 Deploy your Platform

  1. Execute the below commands in the root directory of the cinchy.argocd repository to deploy a new Cinchy instance, replacing the specific directories for your cluster and environment:

kubectl apply -k environment_kustomizations/<cluster name>/<environment name>/cinchy
kubectl apply -k environment_kustomizations/<cluster name>/<environment name>/platform_components

6. Configuring the Kubernetes Image to the Newest Version

In this step, you will configure your Kubernetes Image to match the newest version.
  1. Navigate to the cinchy.kubernetes repository.
  2. Navigate to environment_kustomizations_template/<instance_template>.
  3. In IDP > kustomizations.yaml, replace the version name/number with the instance that you wish to deploy. For example: cinchy.idp:development becomes cinchy.idp:v5.0.68.
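The edit above is a plain text substitution of the tag that follows the image name; scripted, it might look like this (illustrative only; the function and the yaml snippet are hypothetical, not part of the Cinchy tooling):

```python
def set_image_tag(yaml_text: str, image: str, new_tag: str) -> str:
    """Replace whatever tag currently follows `<image>:` with new_tag.

    Minimal sketch: assumes `<image>:<tag>` appears exactly once,
    e.g. cinchy.idp:development -> cinchy.idp:v5.0.68."""
    prefix = image + ":"
    start = yaml_text.index(prefix) + len(prefix)
    end = start
    while end < len(yaml_text) and not yaml_text[end].isspace():
        end += 1  # consume the old tag
    return yaml_text[:start] + new_tag + yaml_text[end:]

print(set_image_tag("newTag: cinchy.idp:development\n", "cinchy.idp", "v5.0.68"))
```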

7. Sync the Image

  1. Navigate to the ArgoCD portal. Once the port forward is established, it can be accessed in a local browser at: http://0.0.0.0:9090
  2. Refresh your image. If that does not work, re-sync.