Kubernetes Deployment Installation
This page details the installation instructions for deploying Cinchy v5 on Kubernetes
1. Introduction
This page details the instructions for deployment of Cinchy v5 on Kubernetes. We recommend, and have documented below, that this is done via Terraform and ArgoCD. This setup involves a utility to centralize and streamline your configurations.
The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.
2. Deployment Prerequisites
The following lists are required prerequisites for installing Cinchy v5 on Kubernetes.
Note that some prerequisites will depend on whether you are deploying on Azure or on AWS.
2.1 All Platforms
These prerequisites apply whether you are installing on Azure or on AWS.
You must create the following four Git repositories:
cinchy.terraform: This repo contains all Terraform configurations.
cinchy.argocd: This repo contains all ArgoCD configurations.
cinchy.kubernetes: This repo contains cluster and application component deployment manifests.
cinchy.devops.automations: This repo contains the single configuration file and binary utility that maintains the contents of the above three repositories.
You must have a service account with read/write permissions to the git repos created above.
The following tools should be installed on the machine where the deployment will run; at minimum, this includes the tools invoked later in this guide: Terraform, kubectl, Git, and the AWS CLI or Azure CLI (depending on your cloud platform).
You will need a single domain for accessing ArgoCD, Grafana, the Opensearch Dashboard, and any deployed Cinchy instances. There are two routing options for accessing these applications: path-based or subdomain-based. The table below shows an example with multiple Cinchy instances:
Application       Path-Based Routing       Subdomain-Based Routing
Cinchy 1 (Dev)    domain.com/dev           dev.mydomain.com
Cinchy 2 (QA)     domain.com/qa            qa.mydomain.com
Cinchy 3 (UAT)    domain.com/uat           uat.mydomain.com
ArgoCD            domain.com/argocd        cluster.mydomain.com/argocd
Grafana           domain.com/grafana       cluster.mydomain.com/grafana
Opensearch        domain.com/dashboard     cluster.mydomain.com/dashboard
2.2 Azure Deployment
The following prerequisites are required if you are deploying Cinchy v5 on Azure.
Terraform Backend Requirements:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
The deployment template has the option of either leveraging an existing resource group or creating a new one:
If an existing resource group is preferred, the following must be provisioned in advance of the deployment:
The resource group.
A VNet within the resource group.
A single subnet. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /22 to provide a range of 1024 IPs.
If a new resource group is preferred, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must provide sufficient availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
2.3 AWS Deployment
The following prerequisites are required if you are deploying Cinchy v5 on AWS.
Terraform Backend Requirements:
An S3 bucket for persisting the terraform state (this is the Terraform bucket referenced in the tips below).
The deployment template has the option of either leveraging an existing VPC or creating a new one:
If an existing VPC is preferred, the following must be provisioned in advance of the deployment:
The VPC. It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /21 to provide a range of 2048 IPs.
3 Subnets (one per AZ). It's important that the address range be sufficient for all executing processes within the cluster, e.g. a CIDR ending with /23 to provide a range of 512 IPs.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If a new VPC is preferred, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must provide sufficient availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
Tips for Success:
Ensure that your region is configured the same across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.
3. Initial Configuration
The following steps detail the instructions for setting up the initial configurations.
3.1 Configure the Deployment.json
1. Navigate to your cinchy.devops.automations repository, where you will see an aws.json and an azure.json file.
2. Depending on the cloud platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.
3. This file contains the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has in-line comments describing its purpose along with instructions on how to populate it.
4. Follow the guidance within the file to configure the properties.
5. Commit and push your changes.
Tips for Success:
You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.
3.2 Execute cinchy.devops.automations
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the utility against your configuration file (a sketch of the command is shown below this list).
2. The console output should terminate with a "Completed successfully" message.
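A sketch of the command, assuming the utility is distributed as a .NET assembly named Cinchy.DevOps.Automations.dll and your configuration file is named deployment.json; substitute the binary and file names actually present in your cinchy.devops.automations repository:

```bash
# Run the automation utility against your configuration file
# (replace the DLL and file names with the ones in your repository)
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```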
4. Terraform Deployment
The following steps detail how to deploy Terraform.
Cinchy.terraform Repo Structure - AWS
If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within that is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.
Cinchy.terraform Repo Structure - Azure
If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within that is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
4.1 Cloud Provider Authentication
1. Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repo.
2. If you are using AWS, run the following commands to authenticate the session (a sketch is shown below this list).
3. If you are using Azure, run the following command and follow the on-screen instructions to authenticate the session (a sketch is shown below this list).
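A sketch of typical authentication commands, assuming the IAM user access keys (AWS) or AAD account (Azure) described in the prerequisites; your organization may authenticate differently (for example, via named CLI profiles or SSO):

```bash
# AWS: export the IAM user's credentials for the current session
export AWS_ACCESS_KEY_ID="<access_key_id>"
export AWS_SECRET_ACCESS_KEY="<secret_access_key>"
export AWS_DEFAULT_REGION="<region>"

# Azure: start an interactive Azure CLI login and follow the on-screen instructions
az login
```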
4.2 Deploy the Infrastructure
1. Execute the command to create the cluster (a sketch is shown after this list).
2. Type yes when prompted to apply the terraform changes.
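A minimal sketch of the Terraform commands, run from the cluster directory; any additional flags (for example, a backend configuration) depend on your setup:

```bash
# Initialize the working directory and download the required providers/modules
terraform init

# Review and apply the planned infrastructure changes (answer "yes" when prompted)
terraform apply
```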
The resource creation process can take approximately 15-20 minutes. At the end of the execution, there will be a section with the following header:
======= Output Variables =======
If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password
If deploying on Azure, this section will contain a single value: Azure SQL Database Password
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repo.
4.3 Retrieve the SSH Keys
The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.
SSH keys should be saved for future reference in the event that a connection needs to be established directly to a worker node in the Kubernetes cluster.
4.3.1 AWS SSH Keys
The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
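A sketch of retrieving the key from the terraform state, run from the cluster directory; the output variable name used here (private_key) is an assumption and should be matched to the name defined in your cluster's Terraform configuration:

```bash
# Print the SSH private key held in the terraform state
# (adjust the output variable name to match your configuration)
terraform output -raw private_key
```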
4.3.2 Azure SSH Keys
The SSH key is output to the directory containing the cluster terraform configurations.
5. Update the Deployment.json
The following section pertains to updating the Deployment.json file.
5.1 Update the Database Connection String
Each Cinchy instance object within the deployment.json represents an instance that will be deployed on the cluster. Each instance configuration contains a database_connection_string property, which has placeholders for the host name and password that must be updated using the output variables from the Terraform deployment in the previous section.
Note that in the case of an Azure deployment, the host name is not available as part of the terraform output and instead must be sourced from the Azure Portal.
5.2 Create the IAM User for S3 (AWS Only)
The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.
To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.
The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json
5.3 Update Blob Storage Connection Details (Azure Only)
Within the deployment.json, the azure_blob_storage_conn_str must be set.
The in-line comments outline the commands required to source this value from the Azure CLI.
5.3.1 Enabling Azure Key Vault Secrets
If the key_vault_secrets_provider_enabled=true value is set in the azure.json, then the secrets files listed below will have been created during the execution of step 3.2.
You will need to add the following secrets to your Azure Key Vault:
worker-secret-appsettings-<cinchy_instance_name>
web-secret-appsettings-<cinchy_instance_name>
maintenance-cli-secret-appsettings-<cinchy_instance_name>
idp-secret-appsettings-<cinchy_instance_name>
forms-secret-config-<cinchy_instance_name>
event-listener-secret-appsettings-<cinchy_instance_name>
connections-secret-config-<cinchy_instance_name>
connections-secret-appsettings-<cinchy_instance_name>
To create your new secrets:
1. Navigate to your key vault in the Azure portal.
2. Open your Key Vault Settings and select Secrets.
3. Select Generate/Import.
4. On the Create a Secret screen, choose the following values:
Upload options: Manual.
Name: Choose the secret name from the above list. They will all follow the format of <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>.
Value: The value for the secret will be the content of the corresponding app JSON file located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder.
Content type: JSON.
5. Leave the other values at their defaults.
6. Select Create.
Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. There are a total of 8 secrets to create as shown in the above list of secret names.
5.4 Execute cinchy.devops.automations
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
1. From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the utility against your updated configuration file (a sketch is shown below this list).
2. The console output should terminate with a "Completed successfully" message.
3. The updates must be committed to Git before proceeding to the next step.
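As in section 3.2, a sketch assuming the same binary and configuration file names:

```bash
# Re-run the automation utility to propagate the updated deployment.json
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```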
6. Connect with kubectl
6.1 Update the Kubeconfig
6.1.1 AWS
From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
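For example, using the standard AWS CLI command for EKS:

```bash
# Add or refresh the EKS cluster context in your local kubeconfig
aws eks update-kubeconfig --region <region> --name <cluster_name>
```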
6.1.2 Azure
From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.
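For example, using the standard Azure CLI commands for AKS:

```bash
# Select the subscription that contains the deployment resource group
az account set --subscription <subscription_id>

# Merge the AKS cluster credentials into your local kubeconfig
az aks get-credentials --resource-group <deployment_resource_group> --name <cluster_name>
```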
6.2 Verify the Connection
Verify that the connection has been established and the context is the correct cluster by running the following command:
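A sketch using standard kubectl commands; either is sufficient to confirm the active context and connectivity:

```bash
# Show the currently selected context
kubectl config current-context

# List the worker nodes to confirm the cluster is reachable
kubectl get nodes
```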
7. Deploy and Access ArgoCD
In this step, we will deploy and access ArgoCD.
7.1 Deploy ArgoCD
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy ArgoCD (a sketch is shown below this list).
3. Monitor the pods within the argocd namespace by running the following command every 30 seconds until they all move into a healthy state (see the sketch below).
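The deployment command itself is repository-specific; the script name below (deploy_argocd.sh) is a hypothetical placeholder for whatever script or manifest apply step your cinchy.argocd repository provides. The pod-monitoring command is standard kubectl:

```bash
# Hypothetical placeholder: run the ArgoCD deployment script from the repo root
bash deploy_argocd.sh

# Re-run every ~30 seconds until all pods are Running/Ready
kubectl get pods -n argocd
```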
7.2 Access ArgoCD
1. Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to access ArgoCD (a sketch is shown below).
This script creates a port forward using kubectl so that ArgoCD can be accessed at http://localhost:9090.
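The access script is repository-specific; an equivalent, hedged sketch using a plain kubectl port forward (the service name and port are ArgoCD defaults and should be verified against your cluster):

```bash
# Forward local port 9090 to the ArgoCD server service inside the cluster
kubectl port-forward svc/argocd-server -n argocd 9090:80
```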
8. Deploy Cluster Components
In this step, you will deploy your cluster components.
8.1 Deploy ArgoCD Applications
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy the cluster components using ArgoCD (a sketch is shown below this list).
3. Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).
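The exact command comes from your cinchy.argocd repository; the script name below is a hypothetical placeholder, while the verification command is standard kubectl against ArgoCD's Application resources:

```bash
# Hypothetical placeholder: apply the cluster component ArgoCD applications from the repo root
bash deploy_cluster_components.sh

# Confirm the ArgoCD applications report Synced/Healthy
kubectl get applications -n argocd
```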
Tips for Success:
If your pods are degraded or have failed to sync, refresh or resync your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
Check that ArgoCD is pulling from your Git repo by navigating to your Settings.
If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
8.2 Update the DNS
1. Execute the following command to get the External IP used by the Istio ingress gateway (a sketch is shown below this list).
2. DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including Opensearch, Grafana, and ArgoCD.
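A sketch using standard kubectl, assuming the Istio ingress gateway was installed into the istio-system namespace with its default service name (verify both in your cluster):

```bash
# The EXTERNAL-IP column shows the address to use for your DNS entries
kubectl get svc istio-ingressgateway -n istio-system
```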
8.3 Accessing Opensearch
The default path to access Opensearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard
8.4 Accessing Grafana
The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana
9. Deploy Cinchy Components
In this step, you will deploy your Cinchy components.
9.1 Deploy ArgoCD Application
1. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
2. Execute the following command to deploy the Cinchy application components using ArgoCD (a sketch is shown below this list).
3. Navigate to ArgoCD at http://localhost:9090 and log in. Wait until all components are healthy (this may take a few minutes).
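As in section 8.1, the script name below is a hypothetical placeholder for the command provided in your cinchy.argocd repository:

```bash
# Hypothetical placeholder: apply the Cinchy application ArgoCD definitions from the repo root
bash deploy_cinchy_components.sh
```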
10. Troubleshooting
If an ArgoCD Application sync is stuck waiting for PreSync jobs to complete, you can run the following command to restart the application controller (a sketch is shown below).
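A sketch using standard kubectl, assuming a default ArgoCD installation where the application controller runs as the argocd-application-controller StatefulSet in the argocd namespace:

```bash
# Restart the ArgoCD application controller pods
kubectl rollout restart statefulset argocd-application-controller -n argocd
```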