The major changes for the 5.7 Kubernetes upgrade are the following:
Azure AKS and AWS EKS support Kubernetes versions up to 1.27
Upgraded ArgoCD from 2.1.7 to v2.7.6
Upgraded Istio from 1.3.1 to 1.18.0
Upgraded OpenSearch from 1.2.0 to 2.13.1
Upgraded Logging Operator from 3.17.2 to 4.2.2
Upgraded Kube Prometheus Stack from 17.2.2 to 47.0.0
Upgraded Strimzi Kafka Operator from 0.1.0 to 0.34.0
New app: Kafka UI 0.7.1
OpenSearch index creation based on date format
To upgrade your various components, follow the instructions below in the order presented.
If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.
If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.7, you must run Upgrade 5.2 using the Cinchy Utility and deploy version 5.2.
If you are upgrading from 5.2 or higher, follow the 5.7 upgrade instructions below, then use the Cinchy Utility and deploy the target version using the -v "X.X" argument.
Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file.
If you have the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it's not commented out, don't change it. Changing this file will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git file.
Go to your cinchy.devops.automations repository. Delete all existing folder structure except for the .git file and your deployment.json.
Download and open the new Cinchy v5.7 k8s-template.zip file from the Cinchy Releases table and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automations repositories.
Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.24 to 1.25, then from 1.25 to 1.26, and finally from 1.26 to 1.27.
You may have changed the name of the deployment.json file during your original platform deployment. If so, make sure you substitute that name wherever deployment.json appears in this document.
In the 5.7 templates, the cluster-level components will upgrade to the latest version. You need to remove kube-prometheus-stack, the logging-operator app, and kafka-cluster from ArgoCD. This change deletes your recent metrics from Grafana, and you will only see new metrics after you deploy the new kube-prometheus-stack. The older CRDs created by the kube-prometheus-stack and logging-operator charts aren't removed by default during the upgrade and should be cleaned up manually with the commands below:
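The exact CRD set depends on the chart versions that were previously installed, so treat the following as a sketch: list the CRDs that belong to the two charts first, then delete them. The names shown are the standard CRDs created by kube-prometheus-stack and the Logging Operator.

```bash
# List the CRDs owned by kube-prometheus-stack and the Logging Operator
kubectl get crd | grep -E 'monitoring\.coreos\.com|logging\.banzaicloud\.io'

# kube-prometheus-stack CRDs (monitoring.coreos.com group)
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com \
  alertmanagers.monitoring.coreos.com \
  podmonitors.monitoring.coreos.com \
  probes.monitoring.coreos.com \
  prometheuses.monitoring.coreos.com \
  prometheusrules.monitoring.coreos.com \
  servicemonitors.monitoring.coreos.com \
  thanosrulers.monitoring.coreos.com

# Logging Operator CRDs (logging.banzaicloud.io group)
kubectl delete crd clusterflows.logging.banzaicloud.io \
  clusteroutputs.logging.banzaicloud.io \
  flows.logging.banzaicloud.io \
  loggings.logging.banzaicloud.io \
  outputs.logging.banzaicloud.io
```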
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
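Assuming the DevOps automation utility keeps the same name as in earlier releases (Cinchy.DevOps.Automations.dll), the command looks like this; substitute your own file name if you renamed deployment.json:

```bash
# Regenerates the cinchy.argocd, cinchy.kubernetes and cinchy.terraform
# repositories from your deployment file
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```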
Commit all of your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
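Assuming the deployment script in the cinchy.argocd repository keeps the same name as in earlier releases, the ArgoCD deployment is a single script invocation:

```bash
# Run from the root of the cinchy.argocd repository
bash deploy_argocd.sh
```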
Validate that the ArgoCD pods are running and check that ArgoCD has been upgraded to v2.7.6 by accessing the ArgoCD application console.
Execute the following command to deploy cluster components and Cinchy components:
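Assuming the script names follow the same convention as the ArgoCD script above, the cluster components and Cinchy components are deployed in two steps:

```bash
# Run from the root of the cinchy.argocd repository
bash deploy_cluster_components.sh
bash deploy_cinchy_components.sh
```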
You might see a couple of ArgoCD apps out of sync because you deleted the logging operator. Sync them manually. Redis will take a few minutes to recover.
To upgrade the AWS EKS and Azure AKS version from 1.24 up to 1.27.x, you have two methods. The method depends on the status of the subnet CIDR range; the CIDR is a blocker for Azure only. For AWS, export your credentials; for Azure, run the az login command, if required.
If your AKS subnet CIDR range is larger than 10.10.0.0/22, you can use the 1.25.x and later AKS upgrade without much downtime or AKS resource teardown.
Go to your cinchy.devops.automations repository and change the AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory.
From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
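As before, this is the DevOps automation utility run against your deployment file (utility name assumed from earlier releases):

```bash
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```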
The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
The Azure deployment updates a folder named aks_cluster within the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.
For AWS, export your credentials; for Azure, run the az login command, if required.
Run the command below to start the upgrade process. Make sure to verify the changes before you answer yes to proceed with the upgrade. This shouldn't delete or destroy any data; it runs an in-place deployment that updates the Kubernetes version.
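A sketch of the Terraform run, assuming the cluster directory described above is the working directory; review the plan carefully before typing yes:

```bash
# Run from the cluster directory (eks_cluster/<cluster name> or aks_cluster/<cluster name>)
terraform init
terraform apply
```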
This section is only applicable to Azure deployments.
If you have a 10.10.0.0/22 CIDR range or smaller, you won't be able to upgrade the AKS version to 1.25.x. The 10.10.0.0/22 CIDR range gives you 1024 IP addresses, which isn't enough to run more than 4 worker nodes. Most customers already run 3 worker nodes, and the upgrade process starts another 3 nodes, which will cause a failure.
The values below give the suggested IP address CIDR ranges. Cinchy recommends making your own choice based on your needs. Update these values in your deployment.json file:
Make sure you are connected to the appropriate cluster. Before you start the AKS upgrade process, you must delete all your apps from ArgoCD. This deletes the Cinchy apps and custom components from ArgoCD, including the load balancer and the kafka-cluster.
To delete Cinchy apps, cluster components and ArgoCD:
From a terminal, change directory to cinchy.argocd and run the commands below sequentially. Make sure to change your cluster (directory) name and environment name in the commands below:
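The exact commands and paths come from your cinchy.argocd repository, with your cluster (directory) and environment names substituted. As a generic sketch of what this step accomplishes, the Cinchy apps and cluster components are Argo CD Application resources that are removed before Argo CD itself:

```bash
# List the Argo CD applications (Cinchy apps, cluster components) before deleting them
kubectl get applications -n argocd
# Delete the applications, then Argo CD itself
kubectl delete applications -n argocd --all
kubectl delete ns argocd
```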
If the cluster components deletion takes longer than 10 minutes, run the command below. Check that all pods are deleted except the pods in the default Kubernetes namespace (kube-system).
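If an application hangs in a deleting state, one common way to unblock it is to remove its finalizer; the application name below is a placeholder:

```bash
# Find applications stuck in deletion
kubectl get applications -n argocd
# Remove the finalizer from a stuck application (replace APP_NAME)
kubectl patch application APP_NAME -n argocd --type json \
  -p '[{"op": "remove", "path": "/metadata/finalizers"}]'
```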
Verify the deletion of pods in all namespaces with the kubectl get pods -A command. If the namespaces and pods of some cluster components aren't deleted, delete the namespace manually with the kubectl delete ns NAMESPACE command.
Change the AKS version in the DevOps automation tools' deployment.json against the kubernetes_version and orchestrator_version key values. From a shell/terminal, go to the cinchy.devops.automations directory and execute the following command:
In the Terraform > Azure directory, the aks_cluster folder should be updated with the new AKS version. In that directory is a subdirectory with the same name as the newly updated cluster.
Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository and run az login.
Execute the following command to create the cluster:
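A sketch of the Terraform run, assuming the working directory is the cluster directory set above (for example, the aks_cluster/<cluster name> subdirectory of the Terraform > Azure directory):

```bash
terraform init
terraform apply
```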
Before accepting the change, verify that it meets your expectations and protects your database and any other resources. This command will create, update, or destroy the vnet, subnet, AKS cluster, and AKS node groups. Make sure to review the changes before proceeding.