When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.
If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.2, please review and follow the directives for Upgrade 5.2.
Navigate to your Cinchy devops.automation repository
Navigate to your deployment.json (You may have renamed this during your original Kubernetes deployment)
In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version you wish to deploy (Ex: v5.1.0 > v5.2.0).
Rerun the deployment script by using the following command in the root directory of your devops.automations repository:
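If your repository follows the standard Cinchy template, the invocation looks like the sketch below; the DLL name and the deployment.json file name are assumptions based on the default template, so substitute your own names if they differ:

```bash
# Re-run the DevOps automation against your configuration file
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```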
Commit and push your changes.
If your environment isn't set up to automatically apply upon configuration, complete the following to apply the newest version:
Navigate to the ArgoCD portal.
Refresh your component(s). If that doesn't work, re-sync.
This page details the instructions for upgrading your Cinchy platform to v5.5 on Kubernetes
When it comes time to upgrade your various components, you can do so by following the below instructions.
This release requires you to run the Cinchy Upgrade Utility. Please review and follow the directives for Upgrade 5.5 here.
Additionally, if you are upgrading from Cinchy v5.1 or lower to Cinchy v5.5, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue with your 5.5 upgrade.
If you are upgrading from Cinchy v5.3 or lower to Cinchy v5.5 on an SQL Server database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.
For a Kubernetes deployment, you can add this value in your deployment.json file:
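As an illustration, the relevant portion of deployment.json might look like the sketch below. The cinchy_instance_configs and database_connection_string keys come from this guide; the surrounding structure and the connection string values are placeholders for your own server, database, and credentials, and the exact nesting may differ in your file:

```json
{
  "cinchy_instance_configs": {
    "database_connection_string": "Server=<sql-server-host>;Database=<cinchy-db>;User ID=<user>;Password=<password>;TrustServerCertificate=True"
  }
}
```

The important part is appending TrustServerCertificate=True to the end of your existing connection string.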
Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.5 k8s-template.zip file.
Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
If you have the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented and your deployment.json.
Open the new Cinchy v5.5 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.
Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute that name wherever deployment.json appears in this document.
Starting in Cinchy v5.4, you have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags allows a Kubernetes deployment to connect to a DB2 data source; select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Perform this step only if you are upgrading to 5.5 on an SQL Server database and didn't already make this change in any previous updates. Navigate to your cinchy_instance_configs section > database_connection_string, and add the following value to the end of your string: TrustServerCertificate=True
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
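A hedged example of this step, assuming the standard automation DLL name and the default deployment.json file name from the template:

```bash
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```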
Commit all your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
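A minimal sketch of this step, assuming the deployment script name shipped in the standard cinchy.argocd template:

```bash
bash deploy_argocd.sh
```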
If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
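Assuming the standard script name from the cinchy.argocd template:

```bash
bash deploy_cluster_components.sh
```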
If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
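Assuming the standard script name from the cinchy.argocd template:

```bash
bash deploy_cinchy_components.sh
```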
Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.
This page details the instructions for upgrading your Cinchy platform to v5.6 on Kubernetes
When it comes time to upgrade your various components, you can do so by following the below instructions.
If you have made custom changes to your deployment file structure, please contact your Support team prior to upgrading your environments.
Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.2) and deploy version 5.2.
If you are upgrading from Cinchy v5.3 or lower to Cinchy v5.6 on an SQL Server database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.
For a Kubernetes deployment, you can add this value in your deployment.json file:
Warning: If you are upgrading from Cinchy v5.4 or lower to Cinchy v5.6, you must first run a mandatory process (Upgrade 5.5) using the Cinchy Upgrade Utility and deploy version 5.5.
Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.6 k8s-template.zip file.
Review the Cinchy Upgrade Utility requirements for this upgrade.
Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file.
If you have the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file.
Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and your deployment.json.
Open the new Cinchy v5.6 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.
Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute that name wherever deployment.json appears in this document.
Starting in Cinchy v5.4, you have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags allows a Kubernetes deployment to connect to a DB2 data source; select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Perform this step only if you are upgrading to 5.6 on an SQL Server database and didn't already make this change in any previous updates.
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
Commit all of your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.
We've added support for AWS EKS EBS volume encryption. By default, EKS worker nodes will have the gp3 storage class.
For current Cinchy environments, you must keep your eks_persistent_apps_storage_class set to gp2 in your DevOps automation aws.json file.
If you want to move to gp3 storage, or gp3 storage with volume encryption, you will have to delete any existing volumes/PVCs for the Kafka, Redis, OpenSearch, Logging Operator and Event Listener StatefulSets. This ensures that ArgoCD will take care of recreating the resources.
If your Kafka cluster pods aren't coming back, you must restart your Kafka operators.
You can verify the change by running the following command: "kubectl get pvc --all-namespaces".
The Connections app has changed from a StatefulSet to a Deployment. The persistent volume has changed to emptyDir.
We've modified the replica count from 1 to 2 for istiod and istio ingress.
We've disabled Istio injection in the ArgoCD namespace.
If this is already enabled on your environment, you may keep it as is; that is, keep the cinchy.kubernetes/cluster_components/servicemesh/istio/istio-injection/argocd-ns.yaml file as it is, without commenting out its contents.
The Istio namespace injection has been removed.
If this is already enabled on your environment, please keep it as is; otherwise it will force you to redeploy all of your Kubernetes application components.
We've upgraded the AWS Secret Manager CSI Driver to the latest version due to crashing pods.
We've added support for the EKS EBS CSI driver in lieu of the in-tree EBS storage plugin.
We've changed the EKS Metrics server port number in order to support newer versions of Kubernetes.
We've set a fixed AWS Terraform provider version for all components.
We've installed the cluster autoscaler from local charts instead of remote charts.
The deprecated azurerm_sql_server Terraform resource has been changed to azurerm_mssql_server.
The deprecated azurerm_sql_database resource has been changed to azurerm_mssql_database.
The deprecated azurerm_sql_failover_group has been changed to azurerm_mssql_failover_group.
The deprecated azurerm_sql_firewall_rule has been changed to azurerm_mssql_firewall_rule.
Navigate to your cinchy_instance_configs section > database_connection_string, and add the following value to the end of your string: TrustServerCertificate=True
The AWS EKS version has been
When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.
Navigate to your Cinchy devops.automation repository
Navigate to your deployment.json (You may have renamed this during your original Kubernetes deployment)
In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version you wish to deploy (Ex: v5.0.0 > v5.1.0).
Rerun the deployment script by using the following command in the root directory of your devops.automations repository:
Commit and push your changes.
If your environment isn't set up to automatically apply upon configuration, complete the following to apply the newest version:
Navigate to the ArgoCD portal.
Refresh your component(s). If that doesn't work, re-sync.
When it comes time to upgrade your various components, you can do so by updating the version number in your configuration files and applying the changes in ArgoCD.
If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.3, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once complete, you can continue with your 5.3 upgrade.
Navigate to your cinchy devops.automation repository
Navigate to your deployment.json (You may have renamed this during your original Kubernetes deployment)
In the cinchy_instance_configs section, navigate to the image tags. Replace the version number with the version you wish to deploy (Ex: v5.2.0 > v5.3.0).
Rerun the deployment script by using the following command in the root directory of your devops.automations repository:
Commit and push your changes.
If your environment isn't set up to automatically apply upon configuration, complete the following to apply the newest version:
Navigate to the ArgoCD portal.
Refresh your component(s). If that doesn't work, re-sync.
This page will guide you through how to update your AWS EKS Kubernetes version for your Cinchy v5 platform.
Update your Cinchy platform to the latest version.
Confirm the latest Cinchy supported version of EKS. You can find the version number in the cinchy.devops.automations\aws-deployment.json as "cluster_version": "1.xx".
You must upgrade your EKS sequentially. For example, if you are on EKS cluster version 1.22 and wish to upgrade to 1.24, you must upgrade from 1.22 > 1.23 > 1.24.
Navigate to your cinchy.devops.automations\aws-deployment.json file.
Change the cluster_version key value to the EKS version you wish to upgrade to. (Example: "1.24")
Open a shell/terminal and navigate to the cinchy.devops.automations directory location.
Execute the following command:
Commit changes for all the repositories (cinchy.argocd, cinchy.kubernetes, cinchy.terraform and cinchy.devops.automation).
Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory location.
Execute the following command:
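If your AWS template follows the same pattern as the Azure one referenced elsewhere in this guide, the command is the create.sh wrapper around Terraform; otherwise run terraform init/plan/apply directly:

```bash
# From cinchy.terraform\aws\eks_cluster\CLUSTER_NAME
bash create.sh
```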
Verify the changes are as expected and then accept.
This process will first upgrade the managed master node and then the worker nodes. During the upgrade, existing pods are automatically migrated to the new, upgraded worker nodes.
The below two commands can be used to verify that all pods are being migrated to new worker nodes.
To show both old and new nodes:
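For example (the AGE and VERSION columns help distinguish the old and new nodes):

```bash
kubectl get nodes
```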
To show all the pods on the new worker nodes and the old worker nodes:
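For example, using -o wide so the NODE column shows which worker node each pod landed on:

```bash
kubectl get pods --all-namespaces -o wide
```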
For EKS version 1.24, the metrics server goes into a crash loop. Reinstalling the metrics server will fix this, should you encounter it during your upgrade.
In a code editor, open the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME\new-vpc.tf or existing-VPC.tf file.
Find the enable_metrics_server key and set its value to false.
Open a new shell/terminal and navigate to the cinchy.terraform\aws\eks_cluster\CLUSTER_NAME directory.
Run the below command to remove the metrics server:
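Assuming the same create.sh Terraform wrapper used elsewhere in this guide (or run terraform apply directly), re-running it with enable_metrics_server set to false removes the metrics server:

```bash
bash create.sh
```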
Revert the enable_metrics_server key value from step 1 to true.
Run the below command within the same shell/terminal as step 3 to redeploy the metrics server:
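Again assuming the create.sh wrapper; with enable_metrics_server reverted to true, this re-creates the metrics server:

```bash
bash create.sh
```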
The Kubernetes project runs a community-owned image registry called registry.k8s.io to host its container images. On April 3rd, 2023, the registry k8s.gcr.io was deprecated and no further images for Kubernetes and related subprojects are being pushed to this location.
Instead, there is a new registry: registry.k8s.io.
New Cinchy Deployments: this change will be automatically reflected in your installation.
For Current Cinchy Deployments: please follow the instructions outlined in this upgrade guide to ensure your components are pointed to the correct image repo.
You can review the full details on this change here: https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/
In a shell/terminal, run the below command to get a list of all the pods that are pointing to the old registry: k8s.gcr.io. These will need to be updated to point to the new image registry.
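One way to do this is to list every container image in the cluster and filter for the old registry:

```bash
# List namespace, pod and image for every pod, then filter for the deprecated registry
kubectl get pods --all-namespaces \
  -o jsonpath="{range .items[*]}{.metadata.namespace}{'\t'}{.metadata.name}{'\t'}{.spec.containers[*].image}{'\n'}{end}" \
  | grep k8s.gcr.io
```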
Once you find which pods are using the old image registry, you need to update their Deployment/DaemonSet/StatefulSet with the new registry.k8s.io registry. Find the right resource type for your pods with the below command:
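For example, you can search the common workload types for the pod's name prefix (replace the placeholder with your own pod name):

```bash
kubectl get deployments,daemonsets,statefulsets --all-namespaces | grep <pod-name-prefix>
```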
Once you have your pod name and resource type, run the below command to open the manifest file in an editor:
The below command uses the autoscaler as an example. You will want to populate the command with your correct pod and resource type.
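A sketch of the edit command for the autoscaler example; the resource name and namespace below are typical but may differ in your cluster:

```bash
kubectl edit deployment cluster-autoscaler -n kube-system
```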
In the editor, find any instances of k8s.gcr.io and replace it with registry.k8s.io.
Save and close the file.
Repeat steps 3-5 for the rest of your pods until they're all pointing to the correct registry.
When it comes time to upgrade your various components, you can do so by following the below instructions.
Warning: Upgrading to v5.4 will require you to take your Cinchy platform offline. We recommend you perform this upgrade during off-peak hours.
Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.4, you must first run a mandatory process (Upgrade 5.2) and deploy version 5.2. Once complete, you can continue on with your 5.4 upgrade.
If you are upgrading to 5.4+ on an SQL Server database, you will need to make a change to your connectionString. Adding TrustServerCertificate=True will allow you to bypass the certificate chain during validation.
For a Kubernetes deployment, you can add this value in your deployment.json file:
Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.4 k8s-template.zip file.
Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
If you have the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, keep it as is. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
Navigate to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and any custom changes you may have implemented.
Open the new Cinchy v5.4 k8s-template.zip file you downloaded from the Cinchy Releases table.
Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute that name wherever deployment.json appears in this document.
Starting in Cinchy v5.4, you have the option between Alpine or Debian based image tags for the listener, worker, and connections. Using Debian tags allows a Kubernetes deployment to connect to a DB2 data source; select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
Commit all your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.
When the new version is started for the first time, one node is made responsible for the migration of the entire database. This process can take upwards of 30 minutes to complete and your system will be unavailable during this time.
This page details the process for upgrading your AKS instance once you've deployed Cinchy v5 on Kubernetes.
AKS (Azure Kubernetes Service) is a managed Kubernetes service used when deploying Cinchy v5 on Azure. Once you have deployed your v5 instance of Cinchy, you may want to upgrade your AKS to a newer version. You can do so with the below guide.
Before proceeding with the AKS version upgrade, ensure that you have enough IP address space, as this process will need to start new nodes and pods.
You will need more than 1024 IP addresses in total when you have a single Cinchy instance/environment.
If you have limited IP address space, you can upgrade the master node and then the worker node pools one by one.
Before proceeding, make sure to check with Cinchy support which versions of Kubernetes are supported by this process.
Navigate to your ArgoCD Dashboard.
Find and click on your Istio app > disable auto-sync.
Find and click on your Istio-ingress app > disable auto-sync.
Open a terminal and run the following command to get the currently available Kubernetes versions for AKS, inputting your AKS cluster name and resource group where specified:
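For example, using the Azure CLI (substitute your own resource group and cluster name):

```bash
az aks get-upgrades --resource-group <resource-group> --name <aks-cluster-name> --output table
```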
Within your cinchy.devops.automations folder/repo, navigate to the deployment-azure.json file and change the values for kubernetes_version and orchestrator_version to the currently available Kubernetes version found in step 4.
Within the cinchy.devops.automations folder/repo, run the following command to push your new values:
Within your cinchy.terraform/azure/aks_cluster/ directory, run bash create.sh and accept the change.
AKS has three node pools, one node pool per availability zone. Terraform will now start the upgrade of the master node and the zone1 node pool.
Verify which nodes the istio-ingressgateway and istiod pods are running on with the below command.
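For example, assuming Istio is installed in the istio-system namespace, -o wide shows the node each pod is scheduled on:

```bash
kubectl get pods -n istio-system -o wide
```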
In the output you will see aks-zone1/2/3-xxxxx under the Nodes column.
If one or both pods are running on aks-zone1-xxxxx, then you need to scale up to 2 replicas by following steps 12-14. If not, skip to step 15.
Use the following command to start a replica on the zone2 or zone3 nodes; once you scale back down, the pod will terminate from the zone1 nodes being upgraded.
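A sketch of the scale-up, assuming the default deployment names and the istio-system namespace; set the replica count to one more than your current count so a new pod starts on another zone's node:

```bash
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=2
kubectl scale deployment istiod -n istio-system --replicas=2
```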
Verify that the new pods are running on another node pool by running the following command:
Run the following command to scale it back. This will remove the pod from the cordoned node:
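For example, returning to the original replica count (shown here as 1; use whatever your environment ran before the scale-up):

```bash
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=1
kubectl scale deployment istiod -n istio-system --replicas=1
```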
Once the master node and zone1 upgrade is done, you will see a message such as:
The Terraform script will continue with the zone2 and zone3 node pool upgrade. You will see messages like the below when the zone2 and zone3 node pool upgrade is in process:
Repeat steps 11-13. This will make sure that istio-ingressgateway and istiod are running on zone1 nodes.
Return to your ArgoCD dashboard and re-enable auto-sync for the Istio and Istio-ingress apps.
Turn off your Cinchy platform. In a Kubernetes deployment,
Perform this step only if you are upgrading to 5.4+ on an SQL Server database. Navigate to your cinchy_instance_configs section > database_connection_string, and add the following value to the end of your string: TrustServerCertificate=True
Turn your platform back on. In a Kubernetes deployment,
The major changes for the 5.7 Kubernetes upgrade are the following:
Azure AKS and AWS EKS are now supported up to version 1.27
Upgraded ArgoCD from 2.1.7 to v2.7.6
Upgraded Istio from 1.3.1 to 1.18.0
Upgraded OpenSearch from 1.2.0 to 2.13.1
Upgraded Logging Operator from 3.17.2 to 4.2.2
Upgraded Kube Prometheus Stack from 17.2.2 to 47.0.0
Upgraded Strimzi Kafka Operator from 0.1.0 to 0.34.0
New app Kafka UI 0.7.1
OpenSearch Index creation based on date format
To upgrade your various components, follow the instructions below in the order presented.
If you have made custom changes to your deployment file structure, please contact your Support team before you upgrade your environments.
If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.7, you must run Upgrade 5.2 using the Cinchy Utility and deploy version 5.2.
If you are upgrading from 5.2 or higher, follow the 5.7 upgrade instructions below, then use the Cinchy Utility and deploy the target version using the -v "X.X" argument.
Go to your cinchy.argocd repository. Delete all existing folder structure except for the .git folder/directory and any custom changes you may have implemented.
Go to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git file.
If you have the cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file and it isn't commented out, don't change it. Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Go to your cinchy.terraform repository. Delete all existing folder structure except for the .git file.
Go to your cinchy.devops.automation repository. Delete all existing folder structure except for the .git file and your deployment.json.
Download and open the new Cinchy v5.7 k8s-template.zip file from the Cinchy Releases table and place the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform and cinchy.devops.automation repositories.
Go to the new aws.json/azure.json files and compare them with your current deployment.json file. All additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Update the Kubernetes version in your deployment.json. To upgrade EKS to a new version, you need to follow an upgrade sequence, installing each incremental version one by one. For example, you might need to upgrade from 1.24 to 1.25, then from 1.25 to 1.26, and finally from 1.26 to 1.27.
You may have changed the name of the deployment.json file during your original platform deployment. If so, make sure that you substitute that name wherever deployment.json appears in this document.
In the 5.7 templates, the cluster-level components will upgrade to the latest version. You need to remove kube-prometheus-stack, the logging-operator app and kafka-cluster from ArgoCD. This change deletes your recent metrics from Grafana and you will only see the latest metrics after you deploy the new kube-prometheus-stack. The older CRDs created by the kube-prometheus-stack and logging-operator charts aren't removed by default during the upgrade and should be manually cleaned up with the below commands:
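As a sketch of the cleanup, the CRDs from these charts normally live under the monitoring.coreos.com and logging.banzaicloud.io API groups; list them first, confirm they are safe to remove, then delete them:

```bash
# List the leftover CRDs from kube-prometheus-stack and logging-operator
kubectl get crd | grep -E 'monitoring\.coreos\.com|logging\.banzaicloud\.io'

# Delete each one you confirmed is no longer needed, for example:
kubectl delete crd prometheuses.monitoring.coreos.com
```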
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
Commit all of your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD. Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
Validate that the ArgoCD pods are running and check that ArgoCD is upgraded to v2.7.6 by accessing the ArgoCD application console.
Execute the following command to deploy cluster components and Cinchy components:
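Assuming the standard script names from the cinchy.argocd template:

```bash
bash deploy_cluster_components.sh
bash deploy_cinchy_components.sh
```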
You might see a couple of ArgoCD apps out of sync because you deleted the logging operator. Sync them manually. Redis will take a few minutes to recover.
To upgrade the AWS EKS and Azure AKS version from 1.24 up to 1.27.x, you have two methods. The method depends on the status of the subnet CIDR range. The CIDR is a blocker for Azure only. For AWS, export credentials; for Azure, run the az login command, if required.
If your AKS subnet CIDR range is larger than 10.10.0.0/22, then you can use the 1.25.x and later AKS upgrade without much downtime or AKS resource teardown.
Go to your cinchy.devops.automations repository and change the AKS/EKS version in deployment.json (or <cluster name>.json) within the same directory.
From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
The AWS deployment updates a folder named eks_cluster in the Terraform > AWS directory. Within that directory is a subdirectory with the same name as the created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
The Azure deployment updates a folder named aks_cluster within the Terraform > Azure directory. Within that directory is a subdirectory with the same name as the created cluster.
For AWS, export credentials; for Azure, run the az login command, if required.
Run the command below to start the upgrade process. Make sure to verify the changes before you select yes to proceed. This shouldn't delete or destroy any data; it runs an in-place deployment that will update the Kubernetes version.
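A minimal sketch, assuming the create.sh wrapper shipped with the cluster templates (review the Terraform plan before confirming):

```bash
# From the cluster directory, e.g. cinchy.terraform/aws/eks_cluster/<CLUSTER_NAME>
# or cinchy.terraform/azure/aks_cluster/<CLUSTER_NAME>
bash create.sh
```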
This section is only applicable to Azure deployments.
If you have a 10.10.0.0/22 CIDR range or smaller, then you won't be able to upgrade the AKS version to 1.25.x. The 10.10.0.0/22 CIDR range gives you 1024 IP addresses, which aren't enough to run more than 4 worker nodes. Most customers already run 3 worker nodes, and the upgrade process starts another 3 nodes, which will cause a failure.
The values below give the suggested IP address CIDR ranges. Cinchy recommends making your own choice based on your needs. Update these values in your deployment.json file:
Make sure you are connected to the appropriate cluster. Before you start the AKS upgrade process, you must delete all your apps from ArgoCD. This will delete the Cinchy apps and custom components from ArgoCD, including the load balancer as well as kafka-cluster.
To delete Cinchy apps, cluster components and ArgoCD:
From a terminal, change directory to cinchy.argocd and run the commands below sequentially. Make sure to change your cluster (directory) name and environment name in the below commands:
If the cluster components deletion takes longer than 10 minutes, run the command below. Check that all pods are deleted except the Kubernetes default namespace (kube-system) pods.
Verify the deletion of pods for all namespaces with the kubectl get pods -A command. If the namespaces and pods aren't deleted for some cluster components, then delete the namespace manually with the command kubectl delete ns NAMESPACE.
Change the AKS version in the DevOps automation tools deployment.json against the kubernetes_version and orchestrator_version key values. From a shell/terminal, go to the cinchy.devops.automations directory location and execute the following command:
In the Terraform > Azure directory, the aks_cluster folder should be updated with the new AKS version. In that directory is a subdirectory with the same name as the newly updated cluster.
Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository and run az login.
Execute the following command to create the cluster:
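Assuming the create.sh wrapper shipped with the cluster templates:

```bash
bash create.sh
```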
Before accepting the change, verify that it meets your expectations and protects your database and any other resources. This command can create, update, or destroy the vnet, subnet, AKS cluster, and AKS node groups, so review the planned changes carefully before proceeding.