v5.6 (Kubernetes)
This page details the instructions for upgrading your Cinchy platform to v5.6 on Kubernetes
Upgrading on Kubernetes
When it comes time to upgrade your various components, you can do so by following the instructions below.
If you have made custom changes to your deployment file structure, please contact your Support team prior to upgrading your environments.
**Warning:** If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.6, you must first run a mandatory process (**Upgrade 5.2**) using the Cinchy Utility and deploy version 5.2.
If you are upgrading from Cinchy v5.3 or lower to Cinchy v5.6 on an SQL Server database, you will need to make a change to your connectionString. Adding `TrustServerCertificate=True` will allow you to bypass the certificate chain during validation.
For a Kubernetes deployment, you can add this value in your deployment.json file:
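A minimal sketch of how this might look in deployment.json, assuming a typical SQL Server connection string (the server, credential, and database values below are placeholders):

```json
"cinchy_instance_configs": {
  "database_connection_string": "Server=<server>;Database=<database>;User ID=<user>;Password=<password>;TrustServerCertificate=True"
}
```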
**Warning:** If you are upgrading from Cinchy v5.4 or lower to Cinchy v5.6, you must first run a mandatory process (**Upgrade 5.5**) using the Cinchy Utility and deploy version 5.5.
Prerequisites
Download the latest Cinchy Artifacts from the Cinchy Releases Table > Kubernetes Artifacts column (Image 1). For this upgrade, please download the Cinchy v5.6 k8s-template.zip file.
Review the template changes for this upgrade.
Configuring the newest version
Navigate to your cinchy.argocd repository. Delete all existing folder structure except for the .git directory and any custom changes you may have implemented.
Navigate to your cinchy.kubernetes repository. Delete all existing folder structure except for the .git directory.
If you have the cinchy.kubernetes/cluster_components/servicemesh/istio/istio-injection/argocd-ns.yaml file and its contents aren't commented out, keep it as is (a sketch of what this file typically contains follows below). Changing this will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
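A hypothetical sketch of what an uncommented argocd-ns.yaml typically contains: a Namespace manifest labeling argocd for Istio sidecar injection. Verify against your own file rather than assuming this exact content:

```yaml
# Hypothetical illustration: enables Istio sidecar injection for the argocd namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
  labels:
    istio-injection: enabled
```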
Navigate to your cinchy.terraform repository. Delete all existing folder structure except for the .git directory.
Navigate to your cinchy.devops.automations repository. Delete all existing folder structure except for the .git directory and your deployment.json.
Open the new Cinchy v5.6 k8s-template.zip file you downloaded from the Cinchy Releases table and check the files into their respective cinchy.kubernetes, cinchy.argocd, cinchy.terraform, and cinchy.devops.automations repositories.
Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute your file's name wherever deployment.json appears in this document.
Starting in Cinchy v5.4, you have the option between Alpine- or Debian-based image tags for the listener, worker, and connections. Using Debian tags will allow a Kubernetes deployment to connect to a DB2 data source, so select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections (a usage sketch follows the list):
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Perform this step only if you are upgrading to 5.6 on an SQL Server database and didn't already make this change in any previous updates.
Navigate to your `cinchy_instance_configs` section > `database_connection_string` and add the following value to the end of your string: `TrustServerCertificate=True`
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
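A sketch of the invocation, assuming the utility DLL is named Cinchy.DevOps.Automations.dll as in prior template versions; substitute your own file name if you renamed deployment.json:

```bash
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```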
Commit all of your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
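A sketch, assuming the template ships an ArgoCD deployment script named deploy_argocd.sh (verify the script name in your checkout):

```bash
bash deploy_argocd.sh
```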
If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
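For example, assuming the template's cluster components script is named deploy_cluster_components.sh:

```bash
bash deploy_cluster_components.sh
```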
If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
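For example, assuming the template's Cinchy components script is named deploy_cinchy_components.sh:

```bash
bash deploy_cinchy_components.sh
```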
Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.
Appendix A
Template changes (Kubernetes 5.6)
The AWS EKS version has been upgraded to support up to v1.24.
We've added support for AWS EKS EBS volume encryption. By default, EKS worker nodes will use the gp3 storage class.
For current Cinchy environments, you must keep your `eks_persistent_apps_storage_class` set to gp2 in your DevOps automation aws.json file. If you want to move to gp3 storage, or to gp3 storage with volume encryption, you will have to delete any existing volumes/PVCs for the Kafka, Redis, OpenSearch, Logging Operator, and Event Listener `statefulset`s. This ensures that ArgoCD will recreate the resources. If your Kafka cluster pods don't come back up, you must restart your Kafka operators.
You can verify the change by running the following command: `kubectl get pvc --all-namespaces`
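For instance, a sketch of the delete-and-verify flow; the PVC name and namespace below are hypothetical, so list the PVCs first to find the real ones in your cluster:

```bash
# List every PVC to identify the ones backing the affected statefulsets.
kubectl get pvc --all-namespaces

# Delete a PVC so ArgoCD recreates it against the new storage class (hypothetical name/namespace).
kubectl delete pvc data-kafka-0 --namespace <namespace>
```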
The Connections app has changed from a StatefulSet to a Deployment. The persistence volume has changed to `emptyDir`.
We've modified the replica count from 1 to 2 for istiod and istio ingress.
We've disabled Istio injection on the ArgoCD namespace.
If this is already enabled on your environment, you may keep it as is: keep the cinchy.kubernetes/cluster_components/servicemesh/istio/istio-injection/argocd-ns.yaml file as it is, without commenting out its contents.
The Istio namespace injection has been removed.
If this is already enabled on your environment, please keep it as is; otherwise, removing it will force you to redeploy all of your Kubernetes application components.
We've upgraded the AWS Secret Manager CSI Driver to the latest version to address crashing pods.
We've added support for the EKS EBS CSI driver in lieu of the in-tree EBS storage plugin.
We've changed the EKS Metrics server port number in order to support newer versions of Kubernetes.
We've pinned the AWS Terraform provider versions for all components.
We've installed the cluster autoscaler from local charts instead of remote charts.
The deprecated `azurerm_sql_server` Terraform resource has been changed to `azurerm_mssql_server`.
The deprecated `azurerm_sql_database` resource has been changed to `azurerm_mssql_database`.
The deprecated `azurerm_sql_failover_group` resource has been changed to `azurerm_mssql_failover_group`.
The deprecated `azurerm_sql_firewall_rule` resource has been changed to `azurerm_mssql_firewall_rule`.