When it comes time to upgrade your various components, you can do so by following the instructions below.
Warning: Upgrading to v5.4 requires taking your Cinchy platform offline. We recommend performing this upgrade during off-peak hours.
Warning: If you are upgrading from Cinchy v5.1 or lower to Cinchy v5.4, you must first run a mandatory process (Upgrade 5.2) using the Cinchy Utility and deploy version 5.2. Once that is complete, you can continue with your 5.4 upgrade.
If you are upgrading to 5.4+ on a SQL Server database, you will need to make a change to your connectionString: adding TrustServerCertificate=True allows you to bypass certificate chain validation.
For a Kubernetes deployment, you can add this value in your deployment.json file:
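For example (a sketch only; the placeholder values are illustrative and your connection string will differ), the flag is appended to the end of the database_connection_string under the cinchy_instance_configs section:

```json
{
  "cinchy_instance_configs": {
    "database_connection_string": "Server=<server>;Database=<database>;User ID=<user>;Password=<password>;TrustServerCertificate=True"
  }
}
```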
Download the latest Cinchy artifacts from the Cinchy Releases table > Kubernetes Artifacts column (Image 1). For this upgrade, download the Cinchy v5.4 k8s-template.zip file.
Turn off your Cinchy platform. In a Kubernetes deployment, you can do so via ArgoCD.
Navigate to your cinchy.argocd repository. Delete the existing folder structure except for the .git directory and any custom changes you may have implemented.
Navigate to your cinchy.kubernetes repository. Delete the existing folder structure except for the .git directory and any custom changes you may have implemented.
If your cinchy.kubernetes\cluster_components\servicemesh\istio\istio-injection\argocd-ns.yaml file exists and is not commented out, keep it as is. Changing it will delete your ArgoCD namespace, which will force you to delete everything from Kubernetes and redeploy.
Navigate to your cinchy.terraform repository. Delete the existing folder structure except for the .git directory and any custom changes you may have implemented.
Navigate to your cinchy.devops.automations repository. Delete the existing folder structure except for the .git directory and any custom changes you may have implemented. (A command sketch for clearing a repository this way follows these steps.)
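If you prefer to script this cleanup, a minimal sketch (run from the root of each repository, after setting aside any custom changes you need to keep) is:

```bash
# Delete everything in the repository root except the .git directory.
find . -mindepth 1 -maxdepth 1 ! -name '.git' -exec rm -rf {} +
```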
Open the new Cinchy v5.4 k8s-template.zip file you downloaded from the Cinchy Releases table.
Navigate to the new aws.json/azure.json files and compare them with your current deployment.json file. Any additional fields in the new aws.json/azure.json files should be added to your current deployment.json.
Note that you may have changed the name of the deployment.json file during your original platform deployment. If so, ensure that you substitute your file's name wherever deployment.json appears in this document.
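If you want a quick way to spot top-level fields that exist in the new template but not in your current file, one approach (a sketch only, assuming jq is installed and your file is still named deployment.json; nested differences will still need a manual review) is:

```bash
# Lines prefixed with "<" are top-level keys present only in the new aws.json.
diff <(jq -r 'keys[]' aws.json | sort) <(jq -r 'keys[]' deployment.json | sort)
```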
Starting in Cinchy v5.4, you can choose between Alpine-based and Debian-based image tags for the listener, worker, and connections. Debian tags allow a Kubernetes deployment to connect to a DB2 data source, so select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
Perform this step only if you are upgrading to 5.4+ on a SQL Server database. Navigate to your cinchy_instance_configs section > database_connection_string, and add the following value to the end of your string: TrustServerCertificate=True (as shown in the example near the top of this page).
Open a shell/terminal from the cinchy.devops.automations directory and execute the following command:
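In the standard Cinchy DevOps automations package, the command below regenerates the repositories from your configuration file. Verify the DLL name against your own directory, and substitute your file's name if you renamed deployment.json:

```bash
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```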
Commit all of your changes (if there were any) in each repository.
If there were any changes in your cinchy.argocd repository, you may need to redeploy ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
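The script name below matches the standard Cinchy k8s template; confirm it exists at the root of your cinchy.argocd repository before running:

```bash
bash deploy_argocd.sh
```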
If there were any changes to the cluster components, execute the following command from the cinchy.argocd repository:
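As with the ArgoCD deployment script, the name below comes from the standard template; verify it in your checkout:

```bash
bash deploy_cluster_components.sh
```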
If there were any changes to the Cinchy instance, execute the following command from the cinchy.argocd repository:
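Again, verify the script name against your own cinchy.argocd repository:

```bash
bash deploy_cinchy_components.sh
```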
Log in to your ArgoCD application console and refresh the apps to ensure that all changes were picked up.
Turn your platform back on. In a Kubernetes deployment, you can do so via ArgoCD.
When the new version starts for the first time, one node is made responsible for migrating the entire database. This process can take upwards of 30 minutes to complete, and your system will be unavailable during this time.