In Cinchy v5.6, you are now able to run the Connections pod under a service account that uses an AWS IAM (Identity and Access Management) role, which is an IAM identity that you can create to have specific permissions and access to your AWS resources. To set up AWS IAM role authentication, please review the procedure below.
To check that you have an OpenID Connect (OIDC) provider set up for the cluster (the default for deployments made using the Cinchy automation process), run the below command within a terminal:
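A hedged sketch of the check, using the standard AWS CLI call for retrieving a cluster's OIDC issuer (substitute your own cluster name and region):

```bash
# Returns the cluster's OIDC issuer URL, e.g. https://oidc.eks.<region>.amazonaws.com/id/<ID>
aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION> \
  --query "cluster.identity.oidc.issuer" --output text
```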
The output should appear like the below. Make sure to note this down for later use.
Log in to your AWS account and create an IAM Role policy through the AWS UI. Ensure that it has S3 access.
Run the below command in a terminal to create a service account with the role created in step 3. If your cluster name contains a special character, such as an underscore, skip to the next section.
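A hedged sketch of what this step might look like with recent versions of eksctl; the service account name and namespace below are placeholders, not Cinchy-prescribed values:

```bash
# Creates a Kubernetes service account bound to an existing IAM role (IRSA).
eksctl create iamserviceaccount \
  --cluster <CLUSTER_NAME> \
  --namespace <NAMESPACE> \
  --name <SERVICE_ACCOUNT_NAME> \
  --attach-role-arn arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME> \
  --approve
```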
If your cluster name has a special character, like an underscore, you will need to create and apply the YAML. Follow section 1 up until step 4, and then follow the below procedure.
In an IDE (Visual Studio, VS Code), create a new file titled my-service-account.yaml in your working directory. It should contain the below content.
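A minimal sketch of what this file might contain; the service account name and namespace are placeholders you should replace with your own values:

```yaml
# my-service-account.yaml (placeholder names)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <SERVICE_ACCOUNT_NAME>
  namespace: <NAMESPACE>
```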
In a terminal, run the below command:
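Assuming the file above, this is likely a straightforward apply:

```bash
kubectl apply -f my-service-account.yaml
```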
In an IDE (Visual Studio, VS Code), create a new file titled trust-relationship.json in your working directory. It should contain the below content.
For example:
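A hedged example of a typical IRSA trust policy; the account ID, region, OIDC provider ID, namespace, and service account name are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
        }
      }
    }
  ]
}
```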
Execute the following command to create the role, referencing the above .json file:
For example:
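For instance, assuming a hypothetical role name of cinchy-connections-role:

```bash
aws iam create-role --role-name cinchy-connections-role \
  --assume-role-policy-document file://trust-relationship.json \
  --description "Role assumed by the Cinchy Connections pod"
```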
Execute the following command to attach the IAM policy to your role:
For example:
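For instance, reusing the hypothetical role name above and a placeholder ARN for the S3 policy created earlier:

```bash
aws iam attach-role-policy --role-name cinchy-connections-role \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<YOUR_S3_POLICY_NAME>
```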
Execute the following command to annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume:
For example:
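For instance, using the standard IRSA annotation key with placeholder values:

```bash
kubectl annotate serviceaccount -n <NAMESPACE> <SERVICE_ACCOUNT_NAME> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/cinchy-connections-role
```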
Confirm that the role and service account are correctly configured by verifying the output of the following commands:
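A hedged sketch of the verification, assuming the placeholder names used above:

```bash
# The service account should show the eks.amazonaws.com/role-arn annotation.
kubectl describe serviceaccount <SERVICE_ACCOUNT_NAME> -n <NAMESPACE>

# The role's trust policy should reference the OIDC provider and the service account.
aws iam get-role --role-name cinchy-connections-role \
  --query "Role.AssumeRolePolicyDocument"
```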
To ensure that the Connections pod's role has the correct permissions, the role specified by the user in AWS must have its Trusted Relationships configured as such:
To confirm that the Connections app is using the service account:
Navigate to the cinchy.kubernetes repository > connections/kustomization.yaml file.
Execute the following:
From a terminal, run the below command:
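One way to check this (a hedged sketch, not necessarily the exact command used in the original runbook) is to look for the environment variables that EKS injects into pods running under an IAM-backed service account:

```bash
# Pods using IRSA are injected with AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.
kubectl exec -n <NAMESPACE> <CONNECTIONS_POD_NAME> -- env | grep AWS
```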
The output should look like the following:
This page details how to change your File Storage configuration in Cinchy v5 to S3, Azure Blob, or Local.
In v5.2, Cinchy implemented the ability to free up database space by using S3-compatible or Azure Blob Storage for file storage. You can set this configuration in the deployment.json of a Kubernetes installation, or the appsettings.json of an IIS installation.
If you are using a Kubernetes deployment, you will change your file storage config in the deployment.json.
Navigate to the object storage section, where you will see either S3 or Azure Blob storage, depending on your deployment structure.
To use Blob Storage or S3, update each line with your own parameters.
To use Local storage, leave each line blank except for the Connections_Storage_Type, which you should set to Local:
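Purely as a hedged illustration of the shape of this section; apart from Connections_Storage_Type, the key names below are placeholders, and the authoritative names are the ones already present in your deployment.json:

```json
{
  "Connections_Storage_Type": "Local",
  "<s3_or_blob_setting_1>": "",
  "<s3_or_blob_setting_2>": ""
}
```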
Run the deployment script by using the following command in the root directory of your devops.automations repository:
Commit and push your changes.
If you are using an IIS deployment, you will change your file storage config in the Cinchy Web AppSettings file.
Locate the StorageType section of the file and set it to either Local, AzureBlobStorage, or S3.
If you selected AzureBlobStorage, fill out the following lines in the same file:
If you selected S3, fill out the following lines in the same file:
This page details the optional steps that you can take to use self-signed SSL Certificates in a Kubernetes Deployment of Cinchy.
Follow this process after you run the devops.automations script during your initial deployment, and again each time you rerun the script (such as when updating your Cinchy platform), because the script wipes out any custom configuration you set up for a self-signed certificate.
Execute the following commands in any folder to generate the self-signed certificate:
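A hedged sketch using OpenSSL; the file names match the rootCA.crt referenced later on this page, while the subject and validity period are placeholders:

```bash
# Generate a private key and a self-signed root CA certificate (valid for ~1 year here).
openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
  -keyout rootCA.key -out rootCA.crt \
  -subj "/CN=<YOUR_DOMAIN>"
```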
Create a YAML file located at cinchy.kubernetes/platform_components/base/self-signed-ssl-root-ca.yaml.
Add the following to the YAML file:
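A hedged sketch of the kind of content this file holds, assuming a ConfigMap keyed by the certificate file name (the ConfigMap name below is a placeholder; the data key must match your root CA cert file name, as noted later on this page):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: self-signed-ssl-root-ca
data:
  rootCA.crt: |
    -----BEGIN CERTIFICATE-----
    <contents of your rootCA.crt>
    -----END CERTIFICATE-----
```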
Add the self-signed root CA cert file to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base folder.
Add the YAML code snippet to the cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/base/kustomization.yaml file, changing the files key value below to match your root CA cert file name:
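A hedged sketch of the kind of snippet involved, assuming a configMapGenerator entry (the generator name is a placeholder; the files value must match your root CA cert file name):

```yaml
configMapGenerator:
  - name: self-signed-ssl-root-ca
    files:
      - rootCA.crt
```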
Add the following line to the cinchy.kubernetes/platform_components/base/kustomization.yaml file:
Add the below Deployment patchesJson6902 to each of your cinchy.kubernetes/environment_kustomizations/cinchy_nonprod/ENV_NAME/PLATFORM_COMPONENT_NAME/kustomization.yaml files, except base.
Ensure that the rootCA.crt file name is matched with ConfigMap data, configMapGenerator files, and the patch subpath.
Once the changes are deployed, verify that the root CA cert is available on the pod under /etc/ssl/certs with the below command. Make sure to input your own POD_NAME and NAMESPACE:
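A hedged sketch of the check:

```bash
# Replace POD_NAME and NAMESPACE with your own values, then look for your root CA cert in the output.
kubectl exec -n <NAMESPACE> <POD_NAME> -- ls /etc/ssl/certs
```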
For further reference material, see the linked article on self-signed certificates in Kubernetes.
This page details the installation instructions for deploying Cinchy v5 on Kubernetes
This page details the instructions for deploying Cinchy v5 on Kubernetes. We recommend, and have documented below, doing so via Terraform and ArgoCD. This setup uses a utility to centralize and streamline your configurations.
The Terraform scripts and instructions provided enable deployment on Azure and AWS cloud environments.
To install Cinchy v5 on Kubernetes, you need to follow the requirements below. Some requirements depend on whether you deploy on Azure or on AWS.
These prerequisites apply whether you are installing on Azure or on AWS.
You must create the following four Git repositories. You can use any source control platform that supports Git, such as GitLab, Azure DevOps, and GitHub.
cinchy.terraform: Contains all Terraform configurations.
cinchy.argocd: Contains all ArgoCD configurations.
cinchy.kubernetes: Contains cluster and application component deployment manifests.
cinchy.devops.automations: Contains the single configuration file and binary utility that maintains the contents of the above three repositories.
Download the artifacts for the four Git repositories. See here for information on accessing these. Check in the contents of each directory to its respective repository.
You must have a service account with read/write permissions to the git repositories created above.
Install the following tools on the deployment machine:
For an introduction to Terraform + AWS, see this Get started Guide.
For an introduction to Terraform + Azure, see this Get started Guide.
kubectl (v1.23.0+)
.NET Core 6 is required for Cinchy v5.8 and higher.
If you are using Cinchy docker images, pull them.
Starting in Cinchy v5.4, you have the option of Alpine or Debian based image tags for the listener, worker, and connections. Using the Debian tags allows a Kubernetes deployment to connect to a DB2 data source; select that option if you plan on leveraging a DB2 data sync.
When either installing or upgrading your platform, you can use the following Docker image tags for the listener, worker, and connections:
"5.x.x" - Alpine
"5.x.x-debian" - Debian
You will need a single domain for accessing ArgoCD, Grafana, OpenSearch Dashboard, and any deployed Cinchy instances. You have two routing options for accessing these applications - path based or subdomains. See below for an example with multiple Cinchy instances:
You will need an SSL certificate for the cluster. This should be a wildcard certificate if you will use subdomain based routing. You can also use Self-Signed SSL.
If you are deploying Cinchy v5 on Azure, you require the following:
A resource group that will contain the Azure Blob Storage with the terraform state.
A storage account and container (Azure Blob Storage) for persisting terraform state.
Install the Azure CLI on the deployment machine. It must be set to the correct profile/login.
The deployment template has two options available:
Use an existing resource group.
Create a new one.
If you prefer an existing resource group, you must provision the following before the deployment:
The resource group.
A virtual network (VNet) within the resource group.
A single subnet. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /22 to provide a range of 1024 addresses.
If you prefer a new resource group, all resources will be automatically provisioned.
The quota limit of the Total Regional vCPUs and the Standard DSv3 Family vCPUs (or equivalent) must offer enough availability for the required number of vCPUs (minimum of 24).
An AAD user account to connect to Azure, which has the necessary privileges to create resources in any existing resource groups and the ability to create a resource group (if required).
If you are deploying Cinchy v5 on AWS, you require the following:
An S3 bucket that will contain the terraform state.
Install the AWS CLI on the deployment machine. It must be set to the correct profile/login.
The template has two options available:
Use an existing VPC.
Create a new one.
If you prefer an existing VPC, you must provision the following before the deployment:
The VPC. It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /21 to provide a range of 2048 IP addresses.
3 Subnets (one per AZ). It's important that the range be enough for all executing processes within the cluster, such as a CIDR ending with /23 to provide a range of 512 IP addresses.
If the subnets are private, a NAT Gateway is required to enable node group registration with the EKS cluster.
If you prefer a new VPC, all resources will be automatically provisioned.
The limit of the Running On-Demand All Standard vCPUs must offer enough availability for the required number of vCPUs (minimum of 24).
An IAM user account to connect to AWS which has the necessary privileges to create resources in any existing VPC and the ability to create a VPC (if required).
You must import the SSL certificate into AWS Certificate Manager, or request a new certificate via AWS Certificate Manager.
If you are importing it, you will need the PEM-encoded certificate body and private key. You can get the PEM file from your chosen domain provider (GoDaddy, Google, etc.). Read more on this here.
Tips for Success:
Ensure you have the same region configuration across your SSL Certificate, your Terraform bucket, and your deployment.json in the next step of this guide.
The following steps detail the instructions for setting up the initial configurations.
Navigate to your cinchy.devops.automations repository where you will see an aws.json and azure.json.
Depending on the platform that you are deploying to, select the appropriate file and copy it into a new file named deployment.json (or <cluster name>.json) within the same directory.
This file will contain the configuration for the infrastructure resources and the Cinchy instances to deploy. Each property within the configuration file has comments in-line describing its purpose along with instructions on how to populate it.
Follow the guidance within the file to configure the properties.
Commit and push your changes.
Tips for Success:
You can return to this step at any point in the deployment process if you need to update your configurations. Simply rerun through the guide sequentially after making any changes.
The deployment.json will ask for your repository username and password, but ArgoCD may have errors when retrieving your credentials in certain situations (ex: if using GitHub). To verify if your credentials are working, navigate to the ArgoCD Settings after you have deployed Argo in this guide. To avoid errors, Cinchy recommends using a Personal Access Token instead.
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
From a shell/terminal, navigate to the cinchy.devops.automations directory location and execute the following command:
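A hedged sketch of the invocation, assuming the utility is the .NET binary shipped with the repository (the assembly name may differ in your artifact):

```bash
dotnet Cinchy.DevOps.Automations.dll "deployment.json"
```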
If the file created in "Configuring the Deployment.json" step 2 has a name other than deployment.json
, the reference in the command will will need to be replaced with the correct name of the file.
The console output should have the following message:
The following steps detail how to deploy Terraform.
If deploying on AWS: Within the Terraform > AWS directory, a new folder named eks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution. This applies to everything within step 4 of this guide.
If deploying on Azure: Within the Terraform > Azure directory, a new folder named aks_cluster is created. Nested within it is a subdirectory with the same name as the newly created cluster.
To perform terraform operations, the cluster directory must be the working directory during execution.
Launch a shell/terminal with the working directory set to the cluster directory within the cinchy.terraform repository.
If you are using AWS, run the following commands to authenticate the session:
For Azure, run the following command and follow the on screen instructions to authenticate the session:
Execute the following command to create the cluster:
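If you are invoking Terraform directly, this is typically the standard init/apply sequence (a sketch; your repository may wrap these commands in a script):

```bash
terraform init
terraform apply
```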
Type yes when prompted to apply the terraform changes.
The resource creation process can take about 15 to 20 minutes. At the end of the execution there will be a section with the following header:
If deploying on AWS, this section will contain 2 values: Aurora RDS Server Host and Aurora RDS Password.
If deploying on Azure, this section will contain a single value: Azure SQL Database Password.
These variable values are required to update the connection string within the deployment.json file (or equivalent) in the cinchy.devops.automations repository.
The following section breaks down how to retrieve your SSH keys for both AWS and Azure deployments.
SSH keys should be saved for future reference if a connection needs to be established directly to a worker node in the Kubernetes cluster.
The SSH key to connect to the Kubernetes nodes is maintained within the terraform state and can be retrieved by executing the following command:
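A hedged sketch, assuming the key is exposed as a Terraform output (the output name is a placeholder; run terraform output first to see what your configuration exposes):

```bash
# List the outputs defined by the cluster configuration, then write the key to a file.
terraform output
terraform output -raw <ssh_key_output_name> > cluster_ssh_key.pem
```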
The SSH key is output to the directory containing the cluster terraform configurations.
The following section pertains to updating the Deployment.json file.
Navigate to the deployment.json (created in step 3.1) > cinchy_instance_configs section.
Each object within represents an instance that will be deployed on the cluster. Each instance configuration has a database_connection_string property. This has placeholders for the host name and password that must be updated using output variables from the previous section.
For Azure deployments, the host name isn't available as part of the terraform output and instead must be sourced from the Azure Portal.
The terraform script will create an S3 bucket for the cluster that must be accessible to the Cinchy application components.
To access this programmatically, an IAM user that has read/write permissions to the new S3 bucket is required. This can be an existing user.
The Access Key and Secret Access Key for the IAM user must be specified under the object_storage section of the deployment.json.
Within the deployment.json, the azure_blob_storage_conn_str must be set.
The in-line comments outline the commands required to source this value from the Azure CLI.
If you have the key_vault_secrets_provider_enabled=true value in the azure.json, then the below secrets files would have been created during the execution of step 3.2:
You will need to add the following secrets to your Azure Key Vault:
worker-secret-appsettings-<cinchy_instance_name>
web-secret-appsettings-<cinchy_instance_name>
maintenance-cli-secret-appsettings-<cinchy_instance_name>
idp-secret-appsettings-<cinchy_instance_name>
forms-secret-config-<cinchy_instance_name>
event-listener-secret-appsettings-<cinchy_instance_name>
connections-secret-config-<cinchy_instance_name>
connections-secret-appsettings-<cinchy_instance_name>
To create your new secrets:
Navigate to your key vault in the Azure portal.
Open your Key Vault Settings and select Secrets.
Select Generate/Import.
On the Create a Secret screen, choose the following values:
Upload options: Manual.
Name: Choose the secret name from the above list. They will all follow the format of: <app>-secret-appsettings-<cinchy_instance_name> or <app>-secret-config-<cinchy_instance_name>
Value: The value for the secret will be the content of each app JSON located in the cinchy.kubernetes\environment_kustomizations\nonprod<cinchy_instance_name>\secrets folder, as shown in the above image.
Content type: JSON
Leave the other values at their defaults.
Select Create.
Once you receive the message that the first secret has been successfully created, you may proceed to create the other secrets. You must create a total of 8 secrets, as shown in the above list of secret names.
This utility updates the configurations in the cinchy.terraform, cinchy.argocd, and cinchy.kubernetes repositories.
From a shell/terminal, navigate to the cinchy.devops.automations directory and execute the following command:
If the file created in section 3 has a name other than deployment.json, the reference in the command will need to be replaced with the correct name of the file.
The console output should end with the following message:
The updates must be committed to Git before proceeding to the next step.
From a shell/terminal run the following command, replacing <region> and <cluster_name> with the accurate values for those placeholders:
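This is typically the standard kubeconfig update command for EKS:

```bash
aws eks update-kubeconfig --region <region> --name <cluster_name>
```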
From a shell/terminal run the following commands, replacing <subscription_id>, <deployment_resource_group>, and <cluster_name> with the accurate values for those placeholders.
These commands with the values pre-populated can also be found from the Connect panel of the AKS Cluster in the Azure Portal.
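These are typically the standard Azure CLI commands for this purpose:

```bash
az account set --subscription <subscription_id>
az aks get-credentials --resource-group <deployment_resource_group> --name <cluster_name>
```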
Verify that the connection has been established and the context is the correct cluster by running the following command:
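A typical check (a sketch; either command below confirms the active cluster):

```bash
# The current context should be the newly created cluster, and its nodes should be listed.
kubectl config current-context
kubectl get nodes
```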
In this step, you will deploy and access ArgoCD.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy ArgoCD:
Monitor the pods within the ArgoCD namespace by running the following command every 30 seconds until they all move into a healthy state:
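For example, assuming ArgoCD was deployed into the default argocd namespace:

```bash
kubectl get pods -n argocd
```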
Launch a new shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to access ArgoCD:
This script creates a port forward using kubectl to enable ArgoCD to be accessed at http://localhost:9090.
The credentials for ArgoCD's portal are output in Base64 at the start of the access_argocd script execution. The Base64 value must be decoded to get the login credentials to use for the http://localhost:9090 endpoint.
In this step, you will deploy your cluster components.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy the cluster components using ArgoCD:
Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (this may take a few minutes).
Tips for Success:
If your pods are degraded or failed to sync, refresh or resynchronize your components. You can also delete pods and ArgoCD will automatically spin them back up for you.
Check that ArgoCD is pulling from your git repository by navigating to your Settings.
If your components are failing upon attempting to pull an image, refer to your deployment.json to check that each component is set to the correct version number.
Execute the following command to get the External IP used by the Istio ingress gateway.
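A hedged sketch, assuming the Istio ingress gateway was deployed with its default service name and namespace:

```bash
# The EXTERNAL-IP column of this service is the address to use for DNS entries.
kubectl get svc istio-ingressgateway -n istio-system
```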
DNS entries must be created using the External IP for any subdomains / primary domains that will be used, including OpenSearch, Grafana, and ArgoCD.
The default path to access OpenSearch, unless you have configured it otherwise in your deployment.json, is <baseurl>/dashboard
The default credentials for accessing OpenSearch are admin/admin. We recommend that you change these credentials the first time you log in to OpenSearch.
To change the default credentials for Cinchy v5.4+, follow the documentation here.
To change the default credentials and/or add new users for all other deployments, follow this documentation or navigate to Settings > Internal Roles in OpenSearch.
The default path to access Grafana, unless you have configured it otherwise in your deployment.json, is <baseurl>/grafana
The default username is admin. The default password for accessing Grafana can be found by doing a search of adminPassword within the following path: cinchy.kubernetes/cluster_components/metrics/kube-prometheus-stack/values.yaml
We recommend that you change these credentials the first time you access Grafana. You can do so through the admin profile once logged in.
In this step, you will deploy your Cinchy components.
Launch a shell/terminal with the working directory set to the root of the cinchy.argocd repository.
Execute the following command to deploy the Cinchy application components using ArgoCD:
Navigate to ArgoCD at http://localhost:9090 and login. Wait until all components are healthy (this may take a few minutes).
You will be able to access ArgoCD through the URL that you configured in your deployment.json, as long as you created a DNS entry for it in step 8.2.
You have now finished the deployment steps required for Cinchy. Navigate to your configured domain URL to verify that you can login using the default username (admin) and password (cinchy).
## Troubleshooting
If ArgoCD Application Sync is stuck waiting for PreSync jobs to complete, you can run the below command to restart the application controller.
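A hedged sketch of one common way to do this, assuming a recent ArgoCD version where the application controller runs as a StatefulSet in the argocd namespace:

```bash
kubectl rollout restart statefulset argocd-application-controller -n argocd
```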
There might be times when you want to temporarily disable your Kubernetes pods to perform maintenance or upgrades. You can do so through the following steps:
Access your ArgoCD.
Navigate to the application directory for the namespace you wish to disable, in this case development-cinchy (Image 1). You should see your cluster component applications.
Select the main application (development-cinchy) (Image 2).
Navigate to Summary > Sync Policy > Automated, then select Disable Auto-Sync > OK (Image 3).
For each of the cluster applications that you wish to disable, select the "..." > Delete (Image 5).
Your apps should all appear as "out of sync" (Image 6).
To re-enable your applications, return to the application directory for your disabled namespace (Image 7).
Select the main application (i.e. development-cinchy) (Image 8).
Navigate to Summary > Sync Policy, then select Enable Auto-Sync > OK (Image 9).
| Application | Path Based Routing | Subdomain Based Routing |
|---|---|---|
| Cinchy 1 (DEV) | domain.com/dev | dev.mydomain.com |
| Cinchy 2 (QA) | domain.com/qa | qa.mydomain.com |
| Cinchy 3 (UAT) | domain.com/uat | uat.mydomain.com |
| ArgoCD | domain.com/argocd | cluster.mydomain.com/argocd |
| Grafana | domain.com/grafana | cluster.mydomain.com/grafana |
| OpenSearch | domain.com/dashboard | cluster.mydomain.com/dashboard |