Kubernetes Deployment Architecture
This page details the deployment architecture of Cinchy v5 when running on Kubernetes.
The diagram below shows a high-level overview of a possible infrastructure with components on the cluster; your specific configuration may vary (Image 1).
When deploying Cinchy v5 on Kubernetes, you may deploy via Amazon Web Services (AWS). The diagram below shows a high-level overview of a possible AWS infrastructure with components outside the cluster; your specific configuration may vary (Image 2).
When deploying Cinchy v5 on Kubernetes, you may deploy via Microsoft Azure. The diagram below shows a high-level overview of a possible Azure infrastructure with components outside the cluster; your specific configuration may vary (Image 3).
The following highlighted area provides a high-level overview of the cluster-level components used when deploying Cinchy on Kubernetes, as well as the versions they run.
These are created once per cluster. Clients may choose to run these components outside of the cluster or replace them with their own comparable components. This diagram shows them in the cluster (Image 4).
Service Mesh - Istio: All inbound traffic to your Cinchy instance is routed and handled through this component, keeping it secure and managed (a routing sketch follows this list).
Monitoring/Alerting - Prometheus & Grafana: Prometheus consumes metrics from the running components in your environment, which can then be visualized into user-friendly graphs and dashboards by Grafana. Prometheus can also connect to third party services to provide alerting capabilities (an example alerting rule follows this list). Both Prometheus and Grafana use persistent storage.
Logging - OpenSearch and Fluent Bit: All logs are captured and indexed by OpenSearch in a single, easily accessible location. These logs can be queried, searched, and filtered, and Correlation IDs mean that they can also be traced across various components (a log-shipping sketch follows this list). These logging components take advantage of persistent storage.
Caching - Redis: Redis is currently being used to facilitate a distributed lock using RedLock, which guarantees lock synchronizations across Cinchy instances. It is also a storage location for the execution output when running batch data syncs.
Event Processing - Kafka: This is designed to act as the middleware that allows for messaging between components through a queuing mechanism (a topic sketch follows this list). Kafka features persistent storage.
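For the service mesh, the following is a minimal sketch of how inbound traffic could be routed to a Cinchy instance through an Istio Gateway and VirtualService. The hostname, namespaces, TLS secret, and service name are placeholders, not values produced by the Cinchy deployment scripts.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cinchy-gateway                # placeholder name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway             # bind to Istio's default ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: cinchy-tls    # placeholder TLS secret
      hosts:
        - cinchy.example.com          # placeholder hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cinchy-web
  namespace: cinchy-dev               # placeholder environment namespace
spec:
  hosts:
    - cinchy.example.com
  gateways:
    - istio-system/cinchy-gateway
  http:
    - route:
        - destination:
            host: cinchy-web.cinchy-dev.svc.cluster.local   # placeholder service
            port:
              number: 80
```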
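For monitoring and alerting, a rule like the sketch below could notify you when a pod stops reporting ready. It assumes the Prometheus Operator (for example, the kube-prometheus-stack chart) and kube-state-metrics are installed; the rule name, namespace, and label selector are illustrative.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cinchy-availability           # placeholder rule name
  labels:
    release: prometheus               # match your Prometheus rule selector
spec:
  groups:
    - name: cinchy.availability
      rules:
        - alert: CinchyPodNotReady
          # kube-state-metrics reports 0 when a pod's Ready condition is false
          expr: kube_pod_status_ready{condition="true", namespace="cinchy-dev"} == 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "A pod in the Cinchy environment has not been ready for 5 minutes"
```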
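For logging, the sketch below shows how Fluent Bit might tail container logs, enrich them with Kubernetes metadata, and ship them to OpenSearch. It uses Fluent Bit's YAML configuration format and its opensearch output plugin; the host and index names are placeholders.

```yaml
pipeline:
  inputs:
    - name: tail                      # tail container log files on each node
      path: /var/log/containers/*.log
      tag: kube.*
  filters:
    - name: kubernetes                # enrich records with pod and namespace metadata
      match: kube.*
  outputs:
    - name: opensearch                # ship logs to the OpenSearch cluster
      match: '*'
      host: opensearch-cluster-master # placeholder OpenSearch service name
      port: 9200
      index: cinchy-logs              # placeholder index name
```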
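For event processing, if Kafka is managed with the Strimzi operator (an assumption; your cluster may run Kafka differently), a topic could be declared as in the sketch below. The topic and cluster names are placeholders.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: cinchy-events                 # placeholder topic name
  labels:
    strimzi.io/cluster: kafka-cluster # placeholder Strimzi cluster name
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000           # retain messages for 7 days
```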
There are a few things to consider about your cluster configuration before you deploy Cinchy on Kubernetes:
How many clusters will you need?
Will you be sharing from an existing cluster?
Will you be running multiple environments on a single cluster?
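If you do run multiple environments on a single cluster, one common pattern is a Kubernetes namespace per Cinchy environment, as in the illustrative sketch below (the names are placeholders).

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cinchy-dev                    # placeholder name for a development environment
  labels:
    environment: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: cinchy-prod                   # placeholder name for a production environment
  labels:
    environment: prod
```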
Each instance of Cinchy has the following components, which are used to either provide an experience to users/applications or connect data in/out of Cinchy. Since multiple Cinchy instances can be deployed per cluster, these components will repeat for each environment.
The following highlighted area provides a high-level overview of instance level components used when running Cinchy on Kubernetes (Image 5).
Meta Experiences: Cinchy offers pre-packaged experiences that you can import into your Cinchy environment and use on your data network. This includes experiences like Meta-Forms and Meta-Reports.
Connections: The Cinchy Connections experience is used to help easily create data syncs in/out of the platform. It features persistent storage.
Data Browser: Cinchy’s Dataware platform features a Universal Data Browser that allows users to view, change, analyze, and otherwise interact with all data on the network. The Data Browser even enables non-technical business users to manage and update data, build models, and set controls, all through an easy and intuitive UI.
Identity Provider: An Identity Provider (IdP) creates and manages user credentials and associated identity attributes. An IdP's authentication services are used in Cinchy to authenticate end users.
Event Listener: The Event Listener is used to pick up events from connected sources during a data sync. Review Cinchy's Data Sync documentation for further information on the Event Listener. The Event Listener uses persistent storage.
Event Stream Worker: The Event Stream Worker is used to process data picked up by the Event Listener during data syncs. Review Cinchy's Data Sync documentation for further information on the Event Stream Worker. The Event Stream Worker uses persistent storage.
Maintenance (Batch Jobs): Cinchy performs maintenance tasks through the CLI. This currently includes data erasure and data compression deletions.
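On Kubernetes, batch maintenance like this is typically scheduled with a CronJob. The sketch below is illustrative only: the image, arguments, and schedule are placeholders, not actual Cinchy CLI syntax; refer to the Cinchy CLI documentation for the real invocation.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cinchy-maintenance            # placeholder job name
  namespace: cinchy-dev               # placeholder environment namespace
spec:
  schedule: "0 2 * * *"               # nightly at 02:00; adjust to your maintenance window
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: maintenance-cli
              image: <your-registry>/cinchy-cli:<tag>   # placeholder image reference
              args: ["<maintenance-task-arguments>"]    # placeholder; see the Cinchy CLI docs
```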
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that simplifies application deployment and lifecycle management. ArgoCD is highly recommended for deploying Cinchy; however, you may also choose to use another tool.
Once your configurations are set, ArgoCD automates the deployment of the desired application states into your specified target environments. Implemented as a Kubernetes controller, it continuously monitors running applications and compares the current, live state against the desired target state (as specified in your repositories).
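As a minimal sketch, an ArgoCD Application for a single Cinchy environment could look like the following; the repository URL, path, and namespaces are placeholders for your own deployment repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cinchy-dev                    # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/cinchy-deployment.git   # placeholder repository
    targetRevision: main
    path: environments/dev            # placeholder path to the environment's manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: cinchy-dev             # placeholder environment namespace
  syncPolicy:
    automated:
      prune: true                     # remove resources that are no longer in Git
      selfHeal: true                  # revert manual drift back to the Git state
```

With automated sync enabled, ArgoCD keeps the live state aligned with the desired state in Git, pruning removed resources and reverting manual drift.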