Cinchy Platform Documentation
Cinchy v5.8

Set up Alerts


Last updated 1 year ago

Monitoring and alerting

OpenSearch lets you set up alerts based on any number of monitors, and can push those alerts out via email, should you desire.

Before you set up a monitor or alert, ensure that you have added your data source as an index pattern.

Definitions:

Monitor

A job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.

Trigger

Conditions that, if met, generate alerts.

Alert

An event associated with a trigger. When an alert is created, the trigger performs actions, which can include sending a notification.

Action

The information that you want the monitor to send out after being triggered. Actions have a destination, a message subject, and a message body.

Destination

A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook.
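These concepts map directly onto the JSON that OpenSearch's Alerting plugin accepts. The sketch below (a query-level monitor, built as a plain Python dict) is illustrative only: the field names follow the shape of the OpenSearch Alerting REST API, but the exact schema varies by version, and the index name, threshold, and destination ID are hypothetical.

```python
# Illustrative monitor definition in the style of the OpenSearch Alerting API.
# The index pattern, threshold, and destination_id below are hypothetical.
monitor = {
    "type": "monitor",
    "name": "connections-error-monitor",
    "enabled": True,
    # Monitor: a job that runs on a defined schedule (here, every 5 minutes)...
    "schedule": {"period": {"interval": 5, "unit": "MINUTES"}},
    # ...and queries OpenSearch indices
    "inputs": [{
        "search": {
            "indices": ["connections-*"],
            "query": {"query": {"match_all": {}}},
        }
    }],
    # Trigger: a condition that, if met, generates an alert
    "triggers": [{
        "name": "error-count-above-threshold",
        "severity": "1",
        "condition": {
            "script": {
                "source": "ctx.results[0].hits.total.value > 10000",
                "lang": "painless",
            }
        },
        # Action: what the monitor sends out when the trigger fires
        "actions": [{
            "name": "notify-ops",
            "destination_id": "<email-destination-id>",  # Destination: a reusable location
            "subject_template": {"source": "Connections error alert"},
            "message_template": {"source": "Error count exceeded threshold."},
        }],
    }],
}
```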

Create your destination

Your destination is where your alerts will be pushed. OpenSearch supports various options, but this guide focuses on email.

  1. From the left navigation pane, click Alerting (Image 1).

  2. Click the Destinations tab > Add Destination.

  3. Add a name to label your destination and select Email as the type (Image 2).

  4. Assign a Sender. This is the email address that the alert will be sent from when you specify this destination. To add a new Sender, click Manage Senders (Image 3).

  5. Click Add Sender.

  6. Add in the following information (Image 4):

  • Sender Name

  • Email Address

  • Host (the host address of the email provider)

  • Port (the port of the email provider)

  • Encryption

  7. Assign your Recipients. This is the email address(es) that will receive the alert when you specify this destination. To add a new Recipient, either type their email(s) into the box or click Manage Email Groups to create an email group (Image 5).

  8. Click Update to finish your Destination.
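The Sender fields above correspond to standard SMTP settings. As a rough illustration of what Host, Port, and Encryption mean (the provider values below are hypothetical, and OpenSearch handles the actual connection for you):

```python
import smtplib

# Hypothetical sender configuration mirroring the fields in the Sender form.
sender = {
    "sender_name": "cinchy-alerts",
    "email_address": "alerts@example.com",  # hypothetical address
    "host": "smtp.example.com",             # host address of the email provider
    "port": 465,                            # port of the email provider
    "encryption": "SSL",                    # SSL or TLS (STARTTLS)
}

def smtp_class_for(encryption: str):
    """Pick the smtplib client that matches the Encryption setting.

    SSL (usually port 465) wraps the whole connection from the start;
    TLS (usually port 587) connects in plain text and upgrades via STARTTLS.
    """
    return smtplib.SMTP_SSL if encryption.upper() == "SSL" else smtplib.SMTP

# No connection is made here; this only shows which client would be used.
client_cls = smtp_class_for(sender["encryption"])
```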

Authenticate your sender

You must authenticate your sender for emails to come through; if you don't, your alert won't work. Contact Cinchy Customer Support to help you with this step:

  • Via the Support Portal

  • Via email: support@cinchy.com

  • Via phone: 1-888-792-6051

Create your monitor

Your monitor is a job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.

  1. From the Alerting dashboard, select Monitors > Create Monitor (Image 6).

  2. Under Monitor Details, add in the following information (Image 7):

  • Monitor Name

  • Monitor Type (this example uses Per Bucket)

    • Whereas query-level monitors run your specified query and then check whether the query's results trigger any alerts, bucket-level monitors let you select fields to create buckets and categorize your results into those buckets.

    • The alerting plugin runs each bucket's unique results against a script you define later, so you have finer control over which results should trigger alerts. Each of those buckets can trigger an alert, but query-level monitors can only trigger one alert at a time.

  • Monitor Defining Method: the way you want to define your query and triggers (this example uses Visual Editor).

    • Visual definition works well for monitors that you can define as “some value is above or below some threshold for some amount of time.”

    • Query definition gives you flexibility in terms of what you query for (using the OpenSearch query DSL) and how you evaluate the results of that query (Painless scripting).

  • Schedule: Choose a frequency and time zone for your monitor.

  3. Under Data Source, add in the following information (Image 8):

  • Index: Define the index you want to use as a source for this monitor.

  • Time Field: Select the time field that will be used for the x-axis of your monitor.

  4. The Query section will appear differently depending on the Monitor Defining Method selected in step 2 (Image 9). This example uses the visual editor.

To define a monitor visually, select an aggregation (for example, count() or average()), a data filter if you want to monitor a subset of your source index, and a group-by field if you want to include an aggregation field in your query. At least one group-by field is required if you’re defining a bucket-level monitor. Visual definition works well for most monitors.
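Under the hood, a visually defined monitor compiles down to an ordinary OpenSearch query. The sketch below (built as a plain Python dict) shows roughly what a bucket-level monitor's query could look like; the field names (@timestamp, stream), the aggregation name, and the time window are assumptions for illustration, not the exact query OpenSearch generates.

```python
# Hypothetical query-DSL shape for a bucket-level monitor: a count() aggregation,
# a data filter restricting the source index, and a group-by field that
# produces the buckets the trigger script is later evaluated against.
monitor_query = {
    "size": 0,  # we only need aggregation buckets, not the documents themselves
    "query": {
        "bool": {
            "filter": [
                # time filter over the monitoring window
                {"range": {"@timestamp": {"gte": "now-15d", "lte": "now"}}},
            ]
        }
    },
    "aggregations": {
        # group-by field: each distinct value of "stream" becomes a bucket,
        # and each bucket can trigger its own alert
        "by_stream": {
            "terms": {"field": "stream"}
        }
    },
}
```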

Add a trigger

A trigger is a condition that, if met, will generate an alert.

  1. To add a trigger, click the Add a Trigger button (Image 10).

  2. Under New Trigger, define your trigger name and severity level, with 1 being the highest (Image 11).

  3. Under Trigger Conditions, specify the thresholds for the query metrics you set up previously (Image 12). In the example below, the trigger is met if the COUNT threshold goes ABOVE 10000.

You can also use keyword filters to drill down into a more specific subset of data from your data source.

  4. In the Action section, define what happens if the trigger condition is met (Image 13). Enter the following information to set up your Action:

  • Action Name

  • Message Subject: In the case of an email alert, this will be the email subject line.

  • Message: In the case of an email alert, this will be the email body.

  • Perform Action: If you’re using a bucket-level monitor, decide whether the action is performed per execution or per alert.

  • Throttling: Enable action throttling if you wish. Use action throttling to limit the number of notifications you receive within a given span of time.

  5. Click Send Test Message if you want to test that the alert functions correctly.

  6. Click Save.
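Action throttling can be pictured as a simple rate limit: once a notification is sent, further notifications for the same action are suppressed until the throttle window has elapsed. A minimal sketch of the idea (not OpenSearch's actual implementation):

```python
from datetime import datetime, timedelta

class ThrottledAction:
    """Suppress repeat notifications within a throttle window."""

    def __init__(self, throttle_minutes: int):
        self.window = timedelta(minutes=throttle_minutes)
        self.last_sent = None

    def should_send(self, now: datetime) -> bool:
        # Send if we've never sent, or the window has fully elapsed.
        if self.last_sent is None or now - self.last_sent >= self.window:
            self.last_sent = now
            return True
        return False

# With a 10-minute throttle, a repeat alert 5 minutes later is suppressed,
# while alerts at the 10- and 25-minute marks go out.
action = ThrottledAction(throttle_minutes=10)
t0 = datetime(2024, 1, 1, 12, 0)
fired = [action.should_send(t0 + timedelta(minutes=m)) for m in (0, 5, 10, 25)]
# fired == [True, False, True, True]
```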

Example alerts

Alerting on stream errors

This example pushes an alert based on errors. We will monitor our Connections stream for any instance of 'error' and push out an alert when our trigger threshold is hit.

First, we create our Monitor by defining the following (Image 14):

  • Index: This example looks at Connections.

  • Time Field

  • Time Range: Define how far back you want to monitor.

  • Data Filter: We want to monitor specifically whenever the Stream field of our index is stderr (standard error).

This is how our example monitor will appear; it shows when in the last 15 days our Connections app had errors in the log (Image 15).

Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example, we want to be alerted when there is more than one stderr in our Connections stream (Image 16). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and a threshold of 1.

The trigger threshold will be visible on your monitoring graph as a red line.
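In query terms, the monitor above amounts to counting documents whose Stream field is stderr within the time range, with the trigger firing when that count goes above 1. A hedged sketch — the field names stream and @timestamp are assumptions about the index mapping:

```python
# Hypothetical query behind the stream-error monitor: count log lines where
# the Stream field is "stderr" over the last 15 days.
stderr_query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"stream": "stderr"}},                # data filter
                {"range": {"@timestamp": {"gte": "now-15d"}}}, # time range
            ]
        }
    },
}

# Trigger condition equivalent to "COUNT IS ABOVE 1":
def trigger_met(hit_count: int, threshold: int = 1) -> bool:
    return hit_count > threshold
```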

Alerting on Kubernetes restarts

This example pushes an alert based on the kubectl.kubernetes.io/restartedAt annotation, which updates whenever your pod restarts. We will monitor this annotation across our entire product-mssql instance and push out an alert when our trigger threshold is hit.

First, we create our Monitor by defining the following (Image 17):

  • Index: This example looks at the entire product-mssql instance.

  • Time Field

  • Query: This example uses the total count of the kubectl.kubernetes.io/restartedAt annotation.

  • Time Range: Define how far back you want to monitor. This example goes back 30 days.

This is how our example monitor will appear; it shows when in the last 30 days our instance had restarts (Image 18).

Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to our defined Recipient(s). In this example, we want to be alerted when there are more than 100 restarts across our instance (Image 19). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and a threshold of 100.

The trigger threshold will be visible on your monitoring graph as a red line.

Alerting on status codes

This example pushes an alert based on status codes. We will monitor our entire instance for 400 status codes and push out an alert when our trigger threshold is hit.

First, we create our Monitor by defining the following (Image 20):

  • Index: This example looks across the entire product-mssql-1 instance.

  • Time Field

  • Time Range: Define how far back you want to monitor. The time range for this example is the past day.

  • Data Filter: We want to monitor specifically whenever the Status Code is 400 (bad request).

This is how our example monitor will appear (note that there are no instances of a 400 status code in this graph) (Image 21).

Once our monitor is created, we need to define a trigger condition. When this condition is met, the alert will be pushed out to the defined Recipient(s). In this example, we want to be alerted when there is at least one 400 status code across our instance (Image 22). Input the following:

  • Trigger Name

  • Severity Level

  • Trigger Condition: In this example, we use IS ABOVE and a threshold of 0.

The trigger threshold will be visible on your monitoring graph as a red line.

Image 1: Click on Alerting
Image 2: Update your destination
Image 3: Manage your Senders
Image 4: Configure your Sender
Image 5: Configure your Recipients
Image 6: Create your Monitor
Image 7: Define your Monitor details
Image 8: Configure your Data Source
Image 9: Define your Query
Image 10: Add a Trigger
Image 11: Define your Trigger.
Image 12: Trigger Conditions
Image 13: Define your Actions
Image 14: Define your Data Source and Query
Image 15: Example monitor
Image 16: Example Trigger
Image 17: Define your Query and Data Source
Image 18: Example Monitor
Image 19: Trigger Conditions
Image 20: Define your Query and Data Source
Image 21: Example Monitor
Image 22: Trigger Conditions