Kafka Topic
Apache Kafka is an end-to-end event streaming platform that:
- Publishes (writes) and subscribes to (reads) streams of events from sources like databases, cloud services, and software applications.
- Stores these events durably and reliably for as long as you want.
- Processes and reacts to the event streams in real-time and retrospectively.
Those events are organized and durably stored in topics. These topics are then partitioned over a number of buckets located on different Kafka brokers.
Event streaming thus ensures a continuous flow and interpretation of data, so that the right information is at the right place, at the right time.
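The sketch below shows these ideas in miniature using the confluent-kafka Python client: a producer publishes a login event to a topic, and a consumer subscribes and reads it back. The broker address, topic name, and payload are illustrative assumptions rather than values from this guide.

```python
# A minimal sketch of publishing to and subscribing from a Kafka topic.
# The broker address, topic name, and event payload are illustrative.
import json
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "localhost:9092"})
# Publish (write) one login event; the key influences partition assignment.
producer.produce(
    "user-logins",
    key="user-42",
    value=json.dumps({"userId": 42, "loginTime": "2024-01-01T09:30:00Z"}),
)
producer.flush()  # Block until the broker has acknowledged the event.

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "login-metrics-reader",
    "auto.offset.reset": "earliest",  # Read the topic from the beginning.
})
consumer.subscribe(["user-logins"])
msg = consumer.poll(10.0)  # Wait up to 10 seconds for an event.
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```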
You currently use Kafka to store the metrics for user logins, but being stuck in the Kafka silo means that you can't easily use this data across a range of business use cases or teams. You can use a batch sync to liberate your data into Cinchy.
The Kafka Topic source supports real-time syncs.
You can find the parameters in the Info tab below (Image 1).
| Parameter | Description | Example |
| --- | --- | --- |
| Title | Mandatory. Input a name for your data sync. | Website Metrics |
| Variables | | |
| Permissions | Data syncs are role based access systems where you can give specific groups read, write, execute, and/or all of the above with admin access. Inputting at least an Admin Group is mandatory. | |
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).
The following parameters will help to define your data sync source and how it functions.
| Parameter | Description | Example |
| --- | --- | --- |
| Source | Mandatory. Select your source from the drop down menu. | Kafka Topic |
Optional. Review our documentation for more information about this field.
Note that if there is more than one listener associated with your data sync, you will need to configure the additional listeners via the Listener Config table.
Auto Offset Reset: Earliest will start reading from the beginning of the queue (when the CDC was enabled on the table). This is a suggested configuration if your use case is recoverable or re-runnable and you need to reprocess all events to ensure accuracy. Latest will fetch the last value after whatever was last processed. This is the typical configuration. None won't read or start reading any events. You can switch between Auto Offset Reset types after your initial configuration through the process outlined here.
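For context, this is roughly how each choice surfaces in a Kafka consumer's own configuration. The sketch below uses the confluent-kafka Python client with placeholder broker, group, and topic values; note that librdkafka-based clients spell the fail-fast option "error" where the Java client uses "none".

```python
# Sketch: how an Auto Offset Reset choice maps to Kafka consumer config.
# Broker address, group ID, and topic are illustrative placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "cinchy-listener",
    # "earliest": reprocess the queue from the beginning (recoverable or
    #   re-runnable use cases).
    # "latest": pick up after the last processed event (the typical choice).
    # librdkafka-based clients use "error" where the Java client uses "none":
    #   fail instead of reading when no committed offset exists.
    "auto.offset.reset": "latest",
})
consumer.subscribe(["user-logins"])
```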
Optional. Put "AVRO" if your messages are serialized in AVRO; otherwise leave blank.
This is required if your data follows a schema. It's a comma-separated list of URLs for schema registry instances that are used to register or look up schemas.
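As a hedged sketch of what these two settings correspond to on the client side, the snippet below consumes AVRO-serialized messages and resolves their schemas through a schema registry using confluent-kafka's Schema Registry helpers. The registry URL, broker address, and topic name are placeholders.

```python
# Sketch: consuming AVRO-serialized messages with a schema registry.
# The registry URL, broker address, and topic are illustrative placeholders;
# the Cinchy setting above accepts a comma-separated list of registry URLs.
from confluent_kafka import Consumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import MessageField, SerializationContext

registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
deserialize_avro = AvroDeserializer(registry)  # Looks up the writer schema by ID.

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "avro-reader",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["user-logins"])

msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    # Returns a dict matching the registered AVRO schema.
    event = deserialize_avro(msg.value(), SerializationContext(msg.topic(), MessageField.VALUE))
    print(event)
consumer.close()
```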
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.
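To make the mapping concrete, here is a small hypothetical sketch: the fields you declare in this section are the ones extracted from each Kafka message as source columns. The event shape and column names below are assumptions for illustration only.

```python
# Hypothetical sketch: the fields declared as source columns are the ones
# pulled out of each Kafka message. Event shape and names are assumptions.
import json

raw_message = b'{"userId": 42, "loginTime": "2024-01-01T09:30:00Z", "ip": "10.0.0.5"}'
event = json.loads(raw_message)

# Columns declared in the Schema section (repeat for as many as you need).
source_columns = ["userId", "loginTime"]

row = {column: event.get(column) for column in source_columns}
print(row)  # {'userId': 42, 'loginTime': '2024-01-01T09:30:00Z'}
```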
You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.
- Configure your Destination.
- Define your Sync Behaviour.
- Add in your Post Sync Scripts, if required.
- If more than one listener is needed for a real-time sync, configure it/them via the Listener Config table.
- To run a real-time sync, enable your Listener from the Execution tab.