This page details some best practices for version history in Cinchy. These recommendations are important because they can help:
- Minimize your database bloat and size.
- Make version history easier to parse by avoiding hundreds of redundant records.
When writing any type of update statement, it's best to include an opposite “where” clause so that unchanged values don't generate unnecessary history.
For example, if your update sets name to Marc, include a where clause that excludes records whose name already equals Marc. Doing so prevents a redundant update from appearing in your version history.
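The following is a minimal sketch of this pattern, assuming a hypothetical [HR].[Employees] table with [Name] and [Employee ID] columns and standard SQL-style update syntax; the extra conditions skip rows whose name is already Marc:

```sql
-- Hypothetical table and columns, for illustration only.
-- The final two conditions stop the platform from recording a new version
-- when [Name] is already 'Marc'.
UPDATE [HR].[Employees]
SET [Name] = 'Marc'
WHERE [Employee ID] = 123
  AND ([Name] IS NULL OR [Name] <> 'Marc')
```

Without the last condition, re-running this statement would write a new version of the record on every execution, even though nothing actually changed.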
When writing an update statement, run it more than once. If it results in an update each time, return to your query and troubleshoot.
This is relevant anywhere the statement can be run repeatedly, such as in APIs or Post Sync Scripts.
In data syncs, ensure that your data types are matched properly.
For example, if the source is text and the target is a date, the sync will register an update and create unnecessary version history even when the underlying values are the same.
When performing a data sync, run it more than once. If it creates an update each time, return to your configuration and troubleshoot.
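One way to avoid type mismatches is to normalize the type in the sync's source query so that compared values match the target exactly. The sketch below is illustrative only: the [Sales].[Orders Staging] table, its columns, and the availability of CAST in your source system are all assumptions.

```sql
-- Hypothetical source query for a data sync.
-- Casting the text column to a date means the sync compares date-to-date,
-- so unchanged rows are not flagged as updates on every run.
SELECT
    [Order ID],
    CAST([Order Date Text] AS date) AS [Order Date]
FROM [Sales].[Orders Staging]
```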
This page details how Cinchy approaches Version Management within the platform.
Cinchy natively and automatically manages data versioning in the platform through the ‘always-on’ version tracking, collaboration logging, and recycle bin features (data restore).
Cinchy maintains a version history of all changes to every data element stored in Cinchy. You can query this version history to speed up analysis, or view it through the Collaboration Log, which tracks changes made by users, systems, or external applications (Image 1). When required, you can easily revert data to previous states using the Recycle Bin or the Revert button.
This section refers to data schemas/models, not data values themselves.
Your schema/data model version can also be managed when you are using multiple environments. For example, if you have a DEV environment and make a change to a table design (ex: changing a column name), you can export and deploy your data model to a PROD environment and Cinchy will intelligently consolidate and merge the schema changes to adhere to the latest version.
To export a table (like your data model), navigate to the Design Table > Export button (Image 2). You can then import your data model into any other environment using the model loader (Image 3).
This functionality is achieved through the use and synchronization of GUIDs. Each data element in Cinchy (table, column, etc.) will have a matching GUID, which stays consistent even across multiple environments. That means that changes made in your source environment will automatically and accurately be applied once promoted to your higher environment.
A GUID (globally unique identifier) is a 128-bit text string that represents an identification (ID).
You can find the GUID for your object by navigating to the applicable System Table. Ex: Column GUIDs can be found in the Columns table (Image 4).
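As an illustration, a query along the following lines could list the column GUIDs for a given table. The [Cinchy].[Columns] system table location, the column names shown, and the link-traversal syntax are assumptions and may differ in your environment:

```sql
-- Hypothetical lookup of column GUIDs; the system table domain, column
-- names, and link syntax are assumed and may differ in your Cinchy version.
SELECT [Name], [Guid]
FROM [Cinchy].[Columns]
WHERE [Table].[Name] = 'Employees'
```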
While you are able to manually export/import data models across environments, you may want to package up multiple objects (tables, queries, reference data, etc.) and push them all together between environments. This method still adheres to schema version control and management.
This can be accomplished using the Cinchy DXD Utility, which you can learn more about by reviewing the documentation here.