This page outlines a few common best practices when building in Cinchy.
Cinchy is a simple, business-user-friendly application. This means you should use business-friendly terms to name your tables and columns. For example, name a column Full Name rather than full_name, fullName, or fName.
Domains essentially act as folders to be able to organize your data. Most users will want to split domains by business lines, such as Sales, Marketing, and Human Resources. The key thing is to keep it consistent so users have a general idea where to go to find information.
You can add descriptions to your tables and columns. Descriptions let users access data in a more self-serve fashion and help prevent misunderstandings about the meaning of your data. Table descriptions are shown on the My Network screen and also appear in search results (Image 1).
Column descriptions show up when you hover on the column in the Manage Data screen (Image 2 and 3).
This page gives an overview of Cinchy Builders.
Cinchy Builders use the Cinchy platform to build an unlimited number of business capabilities and use-cases.
The “Cinchy Builder” has access to perform the following capabilities:
Change Table Schema (use Cinchy’s “Design Table” functionality)
Grant access (use Cinchy’s “Design Controls” functionality)
Edit Cinchy Data in Cinchy System Tables
Create, Save, and Share Cinchy Queries
Perform Cinchy Queries on the Cinchy data network
Import/export packaged business capabilities (like deployment packages)
Build Cinchy Experiences
Perform integration with Cinchy (such as Cinchy Command Line Interface [CLI] operations)
Create and Deliver an unlimited number of Customer Use Cases within Cinchy
A builder can be part of the Administrators group
The End-Users of the Cinchy platform are those who use the functionality created by “Cinchy Builders” to meet their business objectives.
All Builders are also End-Users, but not all End-Users are Builders.
“End-Users” of the Cinchy Platform can:
Use Tables, Saved Queries, and Experiences created by “Builders”
Builders can leverage Cinchy as one platform to simplify solutions delivery as they:
Connect
Protect
Collaborate
Build
Reuse
Data collaboration eliminates point-to-point integration, reducing cost and complexity (including real-time, batch, on-prem, cloud, legacy, and SaaS services) and allowing custom data-sync configurations. This drives faster time to market, lower costs and improved usability.
When you connect a new data source to your data network, you can use it with data from any other source on the network with no further integration efforts. The more sources you connect, the more powerful and efficient it becomes. You can extend data on the network with attributes and entirely new entities, including calculated data, derived data, AI models, and user-managed data.
Data on Cinchy is protected by cellular-level access controls, data-driven entitlements, and superior data governance. This includes meta architecture, versioning, and write-specific business functions that restrict user views, such as a managed hierarchy. Owner-defined permissions are universally enforced, significantly reducing the effort of managing them at the enterprise level. You can use existing Active Directory and SSO access policies to set controls for an individual user, external system, or user-defined functions (such as approving updates row by row or using bulk approvals).
All data is automatically version-controlled and can be reverted to previous states. You can see changes made by users, systems, or external applications through Data Synchronization or by using a Collaboration Log.
Use the universal Data Browser to view, change, analyze, and otherwise interact with ALL data on the Fabric. Non-technical business users can manage and update data, build models, and set controls, all through an easy and intuitive UI.
Cinchy’s data collaboration platform features an intuitive Drag and Drop Query Builder that allows Builders to create using the Cinchy Query Language (CQL), a proprietary language specific to Cinchy. All queries can be saved and shared, and query results automatically generate a full no-code API.
By decoupling the data from the application, our Autonomous Data Network lets you consolidate legacy applications to increase operational agility and reduce overhead. You can create enterprise-grade solutions using the Application SDK as quickly as you would build Excel-based solutions and without the operational risk. Application SDK (React Native, Angular, and REST SDKs) lets you build custom applications for end users.
For even more flexibility, connect your Data Network to third-party Data Visualization tools. You’ll be able to run cross-queries against all data on the Fabric while maintaining universal access policies for users, systems, and external applications.
The more you use data collaboration, the more it’s capable of doing.
Any new data you add to the network will work in conjunction with any previously existing data instantly. This means you can re-use data in new ways, with no time-consuming integration efforts. Teams can collaborate directly in real-time, allowing it to act as a central data hub while simplifying integration. Unlike traditional data architecture projects, which grow more complicated as they involve more data sources, data collaboration delivers solutions faster and faster as more data is added to it.
This page guides you through the various column types available on Cinchy,
Cinchy has system columns used to perform various functions. These columns can't be modified directly by a user.
You can't create a new column with the same name as a system column.
Cinchy Id is a unique identifier assigned automatically to all records within a normal table. The Cinchy Id is associated with the record permanently and is never reassigned even if the record is deleted.
Version and Draft Version are used to track changes and versions.
For tables without Change Approval, any change made to a record increments the Version; the Draft Version is always 0.
For tables with Change Approval, any data approval increments the Version and resets the Draft Version to 0, while any proposed change increments the Draft Version.
This is a legacy column. It's always blank.
Created By is a linked column to the [Cinchy].[Users] table, identifying the user who created the record.
Created is the time when the record was created, shown in the logged-in user's timezone.
Created By and Created will be the same for all records with the same Cinchy Id.
For change-approved records, Created By and Created are based on the first user to make changes on the approved record.
Modified By is a linked column to the [Cinchy].[Users] table, identifying the user who last modified the record.
Modified By and Modified show the last user to modify the record and when it happened, in the logged-in user's timezone.
With Change Approvals, they show the last user to either modify the record (Draft Version != 0) or approve the record (Draft Version = 0), along with the timestamp for when that version was generated.
If a record is deleted, it will show up in the Recycle Bin.
A deleted record will have Deleted By and Deleted filled in, shown in the logged-in user's timezone.
Deleted By and Deleted are based on the user and time when the Delete Request was created, not when it was approved.
Cinchy always has one latest, up-to-date record at a time. Anytime changes are made to a record, a new version (normal or draft) is created, and the previous version is updated with a Replaced timestamp.
Any record where Replaced is empty is the current version of that record.
Each column must have a unique name. They must also not conflict with system columns (even if you aren't using Change Approvals on the table).
Each column has a data security classification. This defaults to blank and can be set to one of 4 pre-configured settings (Public, Internal, Restricted, Confidential); additional options can be created in the [Cinchy].[Data Security Classifications] table by an administrator.
Currently there is no functionality tied directly to Data Security Classification - the tagging is just for internal auditing purposes. Future security settings will be tied to Data Security Classifications, rather than simply done at a column level.
Public: This type of data is accessible to all employees and company personnel. It can be used, reused, and redistributed without repercussions. An example might be job descriptions, press releases or links to articles.
Internal: This type of data is strictly accessible to internal company personnel or employees who are granted access. This might include internal-only memos, business performance, customer surveys or website analytics.
Confidential: Often, access to confidential data requires additional authorization and explanation of why access to the data is needed. Examples of confidential data include social security numbers, credit card details, phone numbers or medical records. Depending on the industry, confidential data is protected by laws like GDPR, HIPAA, CASL and others.
Restricted: Restricted data is the most sensitive data, so you would have to treat it extra carefully. If compromised or accessed without authorization, it could lead to criminal charges, massive legal fines, or cause irreparable damage to the company. Examples include intellectual property, proprietary information or data protected by state and federal regulations.
Each column can optionally have a description. The description is displayed when you hover on the column header in Data Management.
A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique identification. Every column in Cinchy is automatically assigned one. For more information, see the Table and column GUID page.
Be careful when editing a GUID, as you can have unintended consequences.
Checked by default. When checked, saving your changes adds the column to the default view (All Data by default). It generally makes sense to leave this checked, since there should be a view where all columns are displayed.
If you need to hide a column from certain users or groups you can do so in table controls. It's usually best to have a view where all table columns are displayed.
Makes the column a mandatory field. You won't be able to save or alter a record in a state where a mandatory field is blank.
Requires all values in the column to be unique. Adding a new record or modifying a previous record into a state where it's a duplicate of another record will cause an error and can't be saved.
If you need uniqueness across multiple columns instead, you can create an index in Design Table, add those columns and set the index to unique. If it needs to be more complicated, you can also create a calculated column and set that column to unique. For example, First Name doesn't need to be unique, but First Name + Last Name needs to be unique.
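As a rough sketch (assuming the table has First Name and Last Name columns), such a calculated column could use an expression like the one below, with its Unique constraint checked:
CONCAT([First Name], ' ', [Last Name])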
Some fields can also be set to multi-select.
For example, the column Players in [Football].[Teams] can be a multi-select field since each team will have multiple players.
Checked by default. This allows other tables to use the column as a link/relationship.
See Linking data to get more context on how they're used.
You want to pick identifying columns for linking, such as IDs or Name. Generally you want to use unique columns, but in some cases it's a better user experience to pick an almost unique field for readability.
For example, Full name may not be unique, but it's much easier to understand than Employee ID.
Checked by default. Some columns may not make sense for linking but can be useful to display when someone is choosing an option.
See Linking Data to get more context and tips.
If Data At Rest Encryption is enabled, you will see the option of Encrypt for columns. If this is checked, the column will be encrypted within the database. This is useful for hiding sensitive information so that people with access to the database directly don't see these fields.
Selecting encryption makes no difference to the user experience within the Cinchy platform. The data is displayed in plain text on the UI or via the query APIs.
Text columns have a maximum length, set to 500 by default. These are equivalent to the VARCHAR(n) data type in SQL.
You can choose from 3 display formats for number - regular, currency, and percentage. You can also decide how many decimal places to display (0 by default). Note that these are both display settings, and won't affect how the number is stored.
These are equivalent to the FLOAT(53) data type in SQL (the default, 8-byte field).
Cinchy has several Date column type display format options available:
MMM DD, YYYY (Oct 31, 2016)
YY-MM-DD (16-10-31)
DD-MM-YYYY (31-10-2016)
DD-MMM-YY (31-Oct-16)
Custom Format
The "Default Value" field isn't mandatory and should be left blank (best practice). However, if populated you won't be able to clear the default date value provided to a "blank" data (no date). You will only be able to overwrite it with another date value.
These are equivalent to the DATE() data type in SQL.
You must select a default value of yes (1) or no (0) for yes/no fields.
These are equivalent to the bit data type in SQL.
You can create a choice column (single or multi-select) in Cinchy. In this scenario, you specify all your choices (1 per newline) in the table design. A user will only be able to select from the options provided.
A calculated column uses values from other fields in the record for its evaluation. These columns also have a specified result type, which dictates the format of the calculated output.
Example:
A [Full Name] column can be calculated as CONCAT([First Name], ' ', [Last Name]).
These columns are similar to computed columns in SQL databases.
When creating a calculated column, you have two types to choose from: cached and live. This feature is accessible via the Advanced Settings and was a part of the 4.0 version update.
What It Does: Speeds up data retrieval.
How It's Stored: As an actual column based on a CQL formula.
When It Updates: Updates only if the data in the same row changes.
Example:
Changing a name in a single row only triggers a recalculation for that row's "Label" column.
If a cached column relies on a column from another table, changes in the other table's column won't automatically update the cached column. Make sure to account for this when using cached columns that depend on external data.
What It Does: A live calculated column is a non-cached calculated column that provides real-time data.
How It's Stored: As a formula executed on-the-fly during read or query.
When It Updates: Refreshes automatically upon every query or screen refresh.
When to use:
Your calculated column depends on a value from a linked table and you need the latest value from the linked table.
Your table doesn't contain many records.
Example:
A live "Label" column will update instantly if any referenced data changes, affecting all rows and tables.
Live columns consume more system resources.
Using user-defined functions in live calculated columns can cause errors if they reference other live calculated columns. If a live column references other live columns, use only built-in functions.
If you created a spatial table, you will have access to the geography and geometry column types. These columns also have the option to be indexed via Index in the advanced settings on the column itself.
In the UI, this takes a well-known text (WKT) representation of a geometry object. You can modify or paste the WKT representation directly in the editor on the UI. Geometric functions can be performed on this column through CQL and calculated columns.
In the UI, this takes a well-known text (WKT) representation of a geography object. You can modify or paste the WKT representation directly in the editor on the UI. Geographic functions can be performed on this column through CQL and calculated columns.
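For reference, WKT is a standard text format. A point geography, for example, could be entered in a cell along the lines of the following (the coordinates are purely illustrative):
POINT(-79.3832 43.6532)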
Link columns allow you to establish inherent relationships with other records in other tables. See Linking Data for more details.
Hierarchy columns are link columns referencing the current table. For example, the below table contains a list of documentation pages, some of which also have sub-level pages (or even sub-sub-level pages). Using a Hierarchy Column shows the relationships between data.
Example 1: API Overview is the parent page. It has four sub-pages: API Authentication, API Saved Queries, ExecuteCQL, and Webhook Ingestion. Clicking on any of the links within the Sub-Pages column returns you to the row for that specific data set.
Example 2: Builder Guides is the parent page. It has five sub-pages: Best Practices, Creating Tables, Saved Queries, Integration Guides, and Cinchy DXD Utility. In this example, we also have another level of hierarchy, wherein Best Practices is also a parent page, and Multilingual Support is its sub-page.
Another common use of Hierarchy columns is to show Manager/Employee relationships.
Data Control Entitlements allow you to set up permissions for who can view, edit, or approve data within a table. Note that this feature was formerly called "Design Controls".
Data Entitlements define who has access to do what on your Cinchy platform. These access controls are universally set at a cellular level, meaning that you can configure user access in the way that best supports your use case.
You can set entitlements such that specific users can view, edit, delete, or insert data you want them to access.
Cinchy supports user-based, role-based, and attribute-based access controls.
User-based controls are entitlements given to specific users. This is done via the Users column.
Defining access based on a user means that even if the user changes their role, team, group, etc., they will still maintain their data entitlements.
Role-based controls are entitlements given to a set (or sets) of users based on their role in your environment. For example, you can define that only the Product team has access to insert records into a Product Roadmap table. Instead of configuring the entitlements user by user, which takes time and can lead to incorrect data when employees shift teams, you can configure it such that any user within the Product team automatically maintains the same level of control across the board.
In Cinchy, this is done via the Groups column.
Attribute-based controls are entitlements given to a user(s) based on a defined set of tags. This can include attributes such as their team, their role, their security clearance, their location, etc.
Defining entitlements based on attributes allows you to drill even deeper into the specificity of which users can do what on your tables.
In Cinchy, you can set up an infinite number of attributes based on your specific use case(s). This is done via Row Filters.
For example, if you have an Employee table that contains salary information visible only to certain people, you can configure a Row Filter such that the logged in user MUST have at least one of the following attributes to be able to see it:
The user to whom the salary belongs
Their manager
All VP level executives
The CEO
You are able to add as many attributes into your Row Filter as needed. For example you could only allow a user with the following set of tags to view a row: Located in Toronto, on the Marketing Team, and with a Security Clearance level of 2.
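To illustrate the salary example above, the first two conditions (the employee themselves and their manager) could be expressed as a Viewable Row Filter along the lines of the sketch below. The [User Account] and [Manager] link columns are hypothetical and assume a chain of links back to the [Cinchy].[Users] table; conditions like "all VPs" or "the CEO" would more typically be handled through Group entitlements.
[User Account].[Cinchy Id] = CurrentUserId()
OR [Manager].[User Account].[Cinchy Id] = CurrentUserId()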
When viewing a table, click on Data Controls > Entitlements from the left navigation menu (Image 1).
Currently, both the table creator and anyone in the Cinchy Administrators group have access to perform any action on any objects. You can give granular entitlements at a Group or a User level, for both viewing and editing access (Image 2).
In the above scenario, John Smith is part of the Developers group. They're able to view all columns via the entitlement to the Developers group, and they're able to edit both the First Name and Last Name columns through different entitlements.
Table-level entitlements apply to the entire table.
Approving this entitlement enables users to see and search for the table in the Marketplace/Homepage.
Approving this entitlement enables users to export data from the table via the Manage Data screen (Image 3).
Approving this entitlement enables users to query the data from the table directly in the Query Builder (Image 4).
Approving this entitlement enables users to alter the structure of the table.
This is a builder/administrative function and shouldn't be granted to end users.
Approving this entitlement enables users to change the permissions on a table.
This is a builder/administrative function and shouldn't be granted to end users.
Column-level entitlements apply only to columns.
Approving this entitlement enables users to view all columns within the table.
Note that this applies to any new columns that are added to the table after providing this permission as well.
This is a drop down where you can select the specific columns you want to grant view access to for users.
Approving this entitlement enables users to edit all columns within the table.
Note that this applies to any new columns that are added to the table after providing this permission as well.
Giving a user edit permission will also give them view permission.
This is a drop down where you can select the specific columns you want to grant edit access to for users.
Giving a user edit permission will also give them view permission.
Approving this entitlement enables users to approve all columns within the table. This also allows users to approve Create and Delete requests.
Note that this applies to any new columns that are added to the table after providing this permission as well.
Approve permissions only apply when Change Approvals are enabled.
Giving a user approve permission will also give them view permission.
This is a drop down where you can select the specific columns you want to grant approve access to for users.
Approve permissions only apply when Change Approvals are enabled.
Giving a user approve permission will also give them view permission.
Link columns require both permission to the column within the table and the column in the link column itself.
Row-level entitlements apply to specific rows. Used in conjunction with Column Level entitlements this allows for granular cell level entitlements.
Approving this entitlement enables users to create new rows in the table.
Approving this entitlement enables users to delete rows in the table.
This is a CQL fragment that filters which rows will be viewable or editable. Think of the column entitlements combined with the fragment as a SQL statement applied to the table: SELECT {Edit Selected Columns} WHERE {Editable Row Filter}
Most of these examples will be with the editable row filter so it's easy to see the underlying data for comparison. However this can be done for viewable row data as well.
(Image 5)
With the following entitlements (Image 6):
Edit Specific Columns: Age
Editable Row Filter: [Age] > 30
(Image 7)
View Specific Columns: First Name, Last Name
Viewable Row Filter: [End Date] IS NULL OR [End Date] > GetDate()
(Image 8)
View Specific Columns: All
Edit Specific Columns: First Name, Last Name, Age
Viewable Row Filter: [First Name] = 'John'
Editable Row Filter: [First Name] = 'John'
(Image 9)
For the All Users group:
(Image 10)
View All Columns: Check
Edit Selected Columns: First Name, Last Name
Editable Row Filter: [User Account].[Cinchy Id] = CurrentUserId()
To allow a user to edit certain fields of their own data, you will need an association from a user to the [Cinchy].[Users] table. You can then use the following function to allow edit for that user, where [...] is the chain of link columns to get to the Users table.
[...].[Cinchy Id] = CurrentUserId()
You can apply conditional formatting rules. Our first iteration of this is done directly in the Formatting Rules table. A future iteration will add a UI within a table to create them.
This section has formatting rules available for columns.
This follows the same syntax as a view filter query.
Order in which the formatting rules will apply on the same table. Ordinal 1 will show up above ordinal 2.
Color to highlight the cell. If you want to add your own colors you can do so within System Colours and check off highlight under usage.
Table in which to apply the conditional formatting.
Columns to apply the conditional formatting rules to. You don't need to include any row condition columns within the highlight columns.
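As an illustration, a Row Condition uses the same filter syntax shown elsewhere on this page. For example, to highlight rows whose end date has passed (assuming the target table has an [End Date] column):
[End Date] IS NOT NULL AND [End Date] < GetDate()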
This page provides an overview on data erasure.
Data erasure allows you to permanently delete data in Cinchy. As the data owner, you can set an erasure policy on your table if you need to delete data for compliance reasons (Image 1).
The actual erasing of data happens during the maintenance window. Please check with your system administrators to confirm when maintenance is scheduled.
Once data is erased, any links pointing to erased data will look like this (Image 2):
The time is counted based on the record's modified time stamp, not the deleted time stamp. This means for change approval records it's the time when the pending delete request was approved and moved to the Recycle Bin, not when the delete request was made.
This page will describe how to attach files to your Cinchy table rows.
You are able to attach files and images to any row in a Cinchy table by creating a linked column that also links to the 'Files' system table of your platform.
If you have access to view the 'Files' table, you can also view every attachment on your system.
Cinchy supports the attaching of any file type.
Navigate to the table where you want to attach your file.
Click Design Table > Columns
Add a new column with the following parameters (Image 1):
Column Name: Cinchy recommends using a straightforward name like "Images and Files"
Data Type: Link
Linked Table: You will link this to the "Cinchy\Files" table
Linked Column: File Name
Advanced Settings: Make sure to select the "Multi-select" checkbox if you want the capability to add more than one file to a row.
Click Save
Navigate back to your table and locate your newly created files column.
You must create the first row before uploading a file.
To attach a file, click on the upload button (located in the top right hand corner of any cell in the column) (Image 2).
From the pop-up window, select Choose Files to pick your file from your machine, then click Submit (Image 3).
Once uploaded, your cell should look like this (Image 4):
To fully delete your attachment from the platform, you will need to navigate to the Files system table and delete the row associated with your attachment.
You must have edit access to the Files table.
Your file will appear crossed out in your original table and can't be opened or downloaded (Image 5).
This page outlines the indexing and partitioning options on your tables.
Use indexing to improve query performance on frequently searched columns within large data sets. Without an index, Cinchy begins a data search with the first row of a table and then follows through the entire table sequentially to find all relevant rows. The larger the table(s), the slower the search.
If the table you are searching for has an index for its column(s), however, Cinchy is able to search much quicker.
In the below example, we will set up a query for a Full Name field. When you create an index for that field, an indexed version of your table is created that's sorted sequentially/alphabetically.
When you run your query on this index, that table will be searched using a binary search.
A binary search doesn't start from the top record. It checks the middle record against your search criteria. If it's not a match, the search checks whether the middle value is larger or smaller than the desired value and repeats the process on the half of the data that could contain it, checking that half's median record. This repeats until your data is found.
This example uses a table with employee names (Image 1). We want to search for John Smith, using the Full Name column.
To set up your index, select Design Table from the left navigation tab.
Click Indexes (Image 2).
Select "Click Here to Add" and fill out the following information for a new index. Click save when done (Image 3):
Index Name.
Select the column(s) to add to your index. In this example, you want to select the Full Name column to be indexed.
You can select more than one column per index.
Select the Included column(s) to add to your index, if applicable.
The difference between regular (key) columns and Included columns is that an index with included columns provides the greatest benefit when it covers your query, because you can include columns your query references that aren't allowed as index key columns (for example, because of their data type or size).
You can now query the full name column for John Smith and receive the results quicker than if you hadn't set up the index (Image 4).
Note that there is no UI change in the query builder or your results when running a query on an indexed column. The difference will be in the speed of your returned results.
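For reference, the query itself is an ordinary CQL SELECT; the index only changes how it's executed. A minimal sketch, assuming the employee table lives at a hypothetical [HR].[Employees]:
SELECT *
FROM [HR].[Employees]
WHERE [Full Name] = 'John Smith'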
A full-text index is a special index type that provides index access for full-text queries against character or binary column data. A full-text index breaks the column into tokens and these tokens make up the index data.
Click on Design Table > Full-text Index
Add in the desired column(s) and click save when done (Image 5).
You may want to use a columnar index when:
Your table has over 1 million records.
Your data is rarely modified. Large numbers of deletes can cause fragmentation, which adversely affects compression rates. Updates are also processed as deletes followed by inserts, which will adversely affect the performance of your loading process.
Click on Design Table > Columnar Index
Add in the desired column(s) and click save when done (Image 6).
When using a Columnar Index, you won't be able to add any new columns to your table. You will need to delete the index, add your column(s), and then re-add the index.
Partitioning data in a table is essentially organizing and dividing it into units that can then be spread across more than one file in a database. The benefits of this are:
Improved efficiency of accessing and transferring data while maintaining its integrity.
Maintenance operations can be performed on one or more partitions more efficiently.
Query performance is improved based on the types of queries most frequently run.
When creating a partition in Cinchy, you use the values of a specified column to map the rows of a table into partitions.
This example sets up a partition that divides the employees based on a Years Active column (Image 7). You want to divide the data into two groups: those who have been active for two years or more, and those who have only been active for one year.
Click on Design Table > Partition
Fill in the following information and click save when done (Image 8):
Partitioning Column: this is the column value that will be used to map your rows. This example uses the Years Active column.
Type: Select either Range Left (which means that your boundary will be <=) or Range Right (where your boundary is only <). In this example we want our boundary to be Range Left.
Add Boundary: Add in your boundary value(s). Click the + key to add it to your boundary list. In this example we want to set our boundary to 2.
Once set up, this partition will organize the data into two groups, based on our boundary of those who have a Years Active value of two or above.
You can now run a query on your partitioned table (Image 9).
Note that there is no UI change in the query builder or your results when running a query on a partitioned table. The difference will be in the speed of your returned results.
You can find the Data Controls menu on the left-hand navigation bar of a table.
From here you may select to:
This page guides you through creating a table in Cinchy.
Navigate to the Cinchy homepage. In the upper left-hand corner, click on Create to get started. (Image 1)
Select either a Standard or a Spatial Table (Image 2), per the descriptions below.
Spatial Table: A spatial table allows you to create geography and geometry column types, as well as geospatial indexes. You won't be able to create partitions on a spatial table.
Standard Table: You can't create geography or geometry columns in a standard table.
You can't convert from one type to another and will have to either recreate your table or link to another table with geospatial columns.
Any existing tables created before installing Cinchy Platform v4.19.0 are standard tables.
Select From Scratch (Image 3).
A new page will open with the Table Info tab (Image 4). Input the following information:
Table Name: This is a mandatory field, and must be unique to the domain you select.
Icon: You can pick an icon and colour to differentiate your table on the home screen.
Domain: This is a mandatory field. Select the domain that this table will reside under. If you have administrative privileges, you can also create new domains from this screen.
Description: You can give your table a description, which will be displayed on the homepage.
A new page will open with the Columns tab (Image 6). Every table in Cinchy must have at least one column. Input the following information:
Column Name: Input a unique column name.
Data Type: You can select from the following:
Text: All data in this column must be input as text
Number: All data in this column must be input numerically
Date: All data in this column must be in date format. The default is yyyy-mm-dd, but you can change that.
Yes/No: All data in this column must be in Yes/No format
Choice: Data entered in this column must be selected from a set of choice answers that you provide
Link: Data in a link column is pulled from elsewhere on Cinchy
Description: Enter a description of your column
Data Security Classification: You can select from Public, Internal, Confidential, or Restricted. Additional options can be created in the [Cinchy].[Data Security Classifications]
table by an Administrator.
Restricted: Restricted data is the most sensitive data, so you would have to treat it extra carefully. If compromised or accessed without authorization, it could lead to criminal charges, massive legal fines, or cause irreparable damage to the company. Examples include intellectual property, proprietary information or data protected by state and federal regulations.
Confidential: Often, access to confidential data requires additional authorization and explanation of why access to the data is needed. Examples of confidential data include social security numbers, credit card details, phone numbers or medical records. Depending on the industry, confidential data is protected by laws like GDPR, HIPAA, CASL and others.
Internal: This type of data is strictly accessible to internal company personnel or employees who are granted access. This might include internal-only memos, business performance, customer surveys or website analytics.
Public: This type of data is freely accessible to all employees and company personnel. It can be freely used, reused, and redistributed without repercussions. An example might be job descriptions, press releases or links to articles.
Currently there is no functionality tied directly to Data Security Classification - the tagging is just for internal auditing purposes. Future security settings will be tied to Data Security Classifications, rather than simply done at a column level.
You may have further mandatory or optional data to input depending on your selected Data Type.
Click Save to finalize your table.
You may return to change the structure of the existing table (such as rename columns, add new columns, change data type) by clicking on the Design Table button on the left-hand navigation (Image 8).
Navigate to the Cinchy homepage. In the upper left-hand corner, click on Create to get started (Image 9).
Select either a Standard or a Spatial Table (Image 10), per the descriptions below.
Spatial Table: A spatial table allows you to create geography and geometry column types, as well as geospatial indexes. You won't be able to create partitions on a spatial table.
Standard Table: You can't create geography or geometry columns in a standard table.
You can't convert from one type to another and will have to either recreate your table or link to another table with geospatial columns.
Any existing tables created before installing Cinchy Platform v4.19.0 are standard tables.
Select Import a CSV (Image 11).
Enter the following information:
Domain: Select the domain that your table will reside under. If you have administrative privileges, you can also create new domains from this screen.
File: To create the table, you must upload a .csv file.
When creating a table via Import a CSV, some settings will be set by default:
Default Table Name: The name of the file will be used as the name of the table (a number will be appended if there is a duplicate - for example, if Teams already exists, uploading Teams.csv will create a table named Teams 1, then Teams 2 if uploaded again). You can always rename the table after it has been created.
Default Table Icon: The icon defaults to a green paintbrush.
Default Column Types: Columns by default will be created as a text field, with a maximum length of the longest value in the column. If a column has only numeric values in it, it will be created as a numeric column.
To update these settings, navigate to the Design Table tab on the left navigation bar (Image 12).
When you first create a table, a default view called All Data will be created for you, which you can find on the left navigation bar under Manage Data (Image 13).
You can create additional views by clicking on "+Create View" (Image 14).
You may choose to create a view From Scratch or by Copying an Existing view (Image 15).
Select From Scratch.
The Columns tab will open. Create a Name for your View (Image 16).
If you'd like this to become the default view, toggle the default setting to On (Image 16).
4. Select the column(s) that you want to be visible in this view (Image 17). You may rearrange the column order using drag and drop.
5. Click on the Sort tab in the left navigation bar (Image 18).
6. Use this screen to select which columns you'd like to sort your data by, and in which order. You may rearrange the columns using drag and drop (Image 19).
8. Click on the Permission tab in the left navigation bar. Here, you may set permissions for who can use this view. By default, it's set to All Users (Image 21).
9. Select Save to finalize your view.
Select From Existing.
Select which view you would like to copy (Image 22).
To update any view, including the Add Data view, click on the pencil icon next to the view's name under Manage Data (Image 23).
Once you create a table, it will be added to your bookmarks by default. Other users (or if you un-star the table from your bookmarks) will see it in the Homepage if they have permissions to.
For more on Included Columns,
Columnar Indexing (also known as Columnstore Indexing) is available when running SQL Server 2016+. It's not currently available on a PostgreSQL deployment of the Cinchy platform.
Columnstore indexes are used for storing and querying large tables. This index type uses column-based data storage and query processing to improve query performance. Instead of rowstore or b-tree indexes, where the data is logically and physically organized and stored as a table with rows and columns, the data in a columnstore index is physically stored in columns and logically organized in rows and columns.
For more information on creating, modifying or managing Partitioning, please visit Microsoft's documentation.
Change your
When you are finished with the Info page, select Columns from the left-hand navigation bar (Image 5). Refer to the documentation on column types for additional information.
Calculated: Data in this column is calculated using an expression
Advanced Settings: Select any checkboxes that pertain to your column; see the column documentation for more information about these parameters.
7. Click on Design Controls > Entitlements in the left navigation pane to set your permissions (Image 7). You may set these as granular as you choose. You may also set permissions on a
The column names in your .csv file must not conflict with system columns.
7. Click on the Filter tab in the left navigation bar. Here, you may use a filter to focus your view (Image 20).
A GUID is a globally unique identifier, formatted as a 128-bit text string, that represents a unique identification. Both Cinchy Tables and Columns have a GUID.
This feature is particularly useful when deploying between Cinchy instances.
For example, in a model deployment, you must have matching GUIDs on your columns in order for them to properly load between environment A and environment B. There might be times when these GUIDs don’t automatically match, however, such as if you manually added a new column to environment B and also manually added it to environment A.
In this case, the two columns would have different GUIDs, and the model deployment would fail. With this new feature, however, you can match up conflicting GUIDs to properly load your model.
You have the ability to display and edit these table and column GUIDs within the Design Table screen.
Table GUIDs must be unique to your specified environment.
Column GUIDs must be unique to the table.
Table GUIDs can be found under Design Table > Info > GUID (Image 1).
Click on the pencil icon to edit the GUID.
GUIDs must adhere to supported formats for Cinchy.
32 hexadecimal (base-16) digits.
Displayed in five groups, separated by hyphens.
The groups take the form of 8-4-4-4-12 characters, for a total of 36 characters (32 hexadecimal characters and 4 hyphens).
Example: 123e4567-e89b-12d3-a456-426614174000
Warning: Changing the value may have damaging effects, proceed with caution.
3. Column GUIDs can be found under Design Table > Columns > GUID (Image 2).
4. Click on the pencil icon to edit the GUID.
GUIDs must adhere to supported formats for Cinchy.
32 hexadecimal (base-16) digits.
Displayed in five groups, separated by hyphens.
The groups take the form of 8-4-4-4-12 characters, for a total of 36 characters (32 hexadecimal characters and 4 hyphens).
Example: 123e4567-e89b-12d3-a456-426614174000
Warning: Changing the value may have damaging effects, proceed with caution.
When you create a column within Cinchy, you can choose to create a link column. A link column allows you to establish inherent relationships with other tables.
Linking is done by the Cinchy ID, which is unique. When you create a link column, you select a column to link to. This is simply a decision on which field to show from the linked record. You should pick a unique field to link on to avoid confusion if possible.
Once a record is created, its Cinchy ID never changes. This means that modifying the row of data in the linked table won't change the relationship in your table to that row. This also means that if you didn't use a unique column, even though the UI looks the same, you are actually linking to different rows.
In general, you should only use unique columns as the linked column. This needs to be balanced with readability in other tables.
For example, Full Name might not be unique to every employee, but it's more readable and understandable than Employee ID. In other cases, it makes sense to link via an ID and add a display column to show relevant information.
To help other builders follow best practices of only linking to unique (or close to unique, such as Full Name) columns, you should un-check the Allow Linking checkbox for non-unique columns so they won't be able to use it for linking.
If this option is unchecked, it prevents users from showing this column in another table.
For example, if you have an ID card # within an employees table, you may not want to display it to the rest of the company because it simply would not be relevant when they're linking to employees and want to see additional information (such as department, title, location). Arguably, a lot of these columns are also taken care of by access controls (since most people won't have access to view that column).
Deselecting this box should be done sparingly, as it doesn't impact the security of your data, only how convenient it is to see it.
When you select a record to link to on the Manage Data screen, it can be useful to see additional information about the records to ensure that it's the record you want to link to (Image 1). You can add additional display columns in the advanced options for link columns (Image 2).
When you type in the cell, all displayed columns will be searched through, not just the Linked Column (Image 3). (Green doesn't have a B in it, but #00B050 does, so the Green record shows up.)
The link filter filters out records from the drop down list. This is useful for reducing the options to only the typical use case. Commonly used for filtering the drop down to a list of active users or other resources, while not preventing someone from entering missing records with inactive resources.
This is only a display filter; it doesn't prevent other values from being entered as long as they're valid records in the linked table.
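For example, a link filter that narrows the drop-down to active resources might look like the sketch below, assuming the linked table has a [Status] column:
[Status] = 'Active'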
You can define 1 to 1, 1 to many, and many to many relationships.
Generally it's rare to link 1:1 relationships, since they should usually be in the same table. For example, you would not have a separate table for Employee Phone Number and Employee Address; they would simply be two columns within the Employees table. There are nonetheless cases where it makes sense, for example a Keycard tracking table where each keycard has one assigned employee.
To enforce a 1:1 relationship within Cinchy, you set the unique constraint and leave it as single-select when creating a link column.
A common relationship to have is a one to many relationship. For example, one customer can have multiple invoices.
To enforce a 1:many relationship within Cinchy, you want to create a link column in the table on the “many” side of the relationship (in the above example, in the invoices table) and leave the link column as single select.
You can also have a many to many relationship. For example, you can have multiple customers, and multiple products. Each customer can order multiple products, and each product can be ordered by multiple customers. Another example is books and authors. An author can write multiple books, but a book can also have multiple authors. You can express many to many relationships in two ways.
For the use case of multiple customers and multiple products, you can use orders as an intermediary table to create indirect relationships between customers and products. Each order has one customer, and each order has multiple products in it. You can derive the relationship between customers and products through the orders table.
To create a many:many relationship through a different entity, you want to create a table for orders. Within orders, you want to create a single-select link to customers and a multi-select link to products.
For the use case of books and authors, it makes sense to create a multi-select link column in the Books table where multiple authors can be selected.
To create a multi-select link column in Cinchy, you select the Multi-Select option when you create a new link column.
This page guides you through the Cinchy Data Experience Deployment Utility.
CinchyDXD is a downloadable utility used to move Data Experiences (DX) from one environment to another. This includes any and all objects and components that have been built for, or are required in, support of the Data Experience.
The following sections in this document will outline the basics of how to build, export, and install a DX.
Items of note moving forward in this document:
Source Environment is the environment in which the DX is built.
All objects need to be created in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).
Target Environment is the environment in which the DX will be installed.
The example DX is a simple Currency Converter DX that consists of
One (1) table
One (1) query
This example doesn't include the following:
NO applets
NO integrated clients
NO Data Sync Configurations
NO Reference Data
NO Models
NO Groups
NO System Colours
NO Formatting Groups
NO Literal Groups
Future iterations of this document will add to this example's complexity level.
The general steps to deploying the CinchyDXD Utility are as follows:
This page outlines Step 5 of Deploying CinchyDXD: Repackaging the Data Experience
After you have made any applicable changes to your DX, you must re-export the package out of your source environment.
If you have added or removed any of the following while updating your DX, you will need to update the Data Experience Definition table:
Name
Tables
Views
Integrated Clients
Data Sync Configurations
Listener Configurations
Secrets
Reference Data
User Defined Functions
Models
Groups
System Colours
Saved Queries
Pre-Install Scripts
Post-Install Scripts
Applets
Literal Groups
Webhooks
Builders
Builder Groups
Sync GUID
If you have added or removed any of the following while updating your DX, you will need to update the Data Experience Reference Data table:
Name
Ordinal
Filter
New Records
Changed Records
Dropped Records
Table
Sync Key
Expiration Timestamp Field
Sync GUID
Using PowerShell, you will now export the Data Experience you have defined within Cinchy.
Launch PowerShell and navigate to your CinchyDXD folder
You can launch PowerShell right from your file explorer window in the CinchyDXD folder, saving you the extra step of navigating to the CinchyDXD folder manually in PowerShell.
2. In the PowerShell window, type cin and hit Tab on your keyboard
3. In the PowerShell command line, next to .\CinchyDXD.ps1, type export
4. Hit Enter on your keyboard
If you don't remember the mandatory parameters, you can press Enter on your keyboard after typing .\CinchyDXD.ps1 export; PowerShell will provide you with the required and optional parameters to export the data experience.
5. You must now enter your mandatory export parameters.
The parameters can be executed on one line in PowerShell, but for legibility they are shown below on separate lines. If you are putting your parameters on separate lines, you must end each line with a backtick (`) for the parameters to execute.
You will need to update your version number
Sample:
.\CinchyDXD.ps1 export `
-s "<source Cinchy url>" `
-u "<source user id>" `
-p "<source password>" `
-c "C:\Cinchy CLI v4.0.2" `
-d "C:\CLI Output Logs" `
-g "8C4D08A1-C0ED-4FFC-A695-BBED068507E9" `
-v "2.0.0" `
-o "C:\CinchyDXD_Output" `
\
Enter the export parameters into the PowerShell window (Image 1).
Hit Enter on your keyboard to run the export command
PowerShell will begin to process the export. Once the export is complete, PowerShell will provide you with an export complete message (Image 2).
Ensure that the DXD Export Folder is populated (Image 3).
Ensure that the Data Experience Release table is populated in the source environment (Image 4).
Ensure that the Data Experience Release Artifacts table is populated in the source environment (Image 5).
This page provides an overview of Data Compression.
If you need to manage space within your database, you can set a data compression policy. Currently we allow you to permanently delete versions in the collaboration log. Be aware that the current version of compression is a LOSSY process (data will be permanently deleted). Take that into consideration when configuring a policy.
We recommend you keep more versions rather than fewer. You can think of the policy above as: keep any version newer than 180 days, and keep the most recent 50 versions. As long as a version satisfies at least one of the two keep conditions, we keep it. Using the example above:
A version that's from 205 days ago but is amongst the most recent 50 versions (for example, version 22 of 60) will be kept, because it satisfies the condition of being in the most recent 50 versions.
A version that's from 163 days ago but is version 65 of 80 will be kept, because it satisfies the condition of being less than 180 days old.
A version that's from 185 days ago and is version 65 of 80 will be deleted, because it doesn't satisfy either condition.
The actual compression of data happens during the maintenance window. Please check with your system administrators to confirm when maintenance is scheduled.
The number of versions is based on the major version, not the minor version. This means that for a record on version 35.63 with a policy of keeping the most recent 10 versions, all versions 26.0 and up will be kept, rather than all versions 35.44 and up.
System tables are included out-of-the-box with your Cinchy platform, and can be used to track and manage a variety of data.
You can easily query for a list of your system tables using the below:
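The exact query isn't reproduced here, but a minimal sketch that lists the system tables (which all reside in the Cinchy domain) would be:
SELECT [Name]
FROM [Cinchy].[Tables]
WHERE [Domain] = 'Cinchy'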
The system tables included are:
Applets: This system table manages a list of all your integrated applications
Data Experience Definitions: This is a system table for managing data experience definitions
Data Experience References: This is a system table for managing reference data for data experiences
Data Experience Release Artifacts: This is a system table for maintaining data experience release artifacts
Data Experience Releases: This is a system table for maintaining data experience releases
Data Security Classifications: This is a system table for maintaining data security classifications
Data Sync Configurations: This system table manages a list of all your data sync configurations.
Domains: This system table manages a list of all the domains in your instance.
Execution Log: This system table tracks the execution logs of data syncs
Formatting Rules: This system table manages your formatting rules
Groups: System table for managing all groups
Literal Groups: This system table maintains a list of groups
Literal Translations: This system table maintains a list of literal translations
Literals: This system table maintains a list of literals
Regions: This system table maintains a list of regions
Saved Queries: This system table manages a list of all user saved queries that can be exposed via the REST API
System Colours: System table for maintaining colours
Table Access Control: This system table maintains a list of all table access controls in your instance
Table Columns: This system table manages a list of all the system column definitions
Tables: This system table manages a list of all the tables in your instance.
User Defined Functions: This system table manages a list of all your user defined functions
Users: System table for managing all user information including enabling/disabling the ability to create tables, queries, etc.
Views: This system table manages a list of all the views in your instance.
This page explores Saved Queries
Saved queries allow you to query any data within Cinchy (respecting entitlements) and save those queries to be used as APIs by external systems.
You can access your Saved Query directly by either CinchyID or the domain and name of the Saved Query.
<baseurl>/Query/Execute?queryId=<cinchyid>
<baseurl>/Query/Execute/<domain>/<saved query name>
You can find this information in the Saved Queries table.
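For example, a saved query named, say, Currency Converter in the Sandbox domain could be reached at a URL along the lines of:
<baseurl>/Query/Execute/Sandbox/Currency Converter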
You can also search your Cinchy Homepage to find your Saved Query.
From the homepage, select Create > Query (Image 1)
Fill out the following information:
Under the Info tab, you can fill out information on the query if you wish to save it (Image 2):
Query Name: Mandatory field. Must be unique within the Domain.
Icon: You can optionally pick a non-default icon, as well as a colour, for your query. This will be displayed in My Network.
Domain: You need to select a Domain your query will reside in. As an admin, you can also create new domains in this screen.
Description: You can give your query a description. This description will be displayed on the home screen to users browsing the marketplace. It will also be searchable.
Return Type: Queries have six different return types:
Query Results (Approved Data Only)
This is the default return type. It returns a table from a SELECT query with only approved data for tables with Change Approval enabled, or all data for tables without Change Approval. This is generally used for external APIs, as you will want to query approved data rather than drafts.
Query Results (Including Draft Data)
This return type returns a table from a SELECT query (including draft data) for tables with Change Approval enabled. Use this return type when looking to display results of records that are pending approval.
Query Results (Including Version History)
This return type returns a table from a SELECT query (including draft data) with historical data for all tables, as seen in the Collaboration Log of any record. This data includes all changes that happened to all records within the scope of the select query.
Number of Rows Affected
This return type returns a single string response with the number of rows affected if the last statement in the query is an INSERT, UPDATE, or DELETE statement.
Execute DDL Script
Use this return type when your query contains DDL commands that implement schema changes such as CREATE|ALTER|DROP TABLE, CREATE|ALTER|DROP VIEW, or CREATE|DROP INDEX.
Single Value (First Column of First Row)
This return type returns a result of 1 row x 1 column, irrespective of the underlying result set.
In the Query screen, you can modify and run your query (Image 3).
On the left hand side you have the Object tree, which shows you all the domains, tables, and columns you have access to query within Cinchy. You can search or simply navigate by expanding the domains and tables.
You can drag and drop the columns or table you're looking for into the Query Builder.
Once you are satisfied with your query, you can click save to keep a copy. You can then find your query in the "Saved Queries" table (Image 4):
Once you've set up your saved query, you can find it on your homepage (Image 5).
Clicking the query will allow you to "Execute Query" and show you the result set (if there is a SELECT at the end). Sometimes the query will have parameters you need to fill out before executing (Image 6).
Once you execute a query, you can switch the Display to Pivot Mode to see different visualizations of the data (Image 7).
If you want to share the report, you can click the Pivot URL button on the top right to copy the URL to that pivoted report. Simply add it as an applet and bookmark it to return to the pivoted view!
For more information and documentation on Cinchy Query Language (CQL), please see the CQL basics page
This page outlines Step 1 of Deploying CinchyDXD: Building the Data Experience
Remember that you must create all objects in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).
Create your data experience (DX) in a virtual data network.
Log in to Cinchy: URL: <Cinchy source URL> User ID: <source user id> Password: <source password>
From the Homepage, select Create
Select Table > From Scratch
Create the table with the following properties (Image 1).

| Table Details | Values |
| --- | --- |
| Table Name | Currency Exchange Rate |
| Icon + Colour | Choose your own icon |
| Domain | Sandbox (if the domain doesn't exist, create it) |
| Description | This table is a test table for building and deploying a data experience for currency conversion |
Click Columns in the left hand navigation to create the columns for the table
Click the “Click Here to Add” tab to add each column.

| Column Details | Values |
| --- | --- |
| Column 1 | Column Name: Currency 1, Data Type: Text, Advanced Settings: |
| Column 2 | Column Name: Currency 2, Data Type: Text, Advanced Settings: |
| Column 3 | Column Name: Rate, Data Type: Number, Advanced Settings: |
Click the Save button to save your table.
In your newly created table, enter the following sample data (Image 2).

| Currency 1 | Currency 2 | Rate |
| --- | --- | --- |
| CAD | USD | 0.71 |
| USD | CAD | 1.40 |
Create a simple query that pulls information from the Currency Exchange Rate table that will allow a simple currency exchange calculation.
From the Homepage, select Create.
Select Query.
In the query builder locate the Currency Exchange Rate table and drag it to the “FROM” line (Image 3).
You will find the Currency Exchange Rate table in the “Sandbox” domain. To expand the “Sandbox” domain, click on the gray arrow (or double click)
In the “SELECT” line drag and drop the “Rate” column and enter in the following (Image 4):
SELECT [Rate] * @Amount AS 'Converted Amount'
You will find the Rate column by expanding the Currency Exchange Rate table, similarly to expanding the “Sandbox” domain
Enter in the following for the WHERE clause (Image 5):
WHERE [Deleted] IS NULL
AND [Currency 1] = @Currency_1
AND [Currency 2] = @Currency_2
Click the Execute (or play) icon to run the query (Image 6):
Test the query by entering in the following and clicking the submit button (Image 7):
@Amount: 100
@Currency_1: CAD
@Currency_2: USD
Save the Query by clicking on the Info tab (Left Navigation).
Enter in the following details for the query (Image 8):

| Query Details | Values |
| --- | --- |
| Query Name | Currency Converter |
| Icon + Colour | Choose your own icon |
| Return | Query Results (Approved Data Only) |
| Domain | Sandbox |
| API Result Format | JSON |
| Description | This query is a test query for building and deploying a data experience for currency conversion |
Click the Save button (Image 9).
This page outlines how to delete tables in the Cinchy platform.
You can delete tables on your Cinchy platform in three ways:
Use the "Design Table" tab in the relevant table. This option is available to any user with the "Design Table" entitlement on the table.
Use CQL. This option is available to any user with the "Design Table" entitlement on the table.
Use the Tables table. This option is available to any user with the "Delete Row" entitlement on the table, which is usually an Administrator.
To ensure that the relevant user has the correct entitlements, you can navigate to the Data Controls > Entitlements tab of the relevant table. The "Design Table" column should be checked off (Image 1).
Deleting a table via any of the below methods results in your data becoming inaccessible; however, it will technically still remain in the underlying database. To fully remove deleted data, you must use the Data Erasure capability. Erroneously deleted tables can be restored via the [Tables] table.
Navigate to the table you wish to delete as a user with "Design Table" access on it.
Select "Design Table" > "Info" > Delete (Image 2).
Navigate to the Query Builder as a user with "Design Table" access on the table you wish to delete.
Use the statement, shown below, to delete your table.
Syntax
Example
Navigate to the Tables table as a user with "Delete Row" access, generally an Administrator.
Find the row with the table that you want to delete.
Right-click on the row > Delete.
This page outlines how to restore tables, columns, and rows in Cinchy
This page documents the method of restoring deleted tables, columns, and rows in your Cinchy table.
Remember that you can always revert a changed or deleted record to a previous state using the Collaboration Log.
To restore a row that has been deleted:
In the table where you want to restore the row, navigate to the Recycle Bin.
Locate the deleted row.
Right click anywhere in the row > Restore Row (Image 1).
You should see a "Restore Successful" pop-up.
To restore a column that has been deleted or changed:
Note: You need insert access on the Tables table to complete these steps.
This method will revert the entire table, including any changes made after the column was deleted.
Navigate to the [Cinchy].[Tables] table.
Find the row with the table that has the column you want to restore > right click anywhere in the row > Collaboration Log > Revert to a previous version (Image 2).
You should see a "Revert Successful" pop-up.
To restore a table that has been deleted:
Note: You need insert access on the Tables table to complete these steps.
Navigate to the [Cinchy].[Tables] table.
Navigate to the Recycle Bin.
Find the row with the table that you want to restore > right click > "Restore Row" (Image 3)
You should see a "Restore Successful" pop-up.
Make sure the Deleted date is the same so that you don't also retrieve previously dropped columns.
This page outlines Step 4 of Deploying CinchyDXD: Updating the Data Experience
Once deployed, any required updates to the Data Experience must be made in your source environment; you don't want to repeat the same updates in both the source and target environments. The upcoming section shows how to update the data experience in the source environment so that you can then re-package and reinstall it in the target environment.
Log back into your source environment using the following: URL: <Cinchy source url> User ID: <source user id> Password: <source password>
Make the following changes to the Currency Exchange Rate table:
| Column Details | Values |
| --- | --- |
| Column 1 | Current Column Name Value = Currency 1; New Column Name Value = From Currency; All other settings remain the same |
| Column 2 | Current Column Name Value = Currency 2; New Column Name Value = To Currency; All other settings remain the same |

Save your changes before leaving the table.
Update the Currency Converter query to reflect column name changes that were made in the Table Updates section above (Image 1).
Be sure to update the @Currency_1 and @Currency_2 labels to better reflect the input fields
Test the query to validate that it's still functioning (Image 2 and 3).
Save your query.
This page outlines Step 3 of Deploying CinchyDXD: Installing the Data Experience
The install of a Data Experience is executed in a different environment than that of the export. Please ensure that before moving forward with the following instructions you have an environment to install the data experience into. The install of a data experience MUST be done in the same version. Your source and target environment version MUST be the same (For example, Source Version = 4.11 | Target Version = 4.11).
Below are the details that will be required for the installation environment:
URL: <Cinchy target url>
UserID: <target user id>
Password: <target password>
Using PowerShell you will now install the Data Experience you have exported out of Cinchy.
Open the File Explorer and navigate to your DX exported folder (Image 1).
In the folder path (address bar) of the exported data experience, type PowerShell to launch PowerShell at that path (Image 2).
Hit Enter on your keyboard, the PowerShell window will appear (Image 3).
In the PowerShell window, type in cin and hit Tab on your keyboard (Image 4).
In the PowerShell command line, type install (Image 5).
Hit Enter on your keyboard (Image 6).
The PowerShell window will provide you with the required and optional components to install the DX.
You must now set up your mandatory install parameters
The parameters can be passed to PowerShell on a single line, but for legibility they are shown below on separate lines. If you put your parameters on separate lines, you must end each line with a backtick (`) for the command to execute.
Sample:
.\CinchyDXD.ps1 install `
-s "<target Cinchy url>" `
-u "<target user id>" `
-p "<target password>" `
-c "C:\Cinchy CLI v4.0.2" `
-d "C:\CLI Output Logs" `
Be sure that the user(s) and group(s) required to install a DX exist in your target environment. If they don't exist, PowerShell will generate an error message when you attempt to install the DX.
Enter the install parameters into the PowerShell window (Image 7).
Hit Enter on your keyboard to run the install command. Once the Data Experience has been installed you will get a message in PowerShell that the install was completed (Image 8).
Ensure that the Models Table is populated in the target environment with the model that was installed (Image 9).
Ensure that the Currency Exchange Rate table exists in the target environment (Image 10).
Ensure that the Currency Converter query exists in the target environment (Image 11).
Ensure that the Data Experience Definitions table is populated with the DX parameters that were set up in the source environment (Image 12).
Ensure that the Data Experience Releases table in the target environment is populated (Image 13).
This page outlines Step 2 of Deploying CinchyDXD: Packaging the Data Experience
The CinchyDXD utility takes all the components (tables, queries, views, formatting rules) of a DX and packages them up so they can be moved from one environment to another.
Remember that all objects need to be created in one source environment (ex: DEV). From there, DXD will be used to push them into others (ex: SIT, UAT, Production).
The CinchyDXD utility is only required (made accessible) for the environment that's packing up the data experience. It's not required for the destination (or target) environment.
For CinchyDXD to work, you must have CinchyCLI installed. For further installation instructions please refer to the CLI documentation.
To access the Data Experience Deployment utility please contact Cinchy support (support@cinchy.com).
To download the Utility:
Login to Cinchy
Navigate to the Releases Table
Select the Experience Deployment Utility View
Locate and download the utility (Cinchy DXD v1.7.0.zip)
The CinchyDXD utility is only upwards compatible with Cinchy version 4.6+
Unzip the utility and place the folder at any location on a computer that also has CinchyCLI installed
Create a new folder in the same directory that will hold all of the DX exports generated (CinchyDXD_Output) (Image 1).
This folder will then hold all your deployment packages.
Launch a PowerShell console window
From the console, navigate to the CinchyDXD directory (Image 2 and 3).
From within your file explorer window, type “PowerShell” into the file path. It will launch a PowerShell window already at the folder path
PowerShell requires an initial setup when using CinchyDXD.
From your PowerShell window type cin
Hit Tab on your keyboard (Image 4).
Hit Enter on your keyboard (Image 5).
You will get an error message (above) that CinchyDXD.ps1 can't be loaded because running scripts is disabled on this system.
To resolve this error:
From your start menu, search for PowerShell and select Run as Administrator (Image 6).
When prompted if you want to allow this app to make changes on your device, select Yes.
In your PowerShell Administrator window enter Set-ExecutionPolicy RemoteSigned (Image 7).
Hit Enter on your keyboard (Image 8).
When prompted with the Execution Policy Changes, enter A for “Yes to All”
Close the PowerShell Administrator window
Navigate back to your PowerShell window for the CinchyDXD folder.
From your PowerShell window type cin
Hit Tab and then Enter on your keyboard (Image 9).
The basic CinchyDXD instructions will be displayed. You will be able to execute commands such as exporting and installing a Data Experience.
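For reference, the one-time setup and the help display described above look like the following in practice; the folder path below is illustrative:

```powershell
# One-time setup: run from a PowerShell window opened with "Run as Administrator"
Set-ExecutionPolicy RemoteSigned

# Then, from a regular PowerShell window opened at the CinchyDXD folder
# (the path below is illustrative), run the utility with no arguments
# to display the available CinchyDXD commands, such as export and install
cd "C:\CinchyDXD"
.\CinchyDXD.ps1
```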
Cinchy uses four tables for packing up and deploying a Data Experience (Image 10).
The Data Experience is defined and packaged in what will be referred to, moving forward, as the Source Environment; the environment that the Data Experience is deployed to will be referred to as the Target Environment.
Data Experience Definition Table: Where the data experience is defined (tables, queries, views, formatting rules, UDF’s etc.)
Data Experience Reference Data Table: Where we define any data that needs to move with the Data Experience for the experience to work (lookup values, static values that may need to exist in tables - it typically would not be the physical data itself)
Data Experience Releases Table: Once a Data Experience is exported, an entry is created in this table for the export containing:
Version Number
Release Binary: the location where you can archive/back up your release history in Cinchy. Please note: if you have your own release management system, you have the option to opt out of archiving releases in Cinchy and check the release into your own source control instead.
Release Name
Data Experience
Data Experience Release Artifact Table: Stores all of the files that are part of the Data Experience package as individual records along with all of the binary for each record
When setting up a Data Experience definition, you will need one definition for each Data Experience you wish to package and deploy to a given number of Target Environments.
Locate and open the Data Experience Definitions table (Image 11).
Complete the following (Image 12):

| Column | Definition |
| --- | --- |
| GUID | This value is calculated; please note this value will be required as one of your export parameters in PowerShell |
| Name | This is the Name of your Data Experience |
| Tables | Select all tables that are part of the Data Experience |
| Views | Select all views (in the data browser) that are a part of the Data Experience |
| Integrated Clients | Select any integrated clients (For example: Tableau, PowerBI, custom integrations) that are part of the Data Experience |
| Data Sync Configurations | Select any data syncs (CLIs that the experience needs to work) that are part of the Data Experience |
| Listener Configurations | Select any Listener Config rows that refer to a Data Sync Configuration which is a part of the Data Experience |
| Reference Data | Select any reference data that's part of the Data Experience. Please note that the setup of the reference data is done in the table called Data Experience Reference Data (see step 2 below for setup details) |
| Secrets | Select any Secrets you'd like to include that are used in Data Sync Configurations or Listener Configs which are a part of this Data Experience |
| Webhooks | Select any Webhooks that are a part of this data experience |
| User Defined Functions | Select any user defined functions (For example: validate phone, validate email) that are part of the Data Experience |
| Models | Select any custom models that override columns or tables in your Data Experience; if there are none, leave blank |
| Groups | Select any groups that are part of the Data Experience (when moving groups, it will also move all table access [design] controls) |
| System Colours | Select a system colour (if defined) for the Data Experience |
| Saved Queries | Select any queries that are part of the Data Experience |
| Applets | Select any applets that are part of the Data Experience |
| Pre-install Scripts | Select any Pre-install Scripts (Saved Queries) that should run before the installation of this Data Experience |
| Post-install Scripts | Select any Post-install Scripts (Saved Queries) that should run after the installation of this Data Experience. A common use-case is to rectify data that may be different between environments |
| Formatting Rules | Select any formatting rules that are part of the Data Experience |
| Literal Groups | Select any literals associated to the Data Experience (For example: key values with English and French definitions) |
| Builders | Select the builder(s) who have permission to export the Data Experience |
| Builder Groups | Select the builder group(s) that have permission to export the Data Experience. Note: Best practice is to use a Group over a User. Users within groups can fluctuate, whereas the Group (or Role) will remain; this will require less maintenance moving forward |
| Sync GUID | Leave this column blank |

For this example, the definition is completed as follows:

| Column | Value |
| --- | --- |
| Name | Currency Converter |
| Tables | Currency Exchange Rate (Sandbox) |
| Saved Queries | Currency Converter |
| Builder Groups | Currency Converters |
If you make changes to the DX in the future, you aren't required to build a new Data Experience Definition in this table; instead, update the existing definition. If you need to review what the definition looked like historically, you can view it via the Collaboration Log.
When setting up a Data Experience Reference Data definition, you will need one (1) definition for each Reference Data table you wish to package and deploy with your Data Experience to the Target Environment.
This table set up is similar to setting up a CLI.
Locate and open the Data Experience Reference Data table (Image 13). Complete the following:

| Column | Definition |
| --- | --- |
| Name | This is the Name of your Reference Data Table; note this name can be anything and doesn't have to replicate the actual table name |
| Ordinal | The ordinal number assigned identifies the order in which the data is loaded, based on dependencies within the data experience. For example, if you have tables that have hierarchies in them, you will need to load the parent records first and then load your child records, which would then resolve any links in the table |
| Filter | This is where a WHERE clause would be required. For example, if you have a table that has hierarchies, you would require two rows within the Data Experience Reference Data table: one to load the parent data and one to load the children data. In the first record, a WHERE clause would be needed to filter the parent records; in the second record, a WHERE clause would be needed to filter the children records |
| New Records | Identify the behaviour of a new record (INSERT, UPDATE, DELETE, IGNORE) |
| Change Records | Identify the behaviour of a changed record (INSERT, UPDATE, DELETE, IGNORE) |
| Dropped Records | Identify the behaviour of a dropped record (INSERT, UPDATE, DELETE, IGNORE) |
| Table | Identify the table that you are exporting data from |
| Sync Key | Required; the key used to match records when the reference data is synced |
| Expiration Timestamp Field | If Dropped Records is set to "Expire" then a timestamp column is required |
Based on the configuration set up in this table, Cinchy will export the data and create CSV and CLI files.
This example doesn't have Reference Data as part of the Data Experience.
Using PowerShell you will now export the Data Experience you have defined within Cinchy.
Launch PowerShell and navigate to your CinchyDXD folder (Image 14).
Reminder: you can launch PowerShell right from your file explorer window in the CinchyDXD folder by typing “PowerShell” into the folder path and hitting Enter on your keyboard, saving you the extra step of navigating to the CinchyDXD folder manually in PowerShell (Image 15).
In the PowerShell window type in cin and hit Tab on your keyboard (Image 16).
Hit Enter on your keyboard, you will see a list of commands that are available to execute (Image 17).
In the PowerShell command line hit your “up” arrow key to bring back the last command and type export next to it (Image 18).
Hit Enter on your keyboard (Image 19).
The PowerShell window will provide you with the required and optional components to export the data experience.
You must now set up any mandatory export parameters
The parameters can be passed to PowerShell on a single line, but for legibility they have been put on separate lines below. If you put your parameters on separate lines, you must end each line with a backtick (`) for the command to execute.
Please note that the script below is only a sample. You will be required to provide values that correspond to:
the URL of the source environment
the User ID for the user who is performing the export
the Password for the user who is performing the export
your folder path for where CLI is stored
your folder path for where the CLI output files are written to
the GUID for the Data Experience that's generated in the Data Experience Definition table
your own version naming convention
your folder path for where your CinchyDXD output files are written to
Sample:
.\CinchyDXD.ps1 export `
-s "<cinchy source url>" `
-u "<source user id>" `
-p "<source password>" `
-c "C:\Cinchy CLI v4.0.2" `
-d "C:\CLI Output Logs" `
-g "8C4D08A1-C0ED-4FFC-A695-BBED068507E9" `
-v "1.0.0" `
-o "C:\CinchyDXD_Output"
Enter the export parameters into the PowerShell window (Image 20).
Hit Enter on your keyboard to run the export command.
PowerShell will begin to process the export. Once the export is complete, PowerShell will provide you with an export complete message (Image 21).
Ensure that the DXD Export Folder is populated (Image 22).
Ensure that the Data Experience Release table is populated in the source environment (Image 23).
Ensure that the Data Experience Release Artifacts table is populated in the source environment (Image 24).
This page contains various Integration Guides
You can use various methods to establish a connection between Cinchy and Microsoft Excel, such as using Basic Auth, Personal Access Tokens, or Bearer Tokens.
Review each section below for further details.
Excel connects to queries within Cinchy, so before you use any of the connection methods below you will need to create one that represents your dataset. Once created, you will need to copy down the API endpoint URL, found via the green button on the right-hand side of the Execute Query screen.
The structure of the URL endpoint is <your Cinchy instance URL>/API/<the name of your query>. You might optionally have querystring parameters at the end as well.
For example: http://your.cinchy.instance.domain/API/YourQueryDomain/API Test
Note that for Basic Authentication with a result format of CSV we will use a slightly different URL endpoint.
For Basic Auth: /API/ becomes /BasicAuthAPI/
For CSV results you will add the querystring parameter of ResultFormat=CSV.
Our example URL of a basic auth using CSV results would then become: http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV
Launch Excel and navigate to Data > Get Data > From Other Sources > Blank Query (Image 1).
In the expression box that appears, enter the below text to add in your query as your data source (Image 2):
=Csv.Document(Web.Contents("API ENDPOINT URL"))
Example:
=Csv.Document(Web.Contents("http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV"))
Once you've entered that text either click the check mark to the left of the input box or click away and it will automatically attempt to run the expression.
The data may return in HTML format initially and not be what you're expecting. To correct this:
Select the Data Source Settings.
Select Basic and enter the credentials for a Cinchy User Account that has access to run this query.
Select OK.
Within the Edit Permissions dialogue, click OK.
Within the Data Source Settings dialogue, click Close.
Select Refresh Preview.
Select Close & Load and your dataset will be displayed in the Excel worksheet.
If needed, generate a new PAT first.
Launch Excel and navigate to Data > From Web.
Select Advanced and input the following values (Image 3):
URL Parts: This is the Query API URL that you created in the section above.
HTTP Request Header Parameters:
In the first text box input Authorization
In the second text box type Bearer + your PAT. For example: "Bearer BGFHFHOJDF76DFDFD777"
Select OK.
Select Load to use the query data in Excel (Image 4).
If needed, generate a Bearer Token first.
Launch Excel and navigate to Data > From Web.
Select Advanced and input the following values (Image 5):
URL Parts: This is the Query API URL that you created in the section above.
HTTP Request Header Parameters:
In the first text box input Authorization
In the second text box type Bearer + your token. For example: "Bearer eyUzI1NiIsImtpZCI6IkE4M0UwQTFEQTY1MzE0NkZENUQxOTFDMzRDNTQ0RDJDODYyMzMzMzkiLCJ0eXAiO"
Select OK.
Select Load to use the query data in Excel (Image 6).
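If the data doesn't load as expected, it can help to confirm the endpoint and token outside of Excel first. A minimal PowerShell sketch, using placeholder values for the query URL and token:

```powershell
# Placeholder query URL and token - substitute your own values
$url     = "http://your.cinchy.instance.domain/API/YourQueryDomain/API%20Test"
$headers = @{ Authorization = "Bearer <your PAT or bearer token>" }

# A successful call returns the query results; an error here points to a URL or
# authorization problem rather than an Excel configuration issue
Invoke-RestMethod -Uri $url -Headers $headers
```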
You can use various methods to establish a connection between Cinchy and Power BI, such as using Basic Auth, Personal Access Tokens, or Bearer Tokens.
Review each section below for further details.
Power BI connects to queries within Cinchy, so before you use any of the connection methods below you will need to create one that represents your dataset. Once created, you will need to copy down the API endpoint URL, found via the green button on the right-hand side of the Execute Query screen.
The structure of the URL endpoint is <your Cinchy instance URL>/API/<the name of your query>. You might optionally have querystring parameters at the end as well.
For example: http://your.cinchy.instance.domain/API/YourQueryDomain/API Test
Note that for Basic Authentication with a result format of CSV we will use a slightly different URL endpoint.
For Basic Auth: /API/ becomes /BasicAuthAPI/
For CSV results you will add the querystring parameter of ResultFormat=CSV.
Our example URL of a basic auth using CSV results would then become: http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV
Launch Power BI and navigate Get Data > Web (Image 7).
In the window that launches, you will enter the below text, using your own URL endpoint where highlighted (Image 8):
=Csv.Document(Web.Contents(
"http://your.cinchy.instance.domain/BasicAuthAPI/YourQueryDomain/API Test?ResultFormat=CSV"
))
Click on the checkmark icon and Power BI will automatically attempt to run the expression (Image 9).
Select Edit Credentials > Basic (Image 10). Enter the credentials for a Cinchy User Account that has access to run this query and select the level at which to apply these settings. By default it's the root URL.
This process of entering your credentials won't occur with each query, it's just the first time and then they're saved locally.
Select Connect to see your data (Image 11).
You can now apply any transformations to the dataset.
In this example we also changed the name from Query1 to Product Roadmap and have edited to use the first row as a header (Image 12).
Select Close & Apply. The metadata now shows up on the right hand side and you can begin to use it to create your visualizations (Image 13).
If needed, generate a new Personal Access Token (PAT) first.
Launch Power BI and navigate to Get Data > Web.
Select Advanced and input the following values (Image 14):
URL Parts: This is the Query API URL that you created in the section above.
HTTP Request Header Parameters:
In the first text box input Authorization
In the second text box type Bearer + your PAT. For example: "Bearer BGFHFHOJDF76DFDFD777"
Select OK.
Select Load to use the query data in Power BI.
You can now apply any transformations to the dataset.
In this example we also changed the name from Query1 to Product Roadmap and have edited to use the first row as a header (Image 15).
Select Close & Apply. The metadata now shows up on the right hand side and you can begin to use it to create your visualizations (Image 16).
If needed, generate a Bearer Token first.
Launch Power BI and navigate to Get Data > Web.
Select Advanced and input the following values (Image 17):
URL Parts: This is the Query API URL that you created in the section above.
HTTP Request Header Parameters:
In the first text box input Authorization
In the second text box type Bearer + your token. For example: "Bearer eyUzI1NiIsImtpZCI6IkE4M0UwQTFEQTY1MzE0NkZENUQxOTFDMzRDNTQ0RDJDODYyMzMzMzkiLCJ0eXAiO"
Select OK.
Select Load to use the query data in Power BI.
Cinchy exposes a Tableau Web Data Connector that provides access to Cinchy Saved Queries as data sources in Tableau. Tableau versions 2019.2+ are supported.
You need an active internet connection to use the Web Data Connector.
To get started, you must add a record into the Integrated Clients table in the Cinchy domain with the below values.

| Column | Value |
| --- | --- |
| Client Id | tableau-connector |
| Client Name | Tableau |
| Grant Type | Implicit |
| Permitted Login Redirect URLs | |
| Permitted Logout Redirect URLs | |
| Permitted Scopes | Id, OpenId, Email, Profile, Roles |
| Access Token Lifetime (seconds) | 3600 |
| Show Cinchy Login Screen | Checked |
| Enabled | Checked |

Connect from Tableau
Launch Tableau.
Under Connect -> To a Server, select the Web Data Connector option.
Enter the URL from the Permitted Login Redirect URLs field on the Integrated Clients record created under the Prerequisites section above.
The Cinchy login screen will appear; enter your credentials.
Select one or more queries to add to your data set. The result of each query will be available as a Table in Tableau. If a query has parameters, you will be prompted to provide the parameter values before you can add it to your collection.
Select the Load button.
The Cinchy query results will now be accessible for you to create your visualization.
This page outlines Step 6 of Deploying CinchyDXD: Reinstalling the Data Experience
Using PowerShell, you must now install the Data Experience you have exported out of Cinchy.
Open File Explorer and navigate to your exported folder (Image 1).
In the folder path (address bar) of the exported data experience, type PowerShell to launch PowerShell at that path.
Hit Enter on your keyboard (Image 2).
In the PowerShell window, type cin and hit Tab on your keyboard. Type install (Image 3).
Enter the install parameters into the PowerShell window:
The parameters can be passed to PowerShell on a single line, but for legibility they have been put on separate lines below. If you put your parameters on separate lines, you must end each line with a backtick (`) for the command to execute.
Sample (Image 4):
.\CinchyDXD.ps1 install `
-s "<target Cinchy url>" `
-u "<target user id>" `
-p "<target password>" `
-c "C:\Cinchy CLI v4.0.2" `
-d "C:\CLI Output Logs"
Hit Enter on your keyboard to run the install command. Once the Data Experience has been installed you will get a message in PowerShell that the install was completed (Image 5).
Ensure that the Models Table is populated in the target environment with the model that was installed (Image 6).
Ensure that the Currency Exchange Rate table exists in the target environment with the new column names (Image 7).
Ensure that the Currency Converter query exists in the target environment with the new column names and labels (Image 8).
Ensure that the Data Experience Definitions table hasn't changed, unless you have added or removed column details within this table (Image 9).
Ensure that the Data Experience Releases table in the target environment is populated with the new release version number from the install (For example: 2.0.0) (Image 10).
This page outlines multi-lingual support information.
POST
<Cinchy-URL>/API/Translate
Pass in a list of literal GUIDs, along with a language and region. If translations are found in that language, they will be returned.
| Name | Type | Description |
| --- | --- | --- |
| debug | boolean | Defaults to false if not specified. Debug true will explain why that string was returned as the translation. |
| region | string | Subtag from the Regions table. User's preferences will be used if not specified. |
| guids | array | Array of strings. GUIDs from the Literals table. |
| language | string | Subtag from the Languages table. User's preferences will be used if not specified. |
If the translation exists in the language and region specified, it will be returned.
If the translation exists in the language but not the specified region, it will still be translated and returned.
If the GUID exists but it's not available in the specified language, the default text in the Literals table will return.
If the GUID doesn't exist or you don't have permission to it, it will return the GUID back as the translation.
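As an illustration, here is a minimal PowerShell sketch of calling this endpoint. The instance URL, token, and literal GUIDs are placeholders, and it assumes the endpoint accepts a JSON body and a Bearer token:

```powershell
# Placeholder instance URL, token, and literal GUIDs
$baseUrl = "https://your.cinchy.instance.domain"
$headers = @{
    Authorization  = "Bearer <your bearer token>"   # assumes Bearer token auth
    "Content-Type" = "application/json"             # assumes a JSON request body
}

# Request translations for two literals in Canadian French
$body = @{
    guids    = @("00000000-0000-0000-0000-000000000001",
                 "00000000-0000-0000-0000-000000000002")
    language = "fr"        # subtag from the Languages table
    region   = "CA"        # optional subtag from the Regions table
    debug    = $false      # set to $true to see why a given string was returned
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$baseUrl/API/Translate" -Headers $headers -Body $body
```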
Cinchy has three tables to provide language support.
[Cinchy].[Literal Groups]
[Cinchy].[Literals]
[Cinchy].[Literal Translations].
This table can optionally be used to group the translations. The default Cinchy strings belong to the Cinchy literal group. We recommend you create one literal group per applet or UI so you can retrieve the full list of GUIDs required for that page/applet easily.
This table defines all the strings that you want to translate.
String that displays if no translation is found for the language specified.
GUID used to refer to the literal. A UUID will be generated by default, but it can be overridden using the GUID Override field to something more human-readable.
Use this to group your strings so they can be easily retrieved. Note that this is a multi-select so you can use a single literal for multiple applets (including using the default Cinchy literals and translations for custom applets).
This is the table where the translations are stored.
This is the translated string that's returned.
This is the literal the translation is for.
A language must be specified for a translation. Region can also be optionally specified for region specific words (ex. color vs colour).