Apache Parquet is a file format designed to support fast data processing for complex data, with several notable characteristics:
1. Columnar: Unlike row-based formats such as CSV or Avro, Apache Parquet is column-oriented – meaning the values of each table column are stored next to each other, rather than those of each record.
2. Open-source: Parquet is free to use and open source under the Apache License 2.0, and is compatible with most Hadoop data processing frameworks. To quote the project website, “Apache Parquet is… available to any project… regardless of the choice of data processing framework, data model, or programming language.”
3. Self-describing: In addition to data, a Parquet file contains metadata including schema and structure. Each file stores both the data and the standards used for accessing each record – making it easier to decouple services that write, store, and read Parquet files.
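To make the columnar and self-describing points concrete, here is a minimal sketch using the pyarrow library (not part of Cinchy; the file and column names are invented for illustration). It writes a small employee table, reads the embedded schema back, and fetches a single column without touching the rest of the file. It also sets an explicit row group size, a knob that reappears as the Row Group Size parameter further below.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small employee table and write it to Parquet, splitting the data
# into row groups of at most two rows each.
table = pa.table({
    "name": ["Ada", "Grace", "Edsger"],
    "department": ["Eng", "Eng", "Research"],
})
pq.write_table(table, "employees.parquet", row_group_size=2)

# Self-describing: the schema and structure travel inside the file itself.
print(pq.read_schema("employees.parquet"))
print(pq.read_metadata("employees.parquet").num_row_groups)  # -> 2

# Columnar: a single column can be read without scanning whole records.
departments = pq.read_table("employees.parquet", columns=["department"])
print(departments.column("department"))
```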
Example Use Case: You have a Parquet file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Parquet source supports batch syncs.
You can review the parameters found on the Info tab in the table below (Image 1).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |
The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).

The following parameters will help to define your data sync source and how it functions.

Parameter | Description | Example |
---|---|---|
(Sync) Source | Mandatory. Select your source from the drop down menu. | Parquet |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Row Group Size | Mandatory. The size of your Parquet Row Groups. Review the documentation here for more on Row Group sizing. | The recommended disk block/row group/file size is 512 to 1024 MB on HDFS. |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
Auth Type | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM role". When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured with at least read access to the source, and that the Connections pod's role has permission to assume the role specified in the data sync config. A sketch of the two modes follows this table. | |
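As referenced in the Auth Type row above, the two modes map onto familiar AWS calls. The following is only an illustrative sketch using boto3; Cinchy performs the equivalent internally, and the bucket, key, and role ARN below are hypothetical placeholders.

```python
import boto3

# Mode 1 - Access Key: authenticate directly with a key ID and secret.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",      # hypothetical key ID
    aws_secret_access_key="...",      # hypothetical secret
)

# Mode 2 - IAM role: the caller's own role assumes the role whose ARN is
# pasted into the sync config. The calling role needs sts:AssumeRole on it,
# and the assumed role needs at least read access to the source.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/parquet-reader",  # hypothetical
    RoleSessionName="connections-sync",
)["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

obj = s3.get_object(Bucket="my-bucket", Key="employees.parquet")
```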
The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |

There are other options available for the Schema section if you click on Show Advanced.

Parameter | Description | Example |
---|---|---|
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, all rows are synced, with an execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |

These Mandatory and Validate Data combinations are restated as a short code sketch at the end of this section.

You can choose to add in a Transformation > String Replacement by inputting the following. Note that you can have more than one String Replacement.

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.

Define your Sync Behaviour.

Add in your Post Sync Scripts, if required.

Define your Permissions.

Click Jobs > Start a Job to begin your sync.
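The Mandatory and Validate Data combinations above are easy to misread, so here is a plain-Python, per-row reading of the documented rules. This is only a sketch of the behaviour as written; it is not Cinchy's implementation.

```python
# A plain restatement of the Mandatory / Validate Data rules documented above.
def evaluate_row(value, mandatory: bool, validated: bool) -> str:
    empty = value is None or value == ""
    if mandatory and validated and empty:
        return "rejected"  # empty value on a Mandatory + Validated column
    if mandatory and not validated and empty:
        return "synced (execution log: failed, Mandatory Rule Violation)"
    return "synced"

print(evaluate_row("", mandatory=True, validated=True))   # rejected
print(evaluate_row("", mandatory=True, validated=False))  # synced, logged as failed
print(evaluate_row("", mandatory=False, validated=True))  # synced
```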
Fixed width text files are special cases of text files where the format is specified by column widths, a pad character, and left/right alignment. Column widths are measured in units of characters. For example, if you have data in a text file where the first column always has exactly 10 characters, the second exactly 5, and the third exactly 12 (and so on), this would be categorized as a fixed width text file.
If a text file follows the rules below, it is a fixed width text file (a short parsing sketch follows the list):
Each row (paragraph) contains one complete record of information.
Each row contains one or many pieces of data (also referred to as columns or fields).
Each data column has a defined width specified as a number of characters that is always the same for all rows.
The data within each column is padded with spaces (or any character you specify) if it does not completely use all the characters allotted to it (empty space).
Each piece of data can be left or right aligned, meaning the pad characters can occur on either side.
Each column must consistently use the same number of characters, same pad character and same alignment (left/right).
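Putting those rules together, a fixed width row can be parsed purely by slicing at the declared widths. The sketch below assumes three columns of 10, 5, and 12 characters; it illustrates the format itself, not Cinchy's parser.

```python
# Parse one fixed width record by slicing at declared column widths.
# Left-aligned fields pad on the right; right-aligned fields pad on the left.
WIDTHS = [10, 5, 12]

def parse_fixed_width(line: str, widths=WIDTHS, pad: str = " "):
    fields, start = [], 0
    for width in widths:
        # Strip the pad character from both sides of the slice.
        fields.append(line[start:start + width].strip(pad))
        start += width
    return fields

# Build a sample row: left-aligned name, zero-padded right-aligned number,
# left-aligned department.
row = "Ada".ljust(10) + "42".rjust(5, "0") + "Engineering".ljust(12)
print(parse_fixed_width(row))  # ['Ada', '00042', 'Engineering']
```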
Example Use Case: You have a fixed width file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Fixed Width File source supports batch syncs.
The Fixed Width File source does not support Geometry, Geography, or Binary data types.
You can review the parameters found on the Info tab in the table below (Image 1).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. Since we are doing a local upload, we use "@Filepath". | @Filepath |

The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).

The following parameters will help to define your data sync source and how it functions.

Parameter | Description | Example |
---|---|---|
(Sync) Source | Mandatory. Select your source from the drop down menu. | Fixed Width File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Rows to Ignore | Mandatory. The number of records from the bottom of the file to ignore. | 0 |
Encoding | Optional. The encoding of the file. This defaults to UTF8; UTF8_BOM, UTF16, and ASCII are also supported. | |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |

The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |

There are other options available for the Schema section if you click on Show Advanced.

Parameter | Description | Example |
---|---|---|
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, all rows are synced, with an execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |

You can choose to add in a Transformation > String Replacement by inputting the following. Note that you can have more than one String Replacement; a regex sketch follows this section.

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.

Define your Sync Behaviour.

Add in your Post Sync Scripts, if required.

Define your Permissions.

Click Jobs > Start a Job to begin your sync.
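The String Replacement transformation shown above is, at heart, a search-and-replace over a column's values. As a rough analogy, here is the same idea with Python's re module; the exact pattern syntax Cinchy accepts is defined by the product, not by this sketch.

```python
import re

pattern = r"\bMr\.\s*"   # the Pattern field: what to search for
replacement = ""         # the Replacement field: what to substitute in

values = ["Mr. Babbage", "Mr.Turing", "Hopper"]
cleaned = [re.sub(pattern, replacement, v) for v in values]
print(cleaned)  # ['Babbage', 'Turing', 'Hopper']
```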
A binary file is a computer file that is not a text file, and whose content is in a binary format consisting of a series of sequential bytes, each of which is eight bits in length.
You can use binary files from a Local upload, Amazon S3, or Azure Blob Storage in your data syncs.
Some benefits of using binary files include:
Better efficiency via compression
Better Security through the ability to create custom encoding standards.
Unmatched speed: since the data is stored in a raw format and is not encoded using any character encoding standard, it is faster to read and store.
Example Use Case: You have a binary file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Binary File source supports batch syncs.
The Binary File source does not support Geometry or Geography data types.
You can review the parameters found on the Info tab in the table below (Image 1).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |

The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).

The following parameters will help to define your data sync source and how it functions.

Parameter | Description | Example |
---|---|---|
(Sync) Source | Mandatory. Select your source from the drop down menu. | Binary File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Header Lines to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Lines to Ignore | Mandatory. The number of records from the bottom of the file to ignore. | 0 |
Encoding | Optional. The encoding of the file. This defaults to UTF8; UTF8_BOM, UTF16, and ASCII are also supported. | |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
Auth Type | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM role". When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured with at least read access to the source, and that the Connections pod's role has permission to assume the role specified in the data sync config. | |

The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns. Binary File sources have a unique, mandatory parameter for Standard Columns: Parse Content By.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |
Parse Content By (Standard Columns only) | Mandatory. Defines how your content is parsed, with three options: Byte Length (the content length in number of bytes); Trailing Byte Sequence (the trailing sequence, in base64, that indicates the end of the field); or Succeeding Byte Sequence (the sequence, in base64, that indicates the start of the next field, and thus the end of this one). A parsing sketch follows this section. | Byte Length |

There are other options available for the Schema section if you click on Show Advanced.

Parameter | Description | Example |
---|---|---|
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, all rows are synced, with an execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

You can choose to add in a Transformation > String Replacement by inputting the following. Note that you can have more than one String Replacement.

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.

Define your Sync Behaviour.

Add in your Post Sync Scripts, if required.

Define your Permissions.

Click Jobs > Start a Job to begin your sync.
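To make the three Parse Content By strategies concrete, here is a sketch of each on a raw byte buffer. The record layout and field boundaries are invented for illustration; note how delimiter sequences are supplied as base64.

```python
import base64

# An invented record: a 5-byte name field padded with nulls, a department
# field terminated by ";", then the remainder of the stream.
buf = b"Ada\x00\x00Engineering;42"

# 1. Byte Length: the field is a fixed number of bytes (5 here).
name, rest = buf[:5], buf[5:]

# 2. Trailing Byte Sequence: the field ends at a delimiter supplied as
#    base64 in the config ("Ow==" decodes to b";"); the delimiter is consumed.
delim = base64.b64decode("Ow==")
dept, _, rest = rest.partition(delim)

# 3. Succeeding Byte Sequence works the same way, except the sequence marks
#    the start of the NEXT field, so it would be left in the stream.
print(name, dept, rest)  # b'Ada\x00\x00' b'Engineering' b'42'
```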
A delimited file is a sequential file with column delimiters. Each delimited file is a stream of records consisting of fields that are ordered by column. Each record contains fields for one row. Within each row, individual fields are separated by column delimiters.
Example Use Case: You have a delimited file that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Delimited File source supports batch syncs.
The Delimited File source does not support Geometry, Geography, or Binary data types.
You can review the parameters found on the Info tab in the table below (Image 1).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |

The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).

The following parameters will help to define your data sync source and how it functions.

Parameter | Description | Example |
---|---|---|
(Sync) Source | Mandatory. Select your source from the drop down menu. | Delimited File |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Delimiter | Mandatory. The delimiter character used to separate the text strings. Use U+#### syntax (e.g. U+0001) for unicode characters. | , |
Text Qualifier | Mandatory. The text qualifier character, which is used in the event that the delimiter is contained within the row cell. Typically, the text qualifier is a double quote. | " |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). If you use both useHeaderRecord="true" and HeaderRowsToIgnore = 1, two rows will be ignored. Refer to the following to ensure you are receiving the results you want: one row as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 0; two rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 1; three rows as headers: useHeaderRecord="true" and HeaderRowsToIgnore = 2. | 1 |
Encoding | Optional. The encoding of the file. This defaults to UTF8; UTF8_BOM, UTF16, and ASCII are also supported. | |
Use Header Record | Optional. Check this box to use the header record to match the schema. If set to true, fields not present in the record will default to null. | |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
Auth Type | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM role". When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured with at least read access to the source, and that the Connections pod's role has permission to assume the role specified in the data sync config. | |

The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |

There are other options available for the Schema section if you click on Show Advanced.

Parameter | Description | Example |
---|---|---|
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, all rows are synced, with an execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

You can choose to add in a Transformation > String Replacement by inputting the following. Note that you can have more than one String Replacement.

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.

Define your Sync Behaviour.

Add in your Post Sync Scripts, if required.

Define your Permissions.

Click Jobs > Start a Job to begin your sync.
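The Delimiter, Text Qualifier, and Header Rows to Ignore parameters correspond directly to how a delimited stream is tokenized. Here is a minimal sketch with Python's csv module on invented sample data; Cinchy's own reader also honours the Encoding parameter, which in this analogy would be the encoding argument to open().

```python
import csv
import io

# Sample content: one header row, a comma delimiter, and a value that
# contains the delimiter, so it is wrapped in the text qualifier (").
raw = 'Name,Title\n"Hopper, Grace",Rear Admiral\nLovelace,Countess\n'

reader = csv.reader(
    io.StringIO(raw),
    delimiter=",",   # the Delimiter parameter
    quotechar='"',   # the Text Qualifier parameter
)
rows = list(reader)
header, data = rows[0], rows[1:]   # Header Rows to Ignore = 1
print(data)  # [['Hopper, Grace', 'Rear Admiral'], ['Lovelace', 'Countess']]

# Use Header Record is analogous to csv.DictReader, which matches fields to
# header names; fields missing from a record come back as None (null).
for record in csv.DictReader(io.StringIO(raw)):
    print(record["Name"], record["Title"])
```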
Microsoft Excel is a widely used spreadsheet program for managing and analyzing numerical data. You can use Microsoft Excel as a source for your Data Syncs by following the instructions below.
Example Use Case: You have an Excel spreadsheet that contains your Employee information. You want to use a batch sync to pull this info into a Cinchy table and liberate your data.
The Excel source supports batch syncs.
The Excel source does not support Binary data types.
You can review the parameters found on the Info tab in the table below (Image 1).

Parameter | Description | Example |
---|---|---|
Title | Mandatory. Input a name for your data sync. | Employee Sync |
Version | Mandatory. This is a pre-populated field containing a version number for your data sync. You can override it if you wish. | 1.0.0 |
Parameters | Optional. Review our documentation on Parameters here for more information about this field. | @Filepath |

The following table outlines the mandatory and optional parameters you will find on the Source tab (Image 2).

The following parameters will help to define your data sync source and how it functions.

Parameter | Description | Example |
---|---|---|
(Sync) Source | Mandatory. Select your source from the drop down menu. | Excel |
Source | The location of the source file: either a Local upload, Amazon S3, or Azure Blob Storage. The following authentication methods are supported per source: Amazon S3: Access Key ID/Secret Access Key; Azure Blob Storage: Connection String. | Local |
Sheet Name | Mandatory. The name of the sheet that you want to sync. | Employee Info |
Header Rows to Ignore | Mandatory. The number of records from the top of the file to ignore before the data starts (includes column header). | 1 |
Footer Rows to Ignore | Mandatory. The number of records from the bottom of the file to ignore. | 0 |
Path | Mandatory. The path to the source file to load. To upload a local file, you must first insert a Parameter in the Info tab of the connection (ex: filepath). Then, you would reference that same value in this location (ex: @Filepath). This will then trigger a File Upload option to import your file. | @Filepath |
Auth Type | This field defines the authentication type for your data sync. Cinchy supports "Access Key" and "IAM role". When selecting "Access Key", you must provide the key and key secret. When selecting "IAM role", a new field will appear for you to paste in the role's Amazon Resource Name (ARN). You must also ensure that the role is configured with at least read access to the source, and that the Connections pod's role has permission to assume the role specified in the data sync config. | |

The Schema section is where you define which source columns you want to sync in your connection. You can repeat the values for multiple columns.

Parameter | Description | Example |
---|---|---|
Name | Mandatory. The name of your column as it appears in the source. | Name |
Alias | Optional. You may choose to use an alias on your column so that it has a different name in the data sync. | |
Data Type | Mandatory. The data type of the column values. | Text |
Description | Optional. You may choose to add a description to your column. | |

There are other options available for the Schema section if you click on Show Advanced.

Parameter | Description | Example |
---|---|---|
Mandatory | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Mandatory is checked, all rows are synced, with an execution log status of failed and the source error "Mandatory Rule Violation". If just Validated is checked, all rows are synced. | |
Validate Data | If both Mandatory and Validated are checked on a column, rows where the column is empty are rejected. If just Validated is checked, all rows are synced. | |
Trim Whitespace | Optional if data type = text. If your data type was chosen as "text", you can choose whether to trim the whitespace (that is, spaces and other non-printing characters). | |
Max Length | Optional if data type = text. You can input a numerical value in this field that represents the maximum length of the data that can be synced in your column. If the value is exceeded, the row will be rejected (you can find this error in the Execution Log). | |

You can choose to add in a Transformation > String Replacement by inputting the following. Note that you can have more than one String Replacement.

Parameter | Description | Example |
---|---|---|
Pattern | Mandatory if using a Transformation. The pattern for your string replacement, i.e. the string that will be searched and replaced. | |
Replacement | What you want to replace your pattern with. | |

You have the option to add a source filter to your data sync. Please review the documentation here for more information on source filters.

Configure your Destination.

Define your Sync Behaviour.

Add in your Post Sync Scripts, if required.

Define your Permissions.

Click Jobs > Start a Job to begin your sync.
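As a sketch of what the Sheet Name and Header/Footer Rows to Ignore parameters select, here is the equivalent read using the openpyxl library. The file name, sheet name, and values are hypothetical; this illustrates the parameters, not Cinchy's reader.

```python
from openpyxl import load_workbook

wb = load_workbook("employees.xlsx", read_only=True)  # hypothetical file
ws = wb["Employee Info"]   # the Sheet Name parameter

HEADER_ROWS_TO_IGNORE = 1
FOOTER_ROWS_TO_IGNORE = 0

# Skip the header rows, then materialize the remaining rows as tuples.
rows = list(ws.iter_rows(min_row=HEADER_ROWS_TO_IGNORE + 1, values_only=True))
if FOOTER_ROWS_TO_IGNORE:
    rows = rows[:-FOOTER_ROWS_TO_IGNORE]  # trim trailing footer rows
print(rows)
```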