Defining Snowflake Tables
The following properties are applicable to a Snowflake Table object.
Tab | Property | Description | Additional Information
---|---|---|---
| Physical Name | Specifies the physical name of the table | |
| Database | Specifies the database of the table | |
| Schema | Specifies the schema under which the table is created | |
| Use Replace Syntax | Specifies whether the CREATE OR REPLACE syntax is used during DDL generation | Selecting this option replaces the existing object with the newly generated DDL during forward engineering |
| If Not Exists | Specifies that if the table already exists, no exception is thrown and no action happens | |
| Table Type | Specifies the table type | External Table: Indicates that the table's data is stored in an external data source<br>Dynamic Table: Indicates that the table is dynamic<br>Dynamic Iceberg Table: Specifies a table type that integrates with Apache Iceberg and supports dynamic, real-time data ingestion and querying<br>Iceberg Table: Specifies a table format based on the Apache Iceberg open table standard, optimized for large-scale analytic workloads<br>Hybrid Table: Indicates a Snowflake-native table type that supports both OLTP (transactional) and OLAP (analytical) operations. This enables the New Non-Unique Index option in the index editor. You can define unique and non-unique indexes for hybrid tables.<br>Hybrid Table As Select: Specifies a table type that creates a hybrid table using a SQL SELECT statement |
| Physical Only | Specifies whether the table appears only in the physical model | |
| Generate | Specifies whether a DDL statement is generated for the table during forward engineering | |
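For illustration, the Use Replace Syntax and If Not Exists options change the statement that forward engineering emits. A minimal sketch, assuming a hypothetical `orders` table:

```sql
-- With Use Replace Syntax: the existing object is replaced by the new DDL
CREATE OR REPLACE TABLE orders (
  order_id NUMBER(38,0),
  amount   NUMBER(12,2)
);

-- With If Not Exists: no error and no action if the table already exists
CREATE TABLE IF NOT EXISTS orders (
  order_id NUMBER(38,0),
  amount   NUMBER(12,2)
);
```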
General | Name | Specifies the complete name of the table | |
| Physical Name | Specifies the physical entity name of the table | |
| Harden Strategy | Specifies the name of the hardening strategy for a table | Inherit: Indicates that the existing naming conventions established by your organization are adopted<br>Override: Indicates that the existing naming conventions are overwritten with new ones<br>Harden: Indicates that the naming conventions are hardened across the organization |
| Table Type | Specifies the table type | Permanent: Specifies a standard table that persists until explicitly dropped<br>Temporary: Specifies a session-specific table that is dropped at the end of the session<br>Local Temporary: Indicates a temporary table visible only within the current session and not shared across sessions<br>Global Temporary: Specifies a temporary table definition that is shared, but whose data is session-specific<br>Volatile: Indicates a table that exists only for the duration of the session and is not recoverable<br>Transient: Specifies a table that persists beyond the session but does not support Fail-safe recovery |
| Data Retention Time (In Days) | Specifies the number of days for which historical data can be accessed using SELECT, CLONE, or UNDROP | |
| DEFAULT_DDL_COLLATION | Specifies the default collation for all columns in the table | |
| Copy Grants | Specifies whether the access privileges of the original table are retained when a new table is created | |
| Like Table | Specifies the table from which the new table inherits its column definitions | |
| Clone Table | Specifies whether to create a new table with the same structure and data as the source table, using zero-copy cloning (the data is not physically duplicated) | |
| At or Before | Specifies whether Time Travel is used to clone the table at or before a specific point | |
| Time Type | Specifies the time reference type used for Time Travel cloning | |
| Point | Specifies the point at or before which the table must be cloned | |
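The Clone Table, At or Before, Time Type, and Point properties together correspond to Snowflake's Time Travel clone syntax. A sketch with hypothetical table names and an illustrative statement ID:

```sql
-- Zero-copy clone of a table as it existed at a point in time
CREATE TABLE orders_restored CLONE orders
  AT (TIMESTAMP => '2024-06-01 00:00:00'::TIMESTAMP_LTZ);

-- Clone the table as it was before a specific statement ran
-- (the query ID below is a placeholder)
CREATE TABLE orders_before CLONE orders
  BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');
```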
Additional Options | Cluster By | Specifies one or more columns or column expressions in the table as the clustering key | |
| Max Data Extension Time In Days | Specifies the maximum number of days for which the data retention period of the table can be extended | |
| Row Access Policy | Specifies the row-level access policy applied to the table | |
| Row Access Policy Columns | Specifies the columns associated with the row access policy | |
| Columns | Specifies the columns used to apply row-level security rules based on user roles or attributes | |
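The Cluster By and Row Access Policy properties map to the following DDL. A sketch, assuming a hypothetical `events` table and a pre-existing policy named `region_policy`:

```sql
-- Clustering key on a column and a column expression
CREATE TABLE events (
  event_ts TIMESTAMP_NTZ,
  region   VARCHAR
)
CLUSTER BY (region, TO_DATE(event_ts));

-- Attaching a row access policy to a column of the table
ALTER TABLE events
  ADD ROW ACCESS POLICY region_policy ON (region);
```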
Stage Format Options | Use Existing File Format | Specifies whether an existing file format is used for loading or unloading data | |
| Stage File Format Type | Specifies the type of files to load into or unload from the table | CSV<br>JSON<br>AVRO<br>ORC<br>PARQUET<br>XML<br>Depending on the format that you select, other corresponding properties become available |
Stage Copy Options | Action On Error (while Copy) | Specifies the action to perform when an error occurs during data loading | Continue: Specifies that the copy operation proceeds even if errors occur<br>Skip File: Indicates that the file is skipped entirely if any error is encountered<br>Skip File with Statement Error: Indicates that the file is skipped and an error is raised for the statement<br>Abort Statement: Specifies that the entire copy operation stops immediately upon encountering an error |
| Copy Size Limit | Specifies the maximum size (in bytes) of data to load for a COPY statement | |
| Purge After Copy | Specifies whether data files are purged automatically after data is successfully loaded | |
| Return Failed Only | Specifies whether only the files that failed to load are returned in the statement result | |
| Enforce Length | Specifies whether text strings that exceed the target column length are rejected instead of truncated | |
| Match By Column Name | Specifies whether semi-structured data is loaded into columns of the target table that match corresponding columns represented in the data | Case Sensitive: Specifies that column names in the source file must exactly match the case of the target table columns<br>Case Insensitive: Specifies that column names are matched regardless of letter casing<br>None: Indicates that columns are matched by position rather than by name |
| Truncate Columns | Specifies whether text strings that exceed the target column length are truncated | |
| Force Load (Reload) All Files | Specifies whether all files are loaded, irrespective of whether they have been loaded previously and have not changed since | |
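The stage format and copy options above correspond to file format and copy options on Snowflake's COPY INTO statement. A sketch, assuming a hypothetical stage `@my_stage` and target table `orders`:

```sql
-- Illustrative load combining several of the options above
COPY INTO orders
FROM @my_stage/orders/
FILE_FORMAT = (TYPE = PARQUET)        -- Stage File Format Type
ON_ERROR = 'SKIP_FILE'                -- Action On Error (while Copy)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
PURGE = TRUE                          -- Purge After Copy
FORCE = FALSE;                        -- Force Load (Reload) All Files
```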
External Table Options | External Stage Type | Specifies the external stage type | Applicable only when Table Type is set to External Table<br>Amazon AWS: Specifies that the external data source is located in an Amazon S3 bucket<br>Microsoft Azure: Specifies that the external data source is located in an Azure Blob Storage container<br>Google GCP: Specifies that the external data source is located in a Google Cloud Storage bucket |
| External Stage Location | Specifies the external stage where the files containing the data to be read are staged | |
| Integration | Specifies the integration service used to connect to Microsoft Azure for external table access | |
| Refresh On Create | Specifies whether the external table metadata is automatically refreshed after the external table is created | |
| Auto Refresh | Specifies whether the external table metadata is automatically refreshed when new or updated data files are available in the specified external stage | |
| Pattern | Specifies a regular expression pattern string, enclosed in single quotes, that matches the file names and/or paths on the external stage | |
| Table Format | Specifies the format of the external table | |
| Partition By | Specifies any partition columns to evaluate for the external table | |
| Column | Specifies the name of the partition column | |
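The external table options above can be sketched as DDL. Stage, table, and column names below are hypothetical:

```sql
-- Illustrative external table over Parquet files in an external stage
CREATE EXTERNAL TABLE ext_orders (
  -- Partition column derived from the staged file path
  order_date DATE AS TO_DATE(SPLIT_PART(METADATA$FILENAME, '/', 2))
)
PARTITION BY (order_date)             -- Partition By / Column
LOCATION = @my_ext_stage/orders/      -- External Stage Location
AUTO_REFRESH = TRUE                   -- Auto Refresh
PATTERN = '.*[.]parquet'              -- Pattern
FILE_FORMAT = (TYPE = PARQUET);
```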
Dynamic Table/Dynamic Iceberg Table Properties | Target Lag | Specifies the maximum amount of time the dynamic table's content can lag behind updates to the source tables | |
| Warehouse | Specifies the name of the warehouse used for refreshing dynamic tables | |
| Refresh Mode | Specifies the refresh mode for dynamic tables | Auto: Indicates that the system attempts an incremental refresh by default. This is the default mode.<br>Full: Indicates that the system applies a complete refresh of the dynamic tables<br>Incremental: Indicates that the system applies an incremental refresh of the dynamic tables |
| Initialize | Specifies the behavior of the initial refresh of dynamic tables | On Create: Indicates that the dynamic tables are refreshed at creation<br>On Schedule: Indicates that the dynamic tables are refreshed at the next scheduled refresh |
| Require User | Specifies whether user identity is required for accessing or modifying tables | |
| Query | Specifies the query for dynamic tables | |
| External Volume | Specifies the external storage location used for managing dynamic table data | Available only when Table Type is set to Dynamic Iceberg Table |
| Catalog | Specifies the metadata service used to register and manage the table schema and structure | |
| Base Location | Specifies the root path within the external volume where the dynamic or Iceberg table data is stored | |
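The dynamic table properties above combine into a single CREATE statement. A sketch with hypothetical table, warehouse, and query names:

```sql
-- Illustrative dynamic table with target lag, warehouse, and refresh mode
CREATE DYNAMIC TABLE daily_totals
  TARGET_LAG = '20 minutes'      -- Target Lag
  WAREHOUSE = transform_wh       -- Warehouse
  REFRESH_MODE = AUTO            -- Refresh Mode
  INITIALIZE = ON_CREATE         -- Initialize
AS
  SELECT order_date, SUM(amount) AS total   -- Query
  FROM orders
  GROUP BY order_date;
```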
Iceberg Table Options (Applicable only when Table Type is set to Iceberg Table) | Iceberg Catalog | Specifies the catalog used to manage Iceberg tables | Ensure that you select the appropriate Iceberg Catalog type. If no catalog is selected, erwin DM displays all catalog-specific fields. In such cases, configure only the relevant properties to avoid errors during DDL generation.<br>SNOWFLAKE: Indicates that the Iceberg table is fully managed within Snowflake's native catalog<br>AWS Glue: Indicates that the Iceberg table metadata is managed using the AWS Glue Data Catalog<br>Iceberg Files: Indicates that the Iceberg table metadata is accessed directly from file-based storage<br>Delta: Indicates compatibility with the Delta Lake format, allowing Delta tables to be queried as Iceberg tables<br>Iceberg REST: Indicates that the Iceberg table is accessed via a REST-based Iceberg catalog service |
| External Volume | Specifies the external storage volume used for managing table data | |
| Catalog | Specifies the name of the catalog service used to register and manage the table schema | |
| Base Location | Specifies the root path within the external volume where Iceberg table data is stored | Available only when Iceberg Catalog is set to SNOWFLAKE or Delta |
| Catalog Sync | Specifies whether synchronization with the catalog is enabled | Available only when Iceberg Catalog is set to SNOWFLAKE |
| Storage Serialization Policy | Specifies the policy used for serializing storage data | Compatible: Indicates that storage is serialized so that third-party engines can still read the table's files<br>Optimized: Indicates that storage is serialized for the best query performance in Snowflake<br>Available only when Iceberg Catalog is set to SNOWFLAKE |
| Change Tracking | Specifies whether change tracking is enabled for the table | Available only when Iceberg Catalog is set to SNOWFLAKE |
| Aggregation Policy | Specifies the policy applied for data aggregation | Available only when Iceberg Catalog is set to SNOWFLAKE |
| Catalog | Specifies the name of the catalog | |
| Catalog Table Name | Specifies the name of the table as registered in the external catalog | Available only when Iceberg Catalog is set to AWS Glue or Iceberg REST |
| Catalog Namespace | Specifies the namespace within the catalog where the table is organized | Available only when Iceberg Catalog is set to AWS Glue or Iceberg REST |
| Replace Invalid Characters | Specifies whether invalid characters in names or paths are automatically replaced | Available only when Iceberg Catalog is set to AWS Glue, Iceberg Files, Delta, or Iceberg REST |
| Auto Refresh | Specifies whether automatic metadata refresh is enabled for the catalog table | Available only when Iceberg Catalog is set to AWS Glue, Delta, or Iceberg REST |
| Metadata File Path | Specifies the file path to the metadata files associated with the catalog table | Available only when Iceberg Catalog is set to Iceberg Files |
| Contact | Specifies the contact details associated with the table or catalog entry | |
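For a Snowflake-managed Iceberg catalog, the options above produce DDL along these lines. The external volume and table names are hypothetical:

```sql
-- Illustrative Snowflake-managed Iceberg table
CREATE ICEBERG TABLE iceberg_orders (
  order_id NUMBER(38,0),
  amount   NUMBER(12,2)
)
CATALOG = 'SNOWFLAKE'                        -- Iceberg Catalog
EXTERNAL_VOLUME = 'my_ext_volume'            -- External Volume
BASE_LOCATION = 'orders/'                    -- Base Location
STORAGE_SERIALIZATION_POLICY = COMPATIBLE    -- Storage Serialization Policy
CHANGE_TRACKING = TRUE;                      -- Change Tracking
```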
|
Hybrid Table As Select | Select Statement | Specifies the SQL SELECT statement used to define the structure and data of the hybrid table | |
Tags List | Name | Specifies the name of the tag | |
| Value | Specifies the value of the assigned tag | |
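A hybrid table created from a SELECT statement, and a tag assignment, can be sketched as follows. Table, column, and tag names are hypothetical, and note that hybrid tables require a primary key:

```sql
-- Illustrative hybrid table defined by a SELECT statement
CREATE HYBRID TABLE hot_orders (
  order_id NUMBER(38,0) PRIMARY KEY,
  amount   NUMBER(12,2)
)
AS SELECT order_id, amount FROM orders WHERE status = 'OPEN';

-- Assigning a tag name/value pair from the Tags List
ALTER TABLE hot_orders SET TAG cost_center = 'sales';
```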
Copyright © 2025 Quest Software, Inc. |