Add credentials
1. Create a new pipeline or open an existing pipeline.
2. Expand the left side of your screen to view the file browser.
3. Scroll down and click on a file named `io_config.yaml`.
4. Enter the following keys and values under the key named `default` (you can have multiple profiles; add them under whichever profile is relevant to you):
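The Snowflake credential fields in `io_config.yaml` look roughly like the following sketch (key names are based on Mage's Snowflake IO config; replace the placeholder values with your own):

```yaml
default:
  SNOWFLAKE_USER: your_username
  SNOWFLAKE_PASSWORD: your_password
  SNOWFLAKE_ACCOUNT: your_account_identifier
  SNOWFLAKE_DEFAULT_WH: your_warehouse
  SNOWFLAKE_DEFAULT_DB: your_database
  SNOWFLAKE_DEFAULT_SCHEMA: your_schema
  SNOWFLAKE_ROLE: your_role
```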
Using SQL block
1. Create a new pipeline or open an existing pipeline.
2. Add a data loader, transformer, or data exporter block.
3. Select `SQL`.
4. Under the `Data provider` dropdown, select `Snowflake`.
5. Under the `Profile` dropdown, select `default` (or the profile you added credentials under).
6. In the `Database` input in the block header, enter the database name you want this block to save data to.
7. In the `Schema` input in the block header, enter the schema name you want this block to save data to.
8. Under the `Write policy` dropdown, select `Replace` or `Append` (see the SQL blocks guide for more information on write policies).
9. Enter this test query: `SELECT 1`.
10. Run the block.
Methods for configuring database and schema
You only need to include the database and schema config in one of these three places, listed in order of priority (if all three are included, #1 takes priority, then #2, then #3):

1. Include the database and schema directly in the query (e.g. `SELECT * FROM [database_name].[schema_name].[table_name];`). This is supported when NOT using the "raw sql" query option.
2. Include the database and schema in the SQL block header inputs, as mentioned in Steps 6 and 7 of the "Using SQL block" section above.
3. Include the default database and schema in the `io_config.yaml` file using these fields:
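Assuming Mage's standard Snowflake config keys, the relevant `io_config.yaml` fields would be:

```yaml
default:
  SNOWFLAKE_DEFAULT_DB: your_database
  SNOWFLAKE_DEFAULT_SCHEMA: your_schema
```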
Using Python block
1. Create a new pipeline or open an existing pipeline.
2. Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
3. Select `Generic (no template)`.
4. Enter this code snippet (note: change the `config_profile` from `default` if you have a different profile):
5. Run the block.
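The code snippet referenced in step 4 follows Mage's generic Snowflake data loader pattern; a sketch (the imports and the `@data_loader` decorator match Mage's loader template as best I can tell, and the decorator is injected by Mage at runtime):

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake


@data_loader
def load_data_from_snowflake(*args, **kwargs):
    # The test query from the steps above; replace with your own SQL.
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'  # change this if you use a different profile

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        return loader.load(query)
```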
Export a dataframe
Here is an example code snippet to export a dataframe to Snowflake:Custom types
To overwrite a column type when running a python export block, simply specify the column name and type in theoverwrite_types dict in data exporter config:
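A sketch of a data exporter block following the same Mage `Snowflake` IO pattern (the table, database, and schema names are placeholders, and the `@data_exporter` decorator is injected by Mage; `overwrite_types` shows the custom-types option described above):

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    table_name = 'your_table_name'    # placeholder
    database = 'your_database_name'   # placeholder
    schema = 'your_schema_name'       # placeholder
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database,
            schema,
            if_exists='replace',
            # Force these column types instead of letting them be inferred.
            overwrite_types={'column1': 'VARCHAR', 'column2': 'NUMBER'},
        )
```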
Method arguments
| Field name | Description | Example values |
|---|---|---|
| if_exists | Specify the resolution policy if the table already exists | `'fail'` / `'replace'` / `'append'` (default: `'append'`) |
| overwrite_types | Overwrite the column types | `{'column1': 'VARCHAR', 'column2': 'NUMBER'}` |
| unique_conflict_method | How to handle conflicts on unique constraints. Use `'UPDATE'` for UPSERT (update existing rows, insert new ones) or `'IGNORE'` to skip duplicates. | `'UPDATE'` or `'IGNORE'` (default: `None`) |
| unique_constraints | The unique constraints of the table. Used with `unique_conflict_method` for UPSERT operations. | `['col1', 'col2']` (default: `None`) |
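For example, combining the last two arguments gives UPSERT behavior; a sketch, assuming the same `loader.export` call shape as in the exporter block above:

```python
loader.export(
    df,
    table_name,
    database,
    schema,
    if_exists='append',
    unique_constraints=['col1'],      # rows matching on col1 are conflicts
    unique_conflict_method='UPDATE',  # update existing rows, insert new ones
)
```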