Add credentials

  1. Create a new pipeline or open an existing pipeline.
  2. Expand the left side of your screen to view the file browser.
  3. Scroll down and click on a file named io_config.yaml.
  4. Enter the following keys and values under the key named default. (You can have multiple profiles; add the credentials under whichever profile is relevant to you.)
version: 0.1.1
default:
  SNOWFLAKE_USER: ...
  SNOWFLAKE_PASSWORD: ...
  SNOWFLAKE_ACCOUNT: ...
  SNOWFLAKE_DEFAULT_WH: ...
  SNOWFLAKE_DEFAULT_DB: ...
  SNOWFLAKE_DEFAULT_SCHEMA: ...
  SNOWFLAKE_PRIVATE_KEY_PASSPHRASE: ...
  SNOWFLAKE_PRIVATE_KEY_PATH: ...
  SNOWFLAKE_ROLE: ...
  SNOWFLAKE_TIMEOUT: ...
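
For illustration only, a filled-in default profile might look like the following. Every value here is a made-up placeholder, and the private key and timeout fields are omitted; password and key-pair authentication are typically alternatives, so fill in whichever set applies to your account:
version: 0.1.1
default:
  SNOWFLAKE_USER: my_user
  SNOWFLAKE_PASSWORD: my_password       # or set the private key fields instead
  SNOWFLAKE_ACCOUNT: abc12345.us-east-1
  SNOWFLAKE_DEFAULT_WH: COMPUTE_WH
  SNOWFLAKE_DEFAULT_DB: ANALYTICS_DB
  SNOWFLAKE_DEFAULT_SCHEMA: PUBLIC
  SNOWFLAKE_ROLE: TRANSFORMER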

Using SQL block

  1. Create a new pipeline or open an existing pipeline.
  2. Add a data loader, transformer, or data exporter block.
  3. Select SQL.
  4. Under the Data provider dropdown, select Snowflake.
  5. Under the Profile dropdown, select default (or the profile you added credentials underneath).
  6. In the Database input in the block header, enter the database name you want this block to save data to.
  7. In the Schema input in the block header, enter the schema name you want this block to save data to.
  8. Under the Write policy dropdown, select Replace or Append (see the SQL blocks guide for more information on write policies).
  9. Enter this test query: SELECT 1.
  10. Run the block.
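
Once SELECT 1 succeeds, the block's SELECT result is what gets saved to the database and schema you configured in steps 6 and 7, using the chosen write policy. A hypothetical sketch (the column names are made up):
-- The result set of this SELECT is written to the configured
-- database.schema using the block's write policy (Replace or Append).
SELECT
    1 AS id,
    CURRENT_TIMESTAMP() AS loaded_at;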

Methods for configuring database and schema

You only need to configure the database and schema in one of these 3 places, listed in order of priority (so if all 3 are included, #1 takes priority, then #2, then #3):
  1. Include the db and schema directly in the query (e.g. select * from [database_name].[schema_name].[table_name];), as sketched after this list. This is supported when NOT using the “raw sql” query option.
  2. Include the db and schema in the SQL code block header inputs, as mentioned in Steps 6 and 7 of the “Using SQL block” section above.
  3. Include the default db and schema in the io_config.yaml file using these fields:
SNOWFLAKE_DEFAULT_DB: YOUR_DB_NAME
SNOWFLAKE_DEFAULT_SCHEMA: YOUR_SCHEMA_NAME
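
For example, method #1 might look like this in a SQL block (the database, schema, and table names below are placeholders):
-- Method 1: qualify the table directly in the query.
SELECT *
FROM analytics_db.public.users
LIMIT 10;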

Using Python block

  1. Create a new pipeline or open an existing pipeline.
  2. Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
  3. Select Generic (no template).
  4. Enter this code snippet (note: change the config_profile from default if you have a different profile):
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from os import path
from pandas import DataFrame

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_snowflake(**kwargs) -> DataFrame:
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        return loader.load(query)
  5. Run the block.
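
loader.load(query) returns a pandas DataFrame, which Mage passes to any downstream blocks, so once the test query works you can replace SELECT 1 with a real query against your tables.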

Export a dataframe

Here is an example code snippet to export a dataframe to Snowflake:
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to a Snowflake warehouse.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='append',  # Specify resolution policy if table name already exists
        )
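
The if_exists argument accepts 'fail', 'replace', or 'append' (the default); the full set of export arguments is listed in the Method arguments table below.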

Custom types

To overwrite a column type when running a Python export block, specify the column name and type in the overwrite_types dict in the data exporter config:
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to a Snowflake warehouse.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'
    overwrite_types = {'column_name': 'VARCHAR(255)'}

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='replace',
            overwrite_types=overwrite_types,
        )
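
Because this example uses if_exists='replace', the table is recreated on export, so column_name should be created as VARCHAR(255) rather than the type Mage would otherwise infer from the dataframe.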

Method arguments

| Field name | Description | Example values |
| --- | --- | --- |
| if_exists | Specify the resolution policy if the table name already exists | 'fail' / 'replace' / 'append' (default: 'append') |
| overwrite_types | Overwrite the column types | {'column1': 'VARCHAR', 'column2': 'NUMBER'} |
| unique_conflict_method | How to handle conflicts on unique constraints. Use 'UPDATE' for UPSERT (update existing rows, insert new ones) or 'IGNORE' to skip duplicates. | 'UPDATE' or 'IGNORE' (default: None) |
| unique_constraints | The unique constraints of the table. Used with unique_conflict_method for UPSERT operations. | ['col1', 'col2'] (default: None) |

Example: Using UPSERT (UPDATE or INSERT)
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to Snowflake with UPSERT support.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='append',  # Use 'append' with UPSERT
            unique_constraints=['user_id', 'email'],  # Columns that form unique constraint
            unique_conflict_method='UPDATE',  # Update existing rows, insert new ones
        )
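
With this configuration, exported rows whose user_id and email match an existing row update that row in place, while all other rows are inserted; setting unique_conflict_method='IGNORE' would skip the conflicting rows instead.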