Add credentials
1. Create a new pipeline or open an existing pipeline.
2. Expand the left side of your screen to view the file browser.
3. Scroll down and click on a file named `io_config.yaml`.
4. Enter the following keys and values under the key named `default` (you can have multiple profiles; add them under whichever profile is relevant to you):
```yaml
version: 0.1.1
default:
  SNOWFLAKE_USER: ...
  SNOWFLAKE_PASSWORD: ...
  SNOWFLAKE_ACCOUNT: ...
  SNOWFLAKE_DEFAULT_WH: ...
  SNOWFLAKE_DEFAULT_DB: ...
  SNOWFLAKE_DEFAULT_SCHEMA: ...
  SNOWFLAKE_PRIVATE_KEY_PASSPHRASE: ...
  SNOWFLAKE_PRIVATE_KEY_PATH: ...
  SNOWFLAKE_ROLE: ...
  SNOWFLAKE_TIMEOUT: ...
```
Using SQL block
1. Create a new pipeline or open an existing pipeline.
2. Add a data loader, transformer, or data exporter block.
3. Select `SQL`.
4. Under the `Data provider` dropdown, select `Snowflake`.
5. Under the `Profile` dropdown, select `default` (or the profile you added credentials under).
6. In the `Database` input in the block header, enter the name of the database you want this block to save data to.
7. In the `Schema` input in the block header, enter the name of the schema you want this block to save data to.
8. Under the `Write policy` dropdown, select `Replace` or `Append` (see the SQL blocks guide for more information on write policies).
9. Enter this test query: `SELECT 1`.
10. Run the block.
Methods for configuring database and schema
You only need to include the database and schema config in one of these three places, listed in order of priority (if all three are included, #1 takes priority, then #2, then #3):

1. Include the database and schema directly in the query (e.g. `SELECT * FROM [database_name].[schema_name].[table_name];`). This is supported when NOT using the "raw SQL" query option.
2. Include the database and schema in the SQL block header inputs, as mentioned in Steps 6 and 7 of the "Using SQL block" section above.
3. Include the default database and schema in the `io_config.yaml` file using these fields:

   ```yaml
   SNOWFLAKE_DEFAULT_DB: YOUR_DB_NAME
   SNOWFLAKE_DEFAULT_SCHEMA: YOUR_SCHEMA_NAME
   ```
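The priority order above can be sketched in plain Python. This is illustrative only; `resolve_target` is a hypothetical helper, not Mage code.

```python
# Illustrative sketch of the priority order described above:
# 1. db/schema named in the query, 2. block header inputs, 3. io_config.yaml.
def resolve_target(query_override=None, block_header=None, io_config_default=None):
    """Return the first (database, schema) pair that is set, in priority order."""
    for source in (query_override, block_header, io_config_default):
        if source is not None:
            return source
    raise ValueError('No database/schema configured in any of the three places')


# The block header wins here because the query does not name a database/schema.
print(resolve_target(
    query_override=None,
    block_header=('ANALYTICS', 'PUBLIC'),
    io_config_default=('DEFAULT_DB', 'DEFAULT_SCHEMA'),
))  # → ('ANALYTICS', 'PUBLIC')
```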
Using Python block
1. Create a new pipeline or open an existing pipeline.
2. Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
3. Select `Generic (no template)`.
4. Enter this code snippet (note: change the `config_profile` from `default` if you have a different profile):

   ```python
   from mage_ai.settings.repo import get_repo_path
   from mage_ai.io.config import ConfigFileLoader
   from mage_ai.io.snowflake import Snowflake
   from os import path
   from pandas import DataFrame

   if 'data_loader' not in globals():
       from mage_ai.data_preparation.decorators import data_loader


   @data_loader
   def load_data_from_snowflake(**kwargs) -> DataFrame:
       query = 'SELECT 1'
       config_path = path.join(get_repo_path(), 'io_config.yaml')
       config_profile = 'default'

       with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
           return loader.load(query)
   ```

5. Run the block.
Export a dataframe
Here is an example code snippet to export a dataframe to Snowflake:
```python
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to a Snowflake warehouse.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='append',  # Specify resolution policy if table name already exists
        )
```
Custom types
To overwrite a column type when running a Python export block, specify the column name and type in the `overwrite_types` dict in the data exporter config:
```python
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to a Snowflake warehouse.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'
    overwrite_types = {'column_name': 'VARCHAR(255)'}

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='replace',
            overwrite_types=overwrite_types,
        )
```
Method arguments
| Field name | Description | Example values |
| --- | --- | --- |
| `if_exists` | Specify the resolution policy if the table name already exists. | `'fail'` / `'replace'` / `'append'` (default: `'append'`) |
| `overwrite_types` | Overwrite the column types. | `{'column1': 'VARCHAR', 'column2': 'NUMBER'}` |
| `unique_conflict_method` | How to handle conflicts on unique constraints. Use `'UPDATE'` for UPSERT (update existing rows, insert new ones) or `'IGNORE'` to skip duplicates. | `'UPDATE'` or `'IGNORE'` (default: `None`) |
| `unique_constraints` | The unique constraints of the table. Used with `unique_conflict_method` for UPSERT operations. | `['col1', 'col2']` (default: `None`) |
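To make the `if_exists` policies in the table concrete, here is a small pure-Python emulation of the three behaviors on plain lists. This is illustrative only; Snowflake and Mage enforce these policies on real tables, not Python lists.

```python
# Illustrative emulation (not Mage code) of the if_exists policies above.
def write(table_rows, new_rows, if_exists='append'):
    if table_rows:  # the table already "exists" (has rows)
        if if_exists == 'fail':
            raise ValueError('table already exists')
        if if_exists == 'replace':
            return list(new_rows)  # drop existing rows, keep only the new ones
    return list(table_rows) + list(new_rows)  # append (the default)


print(write([1], [2], 'append'))   # → [1, 2]
print(write([1], [2], 'replace'))  # → [2]
```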
Example: Using UPSERT (UPDATE or INSERT)
```python
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_snowflake(df: DataFrame, **kwargs) -> None:
    """
    Template for exporting data to Snowflake with UPSERT support.
    Specify your configuration settings in 'io_config.yaml'.

    Docs: https://docs.mage.ai/design/data-loading#snowflake
    """
    table_name = 'your_table_name'
    database = 'your_database'
    schema = 'your_schema'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with Snowflake.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            database=database,
            schema=schema,
            if_exists='append',  # Use 'append' with UPSERT
            unique_constraints=['user_id', 'email'],  # Columns that form the unique constraint
            unique_conflict_method='UPDATE',  # Update existing rows, insert new ones
        )
```
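The UPSERT behavior that `unique_constraints` plus `unique_conflict_method` produce can be emulated in plain Python. The sketch below is not Mage internals; it only mirrors the semantics described above.

```python
# Sketch (not Mage internals) of UPSERT semantics: rows that collide on the
# unique-constraint columns are updated with method='UPDATE' or skipped with
# method='IGNORE'; non-colliding rows are inserted.
def upsert(existing, incoming, key_cols, method='UPDATE'):
    index = {tuple(row[c] for c in key_cols): i for i, row in enumerate(existing)}
    result = list(existing)
    for row in incoming:
        key = tuple(row[c] for c in key_cols)
        if key in index:
            if method == 'UPDATE':
                result[index[key]] = row  # update the existing row
            # method == 'IGNORE': skip the duplicate
        else:
            index[key] = len(result)
            result.append(row)  # insert the new row
    return result


existing = [{'user_id': 1, 'email': 'a@x.com', 'plan': 'free'}]
incoming = [
    {'user_id': 1, 'email': 'a@x.com', 'plan': 'pro'},   # conflict → updated
    {'user_id': 2, 'email': 'b@x.com', 'plan': 'free'},  # new → inserted
]
rows = upsert(existing, incoming, ['user_id', 'email'])
print(rows[0]['plan'], len(rows))  # → pro 2
```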