- AWS Redshift
- ClickHouse
- Databricks SQL (Pro only)
- DuckDB
- MySQL
- MSSQL
- MongoDB
- OracleDB
- PostgreSQL
- SQLite
- Snowflake
- Spark
- Trino
- Google BigQuery
- Microsoft Fabric Warehouse (Pro only)
- File-Loading Clients - import/export data between files and your pipeline. Includes both local filesystem storage and external filesystem storage (such as AWS S3).
- Database Clients - import data from an external database into your pipeline and export data frames back into that database. These clients include the execute method, which executes your queries in the connected database. The Google BigQuery client follows this structure.
  - A subcategory of database clients are connection-based clients, which wrap a connection to the database. This connection is used to execute transactions (sets of queries) on the database, which are either committed (saved to the database) or rolled back (discarded). Clients for PostgreSQL, Redshift, and Snowflake follow this structure.
Example: Loading data from a file
While traditional Pandas IO procedures can be used to load files into your pipeline, Mage provides the mage_ai.io.file.FileIO client as a convenience wrapper.
The following code uses the load function of the FileIO client to load the Titanic survival dataset from a CSV file into a Pandas DataFrame for use in your pipeline. All data loaders can be initialized with the verbose=True parameter, which prints the current action the data loading client is performing; this parameter defaults to False.
The export function is then used to save the data frame back to a file, this time in JSON format.
Additional keyword arguments can be passed to load and export. These arguments are forwarded to the Pandas IO procedures for the requested file format, enabling fine-grained control over how your data is loaded and exported.
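A minimal sketch of these calls is shown below; the file paths are placeholders rather than paths from the Mage docs.

```python
from mage_ai.io.file import FileIO

loader = FileIO(verbose=True)

# Load the Titanic survival dataset from a local CSV file (placeholder path).
df = loader.load('titanic_survival.csv')

# Save the data frame back to disk as JSON; extra keyword arguments
# are forwarded to pandas.DataFrame.to_json.
loader.export(df, 'titanic_survival.json', orient='records')
```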
As the data loader was constructed with the verbose parameter set to True, the above operations print output describing the actions of the data loader.
Example: Loading data from Snowflake warehouse
Loading data from a Snowflake data warehouse is made easy using the mage_ai.io.snowflake.Snowflake data loading client. To authenticate access to a Snowflake warehouse, the client requires the associated Snowflake credentials:
- Account Username
- Account Password
- Account ID (including your region) (Ex: example.us-west-2)
If you used mage init to create your project repository, you can store these values in your io_config.yaml file and use mage_ai.io.config.ConfigFileLoader to construct the data loader client.
For detailed information about setting up io_config.yaml, see the IO Config Setup documentation.
An example io_config.yaml file in this instance would contain these Snowflake credentials under your chosen profile.
This example uses three methods of the data loading client:
- execute() - executes an arbitrary query on your data warehouse. In this case, the warehouse, database, and schema to use are selected.
- load() - loads the results of a SELECT query into a Pandas DataFrame.
- export() - stores the data frame as a table in your data warehouse. If the table exists, then the data is appended by default (other behaviors can be configured; see the Snowflake API). If the table doesn't exist, then the table is created with the given schema name and table name (the table data types are inferred from the Python data types).
The loader object manages a direct connection to your Snowflake data warehouse, so it is important to make sure that your connection is closed once your operations are completed. You can manually use loader.open() and loader.close() to open and close the connection to your data warehouse, or automatically manage the connection with a context manager.
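A sketch of this workflow, assuming io_config.yaml contains a 'default' profile with the Snowflake credentials above; the warehouse, database, schema, and table names are hypothetical.

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.snowflake import Snowflake

config = ConfigFileLoader('io_config.yaml', 'default')
loader = Snowflake.with_config(config, verbose=True)

# The context manager opens the connection on entry and closes it on exit.
with loader:
    loader.execute('USE WAREHOUSE my_warehouse;')
    df = loader.load('SELECT * FROM my_database.my_schema.my_table;')
    loader.export(df, 'MY_TABLE_COPY', 'MY_DATABASE', 'MY_SCHEMA')
```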
To learn more about loading data from Snowflake, see the Snowflake API for more details on member functions and usage.
Client APIs
This section covers the API for using the following data loading clients.
FileIO
mage_ai.io.file.FileIO
Handles data transfer between the filesystem and the Mage app. The FileIO client currently supports importing and exporting with the following file formats:
- CSV
- JSON
- Parquet
- HDF5
- XML
- Excel
Constructor
__init__(self, verbose: bool)
Initializes the FileIO data loading client.
- Args
verbose (bool): Enables verbose output printing. Defaults to False.
Methods
export - export(df: DataFrame, filepath: os.PathLike, format: FileFormat | str = None, **kwargs) -> None
Exports the input data frame to the file specified.
If the format is HDF5, the default key under which the data frame is stored is the stem of the filename. For example, if the file to write the data frame to is 'storage/my_data_frame.hdf5', the key would be 'my_data_frame'. This can be overridden using the key keyword argument.
- Args:
  - df (DataFrame): Data frame to export.
  - filepath (os.PathLike): Filepath to export the data frame to.
  - format (FileFormat | str, optional): Format of the file to export the data frame to. Defaults to None, in which case the format is inferred.
  - **kwargs: Additional keyword arguments to pass to the file writer. Writers by file format:

| Format | Pandas Writer |
| --- | --- |
| CSV | DataFrame.to_csv |
| JSON | DataFrame.to_json |
| Parquet | DataFrame.to_parquet |
| HDF5 | DataFrame.to_hdf |
| XML | DataFrame.to_xml |
| Excel | DataFrame.to_excel |
load(filepath: os.PathLike, format: FileFormat | str = None, limit: int = QUERY_ROW_LIMIT, **kwargs) -> DataFrame
Loads the data frame from the filepath specified. This function will load at maximum 10,000,000 rows of data from the specified file (this limit is configurable).
- Args:
  - filepath (os.PathLike): Filepath to load the data frame from.
  - format (FileFormat | str, optional): Format of the file to load the data frame from. Defaults to None, in which case the format is inferred.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional keyword arguments to pass to the file reader. Readers by file format:

| Format | Pandas Reader |
| --- | --- |
| CSV | read_csv |
| JSON | read_json |
| Parquet | read_parquet |
| HDF5 | read_hdf |
| XML | read_xml |
| Excel | read_excel |

- Returns: (DataFrame) Data frame object loaded from data in the specified file.
S3
mage_ai.io.s3.S3
Handles data transfer between an S3 bucket and the Mage app. The S3 client supports importing and exporting with the following file formats:
- CSV
- JSON
- Parquet
- HDF5
- XML
- Excel
If an IAM profile is not set up using aws configure, then AWS credentials for accessing the S3 bucket can be manually specified through either Mage's configuration loader system or through keyword arguments in the constructor (see constructor).
If using configuration settings, specify the AWS access key ID, secret access key, and region keys in your configuration profile.
Constructor
__init__(self, verbose: bool)
Initializes the S3 data loading client.
- Args:
  - verbose (bool): Enables verbose output printing. Defaults to False.
- If an IAM profile is not set up using aws configure and Mage's configuration loader is not used, then specify your AWS credentials through the following keyword arguments:
  - aws_access_key_id (str): AWS Access Key ID credential
  - aws_secret_access_key (str): AWS Secret Access Key credential
  - aws_region (str): Region associated with the AWS IAM profile
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> S3
Creates S3 data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (S3) The data loader constructed using this method.
Methods
export - export(df: DataFrame, bucket_name: str, object_key: str, format: FileFormat | str = None, **kwargs) -> None
Exports data frame to an S3 bucket.
If the format is HDF5, the default key under which the data frame is stored is the stem of the filename. For example, if the file to write the data frame to is 'storage/my_data_frame.hdf5', the key would be 'my_data_frame'. This can be overridden using the key keyword argument.
- Args:
  - data (Union[DataFrame, str]): Data frame or file path to export.
  - bucket_name (str): Name of the bucket to export the data frame to.
  - object_key (str): Object key in the S3 bucket to export the data frame to.
  - format (FileFormat | str, optional): Format of the file to export the data frame to. Defaults to None, in which case the format is inferred.
  - **kwargs: Additional keyword arguments to pass to the file writer. Writers by file format:

| Format | Pandas Writer |
| --- | --- |
| CSV | DataFrame.to_csv |
| JSON | DataFrame.to_json |
| Parquet | DataFrame.to_parquet |
| HDF5 | DataFrame.to_hdf |
| XML | DataFrame.to_xml |
| Excel | DataFrame.to_excel |
load(bucket_name: str, object_key: str, format: FileFormat | str = None, limit: int = QUERY_ROW_LIMIT, **kwargs) -> DataFrame
Loads data from an object in an S3 bucket into a Pandas data frame. This function will load at maximum 10,000,000 rows of data from the specified file (this limit is configurable).
- Args:
  - bucket_name (str): Name of the bucket to load data from.
  - object_key (str): Key of the object in the S3 bucket to load data from.
  - format (FileFormat | str, optional): Format of the file to load the data frame from. Defaults to None, in which case the format is inferred.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional keyword arguments to pass to the file reader. Readers by file format:

| Format | Pandas Reader |
| --- | --- |
| CSV | read_csv |
| JSON | read_json |
| Parquet | read_parquet |
| HDF5 | read_hdf |
| XML | read_xml |
| Excel | read_excel |

- Returns: (DataFrame) Data frame object loaded from data in the specified file.
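As a sketch, loading from and exporting back to a bucket might look like the following; the bucket name and object keys are placeholders.

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.s3 import S3

loader = S3.with_config(ConfigFileLoader('io_config.yaml', 'default'))

# Load a CSV object into a data frame; the format is inferred from the key.
df = loader.load('my-bucket', 'raw/users.csv')

# Write the data frame back to the bucket as Parquet.
loader.export(df, 'my-bucket', 'clean/users.parquet')
```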
GoogleCloudStorage
mage_ai.io.google_cloud_storage.GoogleCloudStorage
Handles data transfer between a Google Cloud Storage bucket and the Mage app. Supports loading files of any of the following types:
- CSV
- JSON
- Parquet
- HDF5
- XML
- Excel
If the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, no further arguments are needed other than those specified below. Otherwise, use the factory method with_config to construct the data loader using manually specified credentials.
To authenticate (and authorize) access to Google Cloud Storage, credentials must be provided.
Below are the different ways to access those credentials:
- Define the GOOGLE_APPLICATION_CREDENTIALS environment variable holding the filepath to your service account key
- Define the GOOGLE_SERVICE_ACC_KEY_FILEPATH key with your configuration loader, or the path_to_credentials keyword argument with the client constructor, holding the filepath to your service account key
- Define the GOOGLE_SERVICE_ACC_KEY key with your configuration loader, or the credentials_mapping keyword argument with the client constructor, holding a mapping sharing the same contents as your service account key. If using a configuration file, be careful to wrap your service key values in quotes so the YAML parser reads the settings correctly.
- Manually pass the google.oauth2.service_account.Credentials object with the keyword argument credentials
Constructor
__init__(self, verbose: bool = False, **kwargs)
Initializes the GoogleCloudStorage data loading client.
- Args:
  - verbose (bool): Enables verbose output printing. Defaults to False.
  - credentials_mapping (Mapping[str, str]): Mapping object corresponding to your service account key. See instructions above on when to use this keyword argument.
  - path_to_credentials (str): Filepath to service account key. See instructions above on when to use this keyword argument.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> GoogleCloudStorage
Creates GoogleCloudStorage data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (GoogleCloudStorage) The data loader constructed using this method.
Methods
export - export(df: DataFrame, bucket_name: str, object_key: str, format: FileFormat | str = None, **kwargs) -> None
Exports data frame to a Google Cloud Storage bucket.
- Args:
  - data (Union[DataFrame, str]): Data frame or file path to export.
  - bucket_name (str): Name of the bucket to export the data frame to.
  - object_key (str): Object key in the Google Cloud Storage bucket to export the data frame to.
  - format (FileFormat | str, optional): Format of the file to export the data frame to. Defaults to None, in which case the format is inferred.
  - **kwargs: Additional keyword arguments to pass to the file writer. Writers by file format:

| Format | Pandas Writer |
| --- | --- |
| CSV | DataFrame.to_csv |
| JSON | DataFrame.to_json |
| Parquet | DataFrame.to_parquet |
| HDF5 | DataFrame.to_hdf |
| XML | DataFrame.to_xml |
| Excel | DataFrame.to_excel |
load(bucket_name: str, object_key: str, format: FileFormat | str = None, limit: int = QUERY_ROW_LIMIT, **kwargs) -> DataFrame
Loads data from an object in a Google Cloud Storage bucket into a Pandas data frame. This function will load at maximum 10,000,000 rows of data from the specified file (this limit is configurable).
- Args:
  - bucket_name (str): Name of the bucket to load data from.
  - object_key (str): Key of the object in the Google Cloud Storage bucket to load data from.
  - format (FileFormat | str, optional): Format of the file to load the data frame from. Defaults to None, in which case the format is inferred.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional keyword arguments to pass to the file reader. Readers by file format:

| Format | Pandas Reader |
| --- | --- |
| CSV | read_csv |
| JSON | read_json |
| Parquet | read_parquet |
| HDF5 | read_hdf |
| XML | read_xml |
| Excel | read_excel |

- Returns: (DataFrame) Data frame object loaded from data in the specified file.
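A sketch of constructing the client with a service account key file and moving an object, using the path_to_credentials keyword described above; all paths and names are placeholders.

```python
from mage_ai.io.google_cloud_storage import GoogleCloudStorage

# Authenticate with a service account key file (placeholder path).
loader = GoogleCloudStorage(path_to_credentials='/path/to/service_account_key.json')

df = loader.load('my-gcs-bucket', 'exports/events.parquet')
loader.export(df, 'my-gcs-bucket', 'backups/events.parquet')
```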
AzureBlobStorage
mage_ai.io.azure_blob_storage.AzureBlobStorage
Handles data transfer between an Azure Blob Storage container and Mage. Supports loading files of the following types:
- CSV
- JSON
- Parquet
- HDF5
- XML
- Excel
The client uses the following keys, which can be set in your io_config.yaml:
- AZURE_CLIENT_ID
- AZURE_CLIENT_SECRET
- AZURE_TENANT_ID
- AZURE_STORAGE_ACCOUNT_NAME
Constructor
Initializes the data loader for an Azure Blob Storage container.
- Args:
  - verbose (bool): Enables verbose output printing. Defaults to False.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> AzureBlobStorage
Creates AzureBlobStorage data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (AzureBlobStorage) The data loader constructed using this method.
Methods
export - export(df: DataFrame, container_name: str, blob_path: str, format: FileFormat | str = None, **kwargs) -> None
Exports data frame to an Azure Blob Storage container.
- Args:
  - df (DataFrame): Data frame to export.
  - container_name (str): Name of the Azure container to export data to.
  - blob_path (str): The desired output path of the data in your Azure Blob Storage container.
  - format (FileFormat | str, optional): Format of the file to export the data frame to. Defaults to None, in which case the format is inferred.
  - **kwargs: Additional keyword arguments to pass to the file writer. Writers by file format:

| Format | Pandas Writer |
| --- | --- |
| CSV | DataFrame.to_csv |
| JSON | DataFrame.to_json |
| Parquet | DataFrame.to_parquet |
| HDF5 | DataFrame.to_hdf |
| XML | DataFrame.to_xml |
| Excel | DataFrame.to_excel |
load(container_name: str, blob_path: str, format: FileFormat | str = None, limit: int = QUERY_ROW_LIMIT, **kwargs) -> DataFrame
Loads data from object in Azure Blob Storage into a Pandas data frame. This function will load at maximum 10,000,000 rows of data from the specified file (this limit is configurable).
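A sketch of loading a blob and writing it back, assuming the Azure keys above are set in io_config.yaml; the container and blob paths are placeholders.

```python
from mage_ai.io.azure_blob_storage import AzureBlobStorage
from mage_ai.io.config import ConfigFileLoader

loader = AzureBlobStorage.with_config(ConfigFileLoader('io_config.yaml', 'default'))

# The format is inferred from the blob path extension.
df = loader.load('my-container', 'landing/sales.csv')
loader.export(df, 'my-container', 'processed/sales.parquet')
```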
GoogleSheets
mage_ai.io.google_sheets.GoogleSheets
Handles data transfer between a Google Sheets spreadsheet and the Mage app.
Authentication is the same as for our other Google Cloud data loaders. Read more about configuration here.
The Google Sheets class is defined here.
BigQuery
mage_ai.io.bigquery.BigQuery
Handles data transfer between a BigQuery data warehouse and the Mage app.
Authentication with a Google BigQuery warehouse requires specifying the service account key for the service account that has access to the BigQuery warehouse. There are four ways to provide this service key:
- Define the GOOGLE_APPLICATION_CREDENTIALS environment variable holding the filepath to your service account key
- Define the GOOGLE_SERVICE_ACC_KEY_FILEPATH key with your configuration loader, or the path_to_credentials keyword argument with the client constructor, holding the filepath to your service account key
- Define the GOOGLE_SERVICE_ACC_KEY key with your configuration loader, or the credentials_mapping keyword argument with the client constructor, holding a mapping sharing the same contents as your service account key. If using a configuration file, be careful to wrap your service key values in quotes so the YAML parser reads the settings correctly.
- Manually pass the google.oauth2.service_account.Credentials object with the keyword argument credentials
Constructor
__init__(self, **kwargs)
Initializes the BigQuery data loading client.
- Args:
  - verbose (bool): Enables verbose output printing. Defaults to False.
  - credentials_mapping (Mapping[str, str]): Mapping object corresponding to your service account key. See instructions above on when to use this keyword argument.
  - path_to_credentials (str): Filepath to service account key. See instructions above on when to use this keyword argument.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> BigQuery
Creates BigQuery data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (BigQuery) BigQuery data loading client constructed using this method.
with_credentials_file(cls, path_to_credentials: str, **kwargs) -> BigQuery
Constructs BigQuery data loader using the file containing the service account key.
- Args:
  - path_to_credentials (str): Path to the credentials file.
  - **kwargs: Additional parameters to pass to the BigQuery client constructor.
- Returns: (BigQuery) BigQuery data loading client constructed using this method.
with_credentials_object(cls, credentials: Mapping[str, str], **kwargs) -> BigQuery
Constructs BigQuery data loader using manually specified service account key credentials.
- Args:
  - credentials (Mapping[str, str]): Mapping containing the same key-value pairs as a service account key.
  - **kwargs: Additional parameters to pass to the BigQuery client constructor.
- Returns: (BigQuery) BigQuery data loading client constructed using this method.
Methods
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected BigQuery warehouse.
- Args:
  - query_string (str): Query to execute on the BigQuery warehouse.
  - **kwargs: Additional arguments to pass to the query, such as query configurations. See the Client.query() docs for additional arguments.
export(df: DataFrame, table_id: str, if_exists: str, **configuration_params) -> None
Exports a data frame to a Google BigQuery warehouse. If the table doesn't exist, the table is automatically created.
- Args:
  - df (DataFrame): Data frame to export.
  - table_id (str): ID of the table to export the data frame to, of the format "your-project.your_dataset.your_table_name". If this table exists, the table schema must match the data frame schema. If this table doesn't exist, the table schema is automatically inferred.
  - if_exists (str): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table. In this case the schema must match the original table.
    Defaults to 'replace'. If write_disposition is specified as a keyword argument, this parameter is ignored (as both define the same functionality).
  - **configuration_params: Configuration parameters for the export job. See valid configuration parameters in the LoadJobConfig docs.
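A sketch of appending a small data frame to a table; the project, dataset, and table names are hypothetical.

```python
import pandas as pd

from mage_ai.io.bigquery import BigQuery
from mage_ai.io.config import ConfigFileLoader

df = pd.DataFrame({'user_id': [1, 2], 'event': ['signup', 'login']})

loader = BigQuery.with_config(ConfigFileLoader('io_config.yaml', 'default'))

# Append to an existing table identified by its fully qualified table ID.
loader.export(df, table_id='my-project.analytics.user_events', if_exists='append')
```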
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
When a select query is provided, this function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query. See the Google BigQuery Python client docs for additional arguments.
sample(schema: str, table: str, size: int, **kwargs) -> DataFrame
Sample data from a table in the BigQuery warehouse. Sample is not guaranteed to be random.
- Args:
  - schema (str): The schema to select the table from.
  - size (int): The number of rows to sample. Defaults to 10,000,000.
  - table (str): The table to sample from in the connected database.
- Returns: (DataFrame) Sampled data from the table.
PostgreSQL
mage_ai.io.postgres.Postgres
Handles data transfer between a PostgreSQL database and the Mage app. The Postgres client uses the following keys to connect to the PostgreSQL database.
Constructor
__init__(self, **kwargs)
Initializes the Postgres data loading client.
- Args:
  - dbname (str): The name of the database to connect to.
  - user (str): The user to connect to the database with.
  - password (str): The login password for the user.
  - host (str): Host address for the database.
  - port (str): Port on which the database is running.
  - verbose (bool): Enables verbose output printing. Defaults to False.
  - **kwargs: Additional settings for creating the psycopg2 connection.
Attributes
- conn (psycopg2.connection.Connection) - the underlying psycopg2 Connection object
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> Postgres
Creates Postgres data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Postgres) The data loader constructed using this method.
Methods
close - close()
Closes connection to PostgreSQL database.
commit - commit()
Saves all changes made to the database since the previous transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected PostgreSQL database. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the PostgreSQL database.
  - **kwargs: Additional parameters to pass to the query. See psycopg2 docs for configuring query parameters.
export(df: DataFrame, table_name: str, database: str, schema: str, if_exists: str, index: bool, **kwargs) -> None
Exports data frame to the PostgreSQL database from a Pandas data frame. If table doesn’t exist, the table is automatically created. If the schema doesn’t exist, the schema is also created.
Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to the PostgreSQL database.
  - table_name (str): Name of the table to export data to (excluding database and schema).
  - database (str): Name of the database in which the table is located.
  - schema (str): Name of the schema in which the table is located.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - index (bool): If True, the data frame index is also exported alongside the table. Defaults to False.
  - **kwargs: Additional arguments to pass to the writer. See Advanced Export Parameters for details on UPSERT (unique_constraints, unique_conflict_method) and other advanced options like overwrite_types, query_string, auto_clean_name, etc.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query. See psycopg2 docs for configuring query parameters.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to PostgreSQL database.
rollback - rollback()
Rolls back (deletes) all database changes made since the last transaction.
sample - sample(schema: str, table: str, size: int, **kwargs) -> DataFrame
Sample data from a table in the PostgreSQL database. Sample is not guaranteed to be random.
- Args:
  - schema (str): The schema to select the table from.
  - size (int): The number of rows to sample. Defaults to 10,000,000.
  - table (str): The table to sample from in the connected database.
- Returns: (DataFrame) Sampled data from the table.
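A sketch tying these methods together; the query, table, database, and schema names are hypothetical.

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.postgres import Postgres

loader = Postgres.with_config(ConfigFileLoader('io_config.yaml', 'default'))
loader.open()
try:
    df = loader.load('SELECT * FROM public.users LIMIT 100;')
    loader.export(df, 'users_copy', 'my_database', 'public', if_exists='replace')
    loader.commit()   # persist the exported table
finally:
    loader.close()
```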
Advanced Export Parameters for SQL Databases
The following advanced export parameters are available for all SQL database clients (PostgreSQL, MySQL, MSSQL, Snowflake, BigQuery, ClickHouse, Trino, Databricks SQL, SQLite, etc.) that inherit from BaseSQL. These parameters provide fine-grained control over how data is exported.
Note: UPSERT (UPDATE or INSERT) functionality is only supported on a subset of SQL databases (e.g., PostgreSQL, MySQL, MSSQL, Snowflake, BigQuery, SQLite). It is not supported on Redshift, ClickHouse, Trino, or Databricks SQL. See below for details.
UPSERT (UPDATE or INSERT)
UPSERT allows you to update existing rows when they match unique constraints, or insert new rows if they don't exist. This is useful for maintaining data integrity and avoiding duplicate entries. To enable UPSERT functionality, you need to specify both the unique_constraints and unique_conflict_method parameters:
- unique_constraints (List[str]): A list of column names that form the unique constraint. These columns are used to identify existing rows for updates.
- unique_conflict_method (str): Specifies how to handle conflicts when unique constraints are violated. Can be either:
  - 'UPDATE' (or use the constant UNIQUE_CONFLICT_METHOD_UPDATE): Updates existing rows that match the unique constraints with new values from the data frame, and inserts new rows that don't match.
  - 'IGNORE' (or use the constant UNIQUE_CONFLICT_METHOD_IGNORE): Ignores rows that would violate unique constraints (does not update or insert).
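For example, a sketch of an UPSERT-style export with the Postgres client, assuming a hypothetical users table keyed by user_id and email:

```python
import pandas as pd

from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.postgres import Postgres

df = pd.DataFrame([
    {'user_id': 1, 'email': 'a@example.com', 'plan': 'free'},
    {'user_id': 2, 'email': 'b@example.com', 'plan': 'pro'},
])

loader = Postgres.with_config(ConfigFileLoader('io_config.yaml', 'default'))
loader.open()
loader.export(
    df,
    'users',          # table_name (hypothetical)
    'my_database',    # database
    'public',         # schema
    if_exists='append',
    unique_constraints=['user_id', 'email'],
    unique_conflict_method='UPDATE',
)
loader.commit()
loader.close()
```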
- If a row with the same user_id and email already exists, it will be updated with the new values from the data frame.
- If no matching row exists, a new row will be inserted.
Under the hood, each database uses its own native syntax (PostgreSQL uses ON CONFLICT, MySQL uses ON DUPLICATE KEY UPDATE, MSSQL uses MERGE), but the interface remains consistent across all supported databases.
UPSERT functionality is not supported on Redshift, ClickHouse, Trino, and Databricks SQL.
Additional Export Parameters
The following parameters provide additional control over the export process:
- allow_reserved_words (bool): If True, allows column names that are SQL reserved words. Defaults to False.
- auto_clean_name (bool): If True, automatically cleans column and table names (removes special characters, handles reserved words). Defaults to True.
- case_sensitive (bool): If True, preserves case sensitivity for identifiers. Defaults to False.
- cascade_on_drop (bool): If True, adds CASCADE when dropping tables. Defaults to False.
- drop_table_on_replace (bool): If True, drops the table when using if_exists='replace' instead of deleting all rows. Defaults to False.
- overwrite_types (Dict[str, str]): A dictionary mapping column names to SQL data types to override automatic type inference. Example: {'price': 'DECIMAL(10,2)', 'description': 'TEXT'}.
- query_string (str): A custom SQL query string to use for creating the table. If provided, the table is created using CREATE TABLE AS SELECT ... syntax.
- skip_check_table_exists (bool): If True, skips checking if the table exists before export. Defaults to False.
- skip_create_schema (bool): If True, skips creating the schema if it doesn't exist. Defaults to False.
- skip_semicolon_at_end (bool): If True, omits the semicolon at the end of SQL statements. Defaults to False.
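A sketch of passing several of these options through **kwargs on a Postgres export; the table and column names are hypothetical.

```python
import pandas as pd

from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.postgres import Postgres

df = pd.DataFrame({'price': [9.99, 19.99], 'description': ['basic', 'premium']})

loader = Postgres.with_config(ConfigFileLoader('io_config.yaml', 'default'))
loader.open()
loader.export(
    df,
    'products',
    'my_database',
    'public',
    if_exists='replace',
    drop_table_on_replace=True,                  # drop and recreate instead of deleting rows
    overwrite_types={'price': 'DECIMAL(10,2)',   # override inferred column types
                     'description': 'TEXT'},
    allow_reserved_words=True,
)
loader.commit()
loader.close()
```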
OracleDB
Constructor
__init__(self, **kwargs)
Initializes the OracleDB data loading client.
- Args:
  - user (str): The user to connect to the database with.
  - password (str): The login password for the user.
  - host (str): Host address for the database.
  - port (int): Port on which the database is running. Defaults to 3306.
  - service (str): OracleDB service name.
  - mode (str): Switch between the OracleDB thin or thick mode.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
If mode is set to thick, it is required to use the customized Oracle Dockerfile in integrations/oracle/Dockerfile.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> OracleDB
Creates OracleDB data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (OracleDB) The data loader constructed using this method.
Methods
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected OracleDB database. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the OracleDB database.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, schema_name: str, table_name: str, if_exists: str, index: bool, **kwargs) -> None
Exports data frame to the OracleDB database from a Pandas data frame. If table doesn’t exist, the table is automatically created. The schema_name can be set as None since it is not required for OracleDB loaders.
Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to the OracleDB database.
  - schema_name (str): Not required for OracleDB loaders. Set to None.
  - table_name (str): Name of the table to export data to (excluding database).
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - index (bool): If True, the data frame index is also exported alongside the table. Defaults to False.
  - **kwargs: Additional arguments to pass to the writer.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - verbose (bool): Enables verbose output printing. Defaults to True.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to OracleDB database.
MySQL
mage_ai.io.mysql.MySQL
Handles data transfer between a MySQL database and the Mage app. The MySQL client uses the following keys to connect to the MySQL database.
Constructor
__init__(self, **kwargs)
Initializes the MySQL data loading client.
- Args:
  - database (str): The name of the database to connect to.
  - user (str): The user to connect to the database with.
  - password (str): The login password for the user.
  - host (str): Host address for the database.
  - port (int): Port on which the database is running. Defaults to 3306.
  - allow_local_infile (bool): Enables the capability to load local files. Defaults to False.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> MySQL
Creates MySQL data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (MySQL) The data loader constructed using this method.
Methods
close - close()
Closes connection to MySQL database.
commit - commit()
Saves all changes made to the database since the previous transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected MySQL database. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the MySQL database.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, schema_name: str, table_name: str, if_exists: str, index: bool, **kwargs) -> None
Exports data frame to the MySQL database from a Pandas data frame. If table doesn’t exist, the table is automatically created. The schema_name can be set as None since it is not required for MySQL loaders.
Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to the MySQL database.
  - schema_name (str): Not required for MySQL loaders. Set to None.
  - table_name (str): Name of the table to export data to (excluding database).
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - index (bool): If True, the data frame index is also exported alongside the table. Defaults to False.
  - **kwargs: Additional arguments to pass to the writer. See Advanced Export Parameters for details on UPSERT (unique_constraints, unique_conflict_method) and other advanced options.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to MySQL database.
sample - sample(database: str, table: str, size: int, **kwargs) -> DataFrame
Sample data from a table in the MySQL database. Sample is not guaranteed to be random.
- Args:
  - database (str): The database to select the table from.
  - table (str): The table to sample from in the connected database.
  - size (int): The number of rows to sample. Defaults to 10,000,000.
- Returns: (DataFrame) Sampled data from the table.
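A sketch of a round trip through MySQL, passing None for schema_name as noted above; the table names are hypothetical.

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.mysql import MySQL

loader = MySQL.with_config(ConfigFileLoader('io_config.yaml', 'default'))
loader.open()
df = loader.load('SELECT * FROM orders LIMIT 1000;')
loader.export(df, None, 'orders_snapshot', if_exists='replace')  # schema_name is not required
loader.commit()
loader.close()
```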
DuckDB
mage_ai.io.duckdb.DuckDB
Handles data transfer between a DuckDB database and the Mage app.
The DuckDB client uses the following keys to connect:
Constructor
- Args:
  - database (str): The name of the database to connect to.
  - schema (str): Schema to use.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> DuckDB
Creates DuckDB data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (DuckDB) The data loader constructed using this method.
Methods
close - close()
Closes connection to DuckDB database.
open - open()
Opens a connection to DuckDB database.
table_exists - table_exists()
Checks whether a table exists in the database.
- Args:
  - schema_name: Name of the schema to use.
  - table_name: Name of the table.
- Returns: True if the table exists, False otherwise.
upload_dataframe()
Builds the query to insert data from a data frame into a table.
- Args:
  - df: Data to insert, provided as a data frame.
get_type()
Converts an input data type into the corresponding DuckDB data type.
- Args:
  - column: Metadata associated with the column.
  - dtype: Input data type.
- Returns: DuckDB data type in string format.
Snowflake
mage_ai.io.snowflake.Snowflake
Handles data transfer between a Snowflake data warehouse and the Mage app. The Snowflake client utilizes the following keys to authenticate access and connect to Snowflake servers.
Constructor
__init__(self, **kwargs)
Initializes settings for connecting to Snowflake data warehouse.
The following arguments must be provided to the connector; all other arguments are optional.
Required Arguments:
- user (str): Username for the Snowflake user.
- password (str): Login password for the user.
- account (str): Snowflake account identifier (including region, excluding the snowflake-computing.com suffix).
Optional Arguments:
- verbose (bool): Specify whether to print verbose output.
- database (str): Name of the default database to use. If unspecified, no database is selected on login.
- schema (str): Name of the default schema to use. If unspecified, no schema is selected on login.
- warehouse (str): Name of the default warehouse to use. If unspecified, no warehouse is selected on login.
Attributes
- conn (snowflake.connector.Connection) - the underlying Snowflake Connection object
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> Snowflake
Creates Snowflake data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - verbose (bool): Enables verbose output printing. Defaults to False.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Snowflake) The data loader constructed using this method.
Methods
close - close()
Closes connection to Snowflake server.
commit - commit()
Saves all changes made to the warehouse since the previous transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected Snowflake warehouse. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the Snowflake warehouse.
  - **kwargs: Additional parameters to pass to the query. See the Snowflake Connector docs for additional parameters.
export(df: DataFrame, table_name: str, database: str, schema: str, if_exists: str, **kwargs) -> None
Exports a Pandas data frame to a Snowflake warehouse based on the table name. If table doesn’t exist, the table is automatically created.
Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to a Snowflake warehouse.
  - table_name (str): Name of the table to export data to (excluding database and schema).
  - database (str): Name of the database in which the table is located.
  - schema (str): Name of the schema in which the table is located.
  - if_exists (str, optional): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'append'.
  - **kwargs: Additional arguments to pass to the writer. See Advanced Export Parameters for details on UPSERT (unique_constraints, unique_conflict_method) and other advanced options.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from Snowflake into a Pandas data frame based on the query given. This will fail unless a SELECT query is provided.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse using execute.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - *args, **kwargs: Additional parameters to pass to the query. See the Snowflake Connector docs for additional parameters.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to Snowflake servers.
rollback - rollback()
Rolls back (deletes) all database changes made since the last transaction.
sample - sample(schema: str, table: str, size: int, **kwargs) -> DataFrame
Sample data from a table in the Snowflake warehouse. Sample is not guaranteed to be random.
- Args:
  - schema (str): The schema to select the table from.
  - size (int): The number of rows to sample. Defaults to 10,000,000.
  - table (str): The table to sample from in the connected database.
- Returns: (DataFrame) Sampled data from the table.
Redshift
mage_ai.io.redshift.Redshift
Handles data transfer between a Redshift cluster and the Mage app. Mage uses temporary credentials to authenticate access to a Redshift cluster. There are two ways to specify these credentials:
- Pre-generate temporary credentials and specify them in the configuration settings. Add the corresponding keys to the configuration settings for Redshift to use the temporary credentials.
- Provide an IAM profile to automatically generate temporary credentials for the connection. The IAM profile is read from ~/.aws/ and is used with the GetClusterCredentials endpoint to generate temporary credentials. Add the corresponding keys to the configuration settings for Redshift to generate temporary credentials. If an IAM profile is not set up using aws configure, manually specify the AWS credentials in the configuration settings as well.
Constructor
__init__(**kwargs)
- Args:
  - verbose (bool): Enables verbose output. Defaults to False.
Attributes
- conn (redshift_connector.Connection) - the underlying Redshift Connection object
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> Redshift
Initializes Redshift client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Redshift) Redshift data loading client constructed using this method.
with_temporary_credentials(database: str, host: str, user: str, password: str, port: int = 5439, **kwargs) -> Redshift
Creates a Redshift data loader with temporary database credentials.
- Args:
  - database (str): Name of the database to connect to.
  - host (str): The hostname of the Redshift cluster which the database belongs to.
  - user (str): Temporary credentials username for use in authentication.
  - password (str): Temporary credentials password for use in authentication.
  - port (int, optional): Port number of the Redshift cluster. Defaults to 5439.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Redshift) Redshift data loading client constructed using this method.
with_iam(cluster_identifier: str, database: str, db_user: str, profile: str, **kwargs) -> Redshift
Creates a Redshift data loader using an IAM profile from ~/.aws.
The IAM profile settings can also be manually specified as keyword arguments to this constructor, but this is not recommended. If credentials are manually specified, the region of the Redshift cluster must also be specified.
- Args:
  - cluster_identifier (str): Identifier of the cluster to connect to.
  - database (str): The database to connect to within the specified cluster.
  - db_user (str): Database username.
  - profile (str, optional): The profile to use from the stored credentials file. Defaults to 'default'.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Redshift) Redshift data loading client constructed using this method.
Methods
close - close()
Closes connection to the Redshift cluster specified in the loader configuration.
commit - commit()
Saves all changes made to the database since the last transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected Redshift cluster. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the Redshift cluster.
  - **kwargs: Additional parameters to pass to the query. See redshift-connector docs for configuring query parameters.
export(df: DataFrame, table_name: str) -> None
Exports a Pandas data frame to a Redshift cluster under the specified table. The changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to a database in the Redshift cluster.
  - table_name (str): Name of the table to export the data to. The table must already exist.
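A sketch of exporting query results back into the cluster; per the description above, the target table must already exist, and the table names here are hypothetical.

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.redshift import Redshift

loader = Redshift.with_config(ConfigFileLoader('io_config.yaml', 'default'))
loader.open()
df = loader.load('SELECT * FROM staging_events LIMIT 1000;')
loader.export(df, 'events_sample')  # target table must already exist
loader.commit()
loader.close()
```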
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse using execute.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - *args, **kwargs: Additional parameters to send to the query, including parameters for use with format strings. See redshift-connector docs for configuring query parameters.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to the Redshift cluster specified in the loader configuration.
rollback - rollback()
Rolls back (deletes) all database changes made since the last transaction.
sample - sample(schema: str, table: str, size: int, **kwargs) -> DataFrame
Sample data from a table in the selected database in the Redshift cluster. Sample is not guaranteed to be random.
- Args:
  - schema (str): The schema to select the table from.
  - size (int): The number of rows to sample. Defaults to 10,000,000.
  - table (str): The table to sample from in the connected database.
- Returns: (DataFrame) Sampled data from the table.
MSSQL
mage_ai.io.mssql.MSSQL
Handles data transfer between a Microsoft SQL Server database and the Mage app. The MSSQL client utilizes the following keys to connect to the database.
Constructor
__init__(self, **kwargs)
Initializes the MSSQL data loading client.
- Args:
  - database (str): The name of the database to connect to.
  - user (str): The user to connect to the database with.
  - password (str): The login password for the user.
  - host (str): Host address for the database.
  - port (int): Port on which the database is running. Defaults to 1433.
  - schema (str): Schema name. Defaults to 'dbo'.
  - authentication (str, optional): Authentication method.
  - connection_method (str): Connection method ('direct' or 'ssh_tunnel'). Defaults to 'direct'.
  - ssh_host (str, optional): SSH tunnel host.
  - ssh_port (int, optional): SSH tunnel port.
  - ssh_username (str, optional): SSH tunnel username.
  - ssh_password (str, optional): SSH tunnel password.
  - ssh_pkey (str, optional): SSH tunnel private key path.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> MSSQL
Creates MSSQL data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (MSSQL) The data loader constructed using this method.
Methods
close - close()
Closes connection to MSSQL database.
commit - commit()
Saves all changes made to the database since the previous transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected MSSQL database. Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the MSSQL database.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, schema_name: str, table_name: str, if_exists: str, index: bool, **kwargs) -> None
Exports data frame to the MSSQL database from a Pandas data frame. If table doesn’t exist, the table is automatically created.
Any changes made to the database will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to the MSSQL database.
  - schema_name (str): Name of the schema in which the table is located. Defaults to 'dbo'.
  - table_name (str): Name of the table to export data to.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - index (bool): If True, the data frame index is also exported alongside the table. Defaults to False.
  - **kwargs: Additional arguments to pass to the writer. See the Advanced Export Parameters section for details on options such as unique_constraints, unique_conflict_method, and other advanced export options.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to MSSQL database.
rollback - rollback()
Rolls back (deletes) all database changes made since the last transaction.
ClickHouse
mage_ai.io.clickhouse.ClickHouse
Handles data transfer between a ClickHouse data warehouse and the Mage app. The ClickHouse client utilizes the following keys to authenticate access and connect to ClickHouse servers.
Constructor
__init__(self, **kwargs)
Initializes the ClickHouse data loading client.
- Args:
  - host (str): ClickHouse hostname.
  - port (int): ClickHouse port.
  - username (str): ClickHouse username.
  - password (str): ClickHouse password.
  - database (str): Database name. Defaults to 'default'.
  - interface (str, optional): ClickHouse interface.
  - secure (bool, optional): Enable SSL.
  - ca_cert (str, optional): Path to CA certificate.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings passed to the ClickHouse client.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> ClickHouse
Creates ClickHouse data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (ClickHouse) The data loader constructed using this method.
Methods
execute - execute(command_string: str, **kwargs) -> None
Sends command to the connected ClickHouse warehouse.
- Args:
  - command_string (str): Command to execute on the ClickHouse warehouse.
  - **kwargs: Additional arguments to pass to the command.
load(query_string: str, limit: int, **kwargs) -> DataFrame
Loads data from ClickHouse into a Pandas data frame based on the query given. This will fail if the query returns no data from the database. When a select query is provided, this function will load at maximum 10,000,000 rows of data. To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional arguments to pass to the query.
- Returns: (DataFrame) Data frame associated with the given query.
export(df: DataFrame, table_name: str, database: str, if_exists: str, **kwargs) -> None
Exports a Pandas data frame to a ClickHouse warehouse. If table doesn’t exist, the table is automatically created.
- Args:
  - df (DataFrame): Data frame to export to the ClickHouse warehouse.
  - table_name (str): Name of the table to export data to.
  - database (str): Name of the database in which the table is located.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - **kwargs: Advanced export parameters. See Advanced Export Parameters. (UPSERT is not supported.)
SQLite
mage_ai.io.sqlite.SQLite
Handles data transfer between a SQLite database and the Mage app.
Constructor
__init__(self, database: str, verbose: bool = True, **kwargs)
Initializes the SQLite data loading client.
- Args:
  - database (str): Path to the SQLite database file.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> SQLite
Creates SQLite data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (SQLite) The data loader constructed using this method.
Methods
close - close()
Closes connection to SQLite database.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected SQLite database.
- Args:
  - query_string (str): The query to execute on the SQLite database.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, table_name: str, if_exists: str, index: bool, **kwargs) -> None
Exports data frame to the SQLite database from a Pandas data frame. If table doesn’t exist, the table is automatically created.
- Args:
  - df (DataFrame): Data frame to export to the SQLite database.
  - table_name (str): Name of the table to export data to.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - index (bool): If True, the data frame index is also exported alongside the table. Defaults to False.
  - **kwargs: Advanced export options such as unique_constraints, unique_conflict_method, overwrite_types, auto_clean_name, etc. See the Advanced Export Parameters section for details.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from the results of a query into a Pandas data frame. This will fail if the query returns no data from the database.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to SQLite database.
Trino
mage_ai.io.trino.Trino
Handles data transfer between a Trino data warehouse and the Mage app. The Trino client utilizes the following keys to authenticate access and connect to Trino servers.
Constructor
__init__(self, catalog: str, host: str, user: str, password: str = None, port: int = 8080, schema: str = None, verbose: bool = True, **kwargs)
Initializes the Trino data loading client.
- Args:
  - catalog (str): Trino catalog name.
  - host (str): Trino hostname.
  - user (str): Trino username.
  - password (str, optional): Trino password.
  - port (int): Trino port. Defaults to 8080.
  - schema (str, optional): Trino schema name.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> Trino
Creates Trino data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Trino) The data loader constructed using this method.
Methods
close - close()
Closes connection to Trino warehouse.
commit - commit()
Saves all changes made to the warehouse since the previous transaction.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected Trino warehouse. Any changes made to the warehouse will not be saved unless commit() is called afterward.
- Args:
  - query_string (str): The query to execute on the Trino warehouse.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, table_name: str, schema: str, if_exists: str, **kwargs) -> None
Exports a Pandas data frame to a Trino warehouse. If table doesn’t exist, the table is automatically created.
Any changes made to the warehouse will not be saved unless commit() is called afterward.
- Args:
  - df (DataFrame): Data frame to export to the Trino warehouse.
  - table_name (str): Name of the table to export data to.
  - schema (str): Name of the schema in which the table is located.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either
    - 'fail': throw an error.
    - 'replace': drops existing table and creates new table of same name.
    - 'append': appends data frame to existing table.
    Defaults to 'replace'.
  - **kwargs: Advanced export parameters. See Advanced Export Parameters. (UPSERT is not supported.)
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from Trino into a Pandas data frame based on the query given. This will fail if the query returns no data from the warehouse.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in the warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, Optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to the Trino warehouse.
rollback - rollback()
Rolls back (deletes) all warehouse changes made since the last transaction.
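A minimal workflow sketch, assuming Trino credentials are stored under the default profile of io_config.yaml; the schema and table names are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.trino import Trino

loader = Trino.with_config(ConfigFileLoader())

# The client wraps a connection, so open and close it explicitly.
loader.open()
try:
    # Load the result of a query into a Pandas data frame.
    df = loader.load('SELECT * FROM my_schema.my_table LIMIT 100')

    # Write the data frame back to the warehouse and persist the changes.
    loader.export(df, 'my_table_copy', 'my_schema', if_exists='append')
    loader.commit()
finally:
    loader.close()
```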
MongoDB
mage_ai.io.mongodb.MongoDB
Handles data transfer between a MongoDB database and the Mage app. The MongoDB client utilizes the following keys to connect to the database.
Constructor
__init__(self, connection_string: str = None, host: str = None, port: int = 27017, user: str = None, password: str = None, database: str = None, collection: str = None, verbose: bool = True, **kwargs)
Initializes the MongoDB data loading client.
- Args:
  - connection_string (str, optional): MongoDB connection string. If provided, other connection parameters are ignored.
  - host (str, optional): MongoDB hostname.
  - port (int, optional): MongoDB port. Defaults to 27017.
  - user (str, optional): MongoDB username.
  - password (str, optional): MongoDB password.
  - database (str, optional): MongoDB database name.
  - collection (str, optional): MongoDB collection name.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> MongoDB
Creates MongoDB data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (MongoDB) The data loader constructed using this method.
Methods
load - load(collection: str = None, query: Dict = None, **kwargs) -> DataFrame
Loads the data frame from the MongoDB collection.
- Args:
  - collection (str, optional): MongoDB collection name. Defaults to collection specified in constructor.
  - query (Dict, optional): Filter the result by using a query object. Examples: { "address": "Park Lane 38" }, { "address": { "$gt": "S" } }
- Returns: (DataFrame) Data frame object loaded from the MongoDB collection.
export(data: Union[DataFrame, List[Dict]], collection: str = None, **kwargs) -> None
Exports the input dataframe to the MongoDB collection.
- Args:
  - data (Union[DataFrame, List[Dict]]): Data frame or list of dictionaries to export.
  - collection (str, optional): MongoDB collection name. Defaults to collection specified in constructor.
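A short sketch of loading and exporting with this client, assuming the connection settings (including a default collection) come from io_config.yaml; the target collection name and filter are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.mongodb import MongoDB

client = MongoDB.with_config(ConfigFileLoader())

# Load documents matching a filter into a Pandas data frame;
# the collection defaults to the one specified in the configuration.
df = client.load(query={'address': {'$gt': 'S'}})

# Export the data frame (or a list of dictionaries) to another collection.
client.export(df, collection='orders_copy')
```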
Databricks SQL
Only in Mage Pro. Try our fully managed solution to access this advanced feature.
mage_ai.io.databricks_sql.DatabricksSQL
Handles data transfer between a Databricks SQL warehouse and the Mage app. The DatabricksSQL client utilizes the following keys to authenticate access and connect to Databricks.
Constructor
__init__(self, verbose: bool = True, **kwargs)
Initializes the Databricks SQL data loading client.
- Args:
  - access_token (str): Databricks access token (required).
  - host (str): Databricks workspace hostname (required).
  - http_path (str): Databricks SQL warehouse HTTP path (required).
  - database (str, optional): Database/catalog name.
  - schema (str, optional): Schema name.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the connection.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> DatabricksSQL
Creates Databricks SQL data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (DatabricksSQL) The data loader constructed using this method.
Methods
close - close()
Closes the connection to the Databricks SQL warehouse.
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected Databricks SQL warehouse.
- Args:
  - query_string (str): The query to execute on the Databricks SQL warehouse.
  - **kwargs: Additional parameters to pass to the query.
export(df: DataFrame, table_name: str, database: str, schema: str, if_exists: str, **kwargs) -> None
Exports a Pandas data frame to a Databricks SQL warehouse. If the table doesn’t exist, it is automatically created.
- Args:
  - df (DataFrame): Data frame to export to Databricks SQL warehouse.
  - table_name (str): Name of the table to export data to.
  - database (str): Name of the database/catalog in which the table is located.
  - schema (str): Name of the schema in which the table is located.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either 'fail' (throw an error), 'replace' (drops existing table and creates new table of same name), or 'append' (appends data frame to existing table). Defaults to 'replace'.
  - **kwargs: See Advanced Export Parameters for available options.
load(query_string: str, limit: int, *args, **kwargs) -> DataFrame
Loads data from Databricks SQL into a Pandas data frame based on the query given. This will fail if the query returns no data from the warehouse.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in the warehouse.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, Optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional parameters to pass to the query.
- Returns: (DataFrame) Data frame containing the queried data.
open()
Opens a connection to the Databricks SQL warehouse.
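A minimal sketch of the workflow with this client, assuming the access token, host, and HTTP path are stored in io_config.yaml; the catalog, schema, and table names are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.databricks_sql import DatabricksSQL

loader = DatabricksSQL.with_config(ConfigFileLoader())

loader.open()
try:
    # Load query results into a Pandas data frame.
    df = loader.load('SELECT * FROM my_catalog.my_schema.my_table LIMIT 100')

    # Append the data frame to a table in the given catalog and schema.
    loader.export(df, 'my_table_copy', 'my_catalog', 'my_schema', if_exists='append')
finally:
    loader.close()
```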
Spark
mage_ai.io.spark.Spark
Handles data transfer between a Spark session and the Mage app. The Spark client utilizes the following keys to connect to Spark.
Constructor
__init__(self, **kwargs)
Initializes the Spark data loading client.
- Args:
  - host (str): Spark master URL (e.g., ‘local’, ‘spark://host:port’).
  - method (str, optional): Spark connection method.
  - database (str, optional): Database/schema name.
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings for creating the Spark session.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> Spark
Creates Spark data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (Spark) The data loader constructed using this method.
Methods
execute - execute(query_string: str, **kwargs) -> None
Sends query to the connected Spark session.
- Args:
  - query_string (str): Query to execute on the Spark session.
  - **kwargs: Additional arguments to pass to query.
load(query_string: str, limit: int, **kwargs) -> DataFrame
Loads data from Spark into a Pandas data frame based on the query given. This will fail if the query returns no data.
This function will load at maximum 10,000,000 rows of data (this limit is configurable). To operate on more data, consider performing data transformations in Spark.
- Args:
  - query_string (str): Query to fetch a table or subset of a table.
  - limit (int, Optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional arguments to pass to query.
- Returns: (DataFrame) Data frame containing the queried data.
export(df: DataFrame, table_name: str, database: str, if_exists: str, **kwargs) -> None
Exports a Pandas data frame to a Spark session. If the table doesn’t exist, it is automatically created.
- Args:
  - df (DataFrame): Data frame to export to Spark session.
  - table_name (str): Name of the table to export data to.
  - database (str): Name of the database in which the table is located.
  - if_exists (ExportWritePolicy): Specifies export policy if table exists. Either 'fail' (throw an error), 'replace' (drops existing table and creates new table of same name), or 'append' (appends data frame to existing table). Defaults to 'replace'.
  - **kwargs: Additional arguments to pass to writer.
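A brief sketch of querying a Spark session and writing a table back, assuming the Spark connection settings come from io_config.yaml; the database and table names are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.spark import Spark

spark_client = Spark.with_config(ConfigFileLoader())

# Run a query against the Spark session and collect the result as a Pandas data frame.
df = spark_client.load('SELECT * FROM my_database.my_table LIMIT 100')

# Write the data frame back as a table in the given database.
spark_client.export(df, 'my_table_copy', 'my_database', if_exists='replace')
```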
Azure Data Lake Storage
Only in Mage Pro. Try our fully managed solution to access this advanced feature.
mage_ai.io.azure_data_lake_storage.AzureDataLakeStorage
Handles data transfer between an Azure Data Lake Storage Gen2 container and the Mage app. Supports loading files of the following types:
- CSV
- JSON
- Parquet
The client uses the following keys, which can be stored in io_config.yaml, to authenticate and connect: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID, and AZURE_STORAGE_ACCOUNT_NAME.
Constructor
__init__(self, verbose: bool = True, **kwargs)
Initializes data loader from an Azure Data Lake Storage Gen2 container.
- Args:
  - storage_account_name (str): Azure Storage account name (required).
  - verbose (bool): Enables verbose output printing. Defaults to True.
  - **kwargs: Additional settings. If AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are provided, they will be used for authentication. Otherwise, DefaultAzureCredential will be used.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> AzureDataLakeStorage
Creates AzureDataLakeStorage data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (AzureDataLakeStorage) The data loader constructed using this method.
Methods
export - export(df: DataFrame, container_name: str, file_path: str, format: FileFormat | str = None, **kwargs) -> None
Exports data frame to an Azure Data Lake Storage Gen2 container.
- Args:
  - df (DataFrame): Data frame to export.
  - container_name (str): Name of the Azure container to export data to.
  - file_path (str): The desired output path of the data in your Azure Data Lake Storage.
  - format (FileFormat | str, Optional): Format of the file to export data frame to. Defaults to None, in which case the format is inferred.
  - **kwargs: Additional keyword arguments to pass to the file writer. See possible arguments by file formats in FileIO section.
load(container_name: str, file_path: str, format: FileFormat | str = None, limit: int = QUERY_ROW_LIMIT, **kwargs) -> DataFrame
Loads data from object in Azure Data Lake Storage Gen2 into a Pandas data frame. This function will load at maximum 10,000,000 rows of data from the specified file (this limit is configurable).
- Args:
  - container_name (str): Name of the container to load data from.
  - file_path (str): Path of the file in Azure Data Lake Storage to load data from.
  - format (FileFormat | str, Optional): Format of the file to load data frame from. Defaults to None, in which case the format is inferred.
  - limit (int, optional): The number of rows to limit the loaded data frame to. Defaults to 10,000,000.
  - **kwargs: Additional keyword arguments to pass to the file reader. See possible arguments by file formats in FileIO section.
- Returns: (DataFrame) Data frame object loaded from data in the specified file.
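A short sketch of reading and writing files in a container, assuming the storage account and Azure credentials are configured in io_config.yaml; the container name and file paths are illustrative:

```python
from mage_ai.io.azure_data_lake_storage import AzureDataLakeStorage
from mage_ai.io.config import ConfigFileLoader

adls = AzureDataLakeStorage.with_config(ConfigFileLoader())

# Load a Parquet file from the container into a Pandas data frame.
df = adls.load('my-container', 'raw/input.parquet')

# Export the data frame back to the container as CSV.
adls.export(df, 'my-container', 'processed/output.csv', format='csv')
```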
Microsoft Fabric Warehouse
Only in Mage Pro. Try our fully managed solution to access this advanced feature.
mage_ai.io.microsoft_fabric_warehouse.MicrosoftFabricWarehouse
Handles data transfer between a Microsoft Fabric Warehouse and the Mage app. This client extends the MSSQL client and uses the same connection methods.
Constructor
__init__(self, **kwargs)
Initializes the Microsoft Fabric Warehouse data loading client. Uses the same parameters as MSSQL.
Factory Methods
with_config - with_config(config: BaseConfigLoader, **kwargs) -> MicrosoftFabricWarehouse
Creates Microsoft Fabric Warehouse data loading client from configuration settings.
- Args:
  - config (BaseConfigLoader): Configuration loader object.
  - **kwargs: Additional parameters passed to the loader constructor.
- Returns: (MicrosoftFabricWarehouse) The data loader constructed using this method.
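A minimal sketch, assuming warehouse settings are stored in io_config.yaml; the schema and table names are illustrative, and the open, load, and close methods are assumed to be inherited from the MSSQL client this class extends:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.microsoft_fabric_warehouse import MicrosoftFabricWarehouse

warehouse = MicrosoftFabricWarehouse.with_config(ConfigFileLoader())

# Connection handling follows the MSSQL client this class extends.
warehouse.open()
try:
    df = warehouse.load('SELECT TOP 100 * FROM my_schema.my_table')
finally:
    warehouse.close()
```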
Configuration Settings
Connections to third-party data storage require you to specify confidential information such as login credentials or access keys. While you can manually specify this information in code when constructing data loading clients, it is recommended not to store these secrets directly in code. Instead, Mage provides configuration loaders that allow data loading clients to use your secrets without explicitly writing them in code. Currently, the following sources (and their corresponding configuration loaders) can be used to load configuration settings:
- Configuration File - ConfigFileLoader
- Environment Variables - EnvironmentVariableLoader
- AWS Secrets Manager - AWSSecretLoader
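As a sketch, each source corresponds to a loader class that can be constructed and passed to a client’s with_config factory method; the file path, profile name, and AWS region below are illustrative, and EnvironmentVariableLoader and AWSSecretLoader are assumed to live alongside ConfigFileLoader in mage_ai.io.config:

```python
from mage_ai.io.config import (
    AWSSecretLoader,
    ConfigFileLoader,
    EnvironmentVariableLoader,
)

# Configuration file (io_config.yaml), using the 'default' profile.
file_config = ConfigFileLoader('io_config.yaml', 'default')

# Environment variables in the current environment.
env_config = EnvironmentVariableLoader()

# AWS Secrets Manager (credentials may also come from your AWS CLI profile).
secret_config = AWSSecretLoader(region_name='us-west-2')
```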
The table below lists the available configuration keys, which are defined in the mage_ai.io.config.ConfigKey enum. Not all keys need be specified at once - only use the keys related to the services you utilize.
| Key Name | Service | Client Constructor Parameter | Description | Notes |
|---|---|---|---|---|
| AWS General | ||||
| AWS_ACCESS_KEY_ID | AWS General | - | AWS Access Key ID credential | Used by Redshift and S3 |
| AWS_SECRET_ACCESS_KEY | AWS General | - | AWS Secret Access Key credential | Used by Redshift and S3 |
| AWS_SESSION_TOKEN | AWS General | - | AWS Session Token (used to generate temporary DB credentials) | Used by Redshift |
| AWS_REGION | AWS General | - | AWS Region | Used by Redshift and S3 |
| AWS_ENDPOINT | AWS General | - | AWS endpoint URL | Used by S3. Optional |
| AWS Redshift | ||||
| REDSHIFT_DBNAME | AWS Redshift | database | Name of Redshift database to connect to | |
| REDSHIFT_HOST | AWS Redshift | host | Redshift Cluster hostname | Use with temporary credentials |
| REDSHIFT_PORT | AWS Redshift | port | Redshift Cluster port. Optional, defaults to 5439. | Use with temporary credentials |
| REDSHIFT_SCHEMA | AWS Redshift | schema | Redshift schema name | Optional |
| REDSHIFT_TEMP_CRED_USER | AWS Redshift | user | Redshift temporary credentials username. | Use with temporary credentials |
| REDSHIFT_TEMP_CRED_PASSWORD | AWS Redshift | password | Redshift temporary credentials password. | Use with temporary credentials |
| REDSHIFT_DBUSER | AWS Redshift | db_user | Redshift database user to generate credentials for. | Use to generate temporary credentials |
| REDSHIFT_CLUSTER_ID | AWS Redshift | cluster_identifier | Redshift cluster ID | Use to generate temporary credentials |
| REDSHIFT_IAM_PROFILE | AWS Redshift | profile | Name of the IAM profile to generate temporary credentials with | Use to generate temporary credentials |
| PostgreSQL | ||||
| POSTGRES_DBNAME | PostgreSQL | dbname | Database name | |
| POSTGRES_USER | PostgreSQL | user | Database login username | |
| POSTGRES_PASSWORD | PostgreSQL | password | Database login password | |
| POSTGRES_HOST | PostgreSQL | host | Database hostname | |
| POSTGRES_PORT | PostgreSQL | port | PostgreSQL database port | |
| POSTGRES_SCHEMA | PostgreSQL | schema | PostgreSQL schema name | Optional |
| POSTGRES_CONNECTION_METHOD | PostgreSQL | connection_method | Connection method (‘direct’ or ‘ssh_tunnel’) | Optional |
| POSTGRES_SSH_HOST | PostgreSQL | ssh_host | SSH tunnel host | Optional |
| POSTGRES_SSH_PORT | PostgreSQL | ssh_port | SSH tunnel port | Optional |
| POSTGRES_SSH_USERNAME | PostgreSQL | ssh_username | SSH tunnel username | Optional |
| POSTGRES_SSH_PASSWORD | PostgreSQL | ssh_password | SSH tunnel password | Optional |
| POSTGRES_SSH_PKEY | PostgreSQL | ssh_pkey | SSH tunnel private key path | Optional |
| POSTGRES_SSL_MODE | PostgreSQL | sslmode | SSL mode | Optional |
| POSTGRES_SSL_ROOTCERT | PostgreSQL | sslrootcert | SSL root certificate path | Optional |
| POSTGRES_SSL_CERT | PostgreSQL | sslcert | SSL certificate path | Optional |
| POSTGRES_SSL_KEY | PostgreSQL | sslkey | SSL key path | Optional |
| POSTGRES_CONNECT_TIMEOUT | PostgreSQL | connect_timeout | Connection timeout in seconds | Optional |
| Snowflake | ||||
| SNOWFLAKE_USER | Snowflake | user | Snowflake username | |
| SNOWFLAKE_PASSWORD | Snowflake | password | Snowflake password | |
| SNOWFLAKE_ACCOUNT | Snowflake | account | Snowflake account ID (including region) | |
| SNOWFLAKE_DEFAULT_DB | Snowflake | database | Default database to use. Optional, no database chosen if unspecified. | |
| SNOWFLAKE_DEFAULT_SCHEMA | Snowflake | schema | Default schema to use. Optional, no schema chosen if unspecified. | |
| SNOWFLAKE_DEFAULT_WH | Snowflake | warehouse | Default warehouse to use. Optional, no warehouse chosen if unspecified. | |
| SNOWFLAKE_PRIVATE_KEY_PASSPHRASE | Snowflake | private_key_passphrase | Private key passphrase for key pair authentication | Optional |
| SNOWFLAKE_PRIVATE_KEY_PATH | Snowflake | private_key_path | Path to private key file for key pair authentication | Optional |
| SNOWFLAKE_ROLE | Snowflake | role | Snowflake role name | Optional |
| SNOWFLAKE_TIMEOUT | Snowflake | timeout | Query timeout in seconds | Optional |
| Google BigQuery | ||||
| GOOGLE_SERVICE_ACC_KEY | Google BigQuery | credentials_mapping | Service account key | |
| GOOGLE_SERVICE_ACC_KEY_FILEPATH | Google BigQuery | path_to_credentials | Path to service account key | |
| GOOGLE_LOCATION | Google BigQuery | location | Google Cloud location | Optional |
| MySQL | ||||
| MYSQL_DATABASE | MySQL | database | MySQL database name | |
| MYSQL_USER | MySQL | user | MySQL username | |
| MYSQL_PASSWORD | MySQL | password | MySQL password | |
| MYSQL_HOST | MySQL | host | MySQL hostname | |
| MYSQL_PORT | MySQL | port | MySQL port. Defaults to 3306. | |
| MYSQL_CONNECTION_METHOD | MySQL | connection_method | Connection method (‘direct’ or ‘ssh_tunnel’) | Optional |
| MYSQL_SSH_HOST | MySQL | ssh_host | SSH tunnel host | Optional |
| MYSQL_SSH_PORT | MySQL | ssh_port | SSH tunnel port | Optional |
| MYSQL_SSH_USERNAME | MySQL | ssh_username | SSH tunnel username | Optional |
| MYSQL_SSH_PASSWORD | MySQL | ssh_password | SSH tunnel password | Optional |
| MYSQL_SSH_PKEY | MySQL | ssh_pkey | SSH tunnel private key path | Optional |
| MSSQL | ||||
| MSSQL_DATABASE | MSSQL | database | MSSQL database name | |
| MSSQL_USER | MSSQL | user | MSSQL username | |
| MSSQL_PASSWORD | MSSQL | password | MSSQL password | |
| MSSQL_HOST | MSSQL | host | MSSQL hostname | |
| MSSQL_PORT | MSSQL | port | MSSQL port. Defaults to 1433. | |
| MSSQL_SCHEMA | MSSQL | schema | MSSQL schema name. Defaults to ‘dbo’. | |
| MSSQL_AUTHENTICATION | MSSQL | authentication | Authentication method | Optional |
| MSSQL_CONNECTION_METHOD | MSSQL | connection_method | Connection method (‘direct’ or ‘ssh_tunnel’) | Optional |
| MSSQL_SSH_HOST | MSSQL | ssh_host | SSH tunnel host | Optional |
| MSSQL_SSH_PORT | MSSQL | ssh_port | SSH tunnel port | Optional |
| MSSQL_SSH_USERNAME | MSSQL | ssh_username | SSH tunnel username | Optional |
| MSSQL_SSH_PASSWORD | MSSQL | ssh_password | SSH tunnel password | Optional |
| MSSQL_SSH_PKEY | MSSQL | ssh_pkey | SSH tunnel private key path | Optional |
| MSSQL_DRIVER | MSSQL | driver | ODBC driver name | Optional |
| ClickHouse | ||||
| CLICKHOUSE_HOST | ClickHouse | host | ClickHouse hostname | |
| CLICKHOUSE_PORT | ClickHouse | port | ClickHouse port | |
| CLICKHOUSE_USERNAME | ClickHouse | username | ClickHouse username | |
| CLICKHOUSE_PASSWORD | ClickHouse | password | ClickHouse password | |
| CLICKHOUSE_DATABASE | ClickHouse | database | ClickHouse database name | |
| CLICKHOUSE_INTERFACE | ClickHouse | interface | ClickHouse interface | Optional |
| CLICKHOUSE_SSL_CA_CERT | ClickHouse | ca_cert | Path to CA certificate for SSL | Optional |
| Trino | ||||
| TRINO_CATALOG | Trino | catalog | Trino catalog name | |
| TRINO_HOST | Trino | host | Trino hostname | |
| TRINO_PORT | Trino | port | Trino port. Defaults to 8080. | |
| TRINO_USER | Trino | user | Trino username | |
| TRINO_PASSWORD | Trino | password | Trino password | Optional |
| TRINO_SCHEMA | Trino | schema | Trino schema name | Optional |
| Databricks SQL | ||||
| DATABRICKS_ACCESS_TOKEN | Databricks SQL | access_token | Databricks access token | |
| DATABRICKS_HOST | Databricks SQL | host | Databricks workspace hostname | |
| DATABRICKS_HTTP_PATH | Databricks SQL | http_path | Databricks SQL warehouse HTTP path | |
| DATABRICKS_DATABASE | Databricks SQL | database | Databricks database/catalog name | Optional |
| DATABRICKS_SCHEMA | Databricks SQL | schema | Databricks schema name | Optional |
| Spark | ||||
| SPARK_HOST | Spark | host | Spark master URL | |
| SPARK_METHOD | Spark | method | Spark connection method | Optional |
| SPARK_SCHEMA | Spark | database | Spark database/schema name | Optional |
| MongoDB | ||||
| MONGODB_CONNECTION_STRING | MongoDB | connection_string | MongoDB connection string | Alternative to individual settings |
| MONGODB_HOST | MongoDB | host | MongoDB hostname | |
| MONGODB_PORT | MongoDB | port | MongoDB port. Defaults to 27017. | |
| MONGODB_USER | MongoDB | user | MongoDB username | |
| MONGODB_PASSWORD | MongoDB | password | MongoDB password | |
| MONGODB_DATABASE | MongoDB | database | MongoDB database name | |
| MONGODB_COLLECTION | MongoDB | collection | MongoDB collection name | Optional |
| OracleDB | ||||
| ORACLEDB_USER | OracleDB | user | OracleDB username | |
| ORACLEDB_PASSWORD | OracleDB | password | OracleDB password | |
| ORACLEDB_HOST | OracleDB | host | OracleDB hostname | |
| ORACLEDB_PORT | OracleDB | port | OracleDB port | |
| ORACLEDB_SERVICE | OracleDB | service | OracleDB service name | |
| ORACLEDB_MODE | OracleDB | mode | OracleDB mode (‘thin’ or ‘thick’) | Optional |
| DuckDB | ||||
| DUCKDB_DATABASE | DuckDB | database | DuckDB database path | |
| DUCKDB_SCHEMA | DuckDB | schema | DuckDB schema name. Defaults to ‘main’. | Optional |
| MOTHERDUCK_TOKEN | DuckDB | - | MotherDuck authentication token | Optional |
| SQLite | ||||
| SQLITE | SQLite | database | SQLite database file path | |
| Azure | ||||
| AZURE_CLIENT_ID | Azure Services | client_id | Azure AD application client ID | Used by Azure Blob Storage, Azure Data Lake Storage |
| AZURE_CLIENT_SECRET | Azure Services | client_secret | Azure AD application client secret | Used by Azure Blob Storage, Azure Data Lake Storage |
| AZURE_TENANT_ID | Azure Services | tenant_id | Azure AD tenant ID | Used by Azure Blob Storage, Azure Data Lake Storage |
| AZURE_STORAGE_ACCOUNT_NAME | Azure Services | storage_account_name | Azure Storage account name | Used by Azure Blob Storage, Azure Data Lake Storage |
| Microsoft Fabric | ||||
| MICROSOFT_FABRIC_WAREHOUSE_NAME | Microsoft Fabric | warehouse_name | Microsoft Fabric warehouse name | |
| MICROSOFT_FABRIC_WAREHOUSE_ENDPOINT | Microsoft Fabric | endpoint | Microsoft Fabric warehouse endpoint | |
| MICROSOFT_FABRIC_WAREHOUSE_SCHEMA | Microsoft Fabric | schema | Microsoft Fabric warehouse schema | Optional |
Configuration Loader APIs
This section contains the exact APIs and more detailed information on the configuration loaders. Every configuration loader has two functions:
- contains - checks if the configuration source contains the requested key. Commonly, the in operator is used to check for setting existence (but it is not always identical, as contains can accept multiple parameters while the in keyword only accepts the key).
- get - gets the configuration setting associated with the given key. If the key doesn’t exist, returns None. Commonly, the data model overload __getitem__ is used to fetch a configuration setting (but it is not always identical, as get can accept multiple parameters while __getitem__ does not).
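For instance, with a ConfigFileLoader the two access styles look like the following sketch; the key, file path, and profile shown are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader, ConfigKey

config = ConfigFileLoader('io_config.yaml', 'default')

# Membership check: the `in` keyword and `contains` behave the same for a single key.
if ConfigKey.POSTGRES_DBNAME in config:
    print(config.contains(ConfigKey.POSTGRES_DBNAME))  # True

# Lookup: `[]` and `get` both return the setting (None if the key is absent).
dbname = config[ConfigKey.POSTGRES_DBNAME]
dbname = config.get(ConfigKey.POSTGRES_DBNAME)
```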
Configuration File
Loads configuration settings from a configuration file. For detailed information about creating and managing io_config.yaml files, see the IO Config Setup documentation.
Example:
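A minimal sketch of constructing the loader and handing it to a client; the file path, profile name, and the choice of the Trino client are illustrative:

```python
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.trino import Trino

# Load settings from a specific configuration file and profile.
config = ConfigFileLoader('io_config.yaml', 'default')

# Pass the loader to any client's with_config factory method.
loader = Trino.with_config(config)
```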
__init__(filepath: os.PathLike, profile: str)
Initializes IO Configuration loader. Input configuration file can have two formats:
- Standard: contains a subset of the configuration keys specified in ConfigKey. This is the default and recommended format; an example configuration file using this format is sketched after this list. That example has a single profile named 'default'. Each profile organizes a set of keys to use (for example, distinguishing production keys versus development keys). A configuration file can have multiple profiles.
- Verbose: instead of configuration keys, each profile stores an object of settings associated with each data migration client. This format was used in previous versions of this tool, and exists for backwards compatibility.
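A sketch of a standard-format configuration file with a single 'default' profile; the keys and values shown are illustrative, and only the keys for the services you use need to appear:

```yaml
default:
  POSTGRES_DBNAME: analytics
  POSTGRES_USER: admin
  POSTGRES_PASSWORD: password
  POSTGRES_HOST: localhost
  POSTGRES_PORT: 5432
  TRINO_CATALOG: hive
  TRINO_HOST: trino.example.com
  TRINO_PORT: 8080
  TRINO_USER: admin
```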
You can use the env_var syntax to reference environment variables in either configuration file format.
- Args:
  - filepath (os.PathLike, optional): Path to IO configuration file. Defaults to '[repo_path]/io_config.yaml'.
  - profile (str, optional): Profile to load configuration settings from. Defaults to 'default'.
contains(self, key: ConfigKey | str) -> bool
Checks if the configuration setting stored under key exists.
- Args:
key (str): Name of the configuration setting to check.
- Returns: (bool) Returns True if configuration setting exists, otherwise returns False.
get(self, key: ConfigKey | str) -> Any
Loads the configuration setting stored under key.
- Args:
key (str): Key name of the configuration setting to load
- Returns: (Any) Configuration setting corresponding to the given key.
Environment Variables
Loads configuration settings from environment variables in your current environment.
__init__(self) - no parameters for construction.
Example:
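A sketch of reading settings from environment variables; the variable and its value are illustrative and assumed to be set in the environment:

```python
import os

from mage_ai.io.config import ConfigKey, EnvironmentVariableLoader

# Illustrative only: normally the variable is set by your shell or deployment environment.
os.environ['POSTGRES_DBNAME'] = 'analytics'

env_config = EnvironmentVariableLoader()
print(ConfigKey.POSTGRES_DBNAME in env_config)    # True
print(env_config.get(ConfigKey.POSTGRES_DBNAME))  # 'analytics'
```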
Methods:
contains - contains(env_var: ConfigKey | str) -> bool
Checks if the environment variable given by env_var exists.
- Args:
  - key (ConfigKey | str): Name of the configuration setting to check existence of.
- Returns: (bool) Returns True if configuration setting exists, otherwise returns False.
get(env_var: ConfigKey | str) -> Any
Loads the config setting stored under the environment variable env_var.
- Args:
  - env_var (str): Name of the environment variable to load configuration setting from.
- Returns: (Any) The configuration setting stored under env_var.
AWS Secret Loader
Loads secrets from AWS Secrets Manager. To authenticate access to AWS Secrets Manager, either:
- Configure your AWS profile using the AWS CLI, or
- Manually specify your AWS credentials when constructing the configuration loader:

```python
config = AWSSecretLoader(
    aws_access_key_id = 'your access key id',
    aws_secret_access_key = 'your secret key',
    region_name = 'your region',
)
```
__init__(self, **kwargs):
- Keyword Arguments:
  - aws_access_key_id (str, Optional): AWS access key ID credential.
  - aws_secret_access_key (str, Optional): AWS secret access key credential.
  - region_name (str, Optional): AWS region which Secrets Manager is created in.
contains(secret_id: ConfigKey | str, version_id: str, version_stage_label: str) -> bool
Checks if there is a secret with ID secret_id. You can also specify the version of the secret to check. If:
- both version_id and version_stage_label are specified, both must agree on the secret version.
- neither version_id nor version_stage_label is specified, any version is checked.
- one of version_id or version_stage_label is specified, the associated version is checked.
When using the in operator, comparisons to specific versions are not allowed.
- Args:
  - secret_id (str): ID of the secret to load.
  - version_id (str, Optional): ID of the version of the secret to load. Defaults to None.
  - version_stage_label (str, Optional): Staging label of the version of the secret to load. Defaults to None.
- Returns: (bool) Returns true if secret exists, otherwise returns false.
get(secret_id: ConfigKey | str, version_id: str, version_stage_label: str) -> bytes | str
Loads the secret stored under secret_id. You can also specify the version of the secret to fetch. If:
- both version_id and version_stage_label are specified, both must agree on the secret version.
- neither version_id nor version_stage_label is specified, the current version is loaded.
- one of version_id or version_stage_label is specified, the associated version is loaded.
When using the __getitem__ overload, comparisons to specific versions are not allowed.
- Args:
  - secret_id (str): ID of the secret to load.
  - version_id (str, Optional): ID of the version of the secret to load. Defaults to None.
  - version_stage_label (str, Optional): Staging label of the version of the secret to load. Defaults to None.
- Returns: (bytes | str) The secret stored under secret_id in AWS Secrets Manager. If the secret is a binary value, returns a bytes object; otherwise returns a str object.
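A sketch of checking for and fetching a secret once the loader is constructed; the secret ID, region, and use of the AWSPREVIOUS staging label are illustrative:

```python
from mage_ai.io.config import AWSSecretLoader

config = AWSSecretLoader(region_name='us-west-2')

# Membership check via the `in` operator ignores versions.
if 'prod/postgres/password' in config:
    # Fetch the current version of the secret.
    password = config.get('prod/postgres/password')

    # Or pin a specific staged version.
    previous = config.get('prod/postgres/password', version_stage_label='AWSPREVIOUS')
```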