Add credentials

  1. Create a new pipeline or open an existing pipeline.
  2. Expand the left side of your screen to view the file browser.
  3. Scroll down and click on a file named io_config.yaml.
  4. Enter the following keys and values under the key named default (you can have multiple profiles; add them under whichever profile is relevant to you):
version: 0.1.1
default:
  sqlite:
    database: /path/to/your/sqlite.db
Notes:
  • database should be the absolute path to your SQLite database file
  • The database file will be created automatically if it doesn’t exist
  • Make sure the directory containing the database file has proper write permissions (a quick standalone check is sketched below)
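
If you want to sanity-check the path and permissions before running a pipeline, the short standalone script below does so with Python’s built-in sqlite3 and os modules. It is a minimal sketch, not part of Mage itself; the database path is the same placeholder used in io_config.yaml above.
import os
import sqlite3

db_path = '/path/to/your/sqlite.db'  # same absolute path as in io_config.yaml
db_dir = os.path.dirname(db_path)

# The directory must already exist and be writable; SQLite creates the file, not the folders.
if not os.path.isdir(db_dir):
    raise RuntimeError(f'Directory does not exist: {db_dir}')
if not os.access(db_dir, os.W_OK):
    raise RuntimeError(f'No write permission for: {db_dir}')

# Connecting creates the database file automatically if it does not exist yet.
with sqlite3.connect(db_path) as conn:
    conn.execute('SELECT 1')
print(f'SQLite database is reachable at {db_path}')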

Using SQL block

  1. Create a new pipeline or open an existing pipeline.
  2. Add a data loader, transformer, or data exporter block.
  3. Select SQL.
  4. Under the Data provider dropdown, select SQLite.
  5. Under the Profile dropdown, select default (or the profile you added credentials underneath).
  6. Next to the Save to schema label, enter the schema name you want this block to save data to.
  7. Under the Write policy dropdown, select Replace or Append (please see SQL blocks guide for more information on write policies).
  8. Enter this test query: SELECT 1.
  9. Run the block.

Using raw SQL

You can also use raw SQL with SQLite by enabling the “Use raw SQL” option in the SQL block configuration. This lets you write complete SQL statements, including CREATE TABLE, INSERT, UPDATE, and DELETE. Example raw SQL:
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE
);

INSERT INTO users (name, email) VALUES 
    ('John Doe', 'john@example.com'),
    ('Jane Smith', 'jane@example.com');

SELECT * FROM users;
Note: When using raw SQL, you’re responsible for writing the complete SQL statements. Mage won’t automatically create tables or handle data insertion.

Using Python block

  1. Create a new pipeline or open an existing pipeline.
  2. Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
  3. Select Generic (no template).
  4. Enter this code snippet (note: change the config_profile from default if you have a different profile):
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.sqlite import SQLite
from os import path
from pandas import DataFrame

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_sqlite(**kwargs) -> DataFrame:
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with SQLite.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        return loader.load(query)
  5. Run the block.

Export a dataframe

Here is an example code snippet to export a dataframe to SQLite:
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.sqlite import SQLite
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_sqlite(df: DataFrame, **kwargs) -> None:
    schema_name = 'your_schema_name'  # Specify the name of the schema to export data to
    table_name = 'your_table_name'  # Specify the name of the table to export data to
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    with SQLite.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            schema_name,
            table_name,
            index=False,  # Specifies whether to include index in exported table
            if_exists='replace',  # Specify resolution policy if table name already exists
            drop_table_on_replace=False,   # Whether to drop the table when "if_exists" param is set to "replace"
        )


Custom types

To overwrite a column type when running a Python export block, specify the column name and type in the overwrite_types dict in the data exporter config. Here is an example code snippet:
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.sqlite import SQLite
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_sqlite(df: DataFrame, **kwargs) -> None:
    schema_name = 'your_schema_name'  # Specify the name of the schema to export data to
    table_name = 'your_table_name'  # Specify the name of the table to export data to
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'
    overwrite_types = {'column_name': 'TEXT'}

    with SQLite.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            schema_name,
            table_name,
            index=False,  # Specifies whether to include index in exported table
            if_exists='replace',  # Specify resolution policy if table name already exists
            overwrite_types=overwrite_types,
        )

Method arguments

| Field name | Description | Example values |
| --- | --- | --- |
| auto_clean_name | Whether to automatically clean the column name (replace empty spaces with underscores and avoid using a number as the prefix of the column name) | True/False (default: True) |
| case_sensitive | Whether to support case-sensitive columns | True/False (default: False) |
| drop_table_on_replace | Whether to drop the table when the "if_exists" param is set to "replace" | True/False (default: False) |
| if_exists | Specify the resolution policy if the table name already exists | "fail"/"replace"/"append" (default: "replace") |
| overwrite_types | Overwrite the column types | {'column1': 'INTEGER', 'column2': 'TEXT'} |
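
As a hedged illustration of how these arguments fit together, here is a data exporter sketch that passes several of them to loader.export(). The schema and table names are placeholders, and the values shown are just one reasonable combination, not required defaults.
from mage_ai.settings.repo import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.sqlite import SQLite
from pandas import DataFrame
from os import path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_sqlite(df: DataFrame, **kwargs) -> None:
    config_path = path.join(get_repo_path(), 'io_config.yaml')

    with SQLite.with_config(ConfigFileLoader(config_path, 'default')) as loader:
        loader.export(
            df,
            'your_schema_name',           # placeholder schema name
            'your_table_name',            # placeholder table name
            index=False,
            auto_clean_name=True,         # normalize column names (spaces -> underscores, no numeric prefix)
            case_sensitive=False,         # treat column names as case-insensitive
            if_exists='append',           # keep existing rows and add the new ones
            drop_table_on_replace=False,  # only relevant when if_exists is 'replace'
        )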

SQLite-specific considerations

Database file location

  • SQLite databases are stored as single files on disk
  • Ensure the file path is accessible and has proper read/write permissions
  • Use absolute paths to avoid issues with working directory changes

Schema support

  • SQLite doesn’t have true schema support like PostgreSQL
  • The “schema” parameter in Mage is used for table naming conventions
  • Tables are created in the main database without schema prefixes (you can verify this with the snippet below)
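
To see what this means in practice, you can inspect the database file directly with Python’s sqlite3 module. This sketch uses the placeholder path and table name from the examples above; expect plain table names rather than schema-qualified ones.
import sqlite3

# List the tables that actually exist in the database file.
with sqlite3.connect('/path/to/your/sqlite.db') as conn:
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()

for (name,) in rows:
    print(name)  # e.g. your_table_name, without a schema prefix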

Data types

  • SQLite uses dynamic typing (type affinity)
  • Common types: INTEGER, REAL, TEXT, BLOB, NULL
  • Mage will map pandas data types to appropriate SQLite types (the snippet below shows type affinity in action)
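
The standalone snippet below (plain sqlite3, separate from Mage’s own type mapping) demonstrates type affinity: declared column types are hints, and values that don’t fit the affinity are stored as-is.
import sqlite3

with sqlite3.connect(':memory:') as conn:
    conn.execute('CREATE TABLE affinity_demo (n INTEGER, t TEXT)')
    # '42' is converted to the integer 42 by the column's INTEGER affinity,
    # while 'not a number' is kept as text despite the INTEGER declaration.
    conn.execute("INSERT INTO affinity_demo VALUES ('42', 7)")
    conn.execute("INSERT INTO affinity_demo VALUES ('not a number', 7)")
    for n, t in conn.execute('SELECT n, t FROM affinity_demo'):
        print(repr(n), repr(t))
# Prints: 42 '7' and then 'not a number' '7'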

Performance tips

  • SQLite performs best with smaller datasets (< 1GB)
  • Consider using PRAGMA statements for performance tuning
  • Use transactions for bulk operations (see the sketch below)
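
For bulk loads outside of Mage blocks, the following sketch shows the PRAGMA and transaction ideas in plain sqlite3. The path and table are placeholders, and which PRAGMA settings are appropriate depends on your durability requirements.
import sqlite3

conn = sqlite3.connect('/path/to/your/sqlite.db')
conn.execute('PRAGMA journal_mode=WAL')    # write-ahead logging: readers don't block the writer
conn.execute('PRAGMA synchronous=NORMAL')  # fewer fsyncs; a common trade-off when WAL is enabled
conn.execute('CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)')

rows = [(i, f'event-{i}') for i in range(10_000)]
with conn:  # a single transaction for the whole batch instead of one commit per row
    conn.executemany('INSERT OR REPLACE INTO events (id, payload) VALUES (?, ?)', rows)
conn.close()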