
Add credentials
- Create a new pipeline or open an existing pipeline.
- Expand the left side of your screen to view the file browser.
- Scroll down and click on a file named `io_config.yaml`.
- Enter the following keys and values under the key named `default` (you can have multiple profiles; add it under whichever is relevant to you):
  - `database` should be the absolute path to your SQLite database file.
- The database file will be created automatically if it doesn’t exist.
- Make sure the directory containing the database file has proper write permissions.
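For reference, a minimal profile entry might look like the sketch below. The key name for SQLite (shown here as `SQLITE_DATABASE`) is an assumption based on Mage's naming for other connectors — confirm it against the SQLite section of the `io_config.yaml` template in your project.

```yaml
default:
  # Assumed key name; confirm against your io_config.yaml template
  SQLITE_DATABASE: /absolute/path/to/your/database.db
```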
Using SQL block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block.
- Select `SQL`.
- Under the `Data provider` dropdown, select `SQLite`.
- Under the `Profile` dropdown, select `default` (or the profile you added credentials under).
- Next to the `Save to schema` label, enter the schema name you want this block to save data to.
- Under the `Write policy` dropdown, select `Replace` or `Append` (please see the SQL blocks guide for more information on write policies).
- Enter this test query: `SELECT 1`.
- Run the block.
Using raw SQL
You can also use raw SQL with SQLite by enabling the “Use raw SQL” option in the SQL block configuration. This allows you to write complete SQL statements, including `CREATE TABLE`, `INSERT`, `UPDATE`, and `DELETE` statements.
Example raw SQL:
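For instance, a raw SQL block might create a table and modify rows directly. The statements below are illustrative only — the table and column names are made up:

```sql
-- Create a table if it doesn't already exist
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

-- Modify rows directly with DML statements
INSERT INTO users (id, name) VALUES (1, 'alice');
UPDATE users SET name = 'bob' WHERE id = 1;
DELETE FROM users WHERE id = 1;
```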
Using Python block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
- Select `Generic (no template)`.
- Enter this code snippet (note: change the `config_profile` from `default` if you have a different profile):
- Run the block.
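Mage generates the actual loader code for you from its templates. As a rough standalone sketch of what such a block does — using only Python's built-in `sqlite3` module, with a hypothetical `db_path` argument standing in for the database path configured in `io_config.yaml`:

```python
import sqlite3


def load_data_from_sqlite(db_path: str, query: str = 'SELECT 1'):
    """Run a query against a SQLite file and return column names plus rows.

    db_path is a stand-in for the database path configured in io_config.yaml.
    """
    # Connecting creates the database file if it doesn't exist yet
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(query)
        columns = [desc[0] for desc in cursor.description]
        return columns, cursor.fetchall()


# ':memory:' keeps the demo self-contained; use your configured path instead
columns, rows = load_data_from_sqlite(':memory:')
```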
Export a dataframe
Here is an example code snippet to export a dataframe to SQLite:
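Mage's exporter API handles this for you inside a data exporter block. As a hedged standalone sketch of the same idea, using `pandas.DataFrame.to_sql` with Python's built-in `sqlite3` (the sample data is made up):

```python
import sqlite3

import pandas as pd

# Sample dataframe to export (made-up data)
df = pd.DataFrame({'id': [1, 2], 'name': ['alice', 'bob']})

# In-memory database for demonstration; use your configured file path in practice
conn = sqlite3.connect(':memory:')

# if_exists mirrors the write policies above: 'fail', 'replace', or 'append'
df.to_sql('users', conn, if_exists='replace', index=False)

# Read the table back to confirm the export
result = pd.read_sql_query('SELECT * FROM users ORDER BY id', conn)
print(result)
```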
Custom types
To overwrite a column type when running a Python export block, specify the column name and type in the `overwrite_types` dict in the data exporter config.
Here is an example code snippet:
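In a Mage data exporter, `overwrite_types` goes in the block's config (e.g. `{'column1': 'INTEGER', 'column2': 'TEXT'}`). The standalone sketch below shows the equivalent effect with pandas' `dtype` argument, forcing a column that would otherwise be inferred as numeric to be stored as `TEXT` (made-up data):

```python
import sqlite3

import pandas as pd

# zip codes must stay TEXT so leading zeros survive
df = pd.DataFrame({'zip_code': ['02139', '10001'], 'count': [3, 5]})

conn = sqlite3.connect(':memory:')

# dtype plays the role of overwrite_types: explicit SQLite column types
df.to_sql('stats', conn, index=False,
          dtype={'zip_code': 'TEXT', 'count': 'INTEGER'})

# PRAGMA table_info reports the declared type of each column
declared = {row[1]: row[2] for row in conn.execute('PRAGMA table_info(stats)')}
print(declared)
```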
Method arguments
Field name | Description | Example values |
---|---|---|
`auto_clean_name` | Whether to automatically clean the column name (replace spaces with underscores; avoid using a number as the first character of the name) | True/False (default: True) |
`case_sensitive` | Whether to support case-sensitive column names | True/False (default: False) |
`drop_table_on_replace` | Whether to drop the table when the `if_exists` param is set to “replace” | True/False (default: False) |
`if_exists` | Resolution policy if the table name already exists | ”fail”/“replace”/“append” (default: “replace”) |
`overwrite_types` | Overwrite the column types | `{'column1': 'INTEGER', 'column2': 'TEXT'}` |
SQLite-specific considerations
Database file location
- SQLite databases are stored as single files on disk
- Ensure the file path is accessible and has proper read/write permissions
- Use absolute paths to avoid issues with working directory changes
Schema support
- SQLite doesn’t have true schema support like PostgreSQL
- The “schema” parameter in Mage is used for table naming conventions
- Tables are created in the main database without schema prefixes
Data types
- SQLite uses dynamic typing (type affinity)
- Common types: `INTEGER`, `REAL`, `TEXT`, `BLOB`, `NULL`
- Mage will map pandas data types to appropriate SQLite types
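A quick illustration of type affinity — a column's declared type is a hint, not a constraint:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')

# INTEGER affinity coerces numeric-looking text to integers...
conn.execute("INSERT INTO t VALUES ('42')")
# ...but non-numeric text is stored as TEXT despite the declared type
conn.execute("INSERT INTO t VALUES ('abc')")

values = [row[0] for row in conn.execute('SELECT x FROM t')]
print(values)
```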
Performance tips
- SQLite performs best with smaller datasets (< 1GB)
- Consider using `PRAGMA` statements for performance tuning
- Use transactions for bulk operations
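A sketch of both tips together; the `PRAGMA` values shown are common choices, not universal recommendations:

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# Common tuning PRAGMAs: WAL journaling has no effect on in-memory databases,
# and synchronous=NORMAL trades some durability for write speed
conn.execute('PRAGMA journal_mode = WAL')
conn.execute('PRAGMA synchronous = NORMAL')

conn.execute('CREATE TABLE events (id INTEGER, payload TEXT)')

# One transaction for the whole bulk insert instead of a commit per row
rows = [(i, f'event-{i}') for i in range(1000)]
with conn:  # commits once on successful exit
    conn.executemany('INSERT INTO events VALUES (?, ?)', rows)

count = conn.execute('SELECT COUNT(*) FROM events').fetchone()[0]
print(count)
```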