Overview

Use Trino as a destination in Mage to write data to a variety of underlying data sources, including PostgreSQL, Delta Lake, Iceberg, and S3, through Trino's catalog and connector system.

Trino enables federated query and write access across databases, data warehouses, and data lakes, all through a unified SQL engine.


Configuration Parameters

Provide the following parameters when configuring Trino as a destination (some are optional or connector-specific):

| Key | Description | Example value |
| --- | --- | --- |
| `catalog` | Trino catalog used to access the target data source (e.g., PostgreSQL, Iceberg, Delta Lake). | `my_prod_pg` |
| `connector` | Trino connector name. Determines how Trino interacts with the catalog's backend. | `postgresql` |
| `host` | Hostname or IP address of the Trino coordinator. | `127.0.0.1` |
| `port` | Port the Trino coordinator listens on. Default is `8080`. | `8080` |
| `username` | Username for authenticating with the Trino coordinator. | `admin` |
| `password` | (Optional) Password for Trino authentication. | `abc123...` |
| `schema` | Schema in which the destination table will be created or written. | `public` |
| `table` | Name of the destination table to write data into. | `dim_users_v1` |
| `query_max_length` | Maximum character length allowed for Trino SQL query payloads. | `1000000` |
| `ssl` | Whether to verify the server's SSL certificate. Set to `false` to disable verification. | `false` |
| `location` | (Delta Lake only) URI of the target table or storage bucket. | `s3://my-delta-bucket/` |
| `ignore_location_for_temp_tables` | If `true`, prevents Trino from setting `WITH LOCATION` for temporary tables. Recommended when using Delta Lake with a Glue metastore. | `false` (default) |
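For reference, below is a minimal sketch of a destination configuration using the example values from the table above. The flat YAML layout is an assumption for illustration; adapt the nesting to how your Mage pipeline structures destination settings.

```yaml
# Sketch of a Trino destination configuration (PostgreSQL connector),
# using the example values from the parameter table above.
# The flat key layout is an assumption; exact nesting may differ.
catalog: my_prod_pg
connector: postgresql
host: 127.0.0.1
port: 8080
username: admin
password: abc123...        # optional
schema: public
table: dim_users_v1
query_max_length: 1000000
ssl: false                 # SSL certificate verification disabled (see table above)
```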

Notes on Delta Lake + Glue Integration

When using the Delta Lake connector with a Glue metastore, dropping a table that was created with the WITH LOCATION property does not delete the underlying data.

To avoid residual files when Mage creates temp tables, set:

```yaml
ignore_location_for_temp_tables: true
```
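As a fuller illustration, here is a sketch of a Delta Lake destination configuration with this setting applied. The catalog name, connector name, and host shown are placeholders rather than values taken from Mage's documentation.

```yaml
# Hypothetical Delta Lake destination config (sketch; names and nesting are assumptions).
catalog: delta                            # placeholder catalog name
connector: delta_lake                     # placeholder connector name
host: trino.example.internal              # placeholder coordinator host
port: 8080
username: admin
schema: analytics
table: dim_users_v1
location: s3://my-delta-bucket/           # base URI for permanent tables
ignore_location_for_temp_tables: true     # avoid residual files from temp tables with Glue metastore
```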