Storage
ClickHouse
Add credentials
- Create a new pipeline or open an existing pipeline.
- Expand the left side of your screen to view the file browser.
- Scroll down and click on a file named io_config.yaml.
- Enter the following keys and values under the key named default (you can have multiple profiles; add the credentials under whichever profile is relevant to you).
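As a sketch, the ClickHouse entry under the default profile might look like the following. The key names follow Mage's io_config template, but check the commented template already present in your io_config.yaml for the exact keys your Mage version expects; the host, port, and credential values below are placeholders:

```yaml
default:
  CLICKHOUSE_DATABASE: default
  CLICKHOUSE_HOST: your-clickhouse-host
  CLICKHOUSE_INTERFACE: http
  CLICKHOUSE_PASSWORD: your_password
  CLICKHOUSE_PORT: 8123
  CLICKHOUSE_USERNAME: your_username
```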
Using SQL block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block.
- Select SQL.
- Under the Data provider dropdown, select ClickHouse.
- Under the Profile dropdown, select default (or the profile you added credentials under).
- Enter the optional name of the table to write to.
- Under the Write policy dropdown, select Replace or Append (please see the SQL blocks guide for more information on write policies).
- Enter this test query: SELECT 1.
- Run the block.
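Once SELECT 1 succeeds, the block body can be any ClickHouse SQL. As a small illustration, this reads ten rows from ClickHouse's built-in system.numbers table:

```sql
SELECT number
FROM system.numbers
LIMIT 10
```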
Using Python block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
- Select Generic (no template).
- Enter the code snippet below (note: change the config_profile from default if you have a different profile):
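A minimal sketch based on Mage's generic data loader template (the import path for get_repo_path varies across Mage versions; SELECT 1 is just the test query from above):

```python
from os import path

from mage_ai.io.clickhouse import ClickHouse
from mage_ai.io.config import ConfigFileLoader
from mage_ai.settings.repo import get_repo_path

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_clickhouse(*args, **kwargs):
    # Query to run against ClickHouse; replace with your own.
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'  # change if you use a different profile

    return ClickHouse.with_config(
        ConfigFileLoader(config_path, config_profile),
    ).load(query)
```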
- Run the block.
Destination table in Data Exporter
If the destination table does not exist and the Write policy is set to Replace, the data exporter will automatically create a table in ClickHouse with Engine = Memory and a default schema inferred from the data. However, this may not be optimal for many use cases. Since table creation in ClickHouse involves numerous details (engine choice, ordering key, partitioning, and so on), it is strongly advised to create the destination table manually before loading data, so that it meets your specific requirements.
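For example, a manually created table would typically use a persistent engine such as MergeTree rather than Memory. A minimal sketch, where the table and column names are placeholders:

```sql
CREATE TABLE IF NOT EXISTS default.users (
    id UInt64,
    name String,
    created_at DateTime
)
ENGINE = MergeTree
ORDER BY id;
```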
If you generate the destination table via the Mage export method, you can overwrite specific column types by using the overwrite_types dict setting.
Example:
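A sketch, with hypothetical column names mapped to ClickHouse types:

```python
overwrite_types = {
    'id': 'UInt64',   # force an unsigned integer instead of the inferred type
    'name': 'String',
}
```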
Here is the ClickHouse data exporter code snippet that you can use:
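A minimal sketch based on Mage's generic data exporter template; the database and table names are placeholders, and the exact export parameters may differ slightly across Mage versions, so compare against the ClickHouse exporter template in your installation:

```python
from os import path

from pandas import DataFrame

from mage_ai.io.clickhouse import ClickHouse
from mage_ai.io.config import ConfigFileLoader
from mage_ai.settings.repo import get_repo_path

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_clickhouse(df: DataFrame, **kwargs) -> None:
    database = 'default'       # placeholder: your ClickHouse database
    table_name = 'your_table'  # placeholder: your destination table
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    ClickHouse.with_config(
        ConfigFileLoader(config_path, config_profile),
    ).export(
        df,
        database=database,
        table_name=table_name,
        if_exists='replace',  # write policy: 'replace' or 'append'
        overwrite_types={'id': 'UInt64'},  # optional: the overwrite_types setting described above
    )
```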