Redshift
Add credentials
- Create a new pipeline or open an existing pipeline.
- Expand the left side of your screen to view the file browser.
- Scroll down and click the file named `io_config.yaml`.
- Enter the following keys and values under the key named `default` (you can have multiple profiles; add the credentials under whichever profile is relevant to you). For example:
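A sketch of the relevant section, assuming a provisioned cluster with direct credentials; the values below are placeholders, and the exact key names should match the commented-out Redshift entries already present in the `io_config.yaml` template:

```yaml
version: 0.1.1
default:
  REDSHIFT_SCHEMA: public  # optional
  REDSHIFT_DBNAME: your_database_name
  REDSHIFT_HOST: your-cluster-id.abc123xyz.us-west-2.redshift.amazonaws.com
  REDSHIFT_PORT: 5439
  REDSHIFT_TEMP_CRED_USER: your_username
  REDSHIFT_TEMP_CRED_PASSWORD: your_password
```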
When connecting to Redshift Serverless, you can use `workgroup-name.account-number.aws-region.redshift-serverless.amazonaws.com` as the `REDSHIFT_HOST` value.
Using SQL block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block.
- Select `SQL`.
- Under the `Data provider` dropdown, select `Redshift`.
- Under the `Profile` dropdown, select `default` (or the profile you added your credentials under).
- Next to the `Save to schema` label, enter the schema name you want this block to save data to.
- Under the `Write policy` dropdown, select `Replace` or `Append` (see the SQL blocks guide for more information on write policies).
- Enter this test query: `SELECT 1`.
- Run the block.
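Once the test query succeeds, you can swap in a real query. As a hedged example (assuming your pipeline has at least one upstream block), Mage SQL blocks can interpolate the output of upstream blocks:

```sql
-- {{ df_1 }} refers to the dataframe produced by the first upstream block
SELECT *
FROM {{ df_1 }}
LIMIT 100;
```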
Using Python block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
- Select `Generic (no template)`.
- Enter a code snippet like the sketch shown after this list (note: change `config_profile` from `default` if you have a different profile).
- Run the block.
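A minimal data loader sketch, modeled on Mage's standard Redshift loader template; the `SELECT 1` query is a placeholder:

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.redshift import Redshift

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_redshift(*args, **kwargs):
    # Placeholder test query; replace it with your own.
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'  # change if your credentials live under another profile

    # Connect using the credentials from io_config.yaml and run the query.
    with Redshift.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        return loader.load(query)
```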
Custom types
To overwrite a column type when running a Python export block, specify the column name and type in the `overwrite_types` dict in the data exporter configuration. Here is an example code snippet:
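A sketch modeled on Mage's Redshift data exporter template; the table name, column names, and types below are hypothetical:

```python
from os import path

from pandas import DataFrame

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.redshift import Redshift

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_redshift(df: DataFrame, **kwargs) -> None:
    table_name = 'your_table_name'  # hypothetical table name
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    # Hypothetical columns: map each column name to the Redshift type
    # it should be written as, overriding the inferred type.
    overwrite_types = {
        'id': 'BIGINT',
        'email': 'VARCHAR(255)',
    }

    with Redshift.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
        loader.export(
            df,
            table_name,
            if_exists='replace',  # or 'append'
            overwrite_types=overwrite_types,
        )
```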