Add credentials
Before you begin, you'll need to create a service account key. Please read Google Cloud's documentation on how to create one. Once you're finished, follow these steps:
- Create a new pipeline or open an existing pipeline.
- Expand the left side of your screen to view the file browser.
- Scroll down and click on the file named `io_config.yaml`.
- Enter keys and values like the example shown after this list under the key named `default` (you can have multiple profiles; add them under whichever profile is relevant to you).
- Note: you only need to add the keys under `GOOGLE_SERVICE_ACC_KEY` or the value for the key `GOOGLE_SERVICE_ACC_KEY_FILEPATH`; both are not required at the same time. If you use `GOOGLE_SERVICE_ACC_KEY_FILEPATH`, please delete `GOOGLE_SERVICE_ACC_KEY` from `io_config.yaml`.
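As a sketch of what that section of `io_config.yaml` can look like (every value below is an illustrative placeholder, not a real credential; the exact set of fields comes from your downloaded service account key JSON):

```yaml
version: 0.1.1
default:
  # Option 1: paste the fields from your service account key JSON here.
  GOOGLE_SERVICE_ACC_KEY:
    type: service_account
    project_id: your-project-id
    private_key_id: your-private-key-id
    private_key: "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
    client_email: your-service-account@your-project-id.iam.gserviceaccount.com
    auth_uri: "https://accounts.google.com/o/oauth2/auth"
    token_uri: "https://oauth2.googleapis.com/token"
  # Option 2: point to the key file instead (and delete GOOGLE_SERVICE_ACC_KEY above).
  GOOGLE_SERVICE_ACC_KEY_FILEPATH: "/path/to/your/service_account_key.json"
```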
Required permissions
Using SQL blocks
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block.
- Select `SQL`.
- Under the `Data provider` dropdown, select `BigQuery`.
- Under the `Profile` dropdown, select `default` (or the profile you added credentials under).
- Next to the `Database` label, enter the database name you want this block to save data to.
- Next to the `Save to schema` label, enter the schema name you want this block to save data to.
- Under the `Write policy` dropdown, select `Replace` or `Append` (please see the SQL blocks guide for more information on write policies).
- Enter this test query: `SELECT 1`.
- Run the block.
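Once the test query works, you can replace it with a real query. As a hypothetical illustration (the dataset and table names below are made up), the block's result set is written to the database and schema you configured, according to the write policy you selected:

```sql
-- Hypothetical example: dataset and table names are placeholders.
-- The result is saved to the configured database/schema
-- using the selected write policy (Replace or Append).
SELECT
    user_id,
    created_at
FROM your_dataset.your_source_table
LIMIT 100;
```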
Using Python blocks
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
- Select `Generic (no template)`.
- Enter the code snippet shown after this list (note: change the `config_profile` from `default` if you have a different profile).
- Run the block.
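A minimal sketch of such a data loader, following Mage's generic BigQuery loader pattern (the query is a placeholder; depending on your Mage version, `get_repo_path` may instead live in `mage_ai.data_preparation.repo_manager`):

```python
from os import path

from mage_ai.settings.repo import get_repo_path
from mage_ai.io.bigquery import BigQuery
from mage_ai.io.config import ConfigFileLoader

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_big_query(**kwargs):
    # Placeholder test query; replace it with your own SQL.
    query = 'SELECT 1'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    # Change this if you added credentials under a different profile.
    config_profile = 'default'

    return BigQuery.with_config(ConfigFileLoader(config_path, config_profile)).load(query)
```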
Export a dataframe
Here is an example code snippet to export a dataframe to BigQuery:
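The sketch below follows Mage's generic data exporter pattern; the table ID and profile name are placeholders you should replace with your own values:

```python
from os import path

from mage_ai.settings.repo import get_repo_path
from mage_ai.io.bigquery import BigQuery
from mage_ai.io.config import ConfigFileLoader
from pandas import DataFrame

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_big_query(df: DataFrame, **kwargs) -> None:
    # Placeholder table ID in the form project.dataset.table.
    table_id = 'your-project.your_dataset.your_table_name'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    BigQuery.with_config(ConfigFileLoader(config_path, config_profile)).export(
        df,
        table_id,
        if_exists='replace',  # Resolution policy if the table already exists.
    )
```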
If you want to overwrite the column types that are automatically detected on export, use the `overwrite_types` dict in the data exporter config. You can find the supported types in this doc. Here is the example code:
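A hedged sketch of the same exporter with `overwrite_types` added (the column name and BigQuery type in the dict are hypothetical):

```python
from os import path

from mage_ai.settings.repo import get_repo_path
from mage_ai.io.bigquery import BigQuery
from mage_ai.io.config import ConfigFileLoader
from pandas import DataFrame

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter


@data_exporter
def export_data_to_big_query(df: DataFrame, **kwargs) -> None:
    table_id = 'your-project.your_dataset.your_table_name'
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'

    # Hypothetical mapping: force these columns to specific BigQuery types
    # instead of relying on automatic type detection.
    overwrite_types = {'column_name': 'INT64'}

    BigQuery.with_config(ConfigFileLoader(config_path, config_profile)).export(
        df,
        table_id,
        if_exists='replace',
        overwrite_types=overwrite_types,
    )
```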