Here are the high-level steps to build a data integration pipeline:
Install Mage using `pip` or `conda`.

Create a new pipeline by clicking the [+ New pipeline] button.

Configure the source. You can interpolate the following values in the source configuration YAML:

- `"{{ env_var('SECRET_KEY') }}"`: this will extract the value from the `SECRET_KEY` key in your environment variables.
- `"{{ variables('SECRET_KEY') }}"`: this will extract the value from the `SECRET_KEY` key in your runtime variables.
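For example, a source configuration might reference its secrets like this. This is a minimal sketch: the `api_key` and `start_date` keys are hypothetical, and the exact keys depend on the source you choose; only the interpolation syntax comes from the documentation above.

```yaml
# Hypothetical source configuration: the key names below depend on the
# source you select; only the interpolation syntax is the point here.
api_key: "{{ env_var('SECRET_KEY') }}"       # pulled from environment variables
start_date: "{{ variables('START_DATE') }}"  # pulled from runtime variables
```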
Select the stream: click [Fetch list of streams] under the section labeled Select stream.
Choose how the stream is synchronized:

- `FULL_TABLE`: synchronize the entire set of records from the source.
- `INCREMENTAL`: synchronize the records starting after the most recent bookmarked record from the previous synchronization run.

Choose how duplicate records are handled:

- `IGNORE`: skip the new record if it's a duplicate of an existing record.
- `UPDATE`: update the existing record with the new record's properties.
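Conceptually, these choices become per-stream sync settings. The sketch below expresses them in Singer-style terms; Mage stores these settings in its own catalog format, so treat the keys (and the `users` stream) as illustrative assumptions, not the exact schema.

```yaml
# Illustrative sketch only: Mage stores these settings in its own catalog
# format; the Singer-style keys below are assumptions, not the exact schema.
streams:
  - stream: users
    replication_method: INCREMENTAL   # or FULL_TABLE
    bookmark_properties:
      - updated_at                    # column used to bookmark sync progress
    unique_constraints:
      - id                            # columns that identify a duplicate
    unique_conflict_method: UPDATE    # or IGNORE
```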
You can manually edit bookmark property values in the Bookmark property values table, under the Manually edit bookmark property values setting in the Settings section. To see this table, you need to select a destination first. After editing, click Save: you MUST click the Save button in order for the bookmark value to be updated.
Configure the destination. As with the source, you can interpolate values in the destination configuration YAML:

- `"{{ env_var('PASSWORD') }}"`: this will extract the value from the `PASSWORD` key in your environment variables.
- `"{{ variables('PASSWORD') }}"`: this will extract the value from the `PASSWORD` key in your runtime variables.
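For instance, a PostgreSQL destination configuration could pull its credentials from the environment. A minimal sketch; the key names are assumptions and depend on the destination you choose.

```yaml
# Hypothetical PostgreSQL destination configuration; the key names depend on
# the destination you select. The password never appears in plain text here.
host: localhost
port: 5432
database: warehouse
schema: public
username: admin
password: "{{ env_var('PASSWORD') }}"  # resolved from environment variables
```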
Schedule the pipeline: navigate to Pipelines / pipeline name / Edit. Once you're on the pipeline triggers page, create a new scheduled trigger and choose the `@once` interval. For more schedules, read the other options here.
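If you prefer keeping triggers in code rather than the UI, Mage can also load them from a `triggers.yaml` file in the pipeline's folder. The sketch below assumes that feature; the trigger name is made up, and you should verify the field names against your Mage version.

```yaml
# Sketch of a trigger defined in code; verify the field names against your
# Mage version. The trigger name is hypothetical.
triggers:
  - name: run_once_sync
    schedule_type: time
    schedule_interval: '@once'  # same interval chosen in the UI above
    status: active
```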
Run the pipeline: click the [Start trigger] button at the top of the page. You'll see a new pipeline run appear shortly on the screen. You can click the logs for that pipeline run to view the progress of your synchronization. For security purposes, any string config values longer than 8 characters are redacted in the logs; you'll see them hidden as `********`.