In her post, Xiaoxu introduces the concept of a communication lifecycle for data pipelines. Proactive feedback to stakeholders is essential for establishing trust and building confidence in data. She demonstrates how Mage's callbacks can automate tasks based on block status, shortening the communication lifecycle and fostering a healthy relationship between data owners and consumers.
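The idea of acting on block status can be sketched in plain Python. This is a generic illustration, not Mage's actual callback API: the `run_block` and `notify` helpers are hypothetical stand-ins for a block runner and a notification channel.

```python
from typing import Callable

def notify(message: str) -> None:
    # Stand-in for a real channel (email, Slack, a dashboard, etc.).
    print(message)

def run_block(block: Callable[[], object],
              on_success: Callable[[str], None],
              on_failure: Callable[[str], None]) -> object:
    """Execute a pipeline block and report its status proactively."""
    try:
        result = block()
    except Exception as exc:
        on_failure(f"Block failed: {exc}")
        raise
    on_success("Block succeeded: data is fresh and ready to consume.")
    return result

# Consumers hear about status changes without having to ask.
run_block(lambda: [1, 2, 3], on_success=notify, on_failure=notify)
```

The key design point is that the success and failure paths both end in a notification, so stakeholders get feedback on every run rather than only when something breaks.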
Sri Nikitha walks through a demo of Mage: from installation to a sample pipeline that loads data from an NYC taxi dataset, transforms it, and exports it to BigQuery.
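The load, transform, and export stages in that demo can be sketched as three chained functions. This is a stdlib-only illustration under stated assumptions: the inline trip rows stand in for the NYC taxi dataset, and the `warehouse` dict stands in for BigQuery; none of this is Mage's or BigQuery's actual API.

```python
def load_data() -> list[dict]:
    # In the demo this reads the NYC taxi dataset; inline rows here.
    return [
        {"trip_distance": 2.5, "fare_amount": 12.0},
        {"trip_distance": 0.0, "fare_amount": 3.0},   # bad record
        {"trip_distance": 5.1, "fare_amount": 21.5},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Drop zero-distance trips and derive a per-mile rate.
    return [
        {**r, "fare_per_mile": round(r["fare_amount"] / r["trip_distance"], 2)}
        for r in rows
        if r["trip_distance"] > 0
    ]

def export(rows: list[dict], warehouse: dict) -> None:
    # Stand-in for the BigQuery export step.
    warehouse["trips"] = rows

warehouse: dict = {}
export(transform(load_data()), warehouse)
```

In Mage each of these stages would live in its own block, with the framework passing one block's output to the next.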
Jafar Sharif walks through a tutorial on how to transfer data from Postgres to Snowflake using Mage, including some core ETL functionality. A quick and fun read!
How To Create Data Pipelines in Mage
Shashank walks through a high-level overview of how to create data pipelines using Mage.
End-to-end Data Engineering Project with Mage
Using Mage and Python for orchestration and ingestion, Darshil walks through how to take his dataset from source to visualization using a combination of popular tools, including Mage, GCS, BigQuery, and Looker Studio.
Building an ETL Pipeline with Mage
Arul discusses the process of constructing an ETL (extract-transform-load) pipeline in Mage. Starting from Netflix's top 100 movie dataset, Arul extracts the data, transforms it into a usable format, and loads it into a database for further analysis.
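The extract, transform, and load steps above can be sketched with the standard library alone. The inline rows are an illustrative stand-in for the Netflix dataset, and an in-memory SQLite database stands in for the target database; this is a minimal sketch, not Arul's actual pipeline.

```python
import sqlite3

def extract() -> list[tuple]:
    # Stand-in for reading the Netflix top-movies dataset.
    return [("The Irishman", "2019"), ("Roma", "2018")]

def transform(rows: list[tuple]) -> list[tuple]:
    # Cast the year to an integer so it is usable for analysis.
    return [(title, int(year)) for title, year in rows]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS movies (title TEXT, year INTEGER)")
    conn.executemany("INSERT INTO movies VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
```

The same three-function shape maps directly onto Mage's loader, transformer, and exporter blocks.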