Written Tutorials

Simplifying Data Pipelines With Mage Data Integrations

Ethan Brown introduces Mage Data Integration Blocks, bridging no-code simplicity with standard pipeline capabilities for seamless data integration across various sources. From Postgres to Salesforce, Ethan showcases the effortless process of pulling and leveraging data within standard pipelines, promising streamlined development and efficient syncs. Explore the future of data integration with Mage, where complexity meets accessibility.

Mage.ai – Tutorials and Intro

Jan provides a series of tutorials ranging from introduction and installation to building pipelines and charts.

How to Streamline Communication in Data Pipelines Using Mage

In her post, Xiaoxu introduces the concept of a communication lifecycle for data pipelines. Proactive feedback to stakeholders is essential for establishing trust and building confidence in data. Xiaoxu demonstrates how Mage's callbacks can be used to automate tasks based on block status, shortening the communication lifecycle and fostering a healthy relationship between data owners and consumers.
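The pattern Xiaoxu describes can be sketched in plain Python: run a notification automatically when a pipeline block succeeds or fails. Mage's real callback blocks are attached to pipeline blocks rather than written as decorators, so the `with_callbacks` decorator and `notify` function below are hypothetical stand-ins for illustration only.

```python
# Illustrative sketch of status-driven callbacks; the decorator and
# notify() helper are hypothetical, not Mage's actual callback API.
notifications = []

def notify(message):
    # Stand-in for an alert to stakeholders (e.g. Slack or email).
    notifications.append(message)

def with_callbacks(on_success=None, on_failure=None):
    """Wrap a block function so its status triggers a callback."""
    def decorator(block):
        def wrapper(*args, **kwargs):
            try:
                result = block(*args, **kwargs)
            except Exception as exc:
                if on_failure:
                    on_failure(block.__name__, exc)
                raise
            if on_success:
                on_success(block.__name__, result)
            return result
        return wrapper
    return decorator

@with_callbacks(
    on_success=lambda name, _: notify(f"{name} succeeded"),
    on_failure=lambda name, exc: notify(f"{name} failed: {exc}"),
)
def load_orders():
    # A toy "block" that loads some rows.
    return [{"order_id": 1}, {"order_id": 2}]

rows = load_orders()
```

Because the callback fires as part of the block itself, data consumers hear about a failure (or a fresh successful sync) without the data owner having to send a manual update.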

Modern Data Engineering with Mage

Sri Nikitha walks through a demo of Mage: from installation to a sample pipeline that loads data from an NYC taxi dataset, transforms it, and exports it to BigQuery.

Postgres to Snowflake Data Transfer

Jafar Sharif walks through a tutorial on how to transfer data from Postgres to Snowflake using Mage, including some core ETL functionality. A quick and fun read!

Video Tutorials

How To Create Data Pipelines in Mage

Shashank walks through a high-level overview of how to create data pipelines using Mage.

End-to-end Data Engineering Project with Mage

Using Mage and Python for orchestration and ingestion, Darshil walks through how to take his dataset from source to visualization using a combination of popular tools, including Mage, GCS, BigQuery, and Looker Studio.

Building an ETL Pipeline with Mage

Arul discusses the process of constructing an ETL (extract-transform-load) pipeline in Mage. Starting from Netflix's top 100 movie dataset, Arul extracts the data, transforms it into a usable format, and loads it into a database for further analysis.
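The extract-transform-load flow Arul walks through can be sketched end to end with the standard library alone. The inline CSV, column names, and SQLite target below are illustrative assumptions, not the actual dataset or destination from the tutorial.

```python
# Minimal self-contained ETL sketch: CSV text -> cleaned rows -> SQLite.
# Dataset and schema are made up for illustration.
import csv
import io
import sqlite3

RAW_CSV = """title,year,rating
The Irishman,2019,7.8
Roma,2018,7.7
Marriage Story,2019,7.9
"""

def extract(text):
    """Read raw CSV text into a list of dicts (one per row)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast numeric fields to usable types; skip malformed rows."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "title": row["title"].strip(),
                "year": int(row["year"]),
                "rating": float(row["rating"]),
            })
        except (KeyError, ValueError):
            continue
    return cleaned

def load(rows, conn):
    """Write cleaned rows into a SQLite table for further analysis."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS movies (title TEXT, year INTEGER, rating REAL)"
    )
    conn.executemany("INSERT INTO movies VALUES (:title, :year, :rating)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM movies").fetchone()[0]
```

In Mage, each of these three functions would live in its own block (data loader, transformer, data exporter), which is what lets the tool version, test, and schedule each stage independently.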