Overview
Pipeline runs represent individual executions of your data pipelines in Mage. Each time a pipeline is triggered (manually or automatically), a pipeline run record is created that tracks the entire execution process, including all block runs, status updates, timing information, and any errors that occur.

Key Benefits of Pipeline Run Management:
- Execution tracking - Monitor the status and progress of every pipeline execution
- Performance insights - Analyze execution times, resource usage, and bottlenecks
- Error debugging - Detailed logs and error information for troubleshooting
- Historical analysis - Review past executions to identify patterns and issues
- Retry capabilities - Restart failed or incomplete pipeline runs
- Audit trail - Complete record of when and how pipelines were executed
Pipeline Run and Pipeline Execution are the same thing in Mage. You may see either term used throughout the product.
Pipeline Run Lifecycle
Run Statuses
Pipeline runs progress through several statuses during their lifecycle:
- initial - Pipeline run has been created but not yet started
- running - Pipeline is currently executing
- completed - Pipeline finished successfully
- failed - Pipeline encountered an error and stopped
- cancelled - Pipeline run was manually cancelled or timed out
Block Run Statuses
Within each pipeline run, individual blocks have their own statuses:
- initial - Block run created but not started
- queued - Block run is in the queue and waiting to be executed
- running - Block run is currently executing
- completed - Block run finished successfully
- failed - Block run encountered an error
- cancelled - Block run was cancelled
- upstream_failed - Block cannot run because upstream blocks failed
- condition_failed - The block was skipped because its own or an upstream block’s condition was not satisfied
Viewing Pipeline Runs
Project-Wide Pipeline Runs Dashboard
View all pipeline runs across your entire project:
- Navigate to the homepage and click “Pipeline runs” in the left navigation
- Or go directly to the /pipeline-runs URL
- Filter and search by pipeline, status, date range, or trigger
- Monitor key metrics like success rate, average duration, and recent activity
View Types
Current and Past Runs:
- View all completed, failed, cancelled, and currently running pipeline runs
- Available in both Mage OSS and Mage Pro
- Shows historical execution data and current status

Upcoming Runs (Mage Pro only):
- View scheduled pipeline runs that are yet to execute
- See future run dates and trigger information
- Monitor upcoming pipeline executions and scheduling
Pipeline Run List Filters
The pipeline run list page provides comprehensive filtering options.

Available Filters:
- Pipeline UUID - Filter by specific pipeline identifiers
- Status - Filter by run status (Cancelled, Done, Failed, Last run failed, Ready, Running)
- Pipeline Tag - Filter by pipeline tags
- Search - Text search across pipeline runs (Mage Pro only)
- Apply filters - Apply selected filter criteria
- Reset filters - Clear all filters and return to defaults
Mage OSS vs Pro:
- Mage OSS: Supports viewing current and past runs with basic filtering (Pipeline UUID, Status, Pipeline Tag)
- Mage Pro: Supports both current/past runs AND upcoming runs, plus advanced filtering including text search and monitoring features
Pipeline-Specific Run History
View runs for a specific pipeline:
- Click on a pipeline from the main Pipelines Dashboard
- Navigate to the “Runs” section or click on a trigger from the “Triggers” section
- View detailed run information including block runs, logs, and metrics
- Access individual run details by clicking on specific runs

The run history for a specific pipeline is available at /pipelines/[pipeline_uuid]/runs.
Batch Operations
The pipeline-specific run page supports batch operations for managing multiple runs.

Available Batch Actions:
- Retry selected - Retry specific selected pipeline runs
- Retry all incomplete block runs - Retry all incomplete block runs across the pipeline
- Cancel selected running - Cancel specific selected running pipeline runs
- Cancel all running - Cancel all currently running pipeline runs

To perform a batch operation:
- Select pipeline runs by checking the boxes next to the runs you want to manage
- Click the “Actions” dropdown to access batch operation options
- Choose your desired action from the dropdown menu
- Confirm the operation when prompted
Batch operations are particularly useful for managing multiple failed runs or stopping multiple running pipelines at once, improving operational efficiency.
Individual Pipeline Run Details
For detailed analysis of a specific run:
- Click on a pipeline run from the runs list
- View the individual run page at /pipelines/[pipeline_uuid]/runs/[pipeline_run_id]
- Examine block run details including execution order, timing, and status
- Access logs and error information for each block
- Review runtime variables and configuration used
Pipeline Run Retry
Retrying Failed Runs
When a pipeline run fails, you can retry it in several ways. For detailed information about retry configuration and management, see Retrying block runs from a pipeline run.
Retry Incomplete Blocks
Retry from the failed block instead of restarting the entire pipeline. For detailed instructions, see Retrying block runs from a pipeline run.
Retry from Selected Block
Retry starting from any selected block regardless of its current status. For detailed instructions, see Retrying block runs from a pipeline run.
Automatic Retry Configuration
Configure automatic retries to handle transient failures. For detailed retry configuration examples and advanced settings, see Retrying block runs from a pipeline run.
Pipeline Run Data and Monitoring
Key Metrics and Information
Each pipeline run contains valuable information:
- Execution timing - Start time, completion time, and duration
- Status tracking - Current status and any status changes
- Block run details - Individual block execution information
- Runtime variables - Variables used during execution
- Error information - Detailed error messages and stack traces
- Resource usage - CPU, memory, and other resource metrics
- SLA compliance - Whether the run met configured SLAs
Database Storage
Pipeline run data is stored in database tables:
- pipeline_run - Main pipeline run records
- block_run - Individual block run records within each pipeline run
You can query pipeline run data directly from the database or use Mage’s Python API to access run information programmatically. For detailed database querying examples, see Query pipeline run metadata from database. For API reference documentation, see Pipeline runs API reference.
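As a quick illustration, below is a minimal sketch that queries recent runs through Mage’s internal ORM models. The module paths, model names, and query helpers reflect the open-source codebase and may differ between Mage versions, so treat it as an outline rather than a drop-in script:

```python
# Minimal sketch: query recent pipeline runs with Mage's internal ORM.
# Run this inside the Mage environment (e.g. a scratchpad or one-off script);
# module paths and model fields may vary between Mage versions.
from mage_ai.orchestration.db import db_connection
from mage_ai.orchestration.db.models.schedules import BlockRun, PipelineRun

db_connection.start_session()

# Fetch the 10 most recent failed runs for a given (hypothetical) pipeline UUID.
failed_runs = (
    PipelineRun.query
    .filter(
        PipelineRun.pipeline_uuid == 'example_pipeline',
        PipelineRun.status == PipelineRun.PipelineRunStatus.FAILED,
    )
    .order_by(PipelineRun.created_at.desc())
    .limit(10)
    .all()
)

for run in failed_runs:
    # Each pipeline run has associated block runs with their own statuses.
    block_runs = BlockRun.query.filter(BlockRun.pipeline_run_id == run.id).all()
    print(run.id, run.execution_date, run.status, len(block_runs))
```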
Querying Pipeline Run Data
Query pipeline run data directly from the database or use Mage’s Python API. For comprehensive database querying examples and Python API usage, see Query pipeline run metadata from database.
Advanced Pipeline Run Features
Runtime Variables
Override pipeline variables for specific runs:
- Trigger-level variables - Set variables when creating or editing triggers (see Pipeline Trigger Overview)
- API-triggered variables - Pass variables via API requests (see Trigger pipeline via API)
- Manual run variables - Override variables when manually triggering runs
Runtime variables allow you to customize pipeline behavior for different environments, data sources, or processing requirements without modifying the pipeline code. Learn more about using runtime variables.
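For example, a block can read a runtime variable from kwargs and fall back to a default when the variable is not set. A minimal transformer sketch (the env variable name is hypothetical):

```python
# Minimal sketch of reading a runtime variable inside a transformer block.
# The 'env' variable name is hypothetical; use whatever variables your
# trigger, API request, or manual run passes in.
if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(data, *args, **kwargs):
    # Runtime variables arrive through kwargs; provide a default if unset.
    env = kwargs.get('env', 'dev')

    if env == 'prod':
        # e.g. point at the production destination instead of a dev one
        pass

    return data
```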
SLA Monitoring
Configure and monitor Service Level Agreements. For detailed SLA configuration, see Pipeline Trigger Overview.
- Expected completion time - Set target completion times for pipeline runs
- Time units - Configure in minutes, hours, or days
- SLA tracking - Monitor whether runs meet their SLA targets
- Alerting - Get notified when SLAs are missed
Timeout Configuration
Control pipeline run execution time. For detailed timeout configuration, see Pipeline Trigger Overview.
- Timeout duration - Set maximum execution time in seconds
- Timeout status - Choose how to handle runs that exceed timeout (failed, cancelled, warning)
- Resource protection - Prevent runaway pipelines from consuming excessive resources
Debugging Pipeline Run Failures
When pipeline runs fail, use these debugging techniques to identify and resolve issues:
Accessing Run Information
Individual Run Details:
- Navigate to the failed run in the pipeline runs list
- Click on the run to view detailed information
- Examine the execution timeline to see where the failure occurred
- Review block run statuses to identify which blocks failed
- Click on individual block runs to see detailed information
- Access block logs for error messages and stack traces
- Review block output to understand data state at failure
- Check upstream dependencies to ensure data flow is correct
Common Failure Patterns
Upstream Block Failures:
- Issue: Downstream blocks show upstream_failed status
- Solution: Fix the upstream block that failed first, then retry

Data Issues:
- Issue: Blocks fail due to unexpected data format or missing columns
- Solution: Add data validation and error handling in your blocks

Resource and Timeout Issues:
- Issue: Pipeline runs time out or consume excessive resources
- Solution: Optimize block code, increase timeout settings, or break into smaller pipelines

External Dependency Failures:
- Issue: API calls, database connections, or file access failures
- Solution: Implement retry logic and proper error handling

Stuck Runs:
- Issue: Pipeline runs remain in initial or queued status with no visible errors in logs
- Solution: Follow these troubleshooting steps:
- Check concurrency limits - Verify that block run concurrency or pipeline run concurrency limit is not set to 0
- Check resource usage - Monitor CPU and RAM usage to ensure sufficient resources are allocated
- Check server error logs - Review server-side logs for underlying issues
- Restart the server - Try restarting the Mage server to clear any stuck processes
- Contact Mage team - If the issue persists, reach out to the Mage team for debugging assistance at mage.ai/chat
Debugging Tools
- Detailed logs - Access comprehensive execution logs for each block
- Error tracking - View error messages, stack traces, and debugging information
- Execution timeline - See the order and timing of block executions
- Variable inspection - Review runtime variables and their values
- Block output export - Save block output as CSV for analysis (see Saving block output as CSV)
Best Practices for Pipeline Run Management
Monitoring and Alerting
- Set up monitoring for key pipeline runs and their success rates
- Configure alerts for failed runs, SLA violations, and performance issues
- Regular review of run history to identify patterns and improvements
Error Handling
- Implement retry logic for transient failures
- Use proper error handling in your block code
- Validate data at each step to catch issues early
- Test thoroughly before deploying to production
Performance Optimization
- Monitor execution times and optimize slow blocks
- Use appropriate resources for your workload
- Implement caching where appropriate
- Consider parallel processing for independent operations
- Configure concurrency limits to control resource usage (see Concurrency)
Documentation and Maintenance
- Document pipeline purposes and expected behavior
- Maintain run history for audit and compliance purposes
- Regular cleanup of old run data to manage storage
- Version control your pipeline code and configurations
Frequently Asked Questions
How can I retrieve the output of a block after triggering a pipeline via API?
You can retrieve the output of a block run by calling the block run output API; an example sketch follows the steps below.

Steps to get the block run ID:
- Trigger your pipeline via API or UI
- Navigate to the pipeline run details page
- Find the specific block run ID from the block runs list
- Use the block run output API endpoint with the block run ID (see the sketch below)
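A minimal Python sketch of that call, assuming a local Mage server; the exact endpoint path and authentication requirements depend on your Mage version and setup, so confirm them against the Pipeline runs API reference:

```python
import requests

# Assumed values: replace the host and block run ID with your own.
MAGE_HOST = 'http://localhost:6789'
BLOCK_RUN_ID = 123  # taken from the pipeline run details page

# Endpoint path is an assumption based on common Mage API usage; add any
# authentication headers or API key your deployment requires.
response = requests.get(f'{MAGE_HOST}/api/block_runs/{BLOCK_RUN_ID}/outputs')
response.raise_for_status()

# The output data is typically nested under an 'outputs' key, but the exact
# response shape can vary by version.
print(response.json().get('outputs'))
```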
Can I monitor the status of a pipeline run programmatically?
Yes, you can monitor pipeline run status programmatically using multiple methods:

API Methods:
- Use the Mage API to query pipeline run status and outputs
- Access individual pipeline run details via /api/pipeline_runs/<pipeline_run_id> (see the sketch after this list)
- Monitor block run statuses within a pipeline run

Database Methods:
- Query the pipeline_run and block_run tables directly
- Use Python scripts to filter by status, execution date, or pipeline UUID
- Batch update pipeline run statuses programmatically
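For instance, a minimal polling sketch against the pipeline run endpoint (the host, run ID, and response shape are assumptions; add whatever authentication your deployment requires):

```python
import time

import requests

# Assumed values: replace with your Mage host and pipeline run ID.
MAGE_HOST = 'http://localhost:6789'
PIPELINE_RUN_ID = 456

while True:
    resp = requests.get(f'{MAGE_HOST}/api/pipeline_runs/{PIPELINE_RUN_ID}')
    resp.raise_for_status()
    # The run payload is usually wrapped under a 'pipeline_run' key, but the
    # exact response shape can vary by Mage version.
    run = resp.json().get('pipeline_run', {})
    status = run.get('status')
    print('status:', status)
    if status in ('completed', 'failed', 'cancelled'):
        break
    time.sleep(30)  # poll every 30 seconds
```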
How do I cancel all pipelines that are in Ready/Running status?
You can cancel all running pipeline runs using either method:

UI Method:
- Navigate to /pipelines/[pipeline_uuid]/runs and click the “Cancel all running” button
- This cancels all running pipeline runs for a specific pipeline

Programmatic Method:
- Write a Python script to cancel all ready or running pipeline runs
- Example code and guidance are available in the database query documentation; a hedged sketch follows below
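A minimal sketch of the programmatic approach using Mage’s internal ORM models (module paths, status enum values, and the save() helper are based on the open-source codebase and may differ between versions):

```python
# Minimal sketch: cancel all pipeline runs that have not finished yet.
# Run this inside the Mage environment; model paths may vary by version.
from mage_ai.orchestration.db import db_connection
from mage_ai.orchestration.db.models.schedules import PipelineRun

db_connection.start_session()

runs = PipelineRun.query.filter(
    PipelineRun.status.in_([
        PipelineRun.PipelineRunStatus.INITIAL,  # runs that have not started
        PipelineRun.PipelineRunStatus.RUNNING,
    ]),
).all()

for run in runs:
    run.status = PipelineRun.PipelineRunStatus.CANCELLED
    run.save()

print(f'Cancelled {len(runs)} pipeline runs')
```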
How are runtime variables managed in pipeline runs?
Runtime variables are managed at multiple levels in pipeline runs:

Variable Storage:
- Run records store information about runtime variables used during execution
- Variables are typically specified at the time of pipeline trigger
- Variables can be overridden at the trigger level for different environments

Setting Variables:
- Trigger-level variables - Set when creating or editing triggers
- API-triggered variables - Passed via API requests in the payload (see the sketch after this list)
- Manual run variables - Overridden when manually triggering runs

Using Variables:
- Access variables in blocks using kwargs.get("variable_name", "default_value")
- Variables allow customization for different environments without code changes
- Runtime variables are stored in the variables field of pipeline run records
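For example, a hedged sketch of passing runtime variables in the request payload when triggering a pipeline via an API trigger (the trigger ID, token, and variable names below are placeholders; see Trigger pipeline via API for the exact request format):

```python
import requests

# Placeholders: the trigger ID and token come from the trigger's
# "Trigger pipeline via API" details in the Mage UI.
MAGE_HOST = 'http://localhost:6789'
TRIGGER_ID = 1
TOKEN = '<trigger_token>'

resp = requests.post(
    f'{MAGE_HOST}/api/pipeline_schedules/{TRIGGER_ID}/pipeline_runs/{TOKEN}',
    json={
        'pipeline_run': {
            'variables': {
                'env': 'prod',  # hypothetical runtime variables
                'execution_date': '2024-01-01',
            },
        },
    },
)
resp.raise_for_status()
print(resp.json())
```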
Why do some pipeline runs get stuck or blocks remain in the initial status?
Pipeline runs can get stuck in initial or queued status due to several common issues:

Common Causes:
- Concurrency limits - Block run concurrency or pipeline run concurrency limit set to 0
- Resource constraints - Insufficient CPU or RAM allocation
- Executor issues - Problems with k8s_executor, volume mounting, or resource constraints
- Server issues - Mage server experiencing problems or stuck processes
Troubleshooting Steps:
- Check concurrency limits - Verify concurrency settings are not set to 0
- Monitor resource usage - Ensure sufficient CPU and RAM are allocated
- Check server error logs - Review server-side logs for underlying issues
- Restart the server - Try restarting Mage to clear stuck processes
- Switch executors - Try local_python executor for testing to isolate issues
Additional Tips:
- If logs are not visible in the UI, check log files directly in the log path
- Use the debugging tools available in the pipeline run details page
- Contact the Mage team at mage.ai/chat for persistent issues
How do I set up automated or scheduled pipeline runs?
You can set up automated pipeline runs using scheduled triggers in Mage.

Creating Scheduled Triggers:
- Navigate to pipeline - Go to your pipeline’s trigger page
- Create new trigger - Click “New trigger” and select “Schedule”
- Configure frequency - Choose from hourly, daily, weekly, monthly, or custom cron expressions
- Set additional options - Configure SLA, timeout, runtime variables, and other settings
- Save trigger - Enable the trigger to start automated runs
Schedule Frequency Options:
- Run exactly once - Execute once and disable
- Hourly/Daily/Weekly/Monthly - Standard recurring schedules
- Always on - Immediately create new runs when previous completes
- Custom cron - Advanced scheduling with cron expressions
Deployment Notes:
- Docker deployments - Container must remain running for scheduled triggers to work
- SaaS version - Mage offers a managed service that handles scheduling automatically
Additional Configuration:
- Set runtime variables for different environments
- Configure SLA monitoring and timeout settings
- Override global variables at trigger level
- Use “Run once” button for testing triggers
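Scheduled triggers can also be kept under version control in a triggers.yaml file inside the pipeline folder. A hedged sketch of one scheduled trigger follows; the field names are based on the open-source trigger config format, and the settings keys in particular may vary by version, so confirm them against the Pipeline Trigger Overview:

```yaml
# Sketch of a scheduled trigger in pipelines/<pipeline_name>/triggers.yaml.
triggers:
- name: daily_refresh          # hypothetical trigger name
  schedule_type: time
  schedule_interval: '@daily'  # hourly/daily/weekly/monthly or a cron expression
  start_time: 2024-01-01 00:00:00
  status: active
  variables:
    env: prod                  # runtime variables for this trigger
  settings:
    timeout: 3600              # assumed key: maximum run time in seconds
```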
Need more help? If your question isn’t answered here, join the Mage community on Slack to get help from other users and the Mage team.