AI Client
You can now choose from several AI clients to power Mage AI features, as detailed in the Mage AI capabilities documentation. OpenAI and Hugging Face are currently supported, and additional AI clients will be added in the future.

Use Hugging Face Client
Setup
Hugging Face Inference Endpoint
To use the Hugging Face AI client, you must first set up a Hugging Face inference endpoint. You can do so by following this guide. The process is straightforward and entails:
- selecting the specific model you wish to use,
- choosing the hosting environment (AWS or Azure),
- specifying the geographical region,
- choosing the type of GPU.
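Once the endpoint is deployed, you can verify that it responds with a direct HTTP call, as in the sketch below. The endpoint URL and token are placeholder values; Hugging Face inference endpoints accept a JSON payload with an `inputs` field and a bearer token in the `Authorization` header.

```python
import requests

# Placeholder values; replace with your endpoint URL and Hugging Face token.
ENDPOINT_URL = 'https://your-endpoint.endpoints.huggingface.cloud'
HF_API_TOKEN = 'hf_...'

# Send a simple inference request to confirm the endpoint is up.
response = requests.post(
    ENDPOINT_URL,
    headers={
        'Authorization': f'Bearer {HF_API_TOKEN}',
        'Content-Type': 'application/json',
    },
    json={'inputs': 'Load data from an API and transform it.'},
)
response.raise_for_status()
print(response.json())
```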
Mage Project Setup
Within your Mage project’s metadata YAML configuration, add the following “ai_config” section:
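The snippet below is a sketch: the exact key names and accepted values may differ between Mage versions, so confirm them against the Mage documentation for your release.

```yaml
ai_config:
  mode: hugging_face
  hugging_face_config:
    huggingface_inference_api: 'https://your-endpoint.endpoints.huggingface.cloud'
    huggingface_api: 'your_hugging_face_api_token'
  open_ai_config:
    openai_api_key: 'your_openai_api_key'
```

How to add a new AI Client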
You may want to use an AI client other than those offered for OpenAI and Hugging Face, or to make direct calls to your own Language Model (LLM). You can accomplish this by adding a new AI client. This is an example PR.

Create new AI config
Create a dedicated configuration in config.py to hold the parameters required to connect to the LLM. For instance, the Hugging Face client hosts the LLM behind an inference endpoint, so both the API URL and a token are required to invoke the service for inference, while the OpenAI client requires the OpenAI key for model inference.
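As an illustration, such a config might look like the sketch below. The class name and fields here are hypothetical; mirror whatever parameters your LLM service actually needs, using the existing Hugging Face and OpenAI configs in config.py as references.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MyLLMConfig:
    # Hypothetical fields: the URL of the hosted LLM and the token
    # used to authenticate inference requests against it.
    api_url: Optional[str] = None
    api_token: Optional[str] = None
```

Create dedicated AI Client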
Inherit from the AIClient interface and implement the two required functions, “inference_with_prompt” and “find_block_params”:
- inference_with_prompt performs the LLM inference. It takes the prompt template and the variables used in the prompt, and returns the inference result.
- find_block_params classifies the code description to generate the required types, including block_type, pipeline_type, language, action type, and data source.
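A minimal sketch of such a client follows. It assumes the AIClient interface can be imported from the Mage codebase and that both methods are asynchronous; the exact import path, method signatures, and return formats should be verified against the Mage source (the OpenAI and Hugging Face clients are good references). The config fields and prompt are illustrative.

```python
from typing import Dict

import aiohttp

# Assumed import path; verify against the Mage source tree.
from mage_ai.ai.ai_client import AIClient


class MyLLMClient(AIClient):
    def __init__(self, config):
        # Hypothetical config fields from the config created above.
        self.api_url = config.api_url
        self.api_token = config.api_token

    async def inference_with_prompt(
        self,
        variable_values: Dict[str, str],
        prompt_template: str,
        is_json_response: bool = True,
    ):
        # Fill the prompt template with the provided variables, then
        # call the hosted LLM and return its response.
        prompt = prompt_template.format(**variable_values)
        async with aiohttp.ClientSession() as session:
            async with session.post(
                self.api_url,
                headers={'Authorization': f'Bearer {self.api_token}'},
                json={'inputs': prompt},
            ) as response:
                result = await response.json()
        # Parse the raw response into JSON fields here if is_json_response
        # is True, matching what the calling code expects.
        return result

    async def find_block_params(self, block_description: str):
        # Classify the code description into the required types, e.g. by
        # prompting the LLM with a fixed classification template.
        return await self.inference_with_prompt(
            {'code_description': block_description},
            'Classify the following description into block_type, '
            'pipeline_type, language, action type and data source: '
            '{code_description}',
        )
```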