AI Clients
Use and enable different AI clients.
AI Client
You can now choose among multiple AI clients to power Mage AI's capabilities, as detailed in the Mage AI capabilities documentation. We currently support OpenAI and Hugging Face, and additional AI clients will be added in the future.
Use Hugging Face Client
Setup
Hugging Face Inference Endpoint
To use the Hugging Face AI client, you first need to set up a Hugging Face inference endpoint. You can do so by following this guide.
This process is quite straightforward. It entails
- selecting the specific model you wish to use,
- determining the hosting environment (AWS or Azure),
- specifying the geographical region,
- choosing the type of GPU.
Based on our testing, we recommend the “mistralai/Mistral-7B-Instruct-v0.1” model.
Once the inference endpoint is running, it provides an API URL and a corresponding token for establishing a secure connection.
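Before configuring Mage, you can verify that the endpoint responds. The following is a minimal sketch, assuming a text-generation endpoint; the URL and token values are placeholders to replace with the ones from your endpoint page.

import requests

# Placeholders: copy the real values from your inference endpoint page.
API_URL = 'https://your-endpoint.us-east-1.aws.endpoints.huggingface.cloud'
API_TOKEN = 'hf_...'

# Hugging Face inference endpoints accept a POST with a Bearer token
# and a JSON payload containing the "inputs" text.
response = requests.post(
    API_URL,
    headers={'Authorization': f'Bearer {API_TOKEN}'},
    json={'inputs': 'Write a SQL query that counts users by country.'},
)
print(response.json())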
Mage Project Setup
In your Mage project’s metadata.yaml configuration, add the following “ai_config” section:
ai_config:
  mode: 'hugging_face'
  open_ai_config:
    openai_api_key: key
  hugging_face_config:
    huggingface_api: api_url
    huggingface_inference_api_token: api_token
The “mode” parameter selects which AI client to use. It can be set to either “open_ai” or “hugging_face”; the default is “open_ai.”
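For comparison, a minimal configuration using the default OpenAI client might look like this (the key value is a placeholder):

ai_config:
  mode: 'open_ai'
  open_ai_config:
    openai_api_key: your_openai_api_key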
“hugging_face_config” is required if you choose the Hugging Face client. It contains the two values obtained from your Hugging Face inference endpoint: the API URL and the token.
Once “ai_config” is set up, you are ready to go. At this point, you can fully leverage Mage AI’s capabilities, such as generating blocks from a text description, automatically writing comments for your functions, and more.
How to add a new AI Client
You may want to use an AI client other than those offered for OpenAI and Hugging Face, or make direct calls to your own large language model (LLM). You can do so by enabling a new AI client for your specific needs.
This is an example PR.
Create new AI config
Create a dedicated configuration in config.py to store the parameters required to connect to your LLM. For instance, the Hugging Face client hosts the LLM in an inference endpoint, so both the API URL and token are needed to invoke the service for inference. The OpenAI client requires the OpenAI key for model inference.
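As a sketch, a new client's config might mirror the existing ones; the class and field names below are hypothetical:

from dataclasses import dataclass
from typing import Optional

# Hypothetical config for a new client, following the same API-URL-plus-token
# pattern as the Hugging Face config. Adjust the fields to whatever your
# LLM service needs.
@dataclass
class MyLLMConfig:
    my_llm_api_url: Optional[str] = None
    my_llm_api_token: Optional[str] = None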
Create dedicated AI Client
Inherit from the AIClient interface and implement the two required functions: “inference_with_prompt” and “find_block_params”.
- inference_with_prompt performs the LLM inference. It takes a prompt template and the variables used in the prompt, and returns the inference result.
- find_block_params classifies the code description along several dimensions to produce the required types, including block_type, pipeline_type, language, action type, and data source.
You can read your configuration in the setup function and initialize the client that talks to your service, as sketched below.
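A minimal sketch of such a client follows. The import path, method signatures, and all “my_llm” names are assumptions; mirror the existing OpenAI and Hugging Face clients in the codebase for the exact interface.

from mage_ai.ai.ai_client import AIClient  # assumed import path


class MyLLMClient(AIClient):
    def __init__(self, my_llm_config):
        # Read the connection params from the config created in config.py.
        self.api_url = my_llm_config.my_llm_api_url
        self.api_token = my_llm_config.my_llm_api_token

    def inference_with_prompt(self, variable_values, prompt_template, is_json_response=True):
        # Fill the prompt template with the variable values, call your LLM
        # service, and return the inference result.
        prompt = prompt_template.format(**variable_values)
        ...

    def find_block_params(self, block_description):
        # Classify the description into block_type, pipeline_type, language,
        # action type, and data source, e.g. by prompting the LLM.
        ...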
Enable in llm_pipeline_wizard
The last step is to modify the setup function within “mage_ai/ai/llm_pipeline_wizard.py” to introduce a new mode for your client and initialize your AI client.
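As a sketch, the new branch might look like the following; the attribute names are illustrative and follow the pattern of the existing “open_ai” and “hugging_face” modes:

# Inside the setup function of mage_ai/ai/llm_pipeline_wizard.py (illustrative):
if ai_config.mode == 'my_llm':
    self.client = MyLLMClient(ai_config.my_llm_config)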