Quickly deploying the MiniMax-M2.5 model by using the LLM general-purpose template
Last updated: 2026.03.16 14:55:26