### What this PR does / why we need it?
[Kthena](https://github.com/volcano-sh/kthena) is a Kubernetes-native
LLM inference platform that transforms how organizations deploy and
manage Large Language Models in production. Built with declarative model
lifecycle management and intelligent request routing, it provides high
performance and enterprise-grade scalability for LLM inference
workloads.
The platform extends Kubernetes with purpose-built Custom Resource
Definitions (CRDs) for managing LLM workloads, supporting multiple
inference engines (vLLM, SGLang, Triton) and advanced serving patterns
like prefill-decode disaggregation.
This PR adds an example of deploying an LLM on Ascend Kubernetes clusters with Kthena.
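As a rough illustration of the declarative pattern the example follows, a deployment manifest might look like the sketch below. The kind and field names here are hypothetical, not the actual Kthena CRD schema; consult the Kthena documentation and the added guide for the real spec.

```yaml
# Hypothetical sketch of a Kubernetes-native LLM deployment manifest.
# Kind, group, and field names are illustrative only.
apiVersion: example.kthena.io/v1alpha1
kind: ModelDeployment
metadata:
  name: qwen-demo
spec:
  engine: vllm                     # one of the supported inference engines
  model: Qwen/Qwen2.5-7B-Instruct
  replicas: 2
  resources:
    limits:
      huawei.com/ascend-910: 1     # Ascend NPU resource name (illustrative)
```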
- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c
Signed-off-by: Zhonghu Xu <xuzhonghu@huawei.com>
:::{toctree}
:caption: Deployment Guide
:maxdepth: 1
using_volcano_kthena
:::