Metadata-Version: 2.2
Name: vllm
Version: 0.6.4.post1+mlu0.6.2.pt2.5
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs on the MLU backend
Home-page:
Author: Cambricon vLLM Team
License: Apache 2.0
Project-URL: Homepage, https://github.com/vllm-project/vllm
Project-URL: Documentation, https://vllm.readthedocs.io/en/latest/
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: psutil
Requires-Dist: sentencepiece
Requires-Dist: numpy<2.0.0
Requires-Dist: requests>=2.26.0
Requires-Dist: tqdm
Requires-Dist: py-cpuinfo
Requires-Dist: transformers>=4.45.2
Requires-Dist: tokenizers>=0.19.1
Requires-Dist: protobuf
Requires-Dist: fastapi<0.113.0,>=0.107.0; python_version < "3.9"
Requires-Dist: fastapi!=0.113.*,!=0.114.0,>=0.107.0; python_version >= "3.9"
Requires-Dist: aiohttp
Requires-Dist: openai>=1.45.0
Requires-Dist: uvicorn[standard]
Requires-Dist: pydantic>=2.9
Requires-Dist: pillow
Requires-Dist: prometheus_client>=0.18.0
Requires-Dist: prometheus-fastapi-instrumentator>=7.0.0
Requires-Dist: tiktoken>=0.6.0
Requires-Dist: lm-format-enforcer<0.11,>=0.10.9
Requires-Dist: outlines<0.1,>=0.0.43
Requires-Dist: typing_extensions>=4.10
Requires-Dist: filelock>=3.10.4
Requires-Dist: partial-json-parser
Requires-Dist: pyzmq
Requires-Dist: msgspec
Requires-Dist: gguf==0.10.0
Requires-Dist: importlib_metadata
Requires-Dist: mistral_common[opencv]>=1.5.0
Requires-Dist: pyyaml
Requires-Dist: six>=1.16.0; python_version > "3.11"
Requires-Dist: setuptools>=74.1.1; python_version > "3.11"
Requires-Dist: einops
Requires-Dist: compressed-tensors==0.8.0
Requires-Dist: tensorizer
Requires-Dist: matplotlib>=3.7.4
Requires-Dist: accelerate
Requires-Dist: loguru
Requires-Dist: ray==2.40.0
Requires-Dist: triton==3.0.0
Requires-Dist: torch==2.5.0
Requires-Dist: torch-mlu>=1.23.1
Requires-Dist: torch_mlu_ops>=1.2.2
Requires-Dist: xformers==0.0.24
Requires-Dist: datasets
Requires-Dist: transformers_stream_generator
Requires-Dist: huggingface-hub==0.25.2
Provides-Extra: tensorizer
Requires-Dist: tensorizer>=2.9.0; extra == "tensorizer"
Provides-Extra: audio
Requires-Dist: librosa; extra == "audio"
Requires-Dist: soundfile; extra == "audio"
Provides-Extra: video
Requires-Dist: decord; extra == "video"
Dynamic: author
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: license
Dynamic: project-url
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/logos/vllm-logo-text-dark.png">
<img alt="vLLM" src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/logos/vllm-logo-text-light.png" width=55%>
</picture>
</p>

<h3 align="center">
Easy, fast, and cheap LLM serving for everyone
</h3>

<p align="center">
| <a href="https://docs.vllm.ai"><b>Documentation</b></a> | <a href="https://vllm.ai"><b>Blog</b></a> | <a href="https://arxiv.org/abs/2309.06180"><b>Paper</b></a> | <a href="https://discord.gg/jz7wjKhh6g"><b>Discord</b></a> | <a href="https://x.com/vllm_project"><b>Twitter/X</b></a> |
</p>

---
**vLLM, AMD, Anyscale Meet & Greet at [Ray Summit 2024](http://raysummit.anyscale.com) (Monday, Sept 30th, 5-7pm PT) at Marriott Marquis San Francisco**

We are excited to announce our special vLLM event in collaboration with AMD and Anyscale.
Join us to learn more about recent advancements of vLLM on MI300X.
Register [here](https://lu.ma/db5ld9n5) and be a part of the event!

---
*Latest News* 🔥
- [2024/09] We hosted [the sixth vLLM meetup](https://lu.ma/87q3nvnh) with NVIDIA! Please find the meetup slides [here](https://docs.google.com/presentation/d/1wrLGwytQfaOTd5wCGSPNhoaW3nq0E-9wqyP7ny93xRs/edit?usp=sharing).
- [2024/07] We hosted [the fifth vLLM meetup](https://lu.ma/lp0gyjqr) with AWS! Please find the meetup slides [here](https://docs.google.com/presentation/d/1RgUD8aCfcHocghoP3zmXzck9vX3RCI9yfUAB2Bbcl4Y/edit?usp=sharing).
- [2024/07] In partnership with Meta, vLLM officially supports Llama 3.1 with FP8 quantization and pipeline parallelism! Please check out our blog post [here](https://blog.vllm.ai/2024/07/23/llama31.html).
- [2024/06] We hosted [the fourth vLLM meetup](https://lu.ma/agivllm) with Cloudflare and BentoML! Please find the meetup slides [here](https://docs.google.com/presentation/d/1iJ8o7V2bQEi0BFEljLTwc5G1S10_Rhv3beed5oB0NJ4/edit?usp=sharing).
- [2024/04] We hosted [the third vLLM meetup](https://robloxandvllmmeetup2024.splashthat.com/) with Roblox! Please find the meetup slides [here](https://docs.google.com/presentation/d/1A--47JAK4BJ39t954HyTkvtfwn0fkqtsL8NGFuslReM/edit?usp=sharing).
- [2024/01] We hosted [the second vLLM meetup](https://lu.ma/ygxbpzhl) with IBM! Please find the meetup slides [here](https://docs.google.com/presentation/d/12mI2sKABnUw5RBWXDYY-HtHth4iMSNcEoQ10jDQbxgA/edit?usp=sharing).
- [2023/10] We hosted [the first vLLM meetup](https://lu.ma/first-vllm-meetup) with a16z! Please find the meetup slides [here](https://docs.google.com/presentation/d/1QL-XPFXiFpDBh86DbEegFXBXFXjix4v032GhShbKf3s/edit?usp=sharing).
- [2023/08] We would like to express our sincere gratitude to [Andreessen Horowitz](https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/) (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post](https://vllm.ai).

---
## About

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

- State-of-the-art serving throughput
- Efficient management of attention key and value memory with **PagedAttention**
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantizations: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), INT4, INT8, and FP8
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill
**Performance benchmark**: We include a [performance benchmark](https://buildkite.com/vllm/performance-benchmark/builds/4068) that compares the performance of vLLM against other LLM serving engines ([TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), [text-generation-inference](https://github.com/huggingface/text-generation-inference), and [lmdeploy](https://github.com/InternLM/lmdeploy)).

vLLM is flexible and easy to use with:

- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
- Tensor parallelism and pipeline parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server (see the client sketch after this list)
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPU, and AWS Neuron
- Prefix caching support
- Multi-LoRA support
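Because the server speaks the OpenAI API, any OpenAI-compatible client can talk to it. Below is a minimal sketch using the `openai` package (already listed in the dependencies above), assuming a server is running locally on the default port, for example one started with `vllm serve <model>`; the model name and prompt are placeholders:

```python
# Query a locally running vLLM OpenAI-compatible server with the openai client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default local server address
    api_key="EMPTY",  # the key is not checked unless the server is configured to require one
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder: use the model the server was launched with
    messages=[{"role": "user", "content": "Explain PagedAttention in one sentence."}],
)
print(completion.choices[0].message.content)
```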
vLLM seamlessly supports most popular open-source models on HuggingFace, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral)
- Embedding models (e.g., E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models [here](https://docs.vllm.ai/en/latest/models/supported_models.html).
## Getting Started

Install vLLM with `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):

```bash
pip install vllm
```
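Once installed, offline batch inference takes only a few lines with the `LLM` class. A minimal sketch; the model name below is just an example and can be swapped for any supported model:

```python
# Minimal offline inference with the vLLM Python API.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # example model; substitute any supported model
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```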
Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to learn more.
- [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
- [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html)
- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)

## Contributing

We welcome and value any contributions and collaborations.
Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.
## Sponsors

vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!

<!-- Note: Please sort them in alphabetical order. -->
<!-- Note: Please keep these consistent with docs/source/community/sponsors.md -->

- a16z
- AMD
- Anyscale
- AWS
- Crusoe Cloud
- Databricks
- DeepInfra
- Dropbox
- Google Cloud
- Lambda Lab
- NVIDIA
- Replicate
- Roblox
- RunPod
- Sequoia Capital
- Skywork AI
- Trainy
- UC Berkeley
- UC San Diego
- ZhenFund

We also have an official fundraising venue through [OpenCollective](https://opencollective.com/vllm). We plan to use the fund to support the development, maintenance, and adoption of vLLM.
## Citation

If you use vLLM for your research, please cite our [paper](https://arxiv.org/abs/2309.06180):

```bibtex
@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}
```
## Contact Us

* For technical questions and feature requests, please use GitHub issues or discussions.
* For discussing with fellow users, please use Discord.
* For security disclosures, please use GitHub's security advisory feature.
* For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.