diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..c453fa8
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,24 @@
+
+
+# Code of Conduct
+
+We follow the vLLM community’s code of conduct: [vLLM - CODE OF CONDUCT](https://github.com/vllm-project/vllm/blob/main/CODE_OF_CONDUCT.md).
+
+For more information, please refer to the project's [governance principles](https://vllm-kunlun.readthedocs.io/en/latest/community/governance.html#principles).
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..5f4ba17
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,78 @@
+
+
+# Contributing to vLLM Kunlun
+
+Welcome to vLLM Kunlun! We warmly welcome community contributions: feel free to [report issues](https://github.com/baidu/vLLM-Kunlun/issues/new/choose) or submit [pull requests](https://github.com/baidu/vLLM-Kunlun/compare). Before getting started, we highly recommend reading the Contributing Guide below.
+
+## Contributing Guide
+
+Please refer to the [CONTRIBUTING guide](https://vllm-kunlun.readthedocs.io/en/latest/developer_guide/contribution/index.html) for detailed instructions. It walks you step by step through setting up the development environment, building the project, and running tests.
+
+## Issues
+
+We use GitHub Issues to track bugs, feature requests, and other public discussions.
+
+### Search Existing Issues First
+
+Before opening a new issue, please search through existing issues to check whether a similar bug report or feature request already exists. This helps avoid duplicates and keeps discussions focused.
+
+### Reporting New Issues
+
+When opening a new issue, please provide as much information as possible, such as:
+- A clear and detailed problem description
+- Relevant logs or error messages
+- Code snippets, screenshots, or videos if applicable
+
+The more context you provide, the easier it will be for maintainers to diagnose and resolve the issue.
+
+## Pull Requests
+
+We welcome pull requests that help improve vLLM Kunlun.
+
+### Submitting Pull Requests
+
+All pull requests will be reviewed by the maintainers. Automated checks and tests run as part of the review process. Once all checks pass and the review is approved, the pull request will be accepted. Please note that merging into the `main` branch may not happen immediately and may be subject to scheduling.
+
+Before submitting a pull request, please make sure that:
+
+1. You have forked the repository and created your branch from `main`.
+2. You have updated relevant code comments or documentation if APIs are changed.
+3. You have added the appropriate copyright notice to the top of any new files.
+4. Your code passes linting and style checks.
+5. Your changes are fully tested.
+6. You submit the pull request against the designated development branch.
+
+## License
+
+By contributing to [vLLM Kunlun](https://github.com/baidu/vLLM-Kunlun), you agree that your contributions will be licensed under the [Apache License 2.0](https://github.com/baidu/vLLM-Kunlun/blob/main/LICENSE.txt).
diff --git a/MAINTAINERS.md b/MAINTAINERS.md
new file mode 100644
index 0000000..8f3f300
--- /dev/null
+++ b/MAINTAINERS.md
@@ -0,0 +1,36 @@
+
+
+# vLLM Kunlun Maintainers
+
+Below is a list of maintainers for vLLM Kunlun, presented in no particular order.
+
+## Maintainers
+
+| Name                                            | Company   |
+|:-----------------------------------------------:|:---------:|
+| [xyDong0223](https://github.com/xyDong0223)     | Baidu     |
+| [baoqian426](https://github.com/baoqian426)     | Baidu     |
+| [chanzhennan](https://github.com/chanzhennan)   | Baidu     |
+| [Hanyu-Jin](https://github.com/Hanyu-Jin)       | KunLunXin |
+| [chenyili0619](https://github.com/chenyili0619) | Baidu     |
+| [ldh2020](https://github.com/ldh2020)           | Baidu     |
+
+For more information, please refer to the [contributors page](https://vllm-kunlun.readthedocs.io/en/latest/community/contributors.html).
diff --git a/README.md b/README.md
index eeb06ac..291e407
--- a/README.md
+++ b/README.md
@@ -8,14 +8,14 @@
 
 ---
 
-## Latest News🔥
+## Latest News 🔥
 
 - [2025/12] Initial release of vLLM Kunlun
 
 ---
 
 # Overview
 
-vLLM Kunlun (vllm-kunlun) is a community-maintained hardware plugin designed to seamlessly run vLLM on the Kunlun XPU. It is the recommended approach for integrating the Kunlun backend within the vLLM community, adhering to the principles outlined in the [RFC]: Hardware pluggable. This plugin provides a hardware-pluggable interface that decouples the integration of the Kunlun XPU with vLLM.
+vLLM Kunlun (vllm-kunlun) is a community-maintained hardware plugin designed to seamlessly run vLLM on the Kunlun XPU. It is the recommended approach for integrating the Kunlun backend within the vLLM community, adhering to the principles outlined in the [RFC Hardware pluggable](https://github.com/vllm-project/vllm/issues/11162). This plugin provides a hardware-pluggable interface that decouples the integration of the Kunlun XPU with vLLM.
 
 By utilizing the vLLM Kunlun plugin, popular open-source models, including Transformer-like, Mixture-of-Expert, Embedding, and Multi-modal LLMs, can run effortlessly on the Kunlun XPU.
@@ -116,12 +116,21 @@ Please use the following recommended versions to get started quickly:
 
 ---
 
-## Contributing
+## Contribute to vLLM Kunlun
 
-See [CONTRIBUTING]() for more details, which is a step-by-step guide to help you set up the development environment, build, and test.
+If you're interested in contributing to this project, please read [Contributing to vLLM Kunlun](CONTRIBUTING.md).
 
-We welcome and value any contributions and collaborations:
-- Open an [Issue]() if you find a bug or have a feature request
+We welcome and value any contributions and collaborations; feel free to open an [issue](https://github.com/baidu/vLLM-Kunlun/issues/new/choose) if you find a bug or have a feature request.
+
+## Star History 🔥
+
+We open-sourced the project on Dec 8, 2025. We love open source and collaboration ❤️
+
+[![Star History Chart](https://api.star-history.com/svg?repos=baidu/vLLM-Kunlun&type=Date)](https://www.star-history.com/#baidu/vLLM-Kunlun&Date)
+
+## Sponsors 👋
+
+We sincerely thank the [**KunLunXin**](https://www.kunlunxin.com/) team for their support in providing GPU resources, which enabled efficient model adaptation debugging, comprehensive end-to-end testing, and broader model compatibility.
 
 ## License