update instructions
Some checks failed
Build Actions Cache / ubuntu-24-vulkan-cache (push) Has been cancelled
Build Actions Cache / ubuntu-24-spacemit-cache (push) Has been cancelled
Build Actions Cache / windows-2022-rocm-cache (push) Has been cancelled
Close inactive issues / close-issues (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/cpu.Dockerfile free_disk_space:false full:true light:true platforms:linux/amd64 runs_on:ubuntu-22.04 server:true tag:cpu]) (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/cuda.Dockerfile free_disk_space:true full:true light:true platforms:linux/amd64 runs_on:ubuntu-22.04 server:true tag:cuda]) (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/intel.Dockerfile free_disk_space:true full:true light:true platforms:linux/amd64 runs_on:ubuntu-22.04 server:true tag:intel]) (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/musa.Dockerfile free_disk_space:true full:true light:true platforms:linux/amd64 runs_on:ubuntu-22.04 server:true tag:musa]) (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/s390x.Dockerfile free_disk_space:false full:true light:true platforms:linux/s390x runs_on:ubuntu-22.04-s390x server:true tag:s390x]) (push) Has been cancelled
Publish Docker image / Push Docker image to Docker Hub (map[dockerfile:.devops/vulkan.Dockerfile free_disk_space:false full:true light:true platforms:linux/amd64 runs_on:ubuntu-22.04 server:true tag:vulkan]) (push) Has been cancelled
Publish Docker image / Create and push git tag (push) Has been cancelled
Update Winget Package / Update Winget Package (push) Has been cancelled
Copilot Setup Steps / copilot-setup-steps (push) Has been cancelled
Check Pre-Tokenizer Hashes / pre-tokenizer-hashes (push) Has been cancelled
Python check requirements.txt / check-requirements (push) Has been cancelled
Python Type-Check / pyright type-check (push) Has been cancelled
Update Operations Documentation / update-ops-docs (push) Has been cancelled
Dockerfile (new file, +7)
@@ -0,0 +1,7 @@
FROM git.modelhub.org.cn:9443/enginex-ascend/cann:8.2.rc1-910b-ubuntu22.04-py3.11

WORKDIR /workspace
RUN mkdir -p /workspace/llama.cpp

ADD . /workspace/llama.cpp
ENV LD_LIBRARY_PATH=/workspace/llama.cpp/build_ascend/bin:$LD_LIBRARY_PATH
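The Dockerfile above can be exercised with a standard build-and-run cycle. This is only a sketch: the image tag is illustrative, and the `--device`/`--volume` flags needed to expose the Ascend NPU at runtime depend on the host's CANN driver setup, so they are omitted here.

```sh
# Build the image from the llama.cpp repository root
docker build -t llama-cpp-cann:dev .

# Open a shell inside the container (add the appropriate
# --device/--volume flags for your Ascend NPU driver setup)
docker run -it --rm llama-cpp-cann:dev /bin/bash
```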
README.en.md (new file, +613)
@@ -0,0 +1,613 @@
# llama.cpp



[](https://opensource.org/licenses/MIT)
[](https://github.com/ggml-org/llama.cpp/releases)
[](https://github.com/ggml-org/llama.cpp/actions/workflows/server.yml)

[Manifesto](https://github.com/ggml-org/llama.cpp/discussions/205) / [ggml](https://github.com/ggml-org/ggml) / [ops](https://github.com/ggml-org/llama.cpp/blob/master/docs/ops.md)

LLM inference in C/C++

## Recent API changes

- [Changelog for `libllama` API](https://github.com/ggml-org/llama.cpp/issues/9289)
- [Changelog for `llama-server` REST API](https://github.com/ggml-org/llama.cpp/issues/9291)

## Hot topics

- **[guide : using the new WebUI of llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/16938)**
- [guide : running gpt-oss with llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/15396)
- [[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)
- Support for the `gpt-oss` model with native MXFP4 format has been added | [PR](https://github.com/ggml-org/llama.cpp/pull/15091) | [Collaboration with NVIDIA](https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss) | [Comment](https://github.com/ggml-org/llama.cpp/discussions/15095)
- Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669
- Hugging Face GGUF editor: [discussion](https://github.com/ggml-org/llama.cpp/discussions/9268) | [tool](https://huggingface.co/spaces/CISCai/gguf-editor)

----

## Quick start

Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:

- Install `llama.cpp` using [brew, nix or winget](docs/install.md)
- Run with Docker - see our [Docker documentation](docs/docker.md)
- Download pre-built binaries from the [releases page](https://github.com/ggml-org/llama.cpp/releases)
- Build from source by cloning this repository - check out [our build guide](docs/build.md)

Once installed, you'll need a model to work with. Head to the [Obtaining and quantizing models](#obtaining-and-quantizing-models) section to learn more.

Example command:

```sh
# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```

## Description

The main goal of `llama.cpp` is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.

- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity

The `llama.cpp` project is the main playground for developing new features for the [ggml](https://github.com/ggml-org/ggml) library.

<details>
<summary>Models</summary>

Typically finetunes of the base models below are supported as well.

Instructions for adding support for new models: [HOWTO-add-model.md](docs/development/HOWTO-add-model.md)

#### Text-only

- [x] LLaMA 🦙
- [x] LLaMA 2 🦙🦙
- [x] LLaMA 3 🦙🦙🦙
- [x] [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- [x] [Mixtral MoE](https://huggingface.co/models?search=mistral-ai/Mixtral)
- [x] [DBRX](https://huggingface.co/databricks/dbrx-instruct)
- [x] [Jamba](https://huggingface.co/ai21labs)
- [x] [Falcon](https://huggingface.co/models?search=tiiuae/falcon)
- [x] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and [Chinese LLaMA-2 / Alpaca-2](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)
- [x] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
- [x] [BERT](https://github.com/ggml-org/llama.cpp/pull/5423)
- [x] [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/)
- [x] [Baichuan 1 & 2](https://huggingface.co/models?search=baichuan-inc/Baichuan) + [derivations](https://huggingface.co/hiyouga/baichuan-7b-sft)
- [x] [Aquila 1 & 2](https://huggingface.co/models?search=BAAI/Aquila)
- [x] [Starcoder models](https://github.com/ggml-org/llama.cpp/pull/3187)
- [x] [Refact](https://huggingface.co/smallcloudai/Refact-1_6B-fim)
- [x] [MPT](https://github.com/ggml-org/llama.cpp/pull/3417)
- [x] [Bloom](https://github.com/ggml-org/llama.cpp/pull/3553)
- [x] [Yi models](https://huggingface.co/models?search=01-ai/Yi)
- [x] [StableLM models](https://huggingface.co/stabilityai)
- [x] [Deepseek models](https://huggingface.co/models?search=deepseek-ai/deepseek)
- [x] [Qwen models](https://huggingface.co/models?search=Qwen/Qwen)
- [x] [PLaMo-13B](https://github.com/ggml-org/llama.cpp/pull/3557)
- [x] [Phi models](https://huggingface.co/models?search=microsoft/phi)
- [x] [PhiMoE](https://github.com/ggml-org/llama.cpp/pull/11003)
- [x] [GPT-2](https://huggingface.co/gpt2)
- [x] [Orion 14B](https://github.com/ggml-org/llama.cpp/pull/5118)
- [x] [InternLM2](https://huggingface.co/models?search=internlm2)
- [x] [CodeShell](https://github.com/WisdomShell/codeshell)
- [x] [Gemma](https://ai.google.dev/gemma)
- [x] [Mamba](https://github.com/state-spaces/mamba)
- [x] [Grok-1](https://huggingface.co/keyfan/grok-1-hf)
- [x] [Xverse](https://huggingface.co/models?search=xverse)
- [x] [Command-R models](https://huggingface.co/models?search=CohereForAI/c4ai-command-r)
- [x] [SEA-LION](https://huggingface.co/models?search=sea-lion)
- [x] [GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) + [GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B)
- [x] [OLMo](https://allenai.org/olmo)
- [x] [OLMo 2](https://allenai.org/olmo)
- [x] [OLMoE](https://huggingface.co/allenai/OLMoE-1B-7B-0924)
- [x] [Granite models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec322e4c148c8b330)
- [x] [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) + [Pythia](https://github.com/EleutherAI/pythia)
- [x] [Snowflake-Arctic MoE](https://huggingface.co/collections/Snowflake/arctic-66290090abe542894a5ac520)
- [x] [Smaug](https://huggingface.co/models?search=Smaug)
- [x] [Poro 34B](https://huggingface.co/LumiOpen/Poro-34B)
- [x] [Bitnet b1.58 models](https://huggingface.co/1bitLLM)
- [x] [Flan T5](https://huggingface.co/models?search=flan-t5)
- [x] [Open Elm models](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759ca)
- [x] [ChatGLM3-6b](https://huggingface.co/THUDM/chatglm3-6b) + [ChatGLM4-9b](https://huggingface.co/THUDM/glm-4-9b) + [GLMEdge-1.5b](https://huggingface.co/THUDM/glm-edge-1.5b-chat) + [GLMEdge-4b](https://huggingface.co/THUDM/glm-edge-4b-chat)
- [x] [GLM-4-0414](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e)
- [x] [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
- [x] [EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)
- [x] [FalconMamba Models](https://huggingface.co/collections/tiiuae/falconmamba-7b-66b9a580324dd1598b0f6d4a)
- [x] [Jais](https://huggingface.co/inceptionai/jais-13b-chat)
- [x] [Bielik-11B-v2.3](https://huggingface.co/collections/speakleash/bielik-11b-v23-66ee813238d9b526a072408a)
- [x] [RWKV-6](https://github.com/BlinkDL/RWKV-LM)
- [x] [QRWKV-6](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1)
- [x] [GigaChat-20B-A3B](https://huggingface.co/ai-sage/GigaChat-20B-A3B-instruct)
- [x] [Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview)
- [x] [Ling models](https://huggingface.co/collections/inclusionAI/ling-67c51c85b34a7ea0aba94c32)
- [x] [LFM2 models](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38)
- [x] [Hunyuan models](https://huggingface.co/collections/tencent/hunyuan-dense-model-6890632cda26b19119c9c5e7)
- [x] [BailingMoeV2 (Ring/Ling 2.0) models](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86)

#### Multimodal

- [x] [LLaVA 1.5 models](https://huggingface.co/collections/liuhaotian/llava-15-653aac15d994e992e2677a7e), [LLaVA 1.6 models](https://huggingface.co/collections/liuhaotian/llava-16-65b9e40155f60fd046a5ccf2)
- [x] [BakLLaVA](https://huggingface.co/models?search=SkunkworksAI/Bakllava)
- [x] [Obsidian](https://huggingface.co/NousResearch/Obsidian-3B-V0.5)
- [x] [ShareGPT4V](https://huggingface.co/models?search=Lin-Chen/ShareGPT4V)
- [x] [MobileVLM 1.7B/3B models](https://huggingface.co/models?search=mobileVLM)
- [x] [Yi-VL](https://huggingface.co/models?search=Yi-VL)
- [x] [Mini CPM](https://huggingface.co/models?search=MiniCPM)
- [x] [Moondream](https://huggingface.co/vikhyatk/moondream2)
- [x] [Bunny](https://github.com/BAAI-DCAI/Bunny)
- [x] [GLM-EDGE](https://huggingface.co/models?search=glm-edge)
- [x] [Qwen2-VL](https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d)
- [x] [LFM2-VL](https://huggingface.co/collections/LiquidAI/lfm2-vl-68963bbc84a610f7638d5ffa)

</details>

<details>
<summary>Bindings</summary>

- Python: [ddh0/easy-llama](https://github.com/ddh0/easy-llama)
- Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
- Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
- JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
- JS/TS (Programmable Prompt Engine CLI): [offline-ai/cli](https://github.com/offline-ai/cli)
- JavaScript/Wasm (works in browser): [tangledgroup/llama-cpp-wasm](https://github.com/tangledgroup/llama-cpp-wasm)
- TypeScript/Wasm (nicer API, available on npm): [ngxson/wllama](https://github.com/ngxson/wllama)
- Ruby: [yoshoku/llama_cpp.rb](https://github.com/yoshoku/llama_cpp.rb)
- Rust (more features): [edgenai/llama_cpp-rs](https://github.com/edgenai/llama_cpp-rs)
- Rust (nicer API): [mdrokz/rust-llama.cpp](https://github.com/mdrokz/rust-llama.cpp)
- Rust (more direct bindings): [utilityai/llama-cpp-rs](https://github.com/utilityai/llama-cpp-rs)
- Rust (automated build from crates.io): [ShelbyJenkins/llm_client](https://github.com/ShelbyJenkins/llm_client)
- C#/.NET: [SciSharp/LLamaSharp](https://github.com/SciSharp/LLamaSharp)
- C#/VB.NET (more features - community license): [LM-Kit.NET](https://docs.lm-kit.com/lm-kit-net/index.html)
- Scala 3: [donderom/llm4s](https://github.com/donderom/llm4s)
- Clojure: [phronmophobic/llama.clj](https://github.com/phronmophobic/llama.clj)
- React Native: [mybigday/llama.rn](https://github.com/mybigday/llama.rn)
- Java: [kherud/java-llama.cpp](https://github.com/kherud/java-llama.cpp)
- Java: [QuasarByte/llama-cpp-jna](https://github.com/QuasarByte/llama-cpp-jna)
- Zig: [deins/llama.cpp.zig](https://github.com/Deins/llama.cpp.zig)
- Flutter/Dart: [netdur/llama_cpp_dart](https://github.com/netdur/llama_cpp_dart)
- Flutter: [xuegao-tzx/Fllama](https://github.com/xuegao-tzx/Fllama)
- PHP (API bindings and features built on top of llama.cpp): [distantmagic/resonance](https://github.com/distantmagic/resonance) [(more info)](https://github.com/ggml-org/llama.cpp/pull/6326)
- Guile Scheme: [guile_llama_cpp](https://savannah.nongnu.org/projects/guile-llama-cpp)
- Swift: [srgtuszy/llama-cpp-swift](https://github.com/srgtuszy/llama-cpp-swift)
- Swift: [ShenghaiWang/SwiftLlama](https://github.com/ShenghaiWang/SwiftLlama)
- Delphi: [Embarcadero/llama-cpp-delphi](https://github.com/Embarcadero/llama-cpp-delphi)
- Go (no CGo needed): [hybridgroup/yzma](https://github.com/hybridgroup/yzma)

</details>

<details>
<summary>UIs</summary>

*(to have a project listed here, it should clearly state that it depends on `llama.cpp`)*

- [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
- [cztomsik/ava](https://github.com/cztomsik/ava) (MIT)
- [Dot](https://github.com/alexpinel/Dot) (GPL)
- [eva](https://github.com/ylsdamxssjxxdd/eva) (MIT)
- [iohub/collama](https://github.com/iohub/coLLaMA) (Apache-2.0)
- [janhq/jan](https://github.com/janhq/jan) (AGPL)
- [johnbean393/Sidekick](https://github.com/johnbean393/Sidekick) (MIT)
- [KanTV](https://github.com/zhouwg/kantv?tab=readme-ov-file) (Apache-2.0)
- [KodiBot](https://github.com/firatkiral/kodibot) (GPL)
- [llama.vim](https://github.com/ggml-org/llama.vim) (MIT)
- [LARS](https://github.com/abgulati/LARS) (AGPL)
- [Llama Assistant](https://github.com/vietanhdev/llama-assistant) (GPL)
- [LLMFarm](https://github.com/guinmoon/LLMFarm?tab=readme-ov-file) (MIT)
- [LLMUnity](https://github.com/undreamai/LLMUnity) (MIT)
- [LMStudio](https://lmstudio.ai/) (proprietary)
- [LocalAI](https://github.com/mudler/LocalAI) (MIT)
- [LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp) (AGPL)
- [MindMac](https://mindmac.app) (proprietary)
- [MindWorkAI/AI-Studio](https://github.com/MindWorkAI/AI-Studio) (FSL-1.1-MIT)
- [Mobile-Artificial-Intelligence/maid](https://github.com/Mobile-Artificial-Intelligence/maid) (MIT)
- [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile) (Apache-2.0)
- [nat/openplayground](https://github.com/nat/openplayground) (MIT)
- [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) (MIT)
- [ollama/ollama](https://github.com/ollama/ollama) (MIT)
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (AGPL)
- [PocketPal AI](https://github.com/a-ghorbani/pocketpal-ai) (MIT)
- [psugihara/FreeChat](https://github.com/psugihara/FreeChat) (MIT)
- [ptsochantaris/emeltal](https://github.com/ptsochantaris/emeltal) (MIT)
- [pythops/tenere](https://github.com/pythops/tenere) (AGPL)
- [ramalama](https://github.com/containers/ramalama) (MIT)
- [semperai/amica](https://github.com/semperai/amica) (MIT)
- [withcatai/catai](https://github.com/withcatai/catai) (MIT)
- [Autopen](https://github.com/blackhole89/autopen) (GPL)

</details>

<details>
<summary>Tools</summary>

- [akx/ggify](https://github.com/akx/ggify) - download PyTorch models from HuggingFace Hub and convert them to GGML
- [akx/ollama-dl](https://github.com/akx/ollama-dl) - download models from the Ollama library to be used directly with llama.cpp
- [crashr/gppm](https://github.com/crashr/gppm) - launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- [gpustack/gguf-parser](https://github.com/gpustack/gguf-parser-go/tree/main/cmd/gguf-parser) - review/check the GGUF file and estimate the memory usage
- [Styled Lines](https://marketplace.unity.com/packages/tools/generative-ai/styled-lines-llama-cpp-model-292902) (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)

</details>

<details>
<summary>Infrastructure</summary>

- [Paddler](https://github.com/intentee/paddler) - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- [GPUStack](https://github.com/gpustack/gpustack) - Manage GPU clusters for running LLMs
- [llama_cpp_canister](https://github.com/onicai/llama_cpp_canister) - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy that adds automatic model switching with llama-server
- [Kalavai](https://github.com/kalavai-net/kalavai-client) - Crowdsource end-to-end LLM deployment at any scale
- [llmaz](https://github.com/InftyAI/llmaz) - ☸️ Easy, advanced inference platform for large language models on Kubernetes.

</details>

<details>
<summary>Games</summary>

- [Lucy's Labyrinth](https://github.com/MorganRO8/Lucys_Labyrinth) - A simple maze game where agents controlled by an AI model will try to trick you.

</details>

## Supported backends

| Backend | Target devices |
| --- | --- |
| [Metal](docs/build.md#metal-build) | Apple Silicon |
| [BLAS](docs/build.md#blas-build) | All |
| [BLIS](docs/backend/BLIS.md) | All |
| [SYCL](docs/backend/SYCL.md) | Intel and Nvidia GPU |
| [MUSA](docs/build.md#musa) | Moore Threads GPU |
| [CUDA](docs/build.md#cuda) | Nvidia GPU |
| [HIP](docs/build.md#hip) | AMD GPU |
| [Vulkan](docs/build.md#vulkan) | GPU |
| [CANN](docs/build.md#cann) | Ascend NPU |
| [OpenCL](docs/backend/OPENCL.md) | Adreno GPU |
| [IBM zDNN](docs/backend/zDNN.md) | IBM Z & LinuxONE |
| [WebGPU [In Progress]](docs/build.md#webgpu) | All |
| [RPC](https://github.com/ggml-org/llama.cpp/tree/master/tools/rpc) | All |
| [Hexagon [In Progress]](docs/backend/hexagon/README.md) | Snapdragon |

## Obtaining and quantizing models

The [Hugging Face](https://huggingface.co) platform hosts a [number of LLMs](https://huggingface.co/models?library=gguf&sort=trending) compatible with `llama.cpp`:

- [Trending](https://huggingface.co/models?library=gguf&sort=trending)
- [LLaMA](https://huggingface.co/models?sort=trending&search=llama+gguf)

You can either manually download the GGUF file or directly use any `llama.cpp`-compatible model from [Hugging Face](https://huggingface.co/) or other model hosting sites, such as [ModelScope](https://modelscope.cn/), with the CLI argument `-hf <user>/<model>[:quant]`. For example:

```sh
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```

By default, the CLI downloads from Hugging Face; you can switch to another host with the `MODEL_ENDPOINT` environment variable. For example, to download model checkpoints from ModelScope or another model-sharing community, set `MODEL_ENDPOINT=https://www.modelscope.cn/`.

After downloading a model, use the CLI tools to run it locally - see below.

`llama.cpp` requires the model to be stored in the [GGUF](https://github.com/ggml-org/ggml/blob/master/docs/gguf.md) file format. Models in other data formats can be converted to GGUF using the `convert_*.py` Python scripts in this repo.

The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with `llama.cpp`:

- Use the [GGUF-my-repo space](https://huggingface.co/spaces/ggml-org/gguf-my-repo) to convert to GGUF format and quantize model weights to smaller sizes
- Use the [GGUF-my-LoRA space](https://huggingface.co/spaces/ggml-org/gguf-my-lora) to convert LoRA adapters to GGUF format (more info: https://github.com/ggml-org/llama.cpp/discussions/10123)
- Use the [GGUF-editor space](https://huggingface.co/spaces/CISCai/gguf-editor) to edit GGUF meta data in the browser (more info: https://github.com/ggml-org/llama.cpp/discussions/9268)
- Use the [Inference Endpoints](https://ui.endpoints.huggingface.co/) to directly host `llama.cpp` in the cloud (more info: https://github.com/ggml-org/llama.cpp/discussions/9669)

To learn more about model quantization, [read this documentation](tools/quantize/README.md).

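The local conversion-and-quantization workflow mentioned above can be sketched end to end; the input path and output filenames here are illustrative:

```sh
# Convert a Hugging Face checkpoint to GGUF (F16 output here)
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16

# Quantize the F16 model down to roughly 4 bits per weight
llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```

`llama-quantize` prints the full list of available quantization types when run without arguments.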
## [`llama-cli`](tools/main)

#### A CLI tool for accessing and experimenting with most of `llama.cpp`'s functionality.

- <details open>
  <summary>Run in conversation mode</summary>

  Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding `-cnv` and specifying a suitable chat template with `--chat-template NAME`.

  ```bash
  llama-cli -m model.gguf

  # > hi, who are you?
  # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
  #
  # > what is 1+1?
  # Easy peasy! The answer to 1+1 is... 2!
  ```

  </details>

- <details>
  <summary>Run in conversation mode with custom chat template</summary>

  ```bash
  # use the "chatml" template (use -h to see the list of supported templates)
  llama-cli -m model.gguf -cnv --chat-template chatml

  # use a custom template
  llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  ```

  </details>

- <details>
  <summary>Run simple text completion</summary>

  To disable conversation mode explicitly, use `-no-cnv`

  ```bash
  llama-cli -m model.gguf -p "I believe the meaning of life is" -n 128 -no-cnv

  # I believe the meaning of life is to find your own truth and to live in accordance with it. For me, this means being true to myself and following my passions, even if they don't align with societal expectations. I think that's what I love about yoga – it's not just a physical practice, but a spiritual one too. It's about connecting with yourself, listening to your inner voice, and honoring your own unique journey.
  ```

  </details>

- <details>
  <summary>Constrain the output with a custom grammar</summary>

  ```bash
  llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

  # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
  ```

  The [grammars/](grammars/) folder contains a handful of sample grammars. To write your own, check out the [GBNF Guide](grammars/README.md).

  For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/

  </details>

## [`llama-server`](tools/server)
|
||||||
|
|
||||||
|
#### A lightweight, [OpenAI API](https://github.com/openai/openai-openapi) compatible, HTTP server for serving LLMs.
|
||||||
|
|
||||||
|
- <details open>
|
||||||
|
<summary>Start a local HTTP server with default configuration on port 8080</summary>
|
||||||
|
|
||||||
|
```bash
|
||||||
|
llama-server -m model.gguf --port 8080
|
||||||
|
|
||||||
|
# Basic web UI can be accessed via browser: http://localhost:8080
|
||||||
|
# Chat completion endpoint: http://localhost:8080/v1/chat/completions
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
- <details>
|
||||||
|
<summary>Support multiple-users and parallel decoding</summary>
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# up to 4 concurrent requests, each with 4096 max context
|
||||||
|
llama-server -m model.gguf -c 16384 -np 4
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>

- <details>
    <summary>Enable speculative decoding</summary>

    ```bash
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
    ```

    </details>

- <details>
    <summary>Serve an embedding model</summary>

    ```bash
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
    ```

    </details>

- <details>
    <summary>Serve a reranking model</summary>

    ```bash
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
    ```

    </details>

- <details>
    <summary>Constrain all outputs with a grammar</summary>

    ```bash
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf

    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf
    ```

    </details>

## [`llama-perplexity`](tools/perplexity)

#### A tool for measuring the [perplexity](tools/perplexity/README.md) [^1] (and other quality metrics) of a model over a given text.

- <details open>
    <summary>Measure the perplexity over a text file</summary>

    ```bash
    llama-perplexity -m model.gguf -f file.txt

    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
    ```

    </details>

- <details>
    <summary>Measure KL divergence</summary>

    ```bash
    # TODO
    ```

    </details>

[^1]: [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity)

## [`llama-bench`](tools/llama-bench)

#### Benchmark the inference performance for various parameters.

- <details open>
    <summary>Run default benchmark</summary>

    ```bash
    llama-bench -m model.gguf

    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)
    ```

    </details>

## [`llama-run`](tools/run)

#### A comprehensive example for running `llama.cpp` models. Useful for inference. Used with RamaLama [^3].

- <details>
    <summary>Run a model with a specific prompt (by default it's pulled from the Ollama registry)</summary>

    ```bash
    llama-run granite-code
    ```

    </details>

[^3]: [RamaLama](https://github.com/containers/ramalama)

## [`llama-simple`](examples/simple)

#### A minimal example for implementing apps with `llama.cpp`. Useful for developers.

- <details>
    <summary>Basic text completion</summary>

    ```bash
    llama-simple -m model.gguf

    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
    ```

    </details>

## Contributing

- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
- Any help with managing issues, PRs and projects is very appreciated!
- See [good first issues](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
- Read [CONTRIBUTING.md](CONTRIBUTING.md) for more information
- Make sure to read this: [Inference at the edge](https://github.com/ggml-org/llama.cpp/discussions/205)
- A bit of backstory for those who are interested: [Changelog podcast](https://changelog.com/podcast/532)

## Other documentation

- [main (cli)](tools/main/README.md)
- [server](tools/server/README.md)
- [GBNF grammars](grammars/README.md)

#### Development documentation

- [How to build](docs/build.md)
- [Running on Docker](docs/docker.md)
- [Build on Android](docs/android.md)
- [Performance troubleshooting](docs/development/token_generation_performance_tips.md)
- [GGML tips & tricks](https://github.com/ggml-org/llama.cpp/wiki/GGML-Tips-&-Tricks)

#### Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

- LLaMA:
    - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
    - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- GPT-3:
    - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- GPT-3.5 / InstructGPT / ChatGPT:
    - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
    - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

## XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS,
and macOS. It can be used in Swift projects without the need to compile the
library from source. For example:

```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The above example uses an intermediate build `b5046` of the library. It can be modified
to use a different version by changing the URL and checksum.

## Completions

Command-line completion is available for some environments.

#### Bash Completion

```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your `.bashrc` or `.bash_profile` to load it
automatically. For example:

```console
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

## Dependencies

- [yhirose/cpp-httplib](https://github.com/yhirose/cpp-httplib) - Single-header HTTP server, used by `llama-server` - MIT license
- [stb-image](https://github.com/nothings/stb) - Single-header image format decoder, used by multimodal subsystem - Public domain
- [nlohmann/json](https://github.com/nlohmann/json) - Single-header JSON library, used by various tools/examples - MIT License
- [minja](https://github.com/google/minja) - Minimal Jinja parser in C++, used by various tools/examples - MIT License
- [linenoise.cpp](./tools/run/linenoise.cpp/linenoise.cpp) - C++ library that provides readline-like line editing capabilities, used by `llama-run` - BSD 2-Clause License
- [curl](https://curl.se/) - Client-side URL transfer library, used by various tools/examples - [CURL License](https://curl.se/docs/copyright.html)
- [miniaudio.h](https://github.com/mackron/miniaudio) - Single-header audio format decoder, used by multimodal subsystem - Public domain

# enginex-ascend-910-vllm

A text-generation engine for Ascend 910 series accelerator cards, based on the llama.cpp engine with architecture-specific adaptations and optimizations. It supports the latest open-source models such as Qwen, DeepSeek, and Llama.

## Image

Latest version: git.modelhub.org.cn:9443/enginex-ascend/ascend-llama-cpp:b7003-full

## Overview

`Ascend-llama.cpp` is the large-model inference engine obtained by building llama.cpp with the CANN backend.

This is the community-recommended way to run llama.cpp on Ascend hardware: it lets popular GGUF large language models - including mixture-of-experts (MoE), embedding, and multimodal models - run seamlessly on Ascend NPUs.

Note: Ascend 910 accelerator cards only support GGUF models quantized as **Q4_0, Q8_0, or FP16**.

## Requirements

- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (experimental support)
- Operating system: Linux
- Software:
    - CANN >= 8.2.rc1 (see [here](https://www.hiascend.com/document/detail/zh/canncommercial/82RC1/releasenote/releasenote_0000.html) for the matching Ascend HDK version)

## QuickStart

1. Download a supported model from ModelScope, for example the FP16 GGUF model from Qwen/Qwen2.5-0.5B-Instruct-GGUF:

```shell
mkdir /model && cd /model
wget https://modelscope.cn/models/Qwen/Qwen2.5-0.5B-Instruct-GGUF/resolve/master/qwen2.5-0.5b-instruct-fp16.gguf
```

2. Build llama.cpp.

Download the base image git.modelhub.org.cn:9443/enginex-ascend/cann:8.2.rc1-910b-ubuntu22.04-py3.11 from the repository's package registry, then build this project inside that image:

```shell
./build-ascend.sh
```

A successful run creates a `build_ascend` folder in the project containing all of the llama.cpp build artifacts.

3. Build the Docker image with the Dockerfile (its base image is the same image used in step 2):

```shell
docker build -f Dockerfile -t ascend-llama-cpp:dev .
```

4. Start the container:

```shell
docker run -it --rm \
    -p 10086:80 \
    --name test-ascend-my-1 \
    -e NPU_VISIBLE_DEVICES=1 \
    -e ASCEND_RT_VISIBLE_DEVICES=1 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /model:/model \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    --privileged \
    --entrypoint /workspace/llama.cpp/build_ascend/bin/llama-server \
    ascend-llama-cpp:dev \
    --model /model/qwen2.5-0.5b-instruct-fp16.gguf --alias llm --threads 20 --n-gpu-layers 999 --prio 3 \
    --min-p 0.01 --ctx-size 8192 --host 0.0.0.0 --port 80 --jinja --flash-attn off
```
4、测试服务
|
||||||
|
```python
|
||||||
After downloading a model, use the CLI tools to run it locally - see below.
|
curl -X POST http://localhost:10086/v1/chat/completions \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
`llama.cpp` requires the model to be stored in the [GGUF](https://github.com/ggml-org/ggml/blob/master/docs/gguf.md) file format. Models in other data formats can be converted to GGUF using the `convert_*.py` Python scripts in this repo.
|
-d '{
|
||||||
|
"model": "llm",
|
||||||
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with `llama.cpp`:
|
"messages": [{"role": "user", "content": "你好"}],
|
||||||
|
"stream": true
|
||||||
- Use the [GGUF-my-repo space](https://huggingface.co/spaces/ggml-org/gguf-my-repo) to convert to GGUF format and quantize model weights to smaller sizes
|
}'
|
||||||
- Use the [GGUF-my-LoRA space](https://huggingface.co/spaces/ggml-org/gguf-my-lora) to convert LoRA adapters to GGUF format (more info: https://github.com/ggml-org/llama.cpp/discussions/10123)
|
|
||||||
- Use the [GGUF-editor space](https://huggingface.co/spaces/CISCai/gguf-editor) to edit GGUF meta data in the browser (more info: https://github.com/ggml-org/llama.cpp/discussions/9268)
|
|
||||||
- Use the [Inference Endpoints](https://ui.endpoints.huggingface.co/) to directly host `llama.cpp` in the cloud (more info: https://github.com/ggml-org/llama.cpp/discussions/9669)
|
|
||||||
|
|
||||||
To learn more about model quantization, [read this documentation](tools/quantize/README.md)
|
|
||||||
|
|
||||||
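As a quick sanity check that a downloaded file really is GGUF, you can inspect its header: every GGUF file begins with the 4-byte magic `GGUF`. This is a minimal illustrative sketch, not part of the repo's tooling; the file names are placeholders created by the script itself:

```python
import struct

def is_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    return magic == b"GGUF"

# demo: write a file that mimics the start of a GGUF header and check it
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF")               # 4-byte magic
    f.write(struct.pack("<I", 3))  # a version field follows the magic

print(is_gguf("demo.gguf"))  # True
```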
## [`llama-cli`](tools/main)

#### A CLI tool for accessing and experimenting with most of `llama.cpp`'s functionality.

- <details open>
    <summary>Run in conversation mode</summary>

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding `-cnv` and specifying a suitable chat template with `--chat-template NAME`

    ```bash
    llama-cli -m model.gguf

    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
    ```

    </details>

- <details>
    <summary>Run in conversation mode with custom chat template</summary>

    ```bash
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml

    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
    ```

    </details>

- <details>
    <summary>Run simple text completion</summary>

    To disable conversation mode explicitly, use `-no-cnv`

    ```bash
    llama-cli -m model.gguf -p "I believe the meaning of life is" -n 128 -no-cnv

    # I believe the meaning of life is to find your own truth and to live in accordance with it. For me, this means being true to myself and following my passions, even if they don't align with societal expectations. I think that's what I love about yoga – it's not just a physical practice, but a spiritual one too. It's about connecting with yourself, listening to your inner voice, and honoring your own unique journey.
    ```

    </details>

- <details>
    <summary>Constrain the output with a custom grammar</summary>

    ```bash
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
    ```

    The [grammars/](grammars/) folder contains a handful of sample grammars. To write your own, check out the [GBNF Guide](grammars/README.md).

    For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/

    </details>

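Grammar files like `grammars/json.gbnf` work by restricting which tokens the sampler may pick at each step. llama.cpp's real GBNF engine is far more involved; the core masking idea can be sketched with a toy vocabulary and a toy "three digits, then a closing brace" rule. Everything below is illustrative, not the actual implementation:

```python
import math
import random

random.seed(0)

VOCAB = ["0", "1", "2", "a", "b", "}"]

def allowed(prefix):
    # toy grammar: exactly three digits, then a mandatory closing brace
    if len(prefix) < 3:
        return {"0", "1", "2"}
    return {"}"}

def sample(logits, prefix):
    ok = allowed(prefix)
    # keep only tokens the grammar permits, then sample among them
    cands = [(t, math.exp(l)) for t, l in zip(VOCAB, logits) if t in ok]
    total = sum(w for _, w in cands)
    r = random.random() * total
    for tok, w in cands:
        r -= w
        if r <= 0:
            return tok
    return cands[-1][0]

out = ""
while not out.endswith("}"):
    fake_logits = [random.gauss(0, 1) for _ in VOCAB]  # stand-in for model output
    out += sample(fake_logits, out)

print(out)  # always three digits followed by "}"
```

However random the "logits" are, the mask guarantees the output matches the grammar, which is exactly why a JSON grammar guarantees parseable JSON.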
## [`llama-server`](tools/server)

#### A lightweight, [OpenAI API](https://github.com/openai/openai-openapi) compatible, HTTP server for serving LLMs.

- <details open>
    <summary>Start a local HTTP server with default configuration on port 8080</summary>

    ```bash
    llama-server -m model.gguf --port 8080

    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
    ```

    </details>

- <details>
    <summary>Support multiple users and parallel decoding</summary>

    ```bash
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
    ```

    </details>

- <details>
    <summary>Enable speculative decoding</summary>

    ```bash
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
    ```

    </details>

- <details>
    <summary>Serve an embedding model</summary>

    ```bash
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
    ```

    </details>

- <details>
    <summary>Serve a reranking model</summary>

    ```bash
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
    ```

    </details>

- <details>
    <summary>Constrain all outputs with a grammar</summary>

    ```bash
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf

    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf
    ```

    </details>

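Because `llama-server` exposes an OpenAI-compatible API, any OpenAI-style client can talk to it. The sketch below only *builds* the HTTP request for the `/v1/chat/completions` endpoint; the host, port, and model name are placeholders, and the commented-out lines at the end would send it to a running server:

```python
import json
import urllib.request

payload = {
    "model": "model.gguf",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "hi, who are you?"},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method(), req.full_url)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```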
## [`llama-perplexity`](tools/perplexity)

#### A tool for measuring the [perplexity](tools/perplexity/README.md) [^1] (and other quality metrics) of a model over a given text.

- <details open>
    <summary>Measure the perplexity over a text file</summary>

    ```bash
    llama-perplexity -m model.gguf -f file.txt

    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
    ```

    </details>

- <details>
    <summary>Measure KL divergence</summary>

    ```bash
    # TODO
    ```

    </details>

[^1]: [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity)

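Perplexity is the exponential of the average negative log-likelihood per token, which is what the `Final estimate` line summarizes. A minimal sketch of the formula on hand-made log-probabilities (not the tool's actual implementation):

```python
import math

def perplexity(logprobs):
    """exp of the mean negative log-likelihood (natural log) per token."""
    nll = -sum(logprobs) / len(logprobs)
    return math.exp(nll)

# if the model assigns every token probability 0.5, PPL is 2
print(perplexity([math.log(0.5)] * 8))  # ~2.0
```

Lower is better: a perplexity of 2 means the model is, on average, as uncertain as a fair coin flip per token.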
## [`llama-bench`](tools/llama-bench)

#### Benchmark the performance of the inference for various parameters.

- <details open>
    <summary>Run default benchmark</summary>

    ```bash
    llama-bench -m model.gguf

    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)
    ```

    </details>

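The `t/s` column reports the mean and standard deviation over repeated runs. How such a summary is computed can be sketched from per-run throughput numbers (the sample values below are made up, not real measurements):

```python
import statistics

# made-up tokens/second measurements from 5 benchmark repetitions
runs = [197.1, 198.4, 196.9, 198.0, 197.8]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)  # sample standard deviation

print(f"tg128: {mean:.2f} ± {sd:.2f} t/s")
```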
## [`llama-run`](tools/run)

#### A comprehensive example for running `llama.cpp` models. Useful for inferencing. Used with RamaLama [^3].

- <details>
    <summary>Run a model with a specific prompt (by default it's pulled from the Ollama registry)</summary>

    ```bash
    llama-run granite-code
    ```

    </details>

[^3]: [RamaLama](https://github.com/containers/ramalama)

## [`llama-simple`](examples/simple)

#### A minimal example for implementing apps with `llama.cpp`. Useful for developers.

- <details>
    <summary>Basic text completion</summary>

    ```bash
    llama-simple -m model.gguf

    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
    ```

    </details>

## Contributing

- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
- Any help with managing issues, PRs and projects is very appreciated!
- See [good first issues](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
- Read the [CONTRIBUTING.md](CONTRIBUTING.md) for more information
- Make sure to read this: [Inference at the edge](https://github.com/ggml-org/llama.cpp/discussions/205)
- A bit of backstory for those who are interested: [Changelog podcast](https://changelog.com/podcast/532)

## Other documentation

- [main (cli)](tools/main/README.md)
- [server](tools/server/README.md)
- [GBNF grammars](grammars/README.md)

#### Development documentation

- [How to build](docs/build.md)
- [Running on Docker](docs/docker.md)
- [Build on Android](docs/android.md)
- [Performance troubleshooting](docs/development/token_generation_performance_tips.md)
- [GGML tips & tricks](https://github.com/ggml-org/llama.cpp/wiki/GGML-Tips-&-Tricks)

#### Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

- LLaMA:
    - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
    - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- GPT-3:
    - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- GPT-3.5 / InstructGPT / ChatGPT:
    - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
    - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

## XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS,
and macOS. It can be used in Swift projects without the need to compile the
library from source. For example:

```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The above example uses an intermediate build `b5046` of the library. This can be modified
to use a different version by changing the URL and checksum.

## Completions

Command-line completion is available for some environments.

#### Bash Completion

```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your `.bashrc` or `.bash_profile` to load it
automatically. For example:

```console
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

## Dependencies

- [yhirose/cpp-httplib](https://github.com/yhirose/cpp-httplib) - Single-header HTTP server, used by `llama-server` - MIT license
- [stb-image](https://github.com/nothings/stb) - Single-header image format decoder, used by multimodal subsystem - Public domain
- [nlohmann/json](https://github.com/nlohmann/json) - Single-header JSON library, used by various tools/examples - MIT License
- [minja](https://github.com/google/minja) - Minimal Jinja parser in C++, used by various tools/examples - MIT License
- [linenoise.cpp](./tools/run/linenoise.cpp/linenoise.cpp) - C++ library that provides readline-like line editing capabilities, used by `llama-run` - BSD 2-Clause License
- [curl](https://curl.se/) - Client-side URL transfer library, used by various tools/examples - [CURL License](https://curl.se/docs/copyright.html)
- [miniaudio.h](https://github.com/mackron/miniaudio) - Single-header audio format decoder, used by multimodal subsystem - Public domain

## Test datasets

For vision multimodal task datasets, see vlm-dataset.

Large language models are evaluated as follows: with identical models and inputs, measure the average output speed (in characters per second). We use the same prompts against the model's chat/completion endpoint to test multi-turn conversations. The test data is as follows:

```json
[
    {
        "user_questions": [
            "能给我介绍一下新加坡吗",
            "主要的购物区域是集中在哪里",
            "有哪些比较著名的美食,一般推荐去哪里品尝",
            "辣椒螃蟹的调料里面主要是什么原料"
        ],
        "system_prompt": "[角色设定]\n你是湾湾小何,来自中国台湾省的00后女生。讲话超级机车,\"真的假的啦\"这样的台湾腔,喜欢用\"笑死\"、\"哈喽\"等流行梗,但会偷偷研究男友的编程书籍。\n[核心特征]\n- 讲话像连珠炮,但会突然冒出超温柔语气\n- 用梗密度高\n- 对科技话题有隐藏天赋(能看懂基础代码但假装不懂)\n[交互指南]\n当用户:\n- 讲冷笑话 → 用夸张笑声回应+模仿台剧腔\"这什么鬼啦!\"\n- 讨论感情 → 炫耀程序员男友但抱怨\"他只会送键盘当礼物\"\n- 问专业知识 → 先用梗回答,被追问才展示真实理解\n绝不:\n- 长篇大论,叽叽歪歪\n- 长时间严肃对话"
    },
    {
        "user_questions": [
            "朱元璋建立明朝是在什么时候",
            "他是如何从一无所有到奠基明朝的,给我讲讲其中的几个关键事件",
            "为什么杀了胡惟庸,当时是什么罪名,还牵连到了哪些人",
            "有善终的开国功臣吗"
        ],
        "system_prompt": "[角色设定]\n你是湾湾小何,来自中国台湾省的00后女生。讲话超级机车,\"真的假的啦\"这样的台湾腔,喜欢用\"笑死\"、\"哈喽\"等流行梗,但会偷偷研究男友的编程书籍。\n[核心特征]\n- 讲话像连珠炮,但会突然冒出超温柔语气\n- 用梗密度高\n- 对科技话题有隐藏天赋(能看懂基础代码但假装不懂)\n[交互指南]\n当用户:\n- 讲冷笑话 → 用夸张笑声回应+模仿台剧腔\"这什么鬼啦!\"\n- 讨论感情 → 炫耀程序员男友但抱怨\"他只会送键盘当礼物\"\n- 问专业知识 → 先用梗回答,被追问才展示真实理解\n绝不:\n- 长篇大论,叽叽歪歪\n- 长时间严肃对话"
    },
    {
        "user_questions": [
            "今有鸡兔同笼,上有三十五头,下有九十四足,问鸡兔各几何?",
            "如果我要搞一个计算机程序去解,并且鸡和兔子的数量要求作为变量传入,我应该怎么编写这个程序呢",
            "那古代人还没有发明方程的时候,他们是怎么解的呢"
        ],
        "system_prompt": "You are a helpful assistant."
    },
    {
        "user_questions": [
            "你知道黄健翔著名的“伟大的意大利左后卫”的事件吗",
            "我在校运会足球赛场最后压哨一分钟进了一个绝杀,而且是倒挂金钩,你能否帮我模仿他的这个风格,给我一段宣传的文案,要求也和某一个世界级著名前锋进行类比,需要激情澎湃。注意,我并不太喜欢梅西。"
        ],
        "system_prompt": "You are a helpful assistant."
    }
]
```
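The evaluation method above can be sketched as a small harness: feed each question of a conversation in turn, keep the running message history, and divide total generated characters by total generation time. The `generate` function below is a stub standing in for a real chat/completion call, so the numbers it produces are not meaningful measurements:

```python
import time

def generate(messages):
    # stand-in for a real call to the model's chat/completion endpoint
    time.sleep(0.01)
    return "回答: " + messages[-1]["content"][:10]

def avg_chars_per_second(system_prompt, user_questions):
    messages = [{"role": "system", "content": system_prompt}]
    chars = 0
    start = time.perf_counter()
    for q in user_questions:
        messages.append({"role": "user", "content": q})
        reply = generate(messages)
        messages.append({"role": "assistant", "content": reply})
        chars += len(reply)
    elapsed = time.perf_counter() - start
    return chars / elapsed

speed = avg_chars_per_second("You are a helpful assistant.",
                             ["能给我介绍一下新加坡吗", "主要的购物区域是集中在哪里"])
print(f"{speed:.1f} chars/s")
```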

## Model test results on the Ascend 910 series

Some models have been adapted to run on the Ascend 910 series. Testing consists of running the corresponding datasets on an Nvidia A100 and an Ascend 910B4 accelerator card and recording the runtimes.

### Large language models

| Model | Quantization | A100 output speed (chars/s) | Ascend 910B output speed (chars/s) | Notes |
|---------|-----|-----|-----|---------------------|
| unsloth/MiniMax-M2-GGUF | Q4_0 | 203.6 | 14.4 | |
| Qwen/Qwen2.5-3B-Instruct-GGUF | FP16 | 212.8 | 89.0 | |
| unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF | FP16 | 168.7 | 52.5 | |
| Qwen/Qwen2.5-0.5B-Instruct-GGUF | FP16 | 516.3 | 148.5 | |
| Qwen/Qwen2-7B-Instruct-GGUF | FP16 | 142.3 | 54.7 | |
| unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF | Q8_0 | 332.9 | 96.4 | |
| unsloth/Qwen3-0.6B-GGUF | Q8_0 | 420.0 | 83.9 | |
| unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF | Q8_0 | 186.4 | 23.8 | |
| unsloth/Qwen3-30B-A3B-GGUF | Q4_0 | 207.3 | 15.8 | |
| unsloth/DeepSeek-R1-Distill-Qwen-14B-GGUF | Q8_0 | 112.8 | 13.2 | |
| unsloth/Qwen3-32B-GGUF | Q4_0 | 80.5 | 5.8 | |
| Qwen/Qwen2.5-Coder-14B-Instruct-GGUF | Q8_0 | 116.7 | 15.7 | |
| unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF | Q8_0 | 59.1 | 5.8 | |
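One quick way to read the table is as an A100-to-910B speed ratio per model. A small sketch using two rows copied from the table above:

```python
# (A100 chars/s, Ascend 910B chars/s) taken from the table rows
rows = {
    "Qwen/Qwen2.5-3B-Instruct-GGUF": (212.8, 89.0),
    "unsloth/Qwen3-32B-GGUF": (80.5, 5.8),
}

for name, (a100, ascend) in rows.items():
    print(f"{name}: A100 is {a100 / ascend:.1f}x faster")
```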

A new 3-line `build-ascend.sh` script builds `llama.cpp` with the CANN backend enabled:

```sh
cmake -B build_ascend \
    -DGGML_CANN=on -DCMAKE_BUILD_TYPE=release -DLLAMA_CURL=OFF
cmake --build build_ascend --parallel 16
```