ChineseAlpacaGroup 98fb82e034 upload model


license: apache-2.0
language:
  - zh
  - en

Llama-3-Chinese-8B-Instruct-v3-GGUF

This repository contains Llama-3-Chinese-8B-Instruct-v3-GGUF, the quantized version of the Llama-3-Chinese-8B-Instruct-v3 model, compatible with llama.cpp, Ollama, and other GGUF-based tools.

Note: this is an instruction (chat) model, which can be used directly for conversation, QA, and similar tasks.
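Because this is an instruction model, prompts sent to the raw GGUF file need to follow the Llama-3 chat template. The sketch below assembles a single-turn prompt in that format; the helper name is illustrative, and in practice llama.cpp and Ollama usually apply this template automatically from the GGUF metadata.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the standard Llama-3 chat format.

    Illustrative helper: llama.cpp/Ollama normally do this for you
    using the chat template stored in the GGUF metadata.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "你好!")
print(prompt.count("<|start_header_id|>"))  # 3
```

The trailing assistant header is left open so the model continues generating from there.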

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

Quantization Performance

Evaluation metric: PPL (perplexity); lower is better.
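Perplexity is the exponential of the average per-token negative log-likelihood, so it can be read as the model's effective branching factor. A minimal sketch of the computation (the function name is illustrative):

```python
import math

def perplexity(token_logprobs):
    """PPL = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A uniform model over a 4-symbol vocabulary assigns log(1/4) to every
# token, so its perplexity is exactly the vocabulary size, 4.
logs = [math.log(0.25)] * 10
print(perplexity(logs))  # 4.0
```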

Quant Size PPL
Q2_K 2.96 GB 10.0534 +/- 0.13135
Q3_K 3.74 GB 6.3295 +/- 0.07816
Q4_0 4.34 GB 6.3200 +/- 0.07893
Q4_K 4.58 GB 6.0042 +/- 0.07431
Q5_0 5.21 GB 6.0437 +/- 0.07526
Q5_K 5.34 GB 5.9484 +/- 0.07399
Q6_K 6.14 GB 5.9469 +/- 0.07404
Q8_0 7.95 GB 5.8933 +/- 0.07305
F16 14.97 GB 5.8902 +/- 0.07303
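One informal way to read the table above: pick the smallest quantization whose PPL falls within one standard error of the F16 baseline. This is a heuristic reading of the published numbers, not an official recommendation.

```python
# (quant, size_gb, ppl) rows copied from the table above
rows = [
    ("Q2_K", 2.96, 10.0534), ("Q3_K", 3.74, 6.3295), ("Q4_0", 4.34, 6.3200),
    ("Q4_K", 4.58, 6.0042), ("Q5_0", 5.21, 6.0437), ("Q5_K", 5.34, 5.9484),
    ("Q6_K", 6.14, 5.9469), ("Q8_0", 7.95, 5.8933), ("F16", 14.97, 5.8902),
]
f16_ppl, f16_err = 5.8902, 0.07303

# Quantized variants whose PPL is within one std. error of F16
within = [r for r in rows[:-1] if r[2] <= f16_ppl + f16_err]
best = min(within, key=lambda r: r[1])  # smallest file among them
print(best[0])  # Q5_K
```

By this criterion Q5_K matches F16 quality at roughly a third of the file size, which is consistent with the common advice to prefer Q5_K/Q6_K when disk and RAM allow.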

Others



Description
Model synced from source: ChineseAlpacaGroup/llama-3-chinese-8b-instruct-v3-gguf