Update README.md

ai-modelscope
2025-02-08 16:17:33 +08:00
parent 6dca1fd81a
commit 7201fd422e
13 changed files with 111 additions and 111 deletions


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:7cbccffcba1e679d2fb906a55b5d6d641aa21bff9ee20fe30cef6dd6b9f8e410
+oid sha256:a870730e0fff8e6c4656e4f611faa16bde7b16f33c059292167a174bf15e5b2e
size 580874080


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:e89a46e80c150f92bc0d5dfacb069f46770807664340339a22d886031ee06ddc
+oid sha256:ef44bca078435843c7788f9c03e7b495a099c131f845ed19dc0257e561e8caf3
size 732524384


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:9bdd2550b78bdee5653da8f9d07d74faaafd7cc50602eac9107d8f2c60677d24
+oid sha256:7b36f7efe910829de87ab1fcd9a55fdb0bf481d8e158345b13dc48486a324a79
size 690843488


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:bf5363d4847936b37950b1d048199b616b80edd40e78ed15e7b0f2e36c32433b
+oid sha256:1444f1e3d528823058010ea5152b8f32a332f9bce37290d480c4d4e50b04eb41
size 641691488


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:150e95f201edeaef9b5d37aec70fcee65533ddeca51a4d1092037195f0302f69
+oid sha256:eadfd8fd4e29d48e720eb87fc8242d3a8d4d2dacd52c722adc8e69e48c668efc
size 770928480


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:e95df2f5144fd67bf11421ad59812fbde45a79c90eb49cb5d79e3d7bddaa1331
+oid sha256:26bac8efd811cb41a80db4393dbe5c8360abd54b98954ec766aa4ba7dacc0bc5
size 807694176


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:e256d51eed0799552f91a4fb8f0d05315b3e3dc1cd4499514e7d87a845951c88
+oid sha256:5550376826ef08901a4145559647844d5e70a950d69145e83ae2d262ce5ce0e2
size 775647072


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:922306ce6b89b731d20130c2a5e909be530c5703fd24b26f7d050bad3faf3fc5
+oid sha256:f6aea9bca54d1b5033035771963e0bf24d307ff756ab476744a2b43ad2eeb68d
size 892563296


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:7cd16584b15af8c268ca60454b63b5021b0b2c7edf66cff2b41210a333b2233d
+oid sha256:7f5165ccccbd6953de2a35ca56300ebedfb70739a407dd096e3a5c658477aefa
size 911503200


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:66260aca245c04327590457d52debf378e0fc93b881b2effb71f9a00703a1953
+oid sha256:a14d69cc881f282405b8ba59ab6377a3eb7f2c3686077d1be796c87f6298c398
size 892563296


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:3fe00ecc20650188a714bf1766cecbd938e6119e9311c8fe91ca3fb5f3246591
+oid sha256:4bf385159856b7c50a938b1228112318d9f99238a76880ea0f6381ab879982b3
size 1021800288


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:6c0522ea0da43f9a089b5f2df7699a659cc320a5259c33506c44925fa03232fc
+oid sha256:da49f51ced8c15546e7779beb677fb53eb5d0b3b38ac4607ac60d58d77074823
size 1321082720
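The hunks above modify Git LFS pointer files only: each file's `oid` line is replaced, while the actual GGUF binaries live in LFS storage. A minimal sketch for materializing the binaries after cloning, assuming the Hugging Face repository linked in the README below (adjust the URL for another mirror):

```bash
# Clone the repository; with git-lfs installed the smudge filter normally
# downloads the large files automatically, but an explicit pull is harmless.
git clone https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF
cd Llama-3.2-1B-Instruct-GGUF
git lfs pull   # resolve each pointer (version/oid/size) into its GGUF file
```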

README.md

@@ -1,99 +1,99 @@
---
base_model: meta-llama/Llama-3.2-1B-Instruct
license: llama3.2
model_creator: meta
model_name: Llama-3.2-1B-Instruct
quantized_by: Second State Inc.
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
- llama
- llama-3
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3.2-1B-Instruct-GGUF
## Original Model
[meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.14.5](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.5) and above
- Prompt template
- Prompt type: `llama-3-chat`
- Prompt string
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
- Context size: `128000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template llama-3-chat \
--ctx-size 128000 \
--model-name Llama-3.2-1b
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template llama-3-chat \
--ctx-size 128000
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 2 | 581 MB| smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 733 MB| small, substantial quality loss |
| [Llama-3.2-1B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 691 MB| very small, high quality loss |
| [Llama-3.2-1B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 642 MB| very small, high quality loss |
| [Llama-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 771 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 808 MB| medium, balanced quality - recommended |
| [Llama-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 776 MB| small, greater quality loss |
| [Llama-3.2-1B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 893 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 912 MB| large, very low quality loss - recommended |
| [Llama-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 893 MB| large, low quality loss - recommended |
| [Llama-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.02 GB| very large, extremely low quality loss |
| [Llama-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.32 GB| very large, extremely low quality loss - not recommended |
| [Llama-3.2-1B-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-f16.gguf) | f16 | 16 | 2.48 GB| |
*Quantized with llama.cpp b3807*
---
base_model: meta-llama/Llama-3.2-1B-Instruct
license: llama3.2
model_creator: meta
model_name: Llama-3.2-1B-Instruct
quantized_by: Second State Inc.
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
- llama
- llama-3
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3.2-1B-Instruct-GGUF
## Original Model
[meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.16.5](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.16.5) and above
- Prompt template
- Prompt type: `llama-3-chat`
- Prompt string
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
- Context size: `128000`
- Run as LlamaEdge service (an example request against the running server follows this list)
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template llama-3-chat \
--ctx-size 128000 \
--model-name Llama-3.2-1b
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template llama-3-chat \
--ctx-size 128000
```
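Once the `llama-api-server.wasm` service above is running, it exposes an OpenAI-compatible HTTP API. Below is a minimal request sketch with `curl`; port `8080` is assumed here since the command above does not pass `--socket-addr`, and the model name matches the `--model-name` flag:

```bash
# Send a chat completion request to the running LlamaEdge API server
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Llama-3.2-1b",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```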
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 2 | 581 MB| smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 733 MB| small, substantial quality loss |
| [Llama-3.2-1B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 691 MB| very small, high quality loss |
| [Llama-3.2-1B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 642 MB| very small, high quality loss |
| [Llama-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 771 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 808 MB| medium, balanced quality - recommended |
| [Llama-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 776 MB| small, greater quality loss |
| [Llama-3.2-1B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 893 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 912 MB| large, very low quality loss - recommended |
| [Llama-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 893 MB| large, low quality loss - recommended |
| [Llama-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.02 GB| very large, extremely low quality loss |
| [Llama-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.32 GB| very large, extremely low quality loss - not recommended |
| [Llama-3.2-1B-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-f16.gguf) | f16 | 16 | 2.48 GB| |
*Quantized with llama.cpp b4466*
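To fetch a single quantized file from the table instead of cloning the whole repository, here is a sketch using the Hugging Face CLI (the `huggingface-cli` tool is an assumption about the local environment; the repository and file names come from the table above):

```bash
# Download only the recommended Q5_K_M quantization into the current directory
huggingface-cli download second-state/Llama-3.2-1B-Instruct-GGUF \
  Llama-3.2-1B-Instruct-Q5_K_M.gguf \
  --local-dir .
```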