Initialize project; model provided by the ModelHub XC community

Model: empower-dev/llama3-empower-functions-small-gguf
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-01 09:22:09 +08:00
commit 79cc02b77c
10 changed files with 413094 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
ggml-model-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
ggml-model-f16.gguf filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,65 @@
---
license: apache-2.0
tags:
- function
- function-calling
- tool-using
---
## Deprecation
Please use the new [empower functions v1.1 models family](https://huggingface.co/collections/empower-dev/empower-functions-v11-66df72d78c1f7b80bda36f5f).
v1.1 is fully compatible with the existing prompts, with much better accuracy and a longer context window.
## Empower Functions Model
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6424a49f12ba34f9894ab9b7/wXkYX_NXEFtpmBsQd6nIV.png)
[https://github.com/empower-ai/empower-functions](https://github.com/empower-ai/empower-functions)
Empower Functions is a family of LLMs (large language models) that offer GPT-4-level capabilities for real-world "tool using" use cases, with full compatibility as a drop-in replacement.
This is the `llama3-empower-functions-small` model. For other sizes, please visit [the empower-functions collection](https://huggingface.co/collections/empower-dev/empower-functions-663e9a22df93b46804df75a8).
## Key Features
* Automatic tool use: decides when to call tools and when to converse, optimized for long conversations
* Parallel calling: supports calling one function multiple times, multiple functions, or a combination of both
* Sequential calling: supports calling multiple functions in sequence to fulfill a user request
* Streaming
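As an illustrative sketch only (not the model's official prompt format), a parallel tool-calling request in the common OpenAI-style schema might look like the following; the tool name `get_current_weather` and its parameters are hypothetical:

```python
import json

# Hypothetical OpenAI-style tool schema; in a parallel-call scenario the
# model may emit several tool calls (e.g. one per city) in a single message.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",  # illustrative tool, not part of the model
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request = {
    "model": "llama3-empower-functions-small",
    "messages": [{"role": "user",
                  "content": "What's the weather in Boston and in NYC?"}],
    "tools": tools,
}
print(json.dumps(request, indent=2))
```

See the [github repo](https://github.com/empower-ai/empower-functions) for the exact prompt and tool-call format the model was trained on.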
## Family of Models
| Model | Specs | Links | Notes |
| ------------------------------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
| llama3-empower-functions-small | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-small), [GGUF](https://huggingface.co/empower-dev/llama3-empower-functions-small-gguf) | Most cost-effective, locally runnable |
| empower-functions-medium | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | [model](https://huggingface.co/empower-dev/empower-functions-medium) | Balance in accuracy and cost |
| llama3-empower-functions-large | 8k context, based on [Llama3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-large) | Best accuracy |
### Hardware Requirement
We have tested the family of models in the following setups:
- empower-functions-small: fp16 on 1x A100 40GB; GGUF and 4-bit GGUF on a MacBook M2 Pro with 32GB RAM (the 4-bit GGUF version requires at least 7.56GB of RAM)
- empower-functions-medium: fp16 on 2x A100 80GB
- empower-functions-large: fp16 on 4x A100 80GB
## Usage
There are three ways to use the empower-functions model: directly [prompt the raw model](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#prompt-raw-model), run it [locally](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#running-locally) through llama-cpp-python, or use our [hosted API](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#using-empower-api).
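A minimal local-inference sketch with llama-cpp-python, assuming you have installed `llama-cpp-python` and downloaded `ggml-model-Q4_K_M.gguf` from this repo; the path, context size, and sampling values below are illustrative:

```python
from pathlib import Path

MODEL_PATH = "ggml-model-Q4_K_M.gguf"  # local path to the 4-bit GGUF from this repo

def load_model(model_path: str = MODEL_PATH):
    # Deferred import so the sketch is inspectable without llama-cpp-python installed.
    from llama_cpp import Llama
    # n_ctx matches the model's 8k context window.
    return Llama(model_path=model_path, n_ctx=8192)

if Path(MODEL_PATH).exists():
    llm = load_model()
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello!"}],
        temperature=0.6,  # matches generation_config.json defaults
        top_p=0.9,
    )
    print(out["choices"][0]["message"]["content"])
```

For the full function-calling prompt format expected by the model, use the helpers in the [github repo](https://github.com/empower-ai/empower-functions) rather than raw chat messages.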
## Evaluation
We benchmarked our model against a few other options on [three datasets](https://huggingface.co/empower-dev):
- Single Turn Dataset: The model is evaluated for its ability to execute a precise function call, assessing both the accuracy of the selected function and the arguments.
- Parallel Call Dataset: In this scenario, the model demonstrates its capacity to handle multiple (2-6) function calls within a single message, a feature not supported by Fireworks and Anyscale.
- Multi-Turn Dataset: Designed to simulate a complex real-world environment, such as a healthcare appointment booking system, the model navigates between natural conversation, initiating function calls, asking clarifying questions, and, when necessary, transferring to customer service. The assessment focuses on the accuracy of intent classification and the correctness of function calls.
For more detailed evaluation results, please refer to our [github repo](https://github.com/empower-ai/empower-functions).
## Demo App
Check out our healthcare appointment booking [demo](https://app.empower.dev/chat-demo).
Want to customize the model? Please contact us at [founders@empower.dev](mailto:founders@empower.dev).

config.json Normal file

@@ -0,0 +1,28 @@
{
"_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.38.0",
"use_cache": false,
"vocab_size": 128256
}

generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": [
128001,
128009
],
"max_length": 4096,
"temperature": 0.6,
"top_p": 0.9,
"transformers_version": "4.38.0"
}

ggml-model-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc02b78d70cb9ccd07f3745ebc0e7027efde7faf6606c60964a2cdf0639b344c
size 4920734016

ggml-model-f16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e40996aa2d751bc12c4974bc40a23c15f3bf2f308047a396b511d8844937ef6
size 16068890944

pytorch_model.bin.index.json Normal file

@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 16060522496
},
"weight_map": {
"lm_head.weight": "pytorch_model-00004-of-00004.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.norm.weight": "pytorch_model-00004-of-00004.bin"
}
}

special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|eot_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|end_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file (410562 lines)

File diff suppressed because it is too large

tokenizer_config.json Normal file (2063 lines)

File diff suppressed because it is too large