Initialize project; model provided by the ModelHub XC community

Model: Azzedde/llama3.1-8b-text2cypher
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-01 15:14:29 +08:00
commit 7573cd6e92
12 changed files with 2651 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

159
README.md Normal file

@@ -0,0 +1,159 @@
---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: mit
datasets:
- neo4j/text2cypher-2024v1
language:
- en
base_model:
- unsloth/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
## Model Card for Llama3.1-8B-Cypher
### Model Details
**Model Description**
This is the model card for **Llama3.1-8B-Cypher**, a fine-tuned version of Meta's Llama-3.1-8B, optimized for generating **Cypher queries** from natural-language input. The model was trained with **Unsloth** for efficient fine-tuning and inference.
**Developed by**: Azzedine (GitHub: Azzedde)
**Funded by [optional]**: N/A
**Shared by [optional]**: Azzedde
**Model Type**: Large Language Model (LLM) optimized for Cypher query generation
**Language(s) (NLP)**: English
**License**: MIT
**Finetuned from model [optional]**: Meta-Llama-3.1-8B-Instruct
### Model Sources
**Repository**: [Hugging Face](https://huggingface.co/Azzedde/llama3.1-8b-text2cypher)
**Paper [optional]**: N/A
**Demo [optional]**: N/A
### Uses
#### Direct Use
This model is designed for generating **Cypher queries** for **Neo4j databases** based on natural language inputs. It can be used in:
- Database administration
- Knowledge graph construction
- Query automation for structured data retrieval
#### Downstream Use [optional]
- Integrating into **LLM-based database assistants**
- Automating **graph database interactions** in enterprise applications
- Enhancing **semantic search and recommendation systems**
#### Out-of-Scope Use
- General NLP tasks unrelated to graph databases
- Applications requiring strong factual accuracy outside Cypher query generation
### Bias, Risks, and Limitations
- The model may **generate incorrect or suboptimal Cypher queries**, especially for **complex database schemas**.
- The model has not been trained to **validate or optimize queries**, so users should manually **verify generated queries**.
- Limited to **English-language inputs** and **Neo4j graph database use cases**.
### Recommendations
Users should be aware of:
- The importance of **validating model-generated queries** before execution.
- The **potential for biases** in database schema interpretation.
- The need for **fine-tuning on domain-specific datasets** for best performance.
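As a minimal illustration of the validation point above, a caller might screen generated queries for write clauses before running them against a live database. The helper below is a hypothetical sketch, not part of the model or its tooling:

```python
import re

# Hypothetical guard: reject model-generated Cypher that contains write
# clauses when only read queries are expected. A simple keyword screen,
# not a full Cypher parser.
WRITE_CLAUSES = re.compile(
    r"\b(CREATE|MERGE|DELETE|DETACH|SET|REMOVE|DROP)\b", re.IGNORECASE
)

def is_read_only(cypher: str) -> bool:
    """Return True if the query contains no write clauses."""
    return WRITE_CLAUSES.search(cypher) is None

print(is_read_only("MATCH (u:User)-[:MADE]->(t:Transaction) RETURN u"))  # True
print(is_read_only("MATCH (u:User) DELETE u"))  # False
```

A keyword screen like this can false-positive on property names that collide with keywords; for production use, `EXPLAIN`-ing the query against a staging database is a stricter check.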
### How to Get Started with the Model
Use the following code to load and use the model:
```python
from unsloth import FastLanguageModel

# from_pretrained returns both the model and its tokenizer
model, tokenizer = FastLanguageModel.from_pretrained("Azzedde/llama3.1-8b-text2cypher")
FastLanguageModel.for_inference(model)  # switch to optimized inference mode

# Example inference
cypher_prompt = """Below is a database Neo4j schema and a question related to that database. Write a Cypher query to answer the question.
### Schema:
{schema}
### Question:
{question}
### Cypher:
"""
input_text = cypher_prompt.format(schema="<Your Schema>", question="Find all users with more than 5 transactions")
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Details
**Training Data**: The model was fine-tuned on the **Neo4j Text2Cypher dataset (2024v1)**.
**Training Procedure**:
- **Preprocessing**: Examples were formatted into Alpaca-style prompts before tokenization.
- **Training Hyperparameters**:
- `batch_size=2`
- `gradient_accumulation_steps=4`
- `num_train_epochs=3`
- `learning_rate=2e-4`
- `fp16=True`
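For orientation, the hyperparameters above can be collected as they would plausibly be passed to a `transformers`/`trl` trainer; the exact trainer wiring is not shown in the card, so this is a sketch. The effective optimizer batch size is the product of the per-device batch size and the accumulation steps:

```python
# The card's hyperparameters, named as transformers.TrainingArguments
# would expect them (the mapping to argument names is an assumption).
hparams = {
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "num_train_epochs": 3,
    "learning_rate": 2e-4,
    "fp16": True,
}

# Gradients are accumulated over 4 steps, so the optimizer sees
# batches of 2 * 4 = 8 examples per update.
effective_batch = (
    hparams["per_device_train_batch_size"]
    * hparams["gradient_accumulation_steps"]
)
print(effective_batch)  # 8
```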
### Evaluation
#### Testing Data
- Used the **Neo4j Text2Cypher 2024v1 test split**.
#### Factors
- Model performance was measured on **accuracy of Cypher query generation**.
#### Metrics
- **Exact Match** with ground truth Cypher queries.
- **Execution Success Rate** on a test Neo4j instance.
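A forgiving exact-match comparison along the lines described above might normalize whitespace, trailing semicolons, and case before comparing strings. This is a simplified sketch; the card does not specify the normalization actually used:

```python
def normalize(cypher: str) -> str:
    # Collapse runs of whitespace, drop a trailing semicolon, and
    # lowercase, so cosmetic differences do not fail the match.
    collapsed = " ".join(cypher.split())
    return collapsed.rstrip("; ").lower()

def exact_match(pred: str, gold: str) -> bool:
    """True if prediction and reference agree after normalization."""
    return normalize(pred) == normalize(gold)

print(exact_match("MATCH (n) RETURN n;", "match (n)  return n"))  # True
```

Normalized string equality is a conservative metric: it undercounts semantically equivalent queries (e.g. reordered clauses), which is why the card pairs it with execution success on a live instance.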
#### Results
- **High accuracy** for standard database queries.
- **Some errors in complex queries requiring multi-hop reasoning**.
### Environmental Impact
**Hardware Type**: Tesla T4 (Google Colab)
**Hours Used**: ~0.13 hours (≈7.71 minutes)
**Cloud Provider**: Google Colab
**Compute Region**: N/A
**Carbon Emitted**: Estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact)
### Technical Specifications
#### Model Architecture and Objective
- Based on **Llama-3.1 8B** with **LoRA fine-tuning**.
#### Compute Infrastructure
- Fine-tuned using **Unsloth** for efficient training and inference.
#### Hardware
- **GPU**: Tesla T4
- **Max Reserved Memory**: ~7.922 GB
#### Software
- **Libraries Used**: `unsloth`, `transformers`, `TRL`, `datasets`
### Citation [optional]
**BibTeX:**
```
@misc{llama3_1_8b_cypher,
  author = {Azzedde},
  title = {Llama3.1-8B-Cypher: A Cypher Query Generation Model},
  year = {2025},
  url = {https://huggingface.co/Azzedde/llama3.1-8b-text2cypher}
}
```
**APA:**
Azzedde. (2025). *Llama3.1-8B-Cypher: A Cypher Query Generation Model*. Retrieved from [Hugging Face](https://huggingface.co/Azzedde/llama3.1-8b-text2cypher)
### More Information
For questions, reach out via **Hugging Face discussions** or GitHub issues.
### Model Card Authors
- **Azzedde** (GitHub: Azzedde)
### Model Card Contact
**Contact**: [Hugging Face Profile](https://huggingface.co/Azzedde)

39
config.json Normal file

@@ -0,0 +1,39 @@
{
"_name_or_path": "unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pad_token_id": 128004,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.48.3",
"unsloth_fixed": true,
"unsloth_version": "2025.2.15",
"use_cache": true,
"vocab_size": 128256
}

14
generation_config.json Normal file

@@ -0,0 +1,14 @@
{
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": [
128001,
128008,
128009
],
"max_length": 131072,
"pad_token_id": 128004,
"temperature": 0.6,
"top_p": 0.9,
"transformers_version": "4.48.3"
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7701f55bd428db18b631cf7c1da2ad1a8e687dace60341c3439382a6bb15d879
size 4976718338


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70549a6b2babda4d1970d749d106bd2d3469908c496af6d9c1d216d45e2992d9
size 4999826630


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca1d1a161f1a4a72131e3db9d0156a6cdadc7b1d76e9540e6f11c8cf693f06cd
size 4915939082


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1af3b7e60309d003565bf23a52374b9f3222588e903b4510fc866f1fd35a1515
size 1168140873


@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 16060522496
},
"weight_map": {
"lm_head.weight": "pytorch_model-00004-of-00004.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
"model.norm.weight": "pytorch_model-00004-of-00004.bin"
}
}

23
special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|eot_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|finetune_right_pad_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
size 17209920

2067
tokenizer_config.json Normal file

File diff suppressed because it is too large