commit 89be9b39ba9234603b68affe0da76b8c32494e67
Author: ModelHub XC
Date:   Fri Apr 10 11:39:54 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: itpossible/Chinese-Mistral-7B-Instruct-v0.1
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..7bc225d
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*.tfevents* filter=lfs diff=lfs merge=lfs -text
+*.db* filter=lfs diff=lfs merge=lfs -text
+*.ark* filter=lfs diff=lfs merge=lfs -text
+**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
+**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
+**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..58c4749
--- /dev/null
+++ b/README.md
@@ -0,0 +1,145 @@
+# Chinese-Mistral
+
+## 🎉 News
+
+- [2024-04-04] Released the Chinese-Mistral instruction-tuned model.
+- [2024-03-31] Released the Chinese-Mistral base model.
+
+## 🚀 Introduction
+
+Since Mistral AI open-sourced its 7-billion-parameter model [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), the model has overtaken [Llama](https://huggingface.co/meta-llama) to become one of the strongest open-source models available. Mistral-7B not only outperforms Llama2-13B across a wide range of benchmarks, but also surpasses Llama2-34B on reasoning, mathematics, and code-generation tasks.
+However, Mistral-7B was pretrained mainly on English text, so its Chinese ability is limited. Moreover, its vocabulary does not cover Chinese, which makes its encoding and decoding of Chinese text inefficient and restricts its use in Chinese-language applications.
+To overcome these limitations, the Laboratory of Earth and Space Information Science in the Department of Earth System Science at Tsinghua University expanded Mistral-7B's vocabulary with Chinese tokens and performed incremental pretraining, improving Mistral-7B's performance on Chinese tasks as well as its encoding and decoding efficiency for Chinese text.
+Project page: https://github.com/THU-ESIS/Chinese-Mistral
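+For readers who want to see the mechanics, the sketch below shows the general recipe for vocabulary expansion with the Hugging Face `transformers` API. It is an illustration rather than the project's actual pipeline: the real vocabulary is a merged SentencePiece BPE vocabulary, and the new embeddings are then trained by incremental pretraining.
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+# Start from the original, English-centric checkpoint.
+base = "mistralai/Mistral-7B-v0.1"
+tokenizer = AutoTokenizer.from_pretrained(base)
+model = AutoModelForCausalLM.from_pretrained(base)
+
+# Placeholder tokens for illustration; the project merges a full Chinese BPE
+# vocabulary trained with sentencepiece instead of adding words one by one.
+num_added = tokenizer.add_tokens(["地球", "系统", "科学"])
+
+# Grow the embedding matrix (and the untied lm_head) to cover the new ids.
+# The new rows start out untrained and only become useful after further
+# pretraining on Chinese text.
+model.resize_token_embeddings(len(tokenizer))
+print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
+```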
+
+## 📥 Model Download
+
+This project open-sources Chinese-Mistral-7B and Chinese-Mistral-7B-Instruct:
+
+| Model | Download | Notes |
+|:---:|:---:|:---:|
+| Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Full base model |
+| Chinese-Mistral-7B-Instruct | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Full instruction-tuned model<br>LoRA fine-tuned on Chinese and English alpaca_gpt4 data |
+
+## 📈 Model Performance
+
+### Overall Capability
+
+We evaluate Chinese-Mistral-7B comprehensively on three benchmarks: C-Eval, CMMLU, and MMLU.
+
+- C-Eval: a comprehensive evaluation suite for Chinese foundation models. It contains 13,948 multiple-choice questions spanning 52 subjects and four difficulty levels, and measures knowledge and reasoning across the humanities, social sciences, and STEM.
+- CMMLU: a comprehensive Chinese evaluation benchmark covering 67 topics, from basic subjects to advanced professional material, designed to assess knowledge and reasoning in a Chinese-language context.
+- MMLU: an English benchmark with 57 subtasks covering elementary mathematics, US history, computer science, law, and more. Its difficulty ranges from high-school to expert level, making it an effective measure of overall knowledge across the humanities, social sciences, and STEM.
+
+The table below compares Chinese-Mistral-7B with popular community Chinese Llama2 and Chinese Mistral models. All models are evaluated 5-shot with opencompass under identical experimental conditions.
+
+| Model | C-Eval | CMMLU | MMLU | Average |
+|:---:|:---:|:---:|:---:|:---:|
+| [Linly-Al/Chinese-LLaMA-2-7B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-7B-hf) | 31.2 | 30.14 | 35.09 | 32.14 |
+| [hfl/chinese-llama-2-7b](https://huggingface.co/hfl/chinese-llama-2-7b) | 27.4 | 33.38 | 37.25 | 32.68 |
+| [Linly-Al/Chinese-LLaMA-2-13B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-13B-hf) | 39.9 | 42.48 | 52.54 | 44.97 |
+| [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b) | 41.0 | 43.25 | 52.94 | 45.73 |
+| [gywy/Mistral-7B-v0.1-chinese](https://huggingface.co/gywy/Mistral-7B-v0.1-chinese) | 37.4 | 36.45 | 37.38 | 37.08 |
+| [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base) | 44.4 | 46.32 | 57.79 | 49.50 |
+| **[Chinese-Mistral-7B (this model)](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)** | **47.5** | **47.52** | **58.29** | **51.10** |
+
+As the table shows, Chinese-Mistral-7B's Chinese and English general knowledge not only exceeds that of Chinese Llama2 models of the same size, but also beats 13B-parameter Chinese Llama2 models on several benchmarks. It likewise outscores the other community Chinese Mistral models of the same size.
+
+### Chinese Encoding and Decoding Efficiency
+
+We sampled data from WuDaoCorpus2, trained a Chinese BPE vocabulary with sentencepiece, and manually selected entries from other high-quality Chinese vocabularies to merge in. After strict manual review, the resulting vocabulary contained 63,776 tokens. To improve the model's computational efficiency, we appended <|sym1|> through <|sym96|> at the end of the vocabulary so that its size is a multiple of 128, giving a final vocabulary size of 63,872 (63,776 + 96 = 63,872 = 499 × 128).
+We randomly selected WuDaoCorpus2_part-2021278643 as test data for tokenization. The test data contains 67,013,857 words; dividing the word count by the number of tokens after tokenization gives the compression rate. A higher compression rate means better tokenization and more efficient encoding and decoding of Chinese.
+
+| Model | Model Type | Vocab Size | Token Count | Compression Rate |
+|:---:|:---:|:---:|:---:|:---:|
+| [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) | Llama | 32000 | 97406876 | 0.6880 |
+| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | Mistral | 32000 | 76269008 | 0.8787 |
+| [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) | GLM | 64789 | 43487673 | 1.5410 |
+| [Linly-Al/Chinese-LLaMA-2-13B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-13B-hf) | Llama | 40076 | 65402900 | 1.0246 |
+| [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b) | Llama | 55296 | 45763513 | 1.4644 |
+| [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base) | Mistral | 36608 | 65329642 | 1.0256 |
+| [gywy/Mistral-7B-v0.1-chinese](https://huggingface.co/gywy/Mistral-7B-v0.1-chinese) | Mistral | 48593 | 46670146 | 1.4359 |
+| **[Chinese-Mistral-7B (this model)](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)** | Mistral | 63872 | **43044156** | **1.5569** |
+
+As the table shows, Chinese-Mistral-7B achieves the highest compression rate at a reasonable vocabulary size, indicating that it handles Chinese text efficiently.
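+As a concrete illustration of the metric, the sketch below computes a compression rate for any Hugging Face tokenizer on a small sample. It is not the project's evaluation script: the word-counting rule here (one word per Chinese character or alphanumeric run) is an assumption, and the numbers in the table above come from the full 67,013,857-word test split rather than a snippet like this.
+
+```python
+import re
+from transformers import AutoTokenizer
+
+def compression_rate(model_name: str, text: str) -> float:
+    """Return word count / token count; higher means fewer tokens per word."""
+    tokenizer = AutoTokenizer.from_pretrained(model_name)
+    # Assumed word-counting rule: each CJK character and each alphanumeric
+    # run counts as one word.
+    words = len(re.findall(r"[\u4e00-\u9fff]|[A-Za-z0-9]+", text))
+    tokens = tokenizer(text, add_special_tokens=False)["input_ids"]
+    return words / len(tokens)
+
+sample = "清华大学基于Mistral-7B进行了中文词表扩充和增量预训练。"
+for name in ("mistralai/Mistral-7B-v0.1", "itpossible/Chinese-Mistral-7B-v0.1"):
+    print(name, round(compression_rate(name, sample), 4))
+```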
"itpossible/Chinese-Mistral-7B-v0.1" +tokenizer = AutoTokenizer.from_pretrained(model_path) +model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device) + +text = "我是一个人工智能助手,我能够帮助你做如下这些事情:" +inputs = tokenizer(text, return_tensors="pt").to(device) + +outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True) +print(tokenizer.decode(outputs[0], skip_special_tokens=True)) +``` + +如下是使用Chinese-Mistral-7B-Instruct进行推理的代码示例。 +```python +import torch +from transformers import AutoTokenizer, AutoModelForCausalLM + +device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu") + +model_path = "itpossible/Chinese-Mistral-7B-Instruct-v0.1" +tokenizer = AutoTokenizer.from_pretrained(model_path) +model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device) + +text = "请为我推荐中国三座比较著名的山" +messages = [{"role": "user", "content": text}] + +inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) +outputs = model.generate(inputs, max_new_tokens=300, do_sample=True) +outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] +print(outputs) +``` +Chinese-Mistral-7B 与Chinese-Mistral-7B-Instruct生成效果示例如下: + +| 模型名称 | Chinese-Mistral-7B | Chinese-Mistral-7B-Instruct | +|--- |--- |--- | +| 用户输入 |
+Sample generations from Chinese-Mistral-7B and Chinese-Mistral-7B-Instruct:
+
+| Model | Chinese-Mistral-7B | Chinese-Mistral-7B-Instruct |
+|--- |--- |--- |
+| User input | I am an AI assistant, and I can help you with the following things: | Please recommend three famous mountains in China |
+| Model output | 1. Searching for information: I can look up any material you need, such as the contents of a book, a map of a place, or where to watch a film.<br>2. Calculation: I can solve all kinds of math problems, such as addition, subtraction, multiplication, and division, and can evaluate formulas from different subjects, such as chemical equations and physics formulas.<br>3. Note-taking: you can record your ideas in my dictionary, or keep your schedule there.<br>4. Translation: you can set me up as your instant translator; put the content to be translated into the message box and press send. | China's mountain scenery is very beautiful; the following three mountains are among the country's best-known sights:<br>1. Mount Tai: located in the north of Tai'an, Shandong Province, Mount Tai has a long history and is one of China's Five Great Mountains. At about 1,545 m above sea level, its imposing terrain and magnificent scenery attract many visitors.<br>2. Huangshan: located in southeastern Anhui Province, Huangshan is world-famous for its distinctive landscape and rich cultural and historical heritage. With sheer cliffs, strange peaks and rocks, and seas of clouds and mist, its scenery is strikingly beautiful, and it is praised as "the most wondrous mountain under heaven".<br>3. Mount Emei: located northeast of Emeishan City, Sichuan Province, Mount Emei is one of China's four sacred Buddhist mountains, famous for its majestic peaks and rich, varied forests. With a humid climate, very high vegetation coverage, and beautiful scenery, it is celebrated as "Emei, the most elegant under heaven". |
+
+## 📝 Training Data
+
+The training data is sampled from high-quality open-source datasets such as WanJuan, baike2018qa, Dolma, and gutenberg-books. We cleaned these datasets at a fine granularity and carefully balanced the proportions of the different data categories in the training set.
+
+## ⚠️ Limitations
+
+Chinese-Mistral-7B was developed to give the open-source community a high-performing Chinese large language model. Note that, because of limits on model size and training-data scale, the model may still generate misleading or harmful content. Before deploying any application driven by the Chinese-Mistral series, developers must therefore run safety tests and tune the model to meet their safety requirements.
+
+## ✒️ Citation
+
+If this project helps your research, or you use its models, please cite it:
+
+```bibtex
+@misc{Chinese-Mistral,
+    author = {Chen, Zhou and Bai, Yuqi},
+    title = {Chinese-Mistral: An Efficient and Effective Chinese Large Language Model},
+    year = {2024},
+    publisher = {GitHub},
+    journal = {GitHub repository},
+    howpublished = {\url{https://github.com/THU-ESIS/Chinese-Mistral}}
+}
+```
+
+## Closing Remarks
+We welcome support and collaboration from the community to advance both general-purpose and domain-specific large language models. Contact:
+Yuqi Bai, tenured professor in the Department of Earth System Science at Tsinghua University and head of the laboratory: yuqibai@tsinghua.edu.cn
+Zhou Chen, PhD student in the Department of Earth System Science at Tsinghua University and lead of the large language model group: chenz22@mails.tsinghua.edu.cn
\ No newline at end of file
diff --git a/config.json b/config.json
new file mode 100644
index 0000000..5aa769b
--- /dev/null
+++ b/config.json
@@ -0,0 +1,26 @@
+{
+  "_name_or_path": "/home/chenzhou/Project/LLaMA-Factory/saves/pt/full/Mistral-7B-v0.1-expand_voc_embedding_init-expand_voc-0111",
+  "architectures": [
+    "MistralForCausalLM"
+  ],
+  "attention_dropout": 0.0,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 4096,
+  "initializer_range": 0.02,
+  "intermediate_size": 14336,
+  "max_position_embeddings": 32768,
+  "model_type": "mistral",
+  "num_attention_heads": 32,
+  "num_hidden_layers": 32,
+  "num_key_value_heads": 8,
+  "rms_norm_eps": 1e-05,
+  "rope_theta": 10000.0,
+  "sliding_window": 4096,
+  "tie_word_embeddings": false,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.36.2",
+  "use_cache": true,
+  "vocab_size": 63872
+}
diff --git a/configuration.json b/configuration.json
new file mode 100644
index 0000000..f9291c3
--- /dev/null
+++ b/configuration.json
@@ -0,0 +1 @@
+{"framework":"Pytorch","task":"text-generation"}
\ No newline at end of file
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000..c533f93
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,6 @@
+{
+  "_from_model_config": true,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "transformers_version": "4.36.2"
+}
diff --git a/model-00001-of-00004.safetensors b/model-00001-of-00004.safetensors
new file mode 100644
index 0000000..a1fb264
--- /dev/null
+++ b/model-00001-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dad23bc950f7f6c809ec103120669ca61999b20458331e5f9e13443db0c0a07b
+size 3895582632
diff --git a/model-00002-of-00004.safetensors b/model-00002-of-00004.safetensors
new file mode 100644
index 0000000..6281ae3
--- /dev/null
+++ b/model-00002-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93b1bd778386a0b603622c96f20a312d95f8f0fc1eacb20bb5edb5ec671ea2d2
+size 3926025424
diff --git a/model-00003-of-00004.safetensors b/model-00003-of-00004.safetensors
new file mode 100644
index 0000000..85ef3c8
--- /dev/null
+++ b/model-00003-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a108495cc8b552278998c72c8d98b67747167bf913836fc0a1deea3a2638549
+size 3926025440
diff --git a/model-00004-of-00004.safetensors b/model-00004-of-00004.safetensors
new file mode 100644
index 0000000..7ccbee1
--- /dev/null
+++ b/model-00004-of-00004.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbbdc531eab5c2322a0310cae23bd2871861434177fb3b59f9655b346c7fe595
+size 3258055368
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000..c1f932f
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,298 @@
+{
+  "metadata": {
+    "total_size": 15005655040
+  },
+  "weight_map": {
+    "lm_head.weight": "model-00004-of-00004.safetensors",
+    "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + 
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + 
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + 
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", + "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.25.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", + "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors", + 
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.28.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.29.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.30.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.30.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + 
"model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.31.mlp.down_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", + "model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", + "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", + "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", + "model.layers.7.self_attn.v_proj.weight": 
"model-00001-of-00004.safetensors", + "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", + "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", + "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", + "model.norm.weight": "model-00004-of-00004.safetensors" + } +} diff --git a/special_tokens_map.json b/special_tokens_map.json new file mode 100644 index 0000000..492d4b2 --- /dev/null +++ b/special_tokens_map.json @@ -0,0 +1,30 @@ +{ + "bos_token": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + "eos_token": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + "pad_token": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + "unk_token": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } +} diff --git a/tokenizer.model b/tokenizer.model new file mode 100644 index 0000000..3f0e7f6 --- /dev/null +++ b/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:561d3879ed270f6b6f8e1ff5411eac1c315935e5f0b2318ca92ed6d1531c07f7 +size 991499 diff --git a/tokenizer_config.json b/tokenizer_config.json new file mode 100644 index 0000000..2815273 --- /dev/null +++ b/tokenizer_config.json @@ -0,0 +1,43 @@ +{ + "add_bos_token": true, + "add_eos_token": false, + "added_tokens_decoder": { + "0": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "1": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "2": { + "content": "", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "bos_token": "", + "clean_up_tokenization_spaces": false, + "eos_token": "", + "legacy": true, + "model_max_length": 1000000000000000019884624838656, + "pad_token": "", + "padding_side": "left", + "sp_model_kwargs": {}, + "spaces_between_special_tokens": false, + "split_special_tokens": false, + "tokenizer_class": "LlamaTokenizer", + "unk_token": "", + "use_default_system_prompt": false +}