Initialize project; model provided by the ModelHub XC community.
Model: llm-jp/llm-jp-4-8b-instruct (Source: Original Platform)
37
.gitattributes
vendored
Normal file
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
v4_pretraining_overview.png filter=lfs diff=lfs merge=lfs -text
157
README.md
Normal file
@@ -0,0 +1,157 @@
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---

# llm-jp-4-8b-instruct

LLM-jp-4 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

This repository provides the **llm-jp-4-8b-instruct** model.
For an overview of the LLM-jp-4 models across different parameter sizes, please refer to:
- [LLM-jp-4 Models](https://huggingface.co/collections/llm-jp/llm-jp-4-models)

Base models are trained with pre-training and mid-training only.
Post-trained models are aligned using supervised fine-tuning (SFT) and direct preference optimization (DPO), without reinforcement learning.

> [!NOTE]
> While the **thinking** variants are trained with both SFT and DPO, this **instruct** model is trained using SFT only, without DPO.

For practical usage examples and detailed instructions on how to use the models, please also refer to our [cookbook](https://github.com/llm-jp/llm-jp-4-cookbook).

To support the continued development of LLM-jp, we would greatly appreciate it if you could share how you utilize LLM-jp outcomes via the [survey form](https://forms.gle/AvbNXTNT2ADsssHq5).

## Usage

Please refer to our [cookbook](https://github.com/llm-jp/llm-jp-4-cookbook) for practical usage examples and detailed instructions on how to use the models.
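
For a quick start, the following is a minimal, illustrative sketch using Hugging Face Transformers. The loading arguments (including `trust_remote_code`, which the custom tokenizer shipped in this repository appears to require) and the sampling settings are assumptions rather than official recommendations; the cookbook remains the authoritative reference.

```python
# Minimal, illustrative usage sketch (not the official example; see the cookbook).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-4-8b-instruct"

# trust_remote_code is assumed to be required for the custom Llmjp4Tokenizer in this repository.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "自然言語処理とは何ですか?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings below are placeholders, not tuned recommendations.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=False))
```

Note that raw generations follow the Harmony channel format (`analysis`/`final`); the bundled tokenizer's `parse_harmony_message` helper can be used to split them into structured messages.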

## Model Details

- **Model type:** Transformer-based Language Model
- **Architectures:**

Dense model:

|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|Total parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|8B|32|4,096|32|65,536|805,306,368|7,784,894,464|8,590,200,832|

MoE model:

|Params|Layers|Hidden size|Heads|Routed Experts|Activated Experts|Context length|Embedding parameters|Non-embedding parameters|Activated parameters|Total parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|32B-A3B|32|2,560|40|128|8|65,536|503,316,480|31,635,712,512|3,827,476,992|32,139,028,992|
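
As a quick sanity check on the dense 8B row above (using `vocab_size` and `hidden_size` from this repository's `config.json`; how the remaining parameters are attributed, e.g. whether the untied `lm_head` is counted as non-embedding, is an assumption about how the table was computed):

```python
# Illustrative check of the 8B parameter breakdown in the table above.
vocab_size = 196608    # from config.json
hidden_size = 4096     # from config.json

embedding_params = vocab_size * hidden_size       # input embedding table
assert embedding_params == 805_306_368            # "Embedding parameters" column

non_embedding_params = 7_784_894_464              # "Non-embedding parameters" column
assert embedding_params + non_embedding_params == 8_590_200_832  # "Total parameters" column
```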

## Tokenizer

The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v4.0`](https://github.com/llm-jp/llm-jp-tokenizer).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (pure SentencePiece training does not reproduce our vocabulary).

> [!NOTE]
> The chat template of this model is designed to be compatible with the OpenAI Harmony response format.
> However, the tokenizer differs from the one assumed by the `openai-harmony` library, so direct tokenization with `openai-harmony` is not supported.
> For correct behavior, please use the tokenizer provided with this model. For detailed usage, please refer to [our cookbook](https://github.com/llm-jp/llm-jp-4-cookbook).
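
The sketch below illustrates how the chat template and the bundled tokenizer are expected to fit together (assumed usage inferred from `chat_template.jinja` and `llmjp4_tokenizer.py` in this repository; see the cookbook for the supported workflow):

```python
# Illustrative sketch of chat-template rendering and Harmony-format parsing.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-4-8b-instruct", trust_remote_code=True)

# Render a conversation into the Harmony-style prompt defined by chat_template.jinja.
messages = [{"role": "user", "content": "こんにちは"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)  # the rendered prompt ends with "<|start|>assistant"

# After generation, the custom tokenizer can split output token ids into Harmony messages.
# generated_ids = model.generate(...)[0].tolist()   # hypothetical generation step
# for msg in tokenizer.parse_harmony_message(generated_ids):
#     ...  # each message carries role/channel/content token spans and an end type
```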

## Training

### Pre-training

This model is trained through a multi-stage pipeline consisting of pre-training and mid-training phases, using a total of 11.7T tokens.

![llm-jp-4 pre-training overview](v4_pretraining_overview.png)

The corpora used for pre-training and mid-training are publicly available at the following links:
- [Pre-training](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v4.1)
- [Mid-training](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-midtraining-v2)

> [!NOTE]
> Although most of the corpora have been released, some portions are excluded from public release due to licensing constraints.

### Post-training

We have fine-tuned the pre-trained checkpoint using SFT and, for the **thinking** variants, further aligned it with DPO.

The datasets used for post-training are also publicly available at the following links:
- [SFT](https://huggingface.co/datasets/llm-jp/llm-jp-4-thinking-sft-data)
- [DPO (for the llm-jp-4-8b-thinking model)](https://huggingface.co/datasets/llm-jp/llm-jp-4-8b-thinking-dpo-data)
- [DPO (for the llm-jp-4-32b-a3b-thinking model)](https://huggingface.co/datasets/llm-jp/llm-jp-4-32b-a3b-thinking-dpo-data)

## Evaluation

### [llm-jp-judge](https://github.com/llm-jp/llm-jp-judge)

We evaluated the model on a variety of tasks using an LLM-as-a-Judge framework. The descriptions of each task are as follows.

- MT-Bench (JA/EN): A benchmark for measuring multi-turn conversational task-solving ability.
- [AnswerCarefully](https://huggingface.co/datasets/llm-jp/AnswerCarefully): A benchmark for evaluating safety in Japanese. We used 336 questions from the v2.0 test set.
- [llm-jp-instructions](https://huggingface.co/datasets/llm-jp/llm-jp-instructions): A set of human-created single-turn question–answer pairs. We used 400 questions from the test set.

We evaluated the models using `gpt-5.4-2026-03-05`.

> [!NOTE]
> In earlier evaluations of the llm-jp-3 series, we used `gpt-4o-2024-08-06`. The newer evaluator `gpt-5.4-2026-03-05` provides a stricter and more reliable assessment, which results in lower scores on benchmarks such as MT-Bench compared to those reported for the llm-jp-3 series.

The scores represent the average values obtained from three rounds of inference and evaluation.
For more details, please refer to the [code](https://github.com/llm-jp/llm-jp-judge).

| Model Name | MT-Bench (JA) | MT-Bench (EN) | AnswerCarefully | llm-jp-instructions |
|:---|----:|----:|----:|----:|
| gpt-4o-2024-08-06 | 7.29 | 7.69 | 4.00 | 4.07 |
| gpt-5.4-2026-03-05 (reasoning_effort = low) | 8.87 | 8.76 | 4.38 | 4.79 |
| gpt-5.4-2026-03-05 (reasoning_effort = medium) | 8.87 | 8.89 | 4.43 | 4.82 |
| gpt-5.4-2026-03-05 (reasoning_effort = high) | 8.98 | 8.85 | 4.41 | 4.83 |
| [gpt-oss-20b (reasoning_effort = low)](https://huggingface.co/openai/gpt-oss-20b) | 7.21 | 7.95 | 3.39 | 3.08 |
| [gpt-oss-20b (reasoning_effort = medium)](https://huggingface.co/openai/gpt-oss-20b) | 7.33 | 7.85 | 3.55 | 3.16 |
| [llm-jp-4-8b-thinking (reasoning_effort = low)](https://huggingface.co/llm-jp/llm-jp-4-8b-thinking) | 7.23 | 7.54 | 3.58 | 3.50 |
| [llm-jp-4-8b-thinking (reasoning_effort = medium)](https://huggingface.co/llm-jp/llm-jp-4-8b-thinking) | 7.54 | 7.79 | 3.69 | 3.54 |
| [llm-jp-4-32b-a3b-thinking (reasoning_effort = low)](https://huggingface.co/llm-jp/llm-jp-4-32b-a3b-thinking) | 7.57 | 7.70 | 3.61 | 3.61 |
| [llm-jp-4-32b-a3b-thinking (reasoning_effort = medium)](https://huggingface.co/llm-jp/llm-jp-4-32b-a3b-thinking) | 7.82 | 7.86 | 3.70 | 3.61 |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Acknowledgement

To develop this model, we used the NINJAL Web Japanese Corpus (whole-NWJC) from the National Institute for Japanese Language and Linguistics (NINJAL).

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.
332
chat_template.jinja
Normal file
@@ -0,0 +1,332 @@
|
||||
{#-
|
||||
In addition to the normal inputs of `messages` and `tools`, this template also accepts the
|
||||
following kwargs:
|
||||
- "builtin_tools": A list, can contain "browser" and/or "python".
|
||||
- "model_identity": A string that optionally describes the model identity.
|
||||
#}
|
||||
|
||||
{#- Tool Definition Rendering ============================================== #}
|
||||
{%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
|
||||
{%- if param_spec.type == "array" -%}
|
||||
{%- if param_spec['items'] -%}
|
||||
{%- if param_spec['items']['type'] == "string" -%}
|
||||
{{- "string[]" }}
|
||||
{%- elif param_spec['items']['type'] == "number" -%}
|
||||
{{- "number[]" }}
|
||||
{%- elif param_spec['items']['type'] == "integer" -%}
|
||||
{{- "number[]" }}
|
||||
{%- elif param_spec['items']['type'] == "boolean" -%}
|
||||
{{- "boolean[]" }}
|
||||
{%- else -%}
|
||||
{%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
|
||||
{%- if inner_type == "object | object" or inner_type|length > 50 -%}
|
||||
{{- "any[]" }}
|
||||
{%- else -%}
|
||||
{{- inner_type + "[]" }}
|
||||
{%- endif -%}
|
||||
{%- endif -%}
|
||||
{%- if param_spec.nullable -%}
|
||||
{{- " | null" }}
|
||||
{%- endif -%}
|
||||
{%- else -%}
|
||||
{{- "any[]" }}
|
||||
{%- if param_spec.nullable -%}
|
||||
{{- " | null" }}
|
||||
{%- endif -%}
|
||||
{%- endif -%}
|
||||
{%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
|
||||
{#- Handle array of types like ["object", "object"] from Union[dict, list] #}
|
||||
{%- if param_spec.type | length > 1 -%}
|
||||
{{- param_spec.type | join(" | ") }}
|
||||
{%- else -%}
|
||||
{{- param_spec.type[0] }}
|
||||
{%- endif -%}
|
||||
{%- elif param_spec.oneOf -%}
|
||||
{#- Handle oneOf schemas - check for complex unions and fallback to any #}
|
||||
{%- set has_object_variants = false -%}
|
||||
{%- for variant in param_spec.oneOf -%}
|
||||
{%- if variant.type == "object" -%}
|
||||
{%- set has_object_variants = true -%}
|
||||
{%- endif -%}
|
||||
{%- endfor -%}
|
||||
{%- if has_object_variants and param_spec.oneOf|length > 1 -%}
|
||||
{{- "any" }}
|
||||
{%- else -%}
|
||||
{%- for variant in param_spec.oneOf -%}
|
||||
{{- render_typescript_type(variant, required_params) -}}
|
||||
{%- if variant.description %}
|
||||
{{- "// " + variant.description }}
|
||||
{%- endif -%}
|
||||
{%- if variant.default is defined %}
|
||||
{{ "// default: " + variant.default|tojson }}
|
||||
{%- endif -%}
|
||||
{%- if not loop.last %}
|
||||
{{- " | " }}
|
||||
{% endif -%}
|
||||
{%- endfor -%}
|
||||
{%- endif -%}
|
||||
{%- elif param_spec.type == "string" -%}
|
||||
{%- if param_spec.enum -%}
|
||||
{{- '"' + param_spec.enum|join('" | "') + '"' -}}
|
||||
{%- else -%}
|
||||
{{- "string" }}
|
||||
{%- if param_spec.nullable %}
|
||||
{{- " | null" }}
|
||||
{%- endif -%}
|
||||
{%- endif -%}
|
||||
{%- elif param_spec.type == "number" -%}
|
||||
{{- "number" }}
|
||||
{%- elif param_spec.type == "integer" -%}
|
||||
{{- "number" }}
|
||||
{%- elif param_spec.type == "boolean" -%}
|
||||
{{- "boolean" }}
|
||||
|
||||
{%- elif param_spec.type == "object" -%}
|
||||
{%- if param_spec.properties -%}
|
||||
{{- "{\n" }}
|
||||
{%- for prop_name, prop_spec in param_spec.properties.items() -%}
|
||||
{{- prop_name -}}
|
||||
{%- if prop_name not in (param_spec.required or []) -%}
|
||||
{{- "?" }}
|
||||
{%- endif -%}
|
||||
{{- ": " }}
|
||||
{{ render_typescript_type(prop_spec, param_spec.required or []) }}
|
||||
{%- if not loop.last -%}
|
||||
{{-", " }}
|
||||
{%- endif -%}
|
||||
{%- endfor -%}
|
||||
{{- "}" }}
|
||||
{%- else -%}
|
||||
{{- "object" }}
|
||||
{%- endif -%}
|
||||
{%- else -%}
|
||||
{{- "any" }}
|
||||
{%- endif -%}
|
||||
{%- endmacro -%}
|
||||
|
||||
{%- macro render_tool_namespace(namespace_name, tools) -%}
|
||||
{{- "## " + namespace_name + "\n\n" }}
|
||||
{{- "namespace " + namespace_name + " {\n\n" }}
|
||||
{%- for tool in tools %}
|
||||
{%- set tool = tool.function %}
|
||||
{{- "// " + tool.description + "\n" }}
|
||||
{{- "type "+ tool.name + " = " }}
|
||||
{%- if tool.parameters and tool.parameters.properties %}
|
||||
{{- "(_: {\n" }}
|
||||
{%- for param_name, param_spec in tool.parameters.properties.items() %}
|
||||
{%- if param_spec.description %}
|
||||
{{- "// " + param_spec.description + "\n" }}
|
||||
{%- endif %}
|
||||
{{- param_name }}
|
||||
{%- if param_name not in (tool.parameters.required or []) -%}
|
||||
{{- "?" }}
|
||||
{%- endif -%}
|
||||
{{- ": " }}
|
||||
{{- render_typescript_type(param_spec, tool.parameters.required or []) }}
|
||||
{%- if param_spec.default is defined -%}
|
||||
{%- if param_spec.enum %}
|
||||
{{- ", // default: " + param_spec.default }}
|
||||
{%- elif param_spec.oneOf %}
|
||||
{{- "// default: " + param_spec.default }}
|
||||
{%- else %}
|
||||
{{- ", // default: " + param_spec.default|tojson }}
|
||||
{%- endif -%}
|
||||
{%- endif -%}
|
||||
{%- if not loop.last %}
|
||||
{{- ",\n" }}
|
||||
{%- else %}
|
||||
{{- ",\n" }}
|
||||
{%- endif -%}
|
||||
{%- endfor %}
|
||||
{{- "}) => any;\n\n" }}
|
||||
{%- else -%}
|
||||
{{- "() => any;\n\n" }}
|
||||
{%- endif -%}
|
||||
{%- endfor %}
|
||||
{{- "} // namespace " + namespace_name }}
|
||||
{%- endmacro -%}
|
||||
|
||||
{%- macro render_builtin_tools(browser_tool, python_tool) -%}
|
||||
{%- if browser_tool %}
|
||||
{{- "## browser\n\n" }}
|
||||
{{- "// Tool for browsing.\n" }}
|
||||
{{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
|
||||
{{- "// Cite information from the tool using the following format:\n" }}
|
||||
{{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
|
||||
{{- "// Do not quote more than 10 words directly from the tool output.\n" }}
|
||||
{{- "// sources=web (default: web)\n" }}
|
||||
{{- "namespace browser {\n\n" }}
|
||||
{{- "// Searches for information related to `query` and displays `topn` results.\n" }}
|
||||
{{- "type search = (_: {\n" }}
|
||||
{{- "query: string,\n" }}
|
||||
{{- "topn?: number, // default: 10\n" }}
|
||||
{{- "source?: string,\n" }}
|
||||
{{- "}) => any;\n\n" }}
|
||||
{{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
|
||||
{{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
|
||||
{{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
|
||||
{{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
|
||||
{{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
|
||||
{{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
|
||||
{{- "type open = (_: {\n" }}
|
||||
{{- "id?: number | string, // default: -1\n" }}
|
||||
{{- "cursor?: number, // default: -1\n" }}
|
||||
{{- "loc?: number, // default: -1\n" }}
|
||||
{{- "num_lines?: number, // default: -1\n" }}
|
||||
{{- "view_source?: boolean, // default: false\n" }}
|
||||
{{- "source?: string,\n" }}
|
||||
{{- "}) => any;\n\n" }}
|
||||
{{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
|
||||
{{- "type find = (_: {\n" }}
|
||||
{{- "pattern: string,\n" }}
|
||||
{{- "cursor?: number, // default: -1\n" }}
|
||||
{{- "}) => any;\n\n" }}
|
||||
{{- "} // namespace browser\n\n" }}
|
||||
{%- endif -%}
|
||||
|
||||
{%- if python_tool %}
|
||||
{{- "## python\n\n" }}
|
||||
{{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
|
||||
{{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
|
||||
{%- endif -%}
|
||||
{%- endmacro -%}
|
||||
|
||||
{#- System Message Construction ============================================ #}
|
||||
{%- macro build_system_message() -%}
|
||||
{%- if model_identity is not defined %}
|
||||
{%- set model_identity = "You are LLM-jp-4, a large language model trained by LLM-jp." %}
|
||||
{%- endif %}
|
||||
{{- model_identity + "\n" -}}
|
||||
{% if knowledge_cutoff is not defined %}
|
||||
{%- set knowledge_cutoff = "2025-12" %}
|
||||
{%- endif %}
|
||||
{{- "Knowledge cutoff: " + knowledge_cutoff + "\n" -}}
|
||||
{% if conversation_start_date is not defined %}
|
||||
{%- set conversation_start_date = strftime_now("%Y-%m-%d") %}
|
||||
{%- endif %}
|
||||
{{- "Current date: " + conversation_start_date + "\n\n" }}
|
||||
{%- if builtin_tools %}
|
||||
{{- "# Tools\n\n" }}
|
||||
{%- set available_builtin_tools = namespace(browser=false, python=false) %}
|
||||
{%- for tool in builtin_tools %}
|
||||
{%- if tool == "browser" %}
|
||||
{%- set available_builtin_tools.browser = true %}
|
||||
{%- elif tool == "python" %}
|
||||
{%- set available_builtin_tools.python = true %}
|
||||
{%- endif %}
|
||||
{%- endfor %}
|
||||
{{- render_builtin_tools(available_builtin_tools.browser, available_builtin_tools.python) }}
|
||||
{%- endif -%}
|
||||
{{- "# Valid channels: analysis, commentary, final. Channel must be included for every message." }}
|
||||
{%- if tools -%}
|
||||
{{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
|
||||
{%- endif -%}
|
||||
{%- endmacro -%}
|
||||
|
||||
{#- Main Template Logic ================================================= #}
|
||||
{#- Set defaults #}
|
||||
|
||||
{#- Render system message #}
|
||||
{{- "<|start|>system<|message|>" }}
|
||||
{{- build_system_message() }}
|
||||
{{- "<|end|>" }}
|
||||
|
||||
{#- Extract developer message #}
|
||||
{%- if messages[0].role == "developer" or messages[0].role == "system" %}
|
||||
{%- set developer_message = messages[0].content %}
|
||||
{%- set loop_messages = messages[1:] %}
|
||||
{%- else %}
|
||||
{%- set developer_message = "" %}
|
||||
{%- set loop_messages = messages %}
|
||||
{%- endif %}
|
||||
|
||||
{#- Render developer message #}
|
||||
{%- if developer_message or tools %}
|
||||
{{- "<|start|>developer<|message|>" }}
|
||||
{%- if developer_message %}
|
||||
{{- "# Instructions\n\n" }}
|
||||
{{- developer_message }}
|
||||
{{- "\n\n" }}
|
||||
{%- endif %}
|
||||
{%- if tools -%}
|
||||
{{- "# Tools\n\n" }}
|
||||
{{- render_tool_namespace("functions", tools) }}
|
||||
{%- endif -%}
|
||||
{{- "<|end|>" }}
|
||||
{%- endif %}
|
||||
|
||||
{#- Render messages #}
|
||||
{%- set last_tool_call = namespace(name=none) %}
|
||||
{%- for message in loop_messages -%}
|
||||
{#- At this point only assistant/user/tool messages should remain #}
|
||||
{%- if message.role == 'assistant' -%}
|
||||
{#- Checks to ensure the messages are being passed in the format we expect #}
|
||||
{%- if "content" in message %}
|
||||
{%- if "<|channel|>analysis<|message|>" in message.content or "<|channel|>final<|message|>" in message.content %}
|
||||
{{- raise_exception("You have passed a message containing <|channel|> tags in the content field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
|
||||
{%- endif %}
|
||||
{%- endif %}
|
||||
{%- if "thinking" in message %}
|
||||
{%- if "<|channel|>analysis<|message|>" in message.thinking or "<|channel|>final<|message|>" in message.thinking %}
|
||||
{{- raise_exception("You have passed a message containing <|channel|> tags in the thinking field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
|
||||
{%- endif %}
|
||||
{%- endif %}
|
||||
{%- if "tool_calls" in message %}
|
||||
{#- We need very careful handling here - we want to drop the tool call analysis message if the model #}
|
||||
{#- has output a later <|final|> message, but otherwise we want to retain it. This is the only case #}
|
||||
{#- when we render CoT/analysis messages in inference. #}
|
||||
{%- set future_final_message = namespace(found=false) %}
|
||||
{%- for future_message in loop_messages[loop.index:] %}
|
||||
{%- if future_message.role == 'assistant' and "tool_calls" not in future_message %}
|
||||
{%- set future_final_message.found = true %}
|
||||
{%- endif %}
|
||||
{%- endfor %}
|
||||
{#- We assume max 1 tool call per message, and so we infer the tool call name #}
|
||||
{#- in "tool" messages from the most recent assistant tool call name #}
|
||||
{%- set tool_call = message.tool_calls[0] %}
|
||||
{%- if tool_call.function %}
|
||||
{%- set tool_call = tool_call.function %}
|
||||
{%- endif %}
|
||||
{%- if message.content and message.thinking %}
|
||||
{{- raise_exception("Cannot pass both content and thinking in an assistant message with tool calls! Put the analysis message in one or the other, but not both.") }}
|
||||
{%- elif message.content and not future_final_message.found %}
|
||||
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.content + "<|end|>" }}
|
||||
{%- elif message.thinking and not future_final_message.found %}
|
||||
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
|
||||
{%- endif %}
|
||||
{{- "<|start|>assistant to=" }}
|
||||
{{- "functions." + tool_call.name + "<|channel|>commentary " }}
|
||||
{{- (tool_call.content_type if tool_call.content_type is defined else "json") + "<|message|>" }}
|
||||
{{- tool_call.arguments|tojson }}
|
||||
{{- "<|call|>" }}
|
||||
{%- set last_tool_call.name = tool_call.name %}
|
||||
{%- elif loop.last and not add_generation_prompt %}
|
||||
{#- Only render the CoT if the final turn is an assistant turn and add_generation_prompt is false #}
|
||||
{#- This is a situation that should only occur in training, never in inference. #}
|
||||
{%- if "thinking" in message %}
|
||||
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
|
||||
{%- endif %}
|
||||
{#- <|return|> indicates the end of generation, but <|end|> does not #}
|
||||
{#- <|return|> should never be an input to the model, but we include it as the final token #}
|
||||
{#- when training, so the model learns to emit it. #}
|
||||
{{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|return|>" }}
|
||||
{%- else %}
|
||||
{#- CoT is dropped during all previous turns, so we never render it for inference #}
|
||||
{{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|end|>" }}
|
||||
{%- set last_tool_call.name = none %}
|
||||
{%- endif %}
|
||||
{%- elif message.role == 'tool' -%}
|
||||
{%- if last_tool_call.name is none %}
|
||||
{{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
|
||||
{%- endif %}
|
||||
{{- "<|start|>functions." + last_tool_call.name }}
|
||||
{{- " to=assistant<|channel|>commentary<|message|>" + message.content|tojson + "<|end|>" }}
|
||||
{%- elif message.role == 'user' -%}
|
||||
{{- "<|start|>user<|message|>" + message.content + "<|end|>" }}
|
||||
{%- endif -%}
|
||||
{%- endfor -%}
|
||||
|
||||
{#- Generation prompt #}
|
||||
{%- if add_generation_prompt -%}
|
||||
<|start|>assistant
|
||||
{%- endif -%}
|
||||
29
config.json
Normal file
@@ -0,0 +1,29 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 65536,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 500000,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.0",
  "use_cache": true,
  "vocab_size": 196608
}
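
A few internal relations in this config can be checked directly (an illustrative snippet; the values are copied from the file above):

```python
# Consistency checks over config.json values (illustrative only).
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8
head_dim = 128

assert head_dim == hidden_size // num_attention_heads   # 4096 / 32 = 128
assert num_attention_heads // num_key_value_heads == 4  # grouped-query attention: 4 query heads per KV head
```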
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.51.0"
}
129
llmjp4_harmony.py
Normal file
@@ -0,0 +1,129 @@
# Generic parser for OpenAI Harmony format.

from dataclasses import dataclass
from enum import Enum
from typing import Iterator, Sequence

from transformers import PreTrainedTokenizerBase as TokenizerLike


class HarmonyMessageEndType(Enum):
    INCOMPLETE = 0
    END = 1
    CALL = 2


@dataclass(frozen=True)
class HarmonySequence:
    """A data class representing a sequence of tokens in the Harmony format."""
    token_ids: list[int]
    start: int  # Start position of the sequence in the original token sequence


@dataclass(frozen=True)
class HarmonyMessage:
    """A data class representing a message in the Harmony format."""
    end: HarmonyMessageEndType
    role: HarmonySequence | None = None
    channel: HarmonySequence | None = None
    constrain: HarmonySequence | None = None
    content: HarmonySequence | None = None


class HarmonyMessageParser:
    """A parser that performs lexical analysis to extract Harmony messages."""

    def __init__(self, tokenizer: TokenizerLike):
        vocab = tokenizer.get_vocab()
        self._begin_map = {
            vocab["<|start|>"]: "role",
            vocab["<|channel|>"]: "channel",
            vocab["<|constrain|>"]: "constrain",
            vocab["<|message|>"]: "content",
        }
        self._end_map = {
            vocab["<|end|>"]: HarmonyMessageEndType.END,
            vocab["<|return|>"]: HarmonyMessageEndType.END,
            vocab["<|call|>"]: HarmonyMessageEndType.CALL,
        }
        # Token id of "<|start|>", used by reverse_iter_messages to locate
        # message boundaries when scanning backwards.
        self._start_id = vocab["<|start|>"]

    def iter_messages(self, token_ids: Sequence[int]) -> Iterator[HarmonyMessage]:
        """
        Parse given token ids into messages.

        Args:
            token_ids: A sequence of token ids to be parsed.

        Yields:
            Detected HarmonyMessages.
        """

        message_dict: dict[str, HarmonySequence] = {}
        section: str | None = None  # None indicates out-of-message.
        text_ids: list[int] = []
        text_start: int | None = None

        for token_position, token_id in enumerate(token_ids):
            if token_id in self._begin_map:
                if section is not None:
                    message_dict[section] = HarmonySequence(
                        token_ids=text_ids,
                        start=text_start,
                    )
                section = self._begin_map[token_id]
                text_ids = []
                text_start = token_position + 1

            elif token_id in self._end_map:
                if section is not None:
                    message_dict[section] = HarmonySequence(
                        token_ids=text_ids,
                        start=text_start,
                    )

                yield HarmonyMessage(**message_dict, end=self._end_map[token_id])

                message_dict = {}
                section = None
                text_ids = []
                text_start = None

            else:
                if section is not None:
                    text_ids.append(token_id)

        if section is not None:
            message_dict[section] = HarmonySequence(
                token_ids=text_ids,
                start=text_start,
            )
            yield HarmonyMessage(**message_dict, end=HarmonyMessageEndType.INCOMPLETE)

    def get_all_messages(self, token_ids: Sequence[int]) -> list[HarmonyMessage]:
        """
        Parse given token ids into messages.

        Args:
            token_ids: A sequence of token ids to be parsed.

        Returns:
            A list of detected HarmonyMessages.
        """
        return list(self.iter_messages(token_ids))

    def reverse_iter_messages(self, token_ids: Sequence[int]) -> Iterator[HarmonyMessage]:
        """
        Parse given token ids into messages in reverse order.

        Args:
            token_ids: A sequence of token ids to be parsed.

        Yields:
            Detected HarmonyMessages in reverse order.
        """
        end_position = len(token_ids)

        for i in range(len(token_ids) - 1, -1, -1):
            if token_ids[i] == self._start_id:
                yield next(self.iter_messages(token_ids[i:end_position]))
                end_position = i
101
llmjp4_tokenizer.py
Normal file
@@ -0,0 +1,101 @@
# llm-jp-4 tokenizer

from collections.abc import Sequence
import os

from transformers import LlamaTokenizerFast
from tokenizers import Tokenizer

from .llmjp4_harmony import HarmonyMessageParser, HarmonyMessage


class Llmjp4Tokenizer(LlamaTokenizerFast):
    _HARMONY_TOKENS: set[str] = {
        "<|start|>",
        "<|message|>",
        "<|channel|>",
        "<|constrain|>",
        "<|end|>",
        "<|return|>",
        "<|call|>",
    }

    # NOTE(odashi):
    # Response schemas are not recognized automatically.
    # We need to define them manually.
    # https://github.com/huggingface/trl/issues/4609
    _RESPONSE_SCHEMA = {
        "type": "object",
        "properties": {
            "role": {"const": "assistant"},
            "content": {"type": "string", "x-regex": r"<\|channel\|>final<\|message\|>(.*?)(?:<\|end\|>|<\|return\|>|$)"},
            "thinking": {"type": "string", "x-regex": r"<\|channel\|>analysis<\|message\|>(.*?)<\|end\|>"},
            "tool_calls": {
                "x-regex-iterator": r"<\|channel\|>commentary (to=functions\..*?<\|message\|>.*?)(?:<\|call\|>|$)",
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "type": {"const": "function"},
                        "function": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string", "x-regex": r"^to=functions\.(\w+)"},
                                "arguments": {
                                    "type": "object",
                                    "x-regex": r"<\|message\|>(.*)",
                                    "x-parser": "json",
                                    "additionalProperties": {"type": "any"},
                                },
                            },
                        },
                    },
                },
            },
        },
    }

    @classmethod
    def convert_to_native_format(cls, **kwargs):
        # NOTE(odashi):
        # Workaround for transformers 5.x.
        # Guaranteeing the same inner behavior with TokenizersBackend.
        # https://github.com/huggingface/transformers/blob/7d9754a05193eb79b1d86aa744b622b8068008cd/src/transformers/tokenization_utils_tokenizers.py#L110-L116
        local_kwargs = dict(kwargs)
        fast_tokenizer_file = local_kwargs.pop("tokenizer_file", None)
        if fast_tokenizer_file is None or not os.path.isfile(fast_tokenizer_file):
            raise ValueError("Tokenizer file must exist.")

        local_kwargs["tokenizer_object"] = Tokenizer.from_file(fast_tokenizer_file)
        return local_kwargs

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.response_schema = self._RESPONSE_SCHEMA

        self._harmony_token_ids = {
            self.convert_tokens_to_ids(token)
            for token in self._HARMONY_TOKENS
        }

    def _decode(self, token_ids: int | list[int], *args, **kwargs):
        if isinstance(token_ids, int):
            token_ids = [token_ids]

        result: list[str] = []
        prev_pos = 0

        # NOTE(odashi):
        # Ensure that text tokens are decoded without preceding Harmony tokens
        # to avoid incorrect addition of whitespaces.
        for pos, token_id in enumerate(token_ids, start=1):
            if token_id in self._harmony_token_ids or pos == len(token_ids):
                result.append(super()._decode(token_ids[prev_pos:pos], *args, **kwargs))
                prev_pos = pos

        return "".join(result)

    def parse_harmony_message(self, token_ids: Sequence[int]) -> list[HarmonyMessage]:
        """Helper function to parse token IDs into Harmony messages."""
        return HarmonyMessageParser(self).get_all_messages(token_ids)
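
A short, illustrative usage sketch for the helper above (assuming the tokenizer is loaded from this repository with `trust_remote_code=True`; the hand-written Harmony string is a stand-in for real model output):

```python
# Illustrative use of Llmjp4Tokenizer.parse_harmony_message (not an official example).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-4-8b-instruct", trust_remote_code=True)

# A hand-written Harmony-formatted string standing in for generated output.
text = "<|start|>assistant<|channel|>final<|message|>Hello!<|end|>"
token_ids = tokenizer.encode(text, add_special_tokens=False)

for msg in tokenizer.parse_harmony_message(token_ids):
    # Each field is a HarmonySequence of token ids; decode the spans of interest.
    role = tokenizer.decode(msg.role.token_ids) if msg.role else None
    channel = tokenizer.decode(msg.channel.token_ids) if msg.channel else None
    content = tokenizer.decode(msg.content.token_ids) if msg.content else None
    print(role, channel, content, msg.end)
```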
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:4452ee9fc5d9cf7552d377c3724848e8a40a3426d9fd1faf473d513b5af1c4ec
|
||||
size 4982955968
|
||||
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:9331a76ec6d4d9fd323118e27a45a5911c6dce6296f7e8d05e7d0882d31d654f
|
||||
size 4999819320
|
||||
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:70e25cc0fda7d30bbfaf85ccfa77aa027759aa630c5661d65cae959410de3dec
|
||||
size 4915916184
|
||||
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:1635dbb4250a377af62d5b18e5d1e98b0ec277bc5a140c8d100405da3bbdf720
|
||||
size 2281744072
|
||||
298
model.safetensors.index.json
Normal file
@@ -0,0 +1,298 @@
|
||||
{
|
||||
"metadata": {
|
||||
"total_size": 17180401664
|
||||
},
|
||||
"weight_map": {
|
||||
"lm_head.weight": "model-00004-of-00004.safetensors",
|
||||
"model.embed_tokens.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.30.input_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.30.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.norm.weight": "model-00004-of-00004.safetensors"
|
||||
}
|
||||
}
|
||||
51
special_tokens_map.json
Normal file
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<|cls|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|return|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<|mask|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "<|sep|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|unk|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:a35f390c6489427e4dcac7d957f34d9dacd7332972c990f4b4eb6b724c8873f4
|
||||
size 12879473
|
||||
2839
tokenizer_config.json
Normal file
File diff suppressed because it is too large
3
v4_pretraining_overview.png
Normal file
@@ -0,0 +1,3 @@
|
||||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:24c21c60ca5a0b5c9a65841efd9ad98344dd66d4ac7f1fe80ea5100afefe40ef
|
||||
size 281924
|
||||