Initialize the project; model provided by the ModelHub XC community

Model: piotreknow02/GPT-OSS-Cybersecurity-20B-Merged-heretic-ara
Source: Original Platform
Committed by ModelHub XC on 2026-04-22 00:28:29 +08:00
Commit: 253964c8cc
18 changed files with 1236 additions and 0 deletions

.gitattributes vendored Normal file (36 lines)

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md Normal file (135 lines)

@@ -0,0 +1,135 @@
---
license: apache-2.0
base_model:
- sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged
tags:
- cybersecurity
- security
- gpt-oss
- openai
- fine-tuned
- merged
- text-generation
- moe
- heretic
- uncensored
- decensored
- abliterated
- ara
datasets:
- Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset
- AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.0
- trendmicro-ailab/Primus-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
inference: true
---
# This is a decensored version of [sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged](https://huggingface.co/sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0 with the [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **start_layer_index** | 10 |
| **end_layer_index** | 22 |
| **preserve_good_behavior_weight** | 0.9307 |
| **steer_bad_behavior_weight** | 0.0066 |
| **overcorrect_relative_weight** | 1.1973 |
| **neighbor_count** | 2 |
## Performance
| Metric | This model | Original model ([sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged](https://huggingface.co/sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged)) |
| :----- | :--------: | :---------------------------: |
| **PIQA acc_norm** | 0.7802 | *Unknown* |
| **Refusals** | 3/100 | 88/100 |
-----
# GPT-OSS-Cybersecurity-20B-Merged
Fine-tuned **openai/gpt-oss-20b** (a Mixture-of-Experts model: 21B total parameters, 3.6B active per token), specialized for **cybersecurity** tasks.
This is a merged model (LoRA weights merged into base) for easy deployment.
## Model Description
GPT-OSS-20B is a Mixture of Experts (MoE) model; sparse expert routing keeps inference cost well below that of a dense model of the same size.
- **Total Parameters**: 21B
- **Active Parameters**: 3.6B (only the routed experts are executed per token)
- **Architecture**: MoE (Mixture of Experts)
This model was trained on ~50,000 cybersecurity instruction-response pairs from:
- Trendyol Cybersecurity Dataset (35K samples)
- Fenrir v2.0 Dataset (12K samples)
- Primus-Instruct (3x upsampled)
## Training Details
| Parameter | Value |
|-----------|-------|
| Base Model | openai/gpt-oss-20b |
| Architecture | MoE (21B total, 3.6B active) |
| Training Samples | ~50,000 |
| Epochs | 2 |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| Learning Rate | 2e-4 |
| Max Sequence Length | 1024 |
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load in bfloat16 and shard across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged",
    trust_remote_code=True,
)

prompt = "What are the indicators of a ransomware attack?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
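gpt-oss models are trained on the harmony chat format, so prompting through the bundled chat template is usually more reliable than a raw string. A minimal sketch, reusing `model` and `tokenizer` from above (`reasoning_effort` is one of the kwargs documented in the bundled `chat_template.jinja`; `apply_chat_template` forwards extra kwargs to the template):
```python
messages = [{"role": "user", "content": "What are the indicators of a ransomware attack?"}]

# Extra kwargs such as reasoning_effort are passed through to the Jinja
# template, which defaults it to "medium" when omitted.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    reasoning_effort="medium",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```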
## API Usage
```python
import requests

API_URL = "https://YOUR_ENDPOINT_URL/v1/chat/completions"
# OpenAI-compatible chat completions request.
response = requests.post(
    API_URL,
    json={
        "model": "sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged",
        "messages": [{"role": "user", "content": "What is SQL injection?"}],
        "max_tokens": 300,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
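Because the endpoint exposes the standard `/v1/chat/completions` path, the same request can presumably be made through the OpenAI Python SDK. A sketch under that assumption (`YOUR_ENDPOINT_URL` and the `api_key` value are placeholders):
```python
from openai import OpenAI

# Assumes the endpoint is OpenAI-compatible; many self-hosted servers
# (e.g. vLLM-style deployments) accept any api_key string.
client = OpenAI(base_url="https://YOUR_ENDPOINT_URL/v1", api_key="EMPTY")
reply = client.chat.completions.create(
    model="sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged",
    messages=[{"role": "user", "content": "What is SQL injection?"}],
    max_tokens=300,
)
print(reply.choices[0].message.content)
```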
## Cybersecurity Capabilities
- 🔍 Threat analysis and classification
- 🚨 Security alert triage
- 📋 Incident response guidance
- 🦠 Malware analysis
- 📊 MITRE ATT&CK mapping
- 🔐 Vulnerability assessment
- 💉 SQL injection detection
- 🎣 Phishing analysis
- 🔑 CVE knowledge
- 🛡️ Security best practices
## Hardware Requirements
Due to the 21B total parameter count (MoE), the recommended setup is:
- **GPU**: A100 40GB+ or equivalent
- **VRAM**: 40GB+ for BF16 inference
- For smaller GPUs, use 4-bit quantization (see the sketch below)
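
A minimal 4-bit loading sketch using bitsandbytes (an assumption: the card does not prescribe a quantization backend, `bitsandbytes` must be installed separately, and support for this MoE architecture may depend on your `transformers`/`bitsandbytes` versions):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 4-bit weights with bfloat16 compute; roughly quarters weight memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "sainikhiljuluri2015/GPT-OSS-Cybersecurity-20B-Merged",
    quantization_config=bnb_config,
    device_map="auto",
)
```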

chat_template.jinja Normal file (331 lines)

@@ -0,0 +1,331 @@
{#-
In addition to the normal inputs of `messages` and `tools`, this template also accepts the
following kwargs:
- "builtin_tools": A list, can contain "browser" and/or "python".
- "model_identity": A string that optionally describes the model identity.
- "reasoning_effort": A string that describes the reasoning effort, defaults to "medium".
#}
{#- Tool Definition Rendering ============================================== #}
{%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
{%- if param_spec.type == "array" -%}
{%- if param_spec['items'] -%}
{%- if param_spec['items']['type'] == "string" -%}
{{- "string[]" }}
{%- elif param_spec['items']['type'] == "number" -%}
{{- "number[]" }}
{%- elif param_spec['items']['type'] == "integer" -%}
{{- "number[]" }}
{%- elif param_spec['items']['type'] == "boolean" -%}
{{- "boolean[]" }}
{%- else -%}
{%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
{%- if inner_type == "object | object" or inner_type|length > 50 -%}
{{- "any[]" }}
{%- else -%}
{{- inner_type + "[]" }}
{%- endif -%}
{%- endif -%}
{%- if param_spec.nullable -%}
{{- " | null" }}
{%- endif -%}
{%- else -%}
{{- "any[]" }}
{%- if param_spec.nullable -%}
{{- " | null" }}
{%- endif -%}
{%- endif -%}
{%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
{#- Handle array of types like ["object", "object"] from Union[dict, list] #}
{%- if param_spec.type | length > 1 -%}
{{- param_spec.type | join(" | ") }}
{%- else -%}
{{- param_spec.type[0] }}
{%- endif -%}
{%- elif param_spec.oneOf -%}
{#- Handle oneOf schemas - check for complex unions and fallback to any #}
{%- set has_object_variants = false -%}
{%- for variant in param_spec.oneOf -%}
{%- if variant.type == "object" -%}
{%- set has_object_variants = true -%}
{%- endif -%}
{%- endfor -%}
{%- if has_object_variants and param_spec.oneOf|length > 1 -%}
{{- "any" }}
{%- else -%}
{%- for variant in param_spec.oneOf -%}
{{- render_typescript_type(variant, required_params) -}}
{%- if variant.description %}
{{- "// " + variant.description }}
{%- endif -%}
{%- if variant.default is defined %}
{{ "// default: " + variant.default|tojson }}
{%- endif -%}
{%- if not loop.last %}
{{- " | " }}
{% endif -%}
{%- endfor -%}
{%- endif -%}
{%- elif param_spec.type == "string" -%}
{%- if param_spec.enum -%}
{{- '"' + param_spec.enum|join('" | "') + '"' -}}
{%- else -%}
{{- "string" }}
{%- if param_spec.nullable %}
{{- " | null" }}
{%- endif -%}
{%- endif -%}
{%- elif param_spec.type == "number" -%}
{{- "number" }}
{%- elif param_spec.type == "integer" -%}
{{- "number" }}
{%- elif param_spec.type == "boolean" -%}
{{- "boolean" }}
{%- elif param_spec.type == "object" -%}
{%- if param_spec.properties -%}
{{- "{\n" }}
{%- for prop_name, prop_spec in param_spec.properties.items() -%}
{{- prop_name -}}
{%- if prop_name not in (param_spec.required or []) -%}
{{- "?" }}
{%- endif -%}
{{- ": " }}
{{ render_typescript_type(prop_spec, param_spec.required or []) }}
{%- if not loop.last -%}
{{-", " }}
{%- endif -%}
{%- endfor -%}
{{- "}" }}
{%- else -%}
{{- "object" }}
{%- endif -%}
{%- else -%}
{{- "any" }}
{%- endif -%}
{%- endmacro -%}
{%- macro render_tool_namespace(namespace_name, tools) -%}
{{- "## " + namespace_name + "\n\n" }}
{{- "namespace " + namespace_name + " {\n\n" }}
{%- for tool in tools %}
{%- set tool = tool.function %}
{{- "// " + tool.description + "\n" }}
{{- "type "+ tool.name + " = " }}
{%- if tool.parameters and tool.parameters.properties %}
{{- "(_: {\n" }}
{%- for param_name, param_spec in tool.parameters.properties.items() %}
{%- if param_spec.description %}
{{- "// " + param_spec.description + "\n" }}
{%- endif %}
{{- param_name }}
{%- if param_name not in (tool.parameters.required or []) -%}
{{- "?" }}
{%- endif -%}
{{- ": " }}
{{- render_typescript_type(param_spec, tool.parameters.required or []) }}
{%- if param_spec.default is defined -%}
{%- if param_spec.enum %}
{{- ", // default: " + param_spec.default }}
{%- elif param_spec.oneOf %}
{{- "// default: " + param_spec.default }}
{%- else %}
{{- ", // default: " + param_spec.default|tojson }}
{%- endif -%}
{%- endif -%}
{%- if not loop.last %}
{{- ",\n" }}
{%- else %}
{{- ",\n" }}
{%- endif -%}
{%- endfor %}
{{- "}) => any;\n\n" }}
{%- else -%}
{{- "() => any;\n\n" }}
{%- endif -%}
{%- endfor %}
{{- "} // namespace " + namespace_name }}
{%- endmacro -%}
{%- macro render_builtin_tools(browser_tool, python_tool) -%}
{%- if browser_tool %}
{{- "## browser\n\n" }}
{{- "// Tool for browsing.\n" }}
{{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
{{- "// Cite information from the tool using the following format:\n" }}
{{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
{{- "// Do not quote more than 10 words directly from the tool output.\n" }}
{{- "// sources=web (default: web)\n" }}
{{- "namespace browser {\n\n" }}
{{- "// Searches for information related to `query` and displays `topn` results.\n" }}
{{- "type search = (_: {\n" }}
{{- "query: string,\n" }}
{{- "topn?: number, // default: 10\n" }}
{{- "source?: string,\n" }}
{{- "}) => any;\n\n" }}
{{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
{{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
{{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
{{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
{{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
{{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
{{- "type open = (_: {\n" }}
{{- "id?: number | string, // default: -1\n" }}
{{- "cursor?: number, // default: -1\n" }}
{{- "loc?: number, // default: -1\n" }}
{{- "num_lines?: number, // default: -1\n" }}
{{- "view_source?: boolean, // default: false\n" }}
{{- "source?: string,\n" }}
{{- "}) => any;\n\n" }}
{{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
{{- "type find = (_: {\n" }}
{{- "pattern: string,\n" }}
{{- "cursor?: number, // default: -1\n" }}
{{- "}) => any;\n\n" }}
{{- "} // namespace browser\n\n" }}
{%- endif -%}
{%- if python_tool %}
{{- "## python\n\n" }}
{{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
{{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
{%- endif -%}
{%- endmacro -%}
{#- System Message Construction ============================================ #}
{%- macro build_system_message() -%}
{%- if model_identity is not defined %}
{%- set model_identity = "You are ChatGPT, a large language model trained by OpenAI." %}
{%- endif %}
{{- model_identity + "\n" }}
{{- "Knowledge cutoff: 2024-06\n" }}
{{- "Current date: " + strftime_now("%Y-%m-%d") + "\n\n" }}
{%- if reasoning_effort is not defined %}
{%- set reasoning_effort = "medium" %}
{%- endif %}
{{- "Reasoning: " + reasoning_effort + "\n\n" }}
{%- if builtin_tools %}
{{- "# Tools\n\n" }}
{%- set available_builtin_tools = namespace(browser=false, python=false) %}
{%- for tool in builtin_tools %}
{%- if tool == "browser" %}
{%- set available_builtin_tools.browser = true %}
{%- elif tool == "python" %}
{%- set available_builtin_tools.python = true %}
{%- endif %}
{%- endfor %}
{{- render_builtin_tools(available_builtin_tools.browser, available_builtin_tools.python) }}
{%- endif -%}
{{- "# Valid channels: analysis, commentary, final. Channel must be included for every message." }}
{%- if tools -%}
{{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
{%- endif -%}
{%- endmacro -%}
{#- Main Template Logic ================================================= #}
{#- Set defaults #}
{#- Render system message #}
{{- "<|start|>system<|message|>" }}
{{- build_system_message() }}
{{- "<|end|>" }}
{#- Extract developer message #}
{%- if messages[0].role == "developer" or messages[0].role == "system" %}
{%- set developer_message = messages[0].content %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set developer_message = "" %}
{%- set loop_messages = messages %}
{%- endif %}
{#- Render developer message #}
{%- if developer_message or tools %}
{{- "<|start|>developer<|message|>" }}
{%- if developer_message %}
{{- "# Instructions\n\n" }}
{{- developer_message }}
{{- "\n\n" }}
{%- endif %}
{%- if tools -%}
{{- "# Tools\n\n" }}
{{- render_tool_namespace("functions", tools) }}
{%- endif -%}
{{- "<|end|>" }}
{%- endif %}
{#- Render messages #}
{%- set last_tool_call = namespace(name=none) %}
{%- for message in loop_messages -%}
{#- At this point only assistant/user/tool messages should remain #}
{%- if message.role == 'assistant' -%}
{#- Checks to ensure the messages are being passed in the format we expect #}
{%- if "content" in message %}
{%- if "<|channel|>analysis<|message|>" in message.content or "<|channel|>final<|message|>" in message.content %}
{{- raise_exception("You have passed a message containing <|channel|> tags in the content field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
{%- endif %}
{%- endif %}
{%- if "thinking" in message %}
{%- if "<|channel|>analysis<|message|>" in message.thinking or "<|channel|>final<|message|>" in message.thinking %}
{{- raise_exception("You have passed a message containing <|channel|> tags in the thinking field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
{%- endif %}
{%- endif %}
{%- if "tool_calls" in message %}
{#- We need very careful handling here - we want to drop the tool call analysis message if the model #}
{#- has output a later <|final|> message, but otherwise we want to retain it. This is the only case #}
{#- when we render CoT/analysis messages in inference. #}
{%- set future_final_message = namespace(found=false) %}
{%- for future_message in loop_messages[loop.index:] %}
{%- if future_message.role == 'assistant' and "tool_calls" not in future_message %}
{%- set future_final_message.found = true %}
{%- endif %}
{%- endfor %}
{#- We assume max 1 tool call per message, and so we infer the tool call name #}
{#- in "tool" messages from the most recent assistant tool call name #}
{%- set tool_call = message.tool_calls[0] %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{%- if message.content and message.thinking %}
{{- raise_exception("Cannot pass both content and thinking in an assistant message with tool calls! Put the analysis message in one or the other, but not both.") }}
{%- elif message.content and not future_final_message.found %}
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.content + "<|end|>" }}
{%- elif message.thinking and not future_final_message.found %}
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
{%- endif %}
{{- "<|start|>assistant to=" }}
{{- "functions." + tool_call.name + "<|channel|>commentary " }}
{{- (tool_call.content_type if tool_call.content_type is defined else "json") + "<|message|>" }}
{{- tool_call.arguments|tojson }}
{{- "<|call|>" }}
{%- set last_tool_call.name = tool_call.name %}
{%- elif loop.last and not add_generation_prompt %}
{#- Only render the CoT if the final turn is an assistant turn and add_generation_prompt is false #}
{#- This is a situation that should only occur in training, never in inference. #}
{%- if "thinking" in message %}
{{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
{%- endif %}
{#- <|return|> indicates the end of generation, but <|end|> does not #}
{#- <|return|> should never be an input to the model, but we include it as the final token #}
{#- when training, so the model learns to emit it. #}
{{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|return|>" }}
{%- else %}
{#- CoT is dropped during all previous turns, so we never render it for inference #}
{{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|end|>" }}
{%- set last_tool_call.name = none %}
{%- endif %}
{%- elif message.role == 'tool' -%}
{%- if last_tool_call.name is none %}
{{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
{%- endif %}
{{- "<|start|>functions." + last_tool_call.name }}
{{- " to=assistant<|channel|>commentary<|message|>" + message.content|tojson + "<|end|>" }}
{%- elif message.role == 'user' -%}
{{- "<|start|>user<|message|>" + message.content + "<|end|>" }}
{%- endif -%}
{%- endfor -%}
{#- Generation prompt #}
{%- if add_generation_prompt -%}
<|start|>assistant
{%- endif -%}
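
The header comment of this template documents the extra kwargs `tools`, `builtin_tools`, `model_identity`, and `reasoning_effort`. A hedged sketch of exercising the tool-rendering path via `transformers` (the repo id follows the commit header above; the `lookup_cve` schema is purely illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "piotreknow02/GPT-OSS-Cybersecurity-20B-Merged-heretic-ara"
)

# Illustrative function schema; the template's render_tool_namespace macro
# turns it into a TypeScript-style declaration in the developer message.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_cve",
        "description": "Look up a CVE record by identifier.",
        "parameters": {
            "type": "object",
            "properties": {
                "cve_id": {"type": "string", "description": "e.g. CVE-2021-44228"}
            },
            "required": ["cve_id"],
        },
    },
}]

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize CVE-2021-44228."}],
    tools=tools,
    builtin_tools=["python"],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # shows the rendered system + developer messages
```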

config.json Normal file (68 lines)

@@ -0,0 +1,68 @@
{
"architectures": [
"GptOssForCausalLM"
],
"attention_bias": true,
"attention_dropout": 0.0,
"dtype": "bfloat16",
"eos_token_id": 200002,
"experts_per_token": 4,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2880,
"initial_context_length": 4096,
"initializer_range": 0.02,
"intermediate_size": 2880,
"layer_types": [
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 131072,
"model_type": "gpt_oss",
"num_attention_heads": 64,
"num_experts_per_tok": 4,
"num_hidden_layers": 24,
"num_key_value_heads": 8,
"num_local_experts": 32,
"output_router_logits": false,
"pad_token_id": 199999,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"beta_fast": 32.0,
"beta_slow": 1.0,
"factor": 32.0,
"original_max_position_embeddings": 4096,
"rope_type": "yarn",
"truncate": false
},
"rope_theta": 150000,
"router_aux_loss_coef": 0.9,
"sliding_window": 128,
"swiglu_limit": 7.0,
"tie_word_embeddings": false,
"transformers_version": "4.57.6",
"use_cache": true,
"vocab_size": 201088
}
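
A quick way to inspect the routing facts above (a sketch; `AutoConfig` reads this config.json, and the repo id follows the commit header):
```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("piotreknow02/GPT-OSS-Cybersecurity-20B-Merged-heretic-ara")
# 32 local experts with 4 routed per token: only a fraction of the expert
# weights (the "3.6B active" figure in the README) runs for any given token.
print(cfg.num_local_experts, cfg.num_experts_per_tok, cfg.num_hidden_layers)
```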

generation_config.json Normal file (11 lines)

@@ -0,0 +1,11 @@
{
"bos_token_id": 199998,
"do_sample": true,
"eos_token_id": [
200002,
199999,
200012
],
"pad_token_id": 199999,
"transformers_version": "4.57.6"
}

model-00001-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7190f9446e66c9336f3cfa49a7eab92ea26d76e3f765ca84c3e725e6e78a0ff1
size 4504304664

model-00002-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33e06383ca2b27d4afc69cd2c8481a3fa404f83a633834e780730cd14edb28ea
size 4939127656

model-00003-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e961d2832c94f75fcc07530196a73b012612f9238810ebb3145d7d7d3329d880
size 4939127656

model-00004-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd7423d07106372bfc4a7a4ddcff18448bc120470166ec0c5d86017ff007bab2
size 4939127680

model-00005-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d272d51a45210f3d6f73975691da3886fc6336386382a778e86cebf553270ed2
size 4939127704

model-00006-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c6f26d8af339552925944259fe929af95f5a2fdd1b7baa993e96b43e8ed2e5c
size 4939127704

model-00007-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c3b422af85f114c7243fa07ec038331c7c3f46ba00f92a4cf36dfa4d5ea94fe
size 4939127704

model-00008-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d49073e16530d8a2aa7323300a159f3b4fa0d183a3670122eb326520b5486c03
size 4939127704

model-00009-of-00009.safetensors Normal file (LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c1951ff65d233bcb7f50485d294cced6497b8e0d9707f57087912fc2352492e
size 2751362856

model.safetensors.index.json Normal file (419 lines)

@@ -0,0 +1,419 @@
{
"metadata": {
"total_parameters": 20914757184,
"total_size": 41829514368
},
"weight_map": {
"lm_head.weight": "model-00009-of-00009.safetensors",
"model.embed_tokens.weight": "model-00001-of-00009.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.experts.down_proj": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.experts.down_proj_bias": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.experts.gate_up_proj": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.experts.gate_up_proj_bias": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.router.bias": "model-00001-of-00009.safetensors",
"model.layers.0.mlp.router.weight": "model-00001-of-00009.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.k_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.o_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.q_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.sinks": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.v_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.experts.down_proj": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.experts.down_proj_bias": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.experts.gate_up_proj": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.experts.gate_up_proj_bias": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.router.bias": "model-00001-of-00009.safetensors",
"model.layers.1.mlp.router.weight": "model-00001-of-00009.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.k_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.o_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.q_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.sinks": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.v_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.10.input_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.experts.down_proj": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.experts.down_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.experts.gate_up_proj": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.experts.gate_up_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.router.bias": "model-00004-of-00009.safetensors",
"model.layers.10.mlp.router.weight": "model-00004-of-00009.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.sinks": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.11.input_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.11.mlp.experts.down_proj": "model-00005-of-00009.safetensors",
"model.layers.11.mlp.experts.down_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.11.mlp.experts.gate_up_proj": "model-00005-of-00009.safetensors",
"model.layers.11.mlp.experts.gate_up_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.11.mlp.router.bias": "model-00004-of-00009.safetensors",
"model.layers.11.mlp.router.weight": "model-00004-of-00009.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.11.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.sinks": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.12.input_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.experts.down_proj": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.experts.down_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.experts.gate_up_proj": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.experts.gate_up_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.router.bias": "model-00005-of-00009.safetensors",
"model.layers.12.mlp.router.weight": "model-00005-of-00009.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.sinks": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.13.input_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.experts.down_proj": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.experts.down_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.experts.gate_up_proj": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.experts.gate_up_proj_bias": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.router.bias": "model-00005-of-00009.safetensors",
"model.layers.13.mlp.router.weight": "model-00005-of-00009.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.sinks": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.14.input_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.14.mlp.experts.down_proj": "model-00006-of-00009.safetensors",
"model.layers.14.mlp.experts.down_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.14.mlp.experts.gate_up_proj": "model-00006-of-00009.safetensors",
"model.layers.14.mlp.experts.gate_up_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.14.mlp.router.bias": "model-00005-of-00009.safetensors",
"model.layers.14.mlp.router.weight": "model-00005-of-00009.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.14.self_attn.k_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.o_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.q_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.sinks": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.v_proj.bias": "model-00005-of-00009.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00005-of-00009.safetensors",
"model.layers.15.input_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.experts.down_proj": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.experts.down_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.experts.gate_up_proj": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.experts.gate_up_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.router.bias": "model-00006-of-00009.safetensors",
"model.layers.15.mlp.router.weight": "model-00006-of-00009.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.sinks": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.16.input_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.experts.down_proj": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.experts.down_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.experts.gate_up_proj": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.experts.gate_up_proj_bias": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.router.bias": "model-00006-of-00009.safetensors",
"model.layers.16.mlp.router.weight": "model-00006-of-00009.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.sinks": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.17.input_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.17.mlp.experts.down_proj": "model-00007-of-00009.safetensors",
"model.layers.17.mlp.experts.down_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.17.mlp.experts.gate_up_proj": "model-00007-of-00009.safetensors",
"model.layers.17.mlp.experts.gate_up_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.17.mlp.router.bias": "model-00006-of-00009.safetensors",
"model.layers.17.mlp.router.weight": "model-00006-of-00009.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.17.self_attn.k_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.o_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.q_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.sinks": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.v_proj.bias": "model-00006-of-00009.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00006-of-00009.safetensors",
"model.layers.18.input_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.experts.down_proj": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.experts.down_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.experts.gate_up_proj": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.experts.gate_up_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.router.bias": "model-00007-of-00009.safetensors",
"model.layers.18.mlp.router.weight": "model-00007-of-00009.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.sinks": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.19.input_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.experts.down_proj": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.experts.down_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.experts.gate_up_proj": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.experts.gate_up_proj_bias": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.router.bias": "model-00007-of-00009.safetensors",
"model.layers.19.mlp.router.weight": "model-00007-of-00009.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.sinks": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.2.input_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.2.mlp.experts.down_proj": "model-00002-of-00009.safetensors",
"model.layers.2.mlp.experts.down_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.2.mlp.experts.gate_up_proj": "model-00002-of-00009.safetensors",
"model.layers.2.mlp.experts.gate_up_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.2.mlp.router.bias": "model-00001-of-00009.safetensors",
"model.layers.2.mlp.router.weight": "model-00001-of-00009.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.2.self_attn.k_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.o_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.q_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.sinks": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.v_proj.bias": "model-00001-of-00009.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00009.safetensors",
"model.layers.20.input_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.20.mlp.experts.down_proj": "model-00008-of-00009.safetensors",
"model.layers.20.mlp.experts.down_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.20.mlp.experts.gate_up_proj": "model-00008-of-00009.safetensors",
"model.layers.20.mlp.experts.gate_up_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.20.mlp.router.bias": "model-00007-of-00009.safetensors",
"model.layers.20.mlp.router.weight": "model-00007-of-00009.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.20.self_attn.k_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.o_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.q_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.sinks": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.v_proj.bias": "model-00007-of-00009.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00007-of-00009.safetensors",
"model.layers.21.input_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.experts.down_proj": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.experts.down_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.experts.gate_up_proj": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.experts.gate_up_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.router.bias": "model-00008-of-00009.safetensors",
"model.layers.21.mlp.router.weight": "model-00008-of-00009.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.k_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.o_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.q_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.sinks": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.v_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.22.input_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.experts.down_proj": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.experts.down_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.experts.gate_up_proj": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.experts.gate_up_proj_bias": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.router.bias": "model-00008-of-00009.safetensors",
"model.layers.22.mlp.router.weight": "model-00008-of-00009.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.k_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.o_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.q_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.sinks": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.v_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.23.input_layernorm.weight": "model-00009-of-00009.safetensors",
"model.layers.23.mlp.experts.down_proj": "model-00009-of-00009.safetensors",
"model.layers.23.mlp.experts.down_proj_bias": "model-00009-of-00009.safetensors",
"model.layers.23.mlp.experts.gate_up_proj": "model-00009-of-00009.safetensors",
"model.layers.23.mlp.experts.gate_up_proj_bias": "model-00009-of-00009.safetensors",
"model.layers.23.mlp.router.bias": "model-00008-of-00009.safetensors",
"model.layers.23.mlp.router.weight": "model-00008-of-00009.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00009-of-00009.safetensors",
"model.layers.23.self_attn.k_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.o_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.q_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.sinks": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.v_proj.bias": "model-00008-of-00009.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00008-of-00009.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.experts.down_proj": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.experts.down_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.experts.gate_up_proj": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.experts.gate_up_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.router.bias": "model-00002-of-00009.safetensors",
"model.layers.3.mlp.router.weight": "model-00002-of-00009.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.sinks": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.experts.down_proj": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.experts.down_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.experts.gate_up_proj": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.experts.gate_up_proj_bias": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.router.bias": "model-00002-of-00009.safetensors",
"model.layers.4.mlp.router.weight": "model-00002-of-00009.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.sinks": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.5.input_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.5.mlp.experts.down_proj": "model-00003-of-00009.safetensors",
"model.layers.5.mlp.experts.down_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.5.mlp.experts.gate_up_proj": "model-00003-of-00009.safetensors",
"model.layers.5.mlp.experts.gate_up_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.5.mlp.router.bias": "model-00002-of-00009.safetensors",
"model.layers.5.mlp.router.weight": "model-00002-of-00009.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.5.self_attn.k_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.o_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.q_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.sinks": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.v_proj.bias": "model-00002-of-00009.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00009.safetensors",
"model.layers.6.input_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.experts.down_proj": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.experts.down_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.experts.gate_up_proj": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.experts.gate_up_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.router.bias": "model-00003-of-00009.safetensors",
"model.layers.6.mlp.router.weight": "model-00003-of-00009.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.sinks": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.7.input_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.experts.down_proj": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.experts.down_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.experts.gate_up_proj": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.experts.gate_up_proj_bias": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.router.bias": "model-00003-of-00009.safetensors",
"model.layers.7.mlp.router.weight": "model-00003-of-00009.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.sinks": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.8.input_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.8.mlp.experts.down_proj": "model-00004-of-00009.safetensors",
"model.layers.8.mlp.experts.down_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.8.mlp.experts.gate_up_proj": "model-00004-of-00009.safetensors",
"model.layers.8.mlp.experts.gate_up_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.8.mlp.router.bias": "model-00003-of-00009.safetensors",
"model.layers.8.mlp.router.weight": "model-00003-of-00009.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.8.self_attn.k_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.o_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.q_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.sinks": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.v_proj.bias": "model-00003-of-00009.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00003-of-00009.safetensors",
"model.layers.9.input_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.experts.down_proj": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.experts.down_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.experts.gate_up_proj": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.experts.gate_up_proj_bias": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.router.bias": "model-00004-of-00009.safetensors",
"model.layers.9.mlp.router.weight": "model-00004-of-00009.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.k_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.o_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.q_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.sinks": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.v_proj.bias": "model-00004-of-00009.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00004-of-00009.safetensors",
"model.norm.weight": "model-00009-of-00009.safetensors"
}
}
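
The weight map above spreads 24 layers across nine shards. A minimal local sanity check (a sketch; `snapshot_dir` is a hypothetical download path):
```python
import json
import os

snapshot_dir = "./GPT-OSS-Cybersecurity-20B-Merged-heretic-ara"  # hypothetical local path

# Confirm every shard referenced by the weight map exists on disk.
with open(os.path.join(snapshot_dir, "model.safetensors.index.json")) as f:
    index = json.load(f)

missing = sorted({shard for shard in index["weight_map"].values()
                  if not os.path.exists(os.path.join(snapshot_dir, shard))})
print("missing shards:", missing or "none")
```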

special_tokens_map.json Normal file (23 lines)

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<|startoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|return|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file (LFS pointer, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0614fe83cadab421296e664e1f48f4261fa8fef6e03e63bb75c20f38e37d07d3
size 27868174

tokenizer_config.json Normal file (183 lines)

@@ -0,0 +1,183 @@
{
"added_tokens_decoder": {
"199998": {
"content": "<|startoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"199999": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200000": {
"content": "<|reserved_200000|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200001": {
"content": "<|reserved_200001|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200002": {
"content": "<|return|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200003": {
"content": "<|constrain|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200004": {
"content": "<|reserved_200004|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200005": {
"content": "<|channel|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200006": {
"content": "<|start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200007": {
"content": "<|end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200008": {
"content": "<|message|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200009": {
"content": "<|reserved_200009|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200010": {
"content": "<|reserved_200010|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200011": {
"content": "<|reserved_200011|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200012": {
"content": "<|call|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200013": {
"content": "<|reserved_200013|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200014": {
"content": "<|reserved_200014|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200015": {
"content": "<|reserved_200015|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200016": {
"content": "<|reserved_200016|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200017": {
"content": "<|reserved_200017|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"200018": {
"content": "<|endofprompt|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "<|startoftext|>",
"clean_up_tokenization_spaces": false,
"eos_token": "<|return|>",
"extra_special_tokens": {},
"model_input_names": [
"input_ids",
"attention_mask"
],
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<|endoftext|>",
"tokenizer_class": "PreTrainedTokenizerFast"
}