Initialize project; model provided by the ModelHub XC community

Model: RthItalia/NanoLLM-Qwen2.5-3B-v3.1
Source: Original Platform
ModelHub XC
2026-05-06 07:44:15 +08:00
commit 3b0ebe82cb
20 changed files with 457472 additions and 0 deletions

39
.gitattributes vendored Normal file

@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
quantized_modules.pt filter=lfs diff=lfs merge=lfs -text
full_single/tokenizer.json filter=lfs diff=lfs merge=lfs -text
nano_compact/tokenizer.json filter=lfs diff=lfs merge=lfs -text

54
LICENSE Normal file

@@ -0,0 +1,54 @@
Qwen RESEARCH LICENSE AGREEMENT
Qwen RESEARCH LICENSE AGREEMENT Release Date: September 19, 2024
By clicking to agree or by using or distributing any portion or element of the Qwen Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
1. Definitions
a. This Qwen RESEARCH LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
b. "We" (or "Us") shall mean Alibaba Cloud.
c. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
d. "Third Parties" shall mean individuals or legal entities that are not under common control with us or you.
e. "Qwen" shall mean the large language models, and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by us.
f. "Materials" shall mean, collectively, Alibaba Cloud's proprietary Qwen and Documentation (and any portion thereof) made available under this Agreement.
g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
h. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
i. "Non-Commercial" shall mean for research or evaluation purposes only.
2. Grant of Rights
a. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Alibaba Cloud's intellectual property or other rights owned by us embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials FOR NON-COMMERCIAL PURPOSES ONLY.
b. If you are commercially using the Materials, you shall request a license from us.
3. Redistribution
You may distribute copies or make the Materials, or derivative works thereof, available as part of a product or service that contains any of them, with or without modifications, and in Source or Object form, provided that you meet the following conditions:
a. You shall give any other recipients of the Materials or derivative works a copy of this Agreement;
b. You shall cause any modified files to carry prominent notices stating that you changed the files;
c. You shall retain in all copies of the Materials that you distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "Qwen is licensed under the Qwen RESEARCH LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved."; and
d. You may add your own copyright statement to your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of your modifications, or for any such derivative works as a whole, provided your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.
4. Rules of use
a. The Materials may be subject to export controls or restrictions in China, the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the Materials.
b. If you use the Materials or any outputs or results therefrom to create, train, fine-tune, or improve an AI model that is distributed or made available, you shall prominently display “Built with Qwen” or “Improved using Qwen” in the related product documentation.
5. Intellectual Property
a. We retain ownership of all intellectual property rights in and to the Materials and derivatives made by or for us. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by you, you are and will be the owner of such derivative works and modifications.
b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of us, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
c. If you commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against us or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by you, then all licenses granted to you under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.
6. Disclaimer of Warranty and Limitation of Liability
a. We are not obligated to support, update, provide training for, or develop any further version of the Qwen Materials or to grant any license thereto.
b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
c. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, NO MATTER HOW IT'S CAUSED.
d. You will defend, indemnify and hold harmless us from and against any claim by any third party arising out of or related to your use or distribution of the Materials.
7. Survival and Termination.
a. The term of this Agreement shall commence upon your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you must delete and cease use of the Materials. Sections 6 and 8 shall survive the termination of this Agreement.
8. Governing Law and Jurisdiction.
a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.
9. Other Terms and Conditions.
a. Any arrangements, understandings, or agreements regarding the Material not stated herein are separate from and independent of the terms and conditions of this Agreement. You shall request a separate license from us, if you use the Materials in ways not expressly agreed to in this Agreement.
b. We shall not be bound by any additional or different terms or conditions communicated by you unless expressly agreed.

163
README.md Normal file

@@ -0,0 +1,163 @@
---
language:
- en
license: other
library_name: transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- qwen2.5
- quantization
- mixed-precision
- custom-code
- text-generation
- nanollm
model-index:
- name: nano_compact_3b_qkvfp16
  results:
  - task:
      type: text-generation
    dataset:
      name: Internal 4-prompt smoke suite
      type: internal
    metrics:
    - type: model_size_gb
      value: 2.3432
    - type: vram_load_gb
      value: 2.3432
    - type: vram_peak_generate_gb
      value: 2.44
    - type: baseline_true_8bit_load_gb
      value: 3.1703
    - type: baseline_true_8bit_peak_gb
      value: 3.21
---
# Nano Compact 3B QKV-FP16
`RthItalia/nano_compact_3b_qkvfp16` is a validated, compact, self-contained variant derived from `Qwen/Qwen2.5-3B-Instruct`.
This release is not the original overlay artifact; it is the final exported self-contained folder, which loads directly with `transformers` and `trust_remote_code=True`.
## What This Variant Is
This model uses a mixed runtime policy:
- `q_proj`, `k_proj`, `v_proj`: stored and loaded in `fp16`
- `o_proj` and most of the remaining transformer body: stored in Nano compact format
- `model.embed_tokens`: stored as a single quantized copy
- `lm_head`: tied custom head over the quantized embeddings
The objective of this policy is not maximum compression at any cost. It is the best validated tradeoff found between:
- disk size
- VRAM usage
- quality relative to the true `8bit` baseline
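A quick way to confirm the policy on a loaded checkpoint is to walk the module tree and tally the class behind each attention projection. A minimal sketch, assuming the model has already been loaded as shown in the How To Load section below (class names such as `NanoTrueQuantLinear` come from this repo's `modeling_nanollm.py`):
```python
from collections import Counter

# Tally which implementation backs each attention projection.
kinds = Counter()
for name, mod in model.named_modules():
    if name.endswith(("q_proj", "k_proj", "v_proj", "o_proj")):
        proj = name.rsplit(".", 1)[-1]
        kinds[(proj, type(mod).__name__)] += 1

for (proj, cls), n in sorted(kinds.items()):
    print(f"{proj:7s} -> {cls:22s} x{n}")
# Expected under this policy: q/k/v as plain fp16 Linear layers,
# o_proj as a Nano quantized linear.
```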
## Validated Runtime Envelope
Measured on the validated `3B` run:
- model size: `2.3432 GB`
- allocated after load: `2.3432 GB`
- peak generation VRAM: `~2.44 GB`
True `8bit` baseline used for comparison:
- allocated after load: `3.1703 GB`
- peak generation VRAM: `~3.21 GB`
This winner variant therefore keeps a meaningful VRAM advantage over the true `8bit` baseline while recovering enough quality to pass the smoke comparison used during validation.
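For context, figures like these are typically produced with PyTorch's CUDA memory counters. A minimal sketch of that measurement pattern, not the exact validation harness:
```python
import torch
from transformers import AutoModelForCausalLM

torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()

model = AutoModelForCausalLM.from_pretrained(
    "RthItalia/nano_compact_3b_qkvfp16",
    trust_remote_code=True,
    device_map="cuda",
    torch_dtype=torch.float16,
).eval()
print(f"allocated after load: {torch.cuda.memory_allocated() / 1024**3:.4f} GB")

# ... run a generate() call here, then:
print(f"peak during generate: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```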
## Quality Claim
The quality claim for this release is intentionally narrow:
- it was compared against the true `8bit` baseline on a small internal prompt suite
- it is not claimed to match the full original model in all tasks
- it is not claimed to outperform the base model
During development, more aggressive variants were evaluated, such as:
- a fully tied quantized head (`tiedq`)
- fully quantized attention

These reached better size and VRAM numbers but failed the quality gate against the true `8bit` reference.
`qkvfp16` was the first variant that restored acceptable behavior on the reference prompt set while keeping a substantial memory advantage.
## How To Load
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "RthItalia/nano_compact_3b_qkvfp16"

tok = AutoTokenizer.from_pretrained(
    repo_id,
    use_fast=True,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    device_map="cuda",
    torch_dtype=torch.float16,
).eval()
```
## Example Generation
```python
messages = [
    {"role": "user", "content": "Explain what a neural network is in exactly 3 simple sentences."}
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inp = tok(text, return_tensors="pt").to(next(model.parameters()).device)

with torch.no_grad():
    out = model.generate(
        **inp,
        max_new_tokens=120,
        do_sample=False,
        repetition_penalty=1.08,
        eos_token_id=tok.eos_token_id,
        pad_token_id=tok.eos_token_id,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inp["input_ids"].shape[-1]:], skip_special_tokens=True))
```
## Requirements
```bash
pip install torch transformers accelerate safetensors
```
`bitsandbytes` is not required for this exported winner variant at runtime.
## Important Notes
- `trust_remote_code=True` is required.
- The custom runtime uses a `NanoTiedLMHead` implementation that ties output logits to the quantized embedding table without registering the embedding module twice.
- The custom linear layers use chunked forward paths to keep peak VRAM under control (a sketch of the idea follows below).
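The chunking idea in the last note can be illustrated in a few lines. A sketch only, assuming per-row int8 weights `q` with fp16 scales `scale`; the shipped layers may chunk differently:
```python
import torch

def chunked_int8_matmul(x, q, scale, chunk_rows=4096):
    # Dequantize `chunk_rows` weight rows at a time, so the full fp16
    # weight matrix never materializes on the GPU at once.
    outs = []
    for start in range(0, q.shape[0], chunk_rows):
        w = q[start:start + chunk_rows].to(x.device, torch.float16)
        w = w * scale[start:start + chunk_rows].to(x.device).unsqueeze(1)
        outs.append(x @ w.t())  # partial output columns for this row chunk
    return torch.cat(outs, dim=-1)
```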
## Limitations
- Validation was narrow and engineering-driven, not a full benchmark suite.
- This release is specifically tuned around `Qwen/Qwen2.5-3B-Instruct`.
- It should be treated as a compact experimental runtime artifact, not as a drop-in scientific proof of broader architectural claims.
## License Note
The weights are derived from `Qwen/Qwen2.5-3B-Instruct`, but this compact release follows the licensing and distribution terms chosen for this Nano release repository.
For that reason the model card metadata uses `license: other` instead of asserting that the upstream Qwen RESEARCH LICENSE alone covers the full release package.
## Provenance
- base model: `Qwen/Qwen2.5-3B-Instruct`
- winner policy name: `qkvfp16`
- published repo: `RthItalia/nano_compact_3b_qkvfp16`
---

27
config.json Normal file

@@ -0,0 +1,27 @@
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 32768,
  "max_window_layers": 70,
  "model_type": "qwen2",
  "num_attention_heads": 16,
  "num_hidden_layers": 36,
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.43.1",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}

14
generation_config.json Normal file

@@ -0,0 +1,14 @@
{
  "bos_token_id": 151643,
  "pad_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "transformers_version": "4.37.0"
}

119
load_artifact.py Normal file

@@ -0,0 +1,119 @@
"""Loader NANO-v3.1 UNIVERSAL (Inference Only)"""
import os
import json
from pathlib import Path
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
class TrueQuantLinear(nn.Module):
def __init__(self, pq, ps, pi, dq, ds, di, out_features, bias=None, bits=8, device="cuda:0"):
super().__init__()
self.out_features = out_features
self.bits = int(bits)
self.register_buffer("pq", pq.to(device=device, dtype=torch.int8))
self.register_buffer("ps", ps.to(device=device, dtype=torch.float16))
self.register_buffer("pi", pi.to(device=device, dtype=torch.long))
self.register_buffer("dq", dq.to(device=device, dtype=torch.int8))
self.register_buffer("ds", ds.to(device=device, dtype=torch.float16))
self.register_buffer("di", di.to(device=device, dtype=torch.long))
if bias is not None:
self.register_buffer("bias", bias.to(device=device, dtype=torch.float16))
else:
self.bias = None
def forward(self, x):
d, dt = x.device, x.dtype
f = x.to(torch.float16).reshape(-1, x.shape[-1])
o = torch.zeros(f.shape[0], self.out_features, dtype=torch.float16, device=d)
if self.pq.shape[0] > 0:
o.index_copy_(-1, self.pi.to(d), f @ (self.pq.to(d, torch.float16) * self.ps.to(d).unsqueeze(1)).t())
if self.dq.shape[0] > 0:
o.index_copy_(-1, self.di.to(d), f @ (self.dq.to(d, torch.float16) * self.ds.to(d).unsqueeze(1)).t())
if self.bias is not None:
o = o + self.bias.to(d)
return o.reshape(*x.shape[:-1], self.out_features).to(dt)
def _set(root, name, value):
parts = name.split(".")
parent = root
for p in parts[:-1]:
parent = parent[int(p)] if p.isdigit() else getattr(parent, p)
if parts[-1].isdigit():
parent[int(parts[-1])] = value
else:
setattr(parent, parts[-1], value)
def get_module(root, name):
cur = root
for p in name.split("."):
cur = cur[int(p)] if p.isdigit() else getattr(cur, p)
return cur
def load_artifact(artifact_dir):
d = Path(artifact_dir)
spec = json.loads((d / "spec.json").read_text("utf-8"))
state = torch.load(d / "quantized_modules.pt", map_location="cpu")
use_4bit = os.getenv("NANO_LOAD_4BIT", "0").strip().lower() in {"1", "true", "yes", "on"}
qcfg = (
BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
if use_4bit
else BitsAndBytesConfig(load_in_8bit=True)
)
model = AutoModelForCausalLM.from_pretrained(
str(d),
quantization_config=qcfg,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(str(d), use_fast=True)
if tokenizer.pad_token_id is None:
tokenizer.pad_token = tokenizer.eos_token
for name, s in state.items():
dev = next(get_module(model, name).parameters()).device
bits = s["bits"]
if "deg_q_packed" in s:
pk, pad = s["deg_q_packed"], s["pad"]
if bits == 2:
dq = torch.stack([pk & 3, (pk >> 2) & 3, (pk >> 4) & 3, (pk >> 6) & 3], dim=-1).view(pk.shape[0], -1)
if pad > 0:
dq = dq[:, :-pad]
dq = dq.to(torch.int8) - 1
else:
dq = torch.stack([pk & 15, (pk >> 4) & 15], dim=-1).view(pk.shape[0], -1)
if pad > 0:
dq = dq[:, :-pad]
dq = dq.to(torch.int8) - 7
else:
dq = s.get("deg_q", torch.zeros(0, dtype=torch.int8))
_set(
model,
name,
TrueQuantLinear(
s["prot_q"],
s["prot_scale"],
s["prot_idx"],
dq,
s["deg_scale"],
s["deg_idx"],
s["out_features"],
s.get("bias"),
bits,
device=str(dev),
),
)
return model.eval(), tokenizer, spec
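A minimal way to exercise this loader end to end (the directory path is illustrative; point it at a local clone of this repo):
```python
model, tokenizer, spec = load_artifact("./nano_artifact")
print(spec["format"], spec["base_model_id"])

inp = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
out = model.generate(**inp, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```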

151387
merges.txt Normal file

File diff suppressed because it is too large

118
modeling_nanollm.py Normal file

@@ -0,0 +1,118 @@
import torch
import torch.nn as nn
from transformers.models.qwen2.configuration_qwen2 import Qwen2Config
from transformers.models.qwen2.modeling_qwen2 import Qwen2ForCausalLM


class NanoInt8Linear(nn.Module):
    """Per-row int8 weights with fp16 scales, dequantized on the fly."""

    def __init__(self, in_features, out_features, has_bias=False):
        super().__init__()
        self.in_features = int(in_features)
        self.out_features = int(out_features)
        self.has_bias = bool(has_bias)
        self.register_buffer("q", torch.empty((self.out_features, self.in_features), dtype=torch.int8))
        self.register_buffer("scale", torch.empty((self.out_features,), dtype=torch.float16))
        if self.has_bias:
            self.register_buffer("bias", torch.empty((self.out_features,), dtype=torch.float16))

    def forward(self, x):
        dt = x.dtype
        f = x.to(torch.float16).reshape(-1, x.shape[-1])
        w = self.q.to(f.device, torch.float16) * self.scale.to(f.device).unsqueeze(1)
        y = f @ w.t()
        if self.has_bias:
            y = y + self.bias.to(f.device)
        return y.reshape(*x.shape[:-1], self.out_features).to(dt)


class NanoTrueQuantLinear(nn.Module):
    """Output rows split into "protected" and "degraded" groups, each int8 with
    per-row scales; partial results are scattered back by output indices."""

    def __init__(self, in_features, out_features, prot_rows, deg_rows, has_bias=False):
        super().__init__()
        self.in_features = int(in_features)
        self.out_features = int(out_features)
        self.has_bias = bool(has_bias)
        self.register_buffer("prot_q", torch.empty((prot_rows, self.in_features), dtype=torch.int8))
        self.register_buffer("prot_scale", torch.empty((prot_rows,), dtype=torch.float16))
        self.register_buffer("prot_idx", torch.empty((prot_rows,), dtype=torch.long))
        self.register_buffer("deg_q", torch.empty((deg_rows, self.in_features), dtype=torch.int8))
        self.register_buffer("deg_scale", torch.empty((deg_rows,), dtype=torch.float16))
        self.register_buffer("deg_idx", torch.empty((deg_rows,), dtype=torch.long))
        if self.has_bias:
            self.register_buffer("bias", torch.empty((self.out_features,), dtype=torch.float16))

    def forward(self, x):
        dt = x.dtype
        f = x.to(torch.float16).reshape(-1, x.shape[-1])
        y = torch.zeros((f.shape[0], self.out_features), dtype=torch.float16, device=f.device)
        if self.prot_q.shape[0] > 0:
            w = self.prot_q.to(f.device, torch.float16) * self.prot_scale.to(f.device).unsqueeze(1)
            y.index_copy_(-1, self.prot_idx.to(f.device), f @ w.t())
        if self.deg_q.shape[0] > 0:
            w = self.deg_q.to(f.device, torch.float16) * self.deg_scale.to(f.device).unsqueeze(1)
            y.index_copy_(-1, self.deg_idx.to(f.device), f @ w.t())
        if self.has_bias:
            y = y + self.bias.to(f.device)
        return y.reshape(*x.shape[:-1], self.out_features).to(dt)


class NanoEmbedding(nn.Module):
    """Int8 embedding table with one fp16 scale per token row."""

    def __init__(self, num_embeddings, embedding_dim):
        super().__init__()
        self.num_embeddings = int(num_embeddings)
        self.embedding_dim = int(embedding_dim)
        self.register_buffer("q", torch.empty((self.num_embeddings, self.embedding_dim), dtype=torch.int8))
        self.register_buffer("scale", torch.empty((self.num_embeddings,), dtype=torch.float16))

    def forward(self, input_ids):
        return self.q[input_ids].to(torch.float16) * self.scale[input_ids].to(torch.float16).unsqueeze(-1)


class NanoTiedLMHead(nn.Module):
    """LM head tied to the quantized embedding table without registering the
    embedding module twice."""

    def __init__(self, embedding):
        super().__init__()
        self.register_buffer("q", embedding.q.detach().clone())
        self.register_buffer("scale", embedding.scale.detach().clone())

    def forward(self, x):
        w = self.q.to(x.device, torch.float16) * self.scale.to(x.device).unsqueeze(1)
        return x.to(torch.float16) @ w.t()


def _set_module(root, name, module):
    cur = root
    parts = name.split(".")
    for p in parts[:-1]:
        cur = cur[int(p)] if p.isdigit() else getattr(cur, p)
    setattr(cur, parts[-1], module)


class NanoQwenForCausalLM(Qwen2ForCausalLM):
    config_class = Qwen2Config

    def tie_weights(self, *args, **kwargs):
        # Weight tying is handled explicitly by NanoTiedLMHead; disable the default hook.
        return None

    def mark_tied_weights_as_initialized(self, *args, **kwargs):
        return None

    def __init__(self, config):
        config.tie_word_embeddings = False
        super().__init__(config)
        self.config.tie_word_embeddings = False
        self._tied_weights_keys = []
        self.all_tied_weights_keys = {}
        # Replace the modules listed in config.nanollm_modules with Nano variants.
        mods = getattr(config, "nanollm_modules", {})
        for name, spec in mods.items():
            kind = spec["kind"]
            if kind == "embedding":
                mod = NanoEmbedding(spec["num_embeddings"], spec["embedding_dim"])
            elif kind == "int8_linear":
                mod = NanoInt8Linear(spec["in_features"], spec["out_features"], spec.get("has_bias", False))
            elif kind == "truequant_linear":
                mod = NanoTrueQuantLinear(
                    spec["in_features"], spec["out_features"],
                    spec["prot_rows"], spec["deg_rows"],
                    spec.get("has_bias", False),
                )
            else:
                raise ValueError(f"Unknown Nano module kind: {kind}")
            _set_module(self, name, mod)
        if "lm_head" not in mods and isinstance(self.model.embed_tokens, NanoEmbedding):
            self.lm_head = NanoTiedLMHead(self.model.embed_tokens)
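For orientation, the `nanollm_modules` entries consumed above look roughly like this. A hypothetical fragment (module names and row counts are illustrative; `num_embeddings` and `embedding_dim` match this repo's `vocab_size` and `hidden_size`):
```python
nanollm_modules = {
    "model.embed_tokens": {
        "kind": "embedding",
        "num_embeddings": 151936,  # vocab_size
        "embedding_dim": 2048,     # hidden_size
    },
    "model.layers.0.self_attn.o_proj": {
        "kind": "truequant_linear",
        "in_features": 2048,
        "out_features": 2048,
        "prot_rows": 1536,  # illustrative split
        "deg_rows": 512,
        "has_bias": False,
    },
}
```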


@@ -0,0 +1,54 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}

1833
nano_compact/config.json Normal file

File diff suppressed because it is too large


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e915e9a187a53afd8d2d4296aa2876f2f450ad8cb6d0247c841112aa3eb446c3
size 3402605128


@@ -0,0 +1,118 @@
import torch
import torch.nn as nn
from transformers.models.qwen2.configuration_qwen2 import Qwen2Config
from transformers.models.qwen2.modeling_qwen2 import Qwen2ForCausalLM


class NanoInt8Linear(nn.Module):
    """Per-row int8 weights with fp16 scales, dequantized on the fly."""

    def __init__(self, in_features, out_features, has_bias=False):
        super().__init__()
        self.in_features = int(in_features)
        self.out_features = int(out_features)
        self.has_bias = bool(has_bias)
        self.register_buffer("q", torch.empty((self.out_features, self.in_features), dtype=torch.int8))
        self.register_buffer("scale", torch.empty((self.out_features,), dtype=torch.float16))
        if self.has_bias:
            self.register_buffer("bias", torch.empty((self.out_features,), dtype=torch.float16))

    def forward(self, x):
        dt = x.dtype
        f = x.to(torch.float16).reshape(-1, x.shape[-1])
        w = self.q.to(f.device, torch.float16) * self.scale.to(f.device).unsqueeze(1)
        y = f @ w.t()
        if self.has_bias:
            y = y + self.bias.to(f.device)
        return y.reshape(*x.shape[:-1], self.out_features).to(dt)


class NanoTrueQuantLinear(nn.Module):
    """Output rows split into "protected" and "degraded" groups, each int8 with
    per-row scales; partial results are scattered back by output indices."""

    def __init__(self, in_features, out_features, prot_rows, deg_rows, has_bias=False):
        super().__init__()
        self.in_features = int(in_features)
        self.out_features = int(out_features)
        self.has_bias = bool(has_bias)
        self.register_buffer("prot_q", torch.empty((prot_rows, self.in_features), dtype=torch.int8))
        self.register_buffer("prot_scale", torch.empty((prot_rows,), dtype=torch.float16))
        self.register_buffer("prot_idx", torch.empty((prot_rows,), dtype=torch.long))
        self.register_buffer("deg_q", torch.empty((deg_rows, self.in_features), dtype=torch.int8))
        self.register_buffer("deg_scale", torch.empty((deg_rows,), dtype=torch.float16))
        self.register_buffer("deg_idx", torch.empty((deg_rows,), dtype=torch.long))
        if self.has_bias:
            self.register_buffer("bias", torch.empty((self.out_features,), dtype=torch.float16))

    def forward(self, x):
        dt = x.dtype
        f = x.to(torch.float16).reshape(-1, x.shape[-1])
        y = torch.zeros((f.shape[0], self.out_features), dtype=torch.float16, device=f.device)
        if self.prot_q.shape[0] > 0:
            w = self.prot_q.to(f.device, torch.float16) * self.prot_scale.to(f.device).unsqueeze(1)
            y.index_copy_(-1, self.prot_idx.to(f.device), f @ w.t())
        if self.deg_q.shape[0] > 0:
            w = self.deg_q.to(f.device, torch.float16) * self.deg_scale.to(f.device).unsqueeze(1)
            y.index_copy_(-1, self.deg_idx.to(f.device), f @ w.t())
        if self.has_bias:
            y = y + self.bias.to(f.device)
        return y.reshape(*x.shape[:-1], self.out_features).to(dt)


class NanoEmbedding(nn.Module):
    """Int8 embedding table with one fp16 scale per token row."""

    def __init__(self, num_embeddings, embedding_dim):
        super().__init__()
        self.num_embeddings = int(num_embeddings)
        self.embedding_dim = int(embedding_dim)
        self.register_buffer("q", torch.empty((self.num_embeddings, self.embedding_dim), dtype=torch.int8))
        self.register_buffer("scale", torch.empty((self.num_embeddings,), dtype=torch.float16))

    def forward(self, input_ids):
        return self.q[input_ids].to(torch.float16) * self.scale[input_ids].to(torch.float16).unsqueeze(-1)


class NanoTiedLMHead(nn.Module):
    """LM head tied to the quantized embedding table without registering the
    embedding module twice."""

    def __init__(self, embedding):
        super().__init__()
        self.register_buffer("q", embedding.q.detach().clone())
        self.register_buffer("scale", embedding.scale.detach().clone())

    def forward(self, x):
        w = self.q.to(x.device, torch.float16) * self.scale.to(x.device).unsqueeze(1)
        return x.to(torch.float16) @ w.t()


def _set_module(root, name, module):
    cur = root
    parts = name.split(".")
    for p in parts[:-1]:
        cur = cur[int(p)] if p.isdigit() else getattr(cur, p)
    setattr(cur, parts[-1], module)


class NanoQwenForCausalLM(Qwen2ForCausalLM):
    config_class = Qwen2Config

    def tie_weights(self, *args, **kwargs):
        # Weight tying is handled explicitly by NanoTiedLMHead; disable the default hook.
        return None

    def mark_tied_weights_as_initialized(self, *args, **kwargs):
        return None

    def __init__(self, config):
        config.tie_word_embeddings = False
        super().__init__(config)
        self.config.tie_word_embeddings = False
        self._tied_weights_keys = []
        self.all_tied_weights_keys = {}
        # Replace the modules listed in config.nanollm_modules with Nano variants.
        mods = getattr(config, "nanollm_modules", {})
        for name, spec in mods.items():
            kind = spec["kind"]
            if kind == "embedding":
                mod = NanoEmbedding(spec["num_embeddings"], spec["embedding_dim"])
            elif kind == "int8_linear":
                mod = NanoInt8Linear(spec["in_features"], spec["out_features"], spec.get("has_bias", False))
            elif kind == "truequant_linear":
                mod = NanoTrueQuantLinear(
                    spec["in_features"], spec["out_features"],
                    spec["prot_rows"], spec["deg_rows"],
                    spec.get("has_bias", False),
                )
            else:
                raise ValueError(f"Unknown Nano module kind: {kind}")
            _set_module(self, name, mod)
        if "lm_head" not in mods and isinstance(self.model.embed_tokens, NanoEmbedding):
            self.lm_head = NanoTiedLMHead(self.model.embed_tokens)


@@ -0,0 +1,6 @@
{
  "format": "compact-safetensors-v1",
  "base_model_id": "Qwen/Qwen2.5-3B-Instruct",
  "artifact_dir": "/workspace/nano_rebuild/runs_3b/099/final_artifact_3B",
  "requires_trust_remote_code": true
}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3fd169731d2cbde95e10bf356d66d5997fd885dd8dbb6fb4684da3f23b2585d8
size 11421892


@@ -0,0 +1,30 @@
{
  "add_prefix_space": false,
  "backend": "tokenizers",
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "is_local": true,
  "local_files_only": false,
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}

3
quantized_modules.pt Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ced012a70e1d293e64886fce36aa5ba3effaeec3140be8632df7aff31318f25d
size 962373818

11
spec.json Normal file

@@ -0,0 +1,11 @@
{
  "format": "nano-v3.1",
  "base_model_id": "Qwen/Qwen2.5-3B-Instruct",
  "locked_count": 143,
  "pending_8bit": 109,
  "elapsed_seconds": 1624,
  "build_reference_mode": "8bit",
  "reference_scope": "original_baseline",
  "self_contained": true,
  "base_model_local_subdir": "."
}

303282
tokenizer.json Normal file

File diff suppressed because it is too large

207
tokenizer_config.json Normal file

@@ -0,0 +1,207 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"model_max_length": 131072,
"pad_token": "<|endoftext|>",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}

1
vocab.json Normal file

File diff suppressed because one or more lines are too long