---
license: other
library_name: transformers
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- nanollm
- qwen2.5
- safetensors
- text-generation
---

# NanoLLM Qwen2.5-14B-Instruct v3.1

The compact, self-contained NanoLLM format is stored in `nano_compact/` and can be loaded directly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RthItalia/NanoLLM-Qwen2.5-14B-v3.1"

# Both tokenizer and model live in the nano_compact/ subfolder.
tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder="nano_compact", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    subfolder="nano_compact",
    trust_remote_code=True,  # custom NanoLLM loading code ships with the repo
    device_map="auto",
)
```
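Once loaded, the checkpoint can be driven like any other chat model. A minimal sketch follows; the `build_chat` helper and the sampling settings are illustrative assumptions, not part of this repo:

```python
def build_chat(user_text):
    """Assemble a single-turn message list in the shape expected by
    tokenizer.apply_chat_template (hypothetical helper)."""
    return [{"role": "user", "content": user_text}]

# With tokenizer/model from the snippet above (requires the full download):
# inputs = tokenizer.apply_chat_template(
#     build_chat("Give me a one-line summary of quantization."),
#     add_generation_prompt=True,
#     return_tensors="pt",
# ).to(model.device)
# output_ids = model.generate(inputs, max_new_tokens=64)
# print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```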

Validation against an 8-bit reference:

- avg cosine: 0.98984375
- min cosine: 0.9765625
- gate: avg >= 0.985 (passed)
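The gate above amounts to a plain cosine-similarity check between reference and quantized outputs. A minimal sketch, assuming per-sample output vectors and simple averaging (the function names and structure are hypothetical, not the repo's actual validation script):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def passes_gate(ref_outputs, nano_outputs, avg_threshold=0.985):
    """Compare per-sample outputs of the 8-bit reference and the Nano
    model; the gate requires the average cosine to meet the threshold."""
    sims = [cosine(r, n) for r, n in zip(ref_outputs, nano_outputs)]
    avg = sum(sims) / len(sims)
    return avg >= avg_threshold, avg, min(sims)
```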

`nano_compact/model.safetensors` contains the Nano-quantized tensors, so the original Qwen base weights do not need to be downloaded.