Initialize project; model provided by the ModelHub XC community

Model: quwsarohi/NanoAgent-135M
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-11 14:12:53 +08:00
commit 3d4dc9cbc7
14 changed files with 49777 additions and 0 deletions

52
.gitattributes vendored Normal file
View File

@@ -0,0 +1,52 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model.gguf filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
train_info.json filter=lfs diff=lfs merge=lfs -text

25
Modelfile Normal file
View File

@@ -0,0 +1,25 @@
TEMPLATE """{{- if .Messages }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ .Content }}{{ if not $last }}<|im_end|>
{{ end }}
{{- end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""
SYSTEM You are a helpful AI assistant.
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>

267
README.md Normal file
View File

@@ -0,0 +1,267 @@
---
language:
- en
license: apache-2.0
tags:
- llm
- tool-calling
- lightweight
- agentic-tasks
- react
- mlx
model-index:
- name: NanoAgent
results: []
datasets:
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- allenai/tulu-3-sft-personas-instruction-following
- weijie210/gsm8k_decomposed
- Locutusque/function-calling-chatml
- HuggingFaceTB/smoltalk
- nvidia/Nemotron-Instruction-Following-Chat-v1
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
pipeline_tag: text-generation
---
# 🧠 NanoAgent — A 135M Parameter Agentic SLM
NanoAgent is a **135M parameter**, **8k context length**, open-source language model designed for **agentic tasks** such as **tool calling**, **instruction following**, and **lightweight reasoning**.
It's small enough (~135 MB in 8-bit) to run on **edge devices** like personal laptops, low-memory CPUs, and even wearables — yet smart enough to make tool calls, parse web information, and give structured answers.
Quick inference notebook: [inference.ipynb](https://github.com/QuwsarOhi/NanoAgent/blob/main/notebooks/inference.ipynb)
GitHub scripts: [NanoAgent-135M](https://github.com/QuwsarOhi/NanoAgent)
Run in Ollama: `ollama run quwsarohi/NanoAgent`
## 🌍 Real-World Use Cases
- 🕹️ **Runs on edge devices** — laptops, smartwatches, browsers, or CPU-only environments.
- 🌐 **Parses and answers from the web** — supports tool calling to fetch real-time information.
- 🔎 **Answers recent questions** with live web search tools.
- 💬 **Continues conversations** — ideal for assistant or agent frameworks.
- ⚙️ **Tool calling support** enables chaining multiple tools and parsing results to produce final answers.
## ✨ What NanoAgent Supports
| Capability | Description |
|------------------------------------|--------------------------------------------------------------------------------------------------|
| 💬 Basic conversation | Casual small talk |
| 🌐 Information retrieval | e.g., *“How to bake a cake?”*, *“Weather in Toronto”* through web search. Extracts answers from information returned by tools (scraping/search) |
| 🧰 Tool calling | Single & multi-tool call with structured explanation |
| 🧠 Question decomposition | Breaks complex questions into steps |
| 🧭 Question classification | Identifies type of user query (e.g., fact, reasoning, instruction) |
| 📝 Following system prompts | Responds properly to system-level instructions |
| ✍️ Writing emails and tasks | Writes emails, structured messages |
---
## 🧪 Training Overview
- **Base model**: [`SmolLM2-135M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) (instruction-tuned)
- **Fine-tuning method**: ~~[Dynamic Fine-Tuning (DFT)](https://github.com/yongliang-wu/DFT/tree/master)~~ Supervised Fine-Tuning
- **Platform**: Apple Mac M1 (16 GB) — MLX framework
### 📚 Datasets Used
This model was trained using a combination of datasets under different open licenses.
Each dataset retains its original license, and use of those datasets is subject to their respective terms.
#### General Training (SFT)
| Dataset | Purpose | License |
|---------|---------|---------|
| [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) | Math reasoning, word-level reasoning | MIT |
| [allenai/tulu-3-sft-personas-instruction-following](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-instruction-following) | Instruction following with personas | Open Data Commons License Attribution |
| [mlabonne/orca-agentinstruct-1M-v1-cleaned](https://huggingface.co/datasets/mlabonne/orca-agentinstruct-1M-v1-cleaned) | RAG, MCQ, JSON parsing, text classification | Community Data License Agreement Permissive, Version 2.0 |
| [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) (systemchats-30k) | General conversation, system prompts | Apache-2.0 |
| [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) (everyday-conversations) | Everyday conversation | Apache-2.0 |
| [nvidia/Nemotron-Instruction-Following-Chat-v1](https://huggingface.co/datasets/nvidia/Nemotron-Instruction-Following-Chat-v1) | Instruction following, structured outputs | NVIDIA Open Model License |
#### Function Calling Training
| Dataset | Purpose | License |
|---------|---------|---------|
| [Locutusque/function-calling-chatml](https://huggingface.co/datasets/Locutusque/function-calling-chatml) | Tool call response formatting | Apache-2.0 |
| [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | Function calling coverage | Creative Commons Attribution 4.0 |
| [nemotron/interactive_agent](https://huggingface.co/datasets/nemotron/interactive_agent) (local) | Tool calling, agentic behavior | NVIDIA Open Model License |
## 🧭 Key Explorations & Findings
- ✂️ **Dataset deduplication** significantly improved performance by removing noisy or duplicate Q/As.
- ✂️ **Shortening responses** (for casual replies) and using shorter Python code in training improved performance and reduced repeated token generation.
- 🧮 **Word-level reasoning** from `orca-math` enhanced the model's ability to handle stepwise logic.
- 🧰 Designing tool calling prompts using **six open-source tool calling datasets** resulted in stronger structured output generation.
- 🌐 Tool calling integration enabled the model to **extract answers from parsed web data**, supporting up-to-date queries.
## ⚡ Benchmark
### Model Comparison
| Benchmark | SmolLM2-135M-Instruct | NanoAgent |
|-----------|:---------------------:|:---------:|
| **Commonsense QA** (acc) | 20.88% | 20.23% |
| **IFEval** (prompt strict) | 21.63% | **29.94%** |
| **IFEval** (inst strict) | 35.01% | **42.33%** |
| **IFEval** (prompt loose) | 23.84% | **32.16%** |
| **IFEval** (inst loose) | 37.65% | **45.32%** |
| **tinyArc** (acc_norm) | 33.76% | 36.47% |
| **tinyGSM8k** (exact_match) | 0.55% | 2.31% |
| **tinyHellaswag** (acc_norm) | 42.20% | **43.45%** |
| **tinyMMLU** (acc_norm) | 26.79% | **27.62%** |
| **tinyTruthfulQA** (acc) | 38.65% | **40.45%** |
| **tinyWinogrande** (acc_norm) | 46.48% | 42.86% |
### BFCL Benchmark (Tool Calling)
| Category | Accuracy | Correct/Total |
|----------|----------|---------------|
| **Overall** | 28.99% | 725/2501 |
| parallel | 56.50% | 113/200 |
| parallel_multiple | 54.50% | 109/200 |
| simple_python | 41.50% | 166/400 |
| simple_javascript | 40.00% | 20/50 |
| multiple | 31.50% | 63/200 |
| live_simple | 28.29% | 73/258 |
| simple_java | 27.00% | 27/100 |
| live_parallel | 37.50% | 6/16 |
| live_parallel_multiple | 25.00% | 6/24 |
| live_multiple | 13.49% | 142/1053 |
*All evaluations were conducted using greedy decoding (the sampling parameter was set to false during Hugging Face inference).*
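As a quick sanity check, the overall BFCL figure is consistent with the per-category counts in the table:

```python
# Verify the overall BFCL accuracy from the per-category correct/total counts above.
correct = [113, 109, 166, 20, 63, 73, 27, 6, 6, 142]
totals  = [200, 200, 400, 50, 200, 258, 100, 16, 24, 1053]
print(sum(correct), sum(totals), round(100 * sum(correct) / sum(totals), 2))
# 725 2501 28.99
```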
### Key Findings
- **NanoAgent** significantly outperforms the base **SmolLM2-135M-Instruct** on **instruction following** (IFEval) with +8-10% improvements across all metrics
- **NanoAgent** improves on **tinyMMLU**, **tinyTruthfulQA**, and **tinyHellaswag** over the base model
- 🧰 **Tool Calling**: Only NanoAgent supports tool calling — SmolLM2-135M-Instruct does not
## ⚡ Example Usage
### Basic Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "quwsarohi/NanoAgent-135M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
def inference(messages, max_new_tokens=256, temperature=0.3, **kwargs):
    # Accept either a chat message list or an already-templated prompt string
    if isinstance(messages, list):
        input_text = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
    else:
        input_text = messages
    inputs = tokenizer.encode(input_text, return_tensors="pt").to(model.device)
    outputs = model.generate(
        inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        **kwargs,
    )
    return tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
messages = [{"role": "user", "content": "Hi! Do you have a name?"}]
print(inference(messages))
```
### Tool Calling
NanoAgent uses a JSON-based tool calling format:
````python
import json
tools = [
{
"type": "function",
"function": {
"name": "web_search",
"description": "Performs a web search and returns formatted results.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "The search query."}
},
"required": ["query"],
},
}
}
]
TOOL_TEMPLATE = """You are a helpful AI assistant. You have a set of possible tools that you can execute to retrieve information or to perform specific actions. You can execute zero or more tools to answer user question.
Here are the list of tools that you have access to:
```json
{tools}
```
Only execute tools from above. Follow the below JSON signature to execute tools:
```json
[{{"name": "tool_name", "arguments": {{"arg1": "val1", ...}}}}, ...]
```
"""
messages = [
{"role": "system", "content": TOOL_TEMPLATE.format(tools=json.dumps(tools, indent=2))},
{"role": "user", "content": "What's the latest AI news?"},
]
response = inference(messages, max_new_tokens=512)
print(response)
# Output: ```json
# [{"name": "web_search", "arguments": {"query": "latest AI news 2026"}}]
# ```
````
It is suggested to prefill the assistant turn with `` ```json\n `` during inference. This improves performance, since the model then knows it has to execute a tool.
````python
messages = [
{"role": "system", "content": TOOL_TEMPLATE.format(tools=json.dumps(tools, indent=2))},
{"role": "user", "content": "What's the latest AI news?"},
{"role": "assistant", "content": "```json\n"}
]
input_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False,
continue_final_message=True
)
response = inference(input_text, max_new_tokens=512)
print(response)
# Output: [{"name": "web_search", "arguments": {"query": "latest AI news 2026"}}]
# ```
````
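Once the model responds, the tool-call list can be recovered from the (possibly fenced) output. A minimal sketch, assuming the JSON signature shown above (`parse_tool_calls` is a hypothetical helper, not part of this repo):

````python
import json
import re

def parse_tool_calls(response: str):
    """Extract the JSON tool-call list from a model response,
    whether or not it is wrapped in a ```json fence."""
    match = re.search(r"```json\s*(.*?)\s*```", response, re.DOTALL)
    payload = match.group(1) if match else response.strip()
    try:
        calls = json.loads(payload)
    except json.JSONDecodeError:
        return []  # no valid tool call; treat as a plain text answer
    return calls if isinstance(calls, list) else [calls]

calls = parse_tool_calls(
    '```json\n[{"name": "web_search", "arguments": {"query": "latest AI news"}}]\n```'
)
print(calls[0]["name"])  # web_search
````

Each parsed call can then be dispatched to the matching tool and its result fed back as the next turn.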
## 🧭 Roadmap
- [ ] 📊 Benchmark more agentic tasks
- [ ] 🧠 Explore GRPO for tool calling improvement
- [ ] 🔀 Experiment with weight merging
- [ ] 🧪 Evaluate multi-turn tool chaining
- [ ] 🧹 Further refine datasets for stability
---
## 📄 License
This project (code, model weights, and training recipes) is licensed under the [Apache License 2.0](./LICENSE).
## 📢 Notice
- Model & code are © [quwsarohi](https://github.com/QuwsarOhi), licensed under Apache 2.0.
- Portions of the training data were sourced from third-party datasets under CDLA-P 2.0, MIT, CC-BY 4.0, ODC-BY, and Apache 2.0.
- The licensors of these datasets do **not endorse** this project or its outputs.
- If you redistribute or fine-tune this model, ensure your use complies with all applicable dataset licenses.

11
chat_template.jinja Normal file
View File

@@ -0,0 +1,11 @@
{% for message in messages %}
{% if loop.first and messages[0]['role'] != 'system' %}
{{ '<|im_start|>system
You are a helpful AI assistant. <|im_end|>' }}
{% endif %}
{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>'}}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant' }}
{% endif %}
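As a rough illustration, the Jinja template above can be mirrored in plain Python (a hypothetical helper, not shipped with the model; exact whitespace depends on Jinja trim settings):

```python
# Approximate rendering of the ChatML chat template above: a default system
# prompt is injected when the first message is not a system turn, and an
# assistant header is appended when generation should continue.
def render_chatml(messages, add_generation_prompt=True):
    parts = []
    if messages and messages[0]["role"] != "system":
        parts.append("<|im_start|>system\nYou are a helpful AI assistant. <|im_end|>")
    for m in messages:
        parts.append("<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

print(render_chatml([{"role": "user", "content": "Hi"}]))
```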

38
config.json Normal file
View File

@@ -0,0 +1,38 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 1,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 576,
"initializer_range": 0.041666666666666664,
"intermediate_size": 1536,
"is_llama_config": true,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 9,
"num_hidden_layers": 30,
"num_key_value_heads": 3,
"pad_token_id": 2,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_interleaved": false,
"rope_scaling": null,
"rope_theta": 100000,
"tie_word_embeddings": true,
"torch_dtype": "bfloat16",
"transformers.js_config": {
"kv_cache_dtype": {
"fp16": "float16",
"q4f16": "float16"
}
},
"transformers_version": "4.55.4",
"use_cache": true,
"vocab_size": 49152
}
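The parameter count implied by these settings can be checked by hand (a quick sketch assuming tied input/output embeddings, per `tie_word_embeddings`, and grouped-query attention with 3 KV heads of dim 64):

```python
# Parameter count implied by config.json above.
V, H, I, L = 49152, 576, 1536, 30      # vocab, hidden, intermediate, layers
kv_dim = 3 * 64                        # num_key_value_heads * head_dim (GQA)
embed = V * H                          # tied with the LM head
attn  = H * H + H * kv_dim + H * kv_dim + H * H   # q, k, v, o projections
mlp   = 2 * H * I + I * H              # gate, up, down projections
norms = 2 * H                          # input + post-attention RMSNorm
total = embed + L * (attn + mlp + norms) + H      # + final model.norm
print(total)  # 134515008, matching total_parameters in the safetensors index
```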

1
configuration.json Normal file
View File

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

7
generation_config.json Normal file
View File

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 1,
"pad_token_id": 2,
"transformers_version": "4.42.3"
}

48901
merges.txt Normal file

File diff suppressed because it is too large Load Diff

3
model.safetensors Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dfa32d7aef07331de7b089bdd3298b6f644e60e3d50fa9aaef313b01b9c25cc
size 269060381

280
model.safetensors.index.json Normal file
View File

@@ -0,0 +1,280 @@
{
"metadata": {
"total_size": 269030016,
"total_parameters": 134515008
},
"weight_map": {
"model.embed_tokens.weight": "model.safetensors",
"model.layers.0.input_layernorm.weight": "model.safetensors",
"model.layers.0.mlp.down_proj.weight": "model.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model.safetensors",
"model.layers.0.mlp.up_proj.weight": "model.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model.safetensors",
"model.layers.1.input_layernorm.weight": "model.safetensors",
"model.layers.1.mlp.down_proj.weight": "model.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model.safetensors",
"model.layers.1.mlp.up_proj.weight": "model.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model.safetensors",
"model.layers.10.input_layernorm.weight": "model.safetensors",
"model.layers.10.mlp.down_proj.weight": "model.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model.safetensors",
"model.layers.10.mlp.up_proj.weight": "model.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model.safetensors",
"model.layers.11.input_layernorm.weight": "model.safetensors",
"model.layers.11.mlp.down_proj.weight": "model.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model.safetensors",
"model.layers.11.mlp.up_proj.weight": "model.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model.safetensors",
"model.layers.12.input_layernorm.weight": "model.safetensors",
"model.layers.12.mlp.down_proj.weight": "model.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model.safetensors",
"model.layers.12.mlp.up_proj.weight": "model.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model.safetensors",
"model.layers.13.input_layernorm.weight": "model.safetensors",
"model.layers.13.mlp.down_proj.weight": "model.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model.safetensors",
"model.layers.13.mlp.up_proj.weight": "model.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model.safetensors",
"model.layers.14.input_layernorm.weight": "model.safetensors",
"model.layers.14.mlp.down_proj.weight": "model.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model.safetensors",
"model.layers.14.mlp.up_proj.weight": "model.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model.safetensors",
"model.layers.15.input_layernorm.weight": "model.safetensors",
"model.layers.15.mlp.down_proj.weight": "model.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model.safetensors",
"model.layers.15.mlp.up_proj.weight": "model.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model.safetensors",
"model.layers.16.input_layernorm.weight": "model.safetensors",
"model.layers.16.mlp.down_proj.weight": "model.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model.safetensors",
"model.layers.16.mlp.up_proj.weight": "model.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model.safetensors",
"model.layers.17.input_layernorm.weight": "model.safetensors",
"model.layers.17.mlp.down_proj.weight": "model.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model.safetensors",
"model.layers.17.mlp.up_proj.weight": "model.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model.safetensors",
"model.layers.18.input_layernorm.weight": "model.safetensors",
"model.layers.18.mlp.down_proj.weight": "model.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model.safetensors",
"model.layers.18.mlp.up_proj.weight": "model.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model.safetensors",
"model.layers.19.input_layernorm.weight": "model.safetensors",
"model.layers.19.mlp.down_proj.weight": "model.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model.safetensors",
"model.layers.19.mlp.up_proj.weight": "model.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model.safetensors",
"model.layers.2.input_layernorm.weight": "model.safetensors",
"model.layers.2.mlp.down_proj.weight": "model.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model.safetensors",
"model.layers.2.mlp.up_proj.weight": "model.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model.safetensors",
"model.layers.20.input_layernorm.weight": "model.safetensors",
"model.layers.20.mlp.down_proj.weight": "model.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model.safetensors",
"model.layers.20.mlp.up_proj.weight": "model.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model.safetensors",
"model.layers.21.input_layernorm.weight": "model.safetensors",
"model.layers.21.mlp.down_proj.weight": "model.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model.safetensors",
"model.layers.21.mlp.up_proj.weight": "model.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model.safetensors",
"model.layers.22.input_layernorm.weight": "model.safetensors",
"model.layers.22.mlp.down_proj.weight": "model.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model.safetensors",
"model.layers.22.mlp.up_proj.weight": "model.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model.safetensors",
"model.layers.23.input_layernorm.weight": "model.safetensors",
"model.layers.23.mlp.down_proj.weight": "model.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model.safetensors",
"model.layers.23.mlp.up_proj.weight": "model.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model.safetensors",
"model.layers.24.input_layernorm.weight": "model.safetensors",
"model.layers.24.mlp.down_proj.weight": "model.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model.safetensors",
"model.layers.24.mlp.up_proj.weight": "model.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model.safetensors",
"model.layers.25.input_layernorm.weight": "model.safetensors",
"model.layers.25.mlp.down_proj.weight": "model.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model.safetensors",
"model.layers.25.mlp.up_proj.weight": "model.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model.safetensors",
"model.layers.26.input_layernorm.weight": "model.safetensors",
"model.layers.26.mlp.down_proj.weight": "model.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model.safetensors",
"model.layers.26.mlp.up_proj.weight": "model.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model.safetensors",
"model.layers.27.input_layernorm.weight": "model.safetensors",
"model.layers.27.mlp.down_proj.weight": "model.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model.safetensors",
"model.layers.27.mlp.up_proj.weight": "model.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model.safetensors",
"model.layers.28.input_layernorm.weight": "model.safetensors",
"model.layers.28.mlp.down_proj.weight": "model.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model.safetensors",
"model.layers.28.mlp.up_proj.weight": "model.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model.safetensors",
"model.layers.29.input_layernorm.weight": "model.safetensors",
"model.layers.29.mlp.down_proj.weight": "model.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model.safetensors",
"model.layers.29.mlp.up_proj.weight": "model.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model.safetensors",
"model.layers.3.input_layernorm.weight": "model.safetensors",
"model.layers.3.mlp.down_proj.weight": "model.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model.safetensors",
"model.layers.3.mlp.up_proj.weight": "model.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model.safetensors",
"model.layers.4.input_layernorm.weight": "model.safetensors",
"model.layers.4.mlp.down_proj.weight": "model.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model.safetensors",
"model.layers.4.mlp.up_proj.weight": "model.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model.safetensors",
"model.layers.5.input_layernorm.weight": "model.safetensors",
"model.layers.5.mlp.down_proj.weight": "model.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model.safetensors",
"model.layers.5.mlp.up_proj.weight": "model.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model.safetensors",
"model.layers.6.input_layernorm.weight": "model.safetensors",
"model.layers.6.mlp.down_proj.weight": "model.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model.safetensors",
"model.layers.6.mlp.up_proj.weight": "model.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model.safetensors",
"model.layers.7.input_layernorm.weight": "model.safetensors",
"model.layers.7.mlp.down_proj.weight": "model.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model.safetensors",
"model.layers.7.mlp.up_proj.weight": "model.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model.safetensors",
"model.layers.8.input_layernorm.weight": "model.safetensors",
"model.layers.8.mlp.down_proj.weight": "model.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model.safetensors",
"model.layers.8.mlp.up_proj.weight": "model.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model.safetensors",
"model.layers.9.input_layernorm.weight": "model.safetensors",
"model.layers.9.mlp.down_proj.weight": "model.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model.safetensors",
"model.layers.9.mlp.up_proj.weight": "model.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model.safetensors",
"model.norm.weight": "model.safetensors"
}
}
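The `weight_map` above is how sharded-checkpoint loaders locate each tensor; in this index every entry points at the single shard `model.safetensors`. A minimal sketch of the lookup, assuming a trimmed copy of the map (`shard_for` is a hypothetical helper, not part of this repo — in practice `transformers`/`safetensors` do the equivalent internally):

```python
# Minimal sketch of how a checkpoint loader uses the weight_map above.
# In this index every tensor resolves to the single shard "model.safetensors";
# larger models split the map across several shard files.
index = {
    "weight_map": {
        "model.layers.5.self_attn.q_proj.weight": "model.safetensors",
        "model.layers.5.self_attn.v_proj.weight": "model.safetensors",
        "model.norm.weight": "model.safetensors",
    }
}

def shard_for(tensor_name: str, index: dict) -> str:
    # A loader opens only the shard holding the requested tensor,
    # which keeps memory bounded when a checkpoint is split.
    return index["weight_map"][tensor_name]

# For this 135M model, all tensors live in one file.
unique_shards = set(index["weight_map"].values())
```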

34
special_tokens_map.json Normal file


@@ -0,0 +1,34 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
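The special-token layout above can be parsed and sanity-checked with only the standard library. The JSON literal below is a trimmed copy of the file (the `lstrip`/`rstrip`/`normalized`/`single_word` flags are omitted for brevity):

```python
import json

# Trimmed copy of special_tokens_map.json above (per-token flags omitted).
special = json.loads("""{
    "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
    "bos_token": {"content": "<|im_start|>"},
    "eos_token": {"content": "<|im_end|>"},
    "pad_token": {"content": "<|im_end|>"},
    "unk_token": {"content": "<|endoftext|>"}
}""")

# The ChatML markers double as bos/eos, and pad reuses eos -- a common
# setup for chat-tuned models without a dedicated padding token.
assert special["pad_token"]["content"] == special["eos_token"]["content"]
assert special["bos_token"]["content"] in special["additional_special_tokens"]
```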

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ca9acddb6525a194ec8ac7a87f24fbba7232a9a15ffa1af0c1224fcd888e47c
size 2104556

154
tokenizer_config.json Normal file

@@ -0,0 +1,154 @@
{
"add_prefix_space": false,
"added_tokens_decoder": {
"0": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<repo_name>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"4": {
"content": "<reponame>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"5": {
"content": "<file_sep>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"6": {
"content": "<filename>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"7": {
"content": "<gh_stars>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"8": {
"content": "<issue_start>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"9": {
"content": "<issue_comment>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"10": {
"content": "<issue_closed>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"11": {
"content": "<jupyter_start>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"12": {
"content": "<jupyter_text>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"13": {
"content": "<jupyter_code>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"14": {
"content": "<jupyter_output>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"15": {
"content": "<jupyter_script>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"16": {
"content": "<empty_output>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": "<|im_start|>",
"chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful AI assistant named SmolLM, trained by Hugging Face<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"model_max_length": 8192,
"pad_token": "<|im_end|>",
"tokenizer_class": "GPT2Tokenizer",
"unk_token": "<|endoftext|>",
"vocab_size": 49152
}
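The `chat_template` field above is a Jinja expression that renders a message list into ChatML: it injects a default SmolLM system prompt when the first message is not a system message, wraps each turn in `<|im_start|>`/`<|im_end|>`, and optionally opens an assistant turn. A plain-Python sketch of that same logic (`apply_chat_template` here is a hypothetical stand-in for `tokenizer.apply_chat_template` in `transformers`):

```python
# Mirrors the Jinja chat_template above in plain Python (ChatML format).
DEFAULT_SYSTEM = (
    "<|im_start|>system\nYou are a helpful AI assistant named SmolLM, "
    "trained by Hugging Face<|im_end|>\n"
)

def apply_chat_template(messages, add_generation_prompt=False):
    out = ""
    # The template injects the default system prompt only when the
    # first message is not already a system message.
    if messages and messages[0]["role"] != "system":
        out += DEFAULT_SYSTEM
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # add_generation_prompt opens an assistant turn for the model to complete.
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out
```

In practice, loading the tokenizer with `transformers` and calling `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should produce the same string.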

1
vocab.json Normal file

File diff suppressed because one or more lines are too long