Initialize the project; model provided by the ModelHub XC community
Model: Ma7ee7/Meet7_0.6b_Exp_Q8 Source: Original Platform
.gitattributes (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Meet7_0.6b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
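Each rule above routes paths matching a glob through Git LFS instead of plain Git. Whether a given path is covered can be sketched with Python's `fnmatch` (an approximation: Git's own glob rules differ slightly for `**` and directory separators, and this checks only the basename):

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*tfevents*", "Meet7_0.6b.Q8_0.gguf"]

def uses_lfs(path: str) -> bool:
    """Approximate check: does any LFS pattern match this path's basename?"""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) for p in LFS_PATTERNS)

print(uses_lfs("Meet7_0.6b.Q8_0.gguf"))  # True: matched by the exact-name rule
print(uses_lfs("README.md"))             # False: plain text stays in Git
```

Note the commit tracks the GGUF by exact filename rather than a `*.gguf` glob, so a differently named GGUF added later would not be picked up by LFS without a new rule.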
Meet7_0.6b.Q8_0.gguf (new file, 3 lines, LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8703b7031a494561db4672e9c44f1c1633843c5d8444907b1b6e01f378321a6
size 639446624
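Those three lines are the entire Git LFS pointer format: a spec version, the SHA-256 of the real blob, and its byte size (~610 MiB here). A minimal parser and integrity check (hypothetical helpers, not part of this repo) could look like:

```python
import hashlib

POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:b8703b7031a494561db4672e9c44f1c1633843c5d8444907b1b6e01f378321a6
size 639446624
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def verify(blob: bytes, ptr: dict) -> bool:
    """Check a downloaded blob against the pointer's size and digest."""
    return len(blob) == ptr["size"] and hashlib.sha256(blob).hexdigest() == ptr["digest"]

ptr = parse_lfs_pointer(POINTER)
print(ptr["size"])  # 639446624
```

This is the same check `git lfs` performs after smudging a pointer into the working tree.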
README.md (new file, 60 lines)
@@ -0,0 +1,60 @@
---
base_model: Ma7ee7/Meet7_0.6b
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Meet7 0.6B — Experimental

A continued fine-tune of [Meet7 0.6B](https://huggingface.co/Ma7ee7/Meet7_0.6b), trained at a lower learning rate on the same 600-sample dataset. Trades Meet7's sharp BoolQ spike for more balanced commonsense and reasoning gains across the board.

## Benchmarks

<img src="https://cdn-uploads.huggingface.co/production/uploads/6466047a326128fd2c693cfa/KfI9qNkT6jPkuquBL39UT.png" width="600"/>

0-shot evaluation; scores are `acc_norm`. Δ is the Experimental model's score minus the base score, in percentage points.

| Task | Qwen3-0.6B (Base) | Meet7 0.6B | Experimental | Δ vs Base |
|------|:-----------------:|:----------:|:------------:|:---------:|
| BoolQ | 0.3798 | **0.5554** | 0.3991 | +1.93% |
| ARC Easy | 0.3384 | 0.3952 | **0.3965** | +5.81% |
| ARC Challenge | 0.2841 | **0.3285** | 0.3259 | +4.18% |
| HellaSwag | 0.3981 | 0.4205 | **0.4265** | +2.84% |
| PIQA | 0.6338 | 0.6583 | **0.6687** | +3.49% |
| Winogrande | 0.5225 | 0.5201 | **0.5304** | +0.79% |
<details>
<summary>What these measure</summary>

- **BoolQ** — Reading comprehension and yes/no factual grounding
- **ARC Easy / Challenge** — Grade-school science reasoning; Challenge is the retrieval-resistant subset
- **HellaSwag** — Commonsense sentence completion
- **PIQA** — Physical world intuition
- **Winogrande** — Commonsense pronoun resolution

</details>

## vs Meet7 0.6B

This model is more **balanced** than Meet7. It outperforms Meet7 on HellaSwag, PIQA, and Winogrande — the physical and commonsense intuition tasks — at the cost of Meet7's large BoolQ advantage. If you need consistent commonsense reasoning, prefer this model. If yes/no QA is your primary use case, prefer Meet7.

## Model Details

| | |
|---|---|
| **Developed by** | Ma7ee7 |
| **License** | Apache-2.0 |
| **Base model** | Ma7ee7/Meet7_0.6b |
| **Original base** | unsloth/Qwen3-0.6B-unsloth-bnb-4bit |
| **Training samples** | 600 |
| **Training** | Continued LoRA fine-tune, lower LR |

Trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face TRL.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
config.json (new file, 61 lines)
@@ -0,0 +1,61 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "torch_dtype": "bfloat16",
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 28,
  "model_type": "qwen3",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "pad_token_id": 151669,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "unsloth_fixed": true,
  "unsloth_version": "2026.3.4",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
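A few properties of the architecture fall straight out of the config: grouped-query attention (16 query heads sharing 8 KV heads), a head dimension decoupled from the hidden size (Qwen3 projects attention to `num_attention_heads * head_dim`, not `hidden_size`), and tied input/output embeddings. A sketch of the derived shapes, assuming the values above:

```python
# Values copied from the config.json above
cfg = {
    "hidden_size": 1024,
    "head_dim": 128,
    "num_attention_heads": 16,
    "num_key_value_heads": 8,
    "num_hidden_layers": 28,
    "vocab_size": 151936,
}

# Grouped-query attention: each KV head serves this many query heads.
gqa_group = cfg["num_attention_heads"] // cfg["num_key_value_heads"]     # 2

# Per-layer Q and K/V projection widths (no bias: attention_bias=false).
q_width = cfg["num_attention_heads"] * cfg["head_dim"]    # 2048, wider than hidden_size
kv_width = cfg["num_key_value_heads"] * cfg["head_dim"]   # 1024

# tie_word_embeddings=true: the embedding matrix doubles as the LM head,
# so its ~155.6M parameters are stored once.
embedding_params = cfg["vocab_size"] * cfg["hidden_size"]
print(gqa_group, q_width, kv_width, embedding_params)
```

Halving the KV heads relative to query heads also halves the KV cache per layer, which matters for a 0.6B model intended to run on small hardware.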