Initialize project; model provided by the ModelHub XC community
Model: becnic/Qwen3-4B-Thinking-2507-Heretic-GGUF Source: Original Platform
.gitattributes  vendored  Normal file  (+37 lines)
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Qwen3-4B-Thinking-2507-Heretic.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf  Normal file  (+3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7f8009e771a96580c0985f0d6cde868385bdb1eb278b798b9e9e9871bb634e5
size 4280404736
README.md  Normal file  (+100 lines)
@@ -0,0 +1,100 @@
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- becnic/Qwen3-4B-Thinking-2507-Heretic
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
---

# Qwen3-4B-Thinking-2507-Heretic-GGUF

## Llamacpp imatrix Quantizations of Qwen3-4B-Thinking-2507-Heretic by becnic (from the original Qwen3-4B-Thinking-2507)

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b7120">b7120</a> for quantization.

Original model: https://huggingface.co/becnic/Qwen3-4B-Thinking-2507-Heretic

Run them in [LM Studio](https://lmstudio.ai/), or run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp) or any other llama.cpp-based project.
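As a quick sanity check, you can run the quant with the `llama-cli` binary that ships with llama.cpp builds; the prompt, context size, and token count below are illustrative values, not part of the original card:

```
./llama-cli -m Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf \
  -p "Explain what GGUF quantization is in two sentences." \
  -c 8192 -n 512
```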
## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf](https://huggingface.co/becnic/Qwen3-4B-Thinking-2507-Heretic-GGUF/blob/main/Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf) | Q8_0 | 4.28GB | false | Extremely high quality |
## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download becnic/Qwen3-4B-Thinking-2507-Heretic-GGUF --include "Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download becnic/Qwen3-4B-Thinking-2507-Heretic-GGUF --include "Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf/*" --local-dir ./
```

</details>
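After downloading, you can optionally verify the file against the SHA-256 recorded in this repository's Git LFS pointer; a mismatch indicates a corrupted or truncated download:

```
sha256sum Qwen3-4B-Thinking-2507-Heretic-Q8_0.gguf
# expected: a7f8009e771a96580c0985f0d6cde868385bdb1eb278b798b9e9e9871bb634e5
```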
## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 19.42 |
| **attn.o_proj.max_weight** | 1.23 |
| **attn.o_proj.max_weight_position** | 22.34 |
| **attn.o_proj.min_weight** | 0.69 |
| **attn.o_proj.min_weight_distance** | 10.42 |
| **mlp.down_proj.max_weight** | 1.12 |
| **mlp.down_proj.max_weight_position** | 29.64 |
| **mlp.down_proj.min_weight** | 1.08 |
| **mlp.down_proj.min_weight_distance** | 20.24 |
## Performance

| Metric | This model | Original model ([Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.06 | 0 *(by definition)* |
| **Refusals** | 6/100 | 96/100 |
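For context, the KL divergence here measures how far the abliterated model's next-token distribution P drifts from the original model's distribution Q; 0.06 means outputs stay close to the original. The card does not state the exact evaluation set, but the standard per-position definition, averaged over evaluated tokens, is:

$$
\mathrm{KL}(P \,\|\, Q) = \sum_{t \in \mathcal{V}} P(t) \log \frac{P(t)}{Q(t)}
$$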
## Model Overview

**Qwen3-4B-Thinking-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 tokens natively**
**NOTE: This model supports only thinking mode; specifying `enable_thinking=True` is no longer required.**

Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only a closing `</think>` tag without an explicit opening `<think>` tag.
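If you post-process raw generations, remember that only the closing tag appears. A rough sketch for keeping just the text after the reasoning block, assuming the output was saved to `out.txt` and the tag sits on its own line (both assumptions are illustrative):

```
# Print only the lines that follow the closing </think> tag
awk 'seen { print } /<\/think>/ { seen = 1 }' out.txt
```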
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the Qwen [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.