Update README.md

This commit is contained in:
ai-modelscope
2025-01-26 15:17:50 +08:00
parent 7f162a25fa
commit 5b4bd67586
15 changed files with 2715 additions and 63 deletions

---
license: apache-2.0
datasets:
- huihui-ai/FineQwQ-142k
base_model:
- huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated
tags:
- llama3.1
library_name: transformers
pipeline_tag: text-generation
language:
- en
---
# MicroThinker-8B-Preview

MicroThinker-8B-Preview is a new model fine-tuned from the [huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) model, focused on advancing AI reasoning capabilities.

The 8B version is better than both the 3B and 1B versions.

## Use with ollama

You can use [huihui_ai/microthinker](https://ollama.com/huihui_ai/microthinker) directly:

```
ollama run huihui_ai/microthinker:8b
```

## Download

You can download the model with the ModelScope SDK:

```python
# Install the SDK first: pip install modelscope
from modelscope import snapshot_download

model_dir = snapshot_download('huihui-ai/MicroThinker-8B-Preview')
```

or with git:

```
git clone https://www.modelscope.cn/huihui-ai/MicroThinker-8B-Preview.git
```
## Training Details
This is just a test, but the performance is quite good. Here is the test environment:

The model was trained on a single RTX 4090 GPU (24 GB). Fine-tuning used all 142k samples of the FineQwQ-142k dataset, with a maximum sequence length (max_length) of 21710 tokens and 4-bit quantization (quant_bits 4).

The [SFT (Supervised Fine-Tuning)](https://github.com/modelscope/ms-swift) process with ms-swift is divided into several steps, and no code needs to be written.
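The `--train_type lora` setting used below trains only small low-rank adapter matrices on top of the frozen base weights. As a rough illustration of what `--lora_rank 8 --lora_alpha 32` mean, here is a toy NumPy sketch of the LoRA idea (illustrative only, not ms-swift's actual implementation; the dimensions are made up):

```python
import numpy as np

# Toy dimensions; rank and alpha match --lora_rank 8 --lora_alpha 32 in step 3.
d_in, d_out, rank, alpha = 64, 64, 8, 32
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight (never updated)
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, initialized to zero

def lora_forward(x):
    # LoRA output: y = W x + (alpha / rank) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapter starts as a no-op: output equals the frozen model's.
assert np.allclose(lora_forward(x), W @ x)

# Only the adapters train: far fewer parameters than the full weight matrix.
trainable = A.size + B.size  # 8*64 + 64*8 = 1024
frozen = W.size              # 64*64 = 4096
print(trainable, frozen)     # 1024 4096
```

This is why LoRA fits in 24 GB of VRAM: only the small `A`/`B` matrices need gradients and optimizer state.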
1. Create the environment.
```
conda create -yn ms-swift python=3.11
conda activate ms-swift
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .
cd ..
```
2. Download the model and dataset.
```
huggingface-cli download huihui-ai/Llama-3.1-8B-Instruct-abliterated --local-dir ./huihui-ai/Llama-3.1-8B-Instruct-abliterated
huggingface-cli download --repo-type dataset huihui-ai/FineQwQ-142k --local-dir ./data/FineQwQ-142k
```
3. Fine-tune using only the huihui-ai/FineQwQ-142k dataset, training for 1 epoch:
```
swift sft \
    --model huihui-ai/Llama-3.1-8B-Instruct-abliterated \
    --model_type llama3_1 \
    --train_type lora \
    --dataset "data/FineQwQ-142k/FineQwQ-142k.jsonl" \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --max_length 21710 \
    --quant_bits 4 \
    --bnb_4bit_compute_dtype bfloat16 \
    --bnb_4bit_quant_storage bfloat16 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --gradient_checkpointing true \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --eval_steps 500 \
    --save_steps 500 \
    --logging_steps 100 \
    --system "You are a helpful assistant. You should think step-by-step." \
    --output_dir output/MicroThinker-8B-Preview/lora/sft \
    --model_author "huihui-ai" \
    --model_name "MicroThinker-8B-Preview"
```
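A quick note on the batch-related flags: with `--per_device_train_batch_size 1` and `--gradient_accumulation_steps 16` on a single GPU, gradients are accumulated over 16 micro-batches before each optimizer step, giving an effective batch size of 16:

```python
# Flags from the swift sft command above (single-GPU run).
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
num_gpus = 1

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 16
```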
4. Merge and save the final fine-tuned model. When you are done, type `exit` to quit the interactive session.
Replace the adapter checkpoint directory below with your own.
```
swift infer \
    --model huihui-ai/Llama-3.1-8B-Instruct-abliterated \
    --adapters output/Llama-3.1-8B-Instruct-abliterated/lora/sft/v0-20250119-175713/checkpoint-19500 \
    --stream true \
    --merge_lora true
```
This creates a new model directory, `checkpoint-19500-merged`. Rename it to `MicroThinker-8B-Preview` and copy or move it into the `huihui` directory.
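Conceptually, `--merge_lora true` folds the trained adapter back into the base weight once, so inference needs no extra matrix multiplications. A toy NumPy sketch of the merge (illustrative only, not ms-swift's actual code):

```python
import numpy as np

d, rank, alpha = 64, 8, 32
rng = np.random.default_rng(1)

W = rng.normal(size=(d, d))     # base weight
A = rng.normal(size=(rank, d))  # trained LoRA down-projection
B = rng.normal(size=(d, rank))  # trained LoRA up-projection

# Merging folds the adapter into the base weight: W' = W + (alpha / rank) * B A
W_merged = W + (alpha / rank) * (B @ A)

x = rng.normal(size=(d,))
# A single matmul with W_merged reproduces base-plus-adapter output exactly.
assert np.allclose(W_merged @ x, W @ x + (alpha / rank) * (B @ (A @ x)))
```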
5. Perform inference on the final fine-tuned model.
```
swift infer --model huihui/MicroThinker-8B-Preview --stream true --infer_backend pt --max_new_tokens 8192
```
6. Test examples.
```
How many 'r' characters are there in the word "strawberry"?
```
```
If a lake is covered by lilies in 48 days, with the number of lilies doubling each day, how many days does it take to cover half the lake?
```
```
If there are 10 people at a meeting who shake hands with each other, how many handshakes will occur in total?
```
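For reference, the expected answers to the three test prompts can be checked with a few lines of Python:

```python
# 1. 'r' characters in "strawberry": s-t-r-a-w-b-e-r-r-y
r_count = "strawberry".count("r")
print(r_count)           # 3

# 2. Lilies double every day and cover the lake on day 48,
#    so the lake is half covered exactly one day earlier.
half_covered_day = 48 - 1
print(half_covered_day)  # 47

# 3. Handshakes among 10 people: C(10, 2) = 10 * 9 / 2
handshakes = 10 * 9 // 2
print(handshakes)        # 45
```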