---
license: apache-2.0
datasets:
- Magpie-Align/Magpie-Pro-300K-Filtered
- mlabonne/FineTome-100k
- unsloth/OpenMathReasoning-mini
- prithivMLmods/Grade-Math-18K
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- code
- moe
---

#### You can download the model with the git clone command below, or with the ModelScope SDK

SDK download

```bash
# Install ModelScope
pip install modelscope
```

```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download

model_dir = snapshot_download('prithivMLmods/Magpie-Qwen-CortexDual-0.6B')
```

Git download

```bash
# Download the model with git
git clone https://www.modelscope.cn/prithivMLmods/Magpie-Qwen-CortexDual-0.6B.git
```

![4.](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/m-aPS4martyM7AjKTNoVZ.png)

# Magpie-Qwen-CortexDual-0.6B

> **Magpie-Qwen-CortexDual-0.6B** is a compact general-purpose model specialized for **math**, **code**, and **structured reasoning**. Built with the **CortexDual thinking mode**, it adapts to the complexity of a problem, automatically shifting into stepwise reasoning for intricate logic or math tasks. This 0.6B-parameter model leverages **80% of the Magpie Pro 330k dataset**, complemented by a modular blend of datasets for general-purpose proficiency and domain versatility.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF](https://huggingface.co/prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF)
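
For the GGUF build, a minimal sketch with `llama-cpp-python` is shown below. This workflow is an assumption, not something the card documents, and the quantization filename is a placeholder; substitute a real file from the GGUF repo.

```python
# Sketch: running the GGUF build via llama-cpp-python (assumed workflow,
# not confirmed by the model card).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF",
    filename="*Q8_0.gguf",  # hypothetical quant; check the repo's file list
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```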

---

## Key Features

1. **Adaptive Reasoning via CortexDual**
   Automatically switches into a deeper thinking mode for complex problems, simulating trace-style deduction for higher-order tasks in math and code (see the sketch after this list).

2. **Efficient and Compact**
   At 0.6B parameters, it is optimized for deployment in constrained environments while retaining high fidelity in logic, computation, and structural formatting.

3. **Magpie-Driven Data Synthesis**
   Trained on 80% of **Magpie Pro 330k**, a high-quality alignment and reasoning dataset, complemented with curated modular datasets for enhanced general-purpose capabilities.

4. **Mathematical Precision**
   Fine-tuned for arithmetic, algebra, calculus, and symbolic logic; ideal for STEM learning platforms, math solvers, and step-by-step tutoring.

5. **Lightweight Code Assistance**
   Understands and generates code in Python, JavaScript, and other common languages, with contextual accuracy and explanation support.

6. **Structured Output Generation**
   Specializes in Markdown, JSON, and table outputs, suitable for technical documentation, instruction generation, and structured reasoning.

7. **Multilingual Competence**
   Supports over 20 languages with reasoning and translation support, expanding its reach for global educational and development use.
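
As a minimal sketch of the adaptive reasoning switch, the snippet below assumes the model inherits the Qwen3-style `enable_thinking` flag in its chat template (the base model is Qwen/Qwen3-0.6B); this is an assumption, not something the card confirms.

```python
# Sketch: toggling stepwise reasoning, assuming a Qwen3-style
# `enable_thinking` chat-template switch (the base model Qwen/Qwen3-0.6B
# supports it; this fine-tune may or may not).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Magpie-Qwen-CortexDual-0.6B")

messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]

# Stepwise "thinking" trace for intricate math or logic prompts
thinking_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)

# Direct answer for simple, retrieval-style prompts
direct_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
```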

---

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magpie-Qwen-CortexDual-0.6B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime. Explain each step."

messages = [
    {"role": "system", "content": "You are an AI tutor skilled in both math and code."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
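
If the model emits a Qwen3-style `<think>...</think>` trace in thinking mode, the helper below is one way to separate the trace from the final answer. The tag format is assumed from the Qwen3 base model, and `split_thinking` is a hypothetical helper, not part of any library.

```python
# Sketch: separating an assumed Qwen3-style <think>...</think> trace
# from the final answer. `split_thinking` is a hypothetical helper.
import re


def split_thinking(response: str) -> tuple[str, str]:
    """Return (thinking_trace, final_answer); the trace is empty if no tags are found."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return "", response.strip()
    return match.group(1).strip(), response[match.end():].strip()


demo = "<think>2 is prime; check divisibility up to sqrt(n).</think>Here is the function..."
trace, answer = split_thinking(demo)
print(trace)
print(answer)
```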

---

## Demo Inference

> [!warning]
> Non-thinking mode: direct, reactive, retrieval-based responses.

![Screenshot 2025-06-14 at 15-10-49 (anonymous) - output (1).pdf](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/dctE4QPKTnWVBzAz5ADbK.png)

> [!warning]
> Thinking mode: reasoning, planning, deeper analysis.

![Screenshot 2025-06-14 at 15-19-32 (anonymous) - output.pdf](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/JJ59whWYs8almTJ-3Ja9v.png)
![Screenshot 2025-06-14 at 15-22-01 (anonymous) - output.pdf](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/dcbahUvRJp71Y5ViVpM0m.png)

---

## Intended Use

* General-purpose problem solving in math, logic, and code
* Interactive STEM tutoring and reasoning explanation
* Compact assistant for technical documentation and structured data tasks
* Multilingual applications with a focus on accurate technical reasoning
* Efficient offline deployment on low-resource devices

---

## Limitations

* Lower creativity and weaker open-domain generation due to reasoning-focused tuning
* Limited context window compared to larger models
* May produce simplified logic paths in highly abstract domains
* Trade-offs in diversity and expressiveness compared to larger instruction-tuned models

---

## References

1. [Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing](https://arxiv.org/pdf/2406.08464)
2. [Qwen2.5 Technical Report](https://arxiv.org/pdf/2412.15115)
3. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071)