Initialize project; model provided by the ModelHub XC community
Model: alpha-ai/llama-3.2-3B-Reason-Reflect-Lite-GGUF Source: Original Platform
.gitattributes (vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
unsloth.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
unsloth.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
unsloth.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
llama-3.2-3B-Reason-Reflect-Lite.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
llama-3.2-3B-Reason-Reflect-Lite.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
llama-3.2-3B-Reason-Reflect-Lite.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md (new file, 108 lines)
@@ -0,0 +1,108 @@
---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- alphaaico
- qwen
- reasoning
- thought
- lite
- GRPO
license: apache-2.0
language:
- en
datasets:
- openai/gsm8k
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png"
     alt="Title card"
     style="width: 500px; height: auto; object-position: center top;">
</div>

**Website - https://www.alphaai.biz**

# Uploaded Model

- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct

This model, **llama-3.2-3B-Reason-Reflect-Lite**, is a fine-tuned version of Llama-3.2-3B-Instruct designed not only to reason through problems but also to introspect on that reasoning before delivering the final response. Its unique selling proposition (USP) is that it generates both a detailed reasoning trace and an internal reflection on why that reasoning was chosen, all before presenting the final answer.

## Overview

**llama-3.2-3B-Reason-Reflect-Lite** has been fine-tuned using GRPO and advanced reward-modelling techniques, including custom functions such as `sequence_format_reward_func`, to enforce a strict response structure and encourage deep reasoning. While we won't divulge all the details, these techniques ensure that the model generates responses in a precise sequence: a detailed reasoning process, a subsequent internal reflection, and then the final answer.

## Model Details

- **Base Model:** meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuned by:** alphaaico
- **Training Framework:** Unsloth and Hugging Face's TRL library
- **Finetuning Techniques:** GRPO and additional reward-modelling methods

## Prompt Structure

The model is designed to generate responses in the following exact format:

```text
Respond in the following exact format:

<think>
[Your detailed reasoning here...]
</think>
<reflection>
[Your internal thought process about the reasoning and the question...]
</reflection>
<answer>
[Your final answer here...]
</answer>
```
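Because the tag sequence is fixed, downstream code can split a response into its three sections with a short parser. A minimal sketch (the helper below is our own illustration, not part of the model or repository):

```python
import re

def parse_response(text):
    """Split a model response into its think/reflection/answer sections."""
    sections = {}
    for tag in ("think", "reflection", "answer"):
        # DOTALL lets the match span multiple lines inside each tag pair.
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

demo = (
    "<think>2 + 2 = 4</think>\n"
    "<reflection>Simple arithmetic; no edge cases.</reflection>\n"
    "<answer>4</answer>"
)
# parse_response(demo) ->
# {'think': '2 + 2 = 4', 'reflection': 'Simple arithmetic; no edge cases.', 'answer': '4'}
```

If a tag pair is missing (e.g. a truncated generation), the corresponding entry is `None`, so callers can detect incomplete outputs.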

## Key Features

- **Enhanced Thinking & Self-Reflection:** Produces detailed reasoning enclosed in `<think>` tags, followed by an internal thought process (the "why" behind the reasoning) enclosed in `<reflection>` tags, before giving the final answer in `<answer>` tags.
- **Structured Output:** The response format is strictly enforced, making it easy to parse and integrate into downstream applications.
- **Optimized Inference:** Fine-tuned using Unsloth and TRL for faster and more efficient performance on consumer hardware.
- **Versatile Deployment:** Supports multiple quantization formats, including GGUF and 16-bit, to accommodate various hardware configurations.

## Quantization Levels Available

- q4_k_m
- q5_k_m
- q8_0
- 16-bit (https://huggingface.co/alpha-ai/llama-3.2-3B-Reason-Reflect-Lite)

## Ideal Configuration for Using the Model

- **Temperature:** 0.8
- **Top-p:** 0.95
- **Max Tokens:** 1024
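One way to apply these settings is through llama-cpp-python against one of the GGUF builds. The sketch below is illustrative only: the helper names and local model path are our own assumptions, and the model file must already be downloaded.

```python
def recommended_sampling():
    # The sampling settings suggested above.
    return {"temperature": 0.8, "top_p": 0.95, "max_tokens": 1024}

def run_demo(model_path="llama-3.2-3B-Reason-Reflect-Lite.Q4_K_M.gguf"):
    # Requires: pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": "A train covers 60 km in 45 minutes. "
                       "What is its average speed in km/h?",
        }],
        **recommended_sampling(),
    )
    # The reply should follow the <think>/<reflection>/<answer> format.
    return out["choices"][0]["message"]["content"]
```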

## Use Cases

**llama-3.2-3B-Reason-Reflect-Lite** is best suited for:

- **Conversational AI:** Empowering chatbots and virtual assistants with multi-step reasoning and introspective capabilities.
- **AI Research:** Investigating advanced reasoning and decision-making processes.
- **Automated Decision Support:** Enhancing business intelligence, legal reasoning, and financial analysis systems with structured, step-by-step outputs.
- **Educational Tools:** Assisting students and professionals in structured learning and problem solving.
- **Creative Applications:** Generating reflective and detailed content for storytelling, content creation, and more.

## Limitations & Considerations

- **Domain Specificity:** May require additional fine-tuning for specialized domains.
- **Factual Accuracy:** Primarily focused on reasoning and introspection; not intended as a comprehensive factual knowledge base.
- **Inference Speed:** Enhanced reasoning capabilities may result in slightly longer inference times.
- **Potential Biases:** Output may reflect biases present in the training data.

## License

This model is released under the Apache-2.0 license.

## Acknowledgments

Special thanks to the Unsloth team for providing an optimized training pipeline and to Hugging Face's TRL library for enabling advanced fine-tuning techniques.
config.json (new file, 3 lines)
@@ -0,0 +1,3 @@
{
  "model_type": "llama"
}
llama-3.2-3B-Reason-Reflect-Lite.Q4_K_M.gguf (LFS pointer, new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98590fa78b8f3b0c73580dfd6a6909771cee0325fa5c93dc66a7e1cc394934a9
size 2019377312
llama-3.2-3B-Reason-Reflect-Lite.Q5_K_M.gguf (LFS pointer, new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6e1c58552957bb221dc1ecdd3f8ab069a913faa05a957e7bd965c93a2244200
size 2322153632
llama-3.2-3B-Reason-Reflect-Lite.Q8_0.gguf (LFS pointer, new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52e1b6df19ef6ffbd388527a5d45838c0531678ae4c4d6b0673c3330cafd01e2
size 3421898912