Initialize project; model provided by the ModelHub XC community
Model: alpha-ai/Reason-With-Choice-3B-GGUF Source: Original Platform
41
.gitattributes
vendored
Normal file
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
unsloth.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
unsloth.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
unsloth.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Reason-With-Choice-3B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Reason-With-Choice-3B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Reason-With-Choice-3B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
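Entries like these are normally written by `git lfs track <pattern>` rather than by hand; a minimal sketch of the line format it appends (the helper function is hypothetical, for illustration only):

```python
def lfs_track_line(pattern: str) -> str:
    # Reproduce the attribute line `git lfs track <pattern>` appends to .gitattributes:
    # route the pattern through the LFS clean/smudge filter and treat it as binary.
    return f"{pattern} filter=lfs diff=lfs merge=lfs -text"

print(lfs_track_line("*.gguf"))
```

Each quantized GGUF in this commit matches one of these patterns, so only a small pointer file lands in git history while the multi-gigabyte weights go to LFS storage.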
119
README.md
Normal file
@@ -0,0 +1,119 @@
---
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- GRPO
- meta
license: apache-2.0
language:
- en
datasets:
- openai/gsm8k
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png"
     alt="Title card"
     style="width: 500px; height: auto; object-position: center top;">
</div>

**Website - https://www.alphaai.biz**

# Uploaded model

- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct
- **Training Framework:** Unsloth + Hugging Face TRL
- **Finetuning Techniques:** GRPO + Reward Modelling
## Overview

Welcome to the next evolution of AI reasoning! Reason-With-Choice-3B is not just another fine-tuned model; it's a game-changer. It doesn't merely generate reasoning: it chooses whether reasoning is even necessary before delivering an answer. This self-reflective capability allows it to introspect, analyze, and adapt to the complexity of each question, ensuring the most efficient and insightful response possible.

Most AI models blindly generate reasoning even when it is unnecessary, leading to bloated, redundant responses. Not this one. With its built-in decision-making, Reason-With-Choice-3B determines whether deep reasoning is needed or a direct answer will suffice, bringing efficiency and intelligence to your AI-driven applications.
## Key Highlights

- Reasoning & Self-Reflection: The model first decides whether reasoning is necessary, then either provides step-by-step logic or answers the question directly.
- Structured Output: Responses follow a strict format with `<think>`, `<reflection>`, and `<answer>` sections, ensuring clarity and interpretability.
- Optimized Training: Trained using GRPO (Group Relative Policy Optimization) to enforce structured responses and improve decision-making.
- Efficient Inference: Fine-tuned with Unsloth & Hugging Face's TRL, ensuring faster inference speeds and optimized resource utilization.
## Prompt Structure

The model generates responses in the following structured format:

```
<think>
[Detailed reasoning, if required. Otherwise, this section remains empty.]
</think>
<reflection>
[Internal thought process explaining whether reasoning was needed.]
</reflection>
<answer>
[Final response.]
</answer>
```
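Because the format is strict, the three sections can be pulled apart with a few lines of regex on the client side; a minimal sketch (the helper name is illustrative, not part of the model's tooling):

```python
import re

def parse_response(text: str) -> dict:
    """Extract the <think>, <reflection>, and <answer> sections from a response."""
    sections = {}
    for tag in ("think", "reflection", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        # An absent or empty <think> block means the model chose to answer directly.
        sections[tag] = match.group(1).strip() if match else ""
    return sections

example = (
    "<think></think>"
    "<reflection>Simple arithmetic; no step-by-step reasoning needed.</reflection>"
    "<answer>4</answer>"
)
print(parse_response(example))
```

Checking whether the `think` field is empty gives a cheap signal of when the model decided reasoning was unnecessary.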
## Key Features

- Decision-Making Capability: The model intelligently determines whether reasoning is necessary before answering.
- Improved Accuracy: Training with reward functions ensures adherence to a logical response structure.
- Structured Outputs: Guarantees that each response follows a predictable and interpretable format.
- Enhanced Efficiency: Optimized inference with vLLM for fast token generation and a low memory footprint.
- Multi-Use Case Compatibility: Can be used for Q&A systems, logical reasoning tasks, and AI-assisted decision-making.
## Quantization Levels Available

- q4_k_m
- q5_k_m
- q8_0
- 16-bit (Full Precision, https://huggingface.co/alpha-ai/Reason-With-Choice-3B)
## Ideal Configuration for Usage

- Temperature: 0.8
- Top-p: 0.95
- Max Tokens: 1024
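These settings map directly onto an OpenAI-compatible chat request, such as the one served by a local llama.cpp server running one of the GGUF files; a minimal sketch, where the model filename and prompt are assumptions:

```python
import json

# Recommended sampling settings from this card, expressed as a request body
# for an OpenAI-compatible endpoint (e.g., llama.cpp's /v1/chat/completions).
payload = {
    "model": "Reason-With-Choice-3B.Q4_K_M.gguf",  # assumed local model name
    "messages": [{"role": "user", "content": "Is 17 a prime number?"}],
    "temperature": 0.8,
    "top_p": 0.95,
    "max_tokens": 1024,
}
print(json.dumps(payload, indent=2))
```

The 1024-token ceiling leaves room for the `<think>` and `<reflection>` sections on questions where the model opts into full reasoning.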
## Use Cases

**Reason-With-Choice-3B is ideal for:**

- AI Research: Investigating decision-making and reasoning processes in AI.
- Conversational AI: Enhancing chatbot intelligence with structured reasoning.
- Automated Decision Support: Assisting in structured, step-by-step problem-solving.
- Educational Tools: Providing logical explanations for learning and problem-solving.
- Business Intelligence: AI-assisted decision-making for operational and strategic planning.
## Limitations & Considerations

- Domain Adaptation: May require further fine-tuning for domain-specific tasks.
- Inference Time: Increased processing time when reasoning is necessary.
- Potential Biases: Outputs depend on training data and may require verification for critical applications.
## License

This model is released under the Apache-2.0 license.
## Acknowledgments

Special thanks to the Unsloth team for optimizing the fine-tuning pipeline and to Hugging Face's TRL for enabling advanced fine-tuning techniques.
## Security & Format Considerations

This model has been saved in .bin format due to Unsloth's default serialization method. If security is a concern, we recommend converting to .safetensors:

```python
from transformers import AutoModelForCausalLM

# Load the .bin checkpoint, then re-save with safetensors serialization.
# save_pretrained handles tied/shared weights correctly, which a raw
# safetensors save_file call on the state dict may not.
model = AutoModelForCausalLM.from_pretrained("path/to/model")
model.save_pretrained("path/to/model", safe_serialization=True)
print("Model converted to safetensors successfully.")
```

Alternatively, GGUF models are available for optimized inference with llama.cpp, exllama, and other runtime frameworks.

Choose the format best suited to your security, performance, and deployment requirements.
3
Reason-With-Choice-3B.Q4_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f405cbd430a45b222f3c77d3527578e0204319f9e728729be2ed96488693998
size 2019377312
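The three lines above are a standard Git LFS pointer: a spec version, the SHA-256 of the real file, and its size in bytes. A small stdlib sketch that parses one into its fields (the pointer text is copied from this file):

```python
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5f405cbd430a45b222f3c77d3527578e0204319f9e728729be2ed96488693998
size 2019377312
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "<key> <value>"; the oid carries a "sha256:" prefix.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

info = parse_lfs_pointer(POINTER)
print(info["oid"], info["size"])
```

The `oid` can be checked against `sha256sum` of the downloaded GGUF to verify the ~2 GB Q4_K_M file arrived intact.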
3
Reason-With-Choice-3B.Q5_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96002200af4a1f759033b34a7a97ab76066ab11cba9311a2d3f299f042685335
size 2322153632
3
Reason-With-Choice-3B.Q8_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42a67ee23126797bdf9f4be10e34b6d455912d07a20c9ab6a17a27875666b6d3
size 3421898912
3
config.json
Normal file
@@ -0,0 +1,3 @@
{
  "model_type": "llama"
}
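This stub config declares only the architecture, which is enough for tools that key their GGUF/tokenizer handling off `model_type`; a quick stdlib check of what a loader would see:

```python
import json

# Contents of config.json as committed in this repo.
config_text = '{\n  "model_type": "llama"\n}'
config = json.loads(config_text)
print(config["model_type"])
```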