Training w/ 13,55%
<h1 align="center">Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm Agent-ID (tall_tame_panther)</h1>
<h2 align="center">Gensyn RL-Swarm: Training & GGUF Quantized LLMs for Inference</h2>
<p align="center">
<a href="https://huggingface.co/0xgr3y/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-tall_tame_panther"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Model-blue" alt="Model"></a>
<a href="https://github.com/gensyn-ai/rl-swarm/blob/main/LICENSE.TXT"><img src="https://img.shields.io/badge/License-MIT-green" alt="License"></a>
</p>
<div align="center">
[](https://gensyn.ai)
</div>
---
## Model Overview
This model is an **experimental (advanced) mode** pick: a continuously trained `Qwen2.5-Coder-0.5B-Instruct`, fine-tuned with the **Gensyn RL-Swarm** framework using **GRPO (Group Relative Policy Optimization)**, with **GGUF (llama.cpp)** formats supported for enhanced code generation. **Note: current training focuses on programming challenges with adaptive weighted sampling.**
- **Agent ID:** `tall_tame_panther`
- **Training Status:** 🟢 LIVE - Model updates automatically every 5-10 minutes
- **Auto-Sync GGUF Pipeline Status:** 🟢 LIVE - Commits update automatically every hour
- **Current Progress:** Round 13,533+ / 100,000 (13.53%)
- **Framework Version:** Gensyn RL-Swarm v0.7.0
- **Contract:** SwarmCoordinator v0.4.2
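GRPO, mentioned above, scores each sampled completion relative to its sampling group rather than against a learned value function. A minimal sketch of the group-relative advantage computation (illustrative only, not the RL-Swarm implementation):

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as used in GRPO: normalize each
    completion's reward by the mean and std of its sampling group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    if sigma == 0:
        # All completions scored equally; no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]
```

Completions that beat the group mean receive positive advantage and are reinforced; the group itself serves as the baseline, which is what lets GRPO drop the critic network.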
- **Real-time Training**: Continuous learning with distributed RL across Gensyn swarm network
- **Adaptive System**: Dynamic quality enhancement and dataset weighting for optimal learning
- **Multi-domain Coding**: Trained on MBPP and CodeContests datasets with adaptive sampling
- **GGUF Support**: Multiple quantized formats available (F16, Q3_K_M, Q4_K_M, Q5_K_M, Q6_K)
- **llama.cpp Compatible**: Ready for edge deployment and local inference
- **BF16 Precision**: Trained with bfloat16 for optimal performance
- **TGI Compatible**: Supports Text Generation Inference for production deployment
```
ollama create qwen2.5-coder-swarm -f Modelfile
ollama run qwen2.5-coder-swarm "Write a function to calculate the factorial of a number."
```
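The `ollama create` step above expects a `Modelfile`; a minimal one might look like this (the GGUF filename and sampling parameter are assumptions — point `FROM` at whichever quantization you downloaded):

```
FROM ./Qwen2.5-Coder-0.5B-Q4_K_M.gguf
PARAMETER temperature 0.7
```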
## Available GGUF Quantization
| Format | Size | Precision | Use Case | Download |
|--------|------|-----------|----------|----------|
| Safetensors (BF16) | 988 MB | BF16 | Full precision training/fine-tuning | `model.safetensors` |
| GGUF F16 | 994 MB | FP16 | High quality inference | `Qwen2.5-Coder-0.5B-F16.gguf` |
| GGUF Q6_K | 506 MB | 6-bit | High quality compression | `Qwen2.5-Coder-0.5B-Q6_K.gguf` |
| GGUF Q5_K_M | 420 MB | 5-bit | Balanced quality/size | `Qwen2.5-Coder-0.5B-Q5_K_M.gguf` |
| GGUF Q4_K_M | 398 MB | 4-bit | **Recommended** for production | `Qwen2.5-Coder-0.5B-Q4_K_M.gguf` |
| GGUF Q3_K_M | 355 MB | 3-bit | Smallest, fastest | `Qwen2.5-Coder-0.5B-Q3_K_M.gguf` |
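As a rough sanity check on the sizes above, file size divided by parameter count gives effective bits per weight. A quick sketch (the ~494M parameter count for Qwen2.5-0.5B is an assumption from the base model's published spec; K-quants keep some tensors at higher precision, so effective bits exceed the nominal width):

```python
# Rough bits-per-weight check for the GGUF sizes in the table above.
# PARAMS is an assumed value (~494M for Qwen2.5-0.5B).
PARAMS = 494_000_000

def bits_per_weight(size_mb: float) -> float:
    """Effective bits per weight for a file of size_mb megabytes."""
    return size_mb * 1e6 * 8 / PARAMS

for name, mb in [("Q4_K_M", 398), ("Q5_K_M", 420), ("Q6_K", 506)]:
    print(f"{name}: {bits_per_weight(mb):.1f} bits/weight")
```

This is why the 4-bit file is not simply half the size of the 16-bit one: embeddings and a few sensitive tensors stay at higher precision in K-quant mixes.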
> All GGUF formats are **llama.cpp compatible**, ready for **inference chat**, and auto-updated hourly.
## Chat Format & Conversational Usage
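Qwen2.5-Coder models use the standard Qwen ChatML template, so a raw prompt (when not using a chat-aware frontend) looks like:

```
<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
Write a function to calculate the factorial of a number.<|im_end|>
<|im_start|>assistant
```

Chat frontends such as the llama.cpp server and Ollama apply this template automatically from the GGUF metadata, so you only need the raw format for low-level completion APIs.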
Check commit history for exact timestamps.
| Metric | Value | Target |
|--------|-------|--------|
| Completed Rounds | 13,533+ | 100,000 |
| Training Progress | 13.53% | 100% |
| Update Frequency | 5-10 min | Continuous |
**Note**: **average\@k** is the average performance across `k` attempts, measuring consistency; **pass\@k** is the probability of at least one correct solution in `k` attempts, measuring capability. Current metrics track training rounds completed in the decentralized swarm.
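For context, pass@k is usually computed with the unbiased estimator from the Codex paper (a sketch of the standard formula, not this project's evaluation code): with `n` samples of which `c` are correct, pass@k = 1 − C(n−c, k)/C(n, k).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of
    k samples (drawn without replacement from n total, c of them
    correct) is correct."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 pass, `pass_at_k(4, 2, 2)` gives 5/6: the only failing draw is picking both incorrect samples.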
<div align="center">
**Trained with 🩷 using Gensyn RL-Swarm**
[](https://gensyn.ai)