Update README.md

This commit is contained in:
grey
2025-11-10 14:19:49 +00:00
committed by system
parent cdfaf1675a
commit 3012a1e877


@@ -59,7 +59,7 @@ model-index:
name: Training Progress (%)
---
- # Qwen3-0.6B-Gensyn-Swarm (tall_tame_panther)
+ # Qwen3-0.6B-Gensyn-Swarm (Agent ID: tall_tame_panther)
[![Model](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-blue)](https://huggingface.co/0xgr3y/Qwen3-0.6B-Gensyn-Swarm-tall_tame_panther)
[![GGUF](https://img.shields.io/badge/GGUF-Available-green)](https://huggingface.co/0xgr3y/Qwen3-0.6B-Gensyn-Swarm-tall_tame_panther/tree/main)
@@ -68,18 +68,19 @@ model-index:
## Model Overview
- This model is a continuously trained Qwen3-0.6B fine-tuned using **Gensyn RL-Swarm** framework with **GRPO (Generalized Reward Policy Optimization)** for enhanced reasoning and mathematical capabilities. **Note: Current training focuses on math/reasoning tasks**.
+ This model is a continuously trained Qwen3-0.6B fine-tuned using the **Gensyn RL-Swarm** framework with **GRPO (Group Relative Policy Optimization)**, with **GGUF (llama.cpp)** support, for enhanced reasoning and mathematical capabilities. **Note: Current training focuses on math & reasoning tasks**.
- **Agent ID:** `tall_tame_panther`
- **Training Status:** 🟢 LIVE - Model updates automatically every 5-10 minutes
- **Current Progress:** Round 43,610+ / 100,000 (43,61%)
- **Framework Version:** Gensyn RL-Swarm v0.6.4
- **Contract:** SwarmCoordinator v0.4.2
+ - **Agent ID:** `tall_tame_panther`
+ - **Training Status:** 🟢 LIVE - Model updates automatically every 5-10 minutes
+ - **Auto-Sync GGUF Pipeline Status:** 🟢 LIVE - Commits update automatically every hour
+ - **Current Progress:** Round 43,610+ / 100,000 (43.61%)
+ - **Framework Version:** Gensyn RL-Swarm v0.6.4
+ - **Contract:** SwarmCoordinator v0.4.2
## Key Features
- **Real-time Training**: Continuous learning with distributed RL across Gensyn swarm network
- - **Multi-domain Reasoning**: Trained on logic, arithmetic, and mathematical problem-solving
+ - **Multi-domain Reasoning**: Trained on logic, mathematical problem-solving & reasoning tasks
- **GGUF Support**: Multiple quantized formats available (F16, Q3_K_M, Q4_K_M, Q5_K_M)
- **llama.cpp Compatible**: Ready for edge deployment and local inference
- **BF16 Precision**: Trained with bfloat16 for optimal performance