---
language:
- en
pipeline_tag: text-generation
---
# kreasof-ai/Liquid-RSA-Mix-GGUF (Under Experiment)
> Goal: SOTA math reasoning for a sub-400M-parameter LLM
- Base Model: [LiquidAI/LFM2-350M-Math](https://huggingface.co/LiquidAI/LFM2-350M-Math)
- Dataset: [kreasof-ai/RSA-Distillation-Mix](https://huggingface.co/datasets/kreasof-ai/RSA-Distillation-Mix)
## Benchmark
Note: we use thinking-token forcing because this model occasionally outputs a response directly, without the thinking tag.
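Thinking-token forcing simply means the assistant turn is pre-seeded with the opening thinking tag, so decoding is constrained to start inside a reasoning block. A minimal sketch follows; the ChatML-style turn markers and the `<think>` tag name are assumptions about the chat format, not confirmed details of LFM2.

```python
# Sketch of thinking-token forcing, assuming a ChatML-style chat template
# and a "<think>" reasoning tag (both are assumptions, not confirmed).

def build_forced_prompt(question: str, think_tag: str = "<think>") -> str:
    """Build a prompt whose assistant turn already opens the thinking tag,
    so the model must continue its reasoning rather than answer directly."""
    return (
        "<|im_start|>user\n" + question + "<|im_end|>\n"
        "<|im_start|>assistant\n" + think_tag + "\n"
    )

prompt = build_forced_prompt("What is 17 * 23?")
# Feed `prompt` to the runtime as a raw (untemplated) prompt; generation
# then begins inside the thinking block.
```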
**Standard Decoding:**
- AIME 2025: 28.3% (+1.2% over `LiquidAI/LFM2-350M-Math`); benchmark outputs: https://docs.google.com/spreadsheets/d/1Gr0AFT08tWQ8ocPK3TEwfwz1h336Pfe4TB_IQxKDt3A/
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
**Recursive Self-Aggregation:**
- AIME 2025: TBA
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
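The Recursive Self-Aggregation loop named above can be sketched as follows: keep a population of candidate solutions, and on each round aggregate random subsets into refined candidates that replace the population. The `generate` and `aggregate` callables below are hypothetical stand-ins for model calls (a sampling pass and an aggregation prompt, respectively); the exact prompting used for this model is not specified here.

```python
import random
from collections import Counter

def rsa(generate, aggregate, prompt, n=8, k=4, rounds=3, seed=0):
    """Recursive Self-Aggregation (sketch, under the assumptions above):
    maintain a population of n candidate solutions; each round, draw random
    subsets of size k and aggregate each into a refined candidate."""
    rng = random.Random(seed)
    population = [generate(prompt) for _ in range(n)]
    for _ in range(rounds):
        population = [
            aggregate(prompt, rng.sample(population, k)) for _ in range(n)
        ]
    # Final answer: majority vote over the last population.
    return Counter(population).most_common(1)[0][0]
```

With an LLM backend, `generate(prompt)` would sample one solution and `aggregate(prompt, subset)` would ask the model to reconcile the subset into a single improved solution; the toy stubs in the test below just return fixed strings to exercise the loop.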