Initialize project; model provided by the ModelHub XC community
Model: kreasof-ai/Liquid-RSA-Mix-GGUF (Source: Original Platform)
This commit is contained in:

README.md (new file, +27 lines)
---
language:
- en
pipeline_tag: text-generation
---
# Under Experiment

> GOAL: SOTA math reasoning for a sub-400M-parameter LLM

- Base Model: [LiquidAI/LFM2-350M-Math](https://huggingface.co/LiquidAI/LFM2-350M-Math)
- Dataset: [kreasof-ai/RSA-Distillation-Mix](https://huggingface.co/datasets/kreasof-ai/RSA-Distillation-Mix)
## Benchmark

Note: we use thinking-token forcing because this model occasionally outputs a response directly, without the thinking tag.
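A minimal sketch of what thinking-token forcing can look like, assuming the model uses `<think>`...`</think>` tags and that the opening tag is prefilled after the rendered chat prompt (the prompt format shown is illustrative, not this model's exact chat template):

```python
def force_thinking(rendered_prompt: str, think_open: str = "<think>") -> str:
    """Prefill the opening thinking tag so decoding is forced to start
    inside the thinking block instead of answering directly."""
    if rendered_prompt.rstrip().endswith(think_open):
        return rendered_prompt  # tag already present, nothing to force
    return rendered_prompt + think_open

# Usage: render the chat template up to the assistant turn, then append
# the tag before calling the model's completion endpoint.
prompt = "<|user|>\nWhat is 17 * 23?\n<|assistant|>\n"
forced = force_thinking(prompt)
print(forced.endswith("<think>"))  # True
```

The generated text then begins inside the thinking block, and decoding proceeds until the model emits the closing tag on its own.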
Standard Decoding:

- AIME 2025: 28.3% (+1.2% over `LiquidAI/LFM2-350M-Math`); benchmark outputs: https://docs.google.com/spreadsheets/d/1Gr0AFT08tWQ8ocPK3TEwfwz1h336Pfe4TB_IQxKDt3A/
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
Recursive Self-Aggregation:

- AIME 2025: TBA
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
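For context, Recursive Self-Aggregation keeps a population of candidate solutions and repeatedly prompts the model to merge random subsets into improved candidates. A schematic sketch of that loop, with `generate` standing in for an actual model call (the prompt wording, function name, and default parameters are assumptions, not the exact recipe behind these numbers):

```python
import random
from typing import Callable, List

def rsa(generate: Callable[[str], str], problem: str,
        population: int = 8, subset: int = 3, steps: int = 2) -> List[str]:
    """Recursive Self-Aggregation: keep `population` candidate solutions,
    and at each step replace every candidate with an aggregation of a
    random subset of the current population."""
    # Step 0: sample independent candidate solutions.
    candidates = [generate(problem) for _ in range(population)]
    for _ in range(steps):
        new_candidates = []
        for _ in range(population):
            # Pick a small subset and ask the model to merge them.
            picks = random.sample(candidates, k=subset)
            agg_prompt = (
                f"Problem: {problem}\n\nCandidate solutions:\n"
                + "\n---\n".join(picks)
                + "\n\nCombine the candidates above into one improved solution."
            )
            new_candidates.append(generate(agg_prompt))
        candidates = new_candidates
    return candidates
```

A final answer is then extracted from the last population, e.g. by majority vote over the boxed answers. The total model-call budget is `population * (steps + 1)`.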