Initialize project; model provided by the ModelHub XC community
Model: jeffmeloy/Qwen2.5-7B-olm-v1.3 Source: Original Platform
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
language:
- en
library_name: transformers
tags:
- text-generation-inference
---

## Model Description

Optimized Layer Merging (OLM) is a transformer optimization framework implementing automated layer recombination.

OLM creates a Frankenstein's monster out of language models, cherry-picking the best-performing layers from different models to build a superior hybrid.

The core mechanism:

- Takes multiple language models as input
- Uses a base model as the foundation
- Iteratively replaces individual layers, evaluating performance on specified datasets
- Keeps the best-performing layer at each position based on metrics like perplexity, exact match, and a custom "quality" score
- Builds a fusion model layer-by-layer while maintaining or improving performance

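The greedy loop above can be sketched in miniature. This is an illustrative toy, not OLM's actual implementation (see the repository below): "models" are lists of layer functions, and `evaluate` is a stand-in for the real perplexity/exact-match scoring.

```python
# Hypothetical sketch of OLM-style greedy layer selection.
# All names here are illustrative, not taken from the olm codebase.
from typing import Callable, List, Tuple

Layer = Callable[[float], float]

def run_model(layers: List[Layer], x: float) -> float:
    # Apply each "layer" in sequence, like a transformer's layer stack.
    for layer in layers:
        x = layer(x)
    return x

def evaluate(layers: List[Layer], dataset: List[Tuple[float, float]]) -> float:
    # Stand-in score: squared error against targets (lower is better),
    # playing the role of perplexity or a custom quality metric.
    return sum((run_model(layers, x) - y) ** 2 for x, y in dataset)

def olm_merge(base: List[Layer],
              candidates: List[List[Layer]],
              dataset: List[Tuple[float, float]]) -> List[Layer]:
    """Start from the base model; at each layer position, swap in the
    candidate layer only if it improves the evaluation score."""
    fused = list(base)
    for i in range(len(base)):
        best_score = evaluate(fused, dataset)
        for cand in candidates:
            trial = fused[:i] + [cand[i]] + fused[i + 1:]
            score = evaluate(trial, dataset)
            if score < best_score:  # keep the best layer at this position
                best_score, fused = score, trial
    return fused
```

Because each swap is accepted only when the score improves, the fused model's performance is maintained or improved at every step, mirroring the last bullet above.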
https://github.com/jeffmeloy/olm