---
license: apache-2.0
base_model:
- meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge
- speakleash/Bielik-11B-v2.3-Instruct
pipeline_tag: text-generation
tags:
- medit-lite
- model-pruning
- text-generation
language:
- pl
- en
---

<div align="center">
<img src="https://i.ibb.co/6HbR84p/imagine-image-e5133e7c-9457-4539-a5e8-e59095e80345.png" alt="MSH-Lite" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>

# Marsh Harrier Lite (MSH-Lite)

Marsh Harrier Lite (MSH-Lite) is a compact, efficient version of the [MedIT Solutions MSH-v1-Bielik-v2.3-Instruct-MedIT-merge](https://huggingface.co/meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge) model, reduced to 7 billion parameters using advanced pruning techniques. The pruning retains the core functionality of the original model while reducing computational resource usage.
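Since the card does not yet include a loading snippet, here is a minimal sketch using Hugging Face `transformers`. The repository id below is an assumption taken from the model's source listing, and `device_map="auto"` additionally requires the `accelerate` package; verify both against the actual hub page before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Assumed hub repository id (from the model's source listing); verify before use.
MODEL_ID = "meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # requires `accelerate`
    torch_dtype="auto",  # use the checkpoint's native precision
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Polish prompt; the model also supports English.
prompt = "Wyjaśnij krótko, czym jest przycinanie (pruning) modeli językowych."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```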
## Key Features:

- **Pruned Model**: Reduced from 11B to 7B parameters with the pruning method from the [MedIT Solutions LLaMA pruning framework](https://github.com/MedITSolutionsKurman/llama-pruning).
- **Optimized Performance**: Despite its reduced size, MSH-Lite delivers competitive performance across a wide range of NLP tasks.
- **Bilingual Support**: Handles both Polish (pl) and English (en) with high fluency.

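As a rough illustration only (not the linked framework's actual algorithm, which is LLaMA-specific and may use different importance scores), structured magnitude pruning can be sketched as keeping the highest-norm neurons of a weight matrix:

```python
import numpy as np

def prune_ffn_rows(weight: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the rows (neurons) with the largest L2 norms.

    Illustrative structured pruning only; real frameworks typically use
    calibration data and more refined importance estimates.
    """
    n_keep = max(1, int(round(weight.shape[0] * keep_ratio)))
    norms = np.linalg.norm(weight, axis=1)       # importance score per neuron
    keep = np.sort(np.argsort(norms)[-n_keep:])  # top-k rows, original order
    return weight[keep]

# Toy example: shrink an 8-neuron layer to 7/11 of its size,
# mirroring the 11B -> 7B reduction reported for MSH-Lite.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
w_pruned = prune_ffn_rows(w, keep_ratio=7 / 11)
print(w.shape, "->", w_pruned.shape)  # (8, 4) -> (5, 4)
```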
## Technical Details:

- **Base Model**: [MSH-v1-Bielik-v2.3-Instruct-MedIT-merge](https://huggingface.co/meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge)
- **Parameter Count**: 7 billion
- **Architecture**: Derived from Bielik's core architecture, with parameter optimization.
- **Model Efficiency**: Ideal for deployments where computational efficiency is paramount.

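To make the efficiency claim concrete, a back-of-the-envelope estimate of weight memory at bf16/fp16 precision (2 bytes per parameter; activations and KV cache excluded) for the 11B original versus the 7B pruned model:

```python
def approx_weight_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GiB (bf16/fp16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 2**30

full = approx_weight_gib(11e9)  # original 11B merge
lite = approx_weight_gib(7e9)   # pruned MSH-Lite
print(f"11B: {full:.1f} GiB, 7B: {lite:.1f} GiB, saved: {full - lite:.1f} GiB")
# -> 11B: 20.5 GiB, 7B: 13.0 GiB, saved: 7.5 GiB
```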
## Performance Highlights:

To be done.

## Acknowledgments:
Gratitude to the **[SpeakLeash](https://speakleash.org)** project and **[ACK Cyfronet AGH](https://www.cyfronet.pl/)** for their contributions and collaboration.