Initialize project; model provided by the ModelHub XC community
Model: gyung/lfm2-1.2b-koen-mt-v4-100k-GGUF Source: Original Platform
This commit is contained in:
41
.gitattributes
vendored
Normal file
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-f16.gguf filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
lfm2-1.2b-koen-mt-v4-100k-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
105
README.md
Normal file
@@ -0,0 +1,105 @@
---
license: other
license_name: lfm-open-license-v1.0
license_link: https://huggingface.co/LiquidAI/LFM2-1.2B/blob/main/LICENSE
language:
- ko
- en
pipeline_tag: translation
tags:
- lfm2
- liquid-ai
- korean
- gguf
- quantization
base_model: gyung/lfm2-1.2b-koen-mt-v4-100k
---

# 🌊 LFM2-1.2B-KoEn-MT-v4-100k-GGUF

This repository contains **GGUF** (GGML/llama.cpp-compatible) quantized versions of the [gyung/lfm2-1.2b-koen-mt-v4-100k](https://huggingface.co/gyung/lfm2-1.2b-koen-mt-v4-100k) model.

## ℹ️ Model Description

**LFM2-1.2B-KoEn-MT-v4-100k** is built on LiquidAI's `LFM2-1.2B` architecture and fine-tuned on a **100,000-pair high-quality parallel dataset** to maximize Korean-English translation performance.

* **Base Model**: LiquidAI/LFM2-1.2B
* **Finetuned by**: Gyung
* **Parameters**: 1.2B
* **Purpose**: Korean-English Translation

## 📦 Available GGUF Files (Quantization Methods)

Download the quantization that fits your environment and needs. (Recommended: `Q4_K_M` or `Q5_K_M`)

| Filename | Quant | Size | Description |
| --- | --- | --- | --- |
| `lfm2-1.2b-koen-mt-v4-100k-f16.gguf` | F16 | ~2.34 GB | Full original quality, largest file |
| `lfm2-1.2b-koen-mt-v4-100k-Q8_0.gguf` | Q8_0 | ~1.25 GB | Near-lossless quality |
| `lfm2-1.2b-koen-mt-v4-100k-Q6_K.gguf` | Q6_K | ~963 MB | High quality, well balanced |
| `lfm2-1.2b-koen-mt-v4-100k-Q5_K_M.gguf` | Q5_K_M | ~843 MB | **Recommended**: best balance of quality vs. speed/size |
| `lfm2-1.2b-koen-mt-v4-100k-Q4_K_M.gguf` | Q4_K_M | ~731 MB | **Recommended**: low memory use, solid quality |
| `lfm2-1.2b-koen-mt-v4-100k-Q4_0.gguf` | Q4_0 | ~696 MB | Smallest; some quality loss possible |
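Each `.gguf` file is stored via Git LFS, and this commit's pointer files record a sha256 `oid` and byte `size` for every quantization. After downloading, you can check file integrity against those values; a minimal stdlib-only sketch (the expected hash below is the Q4_K_M oid from this repository):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the sha256 of a file in chunks (GGUF files are large)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Expected oid from this repo's Git LFS pointer for the Q4_K_M file
EXPECTED_Q4_K_M = "2350bb9f69e5b8377f89bcba3d9371f9a28deeacd5e31979aefde97f2fc816b7"

# After downloading, uncomment to verify:
# assert sha256_of("lfm2-1.2b-koen-mt-v4-100k-Q4_K_M.gguf") == EXPECTED_Q4_K_M
```

A mismatch usually means a truncated download; re-fetch the file before loading it.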

## 🚀 Usage

### llama.cpp

Run with a recent build of `llama.cpp` (make sure your build supports the LFM2 architecture):

```bash
./llama-cli -m lfm2-1.2b-koen-mt-v4-100k-Q5_K_M.gguf \
  -p "Translate to Korean: The model is working correctly now." \
  -n 256
```

### Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the quantized model (adjust the path to the file you downloaded)
llm = Llama(
    model_path="./lfm2-1.2b-koen-mt-v4-100k-Q5_K_M.gguf",
    n_ctx=2048,
    verbose=False
)

prompt = "Translate to Korean: The model is working correctly now."
output = llm(
    f"User: {prompt}\nAssistant:",
    max_tokens=256,
    stop=["User:", "\n"],
    echo=False  # return only the completion, not the prompt
)

print(output['choices'][0]['text'].strip())
```
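For batch use, the prompt formatting and output extraction above can be factored into small helpers (the helper names here are hypothetical, not part of the model card):

```python
def build_prompt(text: str) -> str:
    """Wrap a source sentence in the User/Assistant template used above."""
    return f"User: Translate to Korean: {text}\nAssistant:"

def extract_translation(output: dict) -> str:
    """Pull the generated text out of a llama-cpp-python completion dict."""
    return output["choices"][0]["text"].strip()

# Usage (assuming `llm` is the Llama instance from the previous block):
# result = llm(build_prompt("Good morning."), max_tokens=256, stop=["User:", "\n"])
# print(extract_translation(result))
```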

## 📊 Benchmarks

**Flores-200** results for the original (F16) model. GGUF quantization may lower these scores slightly.

* **LFM2-1.2B-KOEN-MT-v4-100k**: chrF++ **30.98** / BLEU **11.09**
* Google Translate: chrF++ 39.27
* NLLB-200-Distilled-600M: chrF++ 31.97

## 📜 License

This model is released under the **Liquid AI LFM Open License v1.0**.

* Academic/personal research: no restrictions
* Commercial use: free for companies under $10M annual revenue (a separate agreement is required above that)
* See the [LICENSE](https://huggingface.co/LiquidAI/LFM2-1.2B/blob/main/LICENSE) for details.

## Citation

```bibtex
@misc{lfm2-1.2b-koen-mt-v4-100k,
  author = {Gyung},
  title = {LFM2-1.2B Korean-English Machine Translation Model v4},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/gyung/lfm2-1.2b-koen-mt-v4-100k}}
}
```
3
lfm2-1.2b-koen-mt-v4-100k-Q4_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33029cca1aeb21db5ae6215815b524b32af50bffa2d025a541daa1c887028063
size 695751360
3
lfm2-1.2b-koen-mt-v4-100k-Q4_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2350bb9f69e5b8377f89bcba3d9371f9a28deeacd5e31979aefde97f2fc816b7
size 730895040
3
lfm2-1.2b-koen-mt-v4-100k-Q5_K_M.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7e7c8ffb2459bd309037858b16522628b6a3caa270fc4d26bdc48e0f6bb1e05
size 843354816
3
lfm2-1.2b-koen-mt-v4-100k-Q6_K.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3afa8aa989501d7bfad6c43542336f1380a5a03b28739accb5f27fccd83a3cb
size 962843328
3
lfm2-1.2b-koen-mt-v4-100k-Q8_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b21a1d67e59a0c0b09c29626ec4f90bbb724e0b8df50bbf7bc335fa4ad818aaa
size 1246253760
3
lfm2-1.2b-koen-mt-v4-100k-f16.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5821b5b54bc4b0e89ad9518ae9dcf8f42e63d25168ca175c4134565e965dc641
size 2343326400