Initialize project; model provided by the ModelHub XC community
Model: SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged Source: Original Platform
This commit is contained in:

README.md (new file, 12 lines)
---
license: mit
language:
- en
---

## Model description

- **Model type:** A 1.1B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T)
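The card above names the merged checkpoint but gives no usage snippet. A minimal sketch of loading it with the `transformers` library follows; the model id is the one this mirror was copied from, and the greedy-decoding settings and helper name `generate` are assumptions, not part of the original card.

```python
# Hypothetical usage sketch for the merged DPO checkpoint (an assumption,
# not code from the model card). Downloading the ~1.1B-parameter weights
# requires network access and a few GB of disk.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model from the Hub and greedily decode a completion."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The capital of France is"))
```

Since the LoRA adapter is already merged into the base weights, no `peft` dependency is needed; the checkpoint loads like any plain causal-LM.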