Initialize project; model provided by the ModelHub XC community
Model: sajalmadan0909/llama-checkpoint-200-merged Source: Original Platform
README.md (new file, 35 lines)
@@ -0,0 +1,35 @@
---
language:
- hi
- en
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- llama-3.1
- merged-lora
- sft
- transformers
- trl
- unsloth
---

# llama-checkpoint-200-merged

This model is a merged checkpoint created from a LoRA fine-tune of `meta-llama/Meta-Llama-3.1-8B-Instruct`.
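## Usage

Since the weights are already merged, they should load as a standard causal LM with `transformers`. A minimal sketch, assuming the repo id below resolves to this model; the dtype/device settings and the `generate_reply` helper are illustrative, not part of the repo:

```python
# Minimal inference sketch for the merged checkpoint.
# MODEL_ID is this repo's published name; dtype/device choices are
# illustrative assumptions, not prescribed by the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sajalmadan0909/llama-checkpoint-200-merged"

def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Download the merged weights and run one chat turn."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Llama 3.1 Instruct models expect the chat template, not raw text.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; keep only the generated continuation.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example (downloads the full weights on first call):
# print(generate_reply("नमस्ते! अपना परिचय दीजिए।"))
```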
## Base Model
|
||||
|
||||
- `meta-llama/Meta-Llama-3.1-8B-Instruct`
|
||||
|
||||
## Training Data
|
||||
|
||||
- `HydraIndicLM/hindi_alpaca_dolly_67k`
|
||||
- `yahma/alpaca-cleaned`
|
||||
|
||||
## Notes
|
||||
|
||||
- This folder contains merged model weights for inference.
|
||||
- The original training checkpoint was merged with the base model locally.
|
||||