---
language:
- hi
- en
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- llama-3.1
- merged-lora
- sft
- transformers
- trl
- unsloth
---

# llama-checkpoint-200-merged

This model is a merged checkpoint created from a LoRA fine-tune of `meta-llama/Meta-Llama-3.1-8B-Instruct`.

## Base Model

- `meta-llama/Meta-Llama-3.1-8B-Instruct`

## Training Data

- `HydraIndicLM/hindi_alpaca_dolly_67k`
- `yahma/alpaca-cleaned`

## Notes

- This folder contains the merged model weights, ready for inference.
- The original LoRA training checkpoint was merged into the base model locally; a sketch of that merge step follows the list.
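
The merge itself is not captured in this repository, but as a rough illustration, a LoRA adapter saved at a training checkpoint can be folded into the base weights with `peft` along these lines (the `checkpoint-200` path is hypothetical):

```python
# Sketch only: folds a LoRA adapter into the base model using peft.
# "checkpoint-200" is a hypothetical local path to the training checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(base, "checkpoint-200")

# merge_and_unload() bakes the LoRA deltas into the base weights and
# returns a plain transformers model with no adapter layers attached.
merged = model.merge_and_unload()
merged.save_pretrained("llama-checkpoint-200-merged")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
tokenizer.save_pretrained("llama-checkpoint-200-merged")
```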
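
## Usage

A minimal inference sketch with `transformers`; the model id below assumes the merged weights are available under this folder's name and may need to be replaced with the actual repo id or local path:

```python
# Minimal inference sketch; the model id is an assumption based on this
# card's title and may differ from the actual published repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama-checkpoint-200-merged"  # or a local path to the merged weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Llama 3.1 Instruct expects its chat template for instruction-style prompts.
messages = [{"role": "user", "content": "नमस्ते! आप क्या कर सकते हैं?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```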