---
base_model:
- unsloth/Llama-3.2-1B-Instruct-bnb-4bit
pipeline_tag: text-generation
tags:
- nlp
- unsloth
- trl
- sft
language:
- id
---

# Llama-3.2-1B-Instruct-bnb-4bit (Indonesian Fine-tuned)

## Overview

This model is a fine-tune of [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit), a 4-bit quantized build of Meta's **Llama-3.2-1B-Instruct**. It was trained on the **Stanford Alpaca instruction dataset** translated into **Indonesian**, which improves its ability to understand and generate Indonesian text and makes it well suited to Indonesian instruction-following applications.

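The Alpaca dataset pairs each instruction (optionally with an input) with a response, and examples are typically rendered into a fixed prompt template before training. A minimal sketch of the standard Alpaca template with an Indonesian instruction is shown below; whether the template text itself was translated during dataset preparation is an assumption not stated in this card.

```python
# Standard Alpaca prompt template (instruction-only variant); the English
# framing text is the original Alpaca wording -- the translated dataset may
# use an Indonesian version instead.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Sebutkan tiga pulau terbesar di Indonesia."
)
print(prompt)
```

The model's response is whatever the model generates after the `### Response:` marker.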
## Features

- **Base Model**: [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
- **Pipeline Tag**: `text-generation`
- **Tags**:
  - `nlp`
  - `unsloth`
  - `trl`
  - `sft`
- **Language**: `id` (Indonesian)

## Model Details

- Trained on the **Stanford Alpaca instruction dataset** translated into Indonesian.
- Supports a wide range of text-generation and instruction-following tasks.
- Uses 4-bit quantization for efficient inference and deployment.
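A 4-bit bitsandbytes model like this one can be loaded with the standard `transformers` API. The sketch below is illustrative, not this card's official usage snippet: it loads the *base* repo id (substitute this fine-tuned model's actual Hub id), assumes a CUDA-capable machine with `bitsandbytes` installed, and uses the Llama 3.2 chat template to send an Indonesian instruction.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Base repo id shown for illustration -- replace with this model's Hub id.
model_id = "unsloth/Llama-3.2-1B-Instruct-bnb-4bit"

# Request 4-bit loading; compute dtype is an illustrative choice.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Llama 3.2 Instruct models use a chat template; apply it to an
# Indonesian instruction ("Explain what artificial intelligence is.").
messages = [
    {"role": "user", "content": "Jelaskan apa itu kecerdasan buatan."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```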