---
base_model: unsloth/qwen2.5-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
pipeline_tag: translation
---

# Uploaded finetuned model

- **Developed by:** hmuegyi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-bnb-4bit

### First, we need to install the Python libraries

```python
%%capture
import os, re

if "COLAB_" not in "".join(os.environ.keys()):
    !pip install unsloth
else:
    # Do this only in Colab notebooks! Otherwise use pip install unsloth
    import torch; v = re.match(r"[0-9]{1,}\.[0-9]{1,}", str(torch.__version__)).group(0)
    xformers = "xformers==" + ("0.0.33.post1" if v=="2.9" else "0.0.32.post2" if v=="2.8" else "0.0.29.post3")
    !pip install --no-deps bitsandbytes accelerate {xformers} peft trl triton cut_cross_entropy unsloth_zoo
    !pip install sentencepiece protobuf "datasets==4.3.0" "huggingface_hub>=0.34.0" hf_transfer
    !pip install --no-deps unsloth
    !pip install transformers==4.56.2
    !pip install --no-deps trl==0.22.2
```
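
The inference snippet below sends inputs to `"cuda"`, so a GPU runtime is required. If you want to confirm one is attached first, a quick optional check:

```python
import torch

# Standard PyTorch calls; this only verifies that a CUDA GPU is visible.
print(torch.cuda.is_available())        # should print True
print(torch.cuda.get_device_name(0))    # e.g. "Tesla T4" on a free Colab runtime
```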

### Then, you can test the model with this code

```python
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "hmuegyi/Qwen2.5-7B-bnb-en-my-alt",
    max_seq_length = 2048,
    load_in_4bit = True,   # to save memory
)
FastLanguageModel.for_inference(model)

alpaca_prompt = """### Instruction:
You are a professional English-Burmese translator.
Detect the input language and provide the translation in the opposite language.

### Input:
{}

### Response:
{}"""

input_text = "I love Myanmar Country."   # you can change the input text

inputs = tokenizer(
    [
        alpaca_prompt.format(
            input_text,   # Input
            "",           # Response (left empty for the model to fill in)
        )
    ], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs,
                         max_new_tokens = 128,
                         temperature = 0.1,
                         top_p = 0.5,
                         use_cache = True)

response = tokenizer.batch_decode(outputs)

final_output = response[0].split("### Response:")[1].replace(tokenizer.eos_token, "").strip()
print(f"Input: {input_text}")
print(f"Translation: {final_output}")
```
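
Because the prompt asks the model to detect the input language and translate into the opposite one, the same code path should also handle Burmese input. Below is a minimal sketch that wraps the steps above into a reusable helper; the `translate` function and the Burmese sample sentence are illustrative only, not part of the released model.

```python
# Hypothetical convenience wrapper; it reuses `model`, `tokenizer`, and
# `alpaca_prompt` defined in the snippet above.
def translate(text: str) -> str:
    prompt = alpaca_prompt.format(text, "")
    batch = tokenizer([prompt], return_tensors="pt").to("cuda")
    out = model.generate(**batch, max_new_tokens=128,
                         temperature=0.1, top_p=0.5, use_cache=True)
    decoded = tokenizer.batch_decode(out)[0]
    return decoded.split("### Response:")[1].replace(tokenizer.eos_token, "").strip()

print(translate("I love Myanmar Country."))   # English -> Burmese
print(translate("မင်္ဂလာပါ"))                   # Burmese -> English ("hello")
```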

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)