Initialize the project; model provided by the ModelHub XC community

Model: URajinda/ShweYon-V3-Base
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-20 00:40:07 +08:00
commit 5d6adf70ce
12 changed files with 233594 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
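The rules above are standard Git LFS routing patterns. As a rough illustration (not Git's exact glob semantics; patterns without a slash are matched against the basename here), a small Python sketch can show which files in this commit would be stored via LFS:

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes above
LFS_PATTERNS = ["*.bin", "*.safetensors", "*.onnx", "*.zip",
                "*tfevents*", "tokenizer.json"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any LFS pattern.

    Bare globs (no slash) in .gitattributes match the basename at any
    depth; fnmatch approximates that here.
    """
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) for p in LFS_PATTERNS)

for f in ["model.safetensors", "tokenizer.json",
          "config.json", "logs/run1.tfevents.0"]:
    print(f, is_lfs_tracked(f))
```

This matches the commit contents: `model.safetensors` and `tokenizer.json` are LFS pointers, while `config.json` is stored as plain text.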

53
README.md Normal file

@@ -0,0 +1,53 @@
---
language:
- my
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- text-generation
- myanmar
- shweyon
- base-model
- custom-tokenizer
library_name: transformers
---
# 🐰 ShweYon-V3-Base (ရွှေယုန်-V3)
**ShweYon-V3-Base** is a **base model** built on Qwen 2.5 1.5B and specialized for the Myanmar language. In this version, unlike previous releases, no separate tokenizer is required: the Myanmar tokens are merged directly into the model's embeddings.
ShweYon-V3-Base is a Myanmar-centric base language model built on top of the Qwen 2.5 1.5B architecture. This model is a milestone in the "ShweYon" project, focusing on improving the efficiency of Myanmar script processing through a custom tokenizer.
## 🎯 Purpose
This model is intended as a **foundation base model** for the Myanmar language. It is designed as a solid starting point for further fine-tuning (SFT/RLHF) toward chatbots, question-answering systems, and other downstream NLP tasks.
## ✨ Technical Highlights
* **Integrated Tokenizer:** A custom tokenizer covering Myanmar particles and more than ၉, words is bundled directly into the model.
* **Extended Vocabulary:** The vocabulary size has been extended to `160,746`, so Myanmar text is encoded more compactly and processed faster.
* **Base Training:** The model was further trained on a large corpus of Myanmar-language books to strengthen its foundational knowledge of written Myanmar.
## 🚀 Quick Start
You can load and use this base model as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "URajinda/ShweYon-V3-Base"

# The custom tokenizer ships with the model; no separate download needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Try it out
prompt = "မြန်မာနိုင်ငံ၏ သမိုင်းကြောင်းမှာ"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## ⚠️ Note
Because this is a base model only, it still needs chat fine-tuning before it can follow instructions in conversation.
## ⚖️ License
Apache License 2.0

9105
added_tokens.json Normal file

File diff suppressed because it is too large

54
chat_template.jinja Normal file

@@ -0,0 +1,54 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
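In practice this Jinja file is rendered by `tokenizer.apply_chat_template`. For illustration only, the no-tools branch above can be approximated with a hand-rolled Python sketch (this is not the template engine, just a readable restatement of the ChatML layout it produces):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Approximate the no-tools branch of the chat template above."""
    out = ""
    # The template injects a default system prompt when none is supplied
    if not messages or messages[0]["role"] != "system":
        out += "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # With add_generation_prompt, an open assistant turn is appended
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

print(render_chatml([{"role": "user", "content": "Hello"}]))
```

The tool-calling branches (`<tool_call>`/`<tool_response>` wrapping) are omitted here; the real template handles those as shown above.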

59
config.json Normal file

@@ -0,0 +1,59 @@
{
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"dtype": "bfloat16",
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 1536,
"initializer_range": 0.02,
"intermediate_size": 8960,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 131072,
"max_window_layers": 28,
"model_type": "qwen2",
"num_attention_heads": 12,
"num_hidden_layers": 28,
"num_key_value_heads": 2,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "4.57.3",
"use_cache": true,
"use_mrope": false,
"use_sliding_window": false,
"vocab_size": 160746
}
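A few shapes follow directly from this config. The arithmetic below (plain Python, with values copied from the file) shows the grouped-query-attention layout and the embedding size implied by the extended vocabulary:

```python
# Values copied from config.json above
cfg = {"hidden_size": 1536, "num_attention_heads": 12,
       "num_key_value_heads": 2, "num_hidden_layers": 28,
       "vocab_size": 160746}

# Per-head dimension of the attention projections
head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]          # 128
# Grouped-query attention: each KV head serves this many query heads
gqa_group = cfg["num_attention_heads"] // cfg["num_key_value_heads"]  # 6
# Embedding matrix size implied by the extended vocabulary
embed_params = cfg["vocab_size"] * cfg["hidden_size"]                # 246,905,856

print(head_dim, gqa_group, embed_params)
```

Since `tie_word_embeddings` is `true`, that roughly 247M-parameter matrix serves as both the input embedding and the LM head, which is where most of the parameter growth from the vocabulary extension lives.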

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"bos_token_id": 151643,
"eos_token_id": 151643,
"max_new_tokens": 2048,
"transformers_version": "4.57.3"
}

151388
merges.txt Normal file

File diff suppressed because it is too large

3
model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a809ba3400952699da09b8cc58b3b63507f1c564f1a3b8a107a662096d0dae6d
size 3114531472

31
special_tokens_map.json Normal file

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5393e7a8df63b7219b074bb33482179d7646fc05e30d16518ed5c68b228277dc
size 13114316

72855
tokenizer_config.json Normal file

File diff suppressed because it is too large

1
vocab.json Normal file

File diff suppressed because one or more lines are too long