Initialize project; model provided by the ModelHub XC community

Model: kth8/gemma-3-270m-it-Email-Generator
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-05 09:46:29 +08:00
commit c8573d35cf
14 changed files with 53384 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

96
README.md Normal file

@@ -0,0 +1,96 @@
---
license: gemma
language:
- en
base_model: unsloth/gemma-3-270m-it
datasets:
- Kamisori-daijin/email-datasets-20k
pipeline_tag: text-generation
library_name: transformers
tags:
- sft
- trl
- unsloth
- google
- gemma
- gemma3
- gemma3_text
---
![logo](https://storage.googleapis.com/gweb-developer-goog-blog-assets/images/gemma-3_2.original.png)
A fine-tune of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it) on the [Kamisori-daijin/email-datasets-20k](https://huggingface.co/datasets/Kamisori-daijin/email-datasets-20k) dataset.
## Usage example
**System prompt**
```
You are a helpful assistant.
```
**User prompt**
```
Write a direct and concise business email (under 200 words) from a Founder to an Angry Shareholder regarding notifying about a price change, specifically after winning a major award.
```
**Assistant response**
```
Subject: Regarding Pricing Update - [Company Name]
Dear [Shareholder Name],
I'm writing to address your concerns regarding our recent pricing update. You're right to raise questions, and I want to be transparent about the reasoning behind it.
We're incredibly proud to announce that [Company Name] recently won the [Award Name] award, a significant validation of our innovative product and market position. This achievement, coupled with increased customer demand, has necessitated a modest price adjustment to ensure we can continue investing in product development and deliver the exceptional service our customers expect.
We remain confident in the value we provide and believe this change will ultimately benefit all our stakeholders. We welcome the opportunity to discuss this further with you.
Sincerely,
[Name]
Founder, [Company Name]
```
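The prompts above can be reproduced in code. A minimal sketch using the `transformers` pipeline API (the `generate_email` helper name is ours; the first call downloads the ~536 MB checkpoint):

```python
from transformers import pipeline

def generate_email(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Generate an email with the fine-tuned model (downloads weights on first use)."""
    pipe = pipeline("text-generation", model="kth8/gemma-3-270m-it-Email-Generator")
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]
    # For chat-style input the pipeline returns the whole conversation;
    # the last message is the assistant's reply.
    result = pipe(messages, max_new_tokens=max_new_tokens, do_sample=True)
    return result[0]["generated_text"][-1]["content"]
```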
## Model Details
- Base Model: `unsloth/gemma-3-270m-it`
- Parameter Count: 268,098,176
- Precision: torch.bfloat16
## Hardware
- GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition
- Announced: Mar 17th, 2025
- Release Date: Mar 18th, 2025
- Memory Type: GDDR7
- Bandwidth: 1.79 TB/s
- Memory Size: 96 GB
- Memory Bus: 512 bit
- Shading Units: 24064
- TDP: 600W
## Training Settings
### PEFT
- Rank: 32
- LoRA alpha: 64
- Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Gradient checkpointing: unsloth
### SFT
- Epoch: 4
- Batch size: 32
- Gradient Accumulation steps: 1
- Warmup ratio: 0.05
- Learning rate: 0.0002
- Optimizer: adamw_torch_fused
- Learning rate scheduler: cosine
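As a rough illustration, the PEFT settings above map onto a `peft.LoraConfig` like the following (a sketch, not the exact training script; field names follow the `peft` library):

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the settings above.
lora_config = LoraConfig(
    r=32,           # Rank
    lora_alpha=64,  # LoRA alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```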
## Training stats
- Date: 2026-03-25T12:51:43.831886
- Peak VRAM usage: 16.834 GB
- Global step: 2360
- Training runtime (seconds): 470.094
- Average training loss: 1.2040837437419567
- Final validation loss: 1.2054944038391113
## Framework versions
- Unsloth: 2026.3.11
- TRL: 0.22.2
- Transformers: 4.56.2
- Pytorch: 2.10.0+cu128
- Datasets: 4.8.4
- Tokenizers: 0.22.2
## License
This model is released under the Gemma license. See the [Gemma Terms of Use](https://ai.google.dev/gemma/terms) and [Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy) regarding the use of Gemma-generated content.

3
added_tokens.json Normal file

@@ -0,0 +1,3 @@
{
"<image_soft_token>": 262144
}

50
chat_template.jinja Normal file

@@ -0,0 +1,50 @@
{# Unsloth Chat template fixes #}
{{ bos_token }}
{%- if messages[0]['role'] == 'system' -%}
{%- if messages[0]['content'] is string -%}
{%- set first_user_prefix = messages[0]['content'] + '
' -%}
{%- else -%}
{%- set first_user_prefix = messages[0]['content'][0]['text'] + '
' -%}
{%- endif -%}
{%- set loop_messages = messages[1:] -%}
{%- else -%}
{%- set first_user_prefix = "" -%}
{%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
{%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
{{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
{%- endif -%}
{%- if (message['role'] == 'assistant') -%}
{%- set role = "model" -%}
{%- else -%}
{%- set role = message['role'] -%}
{%- endif -%}
{{ '<start_of_turn>' + role + '
' + (first_user_prefix if loop.first else "") }}
{%- if message['content'] is string -%}
{{ message['content'] | trim }}
{%- elif message['content'] is iterable -%}
{%- for item in message['content'] -%}
{%- if item['type'] == 'image' -%}
{{ '<start_of_image>' }}
{%- elif item['type'] == 'text' -%}
{{ item['text'] | trim }}
{%- endif -%}
{%- endfor -%}
{%- elif message['content'] is defined -%}
{{ raise_exception("Invalid content type") }}
{%- endif -%}
{{ '<end_of_turn>
' }}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{'<start_of_turn>model
'}}
{%- endif -%}
{# Copyright 2025-present Unsloth. Apache 2.0 License. #}
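For plain-string message contents, the template above boils down to a few string operations. A pure-Python re-implementation (our own sketch, which skips the role-alternation check; useful for spot-checking what the tokenizer's `apply_chat_template` will produce):

```python
def render_chat(messages, add_generation_prompt=False, bos_token="<bos>"):
    """Mirror the Jinja template above for string-only message contents."""
    out = bos_token
    prefix = ""
    if messages and messages[0]["role"] == "system":
        # The system message is folded into the first user turn.
        prefix = messages[0]["content"] + "\n"
        messages = messages[1:]
    for i, msg in enumerate(messages):
        role = "model" if msg["role"] == "assistant" else msg["role"]
        out += "<start_of_turn>" + role + "\n"
        if i == 0:
            out += prefix
        out += msg["content"].strip() + "<end_of_turn>\n"
    if add_generation_prompt:
        out += "<start_of_turn>model\n"
    return out
```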

55
config.json Normal file

@@ -0,0 +1,55 @@
{
"_sliding_window_pattern": 6,
"architectures": [
"Gemma3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": null,
"bos_token_id": 2,
"dtype": "bfloat16",
"eos_token_id": 106,
"final_logit_softcapping": null,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 640,
"initializer_range": 0.02,
"intermediate_size": 2048,
"layer_types": [
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 32768,
"model_type": "gemma3_text",
"num_attention_heads": 4,
"num_hidden_layers": 18,
"num_key_value_heads": 1,
"pad_token_id": 0,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_local_base_freq": 10000.0,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 512,
"transformers_version": "4.56.2",
"unsloth_fixed": true,
"use_bidirectional_attention": false,
"use_cache": true,
"vocab_size": 262144
}
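The `layer_types` list above follows directly from `"_sliding_window_pattern": 6`: every sixth layer uses full attention, and the rest use sliding-window attention. A small sketch of that rule (helper name is ours):

```python
def layer_types(num_layers: int = 18, pattern: int = 6) -> list:
    """Reproduce the layer_types list: every `pattern`-th layer is full attention."""
    return [
        "full_attention" if (i + 1) % pattern == 0 else "sliding_attention"
        for i in range(num_layers)
    ]
```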

14
generation_config.json Normal file

@@ -0,0 +1,14 @@
{
"bos_token_id": 2,
"cache_implementation": "hybrid",
"do_sample": true,
"eos_token_id": [
1,
106
],
"max_length": 32768,
"pad_token_id": 0,
"top_k": 64,
"top_p": 0.95,
"transformers_version": "4.56.2"
}
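The sampling defaults above (`"top_k": 64`, `"top_p": 0.95`) compose as follows: keep at most the 64 most probable tokens, then further restrict to the smallest set whose cumulative probability reaches 0.95. A toy sketch of that filtering step (pure Python; the helper name is ours, not the transformers implementation):

```python
def filter_top_k_top_p(probs, top_k=64, top_p=0.95):
    """Return the (index, prob) pairs that survive top-k then top-p filtering."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cumulative += p
        if cumulative >= top_p:  # smallest prefix reaching the nucleus mass
            break
    return kept
```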

3
model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18bf28e6c214719f3cb7a8d232600cfebb4e4c2febc9b91adf59ae2eacdf76b8
size 536223056

33
special_tokens_map.json Normal file

@@ -0,0 +1,33 @@
{
"boi_token": "<start_of_image>",
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eoi_token": "<end_of_image>",
"eos_token": {
"content": "<end_of_turn>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"image_token": "<image_soft_token>",
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074

51345
tokenizer_config.json Normal file

File diff suppressed because it is too large

1743
train/log.json Normal file

File diff suppressed because it is too large

BIN
train/training_loss.png Normal file

Binary file not shown.

Size: 64 KiB

BIN
train/validation_loss.png Normal file

Binary file not shown.

Size: 42 KiB