---
license: gemma
language:
- en
base_model: unsloth/gemma-3-270m-it
datasets:
- Kamisori-daijin/email-datasets-20k
pipeline_tag: text-generation
library_name: transformers
tags:
- sft
- trl
- unsloth
- google
- gemma
- gemma3
- gemma3_text
---
A fine-tune of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it) on the [Kamisori-daijin/email-datasets-20k](https://huggingface.co/datasets/Kamisori-daijin/email-datasets-20k) dataset.

## Usage example
**System prompt**
```
You are a helpful assistant.
```
**User prompt**
```
Write a direct and concise business email(under 200 words) from a Founder to a Angry Shareholder regarding notifying about a price change, specifically after winning a major award.
```
**Assistant response**
```
Subject: Regarding Pricing Update - [Company Name]

Dear [Shareholder Name],

I'm writing to address your concerns regarding our recent pricing update. You’re right to raise questions, and I want to be transparent about the reasoning behind it.

We’re incredibly proud to announce that [Company Name] recently won the [Award Name] award – a significant validation of our innovative product and market position. This achievement, coupled with increased customer demand, has necessitated a modest price adjustment to ensure we can continue investing in product development and deliver the exceptional service our customers expect.

We remain confident in the value we provide and believe this change will ultimately benefit all our stakeholders. We welcome the opportunity to discuss this further with you.

Sincerely,
[Name]
Founder, [Company Name]
```
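For reference, the sketch below shows one way to run this prompt with Transformers. It is not the script used to produce the example above: the repository id is a placeholder for this model's Hub id, and the generation settings are illustrative.

```python
# Minimal inference sketch. The repo id below is a PLACEHOLDER for this model's Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/gemma-3-270m-it-email-sft"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": (
        "Write a direct and concise business email(under 200 words) from a Founder "
        "to a Angry Shareholder regarding notifying about a price change, "
        "specifically after winning a major award."
    )},
]

# Gemma 3's chat template folds the system turn into the first user turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=300)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
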
## Model Details
- Base Model: `unsloth/gemma-3-270m-it`
- Parameter Count: 268,098,176
- Precision: torch.bfloat16

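The figures above can be double-checked with a quick sketch like the following (the repo id is again a placeholder):

```python
# Hypothetical sanity check of the figures above; replace the placeholder repo id.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-username/gemma-3-270m-it-email-sft",  # placeholder
    torch_dtype=torch.bfloat16,
)
print(f"{sum(p.numel() for p in model.parameters()):,}")  # 268,098,176
print(next(model.parameters()).dtype)                     # torch.bfloat16
```
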
## Hardware
- GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition
- Announced: Mar 17th, 2025
- Release Date: Mar 18th, 2025
- Memory Type: GDDR7
- Bandwidth: 1.79 TB/s
- Memory Size: 96 GB
- Memory Bus: 512 bit
- Shading Units: 24064
- TDP: 600W

## Training Settings
### PEFT
- Rank: 32
- LoRA alpha: 64
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Gradient checkpointing: unsloth

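These adapter settings map onto Unsloth roughly as sketched below. This is not the exact training script: the `FastLanguageModel` entry point, sequence length, and quantization choices are assumptions.

```python
# Rough reconstruction of the adapter setup from the settings above (assumptions noted inline).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,   # assumption; not stated in this card
    load_in_4bit=False,    # assumption; the card reports bfloat16 precision
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing="unsloth",
)
```
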
### SFT
- Epochs: 4
- Batch size: 32
- Gradient accumulation steps: 1
- Warmup ratio: 0.05
- Learning rate: 0.0002
- Optimizer: adamw_torch_fused
- Learning rate scheduler: cosine

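In TRL terms, these hyperparameters correspond roughly to the configuration sketched below, continuing from the PEFT sketch above. The output directory, evaluation schedule, validation split, and dataset formatting are assumptions; the card only reports that a validation loss was tracked.

```python
# Rough reconstruction of the SFT run from the settings above (assumptions noted inline).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumes the dataset is already in a chat format that SFTTrainer understands.
dataset = load_dataset("Kamisori-daijin/email-datasets-20k", split="train")
split = dataset.train_test_split(test_size=0.05, seed=0)  # split size/seed are assumptions

training_args = SFTConfig(
    output_dir="outputs",               # assumption
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=1,
    warmup_ratio=0.05,
    learning_rate=2e-4,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    bf16=True,                          # matches the reported torch.bfloat16 precision
    eval_strategy="epoch",              # assumption; evaluation schedule is not documented
)

trainer = SFTTrainer(
    model=model,                        # LoRA-wrapped model from the PEFT sketch above
    processing_class=tokenizer,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    args=training_args,
)
trainer.train()
```
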
## Training stats
- Date: 2026-03-25T12:51:43.831886
- Peak VRAM usage: 16.834 GB
- Global step: 2360
- Training runtime (seconds): 470.094
- Average training loss: 1.2040837437419567
- Final validation loss: 1.2054944038391113

## Framework versions
- Unsloth: 2026.3.11
- TRL: 0.22.2
- Transformers: 4.56.2
- Pytorch: 2.10.0+cu128
- Datasets: 4.8.4
- Tokenizers: 0.22.2

## License
This model is released under the Gemma license. See the [Gemma Terms of Use](https://ai.google.dev/gemma/terms) and [Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy) regarding the use of Gemma-generated content.