Initialize the project; model provided by the ModelHub XC community
Model: canbingol/gemma3_1B_base-tr-cpt-1epoch_stage4 Source: Original Platform
36 .gitattributes vendored Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
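Note: the patterns above tell Git LFS which files to store as pointers rather than inline. As a rough illustration only (not part of this commit), the following Python sketch uses `fnmatch` to approximate which of the files added below would be routed through LFS; Git's actual pattern matching differs in edge cases.

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above.
lfs_patterns = ["*.safetensors", "*.bin", "*.pt", "tokenizer.json", "*tfevents*"]

# Files added in this commit (see the sections below).
added_files = [
    "README.md",
    "config.json",
    "generation_config.json",
    "model.safetensors",
    "tokenizer.json",
    "tokenizer_config.json",
]

for name in added_files:
    via_lfs = any(fnmatch(name, pattern) for pattern in lfs_patterns)
    print(f"{name}: {'LFS pointer' if via_lfs else 'stored inline'}")
```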
96 README.md Normal file
@@ -0,0 +1,96 @@
---
library_name: transformers
tags:
- trl
- cpt
datasets:
- canbingol/vngrs-web-corpus-200k
language:
- tr
- en
base_model:
- canbingol/gemma3_1B_base-tr-cpt-1epoch_stage3
new_version: canbingol/gemma3_1B_base-tr-cpt-2nd_epoch_stage1
---

# Model Card: Gemma3-1B Turkish CPT (150K–200K Subset, 1 Epoch – Stage 4)

## Overview

This model is the **Stage 4** Turkish Continued Pretraining (CPT) variant of Gemma-3-1B.

Unlike Stage 1, which was initialized from `google/gemma-3-1b-pt`, this model was initialized from:

- `canbingol/gemma3_1B_base-tr-cpt-1epoch_stage3`

Stage 4 continues domain adaptation by exposing the model to **new data** rather than repeating the same subset. The model was trained for **1 epoch** on samples **150,000 to 200,000** of the Turkish web corpus.

Importantly, this model is a direct continuation of Stage 3, so it has cumulatively been trained on samples **0–200,000** of the corpus (Stage 1: 0–50K, Stage 2: 50K–100K, Stage 3: 100K–150K, Stage 4: 150K–200K).

This stage corresponds to the **end of the 1-epoch pass over the full 200K-sample dataset** (i.e., completion of the first full epoch via sequential shards).

---

## Training Lineage

- Stage 0: `google/gemma-3-1b-pt`
- Stage 1: Samples 0–50,000 (1 epoch)
- Stage 2: Samples 50,000–100,000 (1 epoch)
- Stage 3: Samples 100,000–150,000 (1 epoch)
- Stage 4 (this release): Samples 150,000–200,000 (1 epoch, end of epoch 1)

Cumulative data exposure: **0–200,000 samples**

This represents **sequential CPT across disjoint data shards**.

---

## Training Setup

- Dataset: `canbingol/vngrs-web-corpus-200k`
- Subset Used: Samples 150,000–200,000
- Initialization: Stage 3 checkpoint
- Training Objective: Continued Pretraining
- Epochs: 1
- Data Regime: Plain text
- Token Count: **~21.6M tokens**
- Cumulative Token Exposure (Stages 1–4): **~86.1M tokens (approximate)**

Notes on cumulative exposure:
- Although Stage 4 trains only on the 150K–200K shard, it inherits all adaptations learned from the previous stages.
- After this stage, the model has effectively completed exposure to the entire 0–200K dataset range through sequential continuation.
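For orientation, here is a minimal sketch (not the author's actual training script) of how the 150K–200K shard could be selected and continued pretraining resumed from the Stage 3 checkpoint. It assumes TRL's `SFTTrainer` (the card is tagged `trl`/`cpt`) and that the corpus exposes a plain `text` column; all hyperparameters shown are placeholders.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Stage 4 resumes from the Stage 3 checkpoint rather than the original Gemma base.
stage3_ckpt = "canbingol/gemma3_1B_base-tr-cpt-1epoch_stage3"

# Select only the Stage 4 shard: samples 150,000-200,000 of the 200K-sample corpus.
corpus = load_dataset("canbingol/vngrs-web-corpus-200k", split="train")
stage4_shard = corpus.select(range(150_000, 200_000))

# Placeholder hyperparameters; the model card does not report the real ones.
args = SFTConfig(
    output_dir="gemma3_1B_base-tr-cpt-1epoch_stage4",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    dataset_text_field="text",  # assumes the corpus has a plain "text" column
    packing=True,
)

trainer = SFTTrainer(model=stage3_ckpt, train_dataset=stage4_shard, args=args)
trainer.train()
```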
---
## Usage Example

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "canbingol/gemma3_1B_base-tr-cpt-1epoch_stage4"

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = model.to(device)

prompt = "bundan böyle"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.9
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
72 config.json Normal file
@@ -0,0 +1,72 @@
{
  "_sliding_window_pattern": 6,
  "architectures": [
    "Gemma3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_logit_softcapping": null,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "dtype": "bfloat16",
  "eos_token_id": 1,
  "final_logit_softcapping": null,
  "head_dim": 256,
  "hidden_activation": "gelu_pytorch_tanh",
  "hidden_size": 1152,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "layer_types": [
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention"
  ],
  "max_position_embeddings": 32768,
  "model_type": "gemma3_text",
  "num_attention_heads": 4,
  "num_hidden_layers": 26,
  "num_key_value_heads": 1,
  "pad_token_id": 0,
  "query_pre_attn_scalar": 256,
  "rms_norm_eps": 1e-06,
  "rope_parameters": {
    "full_attention": {
      "rope_theta": 1000000,
      "rope_type": "default"
    },
    "sliding_attention": {
      "rope_theta": 10000,
      "rope_type": "default"
    }
  },
  "sliding_window": 512,
  "sliding_window_pattern": 6,
  "tie_word_embeddings": true,
  "transformers_version": "5.2.0",
  "use_bidirectional_attention": false,
  "use_cache": false,
  "vocab_size": 262144
}
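For a quick check (not part of the repository), the configuration above can be loaded with `transformers.AutoConfig` and the headline architecture numbers printed back:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("canbingol/gemma3_1B_base-tr-cpt-1epoch_stage4")

# These attributes should mirror the config.json shown above.
print(config.model_type)               # gemma3_text
print(config.hidden_size)              # 1152
print(config.num_hidden_layers)        # 26
print(config.num_attention_heads)      # 4
print(config.num_key_value_heads)      # 1
print(config.max_position_embeddings)  # 32768
print(config.vocab_size)               # 262144
```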
13 generation_config.json Normal file
@@ -0,0 +1,13 @@
{
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "do_sample": true,
  "eos_token_id": [
    1,
    106
  ],
  "pad_token_id": 0,
  "top_k": 64,
  "top_p": 0.95,
  "transformers_version": "5.2.0"
}
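Note (not from the original files): these are the sampling defaults that `model.generate()` picks up when no arguments are passed; the README example overrides `temperature` and `top_p` per call. A minimal way to inspect them:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("canbingol/gemma3_1B_base-tr-cpt-1epoch_stage4")

# Defaults applied by model.generate() unless overridden per call.
print(gen_config.do_sample)     # True
print(gen_config.top_k)         # 64
print(gen_config.top_p)         # 0.95
print(gen_config.eos_token_id)  # [1, 106]
```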
3 model.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7efbbb53e7375cc2a3ae8f85510c7b7b8b45c68603494b06cb313d7786b00663
size 1999811208
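Only this Git LFS pointer is stored in the repository; the ~2 GB weights file lives in LFS storage. Purely as an illustration (not part of the commit), a locally downloaded copy can be checked against the pointer's `oid` and `size` using the standard library; the local path below is a placeholder.

```python
import hashlib
import os

# Placeholder path to a locally downloaded model.safetensors.
path = "model.safetensors"

# Expected values taken from the LFS pointer above.
expected_oid = "7efbbb53e7375cc2a3ae8f85510c7b7b8b45c68603494b06cb313d7786b00663"
expected_size = 1999811208

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    # Hash the file in 1 MiB chunks to avoid loading 2 GB into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("size ok:", os.path.getsize(path) == expected_size)
print("oid ok:", sha256.hexdigest() == expected_oid)
```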
3 tokenizer.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a74aefb1dc1340a25f29ab8370384b9ed24b2d921d7749ece7bbcfcfdf00d497
size 33384443
23 tokenizer_config.json Normal file
@@ -0,0 +1,23 @@
{
  "backend": "tokenizers",
  "boi_token": "<start_of_image>",
  "bos_token": "<bos>",
  "clean_up_tokenization_spaces": false,
  "eoi_token": "<end_of_image>",
  "eos_token": "<eos>",
  "image_token": "<image_soft_token>",
  "is_local": false,
  "mask_token": "<mask>",
  "model_max_length": 1000000000000000019884624838656,
  "model_specific_special_tokens": {
    "boi_token": "<start_of_image>",
    "eoi_token": "<end_of_image>",
    "image_token": "<image_soft_token>"
  },
  "pad_token": "<pad>",
  "sp_model_kwargs": null,
  "spaces_between_special_tokens": false,
  "tokenizer_class": "GemmaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
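As an optional sanity check (not part of the original files), the special tokens declared above can be inspected after loading the tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("canbingol/gemma3_1B_base-tr-cpt-1epoch_stage4")

# Should mirror the tokenizer_config.json above.
print(tokenizer.bos_token)  # <bos>
print(tokenizer.eos_token)  # <eos>
print(tokenizer.pad_token)  # <pad>
print(tokenizer.unk_token)  # <unk>
print(len(tokenizer))       # vocabulary size as seen by the tokenizer
```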