Initialize project; model provided by the ModelHub XC community
Model: McGill-NLP/AfriqueGemma-12B · Source: Original Platform
.gitattributes (vendored; new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
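
The rules above route large artifacts (model shards, archives, tokenizer blobs) through Git LFS. A minimal sketch of how such patterns match paths, using Python's `fnmatch`, which only approximates Git's wildmatch but agrees with it for these simple suffix patterns (the pattern subset below is copied from the file):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns declared in .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.gz", "tokenizer.json"]

def tracked_by_lfs(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked pattern."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("model-00001-of-00006.safetensors"))  # True
print(tracked_by_lfs("config.json"))                       # False
```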
README.md (new file, 199 lines)
@@ -0,0 +1,199 @@
---
library_name: transformers
license: cc-by-4.0
base_model: google/gemma-3-12b-pt
language:
- af # Afrikaans
- am # Amharic
- ar # Arabic
- en # English
- fr # French
- ha # Hausa
- ig # Igbo
- mg # Malagasy (Plateau)
- ny # Nyanja
- om # Oromo
- pt # Portuguese
- rw # Kinyarwanda
- sn # Shona
- so # Somali
- st # Southern Sotho
- sw # Swahili
- ti # Tigrinya
- tn # Tswana
- xh # Xhosa
- yo # Yoruba
- zu # Zulu
pipeline_tag: text-generation
tags:
- african-languages
- multilingual
- continued-pretraining
- afrique-llm
- gemma
- llamafactory
---

# AfriqueGemma-12B

## Model Overview

**AfriqueGemma-12B** is part of the **AfriqueLLM** suite: a collection of open language models adapted to **20 African languages** through continued pre-training (CPT) on **25.2B tokens**. This model is based on [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt) and has been specifically adapted for improved performance on African languages while maintaining strong capabilities in high-resource languages.

### Key Features

- **Type**: Causal Language Model (Base/Pre-trained)
- **Base Model**: Gemma 3 12B PT
- **Parameters**: 12B
- **Context Length**: 8,192 tokens (native)
- **Training Tokens**: 25.2B tokens of carefully curated multilingual data

## Supported Languages

AfriqueGemma-12B has been adapted for the following 20 African languages, plus 4 high-resource languages:

| Language | Code | Family | Script |
|----------|------|--------|--------|
| Afrikaans | afr_Latn | Germanic | Latin |
| Swahili | swh_Latn | Bantu | Latin |
| Moroccan Arabic | ary_Arab | Semitic | Arabic |
| Somali | som_Latn | Cushitic | Latin |
| Amharic | amh_Ethi | Semitic | Ethiopic |
| Egyptian Arabic | arz_Arab | Semitic | Arabic |
| Hausa | hau_Latn | Chadic | Latin |
| Kinyarwanda | kin_Latn | Bantu | Latin |
| Zulu | zul_Latn | Bantu | Latin |
| Igbo | ibo_Latn | Volta-Niger | Latin |
| Plateau Malagasy | plt_Latn | Austronesian | Latin |
| Xhosa | xho_Latn | Bantu | Latin |
| Shona | sna_Latn | Bantu | Latin |
| Yoruba | yor_Latn | Volta-Niger | Latin |
| Nyanja | nya_Latn | Bantu | Latin |
| Southern Sotho | sot_Latn | Bantu | Latin |
| Tigrinya | tir_Ethi | Semitic | Ethiopic |
| Tunisian Arabic | aeb_Arab | Semitic | Arabic |
| Oromo | gaz_Latn | Cushitic | Latin |
| Tswana | tsn_Latn | Bantu | Latin |

**High-resource languages (for catastrophic forgetting mitigation):** English, French, Portuguese, Arabic

## Training Data

Our training corpus combines multiple high-quality sources:

- **African Monolingual Data** (~22.8B tokens): FineWeb2, WURA, and MADLAD-400
- **Code** (~1B tokens): CornStack-Python for reasoning capabilities
- **Mathematics** (~1B tokens): FineMath-4+ for mathematical understanding
- **Synthetic Data** (~324M tokens): GPT-4.1-translated domain-specific content across 10 domains

We use **UniMax sampling** to create a balanced distribution, capping high-resource languages at approximately 1B tokens and upsampling lower-resource languages for up to five epochs.
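
The effect of this sampling scheme can be sketched as follows. This is a simplified reimplementation of the UniMax idea (split the budget as evenly as possible, with each language capped at `max_epochs` passes over its own corpus), not the project's actual code; the language names and token counts are illustrative only:

```python
def unimax_budgets(corpus_tokens, total_budget, max_epochs=5.0):
    """Split a token budget as uniformly as possible across languages,
    capping each language at max_epochs passes over its own corpus."""
    remaining = dict(corpus_tokens)   # languages still competing for budget
    budgets = {}
    budget_left = float(total_budget)
    while remaining:
        share = budget_left / len(remaining)
        # Languages whose cap falls below the uniform share take their cap
        # and drop out; the leftover budget is redistributed next round.
        capped = {lang: n for lang, n in remaining.items() if n * max_epochs <= share}
        if not capped:
            for lang in remaining:
                budgets[lang] = share
            break
        for lang, n in capped.items():
            budgets[lang] = n * max_epochs
            budget_left -= n * max_epochs
            del remaining[lang]
    return budgets

# Illustrative numbers (billions of tokens): one high-resource, two low-resource.
print(unimax_budgets({"eng": 1000.0, "yor": 0.1, "hau": 0.2}, total_budget=2.0))
```

Low-resource languages hit the five-epoch cap (upsampling), while high-resource languages end up clipped at the uniform share, which is the role of the ~1B-token cap above.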

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "McGill-NLP/AfriqueGemma-12B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Bawo ni o ṣe n ṣe?"  # Yoruba: "How are you doing?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate text
generated_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9
)
output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(output)
```

## Deployment

For deployment, you can use `vllm` or `sglang` to create an OpenAI-compatible API endpoint:

**vLLM:**
```shell
vllm serve McGill-NLP/AfriqueGemma-12B
```

**SGLang:**
```shell
python -m sglang.launch_server --model-path McGill-NLP/AfriqueGemma-12B
```
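
Once a server is up, it can be queried like any OpenAI-compatible endpoint. A hedged sketch that only builds the request body (the address `http://localhost:8000` in the comment is vLLM's default and an assumption here; since this is a base model, the `/v1/completions` endpoint is the natural fit rather than the chat endpoint):

```python
import json

def completion_request(prompt: str, max_tokens: int = 100) -> dict:
    """Build an OpenAI-compatible /v1/completions payload for the served model."""
    return {
        "model": "McGill-NLP/AfriqueGemma-12B",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
        "top_p": 0.9,
    }

payload = completion_request("Bawo ni o ṣe n ṣe?")
print(json.dumps(payload, ensure_ascii=False, indent=2))
# To send it (assuming vLLM's default address):
#   requests.post("http://localhost:8000/v1/completions", json=payload)
```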

## Training Details

### Hyperparameters

- **Learning Rate**: 5e-5 (with warmup and cosine decay)
- **Context Length**: 16,384 tokens
- **Optimizer**: AdamW
- **Precision**: BF16 mixed precision
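
The schedule above (linear warmup to the 5e-5 peak, then cosine decay) can be sketched as follows; the warmup length and the decay floor of 0 are illustrative assumptions, not values reported here:

```python
import math

def lr_at(step: int, total_steps: int, peak_lr: float = 5e-5, warmup_steps: int = 100) -> float:
    """Linear warmup to peak_lr, then cosine decay from peak_lr to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(50, 1000))    # halfway through warmup: half the peak LR
print(lr_at(100, 1000))   # end of warmup: the 5e-5 peak
print(lr_at(1000, 1000))  # end of training: decayed to ~0
```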

### Infrastructure

Training was conducted using the LLaMA-Factory framework on up to 64 NVIDIA H100 GPUs with:
- DeepSpeed ZeRO-1/ZeRO-2
- Flash Attention 3
- Sequence packing
- Liger Kernel optimizations

## Evaluation

All AfriqueLLM models are evaluated on multiple multilingual benchmarks:

| Model | AfriMGSM | AfriMMLU | AfriXNLI | Belebele | FLORES | INJONG | SIB-200 | Overall | Δ |
|-------|----------|----------|----------|----------|--------|--------|---------|---------|---|
| [Gemma3-4B](https://huggingface.co/google/gemma-3-4b-pt) | 10.24 | 33.89 | 37.76 | 45.79 | 29.50 | 55.52 | 63.59 | 39.47 | |
| [AfriqueGemma-4B](https://huggingface.co/McGill-NLP/AfriqueGemma-4B) | 14.86 | 36.73 | 39.62 | 50.52 | 57.31 | 69.28 | 69.21 | 48.22 | +8.7 (22.2%) |
| [Gemma3-12B](https://huggingface.co/google/gemma-3-12b-pt) | 25.21 | 48.76 | 44.01 | 68.84 | 40.16 | 73.53 | 79.17 | 54.24 | |
| <u>[AfriqueGemma-12B](https://huggingface.co/McGill-NLP/AfriqueGemma-12B)</u> | <u>32.14</u> | <u>49.47</u> | <u>44.60</u> | <u>68.65</u> | <u>**66.89**</u> | <u>76.79</u> | <u>75.08</u> | <u>59.09</u> | <u>+4.8 (8.9%)</u> |
| [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B-Base) | 11.22 | 36.56 | 38.24 | 44.63 | 18.93 | 29.47 | 53.06 | 33.16 | |
| [AfriqueQwen-8B](https://huggingface.co/McGill-NLP/AfriqueQwen-8B) | 39.68 | 46.91 | 45.99 | 68.46 | 63.54 | 73.36 | 77.00 | 59.28 | +26.1 (78.8%) |
| [Qwen3-14B-Base](https://huggingface.co/Qwen/Qwen3-14B-Base) | 16.60 | 39.66 | 43.22 | 50.74 | 20.86 | 41.80 | 66.29 | 39.88 | |
| [AfriqueQwen-14B](https://huggingface.co/McGill-NLP/AfriqueQwen-14B) | **45.01** | **52.22** | **49.01** | **74.63** | 65.26 | **77.80** | **82.63** | **63.79** | +23.9 (60.0%) |
| [Llama3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | 8.14 | 32.27 | 37.90 | 40.95 | 23.59 | 41.37 | 59.99 | 34.89 | |
| [AfriqueLlama-8B](https://huggingface.co/McGill-NLP/AfriqueLlama-8B) | 17.51 | 36.57 | 37.39 | 50.51 | 64.88 | 71.17 | 69.14 | 49.60 | +14.7 (42.2%) |
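
The Δ column is each adapted model's Overall score minus its base model's, with the relative gain in parentheses. A quick check of the arithmetic for the 12B pair, which matches the reported +4.8 (8.9%) up to rounding:

```python
def delta(base_overall: float, adapted_overall: float) -> tuple[float, float]:
    """Absolute and relative (%) Overall gain of an adapted model over its base."""
    absolute = adapted_overall - base_overall
    relative = 100.0 * absolute / base_overall
    return absolute, relative

# Gemma3-12B (54.24) -> AfriqueGemma-12B (59.09), reported as +4.8 (8.9%).
absolute, relative = delta(54.24, 59.09)
print(f"+{absolute:.2f} ({relative:.2f}%)")
```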

## Model Variants

- [AfriqueGemma-4B](https://huggingface.co/McGill-NLP/AfriqueGemma-4B) - Smaller 4B variant
- [AfriqueQwen-8B](https://huggingface.co/McGill-NLP/AfriqueQwen-8B) - Qwen-based 8B model
- [AfriqueQwen-14B](https://huggingface.co/McGill-NLP/AfriqueQwen-14B) - Qwen-based 14B model (flagship)
- [AfriqueLlama-8B](https://huggingface.co/McGill-NLP/AfriqueLlama-8B) - Llama-based 8B model

## Citation

If you find our work helpful, please cite:

```bibtex
@misc{yu2026afriquellmdatamixingmodel,
  title={AfriqueLLM: How Data Mixing and Model Architecture Impact Continued Pre-training for African Languages},
  author={Hao Yu and Tianyi Xu and Michael A. Hedderich and Wassim Hamidouche and Syed Waqas Zamir and David Ifeoluwa Adelani},
  year={2026},
  eprint={2601.06395},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.06395},
}
```

## License

This model is released under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/). Please review the license terms before use.

## Acknowledgments

We thank the creators of the base models, datasets, and compute resources that made this work possible, including Mila, Compute Canada, Microsoft, the FineWeb team, WURA, and MADLAD-400, among others.
added_tokens.json (new file, 3 lines)
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
chat_template.jinja (new file, 1 line)
@@ -0,0 +1 @@
{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ content }}{% elif message['role'] == 'assistant' %}{{ content }}{% endif %}{% endfor %}
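
The template above simply hoists an optional system message to the front and then concatenates user/assistant contents with no role markers or separators, which is consistent with a base (not instruction-tuned) model. A plain-Python rendering of the same logic (an illustrative reimplementation; `transformers` actually renders the Jinja template itself):

```python
def render_chat(messages: list[dict]) -> str:
    """Mirror chat_template.jinja: emit the leading system message (if any),
    then the raw contents of user/assistant turns, with no separators."""
    parts = []
    if messages and messages[0]["role"] == "system":
        parts.append(messages[0]["content"])
        messages = messages[1:]
    for message in messages:
        if message["role"] in ("user", "assistant"):
            parts.append(message["content"])
    return "".join(parts)

print(render_chat([
    {"role": "system", "content": "Jibu kwa Kiswahili. "},
    {"role": "user", "content": "Habari yako?"},
]))  # Jibu kwa Kiswahili. Habari yako?
```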
config.json (new file, 109 lines)
@@ -0,0 +1,109 @@
{
  "architectures": [
    "Gemma3ForConditionalGeneration"
  ],
  "boi_token_index": 255999,
  "eoi_token_index": 256000,
  "hidden_size": 3840,
  "image_token_index": 262144,
  "initializer_range": 0.02,
  "mm_tokens_per_image": 256,
  "model_type": "gemma3",
  "text_config": {
    "_sliding_window_pattern": 6,
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attn_logit_softcapping": null,
    "final_logit_softcapping": null,
    "head_dim": 256,
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 3840,
    "initializer_range": 0.02,
    "intermediate_size": 15360,
    "layer_types": [
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention"
    ],
    "max_position_embeddings": 131072,
    "model_type": "gemma3_text",
    "num_attention_heads": 16,
    "num_hidden_layers": 48,
    "num_key_value_heads": 8,
    "query_pre_attn_scalar": 256,
    "rms_norm_eps": 1e-06,
    "rope_local_base_freq": 10000.0,
    "rope_scaling": {
      "factor": 8.0,
      "rope_type": "linear"
    },
    "rope_theta": 1000000.0,
    "sliding_window": 1024,
    "torch_dtype": "bfloat16",
    "use_cache": false,
    "vocab_size": 262208
  },
  "torch_dtype": "bfloat16",
  "transformers_version": "4.55.4",
  "use_cache": false,
  "vision_config": {
    "attention_dropout": 0.0,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 896,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 27,
    "patch_size": 14,
    "torch_dtype": "bfloat16",
    "vision_use_head": false
  }
}
generation_config.json (new file, 14 lines)
@@ -0,0 +1,14 @@
{
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "do_sample": true,
  "eos_token_id": [
    1,
    106
  ],
  "pad_token_id": 0,
  "temperature": 0.8,
  "top_k": 64,
  "top_p": 0.95,
  "transformers_version": "4.55.4"
}
language_group_scores.pdf (new binary file; not shown)
model-00001-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe811a5f3ee2990671f20098d42268c30c097c860beeeb4fba2f2d74cdc31ab4
size 4979902192

model-00002-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93d0a0286ca9f3edb66fbf633f159687e62b8c4ed6ee62e1d2ec17758f5bc3c4
size 4931296592

model-00003-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:343ef6169aa5f91686eb7cd2274361af509343112b05d3bc6fec0806f6f5a142
size 4931296656

model-00004-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc58f6cca09723c3183f0124da175e0e7155e989db7f5fbee20d0c64710bf264
size 4931296656

model-00005-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5bcde71bd9ee041bbb2d69c6b33e64f6ef6be5760b07c783ae20cbe54894800d
size 4601000928

model-00006-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89c53cd81ca824c96addad6f6792f45d23370f7e87a2a0ccc52a0afe7c754abb
size 2013757584
model.safetensors.index.json (new file, 1074 lines; diff suppressed because it is too large)
preprocessor_config.json (new file, 37 lines)
@@ -0,0 +1,37 @@
{
  "crop_size": null,
  "data_format": "channels_first",
  "default_to_square": true,
  "device": null,
  "disable_grouping": null,
  "do_center_crop": null,
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_pan_and_scan": null,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "Gemma3ImageProcessorFast",
  "image_seq_length": 256,
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "input_data_format": null,
  "pan_and_scan_max_num_crops": null,
  "pan_and_scan_min_crop_size": null,
  "pan_and_scan_min_ratio_to_activate": null,
  "processor_class": "Gemma3Processor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "return_tensors": null,
  "size": {
    "height": 896,
    "width": 896
  }
}
special_tokens_map.json (new file, 33 lines)
@@ -0,0 +1,33 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568

tokenizer.model (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
tokenizer_config.json (new file, 51347 lines; diff suppressed because it is too large)