Initialize project; model provided by the ModelHub XC community

Model: abhinand/gemma-2b-tamil
Source: Original Platform
ModelHub XC
2026-04-14 00:32:04 +08:00
commit 0c609577e5
11 changed files with 501 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

167
README.md Normal file

@@ -0,0 +1,167 @@
---
language:
- en
- ta
license: other
base_model: google/gemma-2b
datasets:
- wikimedia/wikipedia
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-2b-tamil
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 47.44
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 71.3
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.21
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 34.93
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 12.89
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil
      name: Open LLM Leaderboard
---
# Gemma 2B Tamil v0.1 Alpha - Base Model [Experimental Release]
This is a Tamil foundational model continually pretrained from Google Gemma 2B. It is an experiment to see whether Gemma can be adapted to Tamil without expanding its vocabulary. While the responses can be rough at times, the model shows a lot of promise for a 2B-parameter model.
> **Please Note:** This is a FOUNDATIONAL (base) language model intended for causal language modeling, not instruction following. If you are looking for a Tamil instruction-following model, [abhinand/gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha) may suit your needs better.
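
For completeness, a minimal sketch of loading this base model for free-form completion with `transformers`; the sampling settings, device placement, and prompt are illustrative assumptions, not the author's recipe:

```python
# Minimal sketch: causal-LM completion with the base model. Sampling
# parameters and device settings are assumptions, not a tuned recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/gemma-2b-tamil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the training precision noted below
    device_map="auto",
)

# A base model continues text; it does not follow instructions.
prompt = "தமிழ் மொழி"  # "The Tamil language ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```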
**Procedure:**
1. The [Gemma base model](https://huggingface.co/google/gemma-2b) was continually pretrained on all available Tamil Wikipedia data for 3 epochs (a data-loading sketch follows the note below).
2. The updated model was then finetuned on a mix of English and Tamil Alpaca datasets for 5 epochs. The finetuned model can be found [here](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha).
> **Note:** The Tamil side of this project is still under development. The initial pretraining phase may not have been extensive enough, which suggests the model's performance could improve with extended pretraining on a larger corpus such as CulturaX.
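
As a hedged sketch of sourcing the step-1 data, the Tamil split of the Wikipedia dataset can be pulled with `datasets`; the snapshot config name `20231101.ta` is an assumption, so check the `wikimedia/wikipedia` card for the dump actually used:

```python
# Sketch: load a Tamil Wikipedia dump like the one used for continual
# pretraining (step 1). The config name "20231101.ta" is an assumption;
# pick the snapshot listed on the wikimedia/wikipedia dataset card.
from datasets import load_dataset

wiki_ta = load_dataset("wikimedia/wikipedia", "20231101.ta", split="train")
print(wiki_ta.num_rows)          # number of Tamil articles
print(wiki_ta[0]["text"][:200])  # preview one article
```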
## Model description
- **Model type:** A 2B parameter GPT-like model continually pretrained on all available Tamil data from [Wikipedia dataset](https://huggingface.co/datasets/wikimedia/wikipedia).
- **Language(s):** Bilingual. English and Tamil.
- **License:** [Google Gemma Terms of Use](https://ai.google.dev/gemma/terms)
- **Training Precision:** `bfloat16`
- **Training Hardware:** 4x Nvidia RTX 3090 GPUs
- **Training Cost:** $20
## Support my work
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/abhinand.b)
## Usage Note
These models have not undergone detoxification; despite their linguistic capabilities, they may generate content that is harmful or offensive. We urge users to exercise discretion and to supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-tamil)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.13|
|AI2 Reasoning Challenge (25-Shot)|47.44|
|HellaSwag (10-Shot) |71.30|
|MMLU (5-Shot) |38.21|
|TruthfulQA (0-shot) |34.93|
|Winogrande (5-shot) |65.98|
|GSM8k (5-shot) |12.89|

28
config.json Normal file

@@ -0,0 +1,28 @@
{
"_name_or_path": "google/gemma-2b",
"architectures": [
"GemmaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 2,
"eos_token_id": 1,
"head_dim": 256,
"hidden_act": "gelu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 16384,
"max_position_embeddings": 8192,
"model_type": "gemma",
"num_attention_heads": 8,
"num_hidden_layers": 18,
"num_key_value_heads": 1,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 10000.0,
"torch_dtype": "bfloat16",
"transformers_version": "4.39.0.dev0",
"use_cache": true,
"vocab_size": 256000
}
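
A quick cross-check, not part of the upstream files: with Gemma's tied embeddings (no separate `lm_head` in the weight map) and multi-query attention (`num_key_value_heads: 1`), the parameter count implied by these config values lands exactly on the `total_size` recorded in `model.safetensors.index.json` below when stored in `bfloat16`:

```python
# Derive the parameter count from the config fields above and compare
# it against the index's total_size (2 bytes per bfloat16 parameter).
h, inter, layers = 2048, 16384, 18
n_heads, n_kv, head_dim = 8, 1, 256
vocab = 256000

attn = h * (n_heads * head_dim) * 2       # q_proj and o_proj
attn += h * (n_kv * head_dim) * 2         # k_proj and v_proj (multi-query)
mlp = 3 * h * inter                       # gate_proj, up_proj, down_proj
per_layer = attn + mlp + 2 * h            # plus two RMSNorm weight vectors
total = vocab * h + layers * per_layer + h  # embeddings + final norm

print(f"{total:,}")      # 2,506,172,416 parameters (~2.5B)
print(f"{total * 2:,}")  # 5,012,344,832 bytes, matching total_size
```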

7
generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 2,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.39.0.dev0"
}

3
model-00001-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4bf96578863d46894b2e3258a7f9edfe43745530242ed628c41bfa18ae631bb
size 1948291744

3
model-00002-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c41eb9e5a51165e0d561b01fc4e99c8abb55b799d630d976d06a6d01b8c78350
size 1981891704

3
model-00003-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:babc8691c262de203049e362ec0213b03002a99a2a5452f6ea5c723dd58f6ff1
size 1082180304

171
model.safetensors.index.json Normal file

@@ -0,0 +1,171 @@
{
"metadata": {
"total_size": 5012344832
},
"weight_map": {
"model.embed_tokens.weight": "model-00001-of-00003.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.13.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.14.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.norm.weight": "model-00003-of-00003.safetensors"
}
}
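
An illustrative sketch of how a loader consumes `weight_map`: each tensor name resolves to the shard file that stores it. (Note that the three LFS pointer sizes above sum to slightly more than `total_size`, because each safetensors shard also carries its own JSON header.)

```python
# Sketch: resolve tensor names to shard files via the index. Assumes
# the repository files have been downloaded to the working directory.
import json
from collections import Counter

with open("model.safetensors.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]
print(weight_map["model.embed_tokens.weight"])  # model-00001-of-00003.safetensors
print(weight_map["model.norm.weight"])          # model-00003-of-00003.safetensors

# How many tensors live in each shard:
print(Counter(weight_map.values()))
```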

30
special_tokens_map.json Normal file

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<eos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61a7b147390c64585d6c3543dd6fc636906c9af3865a5548f27f31aee1d4c8e2
size 4241003

51
tokenizer_config.json Normal file

@@ -0,0 +1,51 @@
{
"add_bos_token": true,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<eos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "<bos>",
"clean_up_tokenization_spaces": false,
"eos_token": "<eos>",
"legacy": null,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<pad>",
"padding_side": "left",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"split_special_tokens": false,
"tokenizer_class": "GemmaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
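
A small sketch, assuming the repository is available locally or on the Hub, to confirm the special-token layout declared above: ids 0 through 3 map to `<pad>`, `<eos>`, `<bos>`, and `<unk>`, and `add_bos_token` prepends a BOS token by default:

```python
# Sketch: verify the tokenizer's special-token setup matches the
# special_tokens_map.json and tokenizer_config.json above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("abhinand/gemma-2b-tamil")
print(tok.pad_token_id, tok.eos_token_id, tok.bos_token_id, tok.unk_token_id)
# expected: 0 1 2 3

ids = tok("வணக்கம்").input_ids     # "Hello"
print(ids[0] == tok.bos_token_id)  # True: add_bos_token is enabled
```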