Initialize project; model provided by the ModelHub XC community
Model: gplsi/Aitana-7B-S-base-1.0 Source: Original Platform
This commit is contained in:
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
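Each rule above routes matching paths through Git LFS (`filter=lfs diff=lfs merge=lfs`) and unsets the `text` attribute so Git treats the files as binary. As a quick illustration of how such a rule is read (a minimal sketch; `parse_gitattributes_line` is a hypothetical helper, not part of git or git-lfs):

```python
# Minimal sketch: split a .gitattributes rule into its path pattern and
# attribute settings. "-text" unsets an attribute; "filter=lfs" sets a value.

def parse_gitattributes_line(line: str):
    """Parse one .gitattributes rule into (pattern, {attribute: value})."""
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if attr.startswith("-"):          # "-text": attribute explicitly unset
            parsed[attr[1:]] = False
        elif "=" in attr:                 # "filter=lfs": key/value setting
            key, value = attr.split("=", 1)
            parsed[key] = value
        else:                             # bare attribute, e.g. "text"
            parsed[attr] = True
    return pattern, parsed

pattern, attrs = parse_gitattributes_line(
    "*.safetensors filter=lfs diff=lfs merge=lfs -text"
)
# pattern == "*.safetensors"; attrs["filter"] == "lfs"; attrs["text"] is False
```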
304
README.md
Normal file
@@ -0,0 +1,304 @@
---
license: apache-2.0
language:
- ca
- es
- en
base_model: BSC-LT/salamandra-7b
tags:
- valencian
- catalan
- spanish
- english
- text-generation
- alia
- gplsi
datasets:
- gplsi/alia_dogv
- gplsi/alia_les_corts
- gplsi/alia_amic
- gplsi/alia_boua
- gplsi/alia_tourism
library_name: transformers
pipeline_tag: text-generation
---

# Aitana-7B-S-base-1.0

**Aitana-7B-S-base-1.0** is a generative language model from the **Aitana family**, developed by the [GPLSI (Language and Information Systems Group)](https://gplsi.dlsi.ua.es/) at the University of Alicante. The model is based on [BSC-LT/salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) and has been continually pre-trained on multilingual data (Valencian, Spanish, and English) to improve its representation of Valencian and Catalan.

## Table of Contents

- [Model Description](#model-description)
- [Evaluation](#evaluation)
- [Training Data](#training-data)
- [Intended Uses](#intended-uses)
- [How to Use](#how-to-use)
- [Additional Information](#additional-information)
## Model Description

| Property | Value |
|----------|-------|
| **Base Model** | [BSC-LT/salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) |
| **Architecture** | Transformer decoder-only |
| **Parameters** | ~7.77B |
| **Languages** | Valencian, Spanish, English |
| **License** | Apache 2.0 |

Aitana-7B-S-base-1.0 extends the multilingual Salamandra foundation with additional training on domain-specific Valencian, Spanish, and English data. The training emphasizes administrative, legal, and tourism domains.
## Training Data

This model was trained on the following ALIA datasets:

| Dataset ID | Name | Language | Source |
|------------|------|----------|--------|
| dc8 | dogv_va_2025 | Valencian | [gplsi/alia_dogv](https://huggingface.co/datasets/gplsi/alia_dogv) |
| dc9 | dogv_es_2025 | Spanish | [gplsi/alia_dogv](https://huggingface.co/datasets/gplsi/alia_dogv) |
| dc10 | corts_es_va_2025 | Spanish/Valencian | [gplsi/alia_les_corts](https://huggingface.co/datasets/gplsi/alia_les_corts) |
| dc11 | amic_va_2025 | Valencian | [gplsi/alia_amic](https://huggingface.co/datasets/gplsi/alia_amic) |
| dc12 | boua_va_2025 | Valencian | [gplsi/alia_boua](https://huggingface.co/datasets/gplsi/alia_boua) |
| dc13 | boua_es_2025 | Spanish | [gplsi/alia_boua](https://huggingface.co/datasets/gplsi/alia_boua) |
| dc14 | tourism_va_2025 | Valencian | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |
| dc15 | tourism_es_2025 | Spanish | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |
| dc16 | tourism_en_2025 | English | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |

### Data Sources

- **DOGV (Diari Oficial de la Generalitat Valenciana)**: Official communications of the Valencian Community, including laws and public-sector communications
- **Les Corts Valencianes**: Transcripts of plenary sessions and committee meetings of the Valencian Parliament
- **AMIC**: Valencian-language corpus
- **BOUA (Butlletí Oficial de la Universitat d'Alacant)**: Official University of Alicante documents, including grants, regulations, and resolutions
- **Tourism**: Multilingual tourism-domain content
## Intended Uses

This model can be used for:

- **Text generation** in Valencian, Spanish, and English
- **Fine-tuning** for specific downstream tasks
- **Domain adaptation** for administrative, legal, or tourism applications

> **Note**: Due to the formal register of the training data (administrative and legal domains), generated text tends toward formal language.
## How to Use

### Transformers

```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-7B-S-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Valencian example
text = "Les corts valencianes han pres la decisió de"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])

# Spanish example
text = "El turismo en la Comunidad Valenciana"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])
```
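By default, `pipeline("text-generation")` returns the prompt concatenated with the continuation in `generated_text`. A small sketch of separating the two (pure string handling; the sample values below are illustrative, not real model output — you can also simply pass `return_full_text=False` to the pipeline call):

```python
# The pipeline's generated_text includes the prompt itself; strip it to
# keep only the newly generated continuation.

def continuation_only(prompt: str, generated_text: str) -> str:
    """Return only the text the model added after the prompt."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):]
    return generated_text  # fall back if the output was already stripped

# Illustrative values only -- not actual model output:
prompt = "El turismo en la Comunidad Valenciana"
generated = "El turismo en la Comunidad Valenciana es un sector clave."
print(continuation_only(prompt, generated))  # -> " es un sector clave."
```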
## Evaluation

The tables below report results on benchmarks from [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), compared against the base model used for continual pre-training.
All results were obtained from the pre-trained model; no instruction tuning or fine-tuning of any kind was performed. The better score in each row is shown in bold.

### Normalized score per language

| Language | Salamandra-7B | Aitana-7B-S-base-1.0 |
|----------|---------------|----------------------|
| **Spanish** | **0.255** | 0.252 |
| **Catalan** | 0.373 | **0.378** |
| **English** | 0.329 | **0.364** |
| **Valencian** | **0.614** | **0.614** |
### Valencian

#### Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| XNLI | va | Natural Language Inference | acc | **0.50** | **0.50** |

#### Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Cocoteros | va | Reading Comprehension | bleu | 12.01 | **16.19** |
| Phrases ca-va | va-ca | Translation - Adaptation | bleu | **86.80** | 85.33 |
| Phrases va-ca | va-ca | Translation - Adaptation | bleu | **94.71** | 80.00 |
| Phrases va-es | va-es | Translation | bleu | 79.74 | **80.59** |
| Phrases es-va | es-va | Translation | bleu | 66.42 | **69.78** |
| Truthfulqa_va | va | Truthfulness | bleu_acc | 0.33 | **0.37** |
### Catalan

#### Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Belebele Cat_latn | ca | Reading Comprehension | acc | 0.51 | **0.54** |
| COPA | ca | Commonsense Reasoning | acc | 0.80 | **0.82** |
| XStoryCloze | ca | Commonsense Reasoning | acc | 0.75 | **0.77** |
| OpenBookQA | ca | Question Answering | acc | **0.38** | **0.38** |
| PAWS | ca | Paraphrasing | acc | **0.62** | **0.62** |
| PiQA | ca | Question Answering | acc | 0.71 | **0.72** |
| SiQA | ca | Question Answering | acc | 0.49 | **0.51** |
| ARC Easy | ca | Question Answering | acc | **0.73** | **0.73** |
| ARC Challenge | ca | Question Answering | acc | **0.47** | 0.46 |
| XNLI | ca | Natural Language Inference | acc | **0.51** | 0.50 |
| Teca | ca | Natural Language Inference | acc | **0.53** | **0.53** |
| WNLI | ca | Natural Language Inference | acc | 0.59 | **0.62** |
| Catcola | ca | Linguistic Acceptability | acc | **0.73** | **0.73** |
| Catcola | ca | Linguistic Acceptability | mcc | **0.29** | 0.15 |
| Catalanqa | ca | Question Answering | F1 | 0.82 | **0.83** |
| Mgsm direct | ca | Math | exact match | 0.07 | **0.09** |
| Catalanqa | ca | Question Answering | exact match | 0.62 | **0.65** |
| Xquad | ca | Question Answering | exact match | 0.49 | **0.51** |
| Xquad | ca | Question Answering | F1 | 0.71 | **0.73** |

#### Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Cabreu abstractive | ca | Summarization | bleu | 8.73 | **11.32** |
| Cabreu extractive | ca | Summarization | bleu | **44.55** | 41.80 |
| Cabreu extreme | ca | Summarization | bleu | 10.66 | **12.54** |
### Spanish

#### Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Belebele | es | Reading Comprehension | acc | 0.493 | **0.561** |
| PAWS | es | Paraphrasing | acc | **0.608** | 0.591 |
| XNLI | es | Natural Language Inference | acc | **0.468** | 0.462 |
| WNLI | es | Natural Language Inference | acc | **0.465** | 0.437 |
| XStoryCloze | es | Commonsense Reasoning | acc | 0.745 | **0.756** |
| Escola | es | Linguistic Acceptability | acc | **0.706** | 0.678 |
| Escola | es | Linguistic Acceptability | mcc | **0.295** | 0.146 |
| OpenbookQA | es | Question Answering | acc | **0.406** | 0.382 |
| MGSM Direct | es | Math | exact match | 0.068 | **0.080** |
| XQUAD | es | Question Answering | exact match | 0.501 | **0.505** |
| XQUAD | es | Question Answering | F1 | 0.711 | **0.719** |

#### Generation Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Cocoteros | es | Reading Comprehension | bleu | 13.68 | **17.51** |
| XLSum | es | Summarization | bleu | 3.59 | **5.75** |
### English

#### Classification Benchmarks

| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---------|-------|------|--------|---------------|----------------------|
| Arc Challenge | en | Question Answering | acc | **0.527** | 0.526 |
| Arc Easy | en | Question Answering | acc | **0.824** | 0.814 |
| Belebele | en | Reading Comprehension | acc | 0.549 | **0.573** |
| PAWS | en | Paraphrasing | acc | **0.633** | 0.615 |
| XNLI | en | Natural Language Inference | acc | **0.483** | 0.476 |
| XStoryCloze | en | Commonsense Reasoning | acc | **0.795** | 0.793 |
| OpenBookQA | en | Question Answering | acc | 0.356 | **0.362** |
| PiQA | en | Question Answering | acc | 0.797 | **0.799** |
| Social iqa | en | Question Answering | acc | **0.513** | 0.512 |
| WNLI | en | Natural Language Inference | acc | 0.479 | **0.606** |
| MGSM Direct | en | Math | exact match | 0.280 | **0.564** |
| TriviaQA | en | Question Answering | exact match | 0.597 | **0.602** |
| CoLA | en | Linguistic Acceptability | mcc | **0.412** | 0.361 |
## Additional Information

### Author

The model has been developed by the **[Language and Information Systems Group (GPLSI)](https://gplsi.dlsi.ua.es/)** and the **[Centro de Inteligencia Digital (CENID)](https://cenid.es)**, both part of the **[University of Alicante (UA)](https://www.ua.es/es/)**, as part of their ongoing research in **Natural Language Processing (NLP)**.

### Part of the Aitana Family

This model is part of the Aitana model family developed by the GPLSI research group, which includes:

- [gplsi/Aitana-2B-S](https://huggingface.co/gplsi/Aitana-2B-S) - Valencian-focused 2B model
- [gplsi/Aitana-2B-S-base-1.0](https://huggingface.co/gplsi/Aitana-2B-S-base-1.0) - Base version (1.0) of the 2B model
- [gplsi/Aitana-6.3B](https://huggingface.co/gplsi/Aitana-6.3B) - Larger 6.3B-parameter model
- [gplsi/Aitana-TA-2B-S](https://huggingface.co/gplsi/Aitana-TA-2B-S) - Translation model (Spanish ↔ Valencian)
- [gplsi/Aitana-2B-S-LF](https://huggingface.co/gplsi/Aitana-2B-S-LF) - 2B text-generation variant
- [gplsi/Aitana-2B-S-tourism-base-1.0](https://huggingface.co/gplsi/Aitana-2B-S-tourism-base-1.0) - Domain-specific base model focused on tourism
- [gplsi/Aitana-tourism-mb-encoder-1.0](https://huggingface.co/gplsi/Aitana-tourism-mb-encoder-1.0) - Tourism-domain fill-mask/encoder model
- [gplsi/Aitana-FraudDetection-R-1.0](https://huggingface.co/gplsi/Aitana-FraudDetection-R-1.0) - Text-classification model for fraud detection
### Funding

This work is funded by the **Ministerio para la Transformación Digital y de la Función Pública**, co-financed by the **EU – NextGenerationEU**, within the framework of the project *Desarrollo de Modelos ALIA*.

### Acknowledgments

We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.

Special thanks to:

- [Language Technologies Laboratory at the Barcelona Supercomputing Center](https://www.bsc.es/es/discover-bsc/organisation/research-structure/language-technologies-laboratory)
- [Centro Vasco de Tecnología de la Lengua (HiTZ)](https://www.hitz.eus/es)
- [Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)](https://citius.gal/)
- [Sistemas Inteligentes de Acceso a la Información (SINAI)](https://www.ujaen.es/investigacion-y-transferencia/grupos-de-investigacion/sistemas-inteligentes-de-acceso-la-informacion-sinai)
- [Instituto Universitario de Investigación Informática (IUII)](https://web.ua.es/es/iuii/)
- [Leonardo HPC System](https://leonardo-supercomputer.cineca.eu/)
- [European supercomputing ecosystem (EuroHPC)](https://www.eurohpc-ju.europa.eu/)

We also acknowledge the financial, technical, and scientific support of the **Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Desarrollo de Modelos ALIA**, whose contribution has been essential to the completion of this research.

### License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Disclaimer

This model is intended for general purposes and is available under the permissive Apache License 2.0. Be aware that the model may produce biased and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating risks and complying with applicable AI regulations.

### Reference

```bibtex
@misc{gplsi-aitana-7B-S-base-1.0,
  author = {Estevanell-Valladares, Ernesto L. and Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Consuegra-Ayala, Juan Pablo and Galiano, Santiago and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title = {Aitana 7B base: Continually pre-trained on Valencian},
  year = {2026},
  institution = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-7B-S-base-1.0}},
  note = {Accessed: 2026-04-08}
}
```

---

**Copyright © 2026 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.**
29
config.json
Normal file
@@ -0,0 +1,29 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 256000
}
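The ~7.77B parameter count stated in the model card can be reproduced from the config.json fields above. A back-of-the-envelope sketch, assuming the standard Llama decoder layout with untied embeddings (per `tie_word_embeddings: false`) and grouped-query attention:

```python
# Recompute the parameter count from config.json values (Llama layout).
hidden = 4096          # hidden_size
inter = 11008          # intermediate_size
layers = 32            # num_hidden_layers
heads = 32             # num_attention_heads
kv_heads = 8           # num_key_value_heads (grouped-query attention)
head_dim = 128         # head_dim
vocab = 256000         # vocab_size

embed = vocab * hidden                      # input embeddings
lm_head = vocab * hidden                    # output head (untied)
attn = (hidden * heads * head_dim           # q_proj
        + 2 * hidden * kv_heads * head_dim  # k_proj + v_proj
        + heads * head_dim * hidden)        # o_proj
mlp = 3 * hidden * inter                    # gate_proj + up_proj + down_proj
norms = 2 * hidden                          # two RMSNorms per layer
per_layer = attn + mlp + norms

total = embed + lm_head + layers * per_layer + hidden  # + final norm
print(total)      # 7768117248, i.e. ~7.77B
print(total * 2)  # 15536234496 bytes in bfloat16, matching the
                  # "total_size" field in model.safetensors.index.json
```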
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.45.2"
}
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d35dd7970143b8f7c73bea75fc2e639ed5176c5e8bcddfb252067282bcc5b43c
size 4982973048
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d7b0ca473d9240738ae86ceccf3573388e72bb18d12981bddb0f9641623d05a
size 4995660232
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8fa487237135bb8b1d8d23dcd7178e2a0f6963ba081d51ee8e031bfea63c6129
size 3460482936
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a2d377a1e8649882ea726b9a2c5666edfda341330e6ee39a3a799dbb4d6d645
size 2097152128
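The LFS pointer files above record each shard's SHA-256 (`oid sha256:...`) and byte size, which can be used to verify a download. A minimal sketch (the file name and expected digest in the commented check are taken from the first pointer above; the weights themselves must already be fetched, e.g. via `git lfs pull`):

```python
# Sketch: stream a downloaded shard through SHA-256 and compare the digest
# against the Git LFS pointer's oid, without loading the file into memory.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading the real weights, compare against the pointer fields:
# shard = Path("model-00001-of-00004.safetensors")
# assert shard.stat().st_size == 4982973048
# assert sha256_of(shard) == (
#     "d35dd7970143b8f7c73bea75fc2e639ed5176c5e8bcddfb252067282bcc5b43c")
```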
298
model.safetensors.index.json
Normal file
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 15536234496
  },
  "weight_map": {
    "lm_head.weight": "model-00004-of-00004.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
|
||||
"model.norm.weight": "model-00003-of-00004.safetensors"
|
||||
}
|
||||
}
|
||||
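The index above is how a sharded safetensors checkpoint is stitched back together: `weight_map` assigns each tensor name to the shard file that stores it, so a loader reads the small index first, groups tensors by shard, and opens each shard only once. A minimal sketch of that lookup logic (the tiny inline `index` dict is an excerpt of the mapping shown above, not the full file):

```python
# Sketch: resolving tensors to shards from a model.safetensors.index.json-style
# mapping. The inline dict mirrors a few entries from the index above.
index = {
    "weight_map": {
        "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
        "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
        "model.norm.weight": "model-00003-of-00004.safetensors",
    }
}

def shard_for(index: dict, tensor_name: str) -> str:
    """Return the shard file that holds tensor_name, per the weight_map."""
    return index["weight_map"][tensor_name]

def tensors_by_shard(index: dict) -> dict:
    """Group tensor names by shard so each shard file is opened only once."""
    groups: dict = {}
    for name, shard in index["weight_map"].items():
        groups.setdefault(shard, []).append(name)
    return groups

print(shard_for(index, "model.norm.weight"))  # -> model-00003-of-00004.safetensors
```

In practice `transformers` performs this resolution automatically when `from_pretrained` finds an index file next to the shards.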
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e90b85b3e3b3ebfc6b9bafeb954b37f2435eed595738337e53f2a746d23d5a2
size 37007416
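Files tracked by Git LFS (per the `.gitattributes` rules in this commit) are checked in as three-line pointer files like the one above; the real blob lives in LFS storage, keyed by the `oid`. A small sketch of parsing such a pointer, using the `tokenizer.json` pointer shown above as input:

```python
# Sketch: a git-lfs pointer file is a sequence of "key value" lines. Parsing
# it recovers the object id and byte size without fetching the actual blob.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:2e90b85b3e3b3ebfc6b9bafeb954b37f2435eed595738337e53f2a746d23d5a2
size 37007416
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each line at the first space into a key/value pair."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

info = parse_lfs_pointer(POINTER)
print(info["size"])  # -> 37007416 (bytes of the real tokenizer.json blob)
```

This is why cloning such a repo without `git lfs` installed yields tiny text stubs instead of the multi-megabyte model files.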
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa490e57cebce5cb1a0a5b1a5d3fa4de05aee53dc3a44791f1c3401db44d802d
size 4813274
43
tokenizer_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": false,
  "local_files_only": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
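Two flags in this config shape every encoded sequence: `add_bos_token: true` prepends `<s>` (id 1 per `added_tokens_decoder`) and `add_eos_token: false` leaves `</s>` (id 2) off. A minimal illustrative sketch of that behavior — not the actual `LlamaTokenizer` implementation, and the token ids 450/332 below are arbitrary placeholders:

```python
# Sketch of how add_bos_token / add_eos_token shape an encoded sequence.
# Ids follow added_tokens_decoder above: 0=<unk>, 1=<s>, 2=</s>.
BOS_ID, EOS_ID = 1, 2

def apply_special_tokens(ids, add_bos_token=True, add_eos_token=False):
    """Prepend <s> and/or append </s> according to the config flags."""
    out = list(ids)
    if add_bos_token:
        out = [BOS_ID] + out
    if add_eos_token:
        out = out + [EOS_ID]
    return out

# With this repo's config (add_bos_token=true, add_eos_token=false):
print(apply_special_tokens([450, 332]))  # -> [1, 450, 332]
```

The real tokenizer applies the same defaults whenever `encode`/`__call__` is used with special tokens enabled, which is why generated training examples for this model start with `<s>` but require the collator (or the model) to supply the trailing `</s>`.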