Initialize the project; model provided by the ModelHub XC community
Model: gplsi/Aitana-2B-S-tourism-base-1.0 Source: Original Platform
37 .gitattributes vendored Normal file
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Aitana-s2b-c0dc7-f16.gguf filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
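The attribute lines above route matching files through Git LFS so that large binaries are stored as pointers rather than in the git history. As a rough illustration of which files end up in LFS, here is a minimal sketch; the `tracked_by_lfs` helper is hypothetical, and Python's `fnmatch` only approximates git's attribute matching (it ignores path-aware rules such as `saved_model/**/*`):

```python
from fnmatch import fnmatch

# Excerpt of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "tokenizer.json",
    "Aitana-s2b-c0dc7-f16.gguf",
]

def tracked_by_lfs(filename):
    # True if the file would be stored as an LFS pointer under these patterns.
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```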
3 Aitana-s2b-c0dc7-f16.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b7a8de07ea8499447c92ab7e4052fbd4e5032b6b44ed98ba803edb1fea2fff1
size 4513628320
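The three lines above are a Git LFS pointer: the repository stores this tiny text stub, and the actual 4.5 GB GGUF binary lives in LFS storage, addressed by its SHA-256. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text):
    # Each pointer line is "key value"; split on the first space.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:8b7a8de07ea8499447c92ab7e4052fbd4e5032b6b44ed98ba803edb1fea2fff1
size 4513628320"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # ~4.51 GB, matching the README's "~4.5 GB"
```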
181 README.md Normal file
@@ -0,0 +1,181 @@
---
license: apache-2.0
language:
- ca
- es
- en
base_model: gplsi/Aitana-2B-S-base-1.0
tags:
- valencian
- catalan
- spanish
- english
- text-generation
- tourism
- alia
- gplsi
datasets:
- gplsi/alia_tourism
library_name: transformers
pipeline_tag: text-generation
---

# Aitana-2B-S-tourism-base-1.0

**Aitana-2B-S-tourism-base-1.0** is a generative language model from the **Aitana family**, developed by the [Language and Information Systems Group (GPLSI)](https://gplsi.dlsi.ua.es/) at the University of Alicante. It is based on [gplsi/Aitana-2B-S-base-1.0](https://huggingface.co/gplsi/Aitana-2B-S-base-1.0) and has been further trained on tourism-domain data to improve tourism-related text generation.

## Table of Contents

- [Model Description](#model-description)
- [Training Data](#training-data)
- [Intended Uses](#intended-uses)
- [How to Use](#how-to-use)
- [GGUF for LM Studio](#gguf-for-lm-studio)
- [Additional Information](#additional-information)

## Model Description

| Property | Value |
|----------|-------|
| **Base Model** | [gplsi/Aitana-2B-S-base-1.0](https://huggingface.co/gplsi/Aitana-2B-S-base-1.0) |
| **Architecture** | Transformer decoder-only |
| **Parameters** | ~2.25B |
| **Languages** | Valencian, Spanish, English |
| **License** | Apache 2.0 |

Aitana-2B-S-tourism-base-1.0 extends the Aitana-2B-S-base-1.0 foundation with additional training on tourism-domain data, making it particularly well suited for tourism applications in Valencian, Spanish, and English.

## Training Data

This model was trained on the following tourism-domain dataset:

| Dataset ID | Name | Language | Source |
|------------|------|----------|--------|
| dc7 | tourism_va_2025 | Valencian | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |
| dc7 | tourism_es_2025 | Spanish | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |
| dc7 | tourism_en_2025 | English | [gplsi/alia_tourism](https://huggingface.co/datasets/gplsi/alia_tourism) |

### Data Source

- **Tourism**: Multilingual tourism-domain content covering tourist information, destinations, accommodations, cultural sites, and travel-related text in Valencian, Spanish, and English.

## Intended Uses

This model can be used for:

- **Tourism text generation** in Valencian, Spanish, and English
- **Travel content creation** and assistance
- **Fine-tuning** for specific tourism downstream tasks
- **Domain adaptation** for hospitality and travel applications

> **Note**: This model is specifically optimized for tourism-domain content. For general-purpose or administrative/legal text, consider other models in the Aitana family.

## How to Use

### Transformers

```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-2B-S-tourism-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tourism example in Spanish
text = "El turismo en la Comunidad Valenciana ofrece"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])

# Tourism example in Valencian
text = "Les platges de la Costa Blanca són"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])

# Tourism example in English
text = "The best beaches in Valencia include"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])
```

## GGUF for LM Studio

This repository includes a GGUF version for use with [LM Studio](https://lmstudio.ai/), [Ollama](https://ollama.ai/), and other llama.cpp-based tools.

| File | Precision | Size |
|------|-----------|------|
| `Aitana-s2b-c0dc7-f16.gguf` | F16 | ~4.5 GB |

### Using with llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gplsi/Aitana-2B-S-tourism-base-1.0",
    filename="Aitana-s2b-c0dc7-f16.gguf",
)

output = llm("El turismo en Valencia ofrece", max_tokens=100)
print(output["choices"][0]["text"])
```

## Additional Information

### Author

The model has been developed by the **[Language and Information Systems Group (GPLSI)](https://gplsi.dlsi.ua.es/)** and the **[Centro de Inteligencia Digital (CENID)](https://cenid.es)**, both part of the **[University of Alicante (UA)](https://www.ua.es/es/)**, as part of their ongoing research in **Natural Language Processing (NLP)**.

### Funding

This work is funded by the **Ministerio para la Transformación Digital y de la Función Pública**, co-financed by the **EU – NextGenerationEU**, within the framework of the project *Desarrollo de Modelos ALIA*.

### Acknowledgments

We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.

Special thanks to:

- [Language Technologies Laboratory at Barcelona Supercomputing Center](https://www.bsc.es/es/discover-bsc/organisation/research-structure/language-technologies-laboratory)
- [Centro Vasco de Tecnología de la Lengua (HiTZ)](https://www.hitz.eus/es)
- [Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)](https://citius.gal/)
- [Sistemas Inteligentes de Acceso a la Información (SINAI)](https://www.ujaen.es/investigacion-y-transferencia/grupos-de-investigacion/sistemas-inteligentes-de-acceso-la-informacion-sinai)
- [Instituto Universitario de Investigación Informática (IUII)](https://web.ua.es/es/iuii/)
- [Leonardo HPC System](https://leonardo-supercomputer.cineca.eu/)
- [European supercomputing ecosystem (EUROHPC)](https://www.eurohpc-ju.europa.eu/)

We also acknowledge the financial, technical, and scientific support of the **Ministerio para la Transformación Digital y de la Función Pública**, funded by the **EU – NextGenerationEU** within the framework of the project *Desarrollo de Modelos ALIA*, whose contribution has been essential to the completion of this research.

### License

[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Disclaimer

This model is intended for general purposes and is available under the permissive Apache License 2.0. Be aware that the model may produce biased and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating risks and complying with applicable AI regulations.

### Reference

```bibtex
@misc{gplsi-aitana-2B-S-base-1.0,
  author = {Estevanell-Valladares, Ernesto L. and Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Galeano, Santiago and Consuegra-Ayala, Juan Pablo and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title = {Aitana 2B base: Continually pre-trained on Valencian},
  year = {2025},
  institution = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-2B-S-base-1.0}},
  note = {Accessed: 2025-12-12}
}
```

---

**Copyright © 2026 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.**
30 config.json Normal file
@@ -0,0 +1,30 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "dtype": "bfloat16",
  "eos_token_id": 2,
  "pad_token_id": 2,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 5440,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "num_key_value_heads": 16,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "transformers_version": "4.56.2",
  "use_cache": true,
  "vocab_size": 256000
}
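The config above fixes the model's shape, so the README's "~2.25B parameters" figure can be checked with back-of-the-envelope arithmetic (a sketch assuming the standard Llama layout: no biases, per `attention_bias`/`mlp_bias: false`, untied embeddings per `tie_word_embeddings: false`):

```python
# Shape constants from config.json above.
vocab, hidden, inter, layers = 256000, 2048, 5440, 24
heads, kv_heads, head_dim = 16, 16, 128

embed = vocab * hidden                       # input embeddings
lm_head = vocab * hidden                     # untied output head
attn = (hidden * heads * head_dim * 2        # q and o projections
        + hidden * kv_heads * head_dim * 2)  # k and v projections
mlp = 3 * hidden * inter                     # gate, up, down projections
norms = 2 * hidden                           # two RMSNorm weights per layer
per_layer = attn + mlp + norms

total = embed + lm_head + layers * per_layer + hidden  # + final norm
print(total)  # 2253490176, i.e. ~2.25B
```

At 2 bytes per bfloat16 parameter this also lines up with the ~4.5 GB `model.safetensors` file below.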
6 generation_config.json Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.56.2"
}
3 model.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2eae3355f90e1f5d0ec2a417dd02e8407a16bc02774f3de417a076e0e810c28f
size 4507005744
24 special_tokens_map.json Normal file
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "</s>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
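One detail worth noting in the map above: `pad_token` is the bare string `"</s>"`, so padding reuses the end-of-sequence token, a common choice for models trained without a dedicated pad token. A small sketch of reading that relationship out of the JSON (the `content` helper is hypothetical, and the snippet inlines an excerpt of the file):

```python
import json

# Excerpt of special_tokens_map.json above.
raw = '''
{
  "bos_token": {"content": "<s>"},
  "eos_token": {"content": "</s>"},
  "pad_token": "</s>",
  "unk_token": {"content": "<unk>"}
}
'''
tokens = json.loads(raw)

def content(entry):
    # Entries are either bare strings or {"content": ...} dicts.
    return entry["content"] if isinstance(entry, dict) else entry

print(content(tokens["pad_token"]) == content(tokens["eos_token"]))  # True
```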
3 tokenizer.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e90b85b3e3b3ebfc6b9bafeb954b37f2435eed595738337e53f2a746d23d5a2
size 37007416
3 tokenizer.model Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab94ddf46d14f0279254858d53770c5319c5129d47291ee2bada530271cb1292
size 4813276
1100 tokenizer_config.json Normal file
File diff suppressed because it is too large