Initialize the project; model provided by the ModelHub XC community
Model: gplsi/Aitana-2B-S-base-IP-1.0 (Source: Original Platform)
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
157
README.md
Normal file
@@ -0,0 +1,157 @@
---
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- causal-lm
- text-generation
- transformers
---

# Aitana-2B-S-base-IP-1.0

## Table of Contents

- Model description
- Intended uses and limitations
- How to use
- Training
- Technical specifications
- Additional information

## Model description

Aitana-2B-S-base-IP-1.0 is a generative language model with a decoder-only
architecture. This repository contains the base checkpoint, intended for causal
language modeling and for further adaptation or task-specific fine-tuning.

Based on the files shipped in this repository, the checkpoint uses the Llama
architecture and the Transformers ecosystem. The local configuration indicates
the following (a quick way to verify these values is sketched after the list):

- architecture: `LlamaForCausalLM`
- hidden size: `2048`
- layers: `24`
- attention heads: `16`
- vocabulary size: `256000`
- context length: `8192`
- tensor dtype in config: `bfloat16`
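
As a sanity check, these values can be read straight from the bundled `config.json`
with `transformers`. This is a minimal sketch; the printed numbers should match the
list above.

```python
from transformers import AutoConfig

# Load the configuration shipped with this repository.
config = AutoConfig.from_pretrained("gplsi/Aitana-2B-S-base-IP-1.0")

# These fields should match the values listed above.
print(config.architectures)            # ['LlamaForCausalLM']
print(config.hidden_size)              # 2048
print(config.num_hidden_layers)        # 24
print(config.num_attention_heads)      # 16
print(config.vocab_size)               # 256000
print(config.max_position_embeddings)  # 8192
```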

## Intended uses and limitations

Aitana-2B-S-base-IP-1.0 is a base model that can be used for causal language
modeling and text generation. As with other base checkpoints, it is generally more
useful as a starting point for instruction-tuning, domain adaptation, or downstream
fine-tuning than as a final end-user assistant model.

Because this repository currently only exposes the model artifacts and not the full
training report, claims about domain coverage, language balance, safety behavior, and
benchmark performance should be added only once they are confirmed by the model
authors.

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gplsi/Aitana-2B-S-base-IP-1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Escriu un breu resum sobre la importància de la llengua."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
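
For quick experiments, the same checkpoint can also be driven through the high-level
`pipeline` API. The following is an equivalent sketch of the call above, not an
officially documented entry point for this model.

```python
import torch
from transformers import pipeline

# High-level alternative to the manual generate() call above.
generator = pipeline(
    "text-generation",
    model="gplsi/Aitana-2B-S-base-IP-1.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator(
    "Escriu un breu resum sobre la importància de la llengua.",
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(result[0]["generated_text"])
```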

## Training

### Base model

TO-DO: document the original parent checkpoint or initialization source for
Aitana-2B-S-base-IP-1.0.

### Training data

TO-DO: document the training corpora, language distribution, preprocessing steps,
deduplication policy, anonymization steps, and data filtering criteria.

### Training hyperparameters

TO-DO: document the effective batch size, learning rate schedule, optimizer setup,
number of epochs or tokens seen, sequence length used during training, and hardware.

## Technical specifications

### Model architecture and objective

Key values from the local `config.json` (a rough parameter-count estimate derived
from these values is sketched after the list):

- architecture: decoder-only causal language model
- implementation class: `LlamaForCausalLM`
- hidden size: `2048`
- intermediate size: `5440`
- layers: `24`
- attention heads: `16`
- key/value heads: `16`
- maximum position embeddings: `8192`
- vocabulary size: `256000`
- BOS token id: `1`
- EOS token id: `2`
- PAD token id: `3`
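
As a rough cross-check (an unofficial estimate assuming standard Llama weight shapes,
with biases disabled per `attention_bias`/`mlp_bias` in `config.json`), the parameter
count implied by these values can be tallied and compared against the ~4.5 GB
`model.safetensors` file, which stores bfloat16 tensors at 2 bytes per parameter.

```python
# Rough parameter count implied by the config values above (estimate only).
hidden = 2048
inter = 5440
layers = 24
vocab = 256000
head_dim = 128
heads = 16
kv_heads = 16

# Embeddings and (untied) output head: tie_word_embeddings is false in config.json.
embed = vocab * hidden    # input embeddings
lm_head = vocab * hidden  # output projection

# Per-layer attention: q/o project to heads*head_dim, k/v to kv_heads*head_dim.
attn = hidden * heads * head_dim * 2 + hidden * kv_heads * head_dim * 2
# Per-layer SwiGLU MLP: gate, up, and down projections.
mlp = hidden * inter * 3
# RMSNorm weights (input + post-attention) per layer.
norms = 2 * hidden

# Final RMSNorm contributes one more `hidden`-sized weight vector.
total = embed + lm_head + layers * (attn + mlp + norms) + hidden
print(f"~{total / 1e9:.2f}B parameters")         # roughly 2.25B
print(f"~{total * 2 / 1e9:.2f} GB in bfloat16")  # close to the 4.5 GB checkpoint
```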

### Tokenizer

The tokenizer files in this repository define the following special tokens (a quick
way to inspect them is sketched after the list):

- BOS token: `<s>`
- EOS token: `</s>`
- PAD token: `<pad>`
- UNK token: `<unk>`
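
These tokens, and the ids declared in `config.json` (BOS `1`, EOS `2`, PAD `3`), can
be checked directly against the tokenizer files. A minimal sketch; the ids in the
comments are the expected values, not independently verified output.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gplsi/Aitana-2B-S-base-IP-1.0")

# Special tokens defined in special_tokens_map.json / tokenizer_config.json.
print(tokenizer.bos_token, tokenizer.bos_token_id)  # expected: <s> 1
print(tokenizer.eos_token, tokenizer.eos_token_id)  # expected: </s> 2
print(tokenizer.pad_token, tokenizer.pad_token_id)  # expected: <pad> 3, per config.json
print(tokenizer.unk_token)                          # expected: <unk>
```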

### Hardware and software

The repository is packaged for the Hugging Face `transformers` library. Specific
training hardware and software details should be documented by the model authors if
they are intended to be part of the public model card.

## Additional information

### Author

TO-DO: confirm the author list and institutional attribution to be displayed in the
public model card.

### Contact

TO-DO: add a contact email or project contact point.

### License

TO-DO: confirm the license for this checkpoint and add it both here and in
`config.json` if desired.

### Funding

TO-DO: add funding information if this checkpoint is part of a funded project.

### Disclaimer

This repository contains a base language model checkpoint. Base models can reflect
biases present in their training data and may generate inaccurate, misleading, or
unsafe content. Anyone deploying this model, or systems built on top of it, is
responsible for evaluating those risks and ensuring compliance with applicable legal,
ethical, and operational requirements.
30
config.json
Normal file
@@ -0,0 +1,30 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "dtype": "bfloat16",
  "eos_token_id": 2,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 5440,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "num_key_value_heads": 16,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "transformers_version": "4.57.1",
  "use_cache": true,
  "vocab_size": 256000,
  "pad_token_id": 3
}
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.57.1"
}
3
model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbebcfd6c3dfe0365ab2bc2d9b57e4e3c47b57ddcb303132d6dd177f94dcfd39
size 4507005744
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a84e9f07a00c042e289e0d3e5d0ea113e86d40ea86ecfeae60db162fc11d88b
size 37007413
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab94ddf46d14f0279254858d53770c5319c5129d47291ee2bada530271cb1292
size 4813276
16
tokenizer_config.json
Normal file
@@ -0,0 +1,16 @@
{
  "add_prefix_space": true,
  "backend": "tokenizers",
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "is_local": true,
  "local_files_only": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}