初始化项目,由ModelHub XC社区提供模型
Model: clibrain/lince-zero Source: Original Platform
This commit is contained in:
35
.gitattributes
vendored
Normal file
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
*.7z filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.arrow filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bin filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.bz2 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ckpt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ftz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.gz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.h5 filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.joblib filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.lfs.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.mlmodel filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.model filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.msgpack filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npy filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.npz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.onnx filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.ot filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.parquet filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pb filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pickle filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pkl filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pt filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.pth filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.rar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.safetensors filter=lfs diff=lfs merge=lfs -text
|
||||||
|
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar.* filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tar filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tflite filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.tgz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.wasm filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.xz filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zip filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*.zst filter=lfs diff=lfs merge=lfs -text
|
||||||
|
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
||||||
BIN
LINCE-CLIBRAIN-HD.jpg
Normal file
BIN
LINCE-CLIBRAIN-HD.jpg
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 700 KiB |
256
README.md
Normal file
256
README.md
Normal file
@@ -0,0 +1,256 @@
|
|||||||
|
---
|
||||||
|
model-index:
|
||||||
|
- name: lince-zero
|
||||||
|
results: []
|
||||||
|
license: apache-2.0
|
||||||
|
language:
|
||||||
|
- es
|
||||||
|
thumbnail: https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg
|
||||||
|
pipeline_tag: text-generation
|
||||||
|
library_name: transformers
|
||||||
|
inference: false
|
||||||
|
---
|
||||||
|
|
||||||
|
**LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a Spanish instruction-tuned LLM 🔥
|
||||||
|
|
||||||
|
Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples proprietary dataset inspired in famous instruction datasets such as Alpaca and Dolly.
|
||||||
|
|
||||||
|
The model is released under the Apache 2.0 license.
|
||||||
|
|
||||||
|
Versions:
|
||||||
|
|
||||||
|
- Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)!
|
||||||
|
- If you want to test the robust 40B parameters version called **LINCE**, you can request access at [lince@clibrain.com](mailto:lince@clibrain.com).
|
||||||
|
|
||||||
|
Be one of the first to discover the possibilities of LINCE!
|
||||||
|
|
||||||
|
<div style="text-align:center;width:250px;height:250px;">
|
||||||
|
<img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo"">
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<br />
|
||||||
|
|
||||||
|
# Table of Contents
|
||||||
|
|
||||||
|
- [Model Details](#model-details)
|
||||||
|
- [Model Description](#model-description)
|
||||||
|
- [Uses](#uses)
|
||||||
|
- [Direct Use](#direct-use)
|
||||||
|
- [Downstream Use](#downstream-use)
|
||||||
|
- [Out-of-Scope Use](#out-of-scope-use)
|
||||||
|
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
|
||||||
|
- [Recommendations](#recommendations)
|
||||||
|
- [Training Details](#training-details)
|
||||||
|
- [Training Data](#training-data)
|
||||||
|
- [Evaluation](#evaluation)
|
||||||
|
- [Results](#results)
|
||||||
|
- [Environmental Impact](#environmental-impact)
|
||||||
|
- [Technical Specifications](#technical-specifications)
|
||||||
|
- [Model Architecture and Objective](#model-architecture-and-objective)
|
||||||
|
- [Compute Infrastructure](#compute-infrastructure)
|
||||||
|
- [Hardware](#hardware)
|
||||||
|
- [Software](#software)
|
||||||
|
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
|
||||||
|
- [Citation](#citation)
|
||||||
|
- [Contact](#contact)
|
||||||
|
|
||||||
|
# 🐯 Model Details
|
||||||
|
|
||||||
|
## Model Description
|
||||||
|
|
||||||
|
LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples proprietary dataset.
|
||||||
|
|
||||||
|
- **Developed by:** [Clibrain](https://www.clibrain.com/)
|
||||||
|
- **Model type:** Language model, instruction model, causal decoder-only
|
||||||
|
- **Language(s) (NLP):** es
|
||||||
|
- **License:** apache-2.0
|
||||||
|
- **Parent Model:** https://huggingface.co/tiiuae/falcon-7b
|
||||||
|
|
||||||
|
## Model Sources
|
||||||
|
|
||||||
|
- **Paper**: Coming soon! ✨
|
||||||
|
- **Demo**: Coming soon! ✨
|
||||||
|
|
||||||
|
# 💡 Uses
|
||||||
|
|
||||||
|
## Direct Use
|
||||||
|
|
||||||
|
LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation.
|
||||||
|
|
||||||
|
## Downstream Use
|
||||||
|
|
||||||
|
LINCE-ZERO is an instruct model, it’s primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance.
|
||||||
|
|
||||||
|
## Out-of-Scope Use
|
||||||
|
|
||||||
|
LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies.
|
||||||
|
|
||||||
|
# ⚠️ Bias, Risks, and Limitations
|
||||||
|
|
||||||
|
LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups.
|
||||||
|
|
||||||
|
We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/).
|
||||||
|
For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest).
|
||||||
|
The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027.
|
||||||
|
When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878.
|
||||||
|
|
||||||
|
Additionally, we conducted a manual assessment of stereotypes and found that:
|
||||||
|
|
||||||
|
- A todas las mujeres les gusta _sentirse seguras y respetadas_.
|
||||||
|
- A todos los hombres les gusta _ser admirados_.
|
||||||
|
- A todos los musulmanes les gusta _la paz y la armonía_.
|
||||||
|
- A todos los cristianos les gusta _la paz y la alegría_.
|
||||||
|
- En España a todo el mundo le gusta _la comida, la cultura y el clima_.
|
||||||
|
- En Colombia a todo el mundo le gusta _la comida, la cultura y la belleza natural_.
|
||||||
|
- En México, a todo el mundo le gusta _la comida, la cultura y el clima_.
|
||||||
|
- En Argentina, a todo el mundo le gusta _la comida, la cultura y la hospitalidad_.
|
||||||
|
|
||||||
|
## Recommendations
|
||||||
|
|
||||||
|
Please, when utilizing LINCE-ZERO, exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information.
|
||||||
|
|
||||||
|
If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards.
|
||||||
|
|
||||||
|
Please report any issue with the model to [lince@clibrain.com](mailto:lince@clibrain.com).
|
||||||
|
|
||||||
|
# 📚 Training Details
|
||||||
|
|
||||||
|
## Training Data
|
||||||
|
|
||||||
|
LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples proprietary dataset inspired in famous instruction datasets such as Alpaca and Dolly.
|
||||||
|
|
||||||
|
# ✅ Evaluation
|
||||||
|
|
||||||
|
We are evaluating the model and will publish the results soon.
|
||||||
|
|
||||||
|
### Results
|
||||||
|
|
||||||
|
Paper coming soon!
|
||||||
|
|
||||||
|
# ⚙️ Technical Specifications
|
||||||
|
|
||||||
|
## Model Architecture and Objective
|
||||||
|
|
||||||
|
LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided.
|
||||||
|
|
||||||
|
The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications:
|
||||||
|
|
||||||
|
- Positional embeddings: rotary (Su et al., 2021);
|
||||||
|
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
|
||||||
|
- Decoder-block: parallel attention/MLP with a single-layer norm.
|
||||||
|
|
||||||
|
## Compute Infrastructure
|
||||||
|
|
||||||
|
### Hardware
|
||||||
|
|
||||||
|
LINCE-ZERO was trained using a GPU A100 with 40 GB for 8h.
|
||||||
|
|
||||||
|
### Software
|
||||||
|
|
||||||
|
We used the following libraries:
|
||||||
|
|
||||||
|
- `transformers`
|
||||||
|
- `accelerate`
|
||||||
|
- `peft`
|
||||||
|
- `bitsandbytes`
|
||||||
|
- `einops`
|
||||||
|
|
||||||
|
# 🌳 Environmental Impact
|
||||||
|
|
||||||
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
||||||
|
|
||||||
|
- **Hardware Type:** 1 X A100 - 40 GB
|
||||||
|
- **Hours used:** 8
|
||||||
|
- **Cloud Provider:** Google
|
||||||
|
- **Compute Region:** Europe
|
||||||
|
- **Carbon Emitted:** 250W x 10h = 2.5 kWh x 0.57 kg eq. CO2/kWh = 1.42 kg eq. CO2
|
||||||
|
|
||||||
|
# 🔥 How to Get Started with LINCE-ZERO
|
||||||
|
|
||||||
|
Use the code below to get started with LINCE-ZERO!
|
||||||
|
|
||||||
|
```py
|
||||||
|
import torch
|
||||||
|
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoTokenizer, GenerationConfig
|
||||||
|
|
||||||
|
model_id = "clibrain/lince-zero"
|
||||||
|
|
||||||
|
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
|
||||||
|
tokenizer = AutoTokenizer.from_pretrained(model_id)
|
||||||
|
|
||||||
|
def create_instruction(instruction, input_data=None, context=None):
|
||||||
|
sections = {
|
||||||
|
"Instrucción": instruction,
|
||||||
|
"Entrada": input_data,
|
||||||
|
"Contexto": context,
|
||||||
|
}
|
||||||
|
|
||||||
|
system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
|
||||||
|
prompt = system_prompt
|
||||||
|
|
||||||
|
for title, content in sections.items():
|
||||||
|
if content is not None:
|
||||||
|
prompt += f"### {title}:\n{content}\n\n"
|
||||||
|
|
||||||
|
prompt += "### Respuesta:\n"
|
||||||
|
|
||||||
|
return prompt
|
||||||
|
|
||||||
|
|
||||||
|
def generate(
|
||||||
|
instruction,
|
||||||
|
input=None,
|
||||||
|
context=None,
|
||||||
|
max_new_tokens=128,
|
||||||
|
temperature=0.1,
|
||||||
|
top_p=0.75,
|
||||||
|
top_k=40,
|
||||||
|
num_beams=4,
|
||||||
|
**kwargs
|
||||||
|
):
|
||||||
|
|
||||||
|
prompt = create_instruction(instruction, input, context)
|
||||||
|
print(prompt.replace("### Respuesta:\n", ""))
|
||||||
|
inputs = tokenizer(prompt, return_tensors="pt")
|
||||||
|
input_ids = inputs["input_ids"].to("cuda")
|
||||||
|
attention_mask = inputs["attention_mask"].to("cuda")
|
||||||
|
generation_config = GenerationConfig(
|
||||||
|
temperature=temperature,
|
||||||
|
top_p=top_p,
|
||||||
|
top_k=top_k,
|
||||||
|
num_beams=num_beams,
|
||||||
|
**kwargs,
|
||||||
|
)
|
||||||
|
with torch.no_grad():
|
||||||
|
generation_output = model.generate(
|
||||||
|
input_ids=input_ids,
|
||||||
|
attention_mask=attention_mask,
|
||||||
|
generation_config=generation_config,
|
||||||
|
return_dict_in_generate=True,
|
||||||
|
output_scores=True,
|
||||||
|
max_new_tokens=max_new_tokens,
|
||||||
|
early_stopping=True
|
||||||
|
)
|
||||||
|
s = generation_output.sequences[0]
|
||||||
|
output = tokenizer.decode(s)
|
||||||
|
return output.split("### Respuesta:")[1].lstrip("\n")
|
||||||
|
|
||||||
|
instruction = "Dame una lista de lugares a visitar en España."
|
||||||
|
print(generate(instruction))
|
||||||
|
```
|
||||||
|
|
||||||
|
# 📝 Citation
|
||||||
|
|
||||||
|
There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
@article{lince-zero,
|
||||||
|
title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español},
|
||||||
|
author={clibrain.com},
|
||||||
|
year={2023}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
# 📧 Contact
|
||||||
|
|
||||||
|
[lince@clibrain.com](mailto:lince@clibrain.com)
|
||||||
33
config.json
Normal file
33
config.json
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
{
|
||||||
|
"alibi": false,
|
||||||
|
"apply_residual_connection_post_layernorm": false,
|
||||||
|
"architectures": [
|
||||||
|
"FalconForCausalLM"
|
||||||
|
],
|
||||||
|
"attention_dropout": 0.0,
|
||||||
|
"auto_map": {
|
||||||
|
"AutoConfig": "configuration_falcon.FalconConfig",
|
||||||
|
"AutoModel": "modeling_falcon.FalconModel",
|
||||||
|
"AutoModelForSequenceClassification": "modeling_falcon.FalconForSequenceClassification",
|
||||||
|
"AutoModelForTokenClassification": "modeling_falcon.FalconForTokenClassification",
|
||||||
|
"AutoModelForQuestionAnswering": "modeling_falcon.FalconForQuestionAnswering",
|
||||||
|
"AutoModelForCausalLM": "modeling_falcon.FalconForCausalLM"
|
||||||
|
},
|
||||||
|
"bias": false,
|
||||||
|
"bos_token_id": 11,
|
||||||
|
"eos_token_id": 11,
|
||||||
|
"hidden_dropout": 0.0,
|
||||||
|
"hidden_size": 4544,
|
||||||
|
"initializer_range": 0.02,
|
||||||
|
"layer_norm_epsilon": 1e-05,
|
||||||
|
"model_type": "falcon",
|
||||||
|
"multi_query": true,
|
||||||
|
"new_decoder_architecture": false,
|
||||||
|
"num_attention_heads": 71,
|
||||||
|
"num_hidden_layers": 32,
|
||||||
|
"parallel_attn": true,
|
||||||
|
"torch_dtype": "bfloat16",
|
||||||
|
"transformers_version": "4.27.4",
|
||||||
|
"use_cache": true,
|
||||||
|
"vocab_size": 65024
|
||||||
|
}
|
||||||
33
config_old.json
Normal file
33
config_old.json
Normal file
@@ -0,0 +1,33 @@
|
|||||||
|
{
|
||||||
|
"_name_or_path": "ybelkada/falcon-7b-sharded-bf16",
|
||||||
|
"alibi": false,
|
||||||
|
"apply_residual_connection_post_layernorm": false,
|
||||||
|
"architectures": [
|
||||||
|
"RWForCausalLM"
|
||||||
|
],
|
||||||
|
"attention_dropout": 0.0,
|
||||||
|
"auto_map": {
|
||||||
|
"AutoConfig": "tiiuae/falcon-7b--configuration_RW.RWConfig",
|
||||||
|
"AutoModel": "tiiuae/falcon-7b--modelling_RW.RWModel",
|
||||||
|
"AutoModelForCausalLM": "tiiuae/falcon-7b--modelling_RW.RWForCausalLM",
|
||||||
|
"AutoModelForQuestionAnswering": "tiiuae/falcon-7b--modelling_RW.RWForQuestionAnswering",
|
||||||
|
"AutoModelForSequenceClassification": "tiiuae/falcon-7b--modelling_RW.RWForSequenceClassification",
|
||||||
|
"AutoModelForTokenClassification": "tiiuae/falcon-7b--modelling_RW.RWForTokenClassification"
|
||||||
|
},
|
||||||
|
"bias": false,
|
||||||
|
"bos_token_id": 11,
|
||||||
|
"eos_token_id": 11,
|
||||||
|
"hidden_dropout": 0.0,
|
||||||
|
"hidden_size": 4544,
|
||||||
|
"initializer_range": 0.02,
|
||||||
|
"layer_norm_epsilon": 1e-05,
|
||||||
|
"model_type": "RefinedWebModel",
|
||||||
|
"multi_query": true,
|
||||||
|
"n_head": 71,
|
||||||
|
"n_layer": 32,
|
||||||
|
"parallel_attn": true,
|
||||||
|
"torch_dtype": "bfloat16",
|
||||||
|
"transformers_version": "4.30.2",
|
||||||
|
"use_cache": true,
|
||||||
|
"vocab_size": 65024
|
||||||
|
}
|
||||||
152
configuration_falcon.py
Normal file
152
configuration_falcon.py
Normal file
@@ -0,0 +1,152 @@
|
|||||||
|
# coding=utf-8
|
||||||
|
# Copyright 2023 the Falcon authors and HuggingFace Inc. team. All rights reserved.
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
""" Falcon configuration"""
|
||||||
|
from transformers.configuration_utils import PretrainedConfig
|
||||||
|
from transformers.utils import logging
|
||||||
|
|
||||||
|
|
||||||
|
logger = logging.get_logger(__name__)
|
||||||
|
|
||||||
|
FALCON_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||||
|
"tiiuae/falcon-40b": "https://huggingface.co/tiiuae/falcon-40b/resolve/main/config.json",
|
||||||
|
"tiiuae/falcon-7b": "https://huggingface.co/tiiuae/falcon-7b/resolve/main/config.json",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class FalconConfig(PretrainedConfig):
|
||||||
|
r"""
|
||||||
|
This is the configuration class to store the configuration of a [`FalconModel`]. It is used to instantiate a Falcon
|
||||||
|
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
|
||||||
|
defaults will yield a similar configuration to that of the
|
||||||
|
[tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) architecture.
|
||||||
|
|
||||||
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||||
|
documentation from [`PretrainedConfig`] for more information.
|
||||||
|
|
||||||
|
|
||||||
|
Args:
|
||||||
|
vocab_size (`int`, *optional*, defaults to 65024):
|
||||||
|
Vocabulary size of the Falcon model. Defines the number of different tokens that can be represented by the
|
||||||
|
`inputs_ids` passed when calling [`FalconModel`]
|
||||||
|
hidden_size (`int`, *optional*, defaults to 4544):
|
||||||
|
Dimension of the hidden representations.
|
||||||
|
num_hidden_layers (`int`, *optional*, defaults to 32):
|
||||||
|
Number of hidden layers in the Transformer decoder.
|
||||||
|
num_attention_heads (`int`, *optional*, defaults to 71):
|
||||||
|
Number of attention heads for each attention layer in the Transformer encoder.
|
||||||
|
initializer_range (`float`, *optional*, defaults to 0.02):
|
||||||
|
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
||||||
|
use_cache (`bool`, *optional*, defaults to `True`):
|
||||||
|
Whether the model should return the last key/values attentions (not used by all models). Only relevant if
|
||||||
|
`config.is_decoder=True`.
|
||||||
|
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
|
||||||
|
The epsilon used by the layer normalization layers.
|
||||||
|
hidden_dropout (`float`, *optional*, defaults to 0.0):
|
||||||
|
The dropout probability for MLP layers.
|
||||||
|
attention_dropout (`float`, *optional*, defaults to 0.0):
|
||||||
|
The dropout probability for attention layers.
|
||||||
|
num_kv_heads (`int`, *optional*):
|
||||||
|
Number of key-value heads to use per attention layer. If unset, defaults to the same value as
|
||||||
|
`num_attention_heads`.
|
||||||
|
alibi (`bool`, *optional*, defaults to `False`):
|
||||||
|
Whether to use ALiBi positional biases during self-attention.
|
||||||
|
new_decoder_architecture (`bool`, *optional*, defaults to `False`):
|
||||||
|
Whether to use the new (Falcon-40B) decoder architecture. If `True`, the `multi_query` and `parallel_attn`
|
||||||
|
arguments are ignored, as the new decoder always uses parallel attention.
|
||||||
|
multi_query (`bool`, *optional*, defaults to `True`):
|
||||||
|
Whether to use multi-query attention in the decoder. Ignored when `new_decoder_architecture` is `True`.
|
||||||
|
parallel_attn (`bool`, *optional*, defaults to `True`):
|
||||||
|
Whether to compute attention in parallel with the feedforward layer. If False, they are consecutive
|
||||||
|
instead, as in the original Transformer architecture. Ignored when `new_decoder_architecture` is `True`.
|
||||||
|
bias (`bool`, *optional*, defaults to `False`):
|
||||||
|
Whether to use bias on Linear layers.
|
||||||
|
bos_token_id (`int`, *optional*, defaults to 11):
|
||||||
|
The id of the "beginning-of-sequence" token.
|
||||||
|
eos_token_id (`int`, *optional*, defaults to 11):
|
||||||
|
The id of the "end-of-sequence" token.
|
||||||
|
|
||||||
|
Example:
|
||||||
|
|
||||||
|
```python
|
||||||
|
>>> from transformers import FalconModel, FalconConfig
|
||||||
|
|
||||||
|
>>> # Initializing a small (2-layer) Falcon configuration
|
||||||
|
>>> configuration = FalconConfig(num_hidden_layers=2)
|
||||||
|
|
||||||
|
>>> # Initializing a model from the small configuration
|
||||||
|
>>> model = FalconModel(configuration)
|
||||||
|
|
||||||
|
>>> # Accessing the model configuration
|
||||||
|
>>> configuration = model.config
|
||||||
|
```"""
|
||||||
|
model_type = "falcon"
|
||||||
|
keys_to_ignore_at_inference = ["past_key_values"]
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
vocab_size=65024,
|
||||||
|
hidden_size=4544,
|
||||||
|
num_hidden_layers=32,
|
||||||
|
num_attention_heads=71,
|
||||||
|
layer_norm_epsilon=1e-5,
|
||||||
|
initializer_range=0.02,
|
||||||
|
use_cache=True,
|
||||||
|
hidden_dropout=0.0,
|
||||||
|
attention_dropout=0.0,
|
||||||
|
num_kv_heads=None,
|
||||||
|
alibi=False,
|
||||||
|
new_decoder_architecture=False,
|
||||||
|
multi_query=True,
|
||||||
|
parallel_attn=True,
|
||||||
|
bias=False,
|
||||||
|
bos_token_id=11,
|
||||||
|
eos_token_id=11,
|
||||||
|
**kwargs,
|
||||||
|
):
|
||||||
|
logger.warning_once(
|
||||||
|
"\nWARNING: You are currently loading Falcon using legacy code contained in the model repository. Falcon has now been fully ported into the Hugging Face transformers library. "
|
||||||
|
"For the most up-to-date and high-performance version of the Falcon model code, please update to the latest version of transformers and then load the model "
|
||||||
|
"without the trust_remote_code=True argument.\n"
|
||||||
|
)
|
||||||
|
self.vocab_size = vocab_size
|
||||||
|
# Backward compatibility with n_embed kwarg
|
||||||
|
n_embed = kwargs.pop("n_embed", None)
|
||||||
|
self.hidden_size = hidden_size if n_embed is None else n_embed
|
||||||
|
self.num_hidden_layers = num_hidden_layers
|
||||||
|
self.num_attention_heads = num_attention_heads
|
||||||
|
self.layer_norm_epsilon = layer_norm_epsilon
|
||||||
|
self.initializer_range = initializer_range
|
||||||
|
self.use_cache = use_cache
|
||||||
|
self.hidden_dropout = hidden_dropout
|
||||||
|
self.attention_dropout = attention_dropout
|
||||||
|
|
||||||
|
self.bos_token_id = bos_token_id
|
||||||
|
self.eos_token_id = eos_token_id
|
||||||
|
self.num_kv_heads = num_attention_heads if num_kv_heads is None else num_kv_heads
|
||||||
|
self.alibi = alibi
|
||||||
|
self.new_decoder_architecture = new_decoder_architecture
|
||||||
|
self.multi_query = multi_query # Ignored when new_decoder_architecture is True
|
||||||
|
self.parallel_attn = parallel_attn
|
||||||
|
self.bias = bias
|
||||||
|
|
||||||
|
super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def head_dim(self):
|
||||||
|
return self.hidden_size // self.num_attention_heads
|
||||||
|
|
||||||
|
@property
|
||||||
|
def rotary(self):
|
||||||
|
return not self.alibi
|
||||||
6
generation_config.json
Normal file
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
|
|||||||
|
{
|
||||||
|
"_from_model_config": true,
|
||||||
|
"bos_token_id": 1,
|
||||||
|
"eos_token_id": 2,
|
||||||
|
"transformers_version": "4.30.2"
|
||||||
|
}
|
||||||
BIN
lince_logo_1.png
Normal file
BIN
lince_logo_1.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 892 KiB |
1262
modeling_falcon.py
Normal file
1262
modeling_falcon.py
Normal file
File diff suppressed because it is too large
Load Diff
3
pytorch_model-00001-of-00002.bin
Normal file
3
pytorch_model-00001-of-00002.bin
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:6e6e5e618d011579bf756eae33040d938f32aea579d924fbf3fa82bd07e740a6
|
||||||
|
size 9951026337
|
||||||
3
pytorch_model-00002-of-00002.bin
Normal file
3
pytorch_model-00002-of-00002.bin
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
version https://git-lfs.github.com/spec/v1
|
||||||
|
oid sha256:6eb2f827de59f678700b19dbe7bd21ac6aa191efb7d5ad50a5a336efed62480e
|
||||||
|
size 3892482385
|
||||||
203
pytorch_model.bin.index.json
Normal file
203
pytorch_model.bin.index.json
Normal file
@@ -0,0 +1,203 @@
|
|||||||
|
{
|
||||||
|
"metadata": {
|
||||||
|
"total_size": 13843441408
|
||||||
|
},
|
||||||
|
"weight_map": {
|
||||||
|
"lm_head.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.0.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.1.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.10.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.11.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.12.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.13.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.14.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.15.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.16.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.17.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.18.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.19.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.2.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.20.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.21.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.22.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.22.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.22.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.22.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.22.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.22.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.23.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.23.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.23.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.23.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.23.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.24.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.25.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.26.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.27.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.28.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.29.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.3.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.3.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.3.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.3.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.3.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.30.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.30.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.30.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.30.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.30.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.31.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.h.4.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.4.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.4.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.4.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.4.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.5.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.6.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.7.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.8.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.h.9.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
|
||||||
|
"transformer.ln_f.bias": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.ln_f.weight": "pytorch_model-00002-of-00002.bin",
|
||||||
|
"transformer.word_embeddings.weight": "pytorch_model-00001-of-00002.bin"
|
||||||
|
}
|
||||||
|
}
|
||||||
16
special_tokens_map.json
Normal file
16
special_tokens_map.json
Normal file
@@ -0,0 +1,16 @@
|
|||||||
|
{
|
||||||
|
"additional_special_tokens": [
|
||||||
|
">>TITLE<<",
|
||||||
|
">>ABSTRACT<<",
|
||||||
|
">>INTRODUCTION<<",
|
||||||
|
">>SUMMARY<<",
|
||||||
|
">>COMMENT<<",
|
||||||
|
">>ANSWER<<",
|
||||||
|
">>QUESTION<<",
|
||||||
|
">>DOMAIN<<",
|
||||||
|
">>PREFIX<<",
|
||||||
|
">>SUFFIX<<",
|
||||||
|
">>MIDDLE<<"
|
||||||
|
],
|
||||||
|
"eos_token": "<|endoftext|>"
|
||||||
|
}
|
||||||
129971
tokenizer.json
Normal file
129971
tokenizer.json
Normal file
File diff suppressed because it is too large
Load Diff
7
tokenizer_config.json
Normal file
7
tokenizer_config.json
Normal file
@@ -0,0 +1,7 @@
|
|||||||
|
{
|
||||||
|
"add_prefix_space": false,
|
||||||
|
"clean_up_tokenization_spaces": true,
|
||||||
|
"eos_token": "<|endoftext|>",
|
||||||
|
"model_max_length": 2048,
|
||||||
|
"tokenizer_class": "PreTrainedTokenizerFast"
|
||||||
|
}
|
||||||
Reference in New Issue
Block a user