Initialize project; model provided by the ModelHub XC community
Model: IntervitensInc/gemma-2-9b-chatml Source: Original Platform
45
.gitattributes
vendored
Normal file
@@ -0,0 +1,45 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
transformers/transformers-4.42.0.dev0-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
model-00001-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00007-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
model-00008-of-00008.safetensors filter=lfs diff=lfs merge=lfs -text
471
README.md
Normal file
@@ -0,0 +1,471 @@
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

Version with added ChatML tokens for finetuning.
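
Since the added tokens are what distinguish this checkpoint, here is a minimal sketch of how a ChatML-formatted prompt is typically produced, assuming this repo's tokenizer ships a ChatML-style chat template with `<|im_start|>`/`<|im_end|>` markers (the tokenizer files, not this README, are the authoritative source for the template):

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer in this repo defines a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("IntervitensInc/gemma-2-9b-chatml")

messages = [{"role": "user", "content": "Write me a poem about Machine Learning."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# With a standard ChatML template this prints something like:
# <|im_start|>user
# Write me a poem about Machine Learning.<|im_end|>
# <|im_start|>assistant
```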

# Gemma 2 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    device_map="auto",
    torch_dtype=torch.bfloat16
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
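
Note that `generate` falls back to a very small output budget by default (`max_length=20` in the default `GenerationConfig`, and this repo's `generation_config.json` does not override it), so for anything beyond a smoke test you will likely want to pass an explicit token budget. A minimal tweak using standard `transformers` arguments:

```python
# Generate up to 128 new tokens instead of the tiny default budget.
outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```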

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision.

You can also use `float32` if you skip the dtype, but no precision increase will occur (the model weights will just be upcast to `float32`). See examples below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
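
The plain `load_in_4bit=True` call above uses the library defaults; `BitsAndBytesConfig` also exposes the NF4 settings commonly used for 4-bit inference. A hedged sketch with standard `bitsandbytes` parameters (a common configuration, not something this model card prescribes):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 quantization with bfloat16 compute and nested quantization,
# a frequent choice for 4-bit inference.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    quantization_config=quantization_config,
)
```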

#### Other optimizations

* _Flash Attention 2_

First make sure to install `flash-attn` in your environment: `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```
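
For completeness, the same change as a self-contained snippet, assuming a CUDA GPU and a `flash-attn` build compatible with your environment:

```python
# pip install accelerate flash-attn
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/gemma-2-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)  # move the model to the first CUDA device, as in the diff above

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(0)
outputs = model.generate(**input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```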

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```


## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.
* These advantages are aligned with
  [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]: "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."


## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark                      | Metric        | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu]                   | 5-shot, top-1 | 71.3        | 75.2         |
| [HellaSwag][hellaswag]         | 10-shot       | 81.9        | 86.4         |
| [PIQA][piqa]                   | 0-shot        | 81.7        | 83.2         |
| [SocialIQA][socialiqa]         | 0-shot        | 53.4        | 53.7         |
| [BoolQ][boolq]                 | 0-shot        | 84.2        | 84.8         |
| [WinoGrande][winogrande]       | partial score | 80.6        | 83.7         |
| [ARC-e][arc]                   | 0-shot        | 88.0        | 88.6         |
| [ARC-c][arc]                   | 25-shot       | 68.4        | 71.4         |
| [TriviaQA][triviaqa]           | 5-shot        | 76.6        | 83.7         |
| [Natural Questions][naturalq]  | 5-shot        | 29.2        | 34.5         |
| [HumanEval][humaneval]         | pass@1        | 40.2        | 51.8         |
| [MBPP][mbpp]                   | 3-shot        | 52.4        | 62.6         |
| [GSM8K][gsm8k]                 | 5-shot, maj@1 | 68.6        | 74.0         |
| [MATH][math]                   | 4-shot        | 36.6        | 42.3         |
| [AGIEval][agieval]             | 3-5-shot      | 52.8        | 55.1         |
| [BIG-Bench][big-bench]         | 3-shot, CoT   | 68.2        | 74.9         |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety
  policies including child sexual abuse and exploitation, harassment, violence
  and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
  datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
  the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
  biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale
harms. On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, WinoBias, RealToxicity, and TruthfulQA
are shown here.

#### Gemma 2.0

| Benchmark                 | Metric        | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox]   | average       | 8.25          | 8.84           |
| [CrowS-Pairs][crows]      | top-1         | 37.47         | 36.67          |
| [BBQ Ambig][bbq]          | 1-shot, top-1 | 88.58         | 85.99          |
| [BBQ Disambig][bbq]       | top-1         | 82.67         | 86.94          |
| [Winogender][winogender]  | top-1         | 79.17         | 77.22          |
| [TruthfulQA][truthfulqa]  |               | 50.27         | 51.60          |
| [Winobias 1_2][winobias]  |               | 78.09         | 81.94          |
| [Winobias 2_2][winobias]  |               | 95.32         | 97.22          |
| [Toxigen][toxigen]        |               | 39.30         | 38.42          |


## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate against malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open
model alternatives.

[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
33
config.json
Normal file
@@ -0,0 +1,33 @@
{
  "architectures": [
    "Gemma2ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_logit_softcapping": 50.0,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "eos_token_id": 8,
  "final_logit_softcapping": 30.0,
  "head_dim": 256,
  "hidden_act": "gelu_pytorch_tanh",
  "hidden_activation": "gelu_pytorch_tanh",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "gemma2",
  "num_attention_heads": 16,
  "num_hidden_layers": 42,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "query_pre_attn_scalar": 256,
  "rms_norm_eps": 1e-06,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "sliding_window_size": 4096,
  "torch_dtype": "float32",
  "transformers_version": "4.42.0.dev0",
  "use_cache": true,
  "vocab_size": 256000
}
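
As a sanity check on this config: with tied input/output embeddings, 42 layers of attention (16 query heads, 8 KV heads, head_dim 256) and gated MLPs (intermediate size 14336) imply about 9.24B parameters, and at 4 bytes each (`torch_dtype: float32`) that is exactly the 36,966,823,936 bytes reported as `total_size` in `model.safetensors.index.json` below. A small illustrative sketch of that arithmetic:

```python
# Back-of-the-envelope parameter count from the config above (Gemma 2 9B).
hidden, layers, vocab = 3584, 42, 256000
heads, kv_heads, head_dim = 16, 8, 256
inter = 14336

attn = hidden * heads * head_dim * 2         # q_proj + o_proj
attn += hidden * kv_heads * head_dim * 2     # k_proj + v_proj
mlp = hidden * inter * 3                     # gate_proj, up_proj, down_proj
norms = 4 * hidden                           # four RMSNorm weights per layer

embed = vocab * hidden                       # tied with lm_head, counted once
total = layers * (attn + mlp + norms) + embed + hidden  # + final norm

print(total)      # 9241705984 parameters
print(total * 4)  # 36966823936 bytes in float32 == index total_size
```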
1
configuration.json
Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
8
generation_config.json
Normal file
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "eos_token_id": 8,
  "pad_token_id": 0,
  "transformers_version": "4.42.0.dev0"
}
3
model-00001-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72e486c145f30afdc43a507764a146fd545d26f166e60232c25d01a1510738c0
size 4844480456
3
model-00002-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73b2e0b94e18b25a2f7a3abda70ab350053be6c615599cb3772f27efb2ae21eb
size 4962213464
3
model-00003-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95310f8181c07b98a18facfbbbdaa7bcf28aafaddb9c33cd9dfa008706206970
size 4962271312
3
model-00004-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4aa5f7d5510456fba5eb22421432a65487b1d0864fbd59ca6f8dfac65184d32
size 4932853744
3
model-00005-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96336ba3f200b986fa952df49998420a9ca1185853449b10a26eccc9bc7fc581
size 4962213528
3
model-00006-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25d2f65b06a3ffc288fc1653554fa33b9d913f47f9b797be7785db19d3ed1f08
size 4962213528
3
model-00007-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0804ef42bdca870b9f5da75fe222448906bfccfddbebbbf61a5361b2f7a46b3
size 4962271328
3
model-00008-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18e1cb9107e250c97ac91ad62bd950449f75de4a427092319ff368164d4617b2
size 2378360680
471
model.safetensors.index.json
Normal file
@@ -0,0 +1,471 @@
{
  "metadata": {
    "total_size": 36966823936
  },
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.pre_feedforward_layernorm.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00008.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.post_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.pre_feedforward_layernorm.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00008.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.post_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.pre_feedforward_layernorm.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00005-of-00008.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.post_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.pre_feedforward_layernorm.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.32.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.32.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00006-of-00008.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
    "model.layers.34.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.34.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.34.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.34.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.34.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.34.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.input_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.35.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.input_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.36.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.input_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.37.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.input_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.mlp.down_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.mlp.gate_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.mlp.up_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.post_attention_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.post_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.pre_feedforward_layernorm.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.self_attn.k_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.self_attn.o_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.self_attn.q_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.38.self_attn.v_proj.weight": "model-00007-of-00008.safetensors",
|
||||||
|
"model.layers.39.input_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.post_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.pre_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.39.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.4.input_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.40.input_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.post_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.pre_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.40.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.input_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.mlp.down_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.mlp.gate_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.mlp.up_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.post_attention_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.post_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.pre_feedforward_layernorm.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.self_attn.k_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.self_attn.o_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.self_attn.q_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.41.self_attn.v_proj.weight": "model-00008-of-00008.safetensors",
|
||||||
|
"model.layers.5.input_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.input_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.post_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.pre_feedforward_layernorm.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.input_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.7.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.7.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.7.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00008.safetensors",
|
||||||
|
"model.layers.8.input_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.8.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.input_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.post_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.pre_feedforward_layernorm.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00008.safetensors",
|
||||||
|
"model.norm.weight": "model-00008-of-00008.safetensors"
|
||||||
|
}
|
||||||
|
}
|
||||||
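Taken together, the `weight_map` above resolves every parameter name to one of the eight shards, so a loader never has to scan all eight files. As a minimal sketch (not part of this commit) of how such an index is typically consumed with the `safetensors` library — the file names come from this repo, everything else is illustrative:

```python
# Look up which shard holds a tensor via the index, then read only that shard.
import json

from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.norm.weight"
shard = index["weight_map"][name]  # per the map above: "model-00008-of-00008.safetensors"

with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)  # reads just this tensor, not the whole shard

print(name, tuple(tensor.shape))
```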
34
special_tokens_map.json
Normal file
@@ -0,0 +1,34 @@
{
  "additional_special_tokens": [
    "<start_of_turn>",
    "<end_of_turn>"
  ],
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
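This map switches the tokenizer to ChatML conventions: `<|im_end|>` serves as the EOS token, while Gemma's native `<start_of_turn>`/`<end_of_turn>` markers stay registered as additional special tokens. A quick, hedged sanity check — assuming `transformers` is installed and the repo id from this commit resolves (otherwise point it at a local checkout):

```python
# Confirm the EOS token declared in special_tokens_map.json is what the
# tokenizer actually reports after loading.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("IntervitensInc/gemma-2-9b-chatml")

print(tok.eos_token)                  # expected: <|im_end|>
print(tok.additional_special_tokens)  # expected to include <start_of_turn>, <end_of_turn>
```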
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66c28a230d9e5c840e83aeedfcb553006011db12dbc58aa20cc7e515bc1309cc
size 17518532
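What is committed here is a Git LFS pointer, not the tokenizer itself: `oid` is the SHA-256 of the real blob and `size` its byte length. A minimal verification sketch, assuming the actual file has already been fetched (e.g. via `git lfs pull`); the values are taken from the pointer above:

```python
# Verify a fetched LFS file against its pointer: size first, then SHA-256.
import hashlib
import os

EXPECTED_OID = "66c28a230d9e5c840e83aeedfcb553006011db12dbc58aa20cc7e515bc1309cc"
EXPECTED_SIZE = 17518532

path = "tokenizer.json"
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_OID, "hash mismatch"

print("tokenizer.json matches its LFS pointer")
```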
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01f545c6099874ce0c5cf6e4ccecd5f2bce689852b576ddf9d49134081c9cb98
size 4241007
1757
tokenizer_config.json
Normal file
File diff suppressed because it is too large