Initialize project; model provided by the ModelHub XC community
Model: desh2806/shifted_persona_0 Source: Original Platform
41
.gitattributes
vendored
Normal file
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
checkpoint-1500/tokenizer.json filter=lfs diff=lfs merge=lfs -text
checkpoint-2250/tokenizer.json filter=lfs diff=lfs merge=lfs -text
checkpoint-3000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
checkpoint-3750/tokenizer.json filter=lfs diff=lfs merge=lfs -text
checkpoint-750/tokenizer.json filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
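The `.gitattributes` rules above decide which paths are stored as Git LFS pointers rather than ordinary blobs. As a rough sketch (assuming a hypothetical `is_lfs_tracked` helper, and noting that real `.gitattributes` matching uses gitignore-style semantics that `fnmatch` only approximates for the simple `*.<ext>` patterns):

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes above. Patterns like
# "saved_model/**/*" need full gitignore semantics and are omitted here.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.pt", "*.pth", "tokenizer.json"]

def is_lfs_tracked(filename: str) -> bool:
    # True when any LFS rule matches the given file name.
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("adapter_model.safetensors"))  # True
print(is_lfs_tracked("adapter_config.json"))        # False
```

This is why the checkpoint weights below appear in the diff as small pointer files instead of multi-megabyte binaries.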
57
README.md
Normal file
@@ -0,0 +1,57 @@
---
library_name: transformers
model_name: shifted_model_persona_0_v2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for shifted_model_persona_0_v2

This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

# Note: "None" below is the unfilled template placeholder for the model
# repo id; substitute the actual model name before running.
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.29.1
- Transformers: 4.57.6
- Pytorch: 2.10.0
- Datasets: 4.8.4
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
    title = {{TRL: Transformers Reinforcement Learning}},
    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license = {Apache-2.0},
    url = {https://github.com/huggingface/trl},
    year = {2020}
}
```
3
added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
209
checkpoint-1500/README.md
Normal file
@@ -0,0 +1,209 @@
---
base_model: uniform_prior
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:uniform_prior
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
### Framework versions

- PEFT 0.18.1
43
checkpoint-1500/adapter_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "uniform_prior",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "o_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
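The adapter config above records the LoRA hyperparameters used for this checkpoint: rank `r = 16`, `lora_alpha = 32`, applied to the attention projections. A minimal sketch of how to read these values and derive the effective scaling factor PEFT applies to the low-rank update (`lora_alpha / r`, since `use_rslora` is false here; the inline JSON mirrors only the relevant keys):

```python
import json

# Subset of the adapter_config.json above (illustrative copy, not the file itself).
config = json.loads("""{
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "r": 16,
  "target_modules": ["v_proj", "k_proj", "o_proj", "q_proj"],
  "use_rslora": false
}""")

# With standard LoRA (use_rslora=false), the update is scaled by alpha / r.
scaling = config["lora_alpha"] / config["r"]
print(f"rank={config['r']}, scaling={scaling}")  # rank=16, scaling=2.0
```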
3
checkpoint-1500/adapter_model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb6b0819aa0a4c3db61adae4746f7a7ebe9976be9eee758b4b9467e071bdfb70
size 5917192
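The three lines above are a Git LFS pointer file: what Git actually stores in place of the binary adapter weights. A small sketch parsing such a pointer into its fields (using the pointer text above as an inline example):

```python
# Pointer text copied from the adapter_model.safetensors entry above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:cb6b0819aa0a4c3db61adae4746f7a7ebe9976be9eee758b4b9467e071bdfb70
size 5917192
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; size is numeric.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

pointer = parse_lfs_pointer(pointer_text)
print(pointer["oid"])   # sha256:cb6b0819...
print(pointer["size"])  # 5917192 (bytes)
```

The `size` field explains the small on-page diff: the ~5.9 MB LoRA adapter lives in LFS storage, keyed by the SHA-256 `oid`.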
3
checkpoint-1500/added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
3
checkpoint-1500/optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6f9e1ba1049048f2c41bbf78a1cba240c616a8ee1e339177cdbcde522b655d2
size 11919947
3
checkpoint-1500/rng_state.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:204d4967f720cb22bb9b669aec4763d5907d9ed6f1969936f3dc43e7a283fea8
size 14645
3
checkpoint-1500/scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5ccfffb2e4aef68a5c5dcd5e9f9d26ce56a7e2e8b8cf957932f5509240c05e84
size 1465
27
checkpoint-1500/special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": "<eos>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
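One detail worth noting in the special-token map above: `pad_token` is set to the string `"<eos>"`, i.e. padding reuses the EOS token, a common setup for causal-LM fine-tuning when the tokenizer has no dedicated pad token. A quick stdlib sketch checking that (inline JSON mirrors only the relevant keys):

```python
import json

# Illustrative subset of the special_tokens_map.json above.
tokens = json.loads("""{
  "bos_token": {"content": "<bos>"},
  "eos_token": {"content": "<eos>"},
  "pad_token": "<eos>",
  "unk_token": {"content": "<unk>"}
}""")

# pad_token is a bare string here, while bos/eos/unk are token objects.
pad_is_eos = tokens["pad_token"] == tokens["eos_token"]["content"]
print(pad_is_eos)  # True
```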
3
checkpoint-1500/tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
checkpoint-1500/tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
checkpoint-1500/tokenizer_config.json
Normal file
File diff suppressed because it is too large
1534
checkpoint-1500/trainer_state.json
Normal file
File diff suppressed because it is too large
3
checkpoint-1500/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11bbe284e74298a5b6715dcfd6da1001ad5655a8cf82df3bc6cc3e49cf903165
size 6289
209
checkpoint-2250/README.md
Normal file
@@ -0,0 +1,209 @@
---
base_model: uniform_prior
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:uniform_prior
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
### Framework versions

- PEFT 0.18.1
43
checkpoint-2250/adapter_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "uniform_prior",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "o_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
3
checkpoint-2250/adapter_model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d33f959a9aab57854e5e3b9b530c6e5015845fffe256c6cdc7b3f16da01b3d6
size 5917192
3
checkpoint-2250/added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
3
checkpoint-2250/optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cbabb033a07696632334676814a8c1c795c43d5703ce1ad927e2b6f3f0af7f7
size 11919947
3
checkpoint-2250/rng_state.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:555a5d6c71dd7f0697302d00030ba0bc5cf10873d94f3e399b9b55385002dd8a
size 14645
3
checkpoint-2250/scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7546c248ae2cf40822515d58985c2e3376d31dc6a238d99a434d24ac94723637
size 1465
27
checkpoint-2250/special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": "<eos>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
checkpoint-2250/tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
checkpoint-2250/tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
checkpoint-2250/tokenizer_config.json
Normal file
File diff suppressed because it is too large
2284
checkpoint-2250/trainer_state.json
Normal file
File diff suppressed because it is too large
3
checkpoint-2250/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11bbe284e74298a5b6715dcfd6da1001ad5655a8cf82df3bc6cc3e49cf903165
size 6289
209
checkpoint-3000/README.md
Normal file
@@ -0,0 +1,209 @@
---
base_model: uniform_prior
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:uniform_prior
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
### Framework versions

- PEFT 0.18.1
43
checkpoint-3000/adapter_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "uniform_prior",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "o_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
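The adapter config above pins the LoRA hyperparameters (rank 16, alpha 32, attention projections only). A minimal standard-library sketch of how those fields can be sanity-checked; the inline JSON is a subset of the config shown above:

```python
import json

# Subset of the adapter_config.json fields shown above.
config = json.loads("""{
  "peft_type": "LORA",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["v_proj", "k_proj", "o_proj", "q_proj"],
  "task_type": "CAUSAL_LM"
}""")

# With use_rslora = false, LoRA scales the low-rank update by alpha / r.
scaling = config["lora_alpha"] / config["r"]
print(scaling)  # 2.0

# The adapter only touches the four attention projections.
modules = sorted(config["target_modules"])
print(modules)  # ['k_proj', 'o_proj', 'q_proj', 'v_proj']
```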
3
checkpoint-3000/adapter_model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83ac7cf92612436ea30df6c7c564c94da8aa1559a9ec30dc966ed168b1af4f89
size 5917192
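Three-line blocks like the one above are Git LFS pointer files, not the weights themselves; the actual binary is fetched by content hash. A small sketch of parsing that pointer format, using the pointer text copied from above:

```python
# Parse a Git LFS pointer file into its key/value fields.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:83ac7cf92612436ea30df6c7c564c94da8aa1559a9ec30dc966ed168b1af4f89
size 5917192
"""

# Each line is "key value"; oid is "algorithm:hex-digest".
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo)                 # sha256
print(int(fields["size"]))  # 5917192 (bytes, ~5.9 MB adapter)
```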
3
checkpoint-3000/added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
3
checkpoint-3000/optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e5fb2b2a03efc5571804ef3dc85d491f74add9b8b74934ab9baf26167975e65
size 11919947
3
checkpoint-3000/rng_state.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9b85a62e621d0543c40339fde04a188ce239066c4a59f48a3512f6fcb94defb
size 14645
3
checkpoint-3000/scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aaf66d403a29c8c0be703e5f9e87d5057ff1c237a4dd78b279f5ec32b1037482
size 1465
27
checkpoint-3000/special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": "<eos>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
checkpoint-3000/tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
checkpoint-3000/tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
checkpoint-3000/tokenizer_config.json
Normal file
File diff suppressed because it is too large
3034
checkpoint-3000/trainer_state.json
Normal file
File diff suppressed because it is too large
3
checkpoint-3000/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11bbe284e74298a5b6715dcfd6da1001ad5655a8cf82df3bc6cc3e49cf903165
size 6289
209
checkpoint-3750/README.md
Normal file
@@ -0,0 +1,209 @@
---
base_model: uniform_prior
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:uniform_prior
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
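The card leaves the quick-start snippet as [More Information Needed]. A minimal sketch of the usual PEFT loading pattern, assuming `transformers` and `peft` are installed; note that `uniform_prior` is only the placeholder base-model name recorded in `adapter_config.json` (substitute the real base model), and the checkpoint path is one of the directories in this commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# "uniform_prior" is the placeholder recorded in adapter_config.json;
# replace it with the actual base model before running.
base = AutoModelForCausalLM.from_pretrained("uniform_prior")
tokenizer = AutoTokenizer.from_pretrained("uniform_prior")

# Attach the LoRA adapter weights saved in a checkpoint directory.
model = PeftModel.from_pretrained(base, "checkpoint-3750")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```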
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.18.1
43
checkpoint-3750/adapter_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "uniform_prior",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "o_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
3
checkpoint-3750/adapter_model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dedc4138b00fbed43203720715e219ae770603f0c106bc7ec4b039a97b925398
size 5917192
3
checkpoint-3750/added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
3
checkpoint-3750/optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:163edb49aca7acbfaf89877f980ad922df26e5cd166d84b0c8c03ad371de0345
size 11919947
3
checkpoint-3750/rng_state.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa0680e4f67552154bc0b698c56b729bb2994af47dad6925373d7ba2398a9cbd
size 14645
3
checkpoint-3750/scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2c5185da1445c3fd6f9fb1a196112b0693198df8b7959c4e0e72cd393356e90
size 1465
27
checkpoint-3750/special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": "<eos>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
checkpoint-3750/tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
checkpoint-3750/tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
checkpoint-3750/tokenizer_config.json
Normal file
File diff suppressed because it is too large
3784
checkpoint-3750/trainer_state.json
Normal file
File diff suppressed because it is too large
3
checkpoint-3750/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11bbe284e74298a5b6715dcfd6da1001ad5655a8cf82df3bc6cc3e49cf903165
size 6289
209
checkpoint-750/README.md
Normal file
@@ -0,0 +1,209 @@
---
base_model: uniform_prior
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:uniform_prior
- lora
- sft
- transformers
- trl
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.18.1
43
checkpoint-750/adapter_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "uniform_prior",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.1",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "o_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
3
checkpoint-750/adapter_model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18af3258f246e9c80b98c0b915f7659ee32e4724ac34cdce625d165dc24caf8c
size 5917192
3
checkpoint-750/added_tokens.json
Normal file
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
3
checkpoint-750/optimizer.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:501396e39b788a9bcb921e6c7af969e6c074f4005c92a58352b619c2572358ae
size 11919947
3
checkpoint-750/rng_state.pth
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe225a4c13bdb5d39388d4ca9f0f2899b5a906b5faef70c66c82410713adcd77
size 14645
3
checkpoint-750/scheduler.pt
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:58df5e87f46732ded49bc1cf3a21b26cffae41caf82dabd88e2d39684618162b
size 1465
27
checkpoint-750/special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": "<eos>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
checkpoint-750/tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
checkpoint-750/tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
checkpoint-750/tokenizer_config.json
Normal file
File diff suppressed because it is too large
784
checkpoint-750/trainer_state.json
Normal file
@@ -0,0 +1,784 @@
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 750,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "entropy": 1.7908701181411744,
      "epoch": 0.013333333333333334,
      "grad_norm": 1.1777843236923218,
      "learning_rate": 0.00019952000000000001,
      "loss": 1.8912,
      "mean_token_accuracy": 0.5405127346515656,
      "num_tokens": 14665.0,
      "step": 10
    },
    {
      "entropy": 1.757728385925293,
      "epoch": 0.02666666666666667,
      "grad_norm": 0.9262706637382507,
      "learning_rate": 0.00019898666666666668,
      "loss": 1.7762,
      "mean_token_accuracy": 0.5599964261054993,
      "num_tokens": 29061.0,
      "step": 20
    },
    {
      "entropy": 1.705195951461792,
      "epoch": 0.04,
      "grad_norm": 0.8576209545135498,
      "learning_rate": 0.00019845333333333334,
      "loss": 1.7582,
      "mean_token_accuracy": 0.5612735390663147,
      "num_tokens": 43628.0,
      "step": 30
    },
    {
      "entropy": 1.764819598197937,
      "epoch": 0.05333333333333334,
      "grad_norm": 0.9937595725059509,
      "learning_rate": 0.00019792000000000003,
      "loss": 1.7875,
      "mean_token_accuracy": 0.5601426303386688,
      "num_tokens": 57878.0,
      "step": 40
    },
    {
      "entropy": 1.692608082294464,
      "epoch": 0.06666666666666667,
      "grad_norm": 0.8142558932304382,
      "learning_rate": 0.00019738666666666667,
      "loss": 1.7188,
      "mean_token_accuracy": 0.5711204051971436,
      "num_tokens": 72081.0,
      "step": 50
    },
    {
      "entropy": 1.713820493221283,
      "epoch": 0.08,
      "grad_norm": 0.8134881854057312,
      "learning_rate": 0.00019685333333333333,
      "loss": 1.7262,
      "mean_token_accuracy": 0.5672860145568848,
      "num_tokens": 86547.0,
      "step": 60
    },
    {
      "entropy": 1.7069548845291138,
      "epoch": 0.09333333333333334,
      "grad_norm": 0.7980002760887146,
      "learning_rate": 0.00019632000000000002,
      "loss": 1.7412,
      "mean_token_accuracy": 0.5662308156490325,
      "num_tokens": 100695.0,
      "step": 70
    },
    {
      "entropy": 1.7331032037734986,
      "epoch": 0.10666666666666667,
      "grad_norm": 0.8221873044967651,
      "learning_rate": 0.00019578666666666668,
      "loss": 1.7445,
      "mean_token_accuracy": 0.5610231578350067,
      "num_tokens": 114813.0,
      "step": 80
    },
    {
      "entropy": 1.7180450558662415,
      "epoch": 0.12,
      "grad_norm": 0.8718360662460327,
      "learning_rate": 0.00019525333333333334,
      "loss": 1.7297,
      "mean_token_accuracy": 0.5656625151634216,
      "num_tokens": 128864.0,
      "step": 90
    },
    {
      "entropy": 1.6863915205001831,
      "epoch": 0.13333333333333333,
      "grad_norm": 0.7490881085395813,
      "learning_rate": 0.00019472,
      "loss": 1.7097,
      "mean_token_accuracy": 0.5710519969463348,
      "num_tokens": 143400.0,
      "step": 100
    },
    {
      "entropy": 1.656905508041382,
      "epoch": 0.14666666666666667,
      "grad_norm": 0.7690205574035645,
      "learning_rate": 0.00019418666666666667,
      "loss": 1.6377,
      "mean_token_accuracy": 0.5826482594013214,
      "num_tokens": 157812.0,
      "step": 110
    },
    {
      "entropy": 1.70476393699646,
      "epoch": 0.16,
      "grad_norm": 0.7643902897834778,
      "learning_rate": 0.00019365333333333336,
      "loss": 1.7258,
      "mean_token_accuracy": 0.5608070731163025,
      "num_tokens": 171892.0,
      "step": 120
    },
    {
      "entropy": 1.6601023912429809,
      "epoch": 0.17333333333333334,
      "grad_norm": 0.8293864130973816,
      "learning_rate": 0.00019312000000000002,
      "loss": 1.6714,
      "mean_token_accuracy": 0.5756373822689056,
      "num_tokens": 186019.0,
      "step": 130
    },
    {
      "entropy": 1.6955374836921693,
      "epoch": 0.18666666666666668,
      "grad_norm": 0.8544629812240601,
      "learning_rate": 0.00019258666666666668,
      "loss": 1.6904,
      "mean_token_accuracy": 0.5779714524745941,
      "num_tokens": 200048.0,
      "step": 140
    },
    {
      "entropy": 1.672298264503479,
      "epoch": 0.2,
      "grad_norm": 0.7459447383880615,
      "learning_rate": 0.00019205333333333335,
      "loss": 1.6373,
      "mean_token_accuracy": 0.5774146437644958,
      "num_tokens": 214845.0,
      "step": 150
    },
    {
      "entropy": 1.621527373790741,
      "epoch": 0.21333333333333335,
      "grad_norm": 0.778339684009552,
      "learning_rate": 0.00019152,
      "loss": 1.6564,
      "mean_token_accuracy": 0.5799753427505493,
      "num_tokens": 228965.0,
      "step": 160
    },
    {
      "entropy": 1.6854836463928222,
      "epoch": 0.22666666666666666,
      "grad_norm": 0.7781394720077515,
      "learning_rate": 0.00019098666666666667,
      "loss": 1.6731,
      "mean_token_accuracy": 0.5745640277862549,
      "num_tokens": 243138.0,
      "step": 170
    },
    {
      "entropy": 1.648065435886383,
      "epoch": 0.24,
      "grad_norm": 0.7444502711296082,
      "learning_rate": 0.00019045333333333336,
      "loss": 1.6556,
      "mean_token_accuracy": 0.5753928661346436,
      "num_tokens": 257706.0,
      "step": 180
    },
    {
      "entropy": 1.6733452081680298,
      "epoch": 0.25333333333333335,
      "grad_norm": 0.6875325441360474,
      "learning_rate": 0.00018992,
      "loss": 1.6905,
      "mean_token_accuracy": 0.5732767701148986,
      "num_tokens": 271550.0,
      "step": 190
    },
    {
      "entropy": 1.675937283039093,
      "epoch": 0.26666666666666666,
      "grad_norm": 0.7303425073623657,
      "learning_rate": 0.00018938666666666666,
      "loss": 1.6684,
      "mean_token_accuracy": 0.5771281003952027,
|
||||
"num_tokens": 285690.0,
|
||||
"step": 200
|
||||
},
|
||||
{
|
||||
"entropy": 1.6358152866363525,
|
||||
"epoch": 0.28,
|
||||
"grad_norm": 0.7411030530929565,
|
||||
"learning_rate": 0.00018885333333333335,
|
||||
"loss": 1.6339,
|
||||
"mean_token_accuracy": 0.5838043510913848,
|
||||
"num_tokens": 300104.0,
|
||||
"step": 210
|
||||
},
|
||||
{
|
||||
"entropy": 1.6545907258987427,
|
||||
"epoch": 0.29333333333333333,
|
||||
"grad_norm": 0.7895593047142029,
|
||||
"learning_rate": 0.00018832,
|
||||
"loss": 1.6858,
|
||||
"mean_token_accuracy": 0.5718037188053131,
|
||||
"num_tokens": 314047.0,
|
||||
"step": 220
|
||||
},
|
||||
{
|
||||
"entropy": 1.6488535761833192,
|
||||
"epoch": 0.30666666666666664,
|
||||
"grad_norm": 0.7785155177116394,
|
||||
"learning_rate": 0.00018778666666666668,
|
||||
"loss": 1.6694,
|
||||
"mean_token_accuracy": 0.5763668715953827,
|
||||
"num_tokens": 328674.0,
|
||||
"step": 230
|
||||
},
|
||||
{
|
||||
"entropy": 1.6801724672317504,
|
||||
"epoch": 0.32,
|
||||
"grad_norm": 0.697731614112854,
|
||||
"learning_rate": 0.00018725333333333334,
|
||||
"loss": 1.6564,
|
||||
"mean_token_accuracy": 0.5734941899776459,
|
||||
"num_tokens": 342932.0,
|
||||
"step": 240
|
||||
},
|
||||
{
|
||||
"entropy": 1.6780007004737854,
|
||||
"epoch": 0.3333333333333333,
|
||||
"grad_norm": 0.7615799903869629,
|
||||
"learning_rate": 0.00018672,
|
||||
"loss": 1.694,
|
||||
"mean_token_accuracy": 0.5711317896842957,
|
||||
"num_tokens": 357167.0,
|
||||
"step": 250
|
||||
},
|
||||
{
|
||||
"entropy": 1.6752456307411194,
|
||||
"epoch": 0.3466666666666667,
|
||||
"grad_norm": 0.715129017829895,
|
||||
"learning_rate": 0.00018618666666666666,
|
||||
"loss": 1.6726,
|
||||
"mean_token_accuracy": 0.5754297077655792,
|
||||
"num_tokens": 371293.0,
|
||||
"step": 260
|
||||
},
|
||||
{
|
||||
"entropy": 1.6538572311401367,
|
||||
"epoch": 0.36,
|
||||
"grad_norm": 0.8093023896217346,
|
||||
"learning_rate": 0.00018565333333333335,
|
||||
"loss": 1.6722,
|
||||
"mean_token_accuracy": 0.5708646178245544,
|
||||
"num_tokens": 385455.0,
|
||||
"step": 270
|
||||
},
|
||||
{
|
||||
"entropy": 1.6819413661956788,
|
||||
"epoch": 0.37333333333333335,
|
||||
"grad_norm": 0.7279489636421204,
|
||||
"learning_rate": 0.00018512000000000002,
|
||||
"loss": 1.6482,
|
||||
"mean_token_accuracy": 0.5810510337352752,
|
||||
"num_tokens": 399994.0,
|
||||
"step": 280
|
||||
},
|
||||
{
|
||||
"entropy": 1.6535839796066285,
|
||||
"epoch": 0.38666666666666666,
|
||||
"grad_norm": 0.7264725565910339,
|
||||
"learning_rate": 0.00018458666666666668,
|
||||
"loss": 1.6601,
|
||||
"mean_token_accuracy": 0.5782212734222412,
|
||||
"num_tokens": 414302.0,
|
||||
"step": 290
|
||||
},
|
||||
{
|
||||
"entropy": 1.6291370630264281,
|
||||
"epoch": 0.4,
|
||||
"grad_norm": 0.7391830682754517,
|
||||
"learning_rate": 0.00018405333333333334,
|
||||
"loss": 1.6252,
|
||||
"mean_token_accuracy": 0.5810718059539794,
|
||||
"num_tokens": 428367.0,
|
||||
"step": 300
|
||||
},
|
||||
{
|
||||
"entropy": 1.6454604744911194,
|
||||
"epoch": 0.41333333333333333,
|
||||
"grad_norm": 0.7421492338180542,
|
||||
"learning_rate": 0.00018352,
|
||||
"loss": 1.6548,
|
||||
"mean_token_accuracy": 0.5764368116855622,
|
||||
"num_tokens": 442350.0,
|
||||
"step": 310
|
||||
},
|
||||
{
|
||||
"entropy": 1.6191022634506225,
|
||||
"epoch": 0.4266666666666667,
|
||||
"grad_norm": 0.6302896738052368,
|
||||
"learning_rate": 0.0001829866666666667,
|
||||
"loss": 1.6362,
|
||||
"mean_token_accuracy": 0.5786097645759583,
|
||||
"num_tokens": 456414.0,
|
||||
"step": 320
|
||||
},
|
||||
{
|
||||
"entropy": 1.694426167011261,
|
||||
"epoch": 0.44,
|
||||
"grad_norm": 0.7121474742889404,
|
||||
"learning_rate": 0.00018245333333333333,
|
||||
"loss": 1.7126,
|
||||
"mean_token_accuracy": 0.5670216321945191,
|
||||
"num_tokens": 470836.0,
|
||||
"step": 330
|
||||
},
|
||||
{
|
||||
"entropy": 1.724251127243042,
|
||||
"epoch": 0.4533333333333333,
|
||||
"grad_norm": 0.6601665019989014,
|
||||
"learning_rate": 0.00018192,
|
||||
"loss": 1.7136,
|
||||
"mean_token_accuracy": 0.5723266065120697,
|
||||
"num_tokens": 485043.0,
|
||||
"step": 340
|
||||
},
|
||||
{
|
||||
"entropy": 1.6735360264778136,
|
||||
"epoch": 0.4666666666666667,
|
||||
"grad_norm": 0.8272239565849304,
|
||||
"learning_rate": 0.00018138666666666668,
|
||||
"loss": 1.6753,
|
||||
"mean_token_accuracy": 0.5723617672920227,
|
||||
"num_tokens": 499183.0,
|
||||
"step": 350
|
||||
},
|
||||
{
|
||||
"entropy": 1.6469814419746398,
|
||||
"epoch": 0.48,
|
||||
"grad_norm": 0.7188411951065063,
|
||||
"learning_rate": 0.00018085333333333335,
|
||||
"loss": 1.6475,
|
||||
"mean_token_accuracy": 0.5769975543022156,
|
||||
"num_tokens": 513579.0,
|
||||
"step": 360
|
||||
},
|
||||
{
|
||||
"entropy": 1.6747308850288392,
|
||||
"epoch": 0.49333333333333335,
|
||||
"grad_norm": 0.7487552762031555,
|
||||
"learning_rate": 0.00018032,
|
||||
"loss": 1.6823,
|
||||
"mean_token_accuracy": 0.5752046227455139,
|
||||
"num_tokens": 527660.0,
|
||||
"step": 370
|
||||
},
|
||||
{
|
||||
"entropy": 1.666330885887146,
|
||||
"epoch": 0.5066666666666667,
|
||||
"grad_norm": 0.6975488662719727,
|
||||
"learning_rate": 0.00017978666666666667,
|
||||
"loss": 1.6582,
|
||||
"mean_token_accuracy": 0.5811319828033448,
|
||||
"num_tokens": 542081.0,
|
||||
"step": 380
|
||||
},
|
||||
{
|
||||
"entropy": 1.6803588151931763,
|
||||
"epoch": 0.52,
|
||||
"grad_norm": 0.737691342830658,
|
||||
"learning_rate": 0.00017925333333333333,
|
||||
"loss": 1.6788,
|
||||
"mean_token_accuracy": 0.5737549722194671,
|
||||
"num_tokens": 556595.0,
|
||||
"step": 390
|
||||
},
|
||||
{
|
||||
"entropy": 1.6843163609504699,
|
||||
"epoch": 0.5333333333333333,
|
||||
"grad_norm": 0.739733099937439,
|
||||
"learning_rate": 0.00017872,
|
||||
"loss": 1.6939,
|
||||
"mean_token_accuracy": 0.5709910333156586,
|
||||
"num_tokens": 570777.0,
|
||||
"step": 400
|
||||
},
|
||||
{
|
||||
"entropy": 1.6645950436592103,
|
||||
"epoch": 0.5466666666666666,
|
||||
"grad_norm": 0.7805267572402954,
|
||||
"learning_rate": 0.00017818666666666669,
|
||||
"loss": 1.6813,
|
||||
"mean_token_accuracy": 0.5687461018562316,
|
||||
"num_tokens": 585138.0,
|
||||
"step": 410
|
||||
},
|
||||
{
|
||||
"entropy": 1.6936459064483642,
|
||||
"epoch": 0.56,
|
||||
"grad_norm": 0.726749062538147,
|
||||
"learning_rate": 0.00017765333333333335,
|
||||
"loss": 1.6973,
|
||||
"mean_token_accuracy": 0.5707198500633239,
|
||||
"num_tokens": 599933.0,
|
||||
"step": 420
|
||||
},
|
||||
{
|
||||
"entropy": 1.6732940793037414,
|
||||
"epoch": 0.5733333333333334,
|
||||
"grad_norm": 0.6750204563140869,
|
||||
"learning_rate": 0.00017712,
|
||||
"loss": 1.6797,
|
||||
"mean_token_accuracy": 0.5727494537830353,
|
||||
"num_tokens": 614132.0,
|
||||
"step": 430
|
||||
},
|
||||
{
|
||||
"entropy": 1.6580298662185669,
|
||||
"epoch": 0.5866666666666667,
|
||||
"grad_norm": 0.719950795173645,
|
||||
"learning_rate": 0.00017658666666666667,
|
||||
"loss": 1.6577,
|
||||
"mean_token_accuracy": 0.5744940221309662,
|
||||
"num_tokens": 628571.0,
|
||||
"step": 440
|
||||
},
|
||||
{
|
||||
"entropy": 1.652840507030487,
|
||||
"epoch": 0.6,
|
||||
"grad_norm": 0.6199878454208374,
|
||||
"learning_rate": 0.00017605333333333334,
|
||||
"loss": 1.6518,
|
||||
"mean_token_accuracy": 0.5756324470043183,
|
||||
"num_tokens": 643076.0,
|
||||
"step": 450
|
||||
},
|
||||
{
|
||||
"entropy": 1.6695269465446472,
|
||||
"epoch": 0.6133333333333333,
|
||||
"grad_norm": 0.7818489074707031,
|
||||
"learning_rate": 0.00017552000000000003,
|
||||
"loss": 1.6768,
|
||||
"mean_token_accuracy": 0.5710194766521454,
|
||||
"num_tokens": 656943.0,
|
||||
"step": 460
|
||||
},
|
||||
{
|
||||
"entropy": 1.668250596523285,
|
||||
"epoch": 0.6266666666666667,
|
||||
"grad_norm": 0.6790580749511719,
|
||||
"learning_rate": 0.0001749866666666667,
|
||||
"loss": 1.6783,
|
||||
"mean_token_accuracy": 0.5728156328201294,
|
||||
"num_tokens": 671345.0,
|
||||
"step": 470
|
||||
},
|
||||
{
|
||||
"entropy": 1.6558388233184815,
|
||||
"epoch": 0.64,
|
||||
"grad_norm": 0.7647916078567505,
|
||||
"learning_rate": 0.00017445333333333333,
|
||||
"loss": 1.6723,
|
||||
"mean_token_accuracy": 0.5740255832672119,
|
||||
"num_tokens": 685563.0,
|
||||
"step": 480
|
||||
},
|
||||
{
|
||||
"entropy": 1.6663503646850586,
|
||||
"epoch": 0.6533333333333333,
|
||||
"grad_norm": 0.6828808188438416,
|
||||
"learning_rate": 0.00017392000000000002,
|
||||
"loss": 1.6718,
|
||||
"mean_token_accuracy": 0.5754141092300415,
|
||||
"num_tokens": 699746.0,
|
||||
"step": 490
|
||||
},
|
||||
{
|
||||
"entropy": 1.6640026688575744,
|
||||
"epoch": 0.6666666666666666,
|
||||
"grad_norm": 0.7021526098251343,
|
||||
"learning_rate": 0.00017338666666666668,
|
||||
"loss": 1.6442,
|
||||
"mean_token_accuracy": 0.5807092905044555,
|
||||
"num_tokens": 714501.0,
|
||||
"step": 500
|
||||
},
|
||||
{
|
||||
"entropy": 1.6755484104156495,
|
||||
"epoch": 0.68,
|
||||
"grad_norm": 0.6762555241584778,
|
||||
"learning_rate": 0.00017285333333333334,
|
||||
"loss": 1.6864,
|
||||
"mean_token_accuracy": 0.5786374986171723,
|
||||
"num_tokens": 728741.0,
|
||||
"step": 510
|
||||
},
|
||||
{
|
||||
"entropy": 1.6181371688842774,
|
||||
"epoch": 0.6933333333333334,
|
||||
"grad_norm": 0.7460752129554749,
|
||||
"learning_rate": 0.00017232,
|
||||
"loss": 1.6443,
|
||||
"mean_token_accuracy": 0.5760051727294921,
|
||||
"num_tokens": 742943.0,
|
||||
"step": 520
|
||||
},
|
||||
{
|
||||
"entropy": 1.6662646651268005,
|
||||
"epoch": 0.7066666666666667,
|
||||
"grad_norm": 0.6818891167640686,
|
||||
"learning_rate": 0.00017178666666666667,
|
||||
"loss": 1.6626,
|
||||
"mean_token_accuracy": 0.5765082776546478,
|
||||
"num_tokens": 757701.0,
|
||||
"step": 530
|
||||
},
|
||||
{
|
||||
"entropy": 1.640554964542389,
|
||||
"epoch": 0.72,
|
||||
"grad_norm": 0.699113130569458,
|
||||
"learning_rate": 0.00017125333333333333,
|
||||
"loss": 1.6219,
|
||||
"mean_token_accuracy": 0.5862498044967651,
|
||||
"num_tokens": 772062.0,
|
||||
"step": 540
|
||||
},
|
||||
{
|
||||
"entropy": 1.692064070701599,
|
||||
"epoch": 0.7333333333333333,
|
||||
"grad_norm": 0.7120975852012634,
|
||||
"learning_rate": 0.00017072000000000002,
|
||||
"loss": 1.6991,
|
||||
"mean_token_accuracy": 0.5677917718887329,
|
||||
"num_tokens": 786707.0,
|
||||
"step": 550
|
||||
},
|
||||
{
|
||||
"entropy": 1.6766908407211303,
|
||||
"epoch": 0.7466666666666667,
|
||||
"grad_norm": 0.7205502390861511,
|
||||
"learning_rate": 0.00017018666666666668,
|
||||
"loss": 1.6758,
|
||||
"mean_token_accuracy": 0.5790870308876037,
|
||||
"num_tokens": 801236.0,
|
||||
"step": 560
|
||||
},
|
||||
{
|
||||
"entropy": 1.632925820350647,
|
||||
"epoch": 0.76,
|
||||
"grad_norm": 0.738036572933197,
|
||||
"learning_rate": 0.00016965333333333332,
|
||||
"loss": 1.6494,
|
||||
"mean_token_accuracy": 0.5808292210102082,
|
||||
"num_tokens": 815883.0,
|
||||
"step": 570
|
||||
},
|
||||
{
|
||||
"entropy": 1.669986116886139,
|
||||
"epoch": 0.7733333333333333,
|
||||
"grad_norm": 0.6785821914672852,
|
||||
"learning_rate": 0.00016912,
|
||||
"loss": 1.6544,
|
||||
"mean_token_accuracy": 0.5800759375095368,
|
||||
"num_tokens": 829570.0,
|
||||
"step": 580
|
||||
},
|
||||
{
|
||||
"entropy": 1.6363327860832215,
|
||||
"epoch": 0.7866666666666666,
|
||||
"grad_norm": 0.6850758790969849,
|
||||
"learning_rate": 0.00016858666666666667,
|
||||
"loss": 1.6549,
|
||||
"mean_token_accuracy": 0.578828877210617,
|
||||
"num_tokens": 844009.0,
|
||||
"step": 590
|
||||
},
|
||||
{
|
||||
"entropy": 1.64016250371933,
|
||||
"epoch": 0.8,
|
||||
"grad_norm": 0.6750107407569885,
|
||||
"learning_rate": 0.00016805333333333336,
|
||||
"loss": 1.6566,
|
||||
"mean_token_accuracy": 0.575623071193695,
|
||||
"num_tokens": 858282.0,
|
||||
"step": 600
|
||||
},
|
||||
{
|
||||
"entropy": 1.6842385530471802,
|
||||
"epoch": 0.8133333333333334,
|
||||
"grad_norm": 0.7148203253746033,
|
||||
"learning_rate": 0.00016752000000000002,
|
||||
"loss": 1.6909,
|
||||
"mean_token_accuracy": 0.5759712219238281,
|
||||
"num_tokens": 872855.0,
|
||||
"step": 610
|
||||
},
|
||||
{
|
||||
"entropy": 1.6071362018585205,
|
||||
"epoch": 0.8266666666666667,
|
||||
"grad_norm": 0.709815263748169,
|
||||
"learning_rate": 0.00016698666666666666,
|
||||
"loss": 1.6062,
|
||||
"mean_token_accuracy": 0.5838622927665711,
|
||||
"num_tokens": 887139.0,
|
||||
"step": 620
|
||||
},
|
||||
{
|
||||
"entropy": 1.647024357318878,
|
||||
"epoch": 0.84,
|
||||
"grad_norm": 0.7029419541358948,
|
||||
"learning_rate": 0.00016645333333333335,
|
||||
"loss": 1.6712,
|
||||
"mean_token_accuracy": 0.5723545610904693,
|
||||
"num_tokens": 901212.0,
|
||||
"step": 630
|
||||
},
|
||||
{
|
||||
"entropy": 1.6637583255767823,
|
||||
"epoch": 0.8533333333333334,
|
||||
"grad_norm": 0.7077947854995728,
|
||||
"learning_rate": 0.00016592,
|
||||
"loss": 1.6467,
|
||||
"mean_token_accuracy": 0.5752432465553283,
|
||||
"num_tokens": 915566.0,
|
||||
"step": 640
|
||||
},
|
||||
{
|
||||
"entropy": 1.6351732969284059,
|
||||
"epoch": 0.8666666666666667,
|
||||
"grad_norm": 0.7223191857337952,
|
||||
"learning_rate": 0.00016538666666666667,
|
||||
"loss": 1.6326,
|
||||
"mean_token_accuracy": 0.5809595823287964,
|
||||
"num_tokens": 930155.0,
|
||||
"step": 650
|
||||
},
|
||||
{
|
||||
"entropy": 1.625480556488037,
|
||||
"epoch": 0.88,
|
||||
"grad_norm": 0.6872687339782715,
|
||||
"learning_rate": 0.00016485333333333334,
|
||||
"loss": 1.6096,
|
||||
"mean_token_accuracy": 0.5873603641986846,
|
||||
"num_tokens": 943961.0,
|
||||
"step": 660
|
||||
},
|
||||
{
|
||||
"entropy": 1.6413599967956543,
|
||||
"epoch": 0.8933333333333333,
|
||||
"grad_norm": 0.6594866514205933,
|
||||
"learning_rate": 0.00016432,
|
||||
"loss": 1.692,
|
||||
"mean_token_accuracy": 0.5740860402584076,
|
||||
"num_tokens": 958370.0,
|
||||
"step": 670
|
||||
},
|
||||
{
|
||||
"entropy": 1.6765796422958374,
|
||||
"epoch": 0.9066666666666666,
|
||||
"grad_norm": 0.6635148525238037,
|
||||
"learning_rate": 0.00016378666666666666,
|
||||
"loss": 1.6493,
|
||||
"mean_token_accuracy": 0.5794400811195374,
|
||||
"num_tokens": 972746.0,
|
||||
"step": 680
|
||||
},
|
||||
{
|
||||
"entropy": 1.64844651222229,
|
||||
"epoch": 0.92,
|
||||
"grad_norm": 0.6969758868217468,
|
||||
"learning_rate": 0.00016325333333333335,
|
||||
"loss": 1.6465,
|
||||
"mean_token_accuracy": 0.5806655645370483,
|
||||
"num_tokens": 987210.0,
|
||||
"step": 690
|
||||
},
|
||||
{
|
||||
"entropy": 1.628549289703369,
|
||||
"epoch": 0.9333333333333333,
|
||||
"grad_norm": 0.6831814050674438,
|
||||
"learning_rate": 0.00016272000000000001,
|
||||
"loss": 1.6445,
|
||||
"mean_token_accuracy": 0.5774340748786926,
|
||||
"num_tokens": 1001627.0,
|
||||
"step": 700
|
||||
},
|
||||
{
|
||||
"entropy": 1.6590662002563477,
|
||||
"epoch": 0.9466666666666667,
|
||||
"grad_norm": 0.7329531908035278,
|
||||
"learning_rate": 0.00016218666666666668,
|
||||
"loss": 1.6746,
|
||||
"mean_token_accuracy": 0.5709875702857972,
|
||||
"num_tokens": 1015828.0,
|
||||
"step": 710
|
||||
},
|
||||
{
|
||||
"entropy": 1.6233970403671265,
|
||||
"epoch": 0.96,
|
||||
"grad_norm": 0.6673484444618225,
|
||||
"learning_rate": 0.00016165333333333334,
|
||||
"loss": 1.6213,
|
||||
"mean_token_accuracy": 0.5806674182415008,
|
||||
"num_tokens": 1030323.0,
|
||||
"step": 720
|
||||
},
|
||||
{
|
||||
"entropy": 1.6572086334228515,
|
||||
"epoch": 0.9733333333333334,
|
||||
"grad_norm": 0.7702727913856506,
|
||||
"learning_rate": 0.00016112,
|
||||
"loss": 1.6664,
|
||||
"mean_token_accuracy": 0.5798572361469269,
|
||||
"num_tokens": 1044685.0,
|
||||
"step": 730
|
||||
},
|
||||
{
|
||||
"entropy": 1.620801281929016,
|
||||
"epoch": 0.9866666666666667,
|
||||
"grad_norm": 0.6698365211486816,
|
||||
"learning_rate": 0.0001605866666666667,
|
||||
"loss": 1.6032,
|
||||
"mean_token_accuracy": 0.5835649371147156,
|
||||
"num_tokens": 1058992.0,
|
||||
"step": 740
|
||||
},
|
||||
{
|
||||
"entropy": 1.6149493217468263,
|
||||
"epoch": 1.0,
|
||||
"grad_norm": 0.6890732645988464,
|
||||
"learning_rate": 0.00016005333333333335,
|
||||
"loss": 1.6306,
|
||||
"mean_token_accuracy": 0.5798977136611938,
|
||||
"num_tokens": 1073437.0,
|
||||
"step": 750
|
||||
}
|
||||
],
|
||||
"logging_steps": 10,
|
||||
"max_steps": 3750,
|
||||
"num_input_tokens_seen": 0,
|
||||
"num_train_epochs": 5,
|
||||
"save_steps": 500,
|
||||
"stateful_callbacks": {
|
||||
"TrainerControl": {
|
||||
"args": {
|
||||
"should_epoch_stop": false,
|
||||
"should_evaluate": false,
|
||||
"should_log": false,
|
||||
"should_save": true,
|
||||
"should_training_stop": false
|
||||
},
|
||||
"attributes": {}
|
||||
}
|
||||
},
|
||||
"total_flos": 727173730437120.0,
|
||||
"train_batch_size": 4,
|
||||
"trial_name": null,
|
||||
"trial_params": null
|
||||
}
|
||||
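The `log_history` entries above are plain JSON, so the loss curve can be pulled straight out of `checkpoint-750/trainer_state.json` with a few lines of Python. A minimal sketch — the values are inlined here from a few representative entries in the log above, and loading from disk (commented out) is assumed rather than shown:

```python
import json

# Normally: state = json.load(open("checkpoint-750/trainer_state.json"))
# Inlined here with three representative entries from the log above.
state = {
    "log_history": [
        {"step": 70, "loss": 1.7412},
        {"step": 400, "loss": 1.6939},
        {"step": 750, "loss": 1.6306},
    ],
    "max_steps": 3750,
    "num_train_epochs": 5,
}

steps = [e["step"] for e in state["log_history"]]
losses = [e["loss"] for e in state["log_history"]]

# This checkpoint was saved at step 750 of 3750, i.e. after 1 of 5 epochs.
print(f"steps logged: {steps[0]}..{steps[-1]}")
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The same pattern extends to any other logged field (`entropy`, `grad_norm`, `mean_token_accuracy`) for quick plotting or sanity checks.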
3
checkpoint-750/training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11bbe284e74298a5b6715dcfd6da1001ad5655a8cf82df3bc6cc3e49cf903165
size 6289
54
config.json
Normal file
@@ -0,0 +1,54 @@
{
"_sliding_window_pattern": 6,
"architectures": [
"Gemma3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": null,
"bos_token_id": 2,
"dtype": "bfloat16",
"eos_token_id": 1,
"final_logit_softcapping": null,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 640,
"initializer_range": 0.02,
"intermediate_size": 2048,
"layer_types": [
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 32768,
"model_type": "gemma3_text",
"num_attention_heads": 4,
"num_hidden_layers": 18,
"num_key_value_heads": 1,
"pad_token_id": 1,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_local_base_freq": 10000.0,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 512,
"transformers_version": "4.57.6",
"use_bidirectional_attention": false,
"use_cache": true,
"vocab_size": 262144
}
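The attention fields in `config.json` above pin down the projection shapes: with `num_key_value_heads: 1` against `num_attention_heads: 4`, this is grouped-query attention with a single shared key/value head. A small sketch deriving the shapes from those values — the `hidden_size -> heads * head_dim` layout is the usual multi-head-attention convention and is an assumption about the implementation, not something stated in the config:

```python
# Values copied from the config.json above.
config = {
    "hidden_size": 640,
    "num_attention_heads": 4,
    "num_key_value_heads": 1,  # grouped-query attention: one shared K/V head
    "head_dim": 256,
}

# Query projection maps hidden_size -> num_attention_heads * head_dim.
q_out = config["num_attention_heads"] * config["head_dim"]
# Key/value projections map hidden_size -> num_key_value_heads * head_dim.
kv_out = config["num_key_value_heads"] * config["head_dim"]

print(f"q_proj: {config['hidden_size']} -> {q_out}")
print(f"k_proj: {config['hidden_size']} -> {kv_out}")
print(f"v_proj: {config['hidden_size']} -> {kv_out}")
```

Note that `head_dim` (256) times the head count exceeds `hidden_size` (640), so the attention output projection maps 1024 back down to 640; decoupling `head_dim` from `hidden_size / num_heads` is common in small Gemma-style models.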
12
generation_config.json
Normal file
@@ -0,0 +1,12 @@
{
"bos_token_id": 2,
"cache_implementation": "hybrid",
"do_sample": true,
"eos_token_id": [
1
],
"pad_token_id": 1,
"top_k": 64,
"top_p": 0.95,
"transformers_version": "4.57.6"
}
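With `do_sample: true`, the `top_k: 64` and `top_p: 0.95` settings above mean each sampling step first restricts candidates to the 64 most probable tokens, then to the smallest prefix of those whose cumulative probability reaches 0.95. A stdlib-only sketch of that filtering step — illustrative only, not the `transformers` implementation:

```python
def sample_filter(probs, top_k=64, top_p=0.95):
    """Return the token indices surviving top-k then top-p (nucleus) filtering."""
    # Sort indices by probability, highest first, and keep at most top_k of them.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:  # nucleus reached: drop the remaining tail
            break
    return kept

# Toy 4-token distribution: the 0.05 tail token is dropped once
# cumulative mass reaches the top_p threshold.
print(sample_filter([0.6, 0.25, 0.1, 0.05], top_k=64, top_p=0.9))
```

A sampler would then renormalize the surviving probabilities and draw from them; the defaults above match the values in this `generation_config.json`.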
3
model.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e75e3f2e643e7f3798802a0b143c9d835a2d59a97b17b600f19da6f8fa858c26
size 536223056
27
special_tokens_map.json
Normal file
@@ -0,0 +1,27 @@
{
"boi_token": "<start_of_image>",
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eoi_token": "<end_of_image>",
"eos_token": {
"content": "<eos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"image_token": "<image_soft_token>",
"pad_token": "<eos>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
51345
tokenizer_config.json
Normal file
File diff suppressed because it is too large