Initialize repository; model provided by the ModelHub XC community

Model: afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF
Source: Original Platform
ModelHub XC
2026-04-14 01:37:04 +08:00
commit 47824a2c8d
9 changed files with 156 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,42 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf filter=lfs diff=lfs merge=lfs -text
tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf filter=lfs diff=lfs merge=lfs -text
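Because every pattern above routes matching files through Git LFS, a plain clone materializes only small pointer stubs. A minimal sketch for pulling a single quantization variant instead of all ~7 GB of weights (the repository URL is a placeholder):

```bash
# Clone without smudging LFS content, then pull just one variant.
GIT_LFS_SKIP_SMUDGE=1 git clone <repository-url> model-repo
cd model-repo
git lfs pull --include="*.q4_k_m.gguf"
```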

README.md Normal file

@@ -0,0 +1,93 @@
---
base_model: habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
datasets:
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: habanoz
model_name: TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF
Quantized GGUF model files for [TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) from [habanoz](https://huggingface.co/habanoz).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf) | q2_k | 483.12 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf) | q3_k_m | 550.82 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf) | q4_k_m | 668.79 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf) | q5_k_m | 783.02 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf) | q6_k | 904.39 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB |
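Each link in the table resolves to a direct download on Hugging Face, so a single variant can be fetched without cloning the whole repository. Below is a minimal sketch, assuming the `huggingface_hub` CLI and a recent llama.cpp build (whose binary is `llama-cli`; older builds ship it as `main`); the prompt is only an illustration:

```bash
# Fetch only the q4_k_m variant (~669 MB) from the Hub.
huggingface-cli download \
  afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF \
  tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf \
  --local-dir .

# Run a quick generation with llama.cpp (binary name depends on your build).
./llama-cli -m tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf \
  -p "What is quantization in one sentence?" -n 128
```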
## Original Model Card:
TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T fine-tuned on the OpenAssistant/oasst_top1_2023-08-25 dataset.
Trained for 5 epochs using QLoRA; the adapter was merged into the base weights afterwards.
SFT code: https://github.com/habanoz/qlora.git
Command used:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
--model_name_or_path $BASE_MODEL \
--working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
--output_dir $BASE_DIR/$OUTPUT_NAME-peft \
--merged_output_dir $BASE_DIR/$OUTPUT_NAME \
--final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
--num_train_epochs 5 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 75 \
--save_total_limit 2 \
--data_seed 11422 \
--evaluation_strategy steps \
--per_device_eval_batch_size 4 \
--eval_dataset_size 0.01 \
--eval_steps 75 \
--max_new_tokens 1024 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--do_train \
--do_eval \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--bits 4 \
--double_quant \
--quant_type nf4 \
--lr_scheduler_type constant \
--dataset oasst1-top1 \
--dataset_format oasst1 \
--model_max_len 1024 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4 \
--learning_rate 1e-5 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.0 \
--weight_decay 0.0 \
--seed 11422 \
--gradient_checkpointing \
--use_flash_attention_2 \
--ddp_find_unused_parameters False
```
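Note that the command references three shell variables (`$BASE_DIR`, `$BASE_MODEL`, `$OUTPUT_NAME`) that the card never defines. The values below are assumptions inferred from the card, not the author's actual setup; only `BASE_MODEL` is pinned down by the text:

```bash
# Hypothetical settings: BASE_DIR and OUTPUT_NAME are placeholders,
# BASE_MODEL is the base checkpoint named in the card above.
export BASE_DIR="$HOME/training"
export BASE_MODEL="TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"
export OUTPUT_NAME="TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1"
```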

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac76ff5fc216f3ed9cace90f973f00288ba56ef6c428cd7c6c6e5b2e254b7ce0
size 2201990048

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2fd85b50c47aae20135b39cecf39224e4d1ec93a18d7fe1691803dcde4b84bb2
size 483115968

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:241b0f4f817c016a7ab065f1c6cd1a64181e6286bfe02a1e40e902467e4963f8
size 550818752

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2a3ccb918caf6fce9c90938370ea9c0d0d4ea7343e40accc99102a722055019
size 668787648

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9bcd3134e402071f450cc747abc90bec90eb7a2e595872b8503da003d66318f9
size 783016896

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef1862d0aa149dbd8bb34fbc032ed82216b41eb248842dfe6d416734a9b23ada
size 904385472

tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf Normal file (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f992402f4c1443535e0380f4aeb519303e2f8ff053c7717aba1ff9cff9f419b5
size 1170781120
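Each stub above is a Git LFS pointer: `oid` is the SHA-256 of the actual payload and `size` is its byte count, so a file downloaded out of band can be checked against its pointer. A sketch for the fp16 file, using the first pointer's oid:

```bash
# Verify a downloaded GGUF against the SHA-256 recorded in its LFS pointer.
echo "ac76ff5fc216f3ed9cace90f973f00288ba56ef6c428cd7c6c6e5b2e254b7ce0  tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf" \
  | sha256sum --check
```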