Initialize the repository; model provided by the ModelHub XC community

Model: afrideva/phi-2_dolly_instruction_polish-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-28 20:26:35 +08:00
commit 82b5fedd71
9 changed files with 188 additions and 0 deletions

42
.gitattributes vendored Normal file
View File

@@ -0,0 +1,42 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.fp16.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q2_k.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q6_k.gguf filter=lfs diff=lfs merge=lfs -text
phi-2_dolly_instruction_polish.q8_0.gguf filter=lfs diff=lfs merge=lfs -text

125
README.md Normal file
View File

@@ -0,0 +1,125 @@
---
base_model: s3nh/phi-2_dolly_instruction_polish
inference: false
library_name: peft
license: other
model-index:
- name: phi-2-sft-out
results: []
model_creator: s3nh
model_name: phi-2_dolly_instruction_polish
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# afrideva/phi-2_dolly_instruction_polish-GGUF
Quantized GGUF model files for [phi-2_dolly_instruction_polish](https://huggingface.co/s3nh/phi-2_dolly_instruction_polish) from [s3nh](https://huggingface.co/s3nh).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2_dolly_instruction_polish.fp16.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2_dolly_instruction_polish.q2_k.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q2_k.gguf) | q2_k | 1.17 GB |
| [phi-2_dolly_instruction_polish.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q3_k_m.gguf) | q3_k_m | 1.48 GB |
| [phi-2_dolly_instruction_polish.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [phi-2_dolly_instruction_polish.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [phi-2_dolly_instruction_polish.q6_k.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2_dolly_instruction_polish.q8_0.gguf](https://huggingface.co/afrideva/phi-2_dolly_instruction_polish-GGUF/resolve/main/phi-2_dolly_instruction_polish.q8_0.gguf) | q8_0 | 2.96 GB |
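As a hedged usage sketch (not part of the original card), one of the files above can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`. The repo id and the q4_k_m filename come from the table; the context size, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch: download one quantized file and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; all settings below are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/phi-2_dolly_instruction_polish-GGUF",
    filename="phi-2_dolly_instruction_polish.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # n_ctx is an assumed context length
output = llm("Instruction: Explain what GGUF quantization is.\nResponse:", max_tokens=128)
print(output["choices"][0]["text"])
```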
## Original Model Card:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# phi-2-sft-out
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
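A hedged sketch of how these settings would look in PyTorch/transformers code; the stand-in module and the total step count (derived from the results table below) are assumptions, while the remaining values are copied from the list above.

```python
# Hedged reconstruction of the optimizer/scheduler described above; the real run fine-tuned microsoft/phi-2.
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(8, 8)  # stand-in module, not the actual model
optimizer = torch.optim.Adam(model.parameters(), lr=3e-6, betas=(0.9, 0.95), eps=1e-5)
steps_per_epoch = 21160  # inferred from the results table (epoch 1.0 at step 21160)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,
    num_training_steps=steps_per_epoch * 4,  # num_epochs: 4
)
```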
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.0 | 1 | 1.7973 |
| 1.9767 | 0.25 | 5290 | 1.4832 |
| 1.8474 | 0.5 | 10580 | 1.4356 |
| 1.8121 | 0.75 | 15870 | 1.4022 |
| 1.8333 | 1.0 | 21160 | 1.3678 |
| 1.6601 | 1.25 | 26450 | 1.3508 |
| 1.5452 | 1.5 | 31740 | 1.3357 |
| 1.7381 | 1.75 | 37030 | 1.3191 |
| 1.6256 | 2.0 | 42320 | 1.3090 |
| 1.5521 | 2.25 | 47610 | 1.2961 |
| 1.8318 | 2.5 | 52900 | 1.2910 |
| 1.6761 | 2.75 | 58190 | 1.2901 |
| 1.6312 | 3.0 | 63480 | 1.2879 |
| 1.7003 | 3.25 | 68770 | 1.2820 |
| 1.6915 | 3.5 | 74060 | 1.2814 |
| 1.5757 | 3.75 | 79350 | 1.2813 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
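The list above matches the fields of `transformers.BitsAndBytesConfig`. A hedged reconstruction, copying the values as listed (the `from_pretrained` note in the final comment is only an illustration, not taken from the card):

```python
# Hedged reconstruction of the bitsandbytes config listed above (4-bit NF4 with double quantization).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Typically passed as quantization_config=bnb_config to
# AutoModelForCausalLM.from_pretrained("microsoft/phi-2", ...).
```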
### Framework versions
- PEFT 0.6.0

3
phi-2_dolly_instruction_polish.fp16.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ce37cc2d924e8dbba38e264a81709e56c6365d57a0fce65e55b50a5554442f8
size 5563088672
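The pointer above (and the six that follow) uses the Git LFS pointer format: spec version, SHA-256 oid, and byte size. A hedged verification sketch for a downloaded copy; the local filename is an assumption, while the digest and size are copied from the pointer.

```python
# Hedged sketch: verify a downloaded GGUF file against the Git LFS pointer above.
import hashlib

expected_oid = "0ce37cc2d924e8dbba38e264a81709e56c6365d57a0fce65e55b50a5554442f8"  # from the pointer
expected_size = 5563088672                                                         # from the pointer
path = "phi-2_dolly_instruction_polish.fp16.gguf"  # assumed local filename

sha = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
        size += len(chunk)

assert sha.hexdigest() == expected_oid and size == expected_size, "file does not match the LFS pointer"
```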

3
phi-2_dolly_instruction_polish.q2_k.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:424289baad47a9622ad8386891dd622c0fb2427e62a46b9752b9ea9456aa3d46
size 1173610336

3
phi-2_dolly_instruction_polish.q3_k_m.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22df63ba435dd45c091e6dbcb118e79a8f7ee4628a3bd8904e0c951f435cf788
size 1480195936

3
phi-2_dolly_instruction_polish.q4_k_m.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65050b670497ee9eec400b58cbbdf0bebb8c6a2ffaa8f466396cd98aafcdb6c1
size 1789239136

3
phi-2_dolly_instruction_polish.q5_k_m.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bdd6957f7387711ed9f920797269c2f0a4c80d7fb4399d208d232c90bc86825
size 2072682336

3
phi-2_dolly_instruction_polish.q6_k.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0d9db4e17583137041ad9a76e2de9a34f3a3d7f2cb5333417f30506375dcae1
size 2285059936

3
phi-2_dolly_instruction_polish.q8_0.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:294c95d89a01f8c0ee67dfdc519da4b2374c7cab96dec594114fc418112830e2
size 2958032736