Initialize the project with a model provided by the ModelHub XC community

Model: Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-11 12:30:59 +08:00
Commit: 2c2240df42
13 changed files with 4725 additions and 0 deletions

.gitattributes (new file, 36 lines)

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
phi-4.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
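Every pattern above routes matching files through Git LFS, and the last line pins the GGUF weight itself. A minimal sketch of how such an entry is typically added and checked with the `git lfs` CLI (these commands are illustrative, not taken from this commit):

```bash
# Track the GGUF weight with Git LFS so only a small pointer file is committed
git lfs install
git lfs track "phi-4.Q8_0.gguf"

# Stage the updated .gitattributes plus the weight, then confirm it resolves to an LFS object
git add .gitattributes phi-4.Q8_0.gguf
git lfs ls-files
```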

Modelfile (new file, 8 lines)

@@ -0,0 +1,8 @@
FROM phi-4.Q8_0.gguf
TEMPLATE """{{ if .System }}<|im_start|><|system|><|im_sep|>{{ .System }}<|im_end|>{{ end }}{{ if .Prompt }}<|im_start|><|user|><|im_sep|>{{ .Prompt }}<|im_end|>{{ end }}<|im_start|><|assistant|><|im_sep|>{{ .Response }}<|im_end|>"""
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_sep|>"
PARAMETER temperature 1.5
PARAMETER min_p 0.1

README.md (new file, 54 lines)

@@ -0,0 +1,54 @@
---
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- unsloth
- conversational
base_model:
- unsloth/Phi-4-unsloth-bnb-4bit
datasets:
- Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl
---
# joke-finetome-model-gguf-phi4-20260112-081758 : GGUF
This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 --jinja`
## Available Model files:
- `phi-4.Q8_0.gguf`
## Ollama
An Ollama Modelfile is included for easy deployment.
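A minimal sketch of building and chatting with a local model from that Modelfile (the tag `joke-phi4` is an arbitrary local name, not defined by this repo; run the commands from the repo root so `phi-4.Q8_0.gguf` sits next to the Modelfile):

```bash
# Register a local Ollama model from the bundled Modelfile
ollama create joke-phi4 -f Modelfile

# One-off prompt; omit the quoted prompt for an interactive session
ollama run joke-phi4 "Write a short joke in the style of The Office."
```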
This was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## Training artifacts
- Plot (interactive): [`reports/training_loss_step.html`](reports/training_loss_step.html)
- Run manifest: [`reports/run_manifest.json`](reports/run_manifest.json)
- Inference sample: [`reports/inference_sample.json`](reports/inference_sample.json)
- Config snapshot: [`reports/config_snapshot.json`](reports/config_snapshot.json)
## Inference
This repository contains a **GGUF** model intended to be used with **llama.cpp** and/or deployed on **Hugging Face Inference Endpoints (llama.cpp container)**.
Recommended Inference Endpoints knobs:
- Max tokens / request: **1024**
- Max concurrent requests: **2**
### Local llama.cpp (Phi-4 template)
```bash
llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758:q8_0 -cnv --chat-template phi4
```
### Hugging Face Inference Endpoint (llama.cpp)
When creating an endpoint, select this repo and the GGUF file **phi-4.Q8_0.gguf** (quant: **q8_0**).
Recommended settings are stored in: `inference/endpoint_recipe.json`.
Python client example: `inference/hf_endpoint_client.py`

config.json (new file, 33 lines)

@@ -0,0 +1,33 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 100257,
"torch_dtype": "bfloat16",
"eos_token_id": 100265,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 17920,
"max_position_embeddings": 16384,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 10,
"original_max_position_embeddings": 16384,
"pad_token_id": 100351,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 250000,
"tie_word_embeddings": false,
"transformers_version": "4.56.2",
"unsloth_fixed": true,
"unsloth_version": "2026.1.2",
"use_cache": true,
"vocab_size": 100352
}

inference/endpoint_recipe.json (new file, 18 lines)

@@ -0,0 +1,18 @@
{
"engine": "llama.cpp",
"recommended_endpoint_settings": {
"max_tokens_per_request": 1024,
"max_concurrent_requests": 2,
"notes": "Memory scales roughly with (max_concurrent_requests * max_tokens_per_request)."
},
"recommended_generation_defaults": {
"temperature": 1.2,
"top_p": 0.95,
"min_p": 0.05,
"repeat_penalty": 1.08,
"max_tokens": 2560
},
"chat_template": "phi4",
"gguf_file": "",
"gguf_quant": "q8_0"
}
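The `recommended_generation_defaults` above map directly onto llama.cpp sampling flags. A minimal sketch of applying them to a local run of this repo's q8_0 GGUF (flag names follow current llama.cpp builds; check `llama-cli --help` if yours differs):

```bash
llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758:q8_0 \
  -cnv --chat-template phi4 \
  --temp 1.2 --top-p 0.95 --min-p 0.05 --repeat-penalty 1.08 \
  -n 2560
```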

inference/hf_endpoint_client.py (new file, 23 lines)

@@ -0,0 +1,23 @@
import os

from huggingface_hub import InferenceClient

# Required env vars:
#   export HF_TOKEN="..."
#   export HF_ENDPOINT_BASE_URL="https://xxxx.endpoints.huggingface.cloud"

client = InferenceClient(
    base_url=os.environ["HF_ENDPOINT_BASE_URL"],
    api_key=os.environ["HF_TOKEN"],
)

resp = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short joke in the style of The Office."},
    ],
    max_tokens=2560,
    temperature=1.2,
    top_p=0.95,
)

print(resp.choices[0].message.content)
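Assuming the script above is saved as `inference/hf_endpoint_client.py` (the path the README points to), a typical invocation looks like:

```bash
pip install -U huggingface_hub

export HF_TOKEN="hf_..."
export HF_ENDPOINT_BASE_URL="https://xxxx.endpoints.huggingface.cloud"
python inference/hf_endpoint_client.py
```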

New file (12 lines)

@@ -0,0 +1,12 @@
### Local inference (llama.cpp)
```bash
llama-cli -hf {REPO_ID}:q8_0 -cnv --chat-template phi4
```
### Server (OpenAI-compatible)
```bash
llama-server -hf {REPO_ID}:q8_0
# /v1/chat/completions will be available (OpenAI-compatible)
```
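Once `llama-server` is running (it listens on `http://localhost:8080` by default), any OpenAI-compatible client can call it. A minimal sketch with curl; the port and request shape are llama.cpp defaults, so adjust if your server is configured differently:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Write a short joke in the style of The Office."}
        ],
        "temperature": 1.2,
        "max_tokens": 256
      }'
```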

phi-4.Q8_0.gguf (new file, 3 lines; Git LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70be651211a3ca863ad8bc6c9b2596bf131f434686a5d06808c79bad7e5743b8
size 15580507200
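The three lines above are only the Git LFS pointer; the roughly 15.6 GB weight itself lives in LFS storage. A minimal sketch of pulling just that file with the Hugging Face CLI (assumes `huggingface_hub`, which provides the CLI, is installed and the repo is accessible to you):

```bash
huggingface-cli download Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 \
  phi-4.Q8_0.gguf --local-dir ./models
```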

reports/config_snapshot.json (new file, 28 lines)

@@ -0,0 +1,28 @@
{
"MODEL_NAME": "unsloth/Phi-4-unsloth-bnb-4bit",
"CHAT_TEMPLATE": "phi-4",
"MAX_SEQ_LENGTH": 2048,
"LOAD_IN_4BIT": true,
"DATASET_NAME": "Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl",
"DATASET_SPLIT": "train",
"PER_DEVICE_TRAIN_BATCH_SIZE": 2,
"GRADIENT_ACCUMULATION_STEPS": 4,
"WARMUP_STEPS": 10,
"MAX_STEPS": 2000,
"LEARNING_RATE": 9.95267419777795e-06,
"LR_AUTO_ENABLED": true,
"LR_AUTO_USE_N": "train",
"LR_AUTO_N_REF": 1436,
"LR_AUTO_BASE": 1e-05,
"LR_AUTO_MULT": 0.5,
"LR_AUTO_FINAL": 5e-06,
"WEIGHT_DECAY": 0.009206070410847844,
"LR_SCHEDULER_TYPE": "linear",
"SEED": 3407,
"PLOTLY_DARK_MODE": true,
"PLOTLY_BASE_COLOR": "#00CC96",
"PLOTLY_EMA_SPAN": 25,
"HF_REPO_ID_MERGED_RESOLVED": "Mathieu-Thomas-JOSSET/joke-20260112-081758",
"HF_REPO_ID_GGUF_RESOLVED": "Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758",
"HF_ARTIFACTS_DIR_IN_REPO": "reports"
}

reports/inference_sample.json (new file, 10 lines)

@@ -0,0 +1,10 @@
{
"source": "dataset",
"index": 232,
"messages": [
{
"role": "user",
"content": "Dwight: \"Oh, man.\"\nMichael: \"How did we do it?\"\nDwight: \"I dont … have no idea.\""
}
]
}

reports/report.html (new file, 4429 lines; diff suppressed because one or more lines are too long)

reports/run_manifest.json (new file, 57 lines)

@@ -0,0 +1,57 @@
{
"started_at": "2026-01-12T08:52:02+01:00",
"repos": {
"merged": "Mathieu-Thomas-JOSSET/joke-20260112-081758",
"gguf": "Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758"
},
"model_name": "unsloth/Phi-4-unsloth-bnb-4bit",
"dataset": {
"name": "Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl",
"split": "train"
},
"training": {
"max_steps": 2000,
"learning_rate": 9.95267419777795e-06,
"per_device_train_batch_size": 2,
"gradient_accumulation_steps": 4,
"max_seq_length": 2048,
"seed": 3407,
"optimizer": "adamw_8bit",
"lr_scheduler_type": "linear"
},
"auto_lr": {
"enabled": true,
"use_n": "train",
"n_ref": 1436,
"base": 1e-05,
"mult": 0.5,
"final": 5e-06
},
"metrics": {
"train_runtime": 2220.807,
"train_samples_per_second": 5.944,
"train_steps_per_second": 0.743,
"total_flos": 4.35104765343744e+16,
"train_loss": 1.654274252106746,
"epoch": 3.3342618384401113
},
"best": {
"checkpoint": "/content/outputs/continue_r1_from_350_20260112_073729/checkpoint-100",
"metric": 2.2380564212799072,
"metric_name": "eval_loss"
},
"plotly": {
"html": "reports/training_loss_step.html",
"png": null
},
"inference_sample": {
"source": "dataset",
"index": 232,
"messages": [
{
"role": "user",
"content": "Dwight: \"Oh, man.\"\nMichael: \"How did we do it?\"\nDwight: \"I dont … have no idea.\""
}
]
}
}

reports/training_loss_step.html (new file; diff suppressed because one or more lines are too long)