---
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- unsloth
- conversational
base_model:
- unsloth/Phi-4-unsloth-bnb-4bit
datasets:
- Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl
---
# joke-finetome-model-gguf-phi4-20260112-081758: GGUF
This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 --jinja`
## Available model files
- `phi-4.Q8_0.gguf`
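If you want the file on disk rather than streamed by `-hf`, you can fetch it ahead of time. A minimal sketch using the standard `huggingface-cli` downloader (the repo and file names match those listed above; the `./models` directory is an arbitrary choice for this example):

```bash
# Download the single Q8_0 GGUF file from this repo into ./models
# (the target directory is an arbitrary choice).
huggingface-cli download Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 \
  phi-4.Q8_0.gguf --local-dir ./models
```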
## Ollama
An Ollama Modelfile is included for easy deployment.
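A minimal sketch of registering and running the model locally, assuming the included Modelfile sits alongside the downloaded `phi-4.Q8_0.gguf` (the tag `joke-finetome-phi4` is an arbitrary local name, not something defined by this repo):

```bash
# Register the model locally from the bundled Modelfile
# (assumed to reference phi-4.Q8_0.gguf via its FROM line).
ollama create joke-finetome-phi4 -f ./Modelfile
# Start an interactive chat session with the registered model.
ollama run joke-finetome-phi4
```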
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## Training artifacts
- Plot (interactive): [`reports/training_loss_step.html`](reports/training_loss_step.html)
- Run manifest: [`reports/run_manifest.json`](reports/run_manifest.json)
- Inference sample: [`reports/inference_sample.json`](reports/inference_sample.json)
- Config snapshot: [`reports/config_snapshot.json`](reports/config_snapshot.json)
## Inference
This repository contains a **GGUF** model intended for use with **llama.cpp**, either locally or deployed on **Hugging Face Inference Endpoints (llama.cpp container)**.
Recommended Inference Endpoints settings:
- Max tokens / request: **1024**
- Max concurrent requests: **2**
### Local llama.cpp (Phi-4 template)
```bash
llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758:q8_0 -cnv --chat-template phi4
```
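If you prefer a local HTTP API instead of an interactive CLI, llama.cpp's server binary accepts the same `-hf` shorthand. A sketch (the port is an arbitrary choice; the chat-template flag mirrors the CLI invocation above):

```bash
# Serve the model over an OpenAI-compatible HTTP API on port 8080.
llama-server -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758:q8_0 \
  --chat-template phi4 --port 8080
```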
### Hugging Face Inference Endpoint (llama.cpp)
When creating an endpoint, select this repo and the GGUF file **phi-4.Q8_0.gguf** (quant: **q8_0**).
Recommended settings are stored in: `inference/endpoint_recipe.json`.
Python client example: `inference/hf_endpoint_client.py`
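For a dependency-free check of a deployed endpoint, here is a hedged `curl` sketch against the OpenAI-compatible chat route that the llama.cpp container exposes. The endpoint URL is a placeholder you must replace with your own, and `max_tokens` mirrors the recommended 1024 cap above:

```bash
# Replace the URL with your endpoint's address; HF_TOKEN must hold a
# Hugging Face access token authorized to call the endpoint.
curl https://<your-endpoint>.endpoints.huggingface.cloud/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Tell me a short joke."}],
        "max_tokens": 1024
      }'
```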