Initialize project; model provided by the ModelHub XC community

Model: Abdullahu5mani/flowscribe-qwen2.5-0.5b-v2
Source: Original Platform
ModelHub XC
2026-04-13 04:44:01 +08:00
commit c445e608a2
13 changed files with 151821 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
model_q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,189 @@
---
language:
- en
license: mit
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- text-generation
- fine-tuned
- lora
- gguf
- speech-to-text
- text-cleanup
- unsloth
- qwen2
- conversational
pipeline_tag: text-generation
datasets:
- Abdullahu5mani/flowscribe-dataset
---
# FlowScribe — Qwen2.5-0.5B Speech Transcript Formatter (v2)
A fine-tuned version of [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) that converts raw, messy speech-to-text output into clean, formatted text across multiple writing styles.
**GitHub:** [github.com/Abdullahu5mani/flowscribe](https://github.com/Abdullahu5mani/flowscribe)
---
## The Problem
Voice dictation tools like Whisper produce transcripts full of filler words (`um`, `uh`, `like`) and self-corrections (`make it 5... no wait, 6`), with little or no punctuation or formatting. This model post-processes those transcripts into polished text, with awareness of the desired output style.
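For context, here is a minimal end-to-end dictation sketch: Whisper produces the raw transcript, and this model cleans it up. It assumes the `openai-whisper` package and the `format_transcript` helper defined in the Usage section below; `audio.wav` is a placeholder path.
```python
# Illustrative pipeline sketch (not part of this repo): Whisper for speech
# recognition, FlowScribe for post-processing. Assumes `pip install openai-whisper`
# and the format_transcript() helper from the Usage section below.
import whisper

asr = whisper.load_model("base")             # general-purpose Whisper checkpoint
raw = asr.transcribe("audio.wav")["text"]    # raw transcript: fillers, no punctuation
print(format_transcript(raw, style="Auto"))  # cleaned, formatted text
```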
---
## Styles
| Style | Behavior |
|---|---|
| `Auto` | Intelligent default — removes fillers, fixes grammar, handles self-corrections, applies structure |
| `Professional` | Formal business tone, structured layout, perfect grammar |
| `Casual` | Keeps the speaker's voice, light cleanup, contractions preserved |
| `Verbatim` | Preserves exact wording, only strips `um`/`uh` and applies spoken formatting commands |
| `Software_Dev` | Formats code terms, variable names (`camelCase`, `snake_case`), technical jargon |
| `Enthusiastic` | High energy, exclamation marks, positive phrasing |
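To compare styles side by side, you can loop over the style names with the `format_transcript` helper defined in the Usage section below (a quick sketch, not part of the model API):
```python
# Sketch: run one raw transcript through every supported style for comparison.
# Uses the format_transcript() helper defined in the Usage section below.
raw = "um so the meeting is at 5... no wait make it 6 and uh we need to discuss the q3 budget"

for style in ["Auto", "Professional", "Casual", "Verbatim", "Software_Dev", "Enthusiastic"]:
    print(f"--- {style} ---")
    print(format_transcript(raw, style=style))
```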
---
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Abdullahu5mani/flowscribe-qwen2.5-0.5b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

def format_transcript(raw_text, style="Auto"):
    messages = [
        {
            "role": "system",
            "content": "You are Flowscribe, an expert Speech-to-Text post-processing AI. You accurately transcribe and format text based on a specific style instruction."
        },
        {
            "role": "user",
            "content": f"Transcribe and format this with style: {style}\nInput: {raw_text}"
        }
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    output_ids = outputs[0][len(inputs.input_ids[0]):]
    return tokenizer.decode(output_ids, skip_special_tokens=True)

# Examples
print(format_transcript(
    "um so the meeting is at 5... no wait make it 6 and uh we need to discuss the q3 budget",
    style="Professional"
))
# → "The meeting is at 6 PM to discuss the Q3 budget."

print(format_transcript(
    "the api endpoint is slash api slash users new line it takes a POST request with JSON",
    style="Software_Dev"
))
# → "The API endpoint is `/api/users`\nIt takes a POST request with JSON."
```
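For interactive dictation you may want output to appear as it is generated rather than all at once. A minimal streaming variant using the standard `transformers` `TextStreamer` utility (the prompt construction mirrors `format_transcript` above):
```python
# Streaming variant: print tokens as they are generated. TextStreamer is a
# standard transformers utility; the prompt is built as in format_transcript().
from transformers import TextStreamer

messages = [
    {"role": "system", "content": "You are Flowscribe, an expert Speech-to-Text post-processing AI. You accurately transcribe and format text based on a specific style instruction."},
    {"role": "user", "content": "Transcribe and format this with style: Auto\nInput: um so can you uh send me the report by like friday"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=512, do_sample=False, streamer=streamer)
```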
---
## GGUF (Quantized) Usage
A Q4_K_M quantized GGUF version is included in this repository for fast CPU/GPU inference via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).
```python
from llama_cpp import Llama

llm = Llama(
    model_path="model_q4_k_m.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,  # Set to 0 for CPU-only
    verbose=False
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are Flowscribe, an expert Speech-to-Text post-processing AI. You accurately transcribe and format text based on a specific style instruction."
        },
        {
            "role": "user",
            "content": "Transcribe and format this with style: Casual\nInput: hey um so i was thinking we could like grab lunch tomorrow you know around noon ish"
        }
    ],
    max_tokens=256,
    temperature=0.1,
)

print(response["choices"][0]["message"]["content"])
# → "Hey, I was thinking we could grab lunch tomorrow around noon."
```
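If you only need the ~398 MB GGUF file, `huggingface_hub` can download it on its own instead of cloning the full repository (a sketch; assumes the file is hosted under the repository ID used above):
```python
# Fetch just the quantized GGUF file rather than cloning the whole repo.
# Assumes the repository ID used elsewhere in this README.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Abdullahu5mani/flowscribe-qwen2.5-0.5b-v2",
    filename="model_q4_k_m.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048, n_gpu_layers=-1, verbose=False)
```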
---
## Model Details
| Property | Value |
|---|---|
| Version | v2 |
| Base model | Qwen/Qwen2.5-0.5B-Instruct |
| Fine-tuning method | LoRA (via [Unsloth](https://github.com/unslothai/unsloth)) |
| Parameters | ~500M (72.4% trained) |
| Training epochs | 3 |
| Learning rate | 2e-5 |
| Effective batch size | 16 (batch 8 × grad accumulation 2) |
| Sequence length | 2048 |
| Optimizer | AdamW 8-bit |
| Final training loss | 0.616 |
| Training hardware | NVIDIA RTX 4070 Laptop GPU 8GB |
| Chat template | ChatML |
| Quantization | Q4_K_M (via llama.cpp) |
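For orientation, the hyperparameters above translate roughly into the following Unsloth/TRL setup. This is a reconstruction from the table, not the original training script; the LoRA rank, alpha, and target modules are assumptions that this card does not state:
```python
# Reconstructed from the hyperparameter table above -- NOT the original script.
# LoRA r/alpha/target_modules are assumed values, not documented in this card.
from unsloth import FastLanguageModel
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-0.5B-Instruct",
    max_seq_length=2048,                  # sequence length from the table
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                 # assumed LoRA rank
    lora_alpha=16,                        # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
args = TrainingArguments(
    per_device_train_batch_size=8,        # effective batch 16 = 8 x 2
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    learning_rate=2e-5,
    optim="adamw_8bit",
    output_dir="outputs",
)
# Train with trl.SFTTrainer(model=model, args=args, train_dataset=...) as usual.
```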
---
## Training Data
Trained on ~27,400 synthetically generated examples from [flowscribe-dataset](https://huggingface.co/datasets/Abdullahu5mani/flowscribe-dataset).
Each example is an Alpaca-style JSON object:
```json
{
  "instruction": "Transcribe and format this with style: Professional",
  "input": "um so like the uh proposal is due friday and we need to finalize the, i mean confirm the budget",
  "output": "The proposal is due Friday and we need to confirm the budget."
}
```
Data was generated using Google Gemini (primary) and 16 free OpenRouter models (fallback) across 10 domain scenarios: business email, software dev, personal messages, productivity lists, medical notes, and more.
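At training time, each Alpaca-style record maps onto a ChatML conversation roughly as follows (a sketch of the conversion; the exact concatenation is an assumption, but the system prompt matches the one shown under Usage):
```python
# Sketch: converting one Alpaca-style record into a ChatML conversation for SFT.
# The exact training-time formatting is assumed; the system prompt matches Usage.
SYSTEM = ("You are Flowscribe, an expert Speech-to-Text post-processing AI. "
          "You accurately transcribe and format text based on a specific style instruction.")

def to_chatml(record):
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"{record['instruction']}\nInput: {record['input']}"},
        {"role": "assistant", "content": record["output"]},
    ]
```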
---
## Limitations
- Optimized for English only
- Training data is synthetic; real-world dictation may contain edge cases the model has not seen
- The 0.5B parameter size prioritizes speed and local deployment over raw capability
---
## Files
| File | Description |
|---|---|
| `model.safetensors` | Full-precision fine-tuned weights (BF16) |
| `model_q4_k_m.gguf` | Q4_K_M quantized GGUF for llama.cpp |
| `config.json` | Model configuration |
| `tokenizer.json` | Tokenizer |
| `chat_template.jinja` | ChatML chat template |
---
## License
MIT — see [LICENSE](https://github.com/Abdullahu5mani/flowscribe/blob/main/LICENSE)

added_tokens.json Normal file

@@ -0,0 +1,24 @@
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}

chat_template.jinja Normal file

@@ -0,0 +1,54 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}

config.json Normal file

@@ -0,0 +1,59 @@
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": null,
  "dtype": "bfloat16",
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 896,
  "initializer_range": 0.02,
  "intermediate_size": 4864,
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 32768,
  "max_window_layers": 21,
  "model_type": "qwen2",
  "num_attention_heads": 14,
  "num_hidden_layers": 24,
  "num_key_value_heads": 2,
  "pad_token_id": 151665,
  "rms_norm_eps": 1e-06,
  "rope_parameters": {
    "rope_theta": 1000000.0,
    "rope_type": "default"
  },
  "sliding_window": null,
  "tie_word_embeddings": true,
  "transformers_version": "5.3.0",
  "unsloth_fixed": true,
  "unsloth_version": "2026.3.18",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}

generation_config.json Normal file

@@ -0,0 +1,14 @@
{
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "max_length": 32768,
  "pad_token_id": 151665,
  "repetition_penalty": 1.1,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "5.3.0"
}

merges.txt Normal file

File diff suppressed because it is too large

model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f727b16e77f999f83e71751fff065d807d882dc33273b38289580adb996d895a
size 988097824

model_q4_k_m.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26655766ab6d63ef33a023eb486fb0a020aa8fbcd7041a7fdb3347127fbde5d2
size 397807360

special_tokens_map.json Normal file

@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}

tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd5948af71b4f56cf697f7580814c7ce8b80595ef985544efcacf716126a2e31
size 11422356

tokenizer_config.json Normal file

@@ -0,0 +1,15 @@
{
  "add_prefix_space": false,
  "backend": "tokenizers",
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "is_local": false,
  "model_max_length": 32768,
  "pad_token": "<|PAD_TOKEN|>",
  "padding_side": "left",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}

vocab.json Normal file

File diff suppressed because one or more lines are too long