---
library_name: transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
---

# DKatiyar-fixed

A fixed version of `yunmorning/broken-model`.

## Changes Made

### 1. Added missing `chat_template` to `tokenizer_config.json`

The original repo had no `chat_template` field. Without it, the serving engine cannot format role-based messages (system, user, assistant) with the special tokens the model expects (`<|im_start|>`, `<|im_end|>`). Raw text is sent to the model instead, producing garbled output, because the model was trained on the ChatML format.

The fix: copied the `chat_template` from the official `Qwen/Qwen3-8B` repository. This template handles system/user/assistant role formatting, tool calls (`<tool_call>`/`</tool_call>`), thinking mode (`<think>`/`</think>`), and the `enable_thinking` toggle.
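For illustration, the layout that the template produces can be sketched in plain Python. This is a simplified stand-in for the actual Jinja `chat_template`, not a replacement for it; the real template additionally handles tool calls and thinking blocks:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render role-based messages in the ChatML layout Qwen models expect.

    Simplified illustration of what the Jinja chat_template emits; the
    real template also covers <tool_call> and <think> blocks.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave the final turn open so the model generates the assistant reply.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
```

Without the template, a serving engine falls back to sending the raw concatenated text, which is exactly the failure mode described above.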

### 2. Corrected `base_model` in README metadata

The original README listed `meta-llama/Meta-Llama-3.1-8B` as the base model. The actual architecture is Qwen3: `config.json` specifies `Qwen3ForCausalLM` with 36 layers, `hidden_size` 4096, 32 attention heads, 8 KV heads, and `vocab_size` 151936. The tokenizer is `Qwen2Tokenizer`. These all match `Qwen/Qwen3-8B`, not Meta-Llama.
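The architecture fields above can be sanity-checked mechanically. A minimal sketch (the expected values are the Qwen3-8B figures listed above; in practice you would `json.load` the repo's `config.json` rather than inline it):

```python
import json

# Expected Qwen3-8B architecture values, as listed above.
EXPECTED = {
    "architectures": ["Qwen3ForCausalLM"],
    "num_hidden_layers": 36,
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
    "vocab_size": 151936,
}

def check_config(config):
    """Return the names of fields that differ from the expected Qwen3-8B values."""
    return [k for k, v in EXPECTED.items() if config.get(k) != v]

# In practice: config = json.load(open("config.json")); inlined here for illustration.
config = json.loads(
    '{"architectures": ["Qwen3ForCausalLM"], "num_hidden_layers": 36,'
    ' "hidden_size": 4096, "num_attention_heads": 32,'
    ' "num_key_value_heads": 8, "vocab_size": 151936}'
)
mismatches = check_config(config)  # empty list when the config matches Qwen3-8B
```

A Llama-3.1-8B config would fail this check on several fields (different architecture name and a 128256-entry vocabulary, among others), which is how the mislabeled `base_model` was caught.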