Initialize the project; model provided by the ModelHub XC community

Model: WithinUsAI/Qwen3-Qrazy.Qoder-0.6B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-14 17:24:36 +08:00
commit 982d9ce742
11 changed files with 649 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,53 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
merges.txt filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,199 @@
---
license: other
library_name: transformers
base_model:
- Qwen/Qwen3-0.6B
tags:
- qwen3
- code
- coder
- reasoning
- transformers
- safetensors
- withinusai
language:
- en
datasets:
- microsoft/rStar-Coder
- open-r1/codeforces-cots
- nvidia/OpenCodeReasoning
- patrickfleith/instruction-freak-reasoning
pipeline_tag: text-generation
---
# Qwen3-0.6B-Qrazy-Qoder
**Qwen3-0.6B-Qrazy-Qoder** is a compact coding- and reasoning-oriented language model release from **WithIn Us AI**, built on top of **`Qwen/Qwen3-0.6B`** and packaged as a standard **Transformers** checkpoint in **Safetensors** format.
This model is intended for lightweight coding assistance, reasoning-style prompt workflows, and compact local or hosted inference where a small model footprint is important.
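The checkpoint loads with the standard Transformers API (`AutoTokenizer` / `AutoModelForCausalLM`). The sketch below shows only the ChatML-style prompt layout implied by this repo's special tokens (`<|im_start|>`, `<|im_end|>` from `added_tokens.json`); for actual inference, prefer `tokenizer.apply_chat_template`, which renders this format (including Qwen3's `<think>` handling) for you. The helper function is illustrative, not part of the release.

```python
# Sketch of the ChatML-style prompt layout implied by this repo's special
# tokens. In practice, use tokenizer.apply_chat_template instead.
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to start its reply
    return "".join(parts)

prompt = build_chatml_prompt(
    [{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(prompt)
```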
## Model Summary
This model is designed for:
- code generation
- code explanation
- debugging assistance
- reasoning-oriented coding prompts
- implementation planning
- compact instruction following
- lightweight developer assistant workflows
Because this is a **0.6B-class** model, it is best suited for fast, smaller-scope tasks rather than deep long-context reasoning or large multi-file engineering work.
## Base Model
This model is based on:
- **`Qwen/Qwen3-0.6B`**
## Training Data / Dataset Lineage
The repository metadata lists the following datasets:
- **`microsoft/rStar-Coder`**
- **`open-r1/codeforces-cots`**
- **`nvidia/OpenCodeReasoning`**
- **`patrickfleith/instruction-freak-reasoning`**
These datasets suggest a blend of:
- code-focused supervision
- competitive-programming-style reasoning
- reasoning-oriented coding data
- instruction-style reasoning prompts
## Intended Use
Recommended use cases include:
- compact coding assistant experiments
- short code generation tasks
- debugging suggestions
- developer Q&A
- reasoning-style technical prompting
- local inference on limited hardware
- lightweight software workflow support
## Suggested Use Cases
This model can be useful for:
- generating short utility functions
- explaining code snippets
- proposing fixes for common bugs
- creating small implementation plans
- answering structured coding questions
- drafting concise technical responses
## Out-of-Scope Use
This model should not be relied on for:
- legal advice
- medical advice
- financial advice
- safety-critical automation
- autonomous production engineering without review
- security-critical code without expert validation
All generated code should be reviewed, tested, and validated before use.
## Repository Contents
The repository currently includes standard Hugging Face model assets such as:
- `README.md`
- `.gitattributes`
- `added_tokens.json`
- `config.json`
- `mergekit_config.yml`
- `merges.txt`
- `model.safetensors`
- `special_tokens_map.json`
- `tokenizer.json`
- `tokenizer_config.json`
## Prompting Guidance
This model generally works best when prompts are:
- direct
- scoped to one task
- explicit about the language or framework
- clear about whether code, explanation, or both are wanted
- structured when reasoning is needed
### Example prompt styles
**Code generation**
> Write a Python function that removes duplicate records from a JSON list using the `id` field.
**Debugging**
> Explain why this JavaScript function returns `undefined` and provide a corrected version.
**Reasoning-oriented coding**
> Compare two approaches for caching API responses in Python and recommend one.
**Implementation planning**
> Create a step-by-step plan for building a small Flask API with authentication and tests.
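The guidance above (direct, single-task, explicit about language and desired output) can be folded into a small prompt-building helper. The function below is purely illustrative, not part of this release:

```python
# Illustrative helper: assemble a scoped, explicit prompt following the
# prompting guidance above (one task, named language, stated output form).
def build_prompt(task, language=None, want="code and a short explanation"):
    parts = [task.strip().rstrip(".") + "."]
    if language:
        parts.append(f"Use {language}.")
    parts.append(f"Respond with {want}.")
    return " ".join(parts)

print(build_prompt(
    "Remove duplicate records from a JSON list using the `id` field",
    language="Python",
))
```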
## Strengths
This model may be especially useful for:
- compact coding workflows
- lightweight reasoning prompts
- low-resource deployments
- quick iteration
- structured developer assistance
- small local inference setups
## Limitations
Like other compact language models, this model may:
- hallucinate APIs or library behavior
- generate incomplete or incorrect code
- struggle with long-context tasks
- make reasoning mistakes on harder prompts
- require prompt iteration for best results
- underperform larger coding models on advanced engineering tasks
Human review is strongly recommended.
## Attribution
**WithIn Us AI** is the publisher of this model release.
Credit for upstream assets remains with their original creators, including:
- **Qwen** for **`Qwen/Qwen3-0.6B`**
- **Microsoft** for **`microsoft/rStar-Coder`**
- the creators of **`open-r1/codeforces-cots`**
- **NVIDIA** for **`nvidia/OpenCodeReasoning`**
- **patrickfleith** for **`patrickfleith/instruction-freak-reasoning`**
## License
This model card currently declares:
- `license: other`
If you maintain this repo, replace this with the exact license terms you want displayed and ensure they align with any upstream licensing requirements.
## Acknowledgments
Thanks to:
- **WithIn Us AI**
- **Qwen**
- **Microsoft**
- **NVIDIA**
- the dataset creators listed above
- the Hugging Face ecosystem
- the broader open-source AI community
## Disclaimer
This model may produce inaccurate, insecure, incomplete, or misleading outputs. All important generations, especially code and technical guidance, should be reviewed and tested before real-world use.

added_tokens.json Normal file

@@ -0,0 +1,28 @@
{
"</think>": 151668,
"</tool_call>": 151658,
"</tool_response>": 151666,
"<think>": 151667,
"<tool_call>": 151657,
"<tool_response>": 151665,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}

config.json Normal file

@@ -0,0 +1,60 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"dtype": "float16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 40960,
"max_window_layers": 28,
"model_type": "qwen3",
"num_attention_heads": 16,
"num_hidden_layers": 28,
"num_key_value_heads": 8,
"pad_token_id": 151643,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000,
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "4.57.6",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}
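A few quantities implied by the config above (a sketch; the numbers are copied directly from this `config.json`): with 16 query heads and 8 KV heads, each KV head serves 2 query heads (grouped-query attention), and the explicit `head_dim` of 128 makes the query projection 16 × 128 = 2048 wide, larger than `hidden_size` itself.

```python
# Derived attention-geometry numbers; values copied from config.json above.
cfg = {
    "hidden_size": 1024,
    "num_attention_heads": 16,
    "num_key_value_heads": 8,
    "head_dim": 128,
}

gqa_group = cfg["num_attention_heads"] // cfg["num_key_value_heads"]  # query heads per KV head
q_width = cfg["num_attention_heads"] * cfg["head_dim"]    # query projection output width
kv_width = cfg["num_key_value_heads"] * cfg["head_dim"]   # key/value projection output width

print(gqa_group, q_width, kv_width)  # → 2 2048 1024
```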

configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

mergekit_config.yml Normal file

@@ -0,0 +1,24 @@
base_model: C:/Users/GSS1147/Desktop/WithinUs_CPU_Hybr
dtype: float16
merge_method: slerp
parameters:
t:
- filter: embed_tokens
value: 0.0
- filter: self_attn
value: 0.5
- filter: mlp
value: 0.5
- filter: lm_head
value: 1.0
- value: 0.5
slices:
- sources:
- layer_range:
- 0
- 28
model: C:/Users/GSS1147/Desktop/WithinUs_CPU_Hybr
- layer_range:
- 0
- 28
model: C:/Users/GSS1147/Desktop/WithinUs_CPU_Hybrid
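The `merge_method` above is slerp (spherical linear interpolation): each parameter tensor of the two source models is interpolated along the arc between them, with the per-filter `t` values controlling the blend (0.0 keeps the base `embed_tokens`, 1.0 takes the other model's `lm_head`, 0.5 elsewhere). A minimal pure-Python sketch of slerp on flat vectors, illustrative only and not mergekit's actual implementation:

```python
import math

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between vectors a and b at fraction t."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_omega = max(-1.0, min(1.0, dot / (na * nb)))
    omega = math.acos(cos_omega)
    if abs(math.sin(omega)) < eps:  # nearly (anti)parallel: fall back to lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * x + s1 * y for x, y in zip(a, b)]

print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # midpoint on the unit circle
```

Unlike plain linear interpolation, slerp preserves vector norm for unit vectors, which is one reason it is a popular choice for weight merging.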

merges.txt Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8831e4f1a044471340f7c0a83d7bd71306a5b867e95fd870f74d0c5308a904d5
size 1671853

model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93ced98187b5fb275be3cba180e4a8e120ac152bd52d251281625c13ff8a4df1
size 1192134784

special_tokens_map.json Normal file

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67cc0080ffd7555f723f423c27cfef314e1ad9d335c8b79f465c5faba1ed478b
size 11422821

tokenizer_config.json Normal file

@@ -0,0 +1,244 @@
{
"add_bos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151665": {
"content": "<tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151666": {
"content": "</tool_response>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151667": {
"content": "<think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151668": {
"content": "</think>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"extra_special_tokens": {},
"max_length": null,
"model_max_length": 131072,
"pad_to_multiple_of": null,
"pad_token": "<|endoftext|>",
"pad_token_type_id": 0,
"padding_side": "left",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"truncation_side": "left",
"unk_token": null
}