---
license: other
library_name: transformers
base_model:
- Qwen/Qwen3-0.6B
tags:
- qwen3
- code
- coder
- reasoning
- transformers
- safetensors
- withinusai
language:
- en
datasets:
- microsoft/rStar-Coder
- open-r1/codeforces-cots
- nvidia/OpenCodeReasoning
- patrickfleith/instruction-freak-reasoning
pipeline_tag: text-generation
---

# Qwen3-0.6B-Qrazy-Qoder

**Qwen3-0.6B-Qrazy-Qoder** is a compact coding- and reasoning-oriented language model released by **WithIn Us AI**, built on **`Qwen/Qwen3-0.6B`** and packaged as a standard **Transformers** checkpoint in **Safetensors** format.

It is intended for lightweight coding assistance, reasoning-style prompt workflows, and compact local or hosted inference where a small model footprint matters.
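Because the checkpoint ships in standard Transformers/Safetensors format, it can be loaded with the usual `AutoModelForCausalLM` API. A minimal sketch, assuming the Hub repo id `WithinUsAI/Qwen3-Qrazy.Qoder-0.6B` (taken from the release metadata; verify against the actual listing) and a Qwen3-style chat template:

```python
MODEL_ID = "WithinUsAI/Qwen3-Qrazy.Qoder-0.6B"  # from the release metadata; verify before use

def build_messages(task: str) -> list[dict]:
    """Single-turn chat message list in the shape `apply_chat_template` expects."""
    return [{"role": "user", "content": task}]

def generate(task: str, max_new_tokens: int = 256) -> str:
    """Download the checkpoint and run one generation.

    Requires `transformers` and `torch`; imports are kept local so the
    helpers above stay importable without them.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer.apply_chat_template(
        build_messages(task), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `generate("Write a Python function that reverses a string.")` would return the model's reply; at the 0.6B scale, CPU-only inference remains practical.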

## Model Summary

This model is designed for:

- code generation
- code explanation
- debugging assistance
- reasoning-oriented coding prompts
- implementation planning
- compact instruction following
- lightweight developer assistant workflows

Because this is a **0.6B-class** model, it is best suited for fast, smaller-scope tasks rather than deep long-context reasoning or large multi-file engineering work.
## Base Model

This model is based on:

- **`Qwen/Qwen3-0.6B`**
## Training Data / Dataset Lineage

The current repository README metadata lists the following datasets:

- **`microsoft/rStar-Coder`**
- **`open-r1/codeforces-cots`**
- **`nvidia/OpenCodeReasoning`**
- **`patrickfleith/instruction-freak-reasoning`**

These datasets suggest a blend of:

- code-focused supervision
- competitive-programming-style reasoning
- reasoning-oriented coding data
- instruction-style reasoning prompts
## Intended Use

Recommended use cases include:

- compact coding assistant experiments
- short code generation tasks
- debugging suggestions
- developer Q&A
- reasoning-style technical prompting
- local inference on limited hardware
- lightweight software workflow support
## Suggested Use Cases

This model can be useful for:

- generating short utility functions
- explaining code snippets
- proposing fixes for common bugs
- creating small implementation plans
- answering structured coding questions
- drafting concise technical responses
## Out-of-Scope Use

This model should not be relied on for:

- legal advice
- medical advice
- financial advice
- safety-critical automation
- autonomous production engineering without review
- security-critical code without expert validation

All generated code should be reviewed, tested, and validated before use.
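One lightweight way to act on that advice is to wrap any model-generated function in a few sanity assertions before adopting it. A minimal sketch, where `slugify` is an illustrative stand-in for whatever code the model produced (not actual model output):

```python
import re

# Illustrative stand-in for a model-generated helper under review.
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Minimal checks to run before trusting the generated code.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Qwen3 0.6B  ") == "qwen3-0-6b"
```

A handful of assertions like these will not prove correctness, but they catch the most common failure mode of small models: code that looks plausible and fails on the first real input.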

## Repository Contents

The repository currently includes standard Hugging Face model assets such as:

- `README.md`
- `.gitattributes`
- `added_tokens.json`
- `config.json`
- `mergekit_config.yml`
- `merges.txt`
- `model.safetensors`
- `special_tokens_map.json`
- `tokenizer.json`
- `tokenizer_config.json`
## Prompting Guidance

This model generally works best when prompts are:

- direct
- scoped to one task
- explicit about the language or framework
- clear about whether code, explanation, or both are wanted
- structured when reasoning is needed

### Example prompt styles

**Code generation**

> Write a Python function that removes duplicate records from a JSON list using the `id` field.

**Debugging**

> Explain why this JavaScript function returns `undefined` and provide a corrected version.

**Reasoning-oriented coding**

> Compare two approaches for caching API responses in Python and recommend one.

**Implementation planning**

> Create a step-by-step plan for building a small Flask API with authentication and tests.
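For reference, one reasonable answer to the code-generation prompt above looks like the following; this is a hand-written baseline to compare model output against, not output from this model:

```python
def remove_duplicates(records: list[dict]) -> list[dict]:
    """Remove duplicate records from a list of JSON-style dicts,
    keeping the first occurrence of each `id` and preserving order."""
    seen = set()
    unique = []
    for record in records:
        if record["id"] not in seen:
            seen.add(record["id"])
            unique.append(record)
    return unique
```

Comparing a small model's answer against a known-good baseline like this is a quick way to gauge whether its output is worth iterating on.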

## Strengths

This model may be especially useful for:

- compact coding workflows
- lightweight reasoning prompts
- low-resource deployments
- quick iteration
- structured developer assistance
- small local inference setups
## Limitations

Like other compact language models, this model may:

- hallucinate APIs or library behavior
- generate incomplete or incorrect code
- struggle with long-context tasks
- make reasoning mistakes on harder prompts
- require prompt iteration for best results
- underperform larger coding models on advanced engineering tasks

Human review is strongly recommended.
## Attribution

**WithIn Us AI** is the publisher of this model release.

Credit for upstream assets remains with their original creators, including:

- **Qwen** for **`Qwen/Qwen3-0.6B`**
- **Microsoft** for **`microsoft/rStar-Coder`**
- the creators of **`open-r1/codeforces-cots`**
- **NVIDIA** for **`nvidia/OpenCodeReasoning`**
- **patrickfleith** for **`patrickfleith/instruction-freak-reasoning`**
## License

This draft uses:

- `license: other`

If you maintain this repo, replace this with the exact license terms you want displayed and ensure they align with any upstream licensing requirements.
## Acknowledgments

Thanks to:

- **WithIn Us AI**
- **Qwen**
- **Microsoft**
- **NVIDIA**
- the dataset creators listed above
- the Hugging Face ecosystem
- the broader open-source AI community
## Disclaimer

This model may produce inaccurate, insecure, incomplete, or misleading outputs. All important generations, especially code and technical guidance, should be reviewed and tested before real-world use.