Initialize project; model provided by the ModelHub XC community
Model: DQN-Labs-Community/dqnCode-v1 Source: Original Platform
---
license: apache-2.0
language:
- en
tags:
- code
- coding
- programming
- reasoning
- small-model
- efficient
- local
- qwen
- qwen3
- qwen3.5
- 4b
- small
- developer
- coding-assistant
- python
- debugging
- daily-use
- localai
- ai
- gpt
- dqnlabs
- dqngpt
- gguf
- lmstudio
- ollama
pipeline_tag: text-generation
---

# dqnCode-v1

dqnCode-v1 is a 4B-parameter language model designed for fast, clear, and practical coding assistance.

It focuses on writing, fixing, and explaining code efficiently, with minimal verbosity and strong real-world usefulness. It is optimized for everyday programming tasks with low latency and concise outputs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/68ad3b448413c07eca02b3a8/kHG3GBcdJ4uRkPLgibrBR.png)

---

## Benchmark

dqnCode-v1 is positioned as a high-performance compact coding model, with strong results on standard code generation benchmarks. It is trained with simple prompts in mind, so you don't need to be a developer to use it!

### HumanEval

- **pass@1:** 63.4%

This score places dqnCode-v1 among the strongest models in the 4B-parameter class for coding tasks, surpassed by only one other model at or below 4B parameters.

| Model                 | Provider      | HumanEval (pass@1) |
|-----------------------|---------------|--------------------|
| GPT-3.5 Turbo         | OpenAI        | 68%                |
| GPT-4                 | OpenAI        | 67%                |
| dqnCode v1 (4B)       | DQN Labs      | 63.4%              |
| Phi-3.5-mini-instruct | Microsoft     | 62.8%              |
| DeepSeek Coder 33B    | DeepSeek      | 52.4%              |
| Gemma 2 27B           | Google        | 51.8%              |
| Nous Hermes 3 405B    | Nous Research | 51.4%              |

---

## Benchmark Context

- Evaluated on HumanEval (Python code generation benchmark)
- Focused on functional correctness of generated code
- Designed to reflect real-world coding performance in a compact model

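For reference, HumanEval's pass@k is conventionally computed with the unbiased estimator from the original benchmark paper (Chen et al., 2021). A minimal sketch of that estimator, independent of this model's actual evaluation harness:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn from n generations of which c are correct, passes."""
    if n - c < k:
        # Fewer incorrect samples than k: some correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 7 of which pass the unit tests.
print(pass_at_k(10, 7, 1))  # pass@1 estimate for that problem
```

A benchmark score like 63.4% pass@1 is this quantity averaged over all 164 HumanEval problems.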
---

## Positioning

dqnCode-v1 is designed to deliver coding performance comparable to significantly larger models while maintaining a small footprint.

- Approaches GPT-3.5 / GPT-4-level performance on many practical coding tasks
- Achieves strong performance with significantly fewer parameters
- Optimized for fast local inference and real-world development workflows
- Strong across common languages, including HTML, CSS, JavaScript, Python, and Java

## Model Description

- **Model type:** Causal Language Model
- **Parameters:** 4B
- **Primary use:** Coding and programming assistance
- **Style:** Direct code generation with minimal explanation

dqnCode-v1 4B is optimized for practical coding workflows rather than long-form development or benchmark performance.

---

## Intended Uses

### Direct Use

- Writing code snippets
- Debugging errors
- Explaining code logic
- Converting code between languages
- Assisting with small to medium programming tasks
- Competitive programming support

---

## Key Characteristics

- Produces clean and functional code
- Prioritizes correctness and simplicity
- Minimizes unnecessary explanation
- Strong at debugging and fixing issues
- Designed for consistent outputs across common coding tasks
- Excellent instruction following and stability

---

## Example

**Input**
```text
Write a Python function to check if a number is prime
```

**Output**
```python
def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True
```

---

**Input**
```text
Fix this code:
for i in range(5)
print(i)
```

**Output**
```python
for i in range(5):
    print(i)
```

---

## Usage

This model is available on many platforms and is compatible with multiple formats.

The GGUF format is compatible with llama.cpp, LM Studio, and Ollama.
Other formats include MLX (LM Studio, optimized for Apple devices) and HF (universal compatibility).
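As a sketch, a GGUF build could be run locally with llama.cpp or Ollama; the file name and model tag below are hypothetical placeholders, not confirmed published artifacts:

```shell
# Run a quantized GGUF build with llama.cpp (file name is a hypothetical placeholder)
llama-cli -m dqnCode-v1-Q4_K_M.gguf -p "Write a Python function to reverse a string" -n 256

# Or run through Ollama (model tag is a hypothetical placeholder)
ollama run dqncode-v1 "Write a Python function to reverse a string"
```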
---

## Training Details

dqnCode-v1 is fine-tuned for practical coding tasks and efficient problem solving.

The training process emphasizes:

- Functional correctness
- Minimal and clean outputs
- Real-world coding scenarios
- Debugging and code repair

---

## Limitations

- Limited performance on very large or complex codebases
- Not optimized for long-form software architecture design
- May simplify explanations rather than provide deep theoretical detail

---

## Efficiency

dqnCode-v1 is designed to run efficiently on consumer hardware, with support for quantized formats.
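As a rough sizing sketch of what "consumer hardware" means here (the bits-per-weight figures are approximate GGUF conventions, not published numbers for this model), the memory needed just for the weights of a 4B-parameter model can be estimated as:

```python
# Approximate weight memory for a 4B-parameter model at common
# quantization levels. Activations and the KV cache need extra memory.
PARAMS = 4e9  # parameter count

def weight_gb(bits_per_weight: float) -> float:
    """Gigabytes occupied by the weights alone at a given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

# Bits-per-weight values are approximate for these formats.
for name, bpw in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{name}: ~{weight_gb(bpw):.1f} GB")
```

At 4-bit quantization the weights fit in roughly 2-3 GB, which is why a 4B model is practical on typical laptops.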
---

## License

Apache 2.0

---

## Author

Developed by DQN Labs.

This model card was generated with the help of dqnGPT v0.2!