Initialize the project; model provided by the ModelHub XC community
Model: DQN-Labs-Community/dqnCode-v1 Source: Original Platform
This commit is contained in:
38	.gitattributes	vendored	Normal file
@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.f16.gguf filter=lfs diff=lfs merge=lfs -text
dqnCode.png filter=lfs diff=lfs merge=lfs -text
3	DQN-Code-v1.Q4_K_M.gguf	Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61060c8c86bd70011a30cfde3d8fdd7d652135dd21a09ef1db9d79b741e63354
size 2497282496
3	DQN-Code-v1.f16.gguf	Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d97db0655ca157a88b6b6e63910dbef2a9c3b81c551ceb651b4a5103637f03a5
size 8051286976
198	README.md	Normal file
@@ -0,0 +1,198 @@
---
license: apache-2.0
language:
- en
tags:
- code
- coding
- programming
- reasoning
- small-model
- efficient
- local
- qwen
- qwen3
- qwen3.5
- 4b
- small
- developer
- coding-assistant
- python
- debugging
- daily-use
- localai
- ai
- gpt
- dqnlabs
- dqngpt
- gguf
- lmstudio
- ollama
pipeline_tag: text-generation
---

# dqnCode-v1

dqnCode-v1 is a 4B-parameter language model designed for fast, clear, and practical coding assistance.

It focuses on writing, fixing, and explaining code efficiently, with minimal verbosity and strong real-world usefulness. It is optimized for everyday programming tasks with low latency and concise outputs.

![dqnCode](dqnCode.png)

---

## Benchmark

dqnCode-v1 is positioned as a high-performance compact coding model, with strong results on standard code-generation benchmarks.

### HumanEval

- **pass@1:** 63.4%

This score places dqnCode-v1 among the strongest models in the 4B-parameter class for coding tasks, surpassed by only one other model at or below 4B parameters.

| Model                 | Provider      | HumanEval (pass@1) |
|-----------------------|---------------|--------------------|
| GPT-3.5 Turbo         | OpenAI        | 68%                |
| GPT-4                 | OpenAI        | 67%                |
| dqnCode v1 (4B)       | DQN Labs      | 63.4%              |
| Phi-3.5-mini-instruct | Microsoft     | 62.8%              |
| DeepSeek Coder 33B    | DeepSeek      | 52.4%              |
| Gemma 2 27B           | Google        | 51.8%              |
| Nous Hermes 3 405B    | Nous Research | 51.4%              |

---

## Benchmark Context

- Evaluated on HumanEval (Python code-generation benchmark)
- Focused on functional correctness of generated code
- Designed to reflect real-world coding performance in a compact model
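
The pass@1 figure above follows the standard unbiased pass@k estimator used with HumanEval; the sample counts below are illustrative, not the model's actual evaluation data. A minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k
    samples, drawn from n generations of which c are correct, passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# With a single sample per problem (n = k = 1), pass@1 reduces to the
# raw fraction of problems solved on the first try.
print(pass_at_k(10, 6, 1))  # 0.6
```

For k = 1 the estimator simplifies to c / n, which is why pass@1 is often reported directly as a percentage of problems solved.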

---

## Positioning

dqnCode-v1 is designed to deliver coding performance comparable to significantly larger models while maintaining a small footprint.

- Approaches GPT-3.5 / GPT-4-level performance on many practical coding tasks
- Achieves strong performance with significantly fewer parameters
- Optimized for fast local inference and real-world development workflows
- Strong across common languages, particularly HTML/CSS/JS, Python, and Java

---

## Model Description

- **Model type:** Causal Language Model
- **Parameters:** 4B
- **Primary use:** Coding and programming assistance
- **Style:** Direct code generation with minimal explanation

dqnCode-v1 4B is optimized for practical coding workflows rather than long-form development or benchmark performance.

---

## Intended Uses

### Direct Use

- Writing code snippets
- Debugging errors
- Explaining code logic
- Converting code between languages
- Assisting with small to medium programming tasks
- Competitive programming support

---

## Key Characteristics

- Produces clean and functional code
- Prioritizes correctness and simplicity
- Minimizes unnecessary explanation
- Strong at debugging and fixing issues
- Designed for consistent outputs across common coding tasks
- Excellent instruction following and stability
- Trained with simple prompts in mind, so you don't need to be a developer to use it

---

## Example

**Input**

```text
Write a Python function to check if a number is prime
```

**Output**

```python
def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True
```

---

**Input**

```text
Fix this code:
for i in range(5)
    print(i)
```

**Output**

```python
for i in range(5):
    print(i)
```

---

## Usage

This model is distributed in several formats and runs on common local-inference platforms.

The GGUF format is compatible with llama.cpp and LM Studio. Other formats include MLX (LM Studio, optimized for Apple devices) and HF (universal compatibility).
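
For Ollama users, a minimal Modelfile sketch is shown below. The file path assumes the quantized GGUF from this repo sits in the working directory, and the temperature value is an illustrative assumption, not a setting published with this model:

```text
# Hypothetical Modelfile: import the quantized GGUF into Ollama.
FROM ./DQN-Code-v1.Q4_K_M.gguf

# A low temperature tends to suit code generation (illustrative value).
PARAMETER temperature 0.2
```

Saved as `Modelfile`, running `ollama create dqncode -f Modelfile` would register the model locally for use with `ollama run dqncode`.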

---

## Training Details

dqnCode-v1 is fine-tuned for practical coding tasks and efficient problem solving.

The training process emphasizes:

- Functional correctness
- Minimal and clean outputs
- Real-world coding scenarios
- Debugging and code repair

---

## Limitations

- Limited performance on very large or complex codebases
- Not optimized for long-form software architecture design
- May simplify explanations rather than provide deep theoretical detail

---

## Efficiency

dqnCode-v1 is designed to run efficiently on consumer hardware, with support for quantized formats.

---

## License

Apache 2.0

---

## Author

Developed by DQN Labs.

This model card was generated with the help of dqnGPT v0.2.
3	dqnCode.png	Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab06ee64526465f09ad9ce601d909b3bab4dee57361f308e86cf041d54aca61a
size 610404