Initialize project; model provided by the ModelHub XC community

Model: DQN-Labs-Community/dqnCode-v1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-22 06:59:46 +08:00
commit 7878db8edd
15 changed files with 286 additions and 0 deletions

48
.gitattributes vendored Normal file

@@ -0,0 +1,48 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.f16.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
dqnCode.png filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
DQN-Code-v1.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text

3
DQN-Code-v1.IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2be6ac23610e0bb53659a08c95529f9ce3632f4d2ad316ec95fab72d7d12e6da
size 2286316704
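Each GGUF entry in this commit is stored as a three-line Git LFS pointer like the one above (version, oid, size) rather than the binary itself. As an illustrative sketch (this parser is hypothetical helper code, not part of the repository), the pointer fields can be read as:

```python
# Parse the three-line Git LFS pointer format shown above (version, oid, size).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:2be6ac23610e0bb53659a08c95529f9ce3632f4d2ad316ec95fab72d7d12e6da\n"
    "size 2286316704\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # the ~2.3 GB file itself lives in LFS storage, not in Git
```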

3
DQN-Code-v1.Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2779015f7a61c49ce4b250740fb3dc73b65454c31e98c861847000334ac561f
size 1669500064

3
DQN-Code-v1.Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05ff990727c242fbb871ace816bceb92a3e989f0daafbfd1f40fe60ea51283b4
size 2239786144

3
DQN-Code-v1.Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82e8fa546090b5582e28c6b52187b9fa79f06adcafa1f6d671282c2ed4ab290a
size 2075618464

3
DQN-Code-v1.Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2867accf3a0f51fd6390659251712dcaa0c6770c0b0ac253fa2b568fe1dc39e8
size 1886997664

3
DQN-Code-v1.Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8a0492165fdfdefa549f408cc993995ed1d0d13cc90fd7cf01223f7f12952ea
size 2497281184

3
DQN-Code-v1.Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84cb0cbb07aa6a4415df6f1b7283cfe31df503fabc4282fc20e019e018b4210d
size 2383309984

3
DQN-Code-v1.Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6dec1af9bb184555d18788c3c19dc9f222bd11391232edef133efdd4fc92dfc5
size 2889514144

3
DQN-Code-v1.Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:295f0075d16c7f18f4dd942631a7f16298adcad4e595e253823ecdcd6b68a02d
size 2823711904

3
DQN-Code-v1.Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40933f02144c52f65613dbef3a8fa549cbade56cdf92222f76d59f9c0f1311f2
size 3306261664

3
DQN-Code-v1.Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d682e9117c4eb20620fb61a7ff599cd939c32ace92667aafdc912ac288bdafc
size 4280405664

3
DQN-Code-v1.f16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc78c11f93f92cbe5f4de9ef6cbfb5e1d0743c327afac0b80f0d5140615de773
size 8051285664

199
README.md Normal file

@@ -0,0 +1,199 @@
---
license: apache-2.0
language:
- en
tags:
- code
- coding
- programming
- reasoning
- small-model
- efficient
- local
- qwen
- qwen3
- qwen3.5
- 4b
- small
- developer
- coding-assistant
- python
- debugging
- daily-use
- localai
- ai
- gpt
- dqnlabs
- dqngpt
- gguf
- lmstudio
- ollama
pipeline_tag: text-generation
---
# dqnCode-v1
dqnCode-v1 is a 4B-parameter language model designed for fast, clear, and practical coding assistance.
It focuses on writing, fixing, and explaining code efficiently, with minimal verbosity and strong real-world usefulness. It is optimized for everyday programming tasks with low latency and concise outputs.
![dqnCode Banner](dqnCode.png)
---
## Benchmark
dqnCode-v1 is positioned as a high-performance compact coding model, with strong results on standard code generation benchmarks. It is trained with simple prompts in mind, so you don't need to be a developer to use it!
### HumanEval
- **pass@1:** 63.4%
This score places dqnCode-v1 among the strongest coding models in the 4B-parameter class, surpassed by only one other model at or below 4B parameters.

| Model | Provider | HumanEval (pass@1) |
|--------------------------|-----------------|--------------------|
| GPT-3.5 Turbo | OpenAI | 68% |
| GPT-4 | OpenAI | 67% |
| dqnCode-v1 (4B)          | DQN Labs        | 63.4%              |
| Phi-3.5-mini-instruct | Microsoft | 62.8% |
| DeepSeek Coder 33B | DeepSeek | 52.4% |
| Gemma 2 27B | Google | 51.8% |
| Nous Hermes 3 405B | Nous Research | 51.4% |
---
## Benchmark Context
- Evaluated on HumanEval (Python code generation benchmark)
- Focused on functional correctness of generated code
- Designed to reflect real-world coding performance in a compact model
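HumanEval's pass@1 metric is the fraction of problems whose first generated sample passes that problem's unit tests. A minimal sketch of this scoring (illustrative only; the real harness sandboxes code execution):

```python
# Toy pass@1 scorer: one generated sample per problem, judged by running its tests.
def sample_passes(code: str, test: str) -> bool:
    env: dict = {}
    try:
        exec(code, env)   # define the generated function
        exec(test, env)   # run the problem's assertions against it
        return True
    except Exception:
        return False

def pass_at_1(outcomes):
    return sum(outcomes) / len(outcomes)

problems = [
    ("def add(a, b):\n    return a + b", "assert add(2, 3) == 5"),
    ("def add(a, b):\n    return a - b", "assert add(2, 3) == 5"),  # wrong sample
]
score = pass_at_1([sample_passes(c, t) for c, t in problems])
print(score)  # 0.5
```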
---
## Positioning
dqnCode-v1 is designed to deliver coding performance comparable to significantly larger models while maintaining a small footprint.
- Approaches GPT-3.5 / GPT-4-level performance on many practical coding tasks
- Achieves strong performance with significantly fewer parameters
- Optimized for fast local inference and real-world development workflows
- Strong across common languages, including HTML, CSS, JavaScript, Python, and Java
## Model Description
- **Model type:** Causal Language Model
- **Parameters:** 4B
- **Primary use:** Coding and programming assistance
- **Style:** Direct code generation with minimal explanation
dqnCode-v1 4B is optimized for practical coding workflows rather than long-form development or benchmark performance.
---
## Intended Uses
### Direct Use
- Writing code snippets
- Debugging errors
- Explaining code logic
- Converting code between languages
- Assisting with small to medium programming tasks
- Competitive programming support
---
## Key Characteristics
- Produces clean and functional code
- Prioritizes correctness and simplicity
- Minimizes unnecessary explanation
- Strong at debugging and fixing issues
- Designed for consistent outputs across common coding tasks
- Excellent instruction following and stability
- Works well with simple prompts, so non-developers can use it too
---
## Example
**Input**
```text
Write a Python function to check if a number is prime
```
**Output**
```python
def is_prime(n):
if n < 2:
return False
for i in range(2, int(n**0.5) + 1):
if n % i == 0:
return False
return True
```
---
**Input**
```text
Fix this code:
for i in range(5)
print(i)
```
**Output**
```python
for i in range(5):
print(i)
```
---
## Usage
This model is available on several platforms and in multiple formats.
The GGUF files in this repository are compatible with llama.cpp and LM Studio.
Other formats include MLX (supported by LM Studio, optimized for Apple devices) and HF (universal compatibility).
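As a sketch, a GGUF file from this repository can be run locally with llama.cpp's CLI. The binary name (`llama-cli`), flags, and prompt below are assumptions that depend on your llama.cpp version; the command is printed here rather than executed, since the model must be downloaded first:

```shell
# Hypothetical local run with llama.cpp; adjust paths and flags to your install.
MODEL="DQN-Code-v1.Q4_K_M.gguf"
PROMPT="Write a Python function to check if a number is prime"
CMD="llama-cli -m $MODEL -n 256 -p \"$PROMPT\""
echo "$CMD"  # run this command after downloading the GGUF file
```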
---
## Training Details
dqnCode-v1 is fine-tuned for practical coding tasks and efficient problem solving.
The training process emphasizes:
- Functional correctness
- Minimal and clean outputs
- Real-world coding scenarios
- Debugging and code repair
---
## Limitations
- Limited performance on very large or complex codebases
- Not optimized for long-form software architecture design
- May simplify explanations rather than provide deep theoretical detail
---
## Efficiency
dqnCode-v1 is designed to run efficiently on consumer hardware, with support for quantized formats.
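The quantized file sizes in this commit give a rough sense of memory needs, since weight storage scales with parameters times bits per weight. A back-of-the-envelope check (illustrative only; it assumes the stated 4B parameter count, and runtime RAM also needs KV cache and overhead):

```python
# Estimate effective bits per weight from a GGUF file size (rough, illustrative).
q4_k_m_bytes = 2_497_281_184   # size of DQN-Code-v1.Q4_K_M.gguf in this commit
n_params = 4e9                 # "4B parameters" per the model card (assumption)

bits_per_weight = q4_k_m_bytes * 8 / n_params
print(round(bits_per_weight, 1))  # ≈ 5.0 bits/weight for Q4_K_M
```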
---
## License
Apache 2.0
---
## Author
Developed by DQN Labs.
Huge thanks to the team at mradermacher for quantizing this model!
This model card was generated with the help of dqnGPT v0.2!

3
dqnCode.png Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab06ee64526465f09ad9ce601d909b3bab4dee57361f308e86cf041d54aca61a
size 610404