From 42af9ce8ea7640a5f92fe2cde043b18f5272cd88 Mon Sep 17 00:00:00 2001 From: ModelHub XC Date: Sat, 11 Apr 2026 08:05:57 +0800 Subject: [PATCH] Initialize project; model provided by the ModelHub XC community MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Model: DQN-Labs-Community/dqnCode-v1 Source: Original Platform --- .gitattributes | 38 ++++++++ DQN-Code-v1.Q4_K_M.gguf | 3 + DQN-Code-v1.f16.gguf | 3 + README.md | 198 ++++++++++++++++++++++++++++++++++++++++ dqnCode.png | 3 + 5 files changed, 245 insertions(+) create mode 100644 .gitattributes create mode 100644 DQN-Code-v1.Q4_K_M.gguf create mode 100644 DQN-Code-v1.f16.gguf create mode 100644 README.md create mode 100644 dqnCode.png diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..eb7d687 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,38 @@ +*.7z filter=lfs diff=lfs merge=lfs -text +*.arrow filter=lfs diff=lfs merge=lfs -text +*.bin filter=lfs diff=lfs merge=lfs -text +*.bz2 filter=lfs diff=lfs merge=lfs -text +*.ckpt filter=lfs diff=lfs merge=lfs -text +*.ftz filter=lfs diff=lfs merge=lfs -text +*.gz filter=lfs diff=lfs merge=lfs -text +*.h5 filter=lfs diff=lfs merge=lfs -text +*.joblib filter=lfs diff=lfs merge=lfs -text +*.lfs.* filter=lfs diff=lfs merge=lfs -text +*.mlmodel filter=lfs diff=lfs merge=lfs -text +*.model filter=lfs diff=lfs merge=lfs -text +*.msgpack filter=lfs diff=lfs merge=lfs -text +*.npy filter=lfs diff=lfs merge=lfs -text +*.npz filter=lfs diff=lfs merge=lfs -text +*.onnx filter=lfs diff=lfs merge=lfs -text +*.ot filter=lfs diff=lfs merge=lfs -text +*.parquet filter=lfs diff=lfs merge=lfs -text +*.pb filter=lfs diff=lfs merge=lfs -text +*.pickle filter=lfs diff=lfs merge=lfs -text +*.pkl filter=lfs diff=lfs merge=lfs -text +*.pt filter=lfs diff=lfs merge=lfs -text +*.pth filter=lfs diff=lfs merge=lfs -text +*.rar
filter=lfs diff=lfs merge=lfs -text +*.safetensors filter=lfs diff=lfs merge=lfs -text +saved_model/**/* filter=lfs diff=lfs merge=lfs -text +*.tar.* filter=lfs diff=lfs merge=lfs -text +*.tar filter=lfs diff=lfs merge=lfs -text +*.tflite filter=lfs diff=lfs merge=lfs -text +*.tgz filter=lfs diff=lfs merge=lfs -text +*.wasm filter=lfs diff=lfs merge=lfs -text +*.xz filter=lfs diff=lfs merge=lfs -text +*.zip filter=lfs diff=lfs merge=lfs -text +*.zst filter=lfs diff=lfs merge=lfs -text +*tfevents* filter=lfs diff=lfs merge=lfs -text +DQN-Code-v1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text +DQN-Code-v1.f16.gguf filter=lfs diff=lfs merge=lfs -text +dqnCode.png filter=lfs diff=lfs merge=lfs -text diff --git a/DQN-Code-v1.Q4_K_M.gguf b/DQN-Code-v1.Q4_K_M.gguf new file mode 100644 index 0000000..99e1841 --- /dev/null +++ b/DQN-Code-v1.Q4_K_M.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61060c8c86bd70011a30cfde3d8fdd7d652135dd21a09ef1db9d79b741e63354 +size 2497282496 diff --git a/DQN-Code-v1.f16.gguf b/DQN-Code-v1.f16.gguf new file mode 100644 index 0000000..a71828f --- /dev/null +++ b/DQN-Code-v1.f16.gguf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d97db0655ca157a88b6b6e63910dbef2a9c3b81c551ceb651b4a5103637f03a5 +size 8051286976 diff --git a/README.md b/README.md new file mode 100644 index 0000000..0b64451 --- /dev/null +++ b/README.md @@ -0,0 +1,198 @@ +--- +license: apache-2.0 +language: +- en +tags: +- code +- coding +- programming +- reasoning +- small-model +- efficient +- local +- qwen +- qwen3 +- qwen3.5 +- 4b +- small +- developer +- coding-assistant +- python +- debugging +- daily-use +- localai +- ai +- gpt +- dqnlabs +- dqngpt +- gguf +- lmstudio +- ollama +pipeline_tag: text-generation +--- + +# dqnCode-v1 + +dqnCode-v1 is a 4B-parameter language model designed for fast, clear, and practical coding assistance. 
+ +It focuses on writing, fixing, and explaining code efficiently, with minimal verbosity and strong real-world usefulness. It is optimized for everyday programming tasks with low latency and concise outputs. + +![dqnCode Banner](dqnCode.png) + +--- + +## Benchmark + +dqnCode-v1 is positioned as a high-performance compact coding model, with strong results on standard code generation benchmarks. It is trained with simple prompts in mind, so you don't need to be a developer to use it! + +### HumanEval + +- **pass@1:** 63.4% + +This score places dqnCode-v1 among the strongest models in the 4B-parameter class for coding tasks, surpassed by only one other model at or below 4B parameters. + +| Model | Provider | HumanEval (pass@1) | +|--------------------------|-----------------|--------------------| +| GPT-3.5 Turbo | OpenAI | 68% | +| GPT-4 | OpenAI | 67% | +| dqnCode v1 (4B) | DQN Labs | 63.4% | +| Phi-3.5-mini-instruct | Microsoft | 62.8% | +| DeepSeek Coder 33B | DeepSeek | 52.4% | +| Gemma 2 27B | Google | 51.8% | +| Nous Hermes 3 405B | Nous Research | 51.4% | + +--- + +## Benchmark Context + +- Evaluated on HumanEval (Python code generation benchmark) +- Focused on functional correctness of generated code +- Designed to reflect real-world coding performance in a compact model + +--- + +## Positioning + +dqnCode-v1 is designed to deliver coding performance comparable to significantly larger models while maintaining a small footprint. + +- Approaches GPT-3.5 / GPT-4-level performance on many practical coding tasks +- Achieves strong performance with significantly fewer parameters +- Optimized for fast local inference and real-world development workflows +- Strong across many programming languages, including HTML/CSS/JS, Python, and Java
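For context on how the pass@1 figure above is read: HumanEval results are conventionally computed with the unbiased pass@k estimator introduced alongside the benchmark. The sketch below is generic and not specific to dqnCode-v1's evaluation harness; the sample counts in the example are illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n = samples generated per problem, c = samples passing all unit
    tests, k = draw budget. Returns the expected probability that at
    least one of k drawn samples is correct.
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k draw contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 the estimator reduces to the raw pass rate c / n
# (illustrative counts, not dqnCode-v1's actual sample numbers):
print(round(pass_at_k(200, 127, 1), 3))  # 0.635
```

A reported pass@1 such as the 63.4% above is therefore simply the fraction of generated solutions that pass the benchmark's unit tests.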
+ + +## Model Description + +- **Model type:** Causal Language Model +- **Parameters:** 4B +- **Primary use:** Coding and programming assistance +- **Style:** Direct code generation with minimal explanation + +dqnCode-v1 4B is optimized for practical coding workflows rather than long-form software development. + +--- + +## Intended Uses + +### Direct Use + +- Writing code snippets +- Debugging errors +- Explaining code logic +- Converting code between languages +- Assisting with small to medium programming tasks +- Competitive programming support + +--- + +## Key Characteristics + +- Produces clean and functional code +- Prioritizes correctness and simplicity +- Minimizes unnecessary explanation +- Strong at debugging and fixing issues +- Designed for consistent outputs across common coding tasks +- Excellent instruction following and stability +- Usable with simple, plain-language prompts, so you don't need to be a developer + +--- + +## Example + +**Input** +```text +Write a Python function to check if a number is prime +``` + +**Output** +```python +def is_prime(n): + if n < 2: + return False + for i in range(2, int(n**0.5) + 1): + if n % i == 0: + return False + return True +``` + +--- + +**Input** +```text +Fix this code: +for i in range(5) + print(i) +``` + +**Output** +```python +for i in range(5): + print(i) +``` + +--- + +## Usage + +The model is distributed in several formats across common platforms. + +The GGUF builds in this repository are compatible with llama.cpp and LM Studio. Other formats include MLX (LM Studio, optimized for Apple devices) and standard HF weights for the broadest compatibility. + +--- + +## Training Details + +dqnCode-v1 is fine-tuned for practical coding tasks and efficient problem solving.
+ +The training process emphasizes: + +- Functional correctness +- Minimal and clean outputs +- Real-world coding scenarios +- Debugging and code repair + +--- + +## Limitations + +- Limited performance on very large or complex codebases +- Not optimized for long-form software architecture design +- May simplify explanations rather than provide deep theoretical detail + +--- + +## Efficiency + +dqnCode-v1 is designed to run efficiently on consumer hardware, with support for quantized formats. + +--- + +## License + +Apache 2.0 + +--- + +## Author + +Developed by DQN Labs. +This model card was generated with the help of dqnGPT v0.2! \ No newline at end of file diff --git a/dqnCode.png b/dqnCode.png new file mode 100644 index 0000000..f2ce310 --- /dev/null +++ b/dqnCode.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab06ee64526465f09ad9ce601d909b3bab4dee57361f308e86cf041d54aca61a +size 610404