---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- chat
---
# Vedika Coder
<img src="https://i.ibb.co/KjcX9NcF/Blue-and-Black-Minimalist-Brand-Logo-20260426-113813-0000.png" style="display: inline-block; vertical-align: middle;"/>
## Introduction
Vedika Coder is the latest series of code-specific Vedika language models (formerly known as Code Vedika). The series currently covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Vedika Coder brings the following improvements over Code Vedika:
- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Building on the strong Vedika Coder base model, we scaled the training data up to 5.5 trillion tokens, including source code, text-code grounding data, synthetic data, and more. Vedika Coder has become a state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. The model not only enhances coding capabilities but also maintains its strengths in mathematics and general competencies.
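
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, inference can be sketched with the standard 🤗 Transformers chat workflow. The repo id `"Vedika/Vedika-Coder-3B-Instruct"` below is a placeholder assumption, not a confirmed checkpoint name; substitute the actual model id before running.

```python
def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format expected by apply_chat_template."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": prompt},
    ]


def generate_response(
    prompt: str,
    model_name: str = "Vedika/Vedika-Coder-3B-Instruct",  # placeholder repo id
    max_new_tokens: int = 512,
) -> str:
    """Load the checkpoint and generate a reply to a single-turn coding prompt.

    Imports are deferred so build_chat() stays usable without transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )

    # Render the chat messages into the model's prompt template.
    text = tokenizer.apply_chat_template(
        build_chat(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Strip the prompt tokens so only the newly generated reply is decoded.
    new_tokens = output_ids[0][inputs.input_ids.shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate_response("Write a quick sort algorithm.")` returns the model's generated code as a string.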