Initialize the project; model provided by the ModelHub XC community.
Model: TIGER-Lab/VisCoder2-3B · Source: Original Platform
---
license: apache-2.0
datasets:
- TIGER-Lab/VisCode-Multi-679K
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
library_name: transformers
language:
- en
tags:
- code
---
# VisCoder2-3B

[🏠 Project Page](https://tiger-ai-lab.github.io/VisCoder2) | [📖 Paper](https://arxiv.org/abs/2510.23642) | [💻 GitHub](https://github.com/TIGER-AI-Lab/VisCoder2) | [🤗 VisCode2](https://hf.co/collections/TIGER-Lab/viscoder2)

**VisCoder2-3B** is a lightweight multi-language visualization coding model trained for **executable code generation, rendering, and iterative self-debugging**.

---
## 🧠 Model Description

**VisCoder2-3B** is trained on the **VisCode-Multi-679K** dataset, a large-scale instruction-tuning dataset for executable visualization tasks across **12 programming languages**. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces semantically consistent visual outputs, by aligning natural-language instructions with rendered results.
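A minimal inference sketch with 🤗 Transformers (the library this card declares) is shown below. The prompt, sampling settings, and use of the chat template are illustrative assumptions, not the official evaluation setup.

```python
# Minimal sketch: load VisCoder2-3B and ask for visualization code.
# The prompt and generation settings are illustrative assumptions,
# not the official evaluation configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Write matplotlib code that plots y = x**2 for x in [0, 10]."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```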
---
## 📊 Main Results on VisPlotBench

We evaluate VisCoder2-3B on [**VisPlotBench**](https://huggingface.co/datasets/TIGER-Lab/VisPlotBench), which includes 888 executable visualization tasks spanning 8 languages and supports both standard generation and multi-turn self-debugging.
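The benchmark can be pulled with the `datasets` library to inspect the evaluation inputs; the split name used here is an assumption, so check the dataset card for the actual schema.

```python
# Minimal sketch: inspect VisPlotBench with 🤗 Datasets.
# The split name ("test") is an assumption; see the dataset card for the real schema.
from datasets import load_dataset

bench = load_dataset("TIGER-Lab/VisPlotBench", split="test")
print(len(bench))       # number of tasks
print(bench[0].keys())  # available fields per task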


> **VisCoder2-3B** shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
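The multi-round self-debug setting is conceptually simple: execute the generated script, and if it fails, feed the traceback back to the model for a revised attempt. Below is a minimal sketch of such a loop; `generate(messages)` stands in for a model call (e.g., the inference snippet above), and the round budget and feedback prompt are assumptions rather than the benchmark's exact protocol.

```python
# Minimal sketch of a multi-round self-debug loop.
# `generate(messages)` is a placeholder for a model call; the round budget
# and feedback prompt are assumptions, not the exact VisPlotBench protocol.
import subprocess
import tempfile

def run_script(code: str) -> tuple[bool, str]:
    """Execute candidate code in a fresh interpreter; return (ok, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=60
    )
    return proc.returncode == 0, proc.stderr

def self_debug(task: str, generate, max_rounds: int = 3) -> str | None:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        code = generate(messages)
        ok, err = run_script(code)
        if ok:
            return code  # executable candidate found
        # Feed the traceback back for a revised attempt.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"The code failed with:\n{err}\nPlease fix it."},
        ]
    return None
```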
---

## 📁 Training Details

- **Base model**: Qwen2.5-Coder-3B-Instruct
- **Framework**: [ms-swift](https://github.com/modelscope/swift)
- **Tuning method**: Full-parameter supervised fine-tuning (SFT)
- **Dataset**: [VisCode-Multi-679K](https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K)

---
## 📖 Citation

If you use VisCoder2-3B or the related datasets in your research, please cite:

```bibtex
@article{ni2025viscoder2,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Ni, Yuansheng and Cai, Songcheng and Chen, Xiangchao and Liang, Jiarong and Lyu, Zhiheng and Deng, Jiaqi and Zou, Kai and Nie, Ping and Yuan, Fei and Yue, Xiang and others},
  journal={arXiv preprint arXiv:2510.23642},
  year={2025}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}
```

For evaluation scripts and more information, see our [GitHub repository](https://github.com/TIGER-AI-Lab/VisCoder2).