Project initialized; model provided by the ModelHub XC community

Model: lzkhhh/ITDR-Qwen2.5-7B-Instruct
Source: Original Platform
ModelHub XC
2026-04-10 22:40:58 +08:00
commit cd71f35520
22 changed files with 909601 additions and 0 deletions

README.md (new file)

---
license: mit
language:
- en
tasks:
- question-answering
- text-generation
- text-classification
- nli
- feature-extraction
- entity-typing
frameworks: PyTorch
base_model_relation: finetune
metrics:
- bleu
- accuracy
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
## ITDR: An Instruction Tuning Dataset for Enhancing Large Language Models in Recommendations
## Introduction
Large language models (LLMs) have demonstrated outstanding performance on natural language processing tasks. In recommendation systems, however, the structural differences between user behavior data and natural language make it difficult for LLMs to model the associations between user preferences and items. Although prompt-based methods can generate recommendation results, their limited understanding of recommendation tasks constrains performance. To address this gap, we construct a comprehensive instruction tuning dataset, ITDR, which encompasses 7 subtasks across two core root tasks: user-item interaction and user-item understanding. The dataset integrates data from 13 public recommendation datasets and is built with manually crafted, standardized templates, comprising approximately 200,000 instances. Experimental results demonstrate that ITDR significantly enhances the performance of mainstream open-source LLMs such as GLM-4, Qwen2.5, Qwen2.5-Instruct, and LLaMA-3.2 on recommendation tasks. Furthermore, we analyze the correlations between tasks and explore how task descriptions and data scale affect instruction tuning effectiveness. Finally, we perform comparative experiments against closed-source LLMs with substantially larger parameter counts.
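
## Example

The dataset is described as being built from manually crafted, standardized templates over user-item interaction data. As a rough illustration, a prompt for one user-item interaction subtask (e.g. predicting whether a user will engage with a candidate item) might be assembled along these lines. Note that the field names and the `build_prompt` helper below are illustrative assumptions, not the actual ITDR template.

```python
def build_prompt(task_description: str, user_history: list[str], candidate: str) -> str:
    """Compose a single instruction-style prompt for a hypothetical
    user-item interaction subtask (yes/no interaction prediction).
    The layout is a sketch, not the released ITDR format."""
    history = ", ".join(user_history)
    return (
        f"{task_description}\n"
        f"User's interaction history: {history}\n"
        f"Candidate item: {candidate}\n"
        f"Answer with 'Yes' or 'No'."
    )

prompt = build_prompt(
    "Decide whether the user will interact with the candidate item.",
    ["The Matrix", "Blade Runner", "Inception"],
    "Interstellar",
)
print(prompt)
```

In an actual fine-tuning or inference setup, a prompt like this would be wrapped in the chat template of the base model (here Qwen2.5-7B-Instruct) before tokenization.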