ModelHub XC cd71f35520 initial commit; model provided by the ModelHub XC community
Model: lzkhhh/ITDR-Qwen2.5-7B-Instruct
Source: Original Platform
2026-04-10 22:40:58 +08:00

license: mit
language: en
tasks: question-answering, text-generation, text-classification, nli, feature-extraction, entity-typing
frameworks: PyTorch
base_model_relation: finetune
metrics: bleu, accuracy
base_model: Qwen/Qwen2.5-7B-Instruct

ITDR: An Instruction Tuning Dataset for Enhancing Large Language Models in Recommendations

Introduction

Large language models (LLMs) have demonstrated outstanding performance on natural language processing tasks. However, in the field of recommendation systems, the structural differences between user behavior data and natural language make it difficult for LLMs to model the associations between user preferences and items. Although prompt-based methods can generate recommendation results, their limited understanding of recommendation tasks constrains performance. To address this gap, in this work we construct a comprehensive instruction tuning dataset, ITDR, which encompasses 7 subtasks across two core root tasks: user-item interaction and user-item understanding. The dataset integrates data from 13 public recommendation datasets and is built using manually crafted standardized templates, comprising approximately 200,000 instances. Experimental results demonstrate that ITDR significantly enhances the performance of mainstream open-source LLMs such as GLM-4, Qwen2.5, Qwen2.5-Instruct, and LLaMA-3.2 on recommendation tasks. Furthermore, we analyze the correlations between tasks and explore the impact of task descriptions and data scale on instruction tuning effectiveness. Finally, we perform comparative experiments against closed-source LLMs with large parameter counts.
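The abstract notes that ITDR instances are built from manually crafted standardized templates. As an illustration only, the following sketch shows how one instruction-tuning instance for a user-item interaction subtask might be rendered from such a template; the template wording, field names, and `build_instance` helper are assumptions for this example, not the dataset's actual templates.

```python
# Hypothetical template for a user-item interaction instance.
# The real ITDR templates and field layout may differ.
TEMPLATE = (
    "Task: {task_description}\n"
    "User history: {history}\n"
    "Candidate item: {candidate}\n"
    "Question: Will the user interact with the candidate item? Answer yes or no."
)

def build_instance(task_description, history, candidate, label):
    """Render one instruction-tuning example as an instruction/output pair."""
    instruction = TEMPLATE.format(
        task_description=task_description,
        history=", ".join(history),
        candidate=candidate,
    )
    return {"instruction": instruction, "output": label}

example = build_instance(
    "Predict user-item interaction from the user's viewing history.",
    ["The Matrix", "Inception"],
    "Interstellar",
    "yes",
)
print(example["instruction"])
```

A training corpus in this style would pair each rendered instruction with its gold output, so the same handful of templates can be instantiated across all 13 source datasets.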

Description
Model synced from source: lzkhhh/ITDR-Qwen2.5-7B-Instruct