Datasets: THUDM/AgentInstruct

AgentLM-13B

🤖 [Dataset] • 💻 [GitHub Repo] • 📌 [Project Page] • 📃 [Paper]

AgentTuning is the first attempt to instruction-tune LLMs on interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning endows LLMs with agent capabilities that generalize robustly to unseen agent tasks while preserving general language abilities. We have open-sourced the AgentInstruct dataset and the AgentLM models.

Models

AgentLM models are fine-tuned from Llama-2-chat models on a mixture of the AgentInstruct dataset and the ShareGPT dataset. The idea of mixed training is sketched below.
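
As a rough, non-authoritative sketch (the function, the toy data, and the eta value below are illustrative placeholders, not the exact recipe or ratio used for AgentLM), mixed training can be thought of as drawing each fine-tuning example from one of the two corpora with a fixed mixture probability:

import random

def mix_datasets(agent_data, general_data, eta, n_samples, seed=0):
    """Draw a mixed training stream: with probability `eta` sample an
    AgentInstruct example, otherwise a ShareGPT example."""
    rng = random.Random(seed)
    return [
        rng.choice(agent_data if rng.random() < eta else general_data)
        for _ in range(n_samples)
    ]

# Toy stand-ins for the two corpora (real examples are chat-formatted dialogues).
agent_data = ["<agent interaction trajectory>"]
general_data = ["<general ShareGPT dialogue>"]
batch = mix_datasets(agent_data, general_data, eta=0.2, n_samples=8)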

The models follow the conversation format of Llama-2-chat, with the system prompt fixed as:

You are a helpful, respectful and honest assistant.
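
Concretely, conversations are rendered into the standard Llama-2-chat template. A minimal sketch (the helper below is ours, not part of the release):

SYSTEM = "You are a helpful, respectful and honest assistant."

def build_prompt(turns):
    """Render (user, assistant) turn pairs into the Llama-2-chat format.
    Pass None as the assistant message for the turn to be generated."""
    prompt = f"<s>[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n\n"
    for i, (user_msg, assistant_msg) in enumerate(turns):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user_msg} [/INST]"
        if assistant_msg is not None:
            prompt += f" {assistant_msg} </s>"
    return prompt

print(build_prompt([("There's a llama in my garden 😱 What should I do?", None)]))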

How to use with ModelScope

import torch
from modelscope import Model, AutoTokenizer


model = Model.from_pretrained("ZhipuAI/agentlm-13b", revision='master', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/agentlm-13b", revision='master')

prompt = """
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. 
<</SYS>>

There's a llama in my garden 😱 What should I do? [/INST]"""
inputs = tokenizer(prompt, return_tensors="pt")

# Generate
generate_ids = model.generate(inputs.input_ids.to(model.device), max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
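
Note that batch_decode above echoes the prompt along with the reply. To print only the newly generated text, slice off the prompt tokens first (same variables as above):

# Keep only the tokens generated after the prompt.
reply_ids = generate_ids[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))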

7B, 13B, and 70B models are available on the ModelScope model hub.

Model         ModelScope Repo
AgentLM-7B    ModelScope Repo
AgentLM-13B   ModelScope Repo
AgentLM-70B   ModelScope Repo

Citation

If you find our work useful, please consider citing AgentTuning:

@misc{zeng2023agenttuning,
      title={AgentTuning: Enabling Generalized Agent Abilities for LLMs}, 
      author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
      year={2023},
      eprint={2310.12823},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}