diff --git a/README.md b/README.md
index ad289dd..65de519 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,12 @@
 ---
-tags:
-- llama2
-- Agent
-- AgentLM
-datasets:
-  train:
-  - ZhipuAI/AgentInstruct
+datasets:
+- THUDM/AgentInstruct
 ---
 ## AgentLM-13B
-🤖 [Dataset] • 💻 [Github Repo] • 📌 [Project Page] • 📃 [Paper]
+🤗 [Dataset] • 💻 [Github Repo] • 📌 [Project Page] • 📃 [Paper]
 **AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs with robust generalization on unseen agent tasks while remaining good on general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM.
@@ -26,37 +21,13 @@ The models follow the conversation format of [Llama-2-chat](https://huggingface.
 You are a helpful, respectful and honest assistant.
 ```
+7B, 13B, and 70B models are available on Huggingface model hub.
-### How to use in modelscope
-```python
-import torch
-from modelscope import Model, AutoTokenizer
-
-
-model = Model.from_pretrained("ZhipuAI/agentlm-13b", revision='master', device_map='auto', torch_dtype=torch.float16)
-tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/agentlm-13b", revision='master')
-
-prompt = """