From 653230a60145ca09811137ef85ac5d06dc3512ce Mon Sep 17 00:00:00 2001
From: GeneZC
Date: Sun, 12 Nov 2023 02:15:26 +0000
Subject: [PATCH] Update README.md

---
 README.md | 54 +++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 49 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index fe97e9b..3671a1a 100644
--- a/README.md
+++ b/README.md
@@ -5,9 +5,53 @@ license: Apache License 2.0
 tasks:
 - fill-mask
 ---
-###### This model card currently uses the default introduction template and is in the "pre-release" stage; the page is visible to the owner only.
-###### Please complete the model card content according to the [model contribution guide](https://www.modelscope.cn/docs/%E5%A6%82%E4%BD%95%E6%92%B0%E5%86%99%E5%A5%BD%E7%94%A8%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8D%A1%E7%89%87). The ModelScope platform will display the model once the card is completed. Thank you for your understanding.
-#### Clone with HTTP
-```bash
- git clone https://www.modelscope.cn/GeneZC/MiniMA-3B.git
+
+## MiniMA-3B
+
+📑 [arXiv]() | 🤗 [HuggingFace](https://huggingface.co/GeneZC/MiniMA-3B) | 🤖 [ModelScope](https://modelscope.cn/models/GeneZC/MiniMA-3B)
+
+❗ MiniMA-3B must comply with the LICENSE of LLaMA2, since it is derived from LLaMA2.
+
+A language model distilled from an adapted version of LLaMA2-7B, following "Towards the Law of Capacity Gap in Distilling Language Models".
+
+It establishes a new compute-performance Pareto frontier.
+
+*(figure: teaser_a)*
+
+The following code snippet shows how to use MiniMA-3B:
+
+```python
+import torch
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# MiniMA
+tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-3B", use_fast=False)
+# GPU.
+model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
+# CPU.
+# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()
+
+prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:"
+input_ids = tokenizer([prompt]).input_ids
+output_ids = model.generate(
+    torch.as_tensor(input_ids).to(model.device),  # works for both the GPU and CPU setups above
+    do_sample=True,
+    temperature=0.7,
+    max_new_tokens=1024,
+)
+# Strip the prompt tokens, keeping only the newly generated ones.
+output_ids = output_ids[0][len(input_ids[0]):]
+output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
+# output: "No"
+```
+
+## BibTeX
+
+```bibtex
+@article{zhang2023law,
+    title={Towards the Law of Capacity Gap in Distilling Language Models},
+    author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
+    year={2023},
+    url={}
+}
 ```
\ No newline at end of file
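The few-shot prompt in the README snippet above is a single hand-concatenated string. As a minimal sketch (illustrative only, not part of the MiniMA release; `build_prompt` and its arguments are hypothetical names), the same prompt layout can be assembled from (question, answer) pairs like this:

```python
def build_prompt(examples, query):
    """Join few-shot (question, answer) pairs and leave an empty
    Answer slot after the final query for the model to complete."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    parts.append(f"Question: {query}\nAnswer:")
    return "\n\n".join(parts)

# One demonstration pair from the README prompt, then the query to solve.
examples = [
    ("Sherrie tells the truth. Vernell says Sherrie tells the truth. "
     "Alexis says Vernell lies. Michaela says Alexis tells the truth. "
     "Elanor says Michaela tells the truth. Does Elanor tell the truth?", "No"),
]
query = ("Christie tells the truth. Ka says Christie tells the truth. "
         "Delbert says Ka lies. Leda says Delbert tells the truth. "
         "Lorine says Leda tells the truth. Does Lorine tell the truth?")
prompt = build_prompt(examples, query)
```

The resulting string ends in `"\nAnswer:"`, so the model's first generated tokens are the answer itself, which is why the snippet above can slice off the prompt tokens and decode only the completion.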