License: apache-2.0

Chinese-LLaMA-2-1.3B

This is the full Chinese-LLaMA-2-1.3B model, which can be loaded directly for inference and full-parameter training.

Example Code

import torch
from modelscope import Model, AutoTokenizer


# Load the model in float16 and let ModelScope place it across available devices
model = Model.from_pretrained("AI-ModelScope/chinese-llama-2-1.3b", revision='master', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/chinese-llama-2-1.3b", revision='master')

# "锄禾日当午" is the opening line of a classic Chinese poem, used here as a completion prompt
prompt = """锄禾日当午,"""
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation (max_length counts the prompt tokens as well)
generate_ids = model.generate(inputs.input_ids.to(model.device), max_length=20)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
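The call above decodes greedily. A minimal sketch of sampling-based generation, assuming the ModelScope wrapper forwards standard 🤗 transformers GenerationMixin keyword arguments (an assumption, not confirmed by this card):

# Sampling-based generation; assumes the wrapper accepts standard
# transformers GenerationMixin keyword arguments.
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    max_new_tokens=64,       # number of new tokens, excluding the prompt
    do_sample=True,          # sample instead of greedy decoding
    top_p=0.9,               # nucleus sampling
    temperature=0.7,         # soften the token distribution
    repetition_penalty=1.1,  # discourage verbatim repetition
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])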

Description of Chinese-LLaMA-Alpaca-2

This project is based on Llama-2, released by Meta, and is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (a foundation model) and Alpaca-2 (an instruction-following model). These models extend the original Llama-2 with an expanded, optimized Chinese vocabulary and were incrementally pre-trained on large-scale Chinese data, which further improved their fundamental semantic understanding of Chinese and yields a significant performance improvement over the first-generation models. The relevant models support a 4K context, which can be expanded up to 18K+ using the NTK method.
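As an illustration of NTK-style context extension, 🤗 transformers exposes NTK-aware dynamic RoPE scaling at load time. A minimal sketch, assuming a transformers version that supports the rope_scaling argument for LLaMA models; the hub id hfl/chinese-llama-2-1.3b is an assumption used only for illustration:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "dynamic" selects transformers' NTK-aware RoPE scaling; a factor of 4.0
# stretches the usable context roughly from 4K toward 16K without retraining.
model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-llama-2-1.3b",  # assumed hub id, shown for illustration
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "dynamic", "factor": 4.0},
)
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-llama-2-1.3b")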

The main contents of this project include:

  • 🚀 Extended the Chinese vocabulary beyond Llama-2 and open-sourced the Chinese LLaMA-2 and Alpaca-2 LLMs.
  • 🚀 Open-sourced the pre-training and instruction fine-tuning (SFT) scripts for further tuning on your own data.
  • 🚀 Quick deployment and experience of the quantized LLMs on the CPU/GPU of a personal PC (a quantized-loading sketch follows this list).
  • 🚀 Support for LLaMA ecosystems such as 🤗transformers, llama.cpp, text-generation-webui, LangChain, and vLLM.
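A minimal sketch of the quantized-loading path mentioned above, assuming bitsandbytes is installed and a CUDA GPU is available; the hub id hfl/chinese-llama-2-1.3b is again an assumption used only for illustration:

from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized load via bitsandbytes; roughly halves memory use
# compared to fp16, at a small quality cost.
model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-llama-2-1.3b",  # assumed hub id, shown for illustration
    device_map="auto",
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-llama-2-1.3b")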

Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for details.

Model synced from source: AI-ModelScope/chinese-llama-2-1.3b