---
license: apache-2.0
---
# Model Card for PLLaMa-13b-instruct
PLLaMa-13b-instruct is optimized for plant science. It was built by continued pretraining of LLaMa-2-13b on more than 1.5 million plant science academic articles, followed by instruction tuning so that the model follows user instructions.

- **Developed by:** UCSB
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model [optional]:** LLaMa-2-13b
- **Paper [optional]:** [PLLaMa: An Open-source Large Language Model for Plant Science](https://arxiv.org/pdf/2401.01600.pdf)
- **Demo [optional]:** [More Information Needed]
## How to Get Started with the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer and the instruction-tuned model in fp16 on a single GPU
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-13b-instruct")
model = LlamaForCausalLM.from_pretrained("Xianjun/PLLaMa-13b-instruct").half().to("cuda")

instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")

# Sample up to 512 new tokens; the decoded string includes the prompt itself
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
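If a single GPU cannot hold the fp16 weights, `from_pretrained` can shard them across the available devices instead of the explicit `.half().to("cuda")` call above. A minimal sketch, assuming the `accelerate` package is installed:

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Let transformers/accelerate place the fp16 weights across available GPUs
# (and CPU, if needed) instead of moving the whole model to one device.
# Requires the `accelerate` package.
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-13b-instruct")
model = LlamaForCausalLM.from_pretrained(
    "Xianjun/PLLaMa-13b-instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
```

With `device_map="auto"`, inputs can typically be moved with `batch.to(model.device)`. Note also that `tokenizer.decode(output[0], ...)` returns the prompt together with the completion; slicing the output ids with `output[0][batch["input_ids"].shape[1]:]` before decoding keeps only the newly generated text.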
## Citation
If you find PLLaMa useful in your research, please cite the following paper:
```latex
@inproceedings{Yang2024PLLaMaAO,
  title={PLLaMa: An Open-source Large Language Model for Plant Science},
  author={Xianjun Yang and Junfeng Gao and Wenxin Xue and Erik Alexandersson},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:266741610}
}
```