---
library_name: transformers
license: apache-2.0
base_model: google/gemma-7b-it
---

## Model Card for Firefly-Gemma

[gemma-7B-it-firefly](https://huggingface.co/yys/gemma-7B-it-firefly) is trained from [gemma-7b-it](https://huggingface.co/google/gemma-7b-it) to act as a helpful and harmless AI assistant. We trained the model on the [firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) dataset using LoRA.
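LoRA fine-tuning, as used here, freezes the pretrained weights and learns only a low-rank update to each adapted matrix. The following is a minimal illustrative sketch of the idea in NumPy (not the actual training code; the dimensions and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, scaling numerator (illustrative values)
W = rng.normal(size=(d, d))     # frozen pretrained weight
A = rng.normal(size=(r, d))     # trainable low-rank factor
B = np.zeros((d, r))            # B starts at zero, so the update starts at zero

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, d))
# With B = 0 the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only `A` and `B` (roughly `2*d*r` parameters per matrix) are trained, the adapter is far smaller than the base model and can be merged back into `W` after training.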

<img src="gemma-7B-it-firefly.jpg" width="250">

## Performance

We evaluated the model on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

## Usage

The chat template of our chat models is the same as the official gemma-7b-it:

```text
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
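The template above can also be assembled by hand. A minimal sketch (the `build_prompt` helper is hypothetical; in practice, `tokenizer.apply_chat_template` in recent transformers versions produces this formatting for you):

```python
# Hypothetical helper that reproduces the gemma chat template shown above.
def build_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Write a hello world program")
print(prompt)
```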

You can also use the following code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "yys/gemma-7B-it-firefly"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)

# "Write me a poem about machine learning."
input_text = "给我写一首关于机器学习的诗歌。"
input_ids = tokenizer(input_text, return_tensors="pt")

# Cap the generation length; generate() otherwise stops at a short default max_length.
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```