Initialize project: model provided by the ModelHub XC community
Model: QuantFactory/Llama-3-13B-Instruct-v0.1-GGUF (Source: Original Platform)
---
base_model: "meta-llama/Meta-Llama-3-8B-Instruct"
library_name: transformers
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-13B-Instruct-v0.1
quantized_by: MaziyarPanahi
---

# QuantFactory/Llama-3-13B-Instruct-v0.1-GGUF

This is a quantized version of [MaziyarPanahi/Llama-3-13B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1), created using llama.cpp.
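
Since this repo ships GGUF files, you can also run the model directly with llama.cpp or its Python bindings instead of `transformers`. Below is a minimal sketch using the `llama-cpp-python` package; the quantization filename is an assumption, so substitute whichever `.gguf` file you actually download from this repo.

```python
# Minimal sketch using llama-cpp-python. The model_path filename is an
# assumption -- point it at whichever .gguf quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # Llama 3 supports an 8K context window
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ],
    max_tokens=256,
    temperature=0.6,
)
print(response["choices"][0]["message"]["content"])
```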

# Original Model Card

<img src="./llama-3-merges.webp" alt="Llama-3 merges logo" width="500" style="margin-left:auto; margin-right:auto; display:block"/>

# Llama-3-13B-Instruct-v0.1

This model is a self-merge of the `meta-llama/Meta-Llama-3-8B-Instruct` model.
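
The `mergekit` tag above reflects how self-merges like this are usually built: mergekit's `passthrough` method stacks overlapping layer ranges of a single base model to increase its depth. The exact recipe for this model is not published in this card, so the config below is only a hypothetical illustration, and the layer ranges are assumptions.

```yaml
# Hypothetical mergekit passthrough config for a Llama-3-8B-Instruct
# self-merge. The layer ranges are illustrative assumptions, not the
# recipe actually used for Llama-3-13B-Instruct-v0.1.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 24]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```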

# How to use

You can run this model by passing `MaziyarPanahi/Llama-3-13B-Instruct-v0.1` as the model name to Hugging Face's `transformers` library.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TextStreamer,
    pipeline,
)

model_id = "MaziyarPanahi/Llama-3-13B-Instruct-v0.1"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2",  # optional; requires flash-attn
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)

# Named `pipe` so it does not shadow the imported `pipeline` function.
# The model is already loaded in float16 above, so no model_kwargs are needed.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer,
)

# Then you can use the pipeline to generate text.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Llama 3 ends an assistant turn with <|eot_id|> as well as the regular EOS token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipe(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
# Print only the newly generated completion, not the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```

## Prompt template

```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

what's 25-4*2+3<|eot_id|><|start_header_id|>assistant<|end_header_id|>

To evaluate this expression, we need to follow the order of operations (PEMDAS):

1. First, multiply 4 and 2: 4*2 = 8
2. Then, subtract 8 from 25: 25 - 8 = 17
3. Finally, add 3: 17 + 3 = 20

So, 25-4*2+3 = 20!<|eot_id|>
```
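
This is exactly the layout that `tokenizer.apply_chat_template` renders, so you normally don't need to assemble it by hand. A quick sketch to inspect the rendered prompt:

```python
from transformers import AutoTokenizer

# Render the Llama 3 chat template as a plain string to inspect it.
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-13B-Instruct-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "what's 25-4*2+3"},
]

# add_generation_prompt=True appends the assistant header so the model
# knows it should generate the next turn.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```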