---
license: cc-by-sa-3.0
datasets:
- VMware/open-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Open LLama 7B v2 Open Instruct

- Model creator: [VMware](https://huggingface.co/VMware)
- Original model: [Open LLama 7B v2 Open Instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)

## Description
This repo contains the GGUF model files for [Open LLama 7B v2 Open Instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct).

These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp).
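If you prefer to stay in Python, the GGUF files can also be loaded through the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. A minimal sketch, assuming the bindings are installed and a quantized file has been downloaded; the file name and sampling settings below are illustrative placeholders, not recommendations from this repo:

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the quantization variant you downloaded.
llm = Llama(model_path="./open-llama-7b-v2-open-instruct.Q4_K_M.gguf", n_ctx=2048)

# The model expects the Alpaca prompt format (see the notes further down).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a GGUF file is in one sentence.\n\n### Response:"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```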
# VMware/open-llama-7B-v2-open-instruct
Instruction-tuned version of the fully trained Open LLama 7B v2 model. The model is open for <b>COMMERCIAL USE</b>. <br>

- This model performs better on code than v1, thanks to improvements the openlm-research team made to the base model.
- The instruction model is trained on an improved instruction-tuning dataset compared to v1.

**NOTE**: The model was trained using the Alpaca prompt template. <br>
**NOTE**: The fast tokenizer produces incorrect encodings; set `use_fast=False` when instantiating the tokenizer.
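Concretely, the tokenizer note above means loading should look like this (the same call used in the full example below):

```python
from transformers import AutoTokenizer

# use_fast=False selects the slow SentencePiece tokenizer; the fast
# tokenizer produces incorrect encodings for this model.
tokenizer = AutoTokenizer.from_pretrained(
    "VMware/open-llama-7b-v2-open-instruct", use_fast=False
)
```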
## License
- CC BY-SA-3.0 **(Commercially Viable!)**
- Base Language Model ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
- Fine-Tuning Dataset ([VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)) is under cc-by-sa-3.0
## Datasets used for Fine-Tuning
### Open-instruct
**Open-instruct-v1**

- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc-by-3.0

**Subset of COT SUBMIX (from FLAN v2), zero-shot examples**

- ESNLI - MIT
- ECQA - CDLA 1.0 - Sharing
- StrategyQA - MIT
- CREAK - MIT
- GSM8K - MIT
- AQuA - MIT
- QASC - Apache 2.0
## Nomenclature
- Model: Open-llama-v2
- Model Size: 7B parameters
- Dataset: Open-instruct
## Use in Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7b-v2-open-instruct'

# The fast tokenizer mis-encodes text for this model, so load the slow one.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca prompt template the model was trained with.
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = """What is the attention mechanism of a transformer model?
Write Python code to illustrate how attention works within a transformer model using the numpy library. Do not use pytorch or tensorflow."""

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

output_ids = model.generate(input_ids, max_length=512)

# Drop the prompt tokens so only the generated response is decoded.
input_length = input_ids.shape[1]
output_ids = output_ids[:, input_length:]
output = tokenizer.decode(output_ids[0])

print(output)
```
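
The model can also be driven through the high-level `text-generation` pipeline. A minimal sketch; the sampling settings are illustrative, not values recommended by VMware:

```python
import torch
from transformers import AutoTokenizer, pipeline

model_name = 'VMware/open-llama-7b-v2-open-instruct'

# The slow tokenizer still has to be passed explicitly (see the note above).
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

generator = pipeline(
    'text-generation',
    model=model_name,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map='sequential',
)

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

result = generator(
    prompt_template.format(instruction="Summarize what a GGUF file is."),
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(result[0]['generated_text'])
```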
### Output
Sure, I can help you with that!
Attention mechanisms in transformer models are typically implemented using the attention mechanism in the self-attention layer. Self-attention allows the model to focus on different parts of the input sequence when processing it. This is achieved by computing a set of attention weights, which are used to weigh the contribution of each input element to the output.
Here's an example code using NumPy to illustrate how attention works in a transformer model:
```python
import numpy as np

def attention_weights(query, key, value, mask):
    # Query, key, and value are input tensors. Mask is a tensor of zeros and ones that represents the attention mask.
    # It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
    # The attention weights are the element-wise product of the query, key, and mask tensors.
    # The result is a tensor of the same shape as the query tensor.

    # Compute the dot product between the query tensor and the key tensor
    dot = np.matmul(query, key)

    # Compute the element-wise softmax of the dot product tensor
    exp_dot = np.exp(dot)

    # Multiply the dot product and the softmax of the dot product tensors
    weights = dot * exp_dot

    # Return the attention weights as a NumPy tensor
    return weights

# Define the input sequence
query = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
key = np.array([[0.1, 0.2], [0.3, 0.4]])
value = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
mask = np.array([[False, True, True], [False, True, True]])

# Compute the attention weights
weights = attention_weights(query, key, value, mask)

# Print the attention weights
print(weights)
```
In this example, the `attention_weights` function takes as input the query tensor, key tensor, value tensor, and mask tensor. It computes the dot product between the query and key tensors using the `np.matmul` function, and then applies a softmax function using the `np.exp` function to the element-wise dot product tensor. It then multiplies the dot product and softmax tensors using the `np.matmul` function, and returns the result as a NumPy tensor.
The `query`, `key`, and `value` tensors represent the input sequence to the transformer model. The `mask` tensor represents the attention mask, which is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
The output of the `attention_weights` function is a NumPy tensor that represents the attention weights for the input sequence. These weights are used by the transformer model to weigh the contribution of each input element to the output.
I hope this helps!</s>
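
For comparison (this block is not part of the model's output above), a textbook scaled dot-product attention in NumPy would look roughly like this minimal sketch:

```python
import numpy as np

def scaled_dot_product_attention(query, key, value, mask=None):
    # Similarity between each query and each key, scaled by sqrt(d_k).
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)

    # Optionally mask out disallowed positions before the softmax.
    if mask is not None:
        scores = np.where(mask, scores, -1e9)

    # Numerically stable softmax over the key axis gives the weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    # The output is the weighted sum of the value vectors.
    return weights @ value, weights

# Two queries attending over two key/value pairs.
q = np.array([[0.1, 0.2], [0.4, 0.5]])
k = np.array([[0.1, 0.2], [0.3, 0.4]])
v = np.array([[1.0, 0.0], [0.0, 1.0]])
out, w = scaled_dot_product_attention(q, k, v)
print(out)
print(w)
```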
<hr>
## Finetuning details
The finetuning scripts will be available in our [RAIL GitHub repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning).
## Evaluation
**TODO**