Initialize the project; model provided by the ModelHub XC community
Model: LeroyDyer/Mixtral_Chat_7b Source: Original Platform
README.md (new file)
---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
metrics:
- accuracy
- bleu
- code_eval
- bleurt
- brier_score
pipeline_tag: text-generation
---

# Mixtral_Chat_7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
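
For intuition, the linked linear method amounts to a weighted average of the source models' parameters. The sketch below is illustrative only: the model IDs and equal weights are assumptions, and it is not the mergekit implementation or the exact recipe behind this model.

```python
# Conceptual sketch of a linear merge: a weighted average of parameters from
# several checkpoints with identical architectures. Model IDs and the equal
# weights are assumptions, not the actual recipe used for Mixtral_Chat_7b.
import torch
from transformers import AutoModelForCausalLM

sources = ["mistralai/Mistral-7B-Instruct-v0.2", "NousResearch/Hermes-2-Pro-Mistral-7B"]
weights = [0.5, 0.5]  # assumed equal weighting

models = [AutoModelForCausalLM.from_pretrained(s, torch_dtype=torch.float16) for s in sources]
states = [m.state_dict() for m in models]

merged_state = {}
with torch.no_grad():
    for name in states[0]:
        # weighted sum of the same tensor across all source models
        merged_state[name] = sum(w * s[name].float() for w, s in zip(weights, states)).to(torch.float16)

merged = models[0]
merged.load_state_dict(merged_state)
merged.save_pretrained("./linear-merge-sketch")
```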

### Models Merged

The following models were included in the merge:

- Locutusque/Hercules-3.1-Mistral-7B
- mistralai/Mistral-7B-Instruct-v0.2
- NousResearch/Hermes-2-Pro-Mistral-7B
- LeroyDyer/Mixtral_Instruct
- LeroyDyer/Mixtral_Base
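
Since the card declares `library_name: transformers`, the merged weights should load like any other causal LM. Below is a minimal sketch; it assumes the model is published under the repo ID `LeroyDyer/Mixtral_Chat_7b` named above.

```python
# Minimal sketch: load the merged model with transformers.
# The repo ID is assumed from the model name on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LeroyDyer/Mixtral_Chat_7b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```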

## llama-index

The model can also be run locally through the llama.cpp backend in LlamaIndex, given a GGUF quantization such as `mixtral_chat_7b.q8_0.gguf`:

```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
%pip install llama-index

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

# path to a locally downloaded GGUF quantization of this model
model_path = "mixtral_chat_7b.q8_0.gguf"

llm = LlamaCPP(
    # you can pass in a URL to a GGUF model to download it automatically
    model_url=None,
    # or, as here, point to a pre-downloaded model file instead
    model_path=model_path,
    temperature=0.1,
    max_new_tokens=256,
    # keep the context window conservative to leave room for the prompt wrapper
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__()
    # set to at least 1 to use the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the [INST] chat format the instruct models expect
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
```
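
The snippet above installs the HuggingFace embeddings package and imports `SimpleDirectoryReader` and `VectorStoreIndex` without using them. As a follow-on, here is a minimal retrieval-augmented sketch that wires those pieces to the `llm` defined above; the `./data` directory and the embedding model name are assumptions.

```python
# Minimal RAG sketch using the LlamaCPP llm defined above. The "./data"
# directory and the BAAI/bge-small-en-v1.5 embedding model are assumptions,
# not part of the original card.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.llm = llm
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What are these documents about?"))
```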