---
license: apache-2.0
base_model: []
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---

# Credit for the model card's description goes to ddh0, mergekit, and migtissera

# Inspired by ddh0/Starling-LM-10.7B-beta and ddh0/Mistral-10.7B-Instruct-v0.2

# Tess-10.7B-v2.0

# Deprecated

"This model is deprecated due to the use of wrong sliding window parameter while training. Will update with the new model link in a couple of days." - migtissera

This is Tess-10.7B-v2.0, a depth-upscaled version of [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0).

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7-billion-parameter model.

The paper detailing how depth up-scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

# Prompt Format (same as [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0)):

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
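
For clarity, here is a short sketch of how a multi-turn prompt can be assembled in this format. The helper below is hypothetical, not part of the model's tooling; it just mirrors the layout above (and the spacing used in the usage example further down).

```python
# Hypothetical helper: build a Tess-style prompt from a system message and
# alternating (user, assistant) turns. The trailing "ASSISTANT: " cues the model.
def build_prompt(system: str, turns: list[tuple[str, str]], user_input: str) -> str:
    prompt = f"SYSTEM: {system}"
    for user_msg, assistant_msg in turns:
        prompt += f" \nUSER: {user_msg} \nASSISTANT: {assistant_msg}"
    prompt += f" \nUSER: {user_input} \nASSISTANT: "
    return prompt
```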

## Merge Details
### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer slices verbatim rather than interpolating or averaging their weights.

### Models Merged

The following models were included in the merge:
* /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
```
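
As a sanity check on the up-scaling arithmetic, the sketch below (illustrative only; it assumes mergekit's `layer_range` values are half-open `[start, end)` slices) shows how two slices of Mistral's 32 decoder layers yield the 48-layer, 10.7B-parameter geometry described in the SOLAR paper.

```python
# Illustrative only: how the two passthrough slices above add up.
slices = [(0, 24), (8, 32)]  # layer_range values from the YAML config

total_layers = sum(end - start for start, end in slices)
overlap = set(range(0, 24)) & set(range(8, 32))

print(total_layers)  # 48 layers in the merged model, up from 32
print(len(overlap))  # 16 layers (8..23) appear twice, once per slice
```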

# GGUFs (Thanks to [bartowski](https://huggingface.co/bartowski))

https://huggingface.co/bartowski/Tess-10.7B-v2.0-GGUF

# exl2s (Thanks to [bartowski](https://huggingface.co/bartowski))

https://huggingface.co/bartowski/Tess-10.7B-v2.0-exl2
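
As a quick way to try one of the GGUF quants locally, here is a minimal sketch using llama-cpp-python. The quant filename and sampling settings are assumptions, not part of the release; pick any file from bartowski's GGUF repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a downloaded quant.
# The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./Tess-10.7B-v2.0-Q4_K_M.gguf", n_ctx=4096)

prompt = "SYSTEM: You are a helpful assistant. \nUSER: Hello! \nASSISTANT: "
result = llm(prompt, max_tokens=256, stop=["USER:"])  # stop before the next user turn
print(result["choices"][0]["text"].strip())
```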

---
license: apache-2.0
---

# Tess-7B-v2.0

Tess, short for Tesoro (Italian for "treasure"), is a general-purpose Large Language Model series. Tess-7B-v2.0 was trained on the Mistral-7B-v0.2 base.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```

### The code example below shows how to use this model:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Tess-7B-v2.0"
output_file_path = "./conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the same device as the model.
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to(model.device)

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Drop the prompt tokens and decode only the newly generated text.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    # The model may run past its turn; cut the reply at the next "USER:".
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
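
Because the loop above appends one JSON object per line, earlier sessions can be reloaded from the same `conversations.jsonl` file; a small sketch:

```python
import json

# Reload the exchanges saved by the chat loop above (one JSON object per line).
with open("./conversations.jsonl") as f:
    history = [json.loads(line) for line in f]

for turn in history:
    print("You:", turn["prompt"])
    print("Tess:", turn["answer"])
```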

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>