Initialize project; model provided by the ModelHub XC community
Model: Lewdiculous/Loyal-Toppy-Bruins-Maid-7B-DARE-GGUF-Imatrix Source: Original Platform
44
.gitattributes
vendored
Normal file
@@ -0,0 +1,44 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-IQ3_S-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_M-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_M-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_S-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_M-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_S-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q6_K-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Loyal-Toppy-Bruins-Maid-7B-DARE-Q8_0-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
3
Loyal-Toppy-Bruins-Maid-7B-DARE-IQ3_S-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97d9b216377e4165909ac26f3be97eb8db7293547e86d8b4e0dca8db64c90d8e
size 3182393024
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q3_K_M-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8176e3c54128cb1e845ccb961d18414c4cc2d2dd194022d4f7bd71b05b258b5
size 3518985920
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_M-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b7008adac4931993059002c46789f8516d429aec14f4d6cf5d79bd04d6b14a24
size 4368438976
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q4_K_S-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:711714d9f74e44a54b3d7e0a3c8098e967e2fef3fb14da840de1ef2003d499f9
size 4140373696
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_M-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2c1a01b80a1d1236608b007c6c4b8be57507cd1a260899896ba4769a8073d4e
size 5131409088
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q5_K_S-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d2f244b243dbdf9de0a594d1684434c0f1b99d221f3915edbc9f14cc0dc6e2c
size 4997715648
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q6_K-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1135f6d99788aeec62efd882531b2f317515beeec7586dd8c1df8ebd6892f5a
size 5942064832
3
Loyal-Toppy-Bruins-Maid-7B-DARE-Q8_0-imatrix.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd231587745db4ba42297929bbab83c03c14d94eff4f9412dcdd9b89ec2a47e7
size 7695857344
107
README.md
Normal file
@@ -0,0 +1,107 @@
---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- merge
pipeline_tag: text-generation
inference: false
license: cc-by-nc-4.0
---

# **GGUF-Imatrix quantizations for [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/).**

# What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.

The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which reduces the loss in model performance.

One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.

More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For the --imatrix data, `imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat` was used.

`Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2280](https://github.com/ggerganov/llama.cpp/releases/tag/b2280).

The new **IQ3_S** quant option has been shown to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.
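As a rough cross-check on the files in this commit, the effective bits-per-weight of each quant can be estimated from the LFS file sizes above. A minimal sketch, assuming a ~7.24B parameter count for Mistral-7B-class models (an assumption, not stated in this repo; GGUF files also carry metadata and non-quantized tensors, so these are only approximations):

```python
# Approximate bits-per-weight for the quants in this commit.
# File sizes are taken from the LFS pointers above; PARAMS is an
# assumed figure for Mistral-7B-class models, not from this repo.
PARAMS = 7.24e9

sizes = {
    "IQ3_S":  3_182_393_024,
    "Q3_K_M": 3_518_985_920,
    "Q4_K_M": 4_368_438_976,
    "Q4_K_S": 4_140_373_696,
    "Q5_K_M": 5_131_409_088,
    "Q5_K_S": 4_997_715_648,
    "Q6_K":   5_942_064_832,
    "Q8_0":   7_695_857_344,
}

def bits_per_weight(size_bytes: float, n_params: float = PARAMS) -> float:
    """File size in bits divided by parameter count."""
    return size_bytes * 8 / n_params

for name, size in sizes.items():
    print(f"{name:7s} ~{bits_per_weight(size):.2f} bpw")
```

The results line up with the quant names (Q4_K_M lands a bit above 4 bits per weight, Q8_0 a bit above 8), which is a quick way to sanity-check a download.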

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

# Original model information:

<!-- description start -->
## Description

This repository hosts FP16 files for **Loyal-Toppy-Bruins-Maid-7B**, a 7B model aimed at engaging RP with solid character-card adherence while being a smart cookie at the same time.

Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative tuned with Alpaca RP data.

The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca-format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP.

[Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on [OpenRouter](https://openrouter.ai/rankings) for good reason.

[NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1), a well-regarded RP model, was also added for its unique RP data not present in the other models.

The models were merged using the DARE-ties method, with a targeted 1.2 absolute weight and high density (0.5-0.6), as discussed in the [MergeKit GitHub Repo](https://github.com/cg123/mergekit/issues/26).
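The "1.2 absolute weight" can be illustrated numerically: with `normalize: false`, mergekit applies the listed per-model weights as-is instead of rescaling them to sum to 1, so the combined task-vector scale is simply their sum. A minimal sketch (the parameter deltas are made-up toy numbers; real DARE-ties also randomly drops and rescales delta entries, which is omitted here):

```python
# Per-model weights as listed in the merge config in this README.
weights = {
    "rwitz/go-bruins-v2":            0.5,
    "chargoddard/loyal-piano-m7":    0.5,
    "Undi95/Toppy-M-7B":             0.1,
    "NeverSleep/Noromaid-7b-v0.1.1": 0.1,
}

# With normalize: false these are not rescaled, so they sum to 1.2.
total = sum(weights.values())

def merge_param(base: float, deltas: dict) -> float:
    """base + sum(w_i * delta_i): the weighted task-vector update,
    with DARE's sparsification and ties sign-election omitted."""
    return base + sum(weights[m] * d for m, d in deltas.items())

# Toy example: every model nudges one parameter by +0.01.
merged = merge_param(1.0, {m: 0.01 for m in weights})  # 1.0 + 1.2 * 0.01
```

This is why the paragraph above calls it a 1.2 absolute weight: identical nudges from the component models are amplified by a factor of 1.2 rather than averaged.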

Currently, this model ranks at the top of my personal RP unit-test benchmark and scored a very solid 20 on [lilblam's LLM Logic Test](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=1278290632). My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today, so I haven't played with it too much 😊

### The sauce
```yaml
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca

### Custom format:
I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).

Otherwise, I tried to ensure that all of the underlying merged models were Alpaca-flavored.

### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
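For programmatic use, the Alpaca template above can be filled in with a small helper. A minimal sketch (`build_alpaca_prompt` is a hypothetical helper name, not part of any library):

```python
# Fill the Alpaca template shown above with a user instruction.
# build_alpaca_prompt is a hypothetical helper, not a library API.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "\n"
    "### Instruction:\n"
    "{prompt}\n"
    "\n"
    "### Response:\n"
)

def build_alpaca_prompt(instruction: str) -> str:
    """Return the full Alpaca-formatted prompt for one instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction.strip())

text = build_alpaca_prompt("Write a haiku about autumn.")
print(text)
```

Generation should then continue from the trailing `### Response:` marker, matching how the underlying Alpaca-tuned merges were trained.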
3
imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87a5ab4822853fd5788dc39c3a90ba13e5a4e686e0b74fb08576e892c94d4af1
size 4988126
2104
imatrix-base.txt
Normal file
File diff suppressed because it is too large