Initialize project; model provided by the ModelHub XC community

Model: ChuckMcSneed/PMaxxxer-v1-70b
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 06:55:56 +08:00
commit 0e286aa6d9
24 changed files with 93667 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
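The rules above are standard `git lfs track` output: any path matching one of these globs is stored as an LFS pointer rather than committed directly. Whether a given filename would be routed through LFS can be approximated by matching it against the patterns — a minimal sketch using Python's `fnmatch` (an approximation; git's own glob matcher differs slightly, e.g. for `**`):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.pt", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked glob."""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("model-00001-of-00015.safetensors"))  # True
print(is_lfs_tracked("README.md"))                         # False
```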

81
README.md Normal file

@@ -0,0 +1,81 @@
---
license: llama2
tags:
- merge
- mergekit
---
# BABE WAKE UP NEW MEME MODELS JUST DROPPED
Ladies and Gentlemen!
I present to you
*drum roll*
THE BENCHBREAKERS!
- [PMaxxxer](https://huggingface.co/ChuckMcSneed/PMaxxxer-v1-70b) (The Good)
- [SMaxxxer](https://huggingface.co/ChuckMcSneed/SMaxxxer-v1-70b) (The Bad)
- [BenchmaxxxerPS](https://huggingface.co/ChuckMcSneed/BenchmaxxxerPS-v1-123b) (The Ugly)
These three **interesting** models were designed in an attempt to break [my own meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) and well... they failed. The results are interesting nonetheless.
# SMAXXXER
The aggressor, the angry and dumb hobo that will roleplay with you. This meme model was designed to break the stylized writing test, and it kinda did, though it still can't surpass ChatGPT.
For its creation [lzlv](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) was TIES-merged with [spicyboros](https://huggingface.co/jondurbin/spicyboros-70b-2.2), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit).
# PMAXXXER
The overly politically correct SJW university dropout, the failed writer that's not really good at anything. This meme model was designed to break the poems test and it's an absolute failure.
For its creation [WinterGoddess](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2) was TIES-merged with [euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit).
# BENCHMAXXXER PS
The true meme model. Goliath-style frankenmerge of SMAXXXER and PMAXXXER. You might think: "Oh, it's a frankenmerge, the characteristics of the models will even out, right?" Completely wrong in this case; here the characteristics of the models add up. You get an angry hobo stuck with an SJW in the same fucking body! It will assault you and then immediately apologize for it! Then it will assault you again! And apologize again! Kinda funny. Its writing style is also a bit different from Goliath's.
Is it worth using over Goliath? Not really. However, if you have fast internet and patience to try a 123b meme model, go for it!
# FAILED MODELS (not gonna upload)
## BENCHMAXXXER SP
Frankenmerge of SMAXXXER and PMAXXXER, just like BENCHMAXXXER PS, but in different order. Has severe brain damage, clearly the influence of the hobo is strong in this one.
## BENCHMAXXXER SS
Self-merge of SMAXXXER, a bit less dumb and a bit less aggressive than the original SMAXXXER.
## BENCHMAXXXER MOE
2x70B MoE merge of SMAXXXER and PMAXXXER; unremarkable. Not smart, not angry. Just averaged out.
# PROMPT FORMAT
Alpaca.
```
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
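The template above can be assembled programmatically. A minimal sketch (the function name is mine, and dropping the `### Input:` section when there is no input follows the common Alpaca convention — an assumption, since the card only shows the full form):

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble an Alpaca-format prompt matching the template above."""
    if input_text:
        return (f"### Instruction:\n{instruction}\n"
                f"### Input:\n{input_text}\n"
                f"### Response:\n")
    # Assumption: omit the Input section entirely when there is no input.
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(alpaca_prompt("Write a limerick about llamas."))
```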
# Benchmarks
## NeoEvalPlusN
[My meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) which the models were designed to break.
| Test name | goliath-120b |PMaxxxer-v1-70b |SMaxxxer-v1-70b |BenchmaxxxerPS-v1-123b |BenchmaxxxerSP-v1-123b |BenchmaxxxerSS-v1-123b |BenchmaxxxerMOE-v1-123b |
| -------- | ------- | -------- | ------- | -------- | ------- | ------- | -------- |
| B | 3 | 3 |2 |3 |1.5 |1.5|2|
| C | 2 | 1 |1 |2 |2 |2|1|
| D | 1 | 1 |0 |1 |1 |0.5|3|
| S | 5 | 6.75 |7.25 |7.25 |6.75 |6.5|7.25|
| P | 6 | 4.75 |4.25 |5.25 |5.25 |5.5|5|
| Total | 17 | 16.5 |14.5 |18.5 |16.5 |16|18.25|
## Open LLM leaderboard
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|---------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|PMaxxxer-v1-70b |72.41 |71.08|87.88 |70.39|59.77 |82.64 |62.7 |
|SMaxxxer-v1-70b |72.23 |70.65|88.02 |70.55|60.7 |82.87 |60.58|
|Difference |0.18 |0.43 |-0.14 |-0.16|-0.93 |-0.23 |2.12 |
Performance here is decent. It was #5 on the leaderboard among 70b models when I submitted it. This leaderboard is currently quite useless, though: some 7b braindead meme merges have high scores there while claiming to be the next GPT-4. At least I don't pretend that my models aren't a meme.

29
config.json Normal file

@@ -0,0 +1,29 @@
{
"_name_or_path": "/WinterGoddess",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 8192,
"initializer_range": 0.02,
"intermediate_size": 28672,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 64,
"num_hidden_layers": 80,
"num_key_value_heads": 8,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.36.2",
"use_cache": true,
"vocab_size": 32000
}
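The config above pins down the model's size. A sketch deriving the parameter count from these fields — standard Llama shapes with untied embeddings and grouped-query attention (8 KV heads), which lands at the usual ~69B for a Llama-2-70B-class model:

```python
# Shapes taken from the config.json above.
hidden, inter, layers = 8192, 28672, 80
vocab, heads, kv_heads = 32000, 64, 8
head_dim = hidden // heads           # 128
kv_dim = kv_heads * head_dim         # 1024 (grouped-query attention)

attn = 2 * hidden * hidden + 2 * hidden * kv_dim  # q/o + k/v projections
mlp = 3 * hidden * inter                          # gate, up, down
norms = 2 * hidden                                # two RMSNorms per layer
per_layer = attn + mlp + norms

embeddings = 2 * vocab * hidden      # input embeddings + untied lm_head
total = layers * per_layer + embeddings + hidden  # + final norm
print(f"{total / 1e9:.1f}B parameters")  # 69.0B parameters
```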

19
mergekit_config.yml Normal file

@@ -0,0 +1,19 @@
models:
- model: euryale
parameters:
density: 0.25
weight: 0.5
- model: xwin
parameters:
density: 0.25
weight: 0.5
- model: dolphin
parameters:
density: 0.25
weight: 0.25
merge_method: ties
base_model: WinterGoddess
parameters:
normalize: true
dtype: float16
tokenizer_source: base
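The config above asks mergekit to TIES-merge three task models into the `WinterGoddess` base, each trimmed to a `density` fraction of its largest-magnitude deltas and combined with the given `weight`s. A toy-scale sketch of the TIES procedure (trim, elect sign, disjoint mean) on flat lists — real mergekit operates tensor-by-tensor on full checkpoints, so this is illustrative only:

```python
def ties_merge(base, task_models, densities, weights, normalize=True):
    """TIES-merge toy sketch: trim, elect sign, disjoint weighted mean."""
    n = len(base)
    # 1) Task vectors: each model's delta from the base.
    deltas = [[m[i] - base[i] for i in range(n)] for m in task_models]
    # 2) Trim: keep only the top `density` fraction by magnitude.
    trimmed = []
    for d, rho in zip(deltas, densities):
        k = max(1, round(rho * n))
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])
    # 3) Per coordinate: elect a sign, then average agreeing entries.
    merged = []
    for i in range(n):
        s = sum(w * t[i] for w, t in zip(weights, trimmed))
        sign = 1.0 if s >= 0 else -1.0
        num = sum(w * t[i] for w, t in zip(weights, trimmed) if t[i] * sign > 0)
        den = sum(w for w, t in zip(weights, trimmed) if t[i] * sign > 0)
        if den:
            merged.append(base[i] + (num / den if normalize else num))
        else:
            merged.append(base[i])  # nothing survived trimming here
    return merged

base = [0.0, 0.0, 0.0, 0.0]
model_a = [1.0, -1.0, 0.1, 0.0]
model_b = [1.0, 1.0, 0.0, 0.1]
merged = ties_merge(base, [model_a, model_b],
                    densities=[0.5, 0.5], weights=[1.0, 1.0])
print(merged)  # [1.0, 1.0, 0.0, 0.0]
```

Note how the conflicting second coordinate (-1 vs +1) resolves to the elected sign's value instead of averaging to zero — the point of TIES over a plain linear merge.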


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dcd55df725d5bd15fe3644f09b1fc47003a11da51029b49b5385f8379486cc4
size 9852606304
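The three-line block above is a Git LFS pointer: the actual weights live in LFS storage, keyed by the sha256 `oid`, and `git lfs pull` materializes them locally. A minimal parser sketch for the pointer format (the function name is mine):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0dcd55df725d5bd15fe3644f09b1fc47003a11da51029b49b5385f8379486cc4
size 9852606304"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 9852606304 (bytes, ~9.2 GiB)
```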


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49985db980e7cd83e2760bacaa3f4d3c72307527cc2456bbe5f9ed7022f3e313
size 9965868680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f4327a95d7f16f23b67188d1c82f657ae96c5c360e015380a228c5b77bec214
size 9798096912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da194a0e10c92f8d701dc11cfa3c1ef36c9590ac5a1a96ae068d62ecec2cec26
size 9663846080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2dda75d12210c561b54d7a84c032f0f10e9e63c0c586a723ff043d0f9edeead7
size 9630324408


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20fa8e389420266a4d023e7e52482df75973af1a0c54cfc449690c807e1b8053
size 9798096912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c23a64203036320e18ccb511a3c8078d722cbd7027d7a794b60b175ff65b9f96
size 9982629568


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a40b2ebdb959d70f0b7579e0cf013b637c1bbd0a2a35de53adde93b7fa73822
size 9781303080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:777dfb059fd0f27e367b5e1e24444715dd7776d440846f6065bf205875a08323
size 9798096912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7785a5f1fede04e01ada6193be7bd4e13f1b535a7079c701697f349e5ab105f1
size 9798096912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e8ac955f0334ee0ebfabe7bc19dffd64b237703aaf6793b220fe9d221ce0209
size 9982629568


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64530f5a3889a209b29cac7cbb2d7a6a5693d8b725b2399ab710a3017b52c50d
size 9781303080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d5e2137c123945d957ca2b887f98e0a8bc8e60743fae82289e702d7e3969b93
size 9798096912


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5959932924e6d6b63676e1518aee747d06a6e90ab599f87015b7f423e2300e95
size 9550599496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:759ee01ad884d7a02f39e4026d59d3c973411a0fac398c2b4734400de4115e39
size 771785520

File diff suppressed because one or more lines are too long

23
special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

93391
tokenizer.json Normal file

File diff suppressed because it is too large

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

40
tokenizer_config.json Normal file

@@ -0,0 +1,40 @@
{
"add_bos_token": true,
"add_eos_token": false,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}