Update README.md

This commit is contained in:
Cherrytest
2025-03-04 05:43:38 +00:00
parent 217c8a3dd7
commit 1cc5a3fd93
12 changed files with 93521 additions and 41 deletions


@@ -1,47 +1,50 @@
---
#license: Apache License 2.0
#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan
#- gpt
#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp
#language:
## language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn
#metrics:
## e.g. CIDEr, BLEU, ROUGE
#- CIDEr
#tags:
## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained
#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq
#- vllm
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
### The contributors of this model have not provided a more detailed model introduction. Model files and weights are available on the "Model Files" page.
#### You can download the model with the git clone command below or via the ModelScope SDK
SDK download
```bash
# Install ModelScope
pip install modelscope
```
```python
# Download the model via the SDK
from modelscope import snapshot_download
model_dir = snapshot_download('Undi95/Unholy-v1-10L-13B')
```
Git download
```bash
# Download the model via Git
git clone https://www.modelscope.cn/Undi95/Unholy-v1-10L-13B.git
```
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ysQGHLh1dd6I40rVK_jk2.png)

[HIGHLY EXPERIMENTAL]

(Sister model: https://huggingface.co/Undi95/Unholy-v1-12L-13B)

Use at your own risk. I'm not responsible for any usage of this model; don't try to do anything this model tells you to do.

Uncensored.

If you are censored, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them, that trigger censoring across all the layers of the model (since they are all trained on some of them in one way or another).

10L: This is a test project. uukuguy/speechless-llama2-luban-orca-platypus-13b and jondurbin/spicyboros-13b-2.2 were merged; I then deleted the first 10 layers and added 10 layers of MLewd at the beginning, trying to break all possible censoring, before merging the output with MLewd at 0.66 weight.

<!-- description start -->
## Description

This repo contains fp16 files of Unholy v1, an uncensored model.
<!-- description end -->

<!-- models start -->
## Models used

- uukuguy/speechless-llama2-luban-orca-platypus-13b
- jondurbin/spicyboros-13b-2.2
- Undi95/MLewd-L2-13B-v2-3
<!-- models end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
<!-- prompt-template end -->
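For illustration, the Alpaca template can be filled in programmatically before sending text to the model. The helper below is a hypothetical sketch, not part of the original card:

```python
# Alpaca prompt format used by this model (copied from the template above).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Substitute the user's instruction into the template's {prompt} slot.
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarize the model card."))
```

The model's completion is expected to follow directly after the `### Response:` line.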
<p style="color: lightgrey;">If you are a contributor to this model, we invite you to complete the model card promptly, following the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>
Example:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/jaZzEcPP0IET6_KX7J5Hm.png)

3
added_tokens.json Normal file

@@ -0,0 +1,3 @@
{
"<pad>": 32000
}

27
config.json Normal file

@@ -0,0 +1,27 @@
{
"_name_or_path": "Undi95/MLewd-L2-13B-v2-3",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 40,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.33.1",
"use_cache": true,
"vocab_size": 32000
}
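One sanity check falls straight out of these fields: the per-head attention dimension. The snippet below just recomputes it from the values in config.json (a sketch, not anything shipped with the repo):

```python
# Values copied from the config.json shown above.
hidden_size = 5120
num_attention_heads = 40

# Per-head dimension: hidden_size is split evenly across the attention heads.
head_dim = hidden_size // num_attention_heads
print(head_dim)  # 128
```

These are the standard Llama-2-13B dimensions, consistent with the 13B models listed in the card.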

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15b74c47bd0d3b0408df3a8f6a23aa52592ac1da1ff240a86bf4d34a7274002b
size 136
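The three-line stanzas above are Git LFS pointer files (the ~130-byte stubs Git stores in place of the real binaries), not the weights themselves. A small sketch parsing one such pointer, using the text from the hunk above:

```python
# LFS pointer text copied from the diff hunk above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:15b74c47bd0d3b0408df3a8f6a23aa52592ac1da1ff240a86bf4d34a7274002b
size 136"""

# Each line of an LFS pointer is "key value"; parse into a dict.
fields = dict(line.split(" ", 1) for line in pointer_text.splitlines())
algo, digest = fields["oid"].split(":", 1)
print(algo, fields["size"])  # sha256 136
```

Running `git lfs pull` inside a clone replaces these stubs with the actual files.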


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:347fd333e00fd01c64055dfeddd23097890e809e85f36e74ff0ef7521a85bcfa
size 135


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7db6b47d31199a65df215aa714ae4a0878e3efd47129fe546092285ec19e815b
size 135

File diff suppressed because one or more lines are too long

24
special_tokens_map.json Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"pad_token": "<unk>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

93400
tokenizer.json Normal file

File diff suppressed because it is too large Load Diff

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a8f238a200be6c23fbba0f9a999ab4fe3c09ca303b29805e68cf6659bfb7d89
size 131

9
tokenizer_config.json Normal file

@@ -0,0 +1,9 @@
{
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"model_max_length": 1000000000000000019884624838656,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": true
}
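One quirk worth noting: the odd-looking `model_max_length` is the sentinel value `transformers` records when no real context limit was set; it is `int(1e30)` after float rounding, not a meaningful limit (the usable context is `max_position_embeddings` = 4096 from config.json). A quick check, as a sketch:

```python
# model_max_length from tokenizer_config.json is just float 1e30 converted
# to an int, i.e. "effectively unbounded" rather than a real context window.
model_max_length = 1000000000000000019884624838656
print(model_max_length == int(1e30))  # True
```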