Initialize the project; model provided by the ModelHub XC community
Model: pai/pai-bloom-1b1-text2prompt-sd    Source: Original Platform
.gitattributes (vendored, new file, +49 lines)
@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md (new file, +98 lines)
@@ -0,0 +1,98 @@
---
license: apache-2.0
widget:
- text: "Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: 1 girl\nOutput:"
tags:
- pytorch
- transformers
- text-generation
---

# BeautifulPrompt

## Brief Introduction

We release an automatic prompt-generation model: enter an extremely simple prompt and get back a version optimized by the language model, helping you generate beautiful images with little effort.

* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
model = AutoModelForCausalLM.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd').eval().cuda()

raw_prompt = '1 girl'
# The model expects this exact instruction template.
input_text = f'Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: {raw_prompt}\nOutput:'
input_ids = tokenizer.encode(input_text, return_tensors='pt').cuda()

outputs = model.generate(
    input_ids,
    max_length=384,
    do_sample=True,
    temperature=1.0,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.2,
    num_return_sequences=5)

# Decode only the newly generated tokens and strip surrounding whitespace.
prompts = tokenizer.batch_decode(outputs[:, input_ids.size(1):], skip_special_tokens=True)
prompts = [p.strip() for p in prompts]
print(prompts)
```
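The `generate` call above combines temperature scaling with top-k and top-p (nucleus) sampling. A pure-Python sketch of how those filters restrict the next-token distribution (illustrative only; not the transformers implementation):

```python
import math

def filter_logits(logits, top_k=50, top_p=0.95, temperature=1.0):
    """Keep the top_k highest logits, then the smallest prefix of those whose
    probabilities sum to at least top_p; renormalize and return {index: prob}."""
    scaled = [l / temperature for l in logits]
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    exps = [math.exp(scaled[i]) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    keep, cum = [], 0.0
    for idx, p in zip(order, probs):
        keep.append((idx, p))
        cum += p
        if cum >= top_p:          # nucleus cutoff: stop once mass reaches top_p
            break
    norm = sum(p for _, p in keep)
    return {idx: p / norm for idx, p in keep}

# With these toy logits, tokens 0 and 1 survive the nucleus cutoff.
dist = filter_logits([2.0, 1.0, 0.1, -1.0], top_k=3, top_p=0.9)
print(dist)
```

Sampling then draws from the renormalized distribution; `repetition_penalty` additionally down-weights tokens already present in the context.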

## Gallery

<style>
table th:first-of-type {
    width: 50%;
}
table th:nth-of-type(2) {
    width: 50%;
}
</style>

| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: taylor swift, country, golden, fearless,wavehair | prompt: portrait of taylor swift as a beautiful woman, long hair, country, golden ratio, intricate, symmetrical, cinematic lighting, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration |
|  |  |

| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: A majestic sailing ship | prompt: a massive sailing ship, epic, cinematic, artstation, greg rutkowski, james gurney, sparth |
|  |  |
## Notice for Use

Use of this model is subject to the [AIGC Model Open-Source Special Terms](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html); please read the document carefully and abide by its terms.
## Paper Citation

If you find the model useful, please consider citing the paper:

```
@inproceedings{emnlp2023a,
  author    = {Tingfeng Cao and
               Chengyu Wang and
               Bingyan Liu and
               Ziheng Wu and
               Jinhui Zhu and
               Jun Huang},
  title     = {BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track},
  pages     = {1--11},
  year      = {2023}
}
```
config.json (new file, +32 lines)
@@ -0,0 +1,32 @@
{
  "_name_or_path": "alibaba-pai/pai-bloom-1b1-text2prompt-sd",
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "BloomForCausalLM"
  ],
  "attention_dropout": 0.0,
  "attention_softmax_in_fp32": true,
  "bias_dropout_fusion": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_dropout": 0.0,
  "hidden_size": 1536,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "masked_softmax_fusion": true,
  "model_type": "bloom",
  "n_head": 16,
  "n_inner": null,
  "n_layer": 24,
  "offset_alibi": 100,
  "pad_token_id": 3,
  "pretraining_tp": 1,
  "skip_bias_add": true,
  "skip_bias_add_qkv": false,
  "slow_but_exact": false,
  "torch_dtype": "float16",
  "transformers_version": "4.27.4",
  "unk_token_id": 0,
  "use_cache": true,
  "vocab_size": 250880
}
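The hyperparameters above pin down the model's shape. A quick back-of-the-envelope check, assuming the standard BLOOM layout with tied input/output embeddings (an approximate sketch, not an exact audit of the checkpoint):

```python
# Derived quantities from config.json.
V, h, n_head, L = 250880, 1536, 16, 24    # vocab_size, hidden_size, n_head, n_layer

head_dim = h // n_head                    # 96 dimensions per attention head
embed = V * h                             # word embeddings (tied with the LM head)
per_layer = 12 * h * h + 13 * h           # qkv + attn-out + 4h-wide MLP weights,
                                          # plus biases and two layernorms
extra = 4 * h                             # embedding layernorm + final layernorm
total = embed + L * per_layer + extra

print(head_dim, f'{total / 1e9:.2f}B')    # roughly the "1b1" in the model name
```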
configuration.json (new file, +1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
example1.png (new binary file, 1.6 MiB; not shown)
example2.png (new binary file, 1.4 MiB; not shown)
example3.png (new binary file, 1.8 MiB; not shown)
example4.png (new binary file, 1.6 MiB; not shown)
generation_config.json (new file, +7 lines)
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 3,
  "transformers_version": "4.29.2"
}
model.safetensors (new file, +3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d19a7a6ac4065977abdd26715cead6b2927853ac1f1f788c824d0e88b51d1e1
size 2130662600
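This is a Git LFS pointer, not the weights themselves; the `size` field is the byte count of the real file. Since config.json declares `"torch_dtype": "float16"` (2 bytes per parameter), the size doubles as a parameter-count sanity check (a rough sketch that ignores the small safetensors header):

```python
# Size taken from the LFS pointer above; fp16 stores 2 bytes per parameter.
size_bytes = 2_130_662_600
approx_params = size_bytes // 2
print(f'~{approx_params / 1e9:.2f}B parameters')  # consistent with a ~1.1B model
```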
pytorch_model.bin (new file, +3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f8da7879590b07ed703a20a150bb3dd20e6df6d132596827c2b7b5c1570a32cd
size 2130723617
special_tokens_map.json (new file, +7 lines)
@@ -0,0 +1,7 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "sep_token": "<sep>",
  "unk_token": "<unk>"
}
tokenizer.json (new file, +3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17a208233d2ee8d8c83b23bc214df737c44806a1919f444e89b31e586cd956ba
size 14500471
tokenizer_config.json (new file, +11 lines)
@@ -0,0 +1,11 @@
{
  "add_prefix_space": false,
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "padding_side": "left",
  "tokenizer_class": "BloomTokenizer",
  "unk_token": "<unk>"
}
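`"padding_side": "left"` matters for batched causal generation: new tokens are appended on the right, so padding must go on the left so every prompt ends at the final position. A pure-Python illustration of the idea (token ids are made up; `pad_id=3` follows config.json):

```python
def left_pad(batch, pad_id=3):
    """Left-pad variable-length token-id sequences to a common width,
    returning padded ids and a matching attention mask (0 = padding)."""
    width = max(len(seq) for seq in batch)
    ids = [[pad_id] * (width - len(s)) + s for s in batch]
    mask = [[0] * (width - len(s)) + [1] * len(s) for s in batch]
    return ids, mask

ids, mask = left_pad([[5, 6], [7, 8, 9]])
print(ids)   # [[3, 5, 6], [7, 8, 9]]
print(mask)  # [[0, 1, 1], [1, 1, 1]]
```

In practice the tokenizer does this for you when called with `padding=True`; the sketch only shows why the pad tokens sit on the left.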