---
base_model: mychen76/tinyllama-colorist-v2
inference: false
license: apache-2.0
model_creator: mychen76
model_name: tinyllama-colorist-v2
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
pipeline_tag: text-generation
---
# mychen76/tinyllama-colorist-v2-GGUF

Quantized GGUF model files for [tinyllama-colorist-v2](https://huggingface.co/mychen76/tinyllama-colorist-v2) from [mychen76](https://huggingface.co/mychen76)

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-colorist-v2.q2_k.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q2_k.gguf) | q2_k | 482.15 MB |
| [tinyllama-colorist-v2.q3_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tinyllama-colorist-v2.q4_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q4_k_m.gguf) | q4_k_m | 667.82 MB |
| [tinyllama-colorist-v2.q5_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q5_k_m.gguf) | q5_k_m | 782.05 MB |
| [tinyllama-colorist-v2.q6_k.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q6_k.gguf) | q6_k | 903.42 MB |
| [tinyllama-colorist-v2.q8_0.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q8_0.gguf) | q8_0 | 1.17 GB |
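The usage example further down runs the full-precision model through `transformers`; to try one of the quantized files above directly, here is a minimal sketch assuming `llama-cpp-python` and `huggingface_hub` are installed (the quant choice and sampling values are illustrative, not from the original card):

```python
# Sketch: download one quantized file from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Any filename from the table above works; q4_k_m is a common size/quality middle ground.
model_path = hf_hub_download(
    repo_id="afrideva/tinyllama-colorist-v2-GGUF",
    filename="tinyllama-colorist-v2.q4_k_m.gguf",
)

llm = Llama(model_path=model_path)
out = llm(
    "<|im_start|>user\ngive me a pure brown color<|im_end|>\n<|im_start|>assistant:",
    max_tokens=12,
    temperature=0.1,
)
print(out["choices"][0]["text"])
```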
## Original Model Card:
MODEL: "mychen76/tinyllama-colorist-v2" is a TinyLlama model fine-tuned on a color dataset.

MOTIVATION: A fun experimental model for using TinyLlama as a Llama 2 replacement in resource-constrained environments.

PROMPT FORMAT: "<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

MODEL USAGE:
```python
import torch
from time import perf_counter
from transformers import AutoTokenizer, pipeline

# The fine-tuned colorist model on the Hugging Face Hub (the base_model of this repo).
model_id_colorist_final = "mychen76/tinyllama-colorist-v2"

def print_color_space(hex_color):
    """Render a hex color as a colored swatch using ANSI escape codes."""
    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip('#')
        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
    r, g, b = hex_to_rgb(hex_color)
    print(f'{hex_color}: \033[48;2;{r};{g};{b}m \033[0m')

def formatted_prompt(question):
    # Wrap the question in the prompt format shown above.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)
pipe = pipeline(
    "text-generation",
    model=model_id_colorist_final,
    torch_dtype=torch.float16,
    device_map="auto",
)

start_time = perf_counter()

prompt = formatted_prompt('give me a pure brown color')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=12,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```
Example output:

```
Result: <|im_start|>user
give me a pure brown color<|im_end|>
<|im_start|>assistant: #807070<|im_end|>

Time taken for inference: 0.19 seconds
```
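The usage code defines `print_color_space` but never calls it; as a small follow-up sketch, the hex code can be pulled out of the generated text and rendered with it (the regex and sample string are illustrative):

```python
import re

# Illustrative: extract the hex code from a generated reply and show it
# with print_color_space() from the usage code above.
generated = "<|im_start|>assistant: #807070<|im_end|>"  # sample output
match = re.search(r"#[0-9a-fA-F]{6}", generated)
if match:
    print_color_space(match.group(0))
```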
Dataset: "burkelibbey/colors"
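For reference, a minimal sketch for peeking at that dataset with the `datasets` library (assuming it is publicly available on the Hub; the column names are whatever the dataset defines):

```python
from datasets import load_dataset

# Load the color dataset referenced above and inspect its structure.
ds = load_dataset("burkelibbey/colors", split="train")
print(ds)     # column names and row count
print(ds[0])  # first example
```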