---
base_model: BEE-spoke-data/smol_llama-220M-openhermes
datasets:
- teknium/openhermes
inference: false
license: apache-2.0
model_creator: BEE-spoke-data
model_name: smol_llama-220M-openhermes
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: burritos
  text: "Below is an instruction that describes a task, paired with an input that
    provides further context. Write a response that appropriately completes the request.
    \ \n \n### Instruction: \n \nWrite an ode to Chipotle burritos. \n \n###
    Response: \n"
---

# BEE-spoke-data/smol_llama-220M-openhermes-GGUF

Quantized GGUF model files for [smol_llama-220M-openhermes](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-220m-openhermes.fp16.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.fp16.gguf) | fp16 | 436.50 MB |
| [smol_llama-220m-openhermes.q2_k.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q2_k.gguf) | q2_k | 94.43 MB |
| [smol_llama-220m-openhermes.q3_k_m.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q3_k_m.gguf) | q3_k_m | 114.65 MB |
| [smol_llama-220m-openhermes.q4_k_m.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q4_k_m.gguf) | q4_k_m | 137.58 MB |
| [smol_llama-220m-openhermes.q5_k_m.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q5_k_m.gguf) | q5_k_m | 157.91 MB |
| [smol_llama-220m-openhermes.q6_k.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q6_k.gguf) | q6_k | 179.52 MB |
| [smol_llama-220m-openhermes.q8_0.gguf](https://huggingface.co/afrideva/smol_llama-220M-openhermes-GGUF/resolve/main/smol_llama-220m-openhermes.q8_0.gguf) | q8_0 | 232.28 MB |

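For a quick local test of one of these files, one option is to fetch it with `huggingface_hub` and load it with `llama-cpp-python`. This is a minimal sketch, not something the card prescribes; the package choice, context size, and sampling settings are all assumptions:

```python
# pip install llama-cpp-python huggingface-hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (q4_k_m as an example).
model_path = hf_hub_download(
    repo_id="afrideva/smol_llama-220M-openhermes-GGUF",
    filename="smol_llama-220m-openhermes.q4_k_m.gguf",
)

# A 220M-parameter model runs comfortably on CPU.
llm = Llama(model_path=model_path, n_ctx=2048)

# Alpaca-style prompt, per the original model card below.
prompt = (
    "Below is an instruction that describes a task, paired with an input that\n"
    "provides further context. Write a response that appropriately completes\n"
    "the request.\n\n"
    "### Instruction:\n\nWrite an ode to Chipotle burritos.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=96, temperature=0.25)
print(out["choices"][0]["text"])
```
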
## Original Model Card:

# BEE-spoke-data/smol_llama-220M-openhermes

> Please note that this is an experiment, and the model has limitations because it is smol.

The prompt format is Alpaca:

```
Below is an instruction that describes a task, paired with an input that
provides further context. Write a response that appropriately completes
the request.

### Instruction:

How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming.

### Inputs:

### Response:
```

It was trained on inputs, so if you have an input (like some text to ask a question about), include it under `### Inputs:`.

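For illustration, a small helper that assembles this template in Python; `build_prompt` is hypothetical (not shipped with the model), and the exact whitespace is a best guess from the example above:

```python
def build_prompt(instruction: str, inputs: str = "") -> str:
    """Assemble an Alpaca-style prompt as shown in the template above.

    Hypothetical helper for illustration; the `### Inputs:` section is kept
    even when empty, matching the example in this card.
    """
    header = (
        "Below is an instruction that describes a task, paired with an input that\n"
        "provides further context. Write a response that appropriately completes\n"
        "the request.\n"
    )
    return (
        f"{header}\n### Instruction:\n\n{instruction}\n"
        f"\n### Inputs:\n\n{inputs}\n"
        f"\n### Response:\n"
    )

print(build_prompt("Summarize the text below.", inputs="GGUF is a file format for quantized models."))
```
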
## Example

Output for the prompt above ^. The inference API is set to sample with a low temperature, so you should see (_at least slightly_) different generations each time.


|
||
|
|
|
||
|
|
Note that the inference API parameters used here are an initial educated guess, and may be updated over time:
|
||
|
|
|
||
|
|
```yml
inference:
  parameters:
    do_sample: true
    renormalize_logits: true
    temperature: 0.25
    top_p: 0.95
    top_k: 50
    min_new_tokens: 2
    max_new_tokens: 96
    repetition_penalty: 1.03
    no_repeat_ngram_size: 5
    epsilon_cutoff: 0.0008
```

Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params!

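Those keys are standard `transformers` generation arguments, so reproducing them in Python against the original (non-GGUF) checkpoint could look like the sketch below; the prompt string simply reuses the burrito example from the widget above:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-220M-openhermes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task, paired with an input that\n"
    "provides further context. Write a response that appropriately completes\n"
    "the request.\n\n"
    "### Instruction:\n\nWrite an ode to Chipotle burritos.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    # Mirrors the inference-API parameters listed above.
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.03,
    no_repeat_ngram_size=5,
    epsilon_cutoff=0.0008,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
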
## Data

Note that **this checkpoint** was fine-tuned on `teknium/openhermes`, which consists of synthetic data generated by an OpenAI model. This means usage of this checkpoint should follow OpenAI's terms of use: https://openai.com/policies/terms-of-use

---