---
base_model: LumiOpen/Viking-7B
language:
- en
- fi
- sv
- 'no'
- da
- is
- nn
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- mpasila/Magnum-V2-Mix
- anthracite-org/Stheno-Data-Filtered
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
---
It seems fine, but I should probably add some instruction prompts to the dataset, or first train it on an instruct dataset and then train it on the RP data, to make it better.

Prompt format: ChatML
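
For reference, a ChatML conversation is laid out like this (standard ChatML markers; the system turn is optional):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{model reply}<|im_end|>
```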
LoRA: [mpasila/Viking-Magnum-v0.1-LoRA-7B](https://huggingface.co/mpasila/Viking-Magnum-v0.1-LoRA-7B)
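
If you want to run the adapter directly on top of the base model, a minimal sketch with `transformers` and `peft` could look like the following (the prompt text and sampling settings are placeholder choices, not recommendations from this card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "LumiOpen/Viking-7B"
lora_id = "mpasila/Viking-Magnum-v0.1-LoRA-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(model, lora_id)

# Build a ChatML prompt by hand (the base tokenizer may not ship a chat template).
prompt = (
    "<|im_start|>user\n"
    "Kirjoita lyhyt tarina majakanvartijasta.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```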
Another thing to note: this was trained with regular LoRA (not quantized/QLoRA), which should improve the quality a bit. The model's context length is only 4096 and it was trained at that length, but RoPE scaling may let you push it further.
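
If you want to experiment with going past the 4096-token context, one untested option is the `rope_scaling` setting that `transformers` exposes for Llama-style models. This is an assumption about how the base model is configured, not something verified on this finetune:

```python
import torch
from transformers import AutoModelForCausalLM

# Untested: linear RoPE scaling with factor 2.0 stretches the positional range
# to roughly 8192 tokens; quality beyond the trained 4096 is not guaranteed.
model = AutoModelForCausalLM.from_pretrained(
    "LumiOpen/Viking-7B",
    rope_scaling={"type": "linear", "factor": 2.0},
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```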
LoRA rank was 128, with alpha set to the same value. Trained for 1 epoch.
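
For context, those hyperparameters roughly map onto an Unsloth setup like the one below. This is only a sketch, not the actual training script; the target modules and other settings are assumptions:

```python
from unsloth import FastLanguageModel

# Load the base model for full-precision LoRA (not 4-bit/QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LumiOpen/Viking-7B",
    max_seq_length=4096,   # matches the base model's context length
    load_in_4bit=False,    # regular LoRA rather than QLoRA
)

# Rank 128 with alpha 128, as described above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# A single epoch of SFT would then follow via TRL's SFTTrainer as usual.
```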
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** LumiOpen/Viking-7B

This Llama-based model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)