Model: mpasila/Viking-Magnum-v0.1-7B
| base_model | language | license | tags | datasets |
|---|---|---|---|---|
| LumiOpen/Viking-7B |  | apache-2.0 |  |  |
It seems fine, but I should probably add some instruction prompts to the dataset, or train it on an instruct dataset first and then on the RP data, to make it better.
Prompt format: ChatML
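For reference, ChatML wraps each turn in `<|im_start|>` and `<|im_end|>` tokens, like so:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```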
LoRA: mpasila/Viking-Magnum-v0.1-LoRA-7B
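If you'd rather attach the adapter to the base model yourself, a minimal sketch using `peft` (assuming the standard `PeftModel` loading path) might look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then apply the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("LumiOpen/Viking-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "mpasila/Viking-Magnum-v0.1-LoRA-7B")
tokenizer = AutoTokenizer.from_pretrained("mpasila/Viking-Magnum-v0.1-7B")
```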
Another thing to note: this was trained with regular LoRA (not quantized/QLoRA), which should improve the quality a bit. The model's context length is only 4096 tokens, and it was trained at that length, but I think you can use RoPE scaling to extend it.
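As a sketch (assuming the model follows the Llama-style `rope_scaling` config in `transformers`), linear RoPE scaling could stretch the trained window like this:

```python
from transformers import AutoModelForCausalLM

# Linear RoPE scaling with factor 2.0 stretches the trained 4096-token
# context toward roughly 8192 tokens, usually at some quality cost.
model = AutoModelForCausalLM.from_pretrained(
    "mpasila/Viking-Magnum-v0.1-7B",
    rope_scaling={"type": "linear", "factor": 2.0},
    device_map="auto",
)
```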
LoRA rank was 128, with alpha set to the same value. Trained for 1 epoch.
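For context, a `peft` `LoraConfig` matching those numbers would look roughly like the sketch below; the target modules are my assumption, not confirmed from the training script:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the settings above: rank 128, alpha 128.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```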
Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: LumiOpen/Viking-7B
This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.