---
tags:
- quant
---
# GGUF files of [Llama-3-Magenta-Instruct-4x8B-MoE](https://huggingface.co/RDson/Llama-3-Magenta-Instruct-4x8B-MoE)
<img src="https://i.imgur.com/c1Mv8cy.png" width="640"/>
# Llama-3-Magenta-Instruct-4x8B-MoE
You should also check out the updated [Llama-3-Peach-Instruct-4x8B-MoE](https://huggingface.co/RDson/Llama-3-Peach-Instruct-4x8B-MoE)!
This is an experimental MoE created from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B), [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) and [Muhammad2003/Llama3-8B-OpenHermes-DPO](https://huggingface.co/Muhammad2003/Llama3-8B-OpenHermes-DPO) using Mergekit.
Mergekit YAML file:
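
The original YAML is not reproduced here. As a rough illustration only, a `mergekit-moe` config combining these four experts would look something like the sketch below; the `gate_mode`, `dtype`, and `positive_prompts` values are assumptions for illustration, not the settings actually used for this model:

```yaml
# Hypothetical mergekit-moe config sketch -- NOT the actual file used.
base_model: meta-llama/Meta-Llama-3-8B-Instruct
gate_mode: hidden        # assumed; mergekit also supports cheap_embed / random
dtype: bfloat16          # assumed
experts:
  - source_model: meta-llama/Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "chat"           # prompts are illustrative placeholders
  - source_model: nvidia/Llama3-ChatQA-1.5-8B
    positive_prompts:
      - "answer questions using the given context"
  - source_model: Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
      - "helpful assistant"
  - source_model: Muhammad2003/Llama3-8B-OpenHermes-DPO
    positive_prompts:
      - "follow the instruction"
```

With a config of this shape, `mergekit-moe config.yaml ./output-model` builds the 4x8B mixture; the `positive_prompts` steer how the router gate is initialized for each expert.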