Initial commit; model provided by the ModelHub XC community
Model: chanwit/flux-base-optimized Source: Original Platform
---
license: apache-2.0
language:
- en
---

# Flux-Base-Optimized

`flux-base-optimized` is the base model for finetuning the series of `flux-7b` models.

It is a hierarchical SLERP merge of the following models:

* mistralai/Mistral-7B-v0.1 (Apache 2.0)
* teknium/OpenHermes-2.5-Mistral-7B (Apache 2.0)
* Intel/neural-chat-7b-v3-3 (Apache 2.0)
* meta-math/MetaMath-Mistral-7B (Apache 2.0)
* openchat/openchat-3.5-0106 (previously openchat/openchat-3.5-1210) (Apache 2.0)
Here's how we did the hierarchical SLERP merge:

```
[flux-base-optimized]
         ↑
         |
[stage-1]-+-[openchat]
         ↑
         |
[stage-0]-+-[meta-math]
         ↑
         |
[openhermes]-+-[neural-chat]
```
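To illustrate the idea, here is a minimal sketch of pairwise SLERP merging of weight tensors, applied stage by stage as in the diagram above. This is not the exact tooling used to produce the model (the actual merge was likely done with a dedicated merge toolkit); the `slerp` and `merge_models` helpers, the NumPy state-dict representation, and the `t=0.5` interpolation factor are illustrative assumptions.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Interpolates along the great-circle arc between the flattened
    tensors; falls back to plain LERP when they are (nearly)
    colinear, where SLERP is numerically unstable.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Cosine of the angle between the two flattened tensors.
    dot = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f))
    dot = np.clip(dot, -1.0, 1.0)
    omega = np.arccos(dot)
    if np.abs(np.sin(omega)) < eps:
        return (1.0 - t) * v0 + t * v1  # LERP fallback
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0f + s1 * v1f).reshape(v0.shape)

def merge_models(a, b, t=0.5):
    """SLERP-merge two state dicts (name -> tensor), parameter by parameter."""
    return {name: slerp(t, a[name], b[name]) for name in a}

# Hierarchical staging, mirroring the diagram (hypothetical state dicts):
#   stage_0 = merge_models(openhermes, neural_chat)
#   stage_1 = merge_models(stage_0, meta_math)
#   base    = merge_models(stage_1, openchat)
```

Each stage merges the running intermediate with one more donor model, so earlier models are progressively diluted rather than averaged all at once.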