Initialize project; model provided by the ModelHub XC community
Model: Azazelle/Sina-Thor-7b-Merge Source: Original Platform
README.md (new file, +34 lines)
---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---

# Model Card for Sina-Thor-7b-Merge

<!-- Provide a quick summary of what the model is/does. -->

Part of a series of experimental DARE merges.

`.yaml` file for mergekit:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rishiraj/smol-7b #75
    parameters:
      weight: 0.2
      density: 0.41
  - model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125
    parameters:
      weight: 0.33
      density: 0.54
  - model: Azazelle/Dumb-Maidlet #200
    parameters:
      weight: 0.53
      density: 0.71
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
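The `weight` and `density` values above drive the DARE step: each model's task vector (its delta from the base) is randomly dropped at rate `1 - density`, the survivors are rescaled by `1 / density` so the expected delta is preserved, and the results are summed with the given weights. A minimal toy sketch of that step, using NumPy on small vectors (illustrative names only, not mergekit's actual API; the TIES-style sign-consensus part of `dare_ties` is omitted here):

```python
import numpy as np

def dare_delta(base, finetuned, density, rng):
    """Drop-and-rescale the task vector (finetuned - base)."""
    delta = finetuned - base
    keep = rng.random(delta.shape) < density  # keep each entry with prob. density
    return np.where(keep, delta / density, 0.0)

def dare_merge(base, finetuned_models, weights, densities, seed=0):
    """Weighted sum of drop-and-rescaled task vectors, added to the base."""
    rng = np.random.default_rng(seed)
    merged = base.copy()
    for ft, w, d in zip(finetuned_models, weights, densities):
        merged += w * dare_delta(base, ft, d, rng)
    return merged

# Toy stand-ins for the three fine-tuned models in the config above.
base = np.zeros(8)
fts = [np.ones(8) * k for k in (1.0, 2.0, 3.0)]
merged = dare_merge(base, fts,
                    weights=[0.2, 0.33, 0.53],
                    densities=[0.41, 0.54, 0.71])
```

With `density: 1.0` nothing is dropped and the full delta passes through; lower densities sparsify each contribution while keeping its expected magnitude, which is why the rescale factor is exactly `1 / density`.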