Initialize the project; model provided by the ModelHub XC community
Model: DevQuasar/coma-7B-v0.1 Source: Original Platform
---
base_model:
- meta-llama/Llama-2-7b-hf
- meta-llama/CodeLlama-7b-hf
library_name: transformers
tags:
- mergekit
- merge
license: llama2
model-index:
- name: coma-7B-v0.1
  results:
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: bbh
    metrics:
    - name: acc_norm
      type: acc_norm
      value: 0.3355
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: gpqa
    metrics:
    - name: acc_norm
      type: acc_norm
      value: 0.2587
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: math
    metrics:
    - name: exact_match
      type: exact_match
      value: 0.0104
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: mmlu
    metrics:
    - name: acc_norm
      type: acc_norm
      value: 0.1433
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: musr
    metrics:
    - name: acc_norm
      type: acc_norm
      value: 0.3844
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-evaluation-harness
      name: hellaswag
    metrics:
    - name: acc
      type: acc
      value: 0.5182234614618602
      verified: false
    - name: acc_norm
      type: acc_norm
      value: 0.6834295956980682
      verified: false
---
|
||||
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
|
||||
|
||||
'Make knowledge free for everyone'
|
||||
|
||||
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
||||
|
||||
|
||||
|
||||
# coma-7B-v0.1



CodeLlama + Llama = CoMa :)
This is an experiment to try out model merging.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Quantized version: [DevQuasar/coma-7B-v0.1-GGUF](https://huggingface.co/DevQuasar/coma-7B-v0.1-GGUF)

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged

"Blast from the Past" :D

The following models were included in the merge:
* [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
* [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meta-llama/Llama-2-7b-hf
    parameters:
      weight: 1.0
  - model: meta-llama/CodeLlama-7b-hf
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
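A configuration like this is fed to mergekit's `mergekit-yaml` command line tool. A sketch of the reproduction steps, assuming the YAML above is saved as `coma.yaml` (the filename is illustrative) and the base models are downloadable from the Hugging Face Hub:

```shell
# Install mergekit and run the merge; the merged model is written
# to ./coma-7B-v0.1. Downloads both 7B base models on first run.
pip install mergekit
mergekit-yaml coma.yaml ./coma-7B-v0.1
```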

I'm doing this to 'Make knowledge free for everyone', using my personal time and resources.

If you want to support my efforts, please visit my ko-fi page: https://ko-fi.com/devquasar

Also feel free to visit my website: https://devquasar.com/