---
base_model:
- WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
library_name: transformers
tags:
- mergekit
- merge
---

# final_model

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the linear merge method, with WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B as the base model.
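
A linear merge computes each output parameter as a weighted average of the corresponding parameters of the input models. The sketch below is illustrative only (it is not mergekit's implementation); plain Python lists stand in for real weight tensors, and the function name is hypothetical.

```python
# Hypothetical sketch of a linear merge: every parameter in the merged
# model is the weighted average of that parameter across the input models.
def linear_merge(state_dicts, weights, normalize=True):
    """Element-wise weighted average of per-parameter value lists."""
    total = sum(weights)
    if normalize and total != 0:
        # mergekit normalizes weights by default so they sum to 1
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "models", combined with weight 0.5 each as in the config below.
model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 4.0]}
merged = linear_merge([model_a, model_b], [0.5, 0.5])
print(merged["layer.weight"])  # → [2.0, 3.0]
```

With equal weights of 0.5, as in this model's configuration, the result is simply the arithmetic mean of the two checkpoints.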

### Models Merged

The following models were included in the merge:

* ./partial_model_1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./partial_model_1
    parameters: {weight: 0.5}
  - model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
    parameters: {weight: 0.5}
merge_method: linear
base_model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
dtype: float16
tokenizer_source: Qwen/Qwen2.5-Coder-7B-Instruct
```
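
A merge like this can be reproduced with the mergekit command-line tool. The commands below are a sketch, assuming mergekit is installed and the YAML above is saved as `merge_config.yml` (a filename chosen here for illustration):

```shell
# Install mergekit, then run the merge; ./final_model is the output directory.
pip install mergekit
mergekit-yaml merge_config.yml ./final_model
```

Because `tokenizer_source` is set in the config, the output tokenizer is taken from Qwen/Qwen2.5-Coder-7B-Instruct rather than from either merged checkpoint.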
## Description

Model synced from source: hellohle/imlong