---
license: other
base_model:
- artificialguybr/LLAMA-3.2-1B-OpenHermes2.5
- dphn/Dolphin3.0-Llama3.2-1B
- meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- mergekit
- merge
- slerp
- text-generation
- code
- instruct
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- HuggingFaceTB/smoltalk
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
- teknium/OpenHermes-2.5
---

# Llama-3.2-HermesDolphin-Coder-1B

Llama-3.2-HermesDolphin-Coder-1B is a compact merged language model designed for general instruction following, coding assistance, and lightweight conversational use. It combines Hermes-style instruction tuning and Dolphin-style helpfulness in a small Llama 3.2-class model intended for experimentation, local workflows, and developer-oriented prompting.

This model is a merge of pre-trained Llama 3.2 1B models, created with mergekit using the SLERP merge method.

## Model Summary

- **Model type:** Causal language model
- **Architecture:** `LlamaForCausalLM`
- **Primary use:** Text generation, instruction following, code-oriented prompting
- **Library:** Transformers
- **Merge method:** SLERP
- **Format:** Safetensors
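
The model can be loaded with the standard Transformers text-generation workflow. A minimal sketch is below; the repository id used here is an assumption taken from the sync source noted at the end of this card, so substitute the id you are actually loading from:

```python
# Minimal text-generation sketch using the Transformers pipeline API.
# NOTE: the model id below is an assumption based on the sync source
# listed in this card; replace it with the repository you are using.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="WithinUsAI/Llama3.2-Hermes.Dolphin.Coder-1B",
)

messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."}
]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```

Passing a list of chat messages lets the tokenizer apply the model's own chat template, which is safer than hand-formatting prompts for a merged model whose template may come from any of its parents.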

## Base Models

This merged model is based on:

- artificialguybr/LLAMA-3.2-1B-OpenHermes2.5
- dphn/Dolphin3.0-Llama3.2-1B
- meta-llama/Llama-3.2-1B-Instruct

## Merge Details

According to the repository configuration, the merge was produced with mergekit using SLERP, with the interpolation parameter set to the midpoint (`t: 0.5`).

### Merge configuration

```yaml
merge_method: slerp
base_model: artificialguybr/LLAMA-3.2-1B-OpenHermes2.5
models:
  - model: dphn/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
dtype: float32
parameters:
  t: 0.5
```
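
For intuition, SLERP interpolates along the arc between two weight vectors rather than along the straight line, which better preserves the norm of the result. A minimal numeric sketch of the formula (not mergekit's actual implementation, which operates tensor-by-tensor over full checkpoints):

```python
import math

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two weight vectors."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t = 0.5 is the midpoint setting used in this merge config.
mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

With `t: 0.5` and two orthogonal unit vectors, the result stays on the unit sphere (`[0.707..., 0.707...]`), whereas a plain average (`[0.5, 0.5]`) would shrink its norm.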
## Description

Model synced from source: WithinUsAI/Llama3.2-Hermes.Dolphin.Coder-1B