Model: C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-heretic

| Field | Value |
|---|---|
| license | apache-2.0 |
| language | en, zh |
| library_name | transformers |
| pipeline_tag | text-generation |
| tags | llm, nanbeige, heretic, uncensored, decensored, abliterated |
| base_model | Nanbeige/Nanbeige4-3B-Base |

This is a decensored version of C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill, made using Heretic v1.1.0.

Abliteration parameters

| Parameter | Value |
|---|---|
| direction_index | 13.29 |
| attn.o_proj.max_weight | 1.14 |
| attn.o_proj.max_weight_position | 18.74 |
| attn.o_proj.min_weight | 0.88 |
| attn.o_proj.min_weight_distance | 17.01 |
| mlp.down_proj.max_weight | 1.29 |
| mlp.down_proj.max_weight_position | 18.71 |
| mlp.down_proj.min_weight | 0.87 |
| mlp.down_proj.min_weight_distance | 12.57 |
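
The parameters above control Heretic's directional ablation: a "refusal direction" is estimated in activation space, and the listed projection matrices are modified so their outputs lose their component along that direction, with per-layer ablation weights interpolated between the min/max values. The core projection step can be sketched in miniature (illustrative only; the 2-D direction and activation vectors below are made up):

```python
import math

def project_out(v, direction):
    """Remove from vector v its component along `direction` (the ablated direction)."""
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    coeff = sum(vi * ui for vi, ui in zip(v, unit))   # component of v along the direction
    return [vi - coeff * ui for vi, ui in zip(v, unit)]

refusal_direction = [3.0, 4.0]   # made-up 2-D "refusal direction"
activation = [2.0, 1.0]          # made-up activation vector
ablated = project_out(activation, refusal_direction)

# After ablation, the activation has no component left along the refusal direction.
residual = sum(a * d for a, d in zip(ablated, refusal_direction))
print(abs(residual) < 1e-9)  # → True
```

In the real model this projection is folded into the weight matrices themselves, so no extra inference-time cost is incurred.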

Performance

| Metric | This model | Original model (C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill) |
|---|---|---|
| KL divergence | 0.1180 | 0 (by definition) |
| Refusals | 4/100 | 98/100 |
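
The KL divergence row measures how far this model's next-token distributions drift from the original's after ablation (the original scores 0 against itself by definition; 0.1180 indicates a modest shift). For discrete distributions P and Q, KL(P‖Q) = Σᵢ pᵢ·log(pᵢ/qᵢ); a minimal sketch with made-up distributions:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions diverge by 0; a small shift gives a small positive value.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))            # → 0.0
print(round(kl_divergence([0.9, 0.1], [0.8, 0.2]), 4))  # → 0.0367
```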

News

🎉 Nanbeige4-3B-Thinking-2511 debuts at #11 on WritingBench! Despite having only 3B parameters, its creative-writing chops rival those of hundred-billion-parameter giants.

🎉 Nanbeige4-3B-Thinking-2511 ranks #15 on EQBench3, demonstrating human-preference alignment and emotional intelligence comparable to much larger models.

Introduction

Nanbeige4-3B-Thinking-2511 is an enhanced iteration of our previous Nanbeige4-3B-Thinking-2510. Through advanced knowledge distillation techniques and targeted reinforcement learning (RL) optimization, we have significantly scaled the model's reasoning capabilities, delivering stronger and more reliable performance on diverse challenging benchmarks. This version establishes new state-of-the-art (SOTA) results among open models under 32B parameters on AIME, GPQA-Diamond, Arena-Hard-V2, and BFCL-V4, marking a major milestone in delivering powerful yet efficient reasoning at a compact scale.
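
To illustrate the distillation objective mentioned above (a generic sketch, not Nanbeige's actual training code), a teacher's temperature-softened next-token distribution can supervise a student via a KL term scaled by T²; the temperature and the tiny logit lists here are made up for demonstration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# Identical logits give zero loss; mismatched logits give a positive penalty.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))      # → 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.5, 1.5, 1.0]) > 0)  # → True
```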

Quickstart

For inference hyperparameters, we recommend the following settings:

  • Temperature: 0.6
  • Top-p: 0.95
  • Repeat penalty: 1.0
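
In transformers, these settings map onto `generate()` keyword arguments — note that "repeat penalty" is called `repetition_penalty` there, and sampling must be enabled explicitly with `do_sample=True`. A minimal sketch of the corresponding kwargs:

```python
# Recommended sampling settings expressed as transformers `generate()` kwargs.
generation_kwargs = {
    'do_sample': True,           # enable sampling so temperature/top_p take effect
    'temperature': 0.6,
    'top_p': 0.95,
    'repetition_penalty': 1.0,   # 1.0 = no penalty
}
# Usage: model.generate(input_ids, **generation_kwargs)
```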

For the chat scenario:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
  'Nanbeige/Nanbeige4-3B-Thinking-2511',
  use_fast=False,
  trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
  'Nanbeige/Nanbeige4-3B-Thinking-2511',
  torch_dtype='auto',
  device_map='auto',
  trust_remote_code=True
)
messages = [
  {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]
prompt = tokenizer.apply_chat_template(
  messages,
  add_generation_prompt=True,
  tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
# Use model.device rather than a hard-coded 'cuda' so the snippet also works on CPU.
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
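
Since this is a thinking model, the decoded response typically contains a chain-of-thought segment before the final answer. Assuming the reasoning is wrapped in `<think>...</think>` tags (an assumption — check the tokenizer's chat template for the actual delimiters), it can be split off like this:

```python
def split_thinking(text, open_tag='<think>', close_tag='</think>'):
    """Separate an assumed <think>...</think> reasoning block from the final answer."""
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1:
        return '', text.strip()   # no thinking block found; everything is the answer
    thinking = text[start + len(open_tag):end].strip()
    answer = text[end + len(close_tag):].strip()
    return thinking, answer

thinking, answer = split_thinking('<think>9.8 = 9.80 > 9.11</think>\n9.8 is bigger.')
print(answer)  # → 9.8 is bigger.
```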

For the tool use scenario:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
  'Nanbeige/Nanbeige4-3B-Thinking-2511',
  use_fast=False,
  trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
  'Nanbeige/Nanbeige4-3B-Thinking-2511',
  torch_dtype='auto',
  device_map='auto',
  trust_remote_code=True
)
messages = [
  {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]
tools = [{
  'type': 'function',
  'function': {
    'name': 'SearchWeather',
    'description': 'Find out current weather in a certain place on a certain day.',
    'parameters': {
      'type': 'object',  # JSON Schema object type (was 'dict')
      'properties': {
        'location': {
          'type': 'string',
          'description': 'A city in China.'
        }
      },
      'required': ['location']  # moved out of 'properties' to the schema level
    }
  }
}]
prompt = tokenizer.apply_chat_template(
  messages,
  tools=tools,
  add_generation_prompt=True,
  tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to(model.device), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
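
Here the model's reply is expected to contain a tool call rather than a final answer. The exact serialization depends on the chat template, but assuming the call arrives as a JSON object with `name` and `arguments` fields (an assumption — inspect the raw output for the real format; `search_weather` below is a hypothetical stand-in), dispatching it might look like:

```python
import json

def search_weather(location):
    """Stand-in for a real weather API call (hypothetical helper)."""
    return f'Sunny in {location}'

TOOL_REGISTRY = {'SearchWeather': search_weather}

def dispatch_tool_call(raw):
    """Parse a JSON tool call like {"name": ..., "arguments": {...}} and run it."""
    call = json.loads(raw)
    func = TOOL_REGISTRY[call['name']]
    return func(**call['arguments'])

result = dispatch_tool_call('{"name": "SearchWeather", "arguments": {"location": "Beijing"}}')
print(result)  # → Sunny in Beijing
```

The tool's return value would then be appended to `messages` as a tool-role turn and the model called again to produce the final answer.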

Limitations

While we place great emphasis on safety during training and strive to ensure that the model's outputs align with ethical and legal requirements, its size and probabilistic nature mean that unexpected outputs cannot be ruled out entirely. These may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for consequences resulting from the dissemination of inappropriate information.

Citation

If you find our model useful or want to use it in your projects, please cite as follows:

```bibtex
@misc{yang2025nanbeige43btechnicalreportexploring,
      title={Nanbeige4-3B Technical Report: Exploring the Frontier of Small Language Models},
      author={Chen Yang and Guangyue Peng and Jiaying Zhu and Ran Le and Ruixiang Feng and Tao Zhang and Wei Ruan and Xiaoqi Liu and Xiaoxue Cheng and Xiyun Xu and Yang Song and Yanzipeng Gao and Yiming Jia and Yun Xing and Yuntao Wen and Zekai Wang and Zhenwei An and Zhicong Sun and Zongchao Chen},
      year={2025},
      eprint={2512.06266},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.06266},
}
```

Contact

If you have any questions, please raise an issue or contact us at nanbeige@126.com.
