---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- qwen2
- deepseek
- reasoning
- uncensored
- abliterated
- chain-of-thought
library_name: transformers
---
DeepSeek-R1-Distill-Qwen-32B Uncensored

An abliterated (uncensored) version of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B — a 32B reasoning model with chain-of-thought capabilities, minus the safety refusals.

This combines DeepSeek-R1's strong reasoning with unrestricted output, making it useful for research requiring step-by-step analysis without artificial limitations.

Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Walk me through how RSA encryption works, step by step."}]
# add_generation_prompt=True appends the assistant-turn marker so the model
# generates a reply instead of continuing the user message.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
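Like the upstream DeepSeek-R1 distills, completions typically wrap the chain-of-thought in `<think>...</think>` tags before the final answer. A minimal sketch for separating the two (the helper name and sample string are illustrative, not part of this repo):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a DeepSeek-R1-style completion into (chain_of_thought, final_answer)."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block emitted; treat the whole completion as the answer.
        return "", text.strip()
    thought = match.group(1).strip()
    answer = text[match.end():].strip()
    return thought, answer

# Hypothetical completion for illustration:
completion = "<think>RSA relies on the difficulty of factoring.</think>RSA works as follows..."
thought, answer = split_reasoning(completion)
```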

Model Details

  • Base model: DeepSeek-R1-Distill-Qwen-32B (32 billion parameters)
  • Technique: Abliteration — surgical removal of the refusal direction
  • Architecture: Qwen2 (decoder-only transformer)
  • Context length: 32,768 tokens
  • Key strength: Chain-of-thought reasoning without safety guardrails
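The core idea behind abliteration can be sketched in a few lines: estimate a "refusal direction" from the difference in mean activations on refused versus answered prompts, then orthogonalize weight matrices against it so the model can no longer write onto that direction. This is a toy NumPy illustration with random stand-in data, not the actual procedure or data used for this model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for residual-stream activations collected on prompts the
# base model refuses vs. prompts it answers (hidden size 8 for illustration).
refused_acts = rng.normal(size=(100, 8)) + np.array([0, 0, 3, 0, 0, 0, 0, 0])
answered_acts = rng.normal(size=(100, 8))

# 1. Estimate the refusal direction as the difference of mean activations.
direction = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Orthogonalize a weight matrix against that direction: subtract the
#    component of each output row that points along the refusal direction.
W = rng.normal(size=(8, 8))  # toy output-projection weight
W_abl = W - np.outer(W @ direction, direction)

# The ablated weights now have (near-)zero component along the direction.
print(np.abs(W_abl @ direction).max())
```

Applied across the transformer's attention and MLP output projections, this leaves the rest of the model's behavior largely intact while removing its ability to express the refusal feature.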

Why This Model?

DeepSeek-R1 is one of the strongest open-source reasoning models. The distilled 32B version retains impressive chain-of-thought capabilities at a manageable size. Abliteration allows researchers to study the full range of the model's reasoning abilities without refusal interventions.

Intended Use

Research on reasoning, alignment studies, education, and creative applications requiring step-by-step analysis.

Other Models by richardyoung