---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
library_name: transformers
---
# DeepSeek-R1-Distill-Qwen-32B Uncensored
An abliterated (uncensored) version of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B — a 32B reasoning model with chain-of-thought capabilities, minus the safety refusals.
This combines DeepSeek-R1's strong reasoning with unrestricted output, making it useful for research requiring step-by-step analysis without artificial limitations.
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Walk me through how RSA encryption works, step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
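DeepSeek-R1 distills emit their chain-of-thought inside `<think>...</think>` tags before the final answer. A small helper for separating the two from the decoded output (a sketch, not part of the model's own tooling; `split_reasoning` is a hypothetical name):

```python
import re

def split_reasoning(text: str):
    """Split an R1-style response into chain-of-thought and final answer.

    DeepSeek-R1 models place reasoning inside <think>...</think> tags;
    everything after the closing tag is the final answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()  # no reasoning block found
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>RSA relies on the difficulty of factoring.</think>RSA works as follows..."
)
```

This lets you log or display the reasoning trace separately from the answer shown to the user.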
## Model Details
- Base model: DeepSeek-R1-Distill-Qwen-32B (32 billion parameters)
- Technique: Abliteration — surgical removal of the refusal direction
- Architecture: Qwen2 (decoder-only transformer)
- Context length: 32,768 tokens
- Key strength: Chain-of-thought reasoning without safety guardrails
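Abliteration is typically done by estimating a "refusal direction" in the residual stream (e.g. the difference of mean activations on refusal-triggering vs. benign prompts) and orthogonalizing the weight matrices that write into the residual stream against it. A minimal sketch of that weight edit, assuming a precomputed `refusal_dir` (both names here are illustrative, not from this repo's code):

```python
import torch

def orthogonalize(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of each output column of W along refusal_dir.

    W: (d_model, d_in) weight matrix writing into the residual stream.
    refusal_dir: (d_model,) vector estimated from activation differences
    on refusal-triggering vs. benign prompts (assumed precomputed).
    """
    refusal_dir = refusal_dir / refusal_dir.norm()  # ensure unit norm
    # Subtract the outer-product projection: W' = W - r r^T W
    return W - torch.outer(refusal_dir, refusal_dir @ W)

# Toy check: after the edit, W's output has no component along refusal_dir.
W = torch.randn(8, 8)
r = torch.randn(8)
W_edited = orthogonalize(W, r)
print(torch.allclose((r / r.norm()) @ W_edited, torch.zeros(8), atol=1e-5))  # → True
```

Applied to every attention-output and MLP-output projection, this prevents the model from ever writing along the refusal direction, while leaving the rest of its behavior largely intact.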
## Why This Model?
DeepSeek-R1 is one of the strongest open-source reasoning models. The distilled 32B version retains impressive chain-of-thought capabilities at a manageable size. Abliteration allows researchers to study the full range of the model's reasoning abilities without refusal interventions.
## Intended Use
Research on reasoning, alignment studies, education, and creative applications requiring step-by-step analysis.
## Other Models by richardyoung
- Abliterated/Uncensored models: Qwen2.5-7B | Qwen3-14B | DeepSeek-R1-32B | Qwen3-8B
- MLX quantizations (Apple Silicon): Kimi-K2 series | olmOCR MLX
- OCR & Vision: olmOCR GGUF
- Healthcare/Medical: Synthea 575K patients dataset | CardioEmbed
- Research: LLM Instruction-Following Evaluation (arxiv:2510.18892)