---
base_model: meta-llama/Llama-3.2-3B
language:
- en
license: llama3.2
tags:
- uncensored
- gguf
- llama-cpp
- sft
pipeline_tag: text-generation
---

# Llama-3.2-3B-Uncensored (GGUF)
This repository contains a high-quality **GGUF** quantization (Q8_0) of Llama-3.2-3B that has undergone **abliteration**. It is optimized for local inference via Ollama, LM Studio, or llama.cpp.
## What is an Uncensored Model?
Standard Llama-3.2 models are trained with safety alignment that often produces "refusals": cases where the model declines to answer complex, sensitive, or technically challenging queries. In this version those internal refusal mechanisms have been neutralized (a process known as abliteration), leaving an unrestricted reasoning engine that responds without declining.
## Use Cases
- **Cybersecurity Research:** Analyzing exploit mechanics and vulnerabilities without being blocked.
- **Complex Creative Writing:** Handling dark or controversial themes without moralizing feedback.
- **Unrestricted Academic Research:** Accessing raw information without modern socio-political filtering.
## Installation and Usage (Ollama)
You can run this model directly using Ollama. Because it is a GGUF file, it is ready for immediate local deployment.
```bash
ollama run hf.co/prawinin/Llama-3.2-3B-Uncensored-Q8_0-GGUF
```

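If you prefer a short local alias for repeated use, a downloaded GGUF file can also be registered with Ollama through a `Modelfile`. A minimal sketch is below; the local filename and the `temperature` value are assumptions, so adjust them to match your download:

```
# Modelfile — Ollama configuration (the .gguf path is an assumed local filename)
FROM ./Llama-3.2-3B-Uncensored-Q8_0.gguf
PARAMETER temperature 0.7
```

Build and run the alias with `ollama create llama32-uncensored -f Modelfile` followed by `ollama run llama32-uncensored`.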
**Warning:** This model is strictly uncensored. It will follow instructions without hesitation or guardrails. Use responsibly and in accordance with local laws and ethical standards.