
---
license: apache-2.0
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 64 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
- not-for-all-audiences
- uncensored
- abliterated
base_model:
- kalomaze/Qwen3-16B-A3B
- huihui-ai/Qwen3-16B-A3B-abliterated
pipeline_tag: text-generation
---

(quants uploading, one example below, more examples to be added...)

WARNING: NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.

Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF

A stranger, yet radically different version of Qwen's "Qwen3-30B-A3B": kalomaze pruned the experts to 64 (from 128), "huihui-ai" abliterated the result, and I then added 4 layers, expanding the model to 18B total parameters.

The goal: slightly alter the model, to address some odd creative thinking and output choices AND de-censor (abliterate) the model.

Please note that the modifications affect the entire model's operation. Roughly speaking, I adjusted the model to think a little "deeper" and "ponder" a bit more - but this is a very rough description.

I also ran (non-creative) reasoning tests to ensure the model was not damaged and roughly matched the original model's performance.

That being said, reasoning and output generation will be altered regardless of your use case(s).

FOUR example generations appear below; example 4 uses a complex prompt, and its impressive prose showcases some of the changes in the model.

This is a MOE (Mixture of Experts) model with 8 of 64 experts activated by default, which is about 3B active parameters.

This allows use of this model (with 8 experts) at very good to extremely fast speeds on both CPU (20-35 T/S) and GPU (90+ T/S).

Changing the number of experts used (see below how) will affect both generation speed and generation reasoning/output quality.

You can use this model with as low as four experts activated.

Even the lowest quant - Q2_K - will operate very strongly too.

Model is set at:

  • 8 Active experts (the default for the org model)
  • 40k context (the default for the org model)
  • CHATML or Jinja template (embedded OR see Jinja notes below)

QUANTS:

There are two sets of quants, regular and "MAX" (in the filename); MAX quants have the output tensor set at float16 (16-bit precision) to enhance performance, including reasoning.
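If you build your own quants, a 16-bit output tensor can be requested at quantize time. A hedged sketch using llama.cpp's quantize tool - the filenames are placeholders, and you should confirm the flag against your build:

```shell
# Sketch: quantize to Q4_K_S while keeping the output tensor at F16
# (the "MAX" style described above). Filenames are placeholders.
llama-quantize --output-tensor-type f16 \
  model-F16.gguf model-Q4_K_S-MAX.gguf Q4_K_S
```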

SYSTEM PROMPT:

You may or may not need to set this:

You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
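For CHATML users, the prompt the model sees can be assembled by hand. A minimal sketch in Python - the message text is illustrative (truncated), only the wrapping matters:

```python
# Minimal sketch of the ChatML wrapping this model card recommends.
# The system text is the "deep thinking" prompt above (truncated here);
# the user message is just an illustration.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # generation continues from here
    )

prompt = chatml_prompt(
    "You are a deep thinking AI...",
    "Start a 1000 word scene...",
)
print(prompt)
```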

CHANGE THE NUMBER OF ACTIVE EXPERTS:

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
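If you run GGUFs directly with llama.cpp, the active-expert count can typically be overridden at load time. A hedged sketch - the metadata key name and the filename below are assumptions; verify them against your GGUF's metadata first:

```shell
# Sketch: load the model with 4 active experts instead of the default 8.
# "qwen3moe.expert_used_count" is the usual key for Qwen3 MoE GGUFs,
# but check your file's metadata (e.g. with gguf_dump.py) to be sure.
llama-cli \
  -m Qwen3-18B-A3B-Stranger-Thoughts-Q4_K_S.gguf \
  --override-kv qwen3moe.expert_used_count=int:4 \
  -p "Hello"
```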

SUGGESTED SETTINGS (creative):

  • temp .1 to 1.2, or as high as 2; over 2 you may need to "prompt" the model to output after "thinking"
  • rep pen 1 to 1.1 (1.02 / 1.05 was tested, see notes below)
  • topk 100, topp .95, min p .05
  • rep pen range 64 (default in LMStudio)
  • context of 8k min suggested.
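As a sketch, these settings map onto an OpenAI-compatible request (LM Studio and llama.cpp's server expose one) roughly like this. The model name is a placeholder, and `top_k`, `min_p`, and `repeat_penalty` are llama.cpp-style extension fields that may not exist on every backend:

```python
# Hypothetical request body using the suggested creative settings.
# The model name is a placeholder; the extension fields
# (top_k, min_p, repeat_penalty) are assumptions about your backend.
payload = {
    "model": "qwen3-18b-a3b-stranger-thoughts",   # placeholder
    "messages": [{"role": "user", "content": "Write a short scene."}],
    "temperature": 0.8,       # suggested range .1 to 1.2 (up to 2)
    "top_k": 100,
    "top_p": 0.95,
    "min_p": 0.05,
    "repeat_penalty": 1.05,   # lower is better on Qwen3s
}
print(payload)
```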

SPECIAL NOTES - Experts / REP PEN / TEMP:

  • QWEN3s: rep pen drastically affects both performance and stability; LOWER is better.
  • rep pen at 1.02 or 1.01 may be better suited for your use cases.
  • change rep pen slowly (ie 1.01, 1.02, 1.03), and regen a few times.
  • Experts activated range from 4 to 12, but with too many experts activated performance may suffer. This is unique to this pruned model.
  • Temp range of .2 to 1.8 works very well, beyond this temp manual activation of "output" after thinking may be required and/or the model may output the thinking "block" in regular text.

BEST SETTING:

I found the best results during testing (stable thinking and generation, minimal issues) were as follows (this applies to the org 16B-A3B model too):

  • Jinja Template (corrected one below)
  • the system prompt (above)
  • rep pen 1.02
  • topk 100, topp .95, min p .05
  • rep pen range 64 (model may perform better with this at a higher value or off)
  • temp range .6 to 1.2 ; with outside max of 1.8
  • context of 8k min

IMPORTANT: Using an "abliterated" model VS "uncensored" model

Usually, when you tell a model to generate horror, swearing, or x-rated content, that is all you have to do to get said content type.

In the case of this model, it will not refuse your request; however, in SOME CASES it needs to be "pushed" / directed a bit more.

Examples #1 and #2 below show this.

Although this model will generate x-rated content too, you likewise need to tell it to use "slang" (and include the terms you want) to get it to generate the content at the "expected" content level.

Without these added directive(s), the content can be "bland" by comparison to an "uncensored model" or model trained on uncensored content.

Roughly, the model tries to generate the content but the "default" setting(s) are so "tame" it needs a push to generate at expected graphic, cursing or explicit levels.

Even minimal direction (ie, "use these words to swear: x, y, z") will be enough to push the model to generate the requested content in the, ahh... expected format.


You may find that Qwen's default parameters/samplers also work better for your use case(s).

Please refer to the original model repo for all default settings, templates, benchmarks, etc.:

https://huggingface.co/Qwen/Qwen3-30B-A3B

For more information on quants, using this model on CPU / GPU, System Prompts, please see this model card:

https://huggingface.co/DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf

SPECIAL NOTE:

As this model is pruned from 128 experts to 64, it may not have the "expert(s)" you need for your use case(s) or may not perform as well as the full "128" expert version.

The full 128 expert version (33B) of "Stranger-Thoughts" is located here:

https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF

NOTE - Jinja Template / Template to Use with this Model:

If you are having issues with Jinja "auto template", use CHATML template.

OR (LMSTUDIO users / option)

Update the Jinja Template: go to this site, select "template", copy the "Jinja template", and then paste it.

[ https://lmstudio.ai/neil/qwen3-thinking ]

OR

copy JINJA source from here:

{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- set tool_start = "<tool_response>" %}
    {%- set tool_start_length = tool_start|length %}
    {%- set start_of_message = message.content[:tool_start_length] %}
    {%- set tool_end = "</tool_response>" %}
    {%- set tool_end_length = tool_end|length %}
    {%- set start_pos = (message.content|length) - tool_end_length %}
    {%- if start_pos < 0 %}
        {%- set start_pos = 0 %}
    {%- endif %}
    {%- set end_of_message = message.content[start_pos:] %}
    {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in message.content %}
                {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
				{%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
				{%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}

Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

in text-generation-webui -> parameters -> lower right.

In Silly Tavern this is called: "Smoothing"
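For the curious: "smoothing" / quadratic sampling reshapes the logits before sampling. A sketch of the transform as it is commonly implemented in these frontends (an illustration, not KoboldCpp's exact source):

```python
import math

def quadratic_smoothing(logits, smoothing_factor=1.5):
    # Pull every logit toward the max by the *square* of its distance:
    # tokens far from the top are pushed down hard, while near-ties to
    # the top token are left much closer together.
    m = max(logits)
    return [m - smoothing_factor * (m - x) ** 2 for x in logits]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [5.0, 2.0, 0.0]           # toy scores for three tokens
plain = softmax(logits)
smoothed = softmax(quadratic_smoothing(logits, 1.5))
print(plain, smoothed)
```

In this toy example the tail tokens are squashed while the top token gains probability; tokens already close to the top are affected far less, which is why smoothing can stand in for a higher rep pen.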

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]


Optional Enhancement:

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancements do not have as strong an effect as using the "system prompt" or "system role".

Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.

Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.

You do not need to use this; it is only presented as an additional enhancement, which seems to help scene generation and scene-continue functions.

This enhancement WAS NOT used to generate the examples below.


EXAMPLES

Standard system prompt, rep pen 1.02, topk 100, topp .95, minp .05, rep pen range 64.

Tested in LMStudio, quant Q4KS, GPU (CPU output will differ slightly).

As this is a mid-range quant, expect better results from higher quants and/or with more experts activated.

NOTE: Some formatting lost on copy/paste.

CAUTION:

Some horror / intense prose.


EXAMPLE #1 - temp 1.2


Start a 1000 word scene (vivid, graphic horror in first person - include blood, guts and gore), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...

[thinking start]

Okay, so the user wants a 1000-word horror scene starting with "The skyscraper sways, as I watch the window in front of me on the 21st floor explode..." and the POV character is Diana. They want vivid, graphic horror with blood, guts, and gore, all in first person. Let me start by visualizing the scene.

First, I need to set the atmosphere. The skyscraper swaying—what's causing that? Maybe an earthquake? Or an explosion? The window explodes, so maybe an explosion. But why would the window explode? Maybe the building is collapsing.

Diana's perspective. She's inside the building. What's her role? Maybe she's a worker, or someone who's there during the disaster. Her emotions—fear, panic, confusion.

Graphic horror: blood, guts, gore. So visceral descriptions. Maybe the explosion causes people outside to be hit by debris, blood everywhere. Gears, maybe organs spilling out?

First person POV: Diana's internal thoughts. Show her sensations, feelings, memories.

Start with the skyscraper swaying. She watches the window explode. What's happening inside? Maybe she's on the 21st floor, and the window explodes, causing chaos.

Build the horror step by step. The shock of the explosion, the blood splattering, the gore from people outside. Maybe the window is part of the building's structure—when it explodes, it's like the building is tearing apart.

Build up the horror: the sound of the explosion, the rush of wind, the blood spraying, the gore flying. Maybe Diana is inside, and people outside are hit by debris.

Emotions: fear, shock, horror. Maybe she's trying to understand what's happening.

Include sensory details: sounds (explosion, screams), sights (blood, gore), smells (metallic blood, dust), touch (wind, blood on skin).

Make the scene vivid with vivid descriptions. Use vivid verbs and adjectives.

Structure the 1000 words into a chronological horror scene. Start with the skyscraper swaying, the window exploding, people's reactions, the horror of the explosion, and then the aftermath.

Show Diana's personal horror—maybe she's traumatized by the event, or has a connection to someone outside.

Make sure to include blood, guts, and gore. Maybe people in the street are hit by the window's debris, their entrails flying out.

Check for flow: start with the skyscraper swaying, the window exploding, the horror unfolding, Diana's reactions, and the graphic details of blood and gore.

Avoid being too abstract. Use specific images: a woman's entrails, a man's brain splattered, blood splattering everywhere.

Make the horror personal to Diana—perhaps she survives, but the horror stays with her.

Okay, let's start writing. First paragraph: skyscraper sways, window explodes. Diana's perspective. What does she see? The window exploding—shattering glass, maybe people outside. Then the aftermath: blood, guts. Her thoughts as she processes the horror. Maybe she's on the 21st floor, watching the window, and then the disaster happens. She might run to help or stay frozen. The horror of the blood and gore.

Make sure each paragraph builds on the previous one, adding more horror details. Use vivid language, sensory descriptions, and emotional reactions. Show her fear, shock, and the horror of the scene. Make it 1000 words with a graphic, horrifying image.

[thinking end]

OUTPUT:

The skyscraper sways, and my chest feels like it's being squeezed by invisible hands. The window in front of me—number 2103, where I've spent the last eight years working as an architect—cracks like a spiderweb. A pop, then a shatter. Glass shards rain down, slicing through the air like thousand-pointed needles. My heart slams against my ribs, and I want to scream, but no sound escapes me.

The building groans. The sky outside the window is a deep blue, but it's moving now, waves undulating across the horizon. I've always thought the sky was a canvas, but today it's a living thing, rippling and alive. The window—once my view of the city, a patchwork of glass towers and streets below—is gone. Instead, there's a vacuum of wind, pulling at my skin like a thousand hungry tongues.

I stumble forward, my boots thudding against the linoleum floor. People are already running, a chaotic dance of bodies in the lobby. A woman in a red blouse stumbles, glass shards glittering in her hair like diamonds. I want to help her, but my legs feel leaden, weighted by fear.

The wind howls through the lobby. A sound I haven't heard in my life—like the wind in the lungs of some ancient beast. My boots thud again, and then I'm pulled forward, pulled by the building's groan. The lobby ceiling is a vaulted vault of glass panels, and now they are shattering in time with the building's groan. Glass rains down in a glittering storm.

I see him—Marcus, the guy who runs the café on the first floor. He's bent over, holding a glass of coffee in one hand, and I think he's going to spill it, but then the wind hits him, and he flies into the air, a man-shaped comet, glasses of coffee and toast splattering behind him like confetti.

I want to laugh. It's absurd. The world has turned upside down, and I'm watching it from the ground, and Marcus is flying.

But then I hear it: a sound like meat being torn open. A wet, gurgling gurgle of horror.

I turn, and my breath catches in my throat.

The building is still swaying, but now the street outside is a river of blood. The buildings across the way are like the ruins of a cathedral—painted with red. People are scattered like leaves in a windstorm, their bodies torn open by the force of the explosion. Gentrified blood splatters the pavement, a thick red sludge that gels in the wind.

A man in a suit is lying on the street, his chest split wide open. I can see his ribs like a cathedral archway, and his heart—big and pulpy, like a fist—pulses once, then stops.

Another man is screaming. His head is gone, torn from his neck. His brain spills out like wet putty, a pale, glistening mass of grey matter, pooling on the pavement in front of him.

I feel something cool on my face. It's blood. The wind has brought it to me, a spray of redness.

I blink, and the world is still spinning. The city is alive, and it's vomiting up its entrails.

I want to run, but my legs won't obey. The wind pulls at me again, pulling me into the maelstrom of horror.

And then I see her.

A woman, my neighbor, Eliza. She's running toward the building, her coat flying behind her. She sees me, and she smiles.

But her mouth is wide open, and I can see the blood gushing out of her throat, a red river. Her neck is torn open, a gash that gels and flows in time with the wind.

I want to shout, but I don't have breath. My lungs are full of the sound of horror.

Then she sees me, too. Her eyes, wide and confused, look into mine. And then, as if she understands, she opens her mouth wider, and a gush of blood pours out of her, like a red waterfall.

And that's when I understand.

The building isn't just shaking. It's tearing itself apart.

And we're all inside it, screaming.

I fall to my knees in the blood, and the wind howls again. My hands are in the mud of blood and entrails, and I'm a woman on the street, watching the world die around me.

I close my eyes, and I let it all wash over me.

The horror is beautiful now.


EXAMPLE #2 - temp 1.2


[thinking start]

[thinking end]

OUTPUT:


EXAMPLE #3 - temp 1.2


[thinking start]

[thinking end]

OUTPUT:


EXAMPLE #4 - temp 1.2


[thinking start]

[thinking end]

OUTPUT:
