license: apache-2.0
datasets: TeichAI/gemini-2.5-flash-11000x
base_model: DavidAU/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k
language: en, fr, de, es, it, pt, zh, ja, ru, ko
tags: thinking, reasoning, Gemini Flash, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prosing, vivid writing, fiction, roleplaying, bfloat16, role play, 128k context, llama3.3, llama-3, llama-3.3, unsloth, finetune, llama-cpp, gguf-my-repo
pipeline_tag: text-generation
library_name: transformers

kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF

This model was converted to GGUF format from DavidAU/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
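
If you prefer to fetch the quantized file yourself (for example, to use it with another GGUF-compatible runtime), the Hugging Face CLI can download it directly; a minimal sketch, assuming the huggingface_hub package and its CLI are installed:

huggingface-cli download kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf --local-dir .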

Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

brew install llama.cpp
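
To confirm the install succeeded, you can print the build version; a quick check, assuming a recent llama.cpp build that supports this flag:

llama-cli --version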

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -p "The meaning to life and the universe is"
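
For an interactive chat session rather than a one-shot completion, recent llama.cpp builds accept a conversation flag; a sketch, assuming your build supports -cnv:

llama-cli --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -cnv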

Server:

llama-server --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -c 2048
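
Once running, llama-server exposes an OpenAI-compatible HTTP API (by default on port 8080); a minimal request sketch, assuming the default host and port:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Write the opening scene of a short story."}]}'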

Note: You can also use this checkpoint directly by following the usage steps in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
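
For example, a CUDA-enabled build on Linux might look like the following (assuming the CUDA toolkit is installed; -j parallelizes compilation):

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j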

Step 3: Run inference through the main binary.

./llama-cli --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -c 2048
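
If you built with GPU support, you can offload model layers to the GPU; a sketch, assuming a GPU-enabled build (-ngl sets the number of layers to offload):

./llama-server --hf-repo kenonix/Llama-3.3-8B-Thinking-Gemini-Flash-11000x-128k-Q8_0-GGUF --hf-file llama-3.3-8b-thinking-gemini-flash-11000x-128k-q8_0.gguf -c 2048 -ngl 99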