---
library_name: gguf
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- quantized
- text-generation
base_model: ozashu/mistral-3b-sft-prompt-hack-merged
---

# mistral-3b-sft-prompt-hack-gguf

GGUF exports of the merged model `ozashu/mistral-3b-sft-prompt-hack-merged`, for use with llama.cpp and compatible runtimes.

Files:

- `model-f16.gguf`: full-precision (16-bit float) export
- `model-Q4_K_M.gguf`: 4-bit K-quant (medium), smaller and faster to run

## Run with llama.cpp

```bash
llama-cli -m model-Q4_K_M.gguf -p "こんにちは" -n 128
```
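
After downloading, you can sanity-check that a file really is GGUF: every GGUF file begins with the 4-byte ASCII magic `GGUF`, followed by a little-endian `uint32` format version. A minimal Python sketch (the file name below is just an example):

```python
import struct

def read_gguf_header(path):
    # GGUF files start with the 4-byte magic b"GGUF",
    # followed by a little-endian uint32 format version.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Example: read_gguf_header("model-Q4_K_M.gguf")
```

This only checks the header, not the tensor data, but it quickly catches truncated or mislabeled downloads.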