commit 85e13fd20fd8402b920ba67010e7fcc7757dfe9e
Author: ModelHub XC
Date:   Thu May 7 16:41:31 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..b48d876
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,36 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+phi-4-mini-reasoning-abliterated-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..73438cb
--- /dev/null
+++ b/README.md
@@ -0,0 +1,93 @@
+---
+language:
+- en
+library_name: transformers
+license: mit
+license_link: https://huggingface.co/huihui-ai/Phi-4-mini-reasoning-abliterated/resolve/main/LICENSE
+pipeline_tag: text-generation
+base_model: huihui-ai/Phi-4-mini-reasoning-abliterated
+tags:
+- nlp
+- math
+- code
+- chat
+- abliterated
+- uncensored
+- llama-cpp
+- gguf-my-repo
+widget:
+- messages:
+  - role: user
+    content: How to solve 3*x^2+4*x+5=1?
+extra_gated_prompt: '**Usage Warnings**
+
+
+  “**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering
+  has been significantly reduced, potentially generating sensitive, controversial,
+  or inappropriate content. Users should exercise caution and rigorously review generated
+  outputs.
+
+  “**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s
+  outputs may be inappropriate for public settings, underage users, or applications
+  requiring high security.
+
+  “**Legal and Ethical Responsibilities**”: Users must ensure their usage complies
+  with local laws and ethical standards. Generated content may carry legal or ethical
+  risks, and users are solely responsible for any consequences.
+
+  “**Research and Experimental Use**”: It is recommended to use this model for research,
+  testing, or controlled environments, avoiding direct use in production or public-facing
+  commercial applications.
+
+
+  “**Monitoring and Review Recommendations**”: Users are strongly advised to monitor
+  model outputs in real time and conduct manual reviews when necessary to prevent
+  the dissemination of inappropriate content.
+
+  “**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone
+  rigorous safety optimization. huihui.ai bears no responsibility for any consequences
+  arising from its use.'
+---
+
+# FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF
+This model was converted to GGUF format from [`huihui-ai/Phi-4-mini-reasoning-abliterated`](https://huggingface.co/huihui-ai/Phi-4-mini-reasoning-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/huihui-ai/Phi-4-mini-reasoning-abliterated) for more details on the model.
+
+## Use with llama.cpp
+Install llama.cpp through brew (works on macOS and Linux):
+
+```bash
+brew install llama.cpp
+```
+Invoke the llama.cpp server or the CLI.
+
+### CLI:
+```bash
+llama-cli --hf-repo FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF --hf-file phi-4-mini-reasoning-abliterated-q8_0.gguf -p "The meaning of life and the universe is"
+```
+
+### Server:
+```bash
+llama-server --hf-repo FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF --hf-file phi-4-mini-reasoning-abliterated-q8_0.gguf -c 2048
+```
+
+Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+Step 1: Clone llama.cpp from GitHub.
+```bash
+git clone https://github.com/ggerganov/llama.cpp
+```
+
+Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
+```bash
+cd llama.cpp && LLAMA_CURL=1 make
+```
+
+Step 3: Run inference through the main binary.
+```bash
+./llama-cli --hf-repo FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF --hf-file phi-4-mini-reasoning-abliterated-q8_0.gguf -p "The meaning of life and the universe is"
+```
+or
+```bash
+./llama-server --hf-repo FORNAX20/Phi-4-mini-reasoning-abliterated-Q8_0-GGUF --hf-file phi-4-mini-reasoning-abliterated-q8_0.gguf -c 2048
+```
diff --git a/phi-4-mini-reasoning-abliterated-q8_0.gguf b/phi-4-mini-reasoning-abliterated-q8_0.gguf
new file mode 100644
index 0000000..d778119
--- /dev/null
+++ b/phi-4-mini-reasoning-abliterated-q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3de0686c825109b8109e9ab80a25eb97f1990847a674172aaa565d3756b8282a
+size 4084611680
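
Once `llama-server` is running (either Server invocation in the README above), it exposes an OpenAI-compatible HTTP API, by default on `127.0.0.1:8080`. Below is a minimal sketch of a chat-completion request with `curl`, reusing the widget prompt from the model card; the host, port, and sampling parameters shown are assumptions for illustration, not part of the original commit.

```bash
# Send a chat-completion request to the llama-server started in the Server step above.
# Assumes the default bind address and port (127.0.0.1:8080); adjust if you passed --host/--port.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}
        ],
        "temperature": 0.6,
        "max_tokens": 512
      }'
```

Because the endpoint follows the OpenAI chat-completions schema, existing OpenAI client libraries can also be pointed at the same base URL.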