From 2ff8a4047d88f9b4a2757636c943f4f25175ff9f Mon Sep 17 00:00:00 2001
From: "first_name.last_name"
Date: Thu, 25 Apr 2024 21:41:12 +0000
Subject: [PATCH] add processing notice

---
 README.md | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..10fc778
--- /dev/null
+++ b/README.md
@@ -0,0 +1,57 @@
+---
+inference: false
+---
+# Orenguteng/Lexi-Llama-3-8B-Uncensored AWQ
+
+**PROCESSING ... ETA 30 mins**
+
+- Model creator: [Orenguteng](https://huggingface.co/Orenguteng)
+- Original model: [Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)
+
+### About AWQ
+
+AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
+
+AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
+
+It is supported by:
+
+- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the vLLM sketch below)
+- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers (see the Transformers sketch below)
+- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
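+
+#### Example: Transformers
+
+A minimal sketch of the Transformers route, assuming Transformers 4.35.0+ and the `autoawq` package are installed. The repo id below is a hypothetical placeholder (this card is still processing); substitute the final AWQ repo name once it is published.
+
+```python
+# Minimal sketch: load an AWQ checkpoint with Transformers >= 4.35.0.
+# The AWQ config is read from the checkpoint itself; autoawq must be installed.
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "your-org/Lexi-Llama-3-8B-Uncensored-AWQ"  # placeholder repo id
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
+outputs = model.generate(**inputs, max_new_tokens=64)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```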
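+
+#### Example: vLLM
+
+A similar sketch for offline generation with vLLM (version 0.2.2 or later); the repo id is again a placeholder.
+
+```python
+# Minimal sketch: offline AWQ inference with vLLM 0.2.2+.
+from vllm import LLM, SamplingParams
+
+llm = LLM(model="your-org/Lexi-Llama-3-8B-Uncensored-AWQ", quantization="awq")  # placeholder repo id
+sampling_params = SamplingParams(temperature=0.7, max_tokens=64)
+
+outputs = llm.generate(["Hello, my name is"], sampling_params)
+print(outputs[0].outputs[0].text)
+```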