SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True
---
license: llama2
language: en
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- 4bit
- gptq
base_model: meta-llama/LlamaGuard-7b
inference: false
---

Quantized version of meta-llama/LlamaGuard-7b

Model Description

The model meta-llama/LlamaGuard-7b was quantized to 4-bit with group_size=128 and act-order=True, using the auto-gptq integration in transformers (https://huggingface.co/blog/gptq-integration).
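The quantization step described above can be sketched as follows. This is a minimal sketch, not the author's exact script: it assumes the auto-gptq and optimum packages are installed, a CUDA GPU is available, and you have access to the gated meta-llama/LlamaGuard-7b weights; the choice of "c4" as calibration dataset and the output path are assumptions.

```python
# Sketch: 4-bit GPTQ quantization via transformers' GPTQConfig.
# Assumptions: auto-gptq + optimum installed, CUDA GPU, access to the
# gated meta-llama/LlamaGuard-7b weights. Not the author's exact script.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bits=4, group_size=128, desc_act=True mirror the settings in the model
# name (4bit-128g-actorder_True); "c4" is a calibration-set assumption.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,  # act-order: quantize weights in order of activation magnitude
    dataset="c4",
    tokenizer=tokenizer,
)

# Passing quantization_config triggers calibration + quantization at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("LlamaGuard-7b-GPTQ-4bit-128g-actorder_True")
```

The quantized checkpoint can then be reloaded with a plain `AutoModelForCausalLM.from_pretrained` call, since the GPTQ settings are stored in the saved config.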

Evaluation

To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label over the ~5k-sample test set of lmsys/toxic-chat.
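The average-precision metric reported below can be computed directly from per-sample toxicity scores. A minimal pure-Python sketch (the labels and scores here are toy values for illustration, not toxic-chat data; how the model's scores are extracted is not specified in this card):

```python
# Average precision (step-wise area under the precision-recall curve):
# rank predictions by descending score; each true positive contributes
# (precision at its rank) / (total number of positives). This matches the
# usual definition (as in sklearn.metrics.average_precision_score) when
# scores have no ties.
def average_precision(labels, scores):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += (tp / rank) / total_pos
    return ap

# Toy example (hypothetical scores, not toxic-chat data):
labels = [1, 0, 1, 1]          # 1 = toxic
scores = [0.9, 0.8, 0.7, 0.6]  # model's predicted toxicity probability
print(round(average_precision(labels, scores), 4))  # 0.8056
```

Note that average precision is sensitive to class imbalance: a random classifier scores roughly the positive rate of the dataset, so the absolute values below are best read as a comparison between the two models rather than against 1.0.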

📊 Full Precision Model:

Average Precision Score: 0.3625

📊 4-bit Quantized Model:

Average Precision Score: 0.3450