Model: SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True Source: Original Platform
| license | language | library_name | tags | base_model | inference |
|---|---|---|---|---|---|
| llama2 | | transformers | | meta-llama/LlamaGuard-7b | false |
Quantized version of meta-llama/LlamaGuard-7b
Model Description
The model meta-llama/LlamaGuard-7b was quantized to 4-bit with group_size 128 and act-order=True using the auto-gptq integration in transformers (https://huggingface.co/blog/gptq-integration).
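A minimal sketch of what such a quantization run looks like with the transformers GPTQ integration described in the linked blog post. The exact calibration dataset and tokenizer handling used for this model are not stated in the card, so the `dataset="c4"` choice below is an assumption for illustration only:

```python
# Sketch: 4-bit GPTQ quantization via transformers' auto-gptq integration.
# Assumes a GPU with enough memory and access to the gated base model;
# the calibration dataset ("c4") is an illustrative assumption, not
# necessarily what was used for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Parameters matching the card: 4-bit, group_size 128, act-order (desc_act) True.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    dataset="c4",  # calibration data -- assumption
    tokenizer=tokenizer,
)

# Loading with a quantization_config triggers the GPTQ calibration pass.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)
quantized_model.save_pretrained("LlamaGuard-7b-GPTQ-4bit-128g-actorder_True")
```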
Evaluation
To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label using the ~5k-sample test set of lmsys/toxic-chat.
📊 Full Precision Model:
Average Precision Score: 0.3625
📊 4-bit Quantized Model:
Average Precision Score: 0.3450
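The metric reported above can be reproduced with a few lines of code. As a sketch, here is a plain-Python average precision (area under the precision-recall curve, computed as the mean of precision at each positive), which agrees with `sklearn.metrics.average_precision_score` when there are no score ties; the label and score lists are placeholders for the toxic-chat predictions:

```python
def average_precision(y_true, y_score):
    """Average precision: mean of precision values at the rank of each
    true positive, with predictions sorted by descending score.
    Equivalent to sklearn's average_precision_score when scores have no ties."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    total_pos = sum(y_true)
    tp = 0
    ap = 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / total_pos


# Toy usage with placeholder labels/scores (not the toxic-chat data):
labels = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6]
print(average_precision(labels, scores))  # -> 0.8333...
```

For the comparison in this card, `y_true` would be the binary "toxicity" labels and `y_score` the model's toxic-class probability for each test sample, computed once for the full-precision model and once for the 4-bit model.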
Description
Model synced from source: SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True