---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B
base_model_relation: quantized
tags:
- quantization
- float16
- half-precision
- pytorch
- edge-deployment
- qwen2
language:
- en
pipeline_tag: text-generation
---

# Two_and_a_half_Qwen2.5-MiniFP16

## Overview

This is a **float16 (half precision) quantized** version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). All model weights are converted from float32 to float16, reducing model size by ~50% while maintaining near-identical text generation quality.

## Key Features

- **Half the size**: 942.4 MB (down from 1884.7 MB); a quick arithmetic check follows this list
- **No GPU required**: Runs on CPU and Apple Silicon Macs
- **Near-lossless**: Float16 preserves most of the original precision
- **Zero training**: Pure post-training quantization
- **HuggingFace native**: Standard safetensors format, loads with `AutoModelForCausalLM`
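
The size figures line up with simple arithmetic: the base model has roughly 494M parameters, stored at 2 bytes each in float16 versus 4 bytes in float32. A one-line sanity check (the reported "MB" figures appear to be mebibytes):

```python
params = 494_000_000       # approximate parameter count of Qwen2.5-0.5B
print(params * 2 / 2**20)  # fp16: ~942.2 MiB, matching the reported 942.4 MB
print(params * 4 / 2**20)  # fp32: ~1884.5 MiB, matching the reported 1884.7 MB
```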

## Quantization Details

- **Method**: PyTorch `.half()` conversion (float32 -> float16)
- **Target**: All model parameters (weights, biases, embeddings)
- **Original dtype**: `torch.float32` (32-bit, 4 bytes per weight)
- **Quantized dtype**: `torch.float16` (16-bit, 2 bytes per weight)
- **Compression ratio**: ~2x
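
For reference, a conversion like this can be reproduced in a few lines of PyTorch. This is a minimal sketch, not the exact script used for this repo; the output path `qwen2.5-0.5b-fp16` is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the base model with its original float32 weights.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",
    torch_dtype=torch.float32,
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")

# Cast all parameters and buffers to float16
# (equivalent to model.to(torch.float16)).
model = model.half()

# Save in safetensors format; the directory name is illustrative.
model.save_pretrained("qwen2.5-0.5b-fp16", safe_serialization=True)
tokenizer.save_pretrained("qwen2.5-0.5b-fp16")
```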

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Ringkvist/Two_and_a_half_Qwen2.5-MiniFP16")
model = AutoModelForCausalLM.from_pretrained(
    "Ringkvist/Two_and_a_half_Qwen2.5-MiniFP16",
    torch_dtype=torch.float16,
)

inputs = tokenizer("The future of AI is", return_tensors="pt")
with torch.no_grad():
    # do_sample=True is required for temperature to take effect;
    # without it, generate() falls back to greedy decoding.
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

- Slight numerical precision loss vs float32 (negligible for inference)
- Some operations may need float32 upcasting on certain hardware (a sketch follows this list)
- Not as aggressive as int8/int4 quantization, but much simpler and more portable
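
On hardware without efficient float16 kernels (many x86 CPUs, for example), one workaround is to let `from_pretrained` upcast the checkpoint back to float32 at load time. A minimal sketch, trading ~2x memory for full-precision compute:

```python
from transformers import AutoModelForCausalLM
import torch

# The weights are stored in float16, but requesting float32 here
# upcasts them during loading, avoiding slow fp16 ops on CPU.
model = AutoModelForCausalLM.from_pretrained(
    "Ringkvist/Two_and_a_half_Qwen2.5-MiniFP16",
    torch_dtype=torch.float32,
)
```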

## Base Model

- **Model**: [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B)
- **Parameters**: ~494M
- **Architecture**: Qwen2 (decoder-only transformer)