This is a float16 (half-precision) quantized version of Qwen/Qwen2.5-0.5B.
All model weights are converted from float32 to float16, reducing model size by ~50% while
maintaining near-identical text generation quality.
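The conversion itself is a simple dtype cast. The exact script used for this repo is not published, so the following is only an illustrative sketch of an equivalent conversion:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the original float32 checkpoint, then cast all parameters to float16.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",
    torch_dtype=torch.float32,
)
model = model.half()  # casts weights, biases, and embeddings to float16

# Save in the standard safetensors format.
model.save_pretrained("Qwen2.5-0.5B-fp16", safe_serialization=True)
```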
## Key Features

- **Half the size**: 942.4 MB (down from 1884.7 MB)
- **No GPU required**: runs on CPU and Apple Silicon Macs
- **Near-lossless**: float16 preserves most of the original precision
- **Zero training**: pure post-training quantization
- **HuggingFace native**: standard safetensors format, load with `AutoModelForCausalLM`
## Quantization Details

- **Target**: all model parameters (weights, biases, embeddings)
- **Original dtype**: `torch.float32` (32-bit, 4 bytes per weight)
- **Quantized dtype**: `torch.float16` (16-bit, 2 bytes per weight)
- **Compression ratio**: ~2x
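The ~2x figure follows directly from the dtype change (2 bytes per parameter instead of 4). A quick way to verify the size and dtype yourself (illustrative, assuming `model` is loaded as in the Usage section below):

```python
# Sum up the in-memory footprint of all parameters.
n_params = sum(p.numel() for p in model.parameters())
total_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(f"Parameters: {n_params / 1e6:.1f}M")
print(f"Size: {total_bytes / 1024**2:.1f} MB")                # expect ~942 MB
print(f"Dtypes: {set(p.dtype for p in model.parameters())}")  # expect {torch.float16}
```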
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Ringkvist/Two_and_a_half_Qwen2.5-MiniFP16")
model = AutoModelForCausalLM.from_pretrained(
    "Ringkvist/Two_and_a_half_Qwen2.5-MiniFP16",
    torch_dtype=torch.float16,
)

inputs = tokenizer("The future of AI is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,  # required for temperature to take effect
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
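Since no CUDA GPU is needed, Apple Silicon users can optionally move the model to the MPS backend. The device handling below is an addition for illustration, not part of the original snippet:

```python
import torch

# Prefer Apple's Metal backend when available, otherwise stay on CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
```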
## Limitations
- Slight numerical precision loss vs float32 (negligible for inference)
- Some operations may need float32 upcasting on certain hardware (see the sketch below)
- Not as aggressive as int8/int4 quantization, but much simpler and more portable
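If you do hit numerical issues (e.g., NaNs) on some hardware, one common workaround is to upcast just the sensitive modules back to float32. The sketch below assumes the normalization layers are the culprit; the name-based match is an assumption about the model's internals:

```python
import torch

# Illustrative workaround: keep weights in float16 but run normalization in float32.
# Matching on the class name is a heuristic; Qwen2 models use RMSNorm modules internally.
for name, module in model.named_modules():
    if "norm" in type(module).__name__.lower():
        module.float()
```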