---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- flashnorm
- transformer-tricks
- efficient-inference
- weightless-rmsnorm
---

# Llama-3.1-8B-FlashNorm

FlashNorm-prepared checkpoint of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is mathematically equivalent to the source model and was presented in the paper [FlashNorm: Fast Normalization for Transformers](https://huggingface.co/papers/2407.09577).

The per-channel RMSNorm weight tensors (`input_layernorm.weight`, `post_attention_layernorm.weight`, `model.norm.weight`) are folded into the linear layers that follow them and removed from the state dict entirely.
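
To confirm what was removed, one can list the checkpoint's tensor names. A minimal sketch, assuming a locally downloaded sharded safetensors checkpoint with the standard `model.safetensors.index.json` (the local path is illustrative):

```python
import json

# Load the safetensors shard index and collect all tensor names.
with open('Llama-3.1-8B-FlashNorm/model.safetensors.index.json') as f:
    keys = json.load(f)['weight_map'].keys()

# A FlashNorm checkpoint should contain none of the RMSNorm gain tensors.
norm_keys = [k for k in keys if k.endswith((
    'input_layernorm.weight',
    'post_attention_layernorm.weight',
)) or k == 'model.norm.weight']
assert not norm_keys, f'unexpected norm tensors: {norm_keys}'
```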

> **Framework support note.** Stock vLLM currently does not load this checkpoint because the norm weight tensors are absent; the upstream patch to accept the missing tensors is tracked at: **TBD (vLLM issue link)**. Until that patch lands, use Hugging Face Transformers, which loads the checkpoint with a warning that the norm weights were not initialized and defaults them to ones, the correct behavior for FlashNorm.

## What FlashNorm does

FlashNorm is an exact reformulation of the `RMSNorm -> Linear` pattern:

- The per-channel normalization weight `g` is folded into the following linear layer, `W_star = W @ diag(g)`, computed once at checkpoint conversion (see the sketch after this list).
- After folding, the RMSNorm layer has no learnable per-channel scale; at runtime it simply divides by `rms(x)`.
- The resulting model computes the same output as the original, by Proposition 1 of the FlashNorm paper.
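
A minimal PyTorch sketch of the fold together with an equivalence check; the function name and shapes are illustrative (the actual conversion is handled by `transformer_tricks`, see Usage below):

```python
import torch

def fold_rmsnorm_gain(W: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """Fold the RMSNorm gain g into the linear weight W.

    W @ (g * x_hat) == (W @ torch.diag(g)) @ x_hat, and right-multiplying
    by diag(g) just scales each input column of W, so W_star = W * g.
    """
    return W * g  # broadcasts g over the input (last) dimension

# Equivalence check on random data.
torch.manual_seed(0)
d_in, d_out = 8, 4
x = torch.randn(d_in)
g = torch.randn(d_in)          # RMSNorm per-channel gain
W = torch.randn(d_out, d_in)   # weight of the following linear layer

x_hat = x / x.pow(2).mean().sqrt()       # weightless RMSNorm: x / rms(x)
y_ref = W @ (g * x_hat)                  # original: RMSNorm(g) -> Linear(W)
y_new = fold_rmsnorm_gain(W, g) @ x_hat  # FlashNorm: norm without weights
assert torch.allclose(y_ref, y_new, atol=1e-6)
```

Scaling the columns of `W` once at conversion time is what lets the runtime normalization reduce to a bare division by `rms(x)`.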

See the [paper](https://arxiv.org/abs/2407.09577) and the [transformer-tricks](https://github.com/OpenMachine-ai/transformer-tricks) repo for details.

## Usage

### Regenerate locally with `transformer_tricks`

```python
import transformer_tricks as tt

# Fold the RMSNorm weights of the source repo into a FlashNorm checkpoint.
tt.flashify_repo('meta-llama/Llama-3.1-8B', strict=True)
```

### Via HuggingFace Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Llama-3.1-8B-FlashNorm')
model = AutoModelForCausalLM.from_pretrained('open-machine/Llama-3.1-8B-FlashNorm')

ids = tok('Once upon a time', return_tensors='pt').input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

A warning about missing norm weights is expected; Transformers defaults those to ones, which is the correct value for a FlashNorm checkpoint.
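
Since the checkpoint is mathematically equivalent to the source model, greedy decoding should produce matching text (up to floating-point rounding introduced by the folded weights). A hedged sanity check, assuming access to both repos and enough memory to load each 8B model in turn:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Llama-3.1-8B-FlashNorm')
ids = tok('Once upon a time', return_tensors='pt').input_ids

texts = []
for repo in ('meta-llama/Llama-3.1-8B', 'open-machine/Llama-3.1-8B-FlashNorm'):
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=20, do_sample=False)
    texts.append(tok.decode(out[0], skip_special_tokens=True))
    del model  # free memory before loading the next checkpoint

print('match' if texts[0] == texts[1] else 'mismatch')
```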

### Via vLLM

Not yet supported. See the tracking issue linked above.

## License

Inherited from the source model (`llama3.1`).

## Citation

```bibtex
@misc{graef2024flashnormfastnormalizationtransformers,
      title={FlashNorm: Fast Normalization for Transformers},
      author={Nils Graef and Matthew Clapp and Andrew Wasielewski},
      year={2024},
      eprint={2407.09577},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.09577},
}
```