Initialize project; model provided by the ModelHub XC community
Model: open-machine/Llama-3.1-8B-FlashNorm
Source: Original Platform
35  .gitattributes  (vendored, new file)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
75  README.md  (new file)
@@ -0,0 +1,75 @@
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- flashnorm
- transformer-tricks
- efficient-inference
- weightless-rmsnorm
---

# Llama-3.1-8B-FlashNorm

FlashNorm-prepared checkpoint of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is mathematically equivalent to the source model and was presented in the paper [FlashNorm: Fast Normalization for Transformers](https://huggingface.co/papers/2407.09577).

The per-channel RMSNorm weight tensors (`input_layernorm.weight`, `post_attention_layernorm.weight`, `model.norm.weight`) are folded into the linear layers that follow them and then removed from the state dict entirely.

> **Framework support note.** Stock vLLM currently does not load this checkpoint because the norm weight tensors are absent. The upstream patch to accept missing tensors is tracked at: **TBD (vLLM issue link)**. Until that patch lands, use HuggingFace Transformers: it loads the checkpoint with a warning that the norm weights were not initialized and defaults them to ones, which is the correct behavior for FlashNorm.

## What FlashNorm does

An exact reformulation of `RMSNorm -> Linear`:

- Fold the per-channel normalization weight `g` into the following linear layer: `W_star = W @ diag(g)`, computed once at checkpoint conversion.
- After folding, the RMSNorm layer has no learnable per-channel scale. At runtime it simply divides by `rms(x)`.
- The resulting model computes the same output as the original, by Proposition 1 of the FlashNorm paper; the sketch below checks this at toy scale.
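
A minimal sketch of the fold, assuming PyTorch; the sizes, `eps`, and variable names are illustrative only, not taken from this model:

```python
import torch

d, n = 8, 16           # toy hidden / output sizes (illustrative)
g = torch.randn(d)     # per-channel RMSNorm weight
W = torch.randn(n, d)  # weight of the linear layer that follows the norm
x = torch.randn(d)

def rms(x, eps=1e-5):
    # root mean square of x, with a small eps for numerical stability
    return torch.sqrt(x.pow(2).mean() + eps)

# Original: RMSNorm with weight g, then Linear.
y_ref = W @ (g * (x / rms(x)))

# FlashNorm: fold g into the linear weight once at conversion time.
# W * g broadcasts across rows, i.e. W_star = W @ diag(g).
W_star = W * g

# At runtime the norm is weightless: just divide by rms(x).
y_flash = W_star @ (x / rms(x))

print(torch.allclose(y_ref, y_flash, atol=1e-6))  # True: exact reformulation
```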

See the [paper](https://arxiv.org/abs/2407.09577) and the [transformer-tricks](https://github.com/OpenMachine-ai/transformer-tricks) repo for details.

## Usage

### Regenerate locally with `transformer_tricks`

```python
import transformer_tricks as tt

tt.flashify_repo('meta-llama/Llama-3.1-8B', strict=True)
```

### Via HuggingFace Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Llama-3.1-8B-FlashNorm')
model = AutoModelForCausalLM.from_pretrained('open-machine/Llama-3.1-8B-FlashNorm')

ids = tok('Once upon a time', return_tensors='pt').input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

A warning about missing norm weights is expected; Transformers defaults those to ones, which is the correct value for a FlashNorm checkpoint.
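
To confirm the defaulting, a quick check along these lines (a sketch reusing `model` from the snippet above) should find every norm weight equal to one, since this checkpoint ships no norm weight tensors:

```python
import torch

for name, param in model.named_parameters():
    # input_layernorm, post_attention_layernorm, and the final model.norm
    if 'layernorm' in name or name == 'model.norm.weight':
        assert torch.all(param == 1.0), f'{name} is not all ones'
print('all norm weights are ones')
```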

### Via vLLM

Not yet supported. See the tracking issue linked above.

## License

`llama3.1`, inherited from the source model.

## Citation

```bibtex
@misc{graef2024flashnormfastnormalizationtransformers,
  title={FlashNorm: Fast Normalization for Transformers},
  author={Nils Graef and Matthew Clapp and Andrew Wasielewski},
  year={2024},
  eprint={2407.09577},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.09577},
}
```
36  config.json  (new file)
@@ -0,0 +1,36 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "dtype": "bfloat16",
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pad_token_id": null,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_parameters": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_theta": 500000.0,
    "rope_type": "llama3"
  },
  "tie_word_embeddings": false,
  "transformers_version": "5.5.4",
  "use_cache": true,
  "vocab_size": 128256
}
3  model.safetensors  (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08487ca3525f77dcdf6a86df3d85c997397c1cd681ef114c647e4478b0c748db
size 16060016552
4  special_tokens_map.json  (new file)
@@ -0,0 +1,4 @@
{
  "bos_token": "<|begin_of_text|>",
  "eos_token": "<|end_of_text|>"
}
410563  tokenizer.json  (new file)
File diff suppressed because it is too large.
2061  tokenizer_config.json  (new file)
File diff suppressed because it is too large.