Initialize the project; model provided by the ModelHub XC community.
Model: open-machine/Llama-3.2-1B-FlashNorm-test (source: original platform)
.gitattributes (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md (new file, 64 lines)
@@ -0,0 +1,64 @@
---
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- flashnorm
- transformer-tricks
- efficient-inference
pipeline_tag: text-generation
---

# Llama-3.2-1B-FlashNorm

A FlashNorm compatibility checkpoint of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B). The weights are derived from Meta's original release, obtained via the ungated [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) mirror (bit-identical to upstream).

The FlashNorm transformation is mathematically exact, and this checkpoint loads in stock `transformers` and `vLLM` without any code changes.

## What is FlashNorm?

An exact reformulation of `RMSNorm → Linear` that (i) folds the per-channel normalization weights into the following linear layer (`W* = W · diag(g)`) and (ii) defers the scalar `1/RMS(x)` normalization until after the matmul. On hardware with distinct vector and matrix units, the matrix multiplication and the RMS reduction can then execute in parallel.

See the [paper](https://github.com/OpenMachine-ai/transformer-tricks/blob/main/doc/flashNorm.pdf) and the [transformer-tricks](https://github.com/OpenMachine-ai/transformer-tricks) repo for details.
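
As a quick illustration, here is a minimal numerical check of the identity. This is a sketch with generic tensor names, not code from the transformer-tricks repo:

```python
# Sketch: verify that RMSNorm -> Linear equals the FlashNorm form
# (fold g into W, defer the scalar 1/RMS(x) until after the matmul).
import torch

torch.manual_seed(0)
d_in, d_out = 8, 16
x = torch.randn(d_in, dtype=torch.float64)         # activation vector
g = torch.randn(d_in, dtype=torch.float64)         # RMSNorm per-channel weight
W = torch.randn(d_out, d_in, dtype=torch.float64)  # following linear layer

rms = torch.sqrt(x.pow(2).mean() + 1e-5)

y_ref = W @ (x / rms * g)     # standard: normalize, scale by g, then matmul
W_star = W * g                # W* = W · diag(g): scales column j of W by g[j]
y_flash = (W_star @ x) / rms  # matmul first, scalar normalization deferred

assert torch.allclose(y_ref, y_flash)  # exact up to float64 rounding
```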

## What's different from the source checkpoint?

| Tensor | Source | This checkpoint |
|---|---|---|
| `model.layers.*.input_layernorm.weight` | learned per-channel `g` | all ones |
| `model.layers.*.self_attn.{q,k,v}_proj.weight` | `W` | `W · diag(g_input_layernorm)` |
| `model.layers.*.post_attention_layernorm.weight` | learned per-channel `g` | all ones |
| `model.layers.*.mlp.{gate,up}_proj.weight` | `W` | `W · diag(g_post_attention_layernorm)` |

All tensors are stored in the source dtype (`bfloat16`); the merged products are computed internally in float32 before casting back. `model.norm.weight` is left unchanged: `lm_head` is tied to the input embeddings, so its `g` cannot be folded into the projection without also altering the embeddings. Mathematical identity holds by construction.
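
For reference, a hedged sketch of how such a checkpoint could be produced. The actual conversion script lives in the transformer-tricks repo; module names below follow the Hugging Face Llama implementation:

```python
# Sketch of the weight folding (illustrative, not the official script):
# fold each norm's g into the following linear layers in float32,
# then reset the norm weights to ones.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'unsloth/Llama-3.2-1B', torch_dtype=torch.bfloat16)

with torch.no_grad():
    for layer in model.model.layers:
        g = layer.input_layernorm.weight.float()
        for proj in (layer.self_attn.q_proj, layer.self_attn.k_proj,
                     layer.self_attn.v_proj):
            # W* = W · diag(g): broadcasting scales column j of W by g[j]
            proj.weight.copy_((proj.weight.float() * g).to(torch.bfloat16))
        layer.input_layernorm.weight.fill_(1.0)

        g = layer.post_attention_layernorm.weight.float()
        for proj in (layer.mlp.gate_proj, layer.mlp.up_proj):
            proj.weight.copy_((proj.weight.float() * g).to(torch.bfloat16))
        layer.post_attention_layernorm.weight.fill_(1.0)

model.save_pretrained('Llama-3.2-1B-FlashNorm')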

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Llama-3.2-1B-FlashNorm')
model = AutoModelForCausalLM.from_pretrained('open-machine/Llama-3.2-1B-FlashNorm')

ids = tok('Once upon a time there was', return_tensors='pt').input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

With vLLM:

```bash
vllm serve open-machine/Llama-3.2-1B-FlashNorm
```

## Framework behavior

The FlashNorm transformation itself is mathematically exact; what varies is how the merged weights interact with each framework's inference kernels.

- **Hugging Face Transformers at fp32**: greedy generation is bit-identical to the source.
- **Hugging Face Transformers at fp16** and **vLLM** (any precision): a one-token argmax flip is possible at tight decision points, and downstream greedy decoding then amplifies it. The reason: precomputed merged weights round differently under lossy inference kernels than the runtime product `x·g·W` would.

This is a general property of precomputing weight-folded tensors for lossy inference kernels, **not specific to FlashNorm**. A native fused `RMSNorm + QKV` kernel (deferring `g` to runtime) eliminates the framework dependency and is in progress for vLLM / FlashInfer.
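
A quick way to check the fp32 claim locally. This is a sketch; it assumes enough memory to hold both 1B models in float32 at once:

```python
# Sketch: confirm fp32 greedy generation matches the source checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Llama-3.2-1B-FlashNorm')
ids = tok('Once upon a time there was', return_tensors='pt').input_ids

src = AutoModelForCausalLM.from_pretrained(
    'unsloth/Llama-3.2-1B', torch_dtype=torch.float32)
fn = AutoModelForCausalLM.from_pretrained(
    'open-machine/Llama-3.2-1B-FlashNorm', torch_dtype=torch.float32)

out_src = src.generate(ids, max_new_tokens=50, do_sample=False)
out_fn = fn.generate(ids, max_new_tokens=50, do_sample=False)
assert torch.equal(out_src, out_fn)  # bit-identical greedy tokens at fp32
```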

## License

Llama 3.2 Community License, inherited from the source model.
config.json (new file, 37 lines)
@@ -0,0 +1,37 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "dtype": "bfloat16",
  "eos_token_id": 128001,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pad_token_id": 128004,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_parameters": {
    "factor": 32.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_theta": 500000.0,
    "rope_type": "llama3"
  },
  "tie_word_embeddings": true,
  "transformers_version": "5.5.4",
  "unsloth_fixed": true,
  "use_cache": true,
  "vocab_size": 128256
}
model.safetensors (LFS pointer, new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d998eccffcf435d811fc2c76a1640e09ecc40678d44349748cb221652ebaff59
size 2471776384
special_tokens_map.json (new file, 23 lines)
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|finetune_right_pad_id|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (LFS pointer, new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
size 17209920
tokenizer_config.json (new file, 2066 lines): diff suppressed because it is too large.