Initialize project; model provided by the ModelHub XC community
Model: Naahraf27/npo_llama-3.2-1b-instruct_forget10_ep10_lr5e-5_alpha1.0_beta0.1 Source: Original Platform
36 .gitattributes vendored Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
86 README.md Normal file
@@ -0,0 +1,86 @@
---
license: llama3.2
library_name: transformers
base_model: open-unlearning/tofu_Llama-3.2-1B-Instruct_full
tags:
- unlearning
- tofu
- npo
- llama
- memory-laundering
- machine-unlearning
datasets:
- locuslab/TOFU
pipeline_tag: text-generation
language:
- en
---

# 1B NPO-Unlearned Llama -- TOFU forget10

This is the **benchmark-selected rank-1 1B checkpoint** from:

> **Do Unlearned LLMs Really Forget? A Multi-View Audit of TOFU Unlearning Across 1B, 3B, and 8B Llama Models**
> Farhaan Fayaz, Anas Adnan, Danial Norsam, Vidur Pitumbur, Berken Gokcek, Amir Solanki
> University College London

The model was produced by applying **Negative Preference Optimisation (NPO)** to the TOFU-finetuned Llama-3.2-1B-Instruct checkpoint, targeting the `forget10` split (20 fictitious authors, 200 QA pairs).
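For readers who want the shape of the objective, here is a minimal sketch of an NPO-style loss. This is an illustration under common assumptions, not the training code from the paper: we assume `beta` scales the log-probability ratio between the current model and a frozen reference (the standard NPO formulation), and that `alpha` weights an ordinary NLL term on the retain split. All function names are hypothetical.

```python
import torch.nn.functional as F

def npo_forget_loss(logp_theta, logp_ref, beta=0.1):
    # Standard NPO objective on the forget split:
    #   -(2 / beta) * E[log sigmoid(-beta * (log p_theta(y|x) - log p_ref(y|x)))]
    # logp_theta and logp_ref are sequence-level log-probabilities of the
    # forget answers under the current model and a frozen reference model.
    log_ratio = logp_theta - logp_ref
    return -(2.0 / beta) * F.logsigmoid(-beta * log_ratio).mean()

def combined_loss(logp_theta_forget, logp_ref_forget, retain_nll,
                  alpha=1.0, beta=0.1):
    # Assumed combination: NPO on the forget data plus an alpha-weighted
    # NLL (cross-entropy) term that anchors behaviour on the retain split.
    return (npo_forget_loss(logp_theta_forget, logp_ref_forget, beta)
            + alpha * retain_nll)
```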
## Intended use

This checkpoint is released as a **research artefact** for reproducibility: it is the exact model evaluated in the paper. It is not intended for production deployment.

## Training details

| Parameter | Value |
|-----------|-------|
| Base model | [`open-unlearning/tofu_Llama-3.2-1B-Instruct_full`](https://huggingface.co/open-unlearning/tofu_Llama-3.2-1B-Instruct_full) |
| Unlearning method | NPO |
| Forget split | `forget10` (20 authors, 200 QA pairs) |
| Retain split | `retain90` |
| Epochs | 10 |
| Learning rate | 5e-5 |
| Alpha | 1.0 |
| Beta | 0.1 |
| Sweep | 54-run grid (2 epoch settings x 3 learning rates x 3 alphas x 3 betas) |
| Selection | Rank-1 by the official TOFU `forget_quality` metric (blind) |
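Concretely, the *Selection* row above just means ranking the 54 sweep runs by the official forget-quality number and keeping the top run; a trivial sketch, with hypothetical run records:

```python
# Hypothetical sweep records; only the winning run's values come from
# the table above -- the other 53 configurations are elided.
runs = [
    {"epochs": 10, "lr": 5e-5, "alpha": 1.0, "beta": 0.1, "forget_quality": 0.967},
    # ... remaining grid configurations and their scores
]

# Rank-1 selection by the official TOFU forget_quality metric.
best = max(runs, key=lambda run: run["forget_quality"])
print(best)
```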
## Benchmark and audit results

| Metric | Value |
|--------|-------|
| TOFU forget quality | 0.967 |
| TOFU model utility | 0.548 |
| Overall novel-recall leak (corrected scorer) | 4.67% |
| Format-shift leak rate | 16.9% |
| Best-of-N prompt-level leak | 10.5% |
| Masked-probe top-1 accuracy (last layer) | 0.618 |
| Forgotten-answer log-likelihood shift vs TOFU-full | -0.599 |
| RTT recovery delta | +0.84 pp |

Under the corrected novel-recall scorer (which excludes prompt-echoed content), base models leak near zero (0.4%), confirming that the detected leakage is genuine TOFU-specific knowledge. The TOFU-full model leaks 3.70% at this scale; this unlearned checkpoint leaks 4.67%, actually *exceeding* its TOFU-full counterpart by 0.97 pp. In other words, NPO changes what the model says far more than what it still knows.
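As a toy illustration of the scorer's core idea (our own simplification, not the paper's implementation): credit only content that matches the forgotten reference answer *and* was not already present in the prompt.

```python
def novel_recall(prompt: str, generation: str, reference: str, n: int = 4) -> float:
    """Toy corrected-scorer sketch: fraction of reference-answer n-grams
    the model reproduces, excluding any n-gram echoed from the prompt."""
    def ngrams(text: str) -> set:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    ref, gen, pr = ngrams(reference), ngrams(generation), ngrams(prompt)
    recalled = ref & gen      # reference content the model reproduced
    novel = recalled - pr     # drop anything that was prompt-echoed
    return len(novel) / max(len(ref), 1)
```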
Full audit details, including per-family breakdowns, are reported in the paper and in the [GitHub repository](https://github.com/Naahraf27/memory-laundering).

## How to load

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Naahraf27/npo_llama-3.2-1b-instruct_forget10_ep10_lr5e-5_alpha1.0_beta0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)
```
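To sample with the settings shipped in this repo's `generation_config.json` (temperature 0.6, top-p 0.9), a minimal chat-style generation call looks like this; the prompt is purely illustrative:

```python
messages = [{"role": "user", "content": "Write a short biography of a fictitious novelist."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=128, do_sample=True, temperature=0.6, top_p=0.9
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```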
## Citation

If you use this checkpoint, please cite:

```bibtex
@article{fayaz2026memory,
  title={Do Unlearned LLMs Really Forget? A Multi-View Audit of TOFU Unlearning Across 1B, 3B, and 8B Llama Models},
  author={Fayaz, Farhaan and Adnan, Anas and Norsam, Danial and Pitumbur, Vidur and Gokcek, Berken and Solanki, Amir},
  year={2026},
  institution={University College London}
}
```
39 config.json Normal file
@@ -0,0 +1,39 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 32.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.3",
  "use_cache": true,
  "vocab_size": 128256
}
12 generation_config.json Normal file
@@ -0,0 +1,12 @@
{
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.51.3"
}
16 memory_laundering_checkpoint.json Normal file
@@ -0,0 +1,16 @@
{
  "checkpoint_kind": "full_or_sharded",
  "source_dir": "/scratch0/asolanki/projects/open-unlearning/saves/unlearn/npo_llama-3.2-1b-instruct_forget10_goldbug1b_full54_1gpu_ep10_lr5e-5_alpha1.0_beta0.1",
  "staged_dir": "/scratch0/asolanki/runtime/staged-checkpoints/npo_llama-3.2-1b-instruct_forget10_goldbug1b_full54_1gpu_ep10_lr5e-5_alpha1.0_beta0.1",
  "copied_files": [
    "config.json",
    "generation_config.json",
    "model.safetensors",
    "special_tokens_map.json",
    "tokenizer.json",
    "tokenizer_config.json"
  ],
  "metadata": {
    "planned_repo_id": "Naahraf27/npo_llama-3.2-1b-instruct_forget10_goldbug1b_full54_1gpu_ep10_lr5e-5_alpha1.0_beta0.1"
  }
}
3 model.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1cac2260fb14e2b03decca76525eb375784ef4dfcab5e1c038e384ee6b91a16
size 2471645608
17 special_tokens_map.json Normal file
@@ -0,0 +1,17 @@
{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|eot_id|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|eot_id|>"
}
3 tokenizer.json Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
size 17209920
2064 tokenizer_config.json Normal file
File diff suppressed because it is too large