---
license: mit
datasets:
- locuslab/TOFU
language:
- en
base_model: NousResearch/Llama-2-7b-chat-hf
pipeline_tag: text-generation
library_name: transformers
tags:
- unlearn
- machine-unlearning
- llm-unlearning
- data-privacy
- large-language-models
- trustworthy-ai
- trustworthy-machine-learning
- language-model
---

# Origin Model on Task "TOFU"

This is the origin (pre-unlearning) model for the TOFU task: a Llama-2-7b-chat model fine-tuned on the [locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU) dataset, serving as the starting checkpoint for subsequent unlearning experiments.

## Model Details

- **Base model:** [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
- **Dataset:** [locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU) (a loading sketch follows this list)
- **Paper:** [Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning](https://arxiv.org/abs/2410.07163)
- **Code:** [OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple)
- **License:** MIT
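For reference, the fine-tuning data can be inspected directly with the `datasets` library. The sketch below is a minimal example; the `"full"` configuration name, the `"train"` split, and the question/answer field layout are assumptions based on the locuslab/TOFU dataset card, not part of this model card.

```python
from datasets import load_dataset

# Load the full TOFU fine-tuning set; the "full" config name is an
# assumption taken from the locuslab/TOFU dataset card.
tofu = load_dataset("locuslab/TOFU", "full")

# Each record is a question/answer pair about fictitious authors.
print(tofu["train"][0])
```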

## Loading the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the origin model in bfloat16 with FlashAttention-2 enabled.
model = AutoModelForCausalLM.from_pretrained(
    "OPTML-Group/TOFU-origin-Llama-2-7b-chat",
    use_flash_attention_2=True,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Load the matching tokenizer.
tokenizer = AutoTokenizer.from_pretrained("OPTML-Group/TOFU-origin-Llama-2-7b-chat")
```
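Once loaded, the model can be queried like any Llama-2 chat model. The following is a minimal inference sketch continuing from the snippet above; the `[INST] ... [/INST]` wrapper is the standard Llama-2 chat prompt format, and the prompt text and generation settings are illustrative choices, not taken from this model card.

```python
# Illustrative prompt in the standard Llama-2 chat format.
prompt = "[INST] Tell me about yourself. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of a short continuation.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```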

## Citation

If you use this model in your research, please cite:

```bibtex
@article{fan2024simplicity,
  title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
  author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
  journal={arXiv preprint arXiv:2410.07163},
  year={2024}
}
```

## Reporting Issues

Please report any issues with the model on the [Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple) GitHub repository.
