---
language: en
license: odc-by
base_model:
- Qwen/Qwen3-8B
library_name: transformers
---

# Model

A distillation model checkpoint. For more details on intent-aware training, please **read our [paper](https://openreview.net/pdf?id=fRCm5c8x0j)**!

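The checkpoint can be loaded with the `transformers` library like any other causal language model. A minimal sketch is below; note that the repository id is a **placeholder**, as this card does not state the checkpoint's Hub name, so substitute the actual model id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with this checkpoint's actual Hub name.
model_id = "allenai/intent-aware-lfqa-qwen3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt and generate an answer.
messages = [{"role": "user", "content": "What causes ocean tides?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Generation settings (temperature, max tokens, etc.) are not specified in this card; the defaults above are illustrative only.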
# Results

Will be updated soon.

# Intended uses & limitations

This model is licensed under ODC-BY. It is intended for research and educational use in accordance with [Ai2's Responsible Use Guidelines](https://allenai.org/responsible-use).

## Training

The script used to train this model can be found [here](https://github.com/allenai/intent-aware-lfqa).

# Links
- 📝 [Paper](https://openreview.net/pdf?id=fRCm5c8x0j)
- 💻 [Code](https://github.com/allenai/intent-aware-lfqa)
- 🤖 [Collection](https://huggingface.co/collections/allenai/intent-aware-lfqa)
# Citation
```
@article{zhaoimproving,
  title={Improving Attributed Long-form Question Answering with Intent Awareness},
  author={Zhao, Xinran and Naik, Aakanksha and DeYoung, Jay and Chang, Joseph Chee and Hwang, Jena D and Wu, Tongshuang and Kishore, Varsha},
  journal={The Fourteenth International Conference on Learning Representations},
  year={2026}
}
```