Initialize project; model provided by the ModelHub XC community
Model: YeungNLP/LongQLoRA-Llama2-7b-8k Source: Original Platform
27 README.md Normal file
@@ -0,0 +1,27 @@
---
license: apache-2.0
language:
- en
---

# LongQLoRA: Efficient and Effective Method to Extend Context Length of LLMs

## Technical Report

Technical Report: [LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models](https://arxiv.org/abs/2311.04879)

## Introduction

LongQLoRA is a memory-efficient and effective method to extend the context length of Large Language Models with fewer training GPUs.

**On a single 32GB V100 GPU**, LongQLoRA can extend the context length of LLaMA2 7B and 13B from 4096 to 8192, and even to 12k.

LongQLoRA achieves competitive perplexity on the PG19 and Proof-pile datasets after only 1000 finetuning steps; it outperforms LongLoRA and comes very close to MPT-7B-8K.
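
The released weights can be loaded like any other LLaMA2-style causal LM. Below is a minimal usage sketch with Hugging Face `transformers`; it is not taken from the original repository, and the dtype and device settings are assumptions that may need adjusting to your hardware.

```python
# Minimal usage sketch (assumption, not the authors' official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YeungNLP/LongQLoRA-Llama2-7b-8k"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision; adjust to your hardware
    device_map="auto",
)

# A long input (e.g. a full document) can use the extended 8k context window.
prompt = "Summarize the following article:\n..."  # placeholder text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```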

Evaluation perplexity on the PG19 validation and Proof-pile test datasets, with an evaluation context length of 8192:

| Model               | PG19     | Proof-pile |
|---------------------|----------|------------|
| LLaMA2-7B           | >1000    | >1000      |
| MPT-7B-8K           | 7.98     | 2.67       |
| LongLoRA-LoRA-7B-8K | 8.20     | 2.78       |
| LongLoRA-Full-7B-8K | 7.93     | 2.73       |
| **LongQLoRA-7B-8K** | **7.96** | **2.73**   |
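
For reference, perplexity at a fixed evaluation context length is typically obtained by scoring the corpus in windows of that length and exponentiating the average per-token negative log-likelihood. The sketch below illustrates that convention; it is an illustrative assumption, not the evaluation script behind the numbers above.

```python
# Sketch of windowed perplexity evaluation (assumption, for illustration only).
import math
import torch

def perplexity(model, tokenizer, text, context_len=8192):
    """Score `text` in non-overlapping windows of `context_len` tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nll_sum, n_tokens = 0.0, 0
    for start in range(0, ids.size(0) - 1, context_len):
        window = ids[start:start + context_len].unsqueeze(0).to(model.device)
        if window.size(1) < 2:
            continue
        with torch.no_grad():
            # With labels=input_ids, the model returns the mean next-token
            # cross-entropy over this window.
            loss = model(window, labels=window).loss
        n = window.size(1) - 1  # number of predicted tokens in the window
        nll_sum += loss.item() * n
        n_tokens += n
    return math.exp(nll_sum / n_tokens)
```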