Update README.md (batch 1/1)
@@ -17,7 +17,8 @@ pipeline_tag: text-generation
# UNO-Scorer: A Unified General Scoring Model for UNO-Bench
<div align="center">
[Project Page](https://meituan-longcat.github.io/UNO-Bench)
[Paper](https://arxiv.org/abs/2510.18915)
[Base Model](https://huggingface.co/Qwen/Qwen3-14B)
@@ -53,7 +54,7 @@ pip install -U transformers
python3 test_scorer_hf.py --model-name /path/to/your/model
```
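The `test_scorer_hf.py` script above drives the scorer through the standard `transformers` stack. As a rough illustration of what a judge-style scoring prompt for such a model might look like (the field names and template here are my assumptions, not the script's actual format — check the repository for the real one):

```python
# Hypothetical sketch: assemble a grading prompt for a scorer model.
# The actual template used by test_scorer_hf.py may differ.
def build_scoring_prompt(question: str, reference: str, response: str) -> str:
    """Ask the scorer to grade a response against a reference answer."""
    return (
        "You are a strict grader. Compare the model response with the "
        "reference answer and output a score between 0 and 1.\n"
        f"Question: {question}\n"
        f"Reference Answer: {reference}\n"
        f"Model Response: {response}\n"
        "Score:"
    )

prompt = build_scoring_prompt("What is 2 + 2?", "4", "The answer is 4.")
print(prompt.endswith("Score:"))  # True: the prompt ends with the score cue
```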
-We recommend using vLLM for inference as it offers significantly better efficiency compared to the standard HuggingFace approach. Please follow the steps below to set up the environment and run the inference script provided in our official repository.
+We recommend using vLLM for inference as it offers significantly better efficiency compared to the standard HuggingFace approach. Please follow the steps below to set up the environment and run the inference script provided in our official repository [UNO-Bench](https://github.com/meituan-longcat/UNO-Bench).
### 1. Clone the Repository
@@ -61,7 +62,7 @@ First, clone the UNO-Bench repository:
```bash
git clone https://github.com/meituan-longcat/UNO-Bench.git
-cd UNO-Bench/uno_eval
+cd UNO-Bench/uno-eval
```
### 2. Install Dependencies
@@ -75,7 +76,7 @@ pip install -r requirement.txt
We provide an example script based on **vLLM** for efficient model inference. You can run the following command to test the scorer:
```bash
-bash examples/test_scorer.sh
+bash examples/test_scorer_vllm.sh
```
### 4. Adapt Your Reference Answer
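The diff ends at this heading, so the section body is not shown. As a rough sketch of what preparing a reference answer might involve — assuming the scorer consumes question/reference/response records (these field names are my assumption, not the repository's documented schema) — a record could be serialized as one JSON line:

```python
import json

# Hypothetical record layout: field names are illustrative assumptions,
# not the schema documented by UNO-Bench.
record = {
    "question": "What color is the sky on a clear day?",
    "reference_answer": "Blue.",
    "model_response": "It is blue.",
}
line = json.dumps(record, ensure_ascii=False)
print(line)
```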