diff --git a/README.md b/README.md
index 26e0964..5bae937 100644
--- a/README.md
+++ b/README.md
@@ -1,51 +1,12 @@
----
-frameworks:
-- Pytorch
-license: Apache License 2.0
-tasks:
-- image-text-to-text
+# FAST-3B Model Documentation
-#model-type:
-##e.g. gpt, phi, llama, chatglm, baichuan, etc.
-#- gpt
+## Overview
+This repository provides access to the **FAST-3B** model, which is built on the **Qwen/Qwen2.5-VL-3B-Instruct** base model.
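+
+## Download
+A minimal sketch for fetching the weights with the ModelScope SDK; the snippet and the repo id `xiaowenyi/FAST-3B` are carried over from the earlier template:
+```python
+# Download the FAST-3B weights from ModelScope (requires `pip install modelscope`)
+from modelscope import snapshot_download
+
+model_dir = snapshot_download('xiaowenyi/FAST-3B')
+print(model_dir)  # local directory containing the model files
+```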
-#domain:
-##e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-##List of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-##e.g. CIDEr, BLEU, ROUGE, etc.
-#- CIDEr
-
-#tags:
-##Custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-##e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
-#- vllm
----
-### The contributors of this model have not provided a more detailed introduction. Model files and weights are available on the "Model Files" page.
-#### You can download the model with the git clone command below, or via the ModelScope SDK
-
-Download via SDK
-```bash
-#Install ModelScope
-pip install modelscope
+## System Prompt
```
-```python
-#Download the model via the SDK
-from modelscope import snapshot_download
-model_dir = snapshot_download('xiaowenyi/FAST-3B')
-```
-Download via Git
-```
-#Download the model via git
-git clone https://www.modelscope.cn/xiaowenyi/FAST-3B.git
+"""You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within
```
-If you are a contributor to this model, we invite you to complete the model card promptly in accordance with the model contribution documentation.
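+
+## Usage
+The prompt above is intended as the `system` message of a chat request. A text-only sketch, assuming vLLM can serve this model; the `SYSTEM_PROMPT` constant is shorthand for the full prompt above, and `max_tokens` is an illustrative value:
+```python
+# Sketch: greedy decoding with the FAST-3B system prompt via vLLM.
+from vllm import LLM, SamplingParams
+
+SYSTEM_PROMPT = "You FIRST think about the reasoning process as an internal monologue and then provide the final answer. ..."  # full prompt above
+
+llm = LLM(model="xiaowenyi/FAST-3B")
+params = SamplingParams(temperature=0, max_tokens=1024)  # temperature=0 for reproducibility
+messages = [
+    {"role": "system", "content": SYSTEM_PROMPT},
+    {"role": "user", "content": "Describe the reasoning behind your answer format."},
+]
+outputs = llm.chat(messages, params)
+print(outputs[0].outputs[0].text)
+```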
\ No newline at end of file
+
+We recommend setting `temperature=0` to reproduce the reported performance. Note that performance may vary depending on the version of vLLM being used.
\ No newline at end of file