diff --git a/README.md b/README.md
index 3a537da..fdef6d4 100644
--- a/README.md
+++ b/README.md
@@ -6,16 +6,14 @@ tags:
 base_model:
 - Qwen/Qwen2-VL-2B-Instruct
 license: mit
-tasks:
-  - image-text-to-text
 ---
-[Github](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Studio](https://www.modelscope.cn/studios/AI-ModelScope/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://www.modelscope.cn/models/AI-ModelScope/ShowUI-2B)
+[Github](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Spaces](https://huggingface.co/spaces/showlab/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://huggingface.co/showlab/ShowUI-2B)
 
 ShowUI
 ShowUI is a lightweight (2B) vision-language-action model designed for GUI agents.
 
-## Try our ModelScope Studio Demo
-https://www.modelscope.cn/studios/AI-ModelScope/ShowUI
+## 🤗 Try our HF Space Demo
+https://huggingface.co/spaces/showlab/ShowUI
 
 ## ⭐ Quick Start
 
@@ -49,7 +47,7 @@ model = Qwen2VLForConditionalGeneration.from_pretrained(
 min_pixels = 256*28*28
 max_pixels = 1344*28*28
 
-processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
+processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B", min_pixels=min_pixels, max_pixels=max_pixels)
 ```
 
 2. **UI Grounding**
@@ -146,7 +144,7 @@ action_map = {
 ```python
 img_url = 'examples/chrome.png'
 split='web'
-system_prompt = _NAV_SYSTEM.format(_APP=split, _ACTION_SPACE=action_map[split])
+system_prompt = _NAV_SYSTEM.format(_APP=split, _ACTION_SPACE=action_map[split]) + _NAV_FORMAT
 query = "Search the weather for the New York city."
 messages = [