
MAI-UI is a family of large-scale GUI foundation agents (2B, 8B) built for real-world, interactive human-computer control across desktop and mobile UIs. (Template supports tools)

ollama run maternion/mai-ui:2b

Details

0dd3008abeeb · 2.6GB · 3 weeks ago

Architecture: qwen3vl · Parameters: 2.13B · Quantization: Q8_0
Params (truncated): { "repeat_penalty": 1.05, "stop": [ "<|im_start|>", "<|im_end|>", "<
Template: {{ .Prompt }}

Readme

MAI-UI: Real-World Centric Foundation GUI Agents.

[📄 Paper] [🌐 Website] [💻 GitHub]

(Overview figure)

📖 Background

The development of GUI agents could revolutionize the next generation of human-computer interaction. Motivated by this vision, we present MAI-UI, a family of foundation GUI agents spanning the full spectrum of sizes, including 2B, 8B, 32B, and 235B-A22B variants. We identify four key challenges to realistic deployment: the lack of native agent–user interaction, the limits of UI-only operation, the absence of a practical deployment architecture, and brittleness in dynamic environments. MAI-UI addresses these issues with a unified methodology: a self-evolving data pipeline that expands the navigation data to include user interaction and MCP tool calls, a native device–cloud collaboration system that routes execution by task state, and an online RL framework with advanced optimizations to scale parallel environments and context length.

🚀 Quick Start

Deployment with Ollama

You can deploy the model using Ollama (requires ollama>=0.12.7):

# Install Ollama from https://ollama.com/download
# Run the model (this also starts the Ollama API server)
ollama run Maternion/mai-ui:8b

The model will be served at http://localhost:11434/v1.
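Since the server exposes an OpenAI-compatible endpoint, it can be queried with plain HTTP. The sketch below is a minimal, hypothetical example of building a chat-completions request against that endpoint; the prompt text is illustrative, and the model tag should match whichever variant you pulled.

```python
import json
from urllib import request

# Local Ollama OpenAI-compatible chat endpoint (default port).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

# Illustrative payload; the instruction is a made-up example task.
payload = {
    "model": "Maternion/mai-ui:8b",
    "messages": [
        {"role": "user", "content": "Open the Settings app and enable dark mode."}
    ],
}

def query(url: str = OLLAMA_URL) -> dict:
    """POST the chat payload and return the decoded JSON response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server running, `query()["choices"][0]["message"]["content"]` holds the model's reply, following the standard OpenAI response shape.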

🏆 Results

Grounding

MAI-UI establishes a new state of the art across GUI grounding and mobile navigation benchmarks.

  • On grounding benchmarks, it reaches 73.5% on ScreenSpot-Pro, 91.3% on MMBench GUI L2, 70.9% on OSWorld-G, and 49.2% on UI-Vision, surpassing Gemini-3-Pro and Seed1.8 on ScreenSpot-Pro.

Mobile Navigation

  • On mobile GUI navigation, it sets a new SOTA of 76.7% on AndroidWorld, surpassing UI-Tars-2, Gemini-2.5-Pro, and Seed1.8. On MobileWorld, MAI-UI obtains a 41.7% success rate, significantly outperforming end-to-end GUI models and remaining competitive with Gemini-3-Pro based agentic frameworks.

Online RL

  • Our online RL experiments show significant gains from scaling parallel environments from 32 to 512 (+5.2 points) and increasing the environment step budget from 15 to 50 (+4.3 points).

Device-Cloud Collaboration

  • Our device-cloud collaboration framework can dynamically select on-device or cloud execution based on task execution state and data sensitivity. It improves on-device performance by 33% and reduces cloud API calls by over 40%.
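The routing idea above can be sketched in a few lines. This is an illustrative assumption about the policy shape (keep sensitive screens on-device, escalate hard steps to the cloud), not the paper's actual algorithm; the field names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TaskState:
    steps_taken: int              # UI actions the on-device agent has attempted
    on_device_confidence: float   # agent's confidence in its next action (0-1)
    contains_sensitive_data: bool # e.g. passwords or personal messages on screen

def route(state: TaskState) -> str:
    """Return 'device' or 'cloud' for the next execution step."""
    # Sensitive screen content never leaves the device.
    if state.contains_sensitive_data:
        return "device"
    # Escalate to the cloud model only when the small on-device
    # model is struggling: low confidence after several steps.
    if state.on_device_confidence < 0.5 and state.steps_taken > 3:
        return "cloud"
    return "device"
```

A policy like this captures why both numbers can improve at once: sensitive and easy steps stay local (fewer cloud calls), while the cloud handles only the steps the on-device model would likely fail.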

📝 Citation

If you find this project useful for your research, please consider citing our work:

@misc{zhou2025maiuitechnicalreportrealworld,
      title={MAI-UI Technical Report: Real-World Centric Foundation GUI Agents}, 
      author={Hanzhang Zhou and Xu Zhang and Panrong Tong and Jianan Zhang and Liangyu Chen and Quyu Kong and Chenglin Cai and Chen Liu and Yue Wang and Jingren Zhou and Steven Hoi},
      year={2025},
      eprint={2512.22047},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.22047}, 
}