AgenticLab: A Real-world Robot Agent Platform that Can See, Think, and Act
Overview
In this work, we present AgenticLab, an open-source, real-world benchmark platform that integrates large vision-language models (VLMs) and large language models (LLMs) to enable zero-shot robotic manipulation through reasoning and embodied interaction.
Unlike existing simulation-based benchmarks, AgenticLab focuses on accessible hardware, open knowledge-based models, and compositional agentic intelligence, allowing researchers to study how VLMs and LLMs perform in real-world manipulation scenarios.
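The "see, think, act" loop described above can be sketched as a minimal agent cycle: a VLM describes the scene, an LLM plans the next primitive, and an executor maps it to a robot command. All names below are illustrative placeholders, not the actual AgenticLab API, and the VLM/LLM calls are stubbed out.

```python
# Minimal sketch of a see-think-act agent loop in the spirit of AgenticLab.
# Every class, function, and command name here is a hypothetical stand-in;
# a real system would query an actual VLM/LLM instead of these stubs.

from dataclasses import dataclass


@dataclass
class Observation:
    image_summary: str  # stand-in for a camera frame fed to a VLM


def see(obs: Observation) -> str:
    """Stub VLM: return a text description of the scene."""
    return f"scene: {obs.image_summary}"


def think(description: str, goal: str) -> str:
    """Stub LLM planner: choose the next primitive from the description."""
    target = goal.split()[-1]  # naive heuristic: last word of the goal
    return "pick" if target in description else "search"


def act(action: str) -> str:
    """Stub executor: map a primitive to a robot command string."""
    return {"pick": "gripper.close()", "search": "base.scan()"}[action]


def step(obs: Observation, goal: str) -> str:
    """One full see -> think -> act cycle."""
    return act(think(see(obs), goal))


# Example: the cup is visible, so the planner picks it up;
# on an empty table, it falls back to searching.
print(step(Observation("a red cup on the table"), "grasp the cup"))
print(step(Observation("an empty table"), "grasp the cup"))
```

The point of the sketch is the compositional structure: perception, reasoning, and actuation are separate modules wired into one loop, which is what lets the platform swap in different VLMs or LLMs without changing the control code.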
BibTeX
@misc{guo2026agenticlab,
  title={AgenticLab: A Real-World Robot Agent Platform that Can See, Think, and Act},
  author={Pengyuan Guo and Zhonghao Mai and Zhengtong Xu and Kaidi Zhang and Heng Zhang and Zichen Miao and Arash Ajoudani and Zachary Kingston and Qiang Qiu and Yu She},
  year={2026},
  eprint={2602.01662},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2602.01662},
}