Abstract: Robot manipulation relies on accurately predicting contact points and end-effector directions to ensure successful operation. However, learning-based robot manipulation, trained on a limited set of categories within a simulator, often struggles to generalize, especially when confronted with extensive categories. We therefore introduce an innovative approach for robot manipulation that leverages the robust reasoning capabilities of Multimodal Large Language Models (MLLMs) to enhance the stability and generalization of manipulation. By fine-tuning only the injected adapters, we preserve the inherent common sense and reasoning ability of the MLLMs while equipping them with manipulation abilities. The fundamental insight lies in the introduced fine-tuning paradigm, which encompasses object category understanding, affordance prior reasoning, and object-centric pose prediction to stimulate the reasoning ability of MLLMs in manipulation. During inference, our approach utilizes an RGB image and a text prompt to predict the end effector's pose in a chain of thoughts. After the initial contact is established, an active impedance adaptation policy is introduced to plan the upcoming waypoints in a closed-loop manner. Moreover, in the real world, we design a test-time adaptation (TTA) strategy for manipulation that enables the model to better adapt to the current real-world scene configuration. Experiments in both simulation and the real world show the promising performance of ManipLLM. More details and demonstrations can be found at this https URL.