It is challenging for an autonomous robot to work safely in dynamic and unstructured environments, such as outer space, open land, and the deep sea. Compared with traditional industrial robots in manufacturing, robot autonomy in uncontrolled scenarios requires additional capabilities, including perception, navigation, decision-making, and manipulation.
To address this problem, researchers at the Shenyang Institute of Automation (SIA) of the Chinese Academy of Sciences (CAS), together with partners at the Edinburgh Centre for Robotics (ECR) in the United Kingdom, developed a novel deep reinforcement learning (DRL)-based control system. Their study was published in Sensors.
The method aims to achieve autonomous mobile manipulation in dynamic and unstructured environments, applying cutting-edge AI techniques to a complex real-world mobile manipulator.
The joint research team designed a new neural network architecture to build an RL-based whole-body robot control model, which leverages state-of-the-art deep learning methods to perceive and understand the environment and targets through an on-board camera.
The acquired perceptual information is then combined with the robot's state to control the robot autonomously. Through interactive learning and training in both simulation and the physical environment, the team ultimately achieved autonomous operation of the mobile manipulator in the real world.
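The core idea described above, a single policy that fuses camera-derived features with the robot's own state to produce whole-body commands, can be sketched as follows. This is a minimal illustrative example, not the architecture from the study: the layer sizes, feature dimensions, and command layout (2 base velocities plus 6 arm joint velocities) are all assumptions, and the weights here are random stand-ins for parameters that DRL training would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class WholeBodyPolicy:
    """Illustrative policy that fuses vision features with
    proprioceptive state to emit whole-body velocity commands.
    All dimensions below are assumptions, not the paper's values."""

    def __init__(self, vision_dim=128, state_dim=10, hidden=64, n_cmds=8):
        # Random weights stand in for parameters learned via DRL.
        self.w1 = rng.standard_normal((vision_dim + state_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, n_cmds)) * 0.1
        self.b2 = np.zeros(n_cmds)

    def act(self, vision_features, robot_state):
        # Fuse perception and proprioception into one input vector.
        x = np.concatenate([vision_features, robot_state])
        h = relu(x @ self.w1 + self.b1)
        # tanh bounds each command to [-1, 1], a common choice
        # for normalized continuous actions.
        return np.tanh(h @ self.w2 + self.b2)

policy = WholeBodyPolicy()
# Hypothetical inputs: a 128-d camera feature vector and a 10-d robot state.
cmd = policy.act(rng.standard_normal(128), rng.standard_normal(10))
print(cmd.shape)  # (8,)
```

Controlling the base and the arm through one shared network, rather than two separate controllers, is what makes the control "whole-body": the policy can trade off base motion against arm motion when reaching a target.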
This research lays the foundation for deploying DRL in autonomous operation research on more complex floating-base underwater robot systems.
Deep-RL-based mobile manipulation (Image by WANG Cong)