Research News
RIS-Enabled Learning Framework Advances High-Precision Control in Wireless Cloud Robotics
Editor: ZHANG Nannan | Mar 19, 2026

A team of researchers from the Shenyang Institute of Automation (SIA) of the Chinese Academy of Sciences has developed a new framework that significantly improves precision control in wireless cloud robotic systems. By combining Reconfigurable Intelligent Surface (RIS) technology with a multi-agent transfer reinforcement learning algorithm, the researchers achieved coordinated optimization of robotic control and wireless communication, addressing a longstanding bottleneck in the field.

Their findings were published in IEEE/CAA Journal of Automatica Sinica.

The Wireless Cloud Robotic System (WCRS) is envisioned as a cornerstone of future factories. However, achieving high-precision, low-latency control of robots in complex and dynamic wireless environments remains a critical challenge.

To tackle this, the researchers designed a novel RIS-assisted system architecture that can dynamically adjust phase shifts and beamforming in response to environmental interference. This effectively "reshapes" the wireless communication channel into a more deterministic and stable link.
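The article does not give the channel model, but the "reshaping" idea can be illustrated with the widely used RIS model in which each reflecting element applies a tunable phase shift to its incident signal. The sketch below (an assumption for illustration, not the paper's implementation) aligns the phase of every reflected path with the direct path so that all paths add coherently, turning a weak fading link into a much stronger, more deterministic one. The element count and channel statistics are arbitrary choices.

```python
import numpy as np

# Assumed RIS signal model (not the paper's code): with N reflecting
# elements, the effective channel seen by the robot is
#   h_eff = h_d + sum_n g_n * exp(j*theta_n) * h_r_n
# where h_d is the direct base-station->robot path, g_n the path to
# element n, and h_r_n the path from element n to the robot.

rng = np.random.default_rng(1)
N = 64  # number of RIS elements (assumed)

# Rayleigh-fading channel coefficients (unit average power).
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
g   = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

cascade = g * h_r                           # per-element cascaded path
theta = np.angle(h_d) - np.angle(cascade)   # rotate each path onto h_d
h_eff = h_d + np.sum(cascade * np.exp(1j * theta))

# With aligned phases, |h_eff| = |h_d| + sum_n |g_n * h_r_n| > |h_d|.
print(abs(h_d), abs(h_eff))
```

Because the optimal phases depend on the instantaneous channel, the RIS must re-tune as the environment changes, which is exactly the dynamic adjustment the architecture is designed to perform.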

Within this architecture, the researchers found that robotic control decisions are closely coupled to wireless channel configurations, so optimizing either dimension alone cannot improve overall performance; the two must be co-optimized. After accounting for constraints on control input thresholds, control delay deadlines, beam phase, antenna power, and information distortion, the researchers formulated a system stability maximization problem centered on control error and communication jitter. They defined a novel jitter-oriented system stability objective and derived a closed-form expression for the control delay deadline using Jensen's inequality and a Lyapunov-Krasovskii functional.
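The article does not reproduce the derivation, but the standard tool it names works as follows (generic symbols, not the paper's exact formulation): for a system whose control input arrives with delay $\tau$, one builds a Lyapunov-Krasovskii functional that penalizes both the current state and the delayed history,

$$
V(x_t) \;=\; x(t)^{\top} P\, x(t) \;+\; \int_{t-\tau}^{t} x(s)^{\top} Q\, x(s)\, ds,
\qquad P, Q \succ 0,
$$

and requires $\dot{V} < 0$ along trajectories. Solving the resulting matrix inequality for the largest admissible delay yields a deadline $\tau_{\max}$: as long as the communication link delivers control updates within $\tau_{\max}$, stability is guaranteed, which is what makes a closed-form delay deadline usable as a communication constraint.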

Furthermore, given the time-varying characteristics and partial observability of the robot and communication channel states, the researchers modeled the problem as a partially observable Markov decision process (POMDP). They proposed LSTM-PPO-MATRL, a multi-agent transfer reinforcement learning method built on an LSTM-enhanced proximal policy optimization (PPO) algorithm, to jointly optimize control and communication parameters, including control input compensation, RIS phase shift, and beamforming, thereby enhancing system stability.
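The role of the LSTM here is the standard remedy for partial observability: a recurrent cell compresses the history of noisy, incomplete observations into a hidden state that acts as a belief summary for the policy. The sketch below (a minimal NumPy illustration with assumed dimensions, not the authors' network) shows a single LSTM cell rolled over an observation sequence to produce such a summary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell; weights for all four gates stacked in one matrix."""
    def __init__(self, obs_dim, hidden_dim):
        scale = 1.0 / np.sqrt(obs_dim + hidden_dim)
        self.W = rng.normal(0.0, scale, (4 * hidden_dim, obs_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, obs, h, c):
        z = self.W @ np.concatenate([obs, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update long-term cell memory
        h = o * np.tanh(c)           # emit new hidden state
        return h, c

def encode_history(cell, observations):
    """Summarize a sequence of partial observations into one hidden state."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for obs in observations:
        h, c = cell.step(obs, h, c)
    return h  # belief-like vector fed to the PPO policy and value heads

obs_dim, hidden_dim, T = 6, 16, 20   # assumed sizes for illustration
cell = LSTMCell(obs_dim, hidden_dim)
history = [rng.normal(size=obs_dim) for _ in range(T)]
belief = encode_history(cell, history)
print(belief.shape)  # (16,)
```

In the full method, each agent (e.g. controller-side and communication-side) would maintain such a recurrent state, with transfer learning used to reuse policies across tasks.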

To validate the effectiveness of the algorithm, the researchers conducted verification across four control tasks on a simulation platform: low-dynamic tasks such as Inverted Pendulum (INV) and Hopper (HOP), and high-dynamic tasks such as HalfCheetah (HAL) and Ant (ANT), corresponding to a 2-DOF robotic arm, a monopod robot, a bipedal robot, and a quadruped robot, respectively.

Experimental results demonstrate that, within the new architecture, the control-communication joint optimization method exhibits advantages such as fast algorithm convergence and high rewards, meeting the requirements for cloud robot control and outperforming single-dimensional optimization of either control or communication alone.

Theoretically, this study solves the challenging problem of control and communication co-optimization in wireless cloud robotic systems by combining RIS and intelligent algorithms. It also provides new insights for achieving large-scale, high-precision, and highly reliable robot collaboration in future factories. These results are promising for applications that demand extremely low latency and high-precision control, such as remote surgery, collaborative assembly, and autonomous driving. They are also significant for advancing the development of industrial automation and intelligent systems.

Contact

XU Chi

Shenyang Institute of Automation

E-mail:

Topics
Artificial Intelligence