Data-enabled predictive control (DeePC) leverages system measurements to characterize system dynamics for optimal control. The performance of DeePC relies on optimizing its hyperparameters, especially in noisy systems where the optimal hyperparameters vary over time. Existing hyperparameter tuning approaches for DeePC are more often than not computationally inefficient or overly conservative. This paper proposes an adaptive DeePC in which hyperparameter adaptation is guided by reinforcement learning. We start by establishing the relationship between the system's input/output behavior and the DeePC hyperparameters. We then formulate hyperparameter tuning as a sequential decision-making problem and address it through reinforcement learning. We train a reinforcement learning model offline and integrate the trained model with DeePC to adjust its hyperparameters adaptively in real time. We conduct numerical simulations under diverse noise conditions, and the results demonstrate that the proposed approach identifies near-optimal hyperparameters and is robust to noise in the control loop.
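The adaptive loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: `rl_policy` stands in for a trained reinforcement learning model (here just a fixed hand-written mapping from an estimated noise level to a regularization weight), and `deepc_step` is a simplified regularized least-squares surrogate for the DeePC optimization; the matrix `Phi` and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rl_policy(noise_variance_estimate):
    # Stand-in for a trained RL policy: maps an estimated noise variance
    # to a regularization weight (larger noise -> stronger regularization).
    # A real policy would be learned offline, as in the paper.
    return 1.0 + 100.0 * noise_variance_estimate

def deepc_step(Phi, y_ref, lam):
    # Toy DeePC-like step: choose g minimizing
    #   ||Phi g - y_ref||^2 + lam * ||g||^2,
    # with closed form g = (Phi^T Phi + lam I)^{-1} Phi^T y_ref.
    n = Phi.shape[1]
    g = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y_ref)
    return Phi @ g  # predicted output trajectory

Phi = rng.standard_normal((20, 30))  # Hankel-like data matrix (toy)
y_ref = np.ones(20)                  # reference trajectory (toy)

errors = []
for t in range(50):
    noise_std = 0.05 * (1 + t / 50)      # noise level drifting over time
    noise_est = noise_std ** 2           # assume a variance estimator exists
    lam = rl_policy(noise_est)           # adapt hyperparameter in real time
    y = deepc_step(Phi, y_ref, lam) + noise_std * rng.standard_normal(20)
    errors.append(float(np.linalg.norm(y - y_ref)))

print(f"mean tracking error over 50 steps: {np.mean(errors):.3f}")
```

The key point the sketch conveys is the online interplay: at each control step, an estimate of the current noise condition feeds the trained policy, which outputs the hyperparameter used by the next DeePC solve.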