Control theory has produced, since the 1950s, a wealth of feedback designs with rigorous guarantees of stability, performance, robustness, and optimality. Some of these feedback laws are very complex and require intensive numerical computation online.
Neural operators, a branch of machine learning with sophisticated tools and theory for approximating infinite-dimensional nonlinear mappings, offer a way to speed up the online implementation by roughly a factor of 1,000, replacing online numerical computation with evaluations of NN approximations of the operators.
Around 2022, a line of research emerged in our group, with collaborators at UC San Diego (Yuanyuan Shi and Luke Bhan) and elsewhere, to facilitate the implementation of complex feedback laws using neural operators. This research, while incorporating the usual machine learning steps (offline generation of a training set by numerical computation, followed by training of a neural network), is also intensely theoretical: it proves that the stability, performance, and robustness guarantees established in classical control-theoretic work are retained even under NN approximation.
[Figure: example of an online-updated gain kernel using an offline-learned neural operator]
Arguably, some of the most complex feedback laws in existence are those for PDEs, nonlinear systems, and delay systems. Our focus is on developing neural operators for such systems.
The implementations are not limited to control laws but also include state estimators (observers), adaptive control, and (nonlinear) gain scheduling.
The key components of this research are:

1. Defining the nonlinear operators that need to be approximated (a representative example follows this list).
2. Establishing the continuity (and even Lipschitzness) of these nonlinear infinite-dimensional mappings.
3. Proving guarantees of Lyapunov stability, performance, and robustness under the NN approximations.
4. Computational illustrations of training the neural operators and of their performance in the feedback loop (a simulation sketch appears below).
The most distinctive, and mathematically “juiciest,” of these four components is component 2.
📌 Check this page occasionally for new developments in this evolving field.
R. Vazquez and M. Krstic, “Gain-only neural operators for PDE backstepping,” Chinese Annals of Mathematics, Ser. B, invited article for special issue dedicated to Jean-Michel Coron, under review.
L. Bhan, Y. Shi, and M. Krstic, “Neural operators for hyperbolic PDE backstepping kernels,” IEEE Conference on Decision and Control, 2023.
L. Bhan, Y. Shi, and M. Krstic, “Neural operators for hyperbolic PDE backstepping feedback laws,” IEEE Conference on Decision and Control, 2023.
S.-S. Wang, M. Diagne, and M. Krstic, “Neural operator approximations of backstepping kernels for 2×2 hyperbolic PDEs,” American Control Conference, 2024.
R. Vazquez and M. Krstic, “Gain-only neural operator approximators of PDE backstepping controllers,” European Control Conference, 2024.
S.-S. Wang, M. Diagne, and M. Krstic, “Numerical implementation of deep neural PDE backstepping control of reaction-diffusion PDEs with delay,” Modeling, Estimation and Control Conference, 2024.