Miroslav Krstic – Research Summary
An abbreviated version of this narrative (one-page, less technical, no references) is available at http://flyingv.ucsd.edu/krstic/research-sum.html. Clicking on a blue item in the narrative will display the paper in question from Krstic’s web server.
1. Nonlinear Adaptive Control. Following
the ground-breaking achievements in geometric control theory in the 1980s, the
needs in applications turned the focus to the questions of robustness to
uncertainties in the systems’ vector fields. In the early 1990s Krstic
pioneered feedback stabilization methods for nonlinear systems with unknown
parameters. He developed a comprehensive theoretical arsenal of
Lyapunov-based [J3, J5, J9] and non-Lyapunov-based
methods [J6, J8, J11] for design and
analysis of adaptive controllers for nonlinear systems. He provided a method
that removes the overparameterization that plagued previous adaptive nonlinear control
designs [J3], necessary
and sufficient clf-based conditions for adaptive stabilization [J9, J11] in the spirit of
Sontag’s results for non-adaptive stabilization, the first method for
systematic transient performance improvement in adaptive control [J4], the first analysis of
asymptotic behaviors in the absence of persistent excitation [J13], the first proofs of
robustness of adaptive backstepping controllers to unmodeled dynamics [J12, J20, J22], the first adaptive
controllers that are guaranteed to remain stabilizing even when adaptation is
turned off [J7], and the
first adaptive controllers that possess an (inverse) optimality property over the
entire infinite time interval [J19, J111]. He received four
best paper awards for his work in this period, including the Axelby Prize for
the best paper in the IEEE Transactions on Automatic Control for the
single-authored paper [J13],
where he quantifies the measure of initial conditions from which adaptive
controllers may asymptotically approach destabilizing values of the controller
gains by completely characterizing the topology of invariant manifolds in the
closed-loop adaptive system. His PhD dissertation-based 1995 book [B1] with
Kanellakopoulos and Kokotovic is the second most highly
cited research book in control theory of all time, with over 4700 citations
as of September 2012.
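As a minimal sketch of the Lyapunov-based certainty-equivalence idea behind these designs (a schematic illustration, not the specific constructions of the cited papers), consider the scalar system
\[
\dot{x} = \theta\,\varphi(x) + u ,
\]
with an unknown constant parameter $\theta$ and a known nonlinearity $\varphi$. The feedback and gradient-type update law
\[
u = -c\,x - \hat{\theta}\,\varphi(x), \qquad \dot{\hat{\theta}} = \gamma\, x\,\varphi(x), \qquad c,\gamma>0 ,
\]
render the Lyapunov function $V = \tfrac{1}{2}x^{2} + \tfrac{1}{2\gamma}(\theta-\hat{\theta})^{2}$ nonincreasing, since $\dot{V} = -c\,x^{2}$, guaranteeing boundedness and regulation of $x$ without knowledge of $\theta$; the backstepping and tuning-functions machinery of [B1] extends this idea recursively to systems of arbitrary relative degree without overparameterization.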
2. Stochastic Stabilization. In the late 1990s Krstic and his student Deng developed
the first globally stabilizing controllers for stochastic nonlinear systems [B2]. He provided necessary and sufficient clf-based conditions
for stabilization of stochastic systems [J17, J18, J45] and provided full-state
and output-feedback designs [J25,
J33] for stochastic
stabilization. Extending his deterministic results on ISS-clfs for inverse optimal
disturbance attenuation [J19],
he introduced the concept of noise-to-state stability (NSS) for systems whose
noise covariance is unknown and time-varying (which is a proper stochastic
equivalent of Sontag’s ISS for systems with random inputs) and designed
differential game controllers that achieve, with probability one, arbitrary
peak-to-peak gain function assignment with respect to the noise covariance [J45]. He developed the
first LaSalle-type theorem free of global Lipschitzness restrictions [J45], which allowed him to
introduce globally stabilizing feedback laws for stochastic nonlinear systems
with unknown constant covariance. This body of work provided explicit feedback
laws for stochastic nonlinear systems, without requiring solutions of
Hamilton-Jacobi PDEs.
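In schematic terms (the precise formulation is in [J45]), for a stochastic system
\[
dx = f(x)\,dt + g(x)\,\Sigma(t)\,dW ,
\]
whose noise covariance $\Sigma(t)\Sigma(t)^{T}$ is unknown and time-varying, noise-to-state stability asks that for every $\varepsilon>0$ there exist a class-$\mathcal{KL}$ function $\beta$ and a class-$\mathcal{K}$ function $\gamma$ such that
\[
\mathbb{P}\left\{ |x(t)| \le \beta(|x(0)|,t) + \gamma\!\left(\sup_{0\le s\le t}\big\|\Sigma(s)\Sigma(s)^{T}\big\|\right) \right\} \ge 1-\varepsilon, \qquad t\ge 0 ,
\]
so that the noise covariance plays the role that a deterministic disturbance plays in Sontag’s ISS.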
3. PDE Control. Before
Krstic’s work in the area of control of partial differential equations,
explicit control algorithms existed only for systems modeled by ODEs and for
elementary cases of open-loop stable PDEs where exponential stabilization is
achieved by simple proportional feedback, such as by “boundary dampers.” For numerous
classes of unstable PDEs and for numerous applications involving fluids,
elasticity, plasmas, and other spatially distributed dynamics,
no methods existed for obtaining explicit formulae for feedback laws. The
situation was even more hopeless for PDEs with nonlinearities or unknown
parameters. Around 2000 Krstic developed a radically new framework, called
“continuum backstepping,” for converting PDE systems into desirable “target
systems” using explicit transformations and feedback laws, which both employ
Volterra-type integrals in spatial variables, with kernels resulting from
solving linear PDEs of Goursat type on triangular domains. With this approach,
Krstic and his student Smyshlyaev were able to establish a general methodology
for stabilizing linear PDEs, which is presented in a tutorial format in the
SIAM-published graduate textbook [B5],
which was a finalist for IFAC’s Harold Chestnut Textbook Prize.
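The canonical example of this construction, stated here schematically, is the unstable reaction-diffusion equation $u_t = u_{xx} + \lambda u$ on $x\in(0,1)$ with $u(0,t)=0$ and control applied at $x=1$. The backstepping transformation and boundary feedback
\[
w(x,t) = u(x,t) - \int_{0}^{x} k(x,y)\,u(y,t)\,dy, \qquad u(1,t) = \int_{0}^{1} k(1,y)\,u(y,t)\,dy ,
\]
map the plant into the exponentially stable target system $w_t = w_{xx}$, $w(0,t)=w(1,t)=0$, provided the kernel solves the Goursat-type problem $k_{xx}-k_{yy}=\lambda k$ on the triangle $0\le y\le x\le 1$ with $k(x,0)=0$ and $k(x,x)=-\lambda x/2$. In this case the kernel is even available in closed form, $k(x,y) = -\lambda y\, I_{1}\big(\sqrt{\lambda(x^{2}-y^{2})}\big)/\sqrt{\lambda(x^{2}-y^{2})}$, where $I_{1}$ is a modified Bessel function.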
Krstic
has developed stabilizing feedback laws for a vast array of PDE classes,
including parabolic [J73,
J76, J78] and hyperbolic
classes (of the first [J96]
and second order [J92, J130]), Burgers [J28, J40, J42], Korteweg-de Vries [J41], Kuramoto-Sivashinsky
[J36], Schrödinger [J149, J155], Ginzburg-Landau [J71, J86], thermal convection [J77, J99], wave equations with
unconventional anti-stiffness [J92] and anti-damping [J122], Euler beams [J118], shear beams [J93], Timoshenko beams [J116], as well as cascades
of ODEs with parabolic [J119,
J143] and hyperbolic [J120, J140] PDEs.
For nonlinear PDEs of a very general
parabolic class, which includes semilinear PDEs without a restriction on the
nonlinearity growth, Krstic and Vazquez introduced a general design based on
spatial Volterra series operators [J101, J102] whose kernels are
found by solving a sequence of linear PDEs of Goursat type on domains whose
dimension grows to infinity but whose volumes decay to zero, allowing them to
prove convergence of the control laws and stability in closed loop. This is at
present the most general design approach available for stabilizing nonlinear PDEs
that are not structurally restricted or open-loop stable. For the special case
of the viscous Burgers equation, Krstic found the nonlinear feedback operators explicitly [J108] and gave an explicit solution to the motion planning problem using the backstepping approach [J109].
Inspired by Krener’s backstepping
observers for nonlinear ODEs, Krstic developed their PDE equivalents for
various PDEs
whose state is measurable only at the boundary [J76, J92, J98, J99, J109]. He also
established the separation principle, namely the stability of the closed loop when the full-state feedback laws employ the state estimates from the PDE observers.
For PDEs with unknown parameters,
such as viscosity in parabolic PDEs, advection speed in transport PDEs, or
damping in wave PDEs, Krstic developed a series of adaptive control designs [J47, J88, J89, J90, J123, J133, J138], made possible by
the explicit form of the controllers arising from the backstepping approach.
This work culminated in the book [B8], which presents an array of approaches to system identification and adaptive control for PDEs of parabolic type.
Krstic’s method has seen application
in problems whose complexity (nonlinear PDE models) makes them intractable by
other methods, including estimation of the spatial distribution of charge within lithium-ion batteries and liquid cooling of large battery packs (with Bosch), control of flexible wings (with UIUC), control of current and kinetic profiles in fusion reactors (with General Atomics), control of automotive catalysts (with Ford), and problems in offshore oil production, including gas-liquid slugging flows in long pipes and stick-slip friction-induced instabilities in long flexible drilling systems (with Mines Paris-Tech). His approach to PDE control
has even led to control laws for deployment of multi-agent systems into
geometric shapes of previously unattained generality [J146, J152].
4. Control of Navier-Stokes Systems. Turbulent fluid flows are modeled
by the notoriously difficult Navier-Stokes PDEs and remain a benchmark of
difficulty for analysis and control design for PDEs. After developing
controllers that induce stabilization or mixing (depending on the gain sign) by
feedback in low Reynolds number flows [B3] in channels [J48, J60], pipes [J68], around bluff bodies,
and for jet flows, Krstic and Vazquez achieved a breakthrough [B6] in developing
control designs applicable to any Reynolds number, in 2D and 3D domains [J91, J117], for flows that are
electrically conducting and governed by a combination of Maxwell’s and Navier-Stokes
equations [J100]
(magnetohydrodynamic flows, such as plasmas and liquid metals used in cooling
systems in fusion reactors), and for flows that are measurable only on the
boundary [J98]. In
addition to stabilization, Krstic was the first to formulate and solve motion
planning problems for Navier-Stokes systems [J113], providing a method
which, when developed for airfoil geometries, will enable the replacement of
moving flap actuators by small fluidic actuators that trip the flow over the airfoil to generate the desired waveforms of forces for maneuvering the vehicle, rather than by changing the shape of the wing as moving flaps do.
5. Delay Systems. Applying
his methods for PDEs to ODE systems with arbitrarily long delays, in his
sole-authored 2009 book [B7] Krstic developed
results that revolutionized the field of control of delay systems. He first employed his analysis method based on backstepping transformations to answer the long-standing question of robustness of predictor feedbacks to delay mismatch [J107], to settle the question of
stability of predictor feedbacks for rapidly time-varying delays left open in
the 1982 work of Artstein [J125],
and to design delay-adaptive controllers for highly uncertain delays [J123, J138]. He then introduced a framework
with nonlinear predictor operators, providing globally stabilizing nonlinear
controllers in the presence of delays of arbitrary length and developing
analysis methods for such nonlinear infinite-dimensional systems [J121]. He and his student
Bekiaris-Liberis solved the problems of stabilization of general nonlinear
systems under time-varying delays [J154] and state-dependent
delays [J162, J163], as well as for
linear systems with distributed delays [J142, J166].
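The basic object behind these results, stated schematically for the linear case, is the predictor feedback for $\dot{x}(t) = A x(t) + B\,U(t-D)$ with a known input delay $D$,
\[
U(t) = K\left[ e^{AD} x(t) + \int_{t-D}^{t} e^{A(t-\theta)} B\,U(\theta)\,d\theta \right] ,
\]
which feeds back a prediction of the state $D$ time units ahead and yields $\dot{x} = (A+BK)x$ for $t\ge D$. Krstic’s key step was to model the actuator delay as a transport PDE and to view the predictor as a backstepping transformation of that PDE, which is what enables the robustness, time-varying-delay, adaptive, and nonlinear extensions described above.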
Inspired
by Datko’s famed 1988 examples showing that controllers for certain PDE systems possess
zero robustness margin to delays, Krstic developed control laws that compensate
delays of any length in feedback designs for parabolic [J126] and second-order
hyperbolic [J136]
PDEs. He and Wang also characterized all the delay values that do not induce
instability when applying the simple boundary damper feedback to wave PDEs [J144].
Applying
Krstic's predictor-based techniques to sampled-data nonlinear control systems, where
only semiglobal practical stabilization under short sampling times was
previously achievable, he and Karafyllis developed controllers that
guarantee global stability under arbitrarily long sampling times [J156, J159].
6. Stochastic Averaging. Motivated
by biological “gradient climbers” like nutrient-seeking E. coli bacteria,
Krstic developed stochastic extremum seeking algorithms that represent
plausible simple feedback laws executed by individual bacteria when performing
“chemotaxis” [B9]. To provide
stability guarantees for such stochastic algorithms, he and his postdoc Liu
developed major generalizations to the mathematical theory of stochastic
averaging and stability. They considered continuous-time nonlinear systems
with stochastic perturbations and developed theorems on stochastic averaging
that remove the long-standing restrictions of global Lipschitzness of the
vector field, global exponential stability of the average system, equilibrium
preservation under perturbation, and the finiteness of the time interval [J131]. They further relax
the condition of uniform convergence of the stochastic perturbation in [J134, J150] and employ the
resulting theorems in stochastic extremum seeking algorithms for cooperative
and non-cooperative optimization. In [J135] they address
bacterial chemotaxis and prove that each bacterium,
modeled as a nonholonomic unicycle and applying stochastic extremum seeking
through its steering input, achieves exponential convergence to the area of
maximum nutrient concentration.
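In rough terms (the precise hypotheses are in [J131]), the setting is a system
\[
\dot{x} = a\big(x, \eta(t/\varepsilon)\big) ,
\]
perturbed by a fast ergodic stochastic process $\eta$, together with its average system $\dot{\bar{x}} = \bar{a}(\bar{x})$, where $\bar{a}(x)$ is the expectation of $a(x,\eta)$ under the invariant distribution of $\eta$. The theorems in [J131, J134, J150] establish closeness of solutions and weak or practical asymptotic stability of the original system on an infinite time horizon, while requiring only local Lipschitzness of $a$ and asymptotic, rather than exponential, stability of the average system.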
Krstic and Krieger employed the new stochastic averaging theory in
developing algorithms to maximize the time a UAV can remain airborne on a tank
of fuel by tuning the airspeed of the UAV with the help of atmospheric
turbulence acting as a perturbation for a stochastic extremum seeking algorithm
[J153].
7. Extremum Seeking. Working
on control of gas turbine engine instabilities in the late 1990s [J23, J32, J39], Krstic revived the
classical early-1950s-era "extremum seeking" method for real-time
non-model-based optimization [B4]. He
provided the first proof of its stability using nonlinear averaging theory and
singular perturbation theory [J35], developed
compensators for convergence improvement and for maps that evolve with time
according to an exosystem [J37],
extended extremum seeking to discrete time [J49] and to limit cycle
minimization [J34], and
recently introduced a generalized Newton-type extremum seeking algorithm whose convergence is guaranteed and whose convergence rate is user-assignable [J161].
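The basic perturbation-based scheme, shown here in its simplest form (without the washout filter typically included), applies
\[
\theta(t) = \hat{\theta}(t) + a\sin(\omega t), \qquad y(t) = J(\theta(t)), \qquad \dot{\hat{\theta}} = k\,a\sin(\omega t)\,y(t) ,
\]
to an unknown map $J$. Averaging over the fast probing period gives, to leading order in $a$,
\[
\dot{\hat{\theta}}_{\mathrm{av}} \approx \frac{k\,a^{2}}{2}\,J'(\hat{\theta}_{\mathrm{av}}) ,
\]
i.e., a gradient ascent toward the maximizer of $J$ with no model of $J$; the proof in [J35] combines this averaging argument with singular perturbation theory to handle maps generated by dynamic plants.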
As a result of
Krstic’s advancements of extremum seeking methods, they have been adopted at a
number of companies, including United Technologies (gas turbines, jet engine
diffusers, and HVAC systems), Ford (engines), Northrop Grumman (endurance
maximization for UAVs), Cymer (laser pulse shaping and extreme ultraviolet
light sources for photolithography), General Atomics (maglev trains, tokamak
fusion reactors, and novel modular fission reactors), as well as by Los Alamos
and Oak Ridge National Labs (charged particle accelerators), Livermore National
Lab (engines), with Krstic playing a part in a few of these transitions [J56, J94, J112, J153, J158]. Many
university-affiliated practitioners have also adopted extremum seeking, using
it in novel applications in photovoltaics, wind turbines, and aerodynamic flow
control.
In joint work with his
student Paul Frihauf and Tamer Basar on Nash equilibrium seeking, Krstic
extended extremum seeking from single-agent optimization to non-cooperative
games [J157]. They
proved, both for games with finitely many players and for games with uncountably many players, that the players’ actions converge to the underlying Nash equilibrium, despite the players having no modeling information about the payoff functions and no information about the other players’ actions.
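Schematically, each player $i$ in such a game runs its own extremum seeking loop on its own action $u_i$, using only its measured payoff $J_i$,
\[
u_i(t) = \hat{u}_i(t) + a_i \sin(\omega_i t), \qquad \dot{\hat{u}}_i = k_i\,a_i \sin(\omega_i t)\, J_i\big(u_1(t),\ldots,u_N(t)\big) ,
\]
with the probing frequencies $\omega_i$ chosen distinct, so that on average each player climbs the gradient of its own payoff with respect to its own action; this is a sketch of the structure rather than the exact scheme and conditions of [J157].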
8. Nonholonomic Source Seeking.
While nonholonomic vehicles violate the requirement of exponential stability of
the plant in the original extremum seeking approach, Krstic modified the approach to make it
applicable to solving source localization problems for autonomous and
underactuated vehicles in GPS-denied environments. He and his students
developed algorithms that can tune either the longitudinal velocity [J84] or the angular velocity [J104, J129, J145], and that can seek
sources not only in 2D but also in 3D [J115]. He also provided a
plausible mathematical explanation of how fish track prey using only the sense
of smell and simple extremum seeking algorithms [J124]. By considering
models of nonholonomic kinematics of fish in potential or vortex flows,
developed by Marsden and others in the early 2000s, he showed that the periodic
forcing (tail flapping) that fish use for locomotion can be modulated in a
simple manner using the measured scent of the prey to achieve prey tracking
without position measurement. In other words, he took the topic of fish
locomotion from the question of what open-loop signals generate particular gaits to the question of what feedback laws fish use to steer themselves in the dark.
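A schematic of the structure of these source seeking schemes (the exact laws and conditions are in the cited papers): the vehicle is modeled as a unicycle,
\[
\dot{x}_c = v\cos\theta, \qquad \dot{y}_c = v\sin\theta, \qquad \dot{\theta} = \omega ,
\]
with only a scalar measurement $y = J(x_c, y_c)$ of the signal field at the vehicle’s position and no position information. Either the forward speed $v$ or the turning rate $\omega$ is then composed, as in extremum seeking, of a periodic probing term plus the (filtered) measurement $y$ modulated by that same probing signal, so that on average the vehicle drifts toward the source; the fish-tracking result in [J124] identifies the periodic probing with the fish’s own tail-flapping gait.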