Instructor: Professor 朱铭祥
Course objectives:
1) Introduce concepts of optimization for application to the control and estimation of dynamical systems.
2) Provide the fundamentals of optimal control and the theoretical basis for solving problems such as the LQR, the Kalman filter, and LQG.

Material: lecture notes.
Text: R. F. Stengel, Optimal Control and Estimation, Dover Publications, Inc., New York, 1994.

Outline:
1. Framework for the optimal design of dynamical systems; objectives of optimal control & state estimation
2. Static optimization and dynamic systems
2-1 Stationary points of a scalar function, constrained minima & Lagrange multipliers
2-2 Dynamic system models & solutions, Nonlinear system equations, Local linearization, Numerical integration of nonlinear equations
2-3 Properties of dynamic systems, Static & quasi-static equilibrium, Stability, Modes of motion for linear, time-invariant systems, Reachability, controllability & stabilizability, Constructability, observability & detectability, Discrete-time systems
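As a concrete illustration of topic 2-1, a hedged sketch (the problem below is an invented example, not from the course notes): minimizing f(x1, x2) = x1^2 + x2^2 subject to x1 + x2 = 1 by setting the gradient of the Lagrangian to zero, which reduces to a small linear system.

```python
import numpy as np

# Hypothetical example for constrained minimization with a Lagrange multiplier:
# minimize f = x1^2 + x2^2  subject to  x1 + x2 = 1.
# Lagrangian: L(x, lam) = x1^2 + x2^2 + lam*(x1 + x2 - 1).
# Stationarity (dL/dx1 = dL/dx2 = dL/dlam = 0) gives the linear system:
#   2*x1       + lam = 0
#        2*x2  + lam = 0
#   x1 + x2          = 1
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)  # constrained minimum at (0.5, 0.5) with lam = -1
```

For quadratic cost and linear constraints the stationarity conditions are exactly linear, which is why this solves in one step; nonlinear problems require the iterative numerical methods covered later in topic 3-6.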
3. Optimal trajectories & neighboring-optimal solutions
3-1 Statement of the problem
3-2 Cost functions
3-3 Parametric optimization
3-4 Conditions for optimality: necessary & sufficient conditions, the minimum principle, the Hamilton-Jacobi-Bellman equation
3-5 Constraints & singular control
3-6 Numerical optimization
3-7 Neighboring-optimal solutions
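A minimal sketch connecting sections 2 and 3 (the system matrices below are an illustrative discretized double integrator, not taken from the text): the steady-state discrete-time LQR gain obtained by iterating the Riccati recursion to convergence.

```python
import numpy as np

# Hypothetical sketch of the discrete-time LQR: iterate the Riccati recursion
#   K = (R + B'PB)^{-1} B'PA,   P <- Q + A'P(A - BK)
# until P converges; the optimal control is then u = -K x.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # assumed dynamics: double integrator, dt = 0.1
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                # state-weighting matrix (assumed)
R = np.array([[1.0]])        # control-weighting matrix (assumed)

P = Q.copy()
for _ in range(500):         # backward recursion, run to near convergence
    BtP = B.T @ P
    K = np.linalg.solve(R + BtP @ B, BtP @ A)
    P = Q + A.T @ P @ (A - B @ K)
print(K)  # steady-state feedback gain
```

The closed-loop matrix A - BK should have all eigenvalues inside the unit circle, which is the discrete-time counterpart of the stability guarantee discussed with stabilizability in topic 2-3.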
4. Optimal state estimation
4-1 Review of random variables, sequences, & processes
4-2 Least-squares estimates of constant vectors
4-3 Propagation of the state estimate & its uncertainty
4-4 Discrete-time optimal filters & predictors
4-5 Continuous-time optimal filters & predictors
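To make topic 4-4 concrete, a hedged sketch of the discrete-time Kalman filter (the model, noise levels, and simulation loop are illustrative assumptions, not the course's code): predict/update cycles for x_{k+1} = A x_k + w_k with measurement z_k = H x_k + v_k.

```python
import numpy as np

# Hypothetical discrete-time Kalman filter sketch: position-only measurements
# of an assumed double-integrator state (position, velocity).
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed dynamics, dt = 0.1
H = np.array([[1.0, 0.0]])               # only position is measured
Qw = 1e-3 * np.eye(2)                    # process-noise covariance (assumed)
Rv = np.array([[0.01]])                  # measurement-noise covariance (assumed)

x_hat = np.zeros(2)                      # state estimate
P = np.eye(2)                            # estimate-error covariance
x_true = np.array([0.0, 1.0])

for _ in range(50):
    # Simulate the true system and a noisy measurement
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Qw)
    z = H @ x_true + rng.normal(0.0, np.sqrt(Rv[0, 0]), 1)
    # Predict: propagate the estimate and its uncertainty (topic 4-3)
    x_hat = A @ x_hat
    P = A @ P @ A.T + Qw
    # Update: Kalman gain weights the measurement innovation
    S = H @ P @ H.T + Rv
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
print(x_hat, x_true)
```

Note the duality with the LQR: the gain computation again involves a Riccati-type recursion for the covariance P, which is why the LQG controller of objective 2) combines the two so cleanly.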
Grading system: midterm (30%), final (30%), homework (20%), term project (20%)