Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper presents an inverse optimal control approach for stabilization and trajectory tracking of discrete-time nonlinear systems that avoids solving the associated Hamilton–Jacobi–Bellman equation while minimizing a meaningful cost functional. The proposed controller is based on a discrete-time control Lyapunov function and passivity theory; its applicability is illustrated via simulations for an unstable nonlinear system and a planar robot. Copyright © 2013 John Wiley & Sons, Ltd.
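As a minimal illustration of the idea behind such CLF-based inverse optimal controllers (a sketch, not the paper's construction; all system data below are illustrative), one can stabilize a scalar discrete-time system by choosing the input that minimizes a one-step quadratic cost built from the CLF:

```python
# Hedged sketch: inverse-optimal stabilization of a scalar system
# x_{k+1} = f(x_k) + g(x_k) * u_k via a quadratic control Lyapunov
# function V(x) = 0.5 * p * x**2. Numbers are illustrative, not from
# the paper.

def f(x):
    return 1.5 * x          # open-loop unstable drift

def g(x):
    return 1.0              # constant input gain

p, r = 1.0, 1.0             # CLF weight and control penalty

def u_clf(x):
    # Minimize 0.5*p*(f(x) + g(x)*u)**2 + 0.5*r*u**2 over u:
    # a one-step "inverse optimal" feedback that renders V a
    # Lyapunov function for the closed loop.
    return -p * g(x) * f(x) / (r + p * g(x) ** 2)

x = 1.0
for _ in range(50):
    x = f(x) + g(x) * u_clf(x)
# here the closed loop is x_{k+1} = 0.75 * x_k, so x contracts to 0
```

The controller is "inverse" optimal in the sense that the cost it minimizes is induced by the chosen CLF rather than specified in advance.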

2.
The convergence analysis of relaxation-and-rounding methods for optimal control problems constrained by partial differential equations, with both discrete and continuous control decisions, is extended to the class of first-order semilinear hyperbolic systems in one space dimension. The results are obtained by novel a priori estimates for the size of the relaxation gap based on the characteristic flow, fixed-point arguments, and particular regularity theory for such mixed-integer control problems. Motivated by traffic flow problems, a relaxation model for optimal flux switching control in conservation laws is considered as an application.
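A standard rounding strategy in this relax-and-round setting is sum-up rounding, which keeps the running integral of the rounded binary control within half a grid cell of the relaxed one. The sketch below shows the scalar version as a generic illustration of the strategy; it is not the paper's exact scheme:

```python
import numpy as np

def sum_up_rounding(alpha, dt):
    """Round a relaxed control alpha in [0, 1] (piecewise constant on a
    grid with step dt) to a binary control beta so that the running
    integrals stay within dt/2 of each other (scalar sum-up rounding)."""
    beta = np.zeros_like(alpha)
    int_a = int_b = 0.0
    for k, a in enumerate(alpha):
        int_a += a * dt
        beta[k] = 1.0 if int_a - int_b >= 0.5 * dt else 0.0
        int_b += beta[k] * dt
    return beta

dt = 0.1
alpha = 0.5 + 0.4 * np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
beta = sum_up_rounding(alpha, dt)
deviation = np.max(np.abs(np.cumsum(alpha - beta) * dt))
# the accumulated control difference never exceeds dt/2
```

The dt/2 bound on the accumulated difference is exactly the kind of "relaxation gap" estimate that convergence analyses of this type build on: as dt shrinks, the rounded trajectory approaches the relaxed one.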

3.
In this paper, we consider the optimal control problem of discrete‐time switched systems. This problem is formulated as an optimization problem involving both continuous and discrete‐valued variables. It can be transformed into a discrete optimization problem. A metric in the space of switching sequences is introduced and an appropriate discrete filled function is constructed. Then, an algorithm that combines the discrete filled function method and a descent method is developed for solving this problem. For illustration, some numerical examples are solved. Copyright © 2009 John Wiley & Sons, Ltd.

4.
This paper considers the problem of sliding mode control for a class of uncertain discrete-time systems. First, an optimal control law for the nominal system is derived that satisfies a linear quadratic performance index. Then, an optimal integral sliding surface is designed to ensure the robustness of the sliding dynamics. By combining this with a discrete reaching law, the existence condition of the sliding mode is proved, and the bandwidth of the quasi-sliding mode is given. It is shown that the present method uses a lower control gain to attain stronger robustness and eliminate chattering. Finally, illustrative simulation results are provided. Copyright © 2013 John Wiley & Sons, Ltd.
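The quasi-sliding band mentioned above can be seen in a Gao-type discrete reaching law, a standard construction in this literature (the paper's exact law and gains may differ; values below are illustrative):

```python
import numpy as np

# Hedged sketch of a Gao-type discrete reaching law:
#   s_{k+1} = (1 - q*T) * s_k - eps * T * sign(s_k),   0 < q*T < 1.
# The sliding variable s is driven toward s = 0 and then chatters
# inside a quasi-sliding band of width O(eps * T).

q, eps, T = 5.0, 0.5, 0.01   # reaching-law gains and sampling period

s = 2.0                      # initial distance from the sliding surface
for _ in range(2000):
    s = (1.0 - q * T) * s - eps * T * np.sign(s)
# |s| is now confined to the quasi-sliding band (|s| <= eps * T here)
```

Because the state can only cross the surface at sampling instants, a discrete sliding mode never settles exactly on s = 0; bounding the band width, as the paper does, quantifies the residual chattering.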

5.
In this paper, the finite-horizon near-optimal adaptive regulation of linear discrete-time systems with unknown system dynamics is presented in a forward-in-time manner by using adaptive dynamic programming and Q-learning. An adaptive estimator (AE) is introduced to relax the requirement of known system dynamics, and it is tuned by using Q-learning. The time-varying solution to the Bellman equation in adaptive dynamic programming is handled by utilizing a time-dependent basis function, while the terminal constraint is incorporated as part of the update law of the AE. The Kalman gain is obtained from the AE parameters, while the control input is calculated from the AE and the system state vector. Next, to relax the need for state availability, an adaptive observer is proposed so that the linear quadratic regulator design uses the reconstructed states and outputs. Although the linear discrete-time system is time-invariant, the closed-loop dynamics become non-autonomous and involved; stability is nevertheless verified by using standard Lyapunov analysis and geometric sequence theory. Effectiveness of the proposed approach is verified by simulation results. The proposed linear quadratic regulator design for the uncertain linear system requires an initial admissible control input and yields a forward-in-time and online solution without needing value and/or policy iterations. Copyright © 2014 John Wiley & Sons, Ltd.
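The Q-learning connection rests on the fact that the LQR Q-function is quadratic, so its blocks encode the optimal gain; an estimator can learn those blocks from data without knowing the model. The sketch below only verifies the underlying identity with a known, illustrative model rather than learning it:

```python
import numpy as np

# Hedged sketch: for x_{k+1} = A x_k + B u_k with stage cost
# x'Qc x + u'R u, the optimal Q-function is quadratic,
#   Q(x, u) = [x; u]' H [x; u],
# with H_uu = R + B'PB, H_ux = B'PA (P = Riccati solution), and the
# greedy policy u = -inv(H_uu) H_ux x equals the LQR feedback.
# Illustrative double-integrator values, not the paper's example.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc, R = np.eye(2), np.array([[1.0]])

# Riccati fixed-point iteration for the infinite-horizon P.
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ (A - B @ K)

Huu = R + B.T @ P @ B
Hux = B.T @ P @ A
K_q = np.linalg.solve(Huu, Hux)   # greedy gain from Q-function blocks
```

Q-learning replaces the model-based construction of H with a least-squares fit of the quadratic Q-function along measured trajectories, which is what allows the paper's design to proceed with unknown A and B.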

6.
The problem of designing a controller that results in a closed-loop system response with optimal time-domain characteristics is considered. In the approach presented in this paper, the controller order is fixed (higher than the pole-placement order) and we seek a controller that places the closed-loop poles at certain desired, pre-specified locations while the output tracks the reference input in an optimal way. Optimality is measured by requiring certain norms of the error sequence—between the reference and output signals—to be minimal. Several norms are used. First, the l2-norm is used and the optimal solution is computed in one step of calculations. Second, the l∞-norm (i.e. minimal overshoot) is considered and the solution is obtained by solving a constrained affine minimax optimization problem. Third, the l1-norm (which corresponds to the integral absolute error (IAE) criterion) is used and linear programming techniques are utilized to solve the problem. The important case of finite settling time (i.e. deadbeat response) is studied as a special case. Examples that illustrate the different design algorithms and demonstrate their feasibility are presented. Copyright © 2007 John Wiley & Sons, Ltd.

7.
This article addresses the problem of distributed controller design for linear discrete-time systems. The problem is posed in the classical framework of state feedback gain optimization over an infinite-horizon quadratic cost, with an additional sparsity constraint on the gain matrix to model the distributed nature of the controller. An equivalent formulation is derived that optimizes the steady-state solution of a matrix difference equation, and two algorithms for distributed gain computation are proposed based on it. The first method performs a step-by-step optimization of this matrix difference equation and allows for fast computation of stabilizing state feedback gains. The second algorithm optimizes the same matrix equation over a finite time window to approximate asymptotic behavior and thus minimize the infinite-horizon quadratic cost. To assess the performance of the proposed solutions, simulation results are presented for the problem of distributed control of a quadruple-tank process, as well as a version of that problem scaled up to 40 interconnected tanks.

8.
The article discusses a variable time transformation method for the approximate solution of mixed-integer nonlinear optimal control problems (MIOCPs). Such optimal control problems involve both real-valued and discrete-valued controls. The method transforms an MIOCP, via discretization, into an optimal control problem with only real-valued controls; the latter can be solved efficiently by direct shooting methods. Numerical results are obtained for a problem from automobile test-driving that involves a discrete-valued control for the gear shift of the car. The results are compared to those obtained by Branch&Bound and show a drastic reduction in computation time. This very good performance makes the suggested method applicable even for many discretization points. Copyright © 2006 John Wiley & Sons, Ltd.

9.
We present a numerical method and results for a recently published benchmark problem (Optim. Contr. Appl. Met. 2005; 26:1–18; Optim. Contr. Appl. Met. 2006; 27(3):169–182) in mixed-integer optimal control. The problem has its origin in automobile test-driving and involves discrete controls for the choice of gears. Our approach is based on a convexification and relaxation of the integer control constraints. Using the direct multiple shooting method, we solve the reformulated benchmark problem for two cases: (a) as proposed in (Optim. Contr. Appl. Met. 2005; 26:1–18), for a fixed, equidistant control discretization grid, and (b) as formulated in (Optim. Contr. Appl. Met. 2006; 27(3):169–182), taking into account free switching times. For the first case, we reproduce the results obtained in (Optim. Contr. Appl. Met. 2005; 26:1–18) with a speed-up of several orders of magnitude compared with the Branch&Bound approach applied there (taking into account precision and the different computing environments). For the second case, we optimize the switching times and propose an initialization based on the solution of (a). Compared with (Optim. Contr. Appl. Met. 2006; 27(3):169–182), applying our algorithm we were able to reduce the overall computing time considerably. We give theoretical evidence for why our convex reformulation is highly beneficial for time-optimal mixed-integer control problems, which the chosen benchmark problem essentially is (neglecting a small regularization term). Copyright © 2009 John Wiley & Sons, Ltd.

10.
This paper describes a trajectory optimization algorithm that generates a quadratic control update satisfying the constraints and necessary conditions to second order. The algorithm is designed to solve multistage optimization problems. It is tested against a commercially available Sequential Quadratic Programming algorithm on problems with linear dynamics and linear and nonlinear constraints. This algorithm is a departure from previous methods because it explicitly satisfies the constraints to second order. Copyright © 2009 John Wiley & Sons, Ltd.

11.
The finite-time-horizon singular linear quadratic (LQ) optimal control problem is investigated for singular stochastic discrete-time systems. The problem is transformed into a positive LQ problem for standard stochastic systems via two equivalent transformations. It is proved that the singular LQ optimal control problem is solvable under two reasonable rank conditions. Via the dynamic programming principle, the desired optimal controller is presented in a matrix iterative form. A simulation is provided to show the effectiveness of the proposed approach. Copyright © 2012 John Wiley & Sons, Ltd.
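For a standard (non-singular, deterministic) system, the matrix iterative form reduces to the familiar backward Riccati difference equation. The sketch below, with illustrative matrices and none of the paper's singular/stochastic structure, runs the backward sweep and checks the dynamic-programming identity that the realized cost equals x0' P0 x0:

```python
import numpy as np

# Hedged sketch: finite-horizon LQ by backward dynamic programming for
# x_{k+1} = A x_k + B u_k, cost sum(x'Qx + u'Ru) + x_N' QN x_N.
# Illustrative matrices, not taken from the paper.

A = np.array([[0.9, 0.2],
              [0.0, 1.1]])
B = np.array([[0.0],
              [1.0]])
Q, R, QN = np.eye(2), np.array([[0.5]]), np.eye(2)
N = 30

# Backward sweep: Riccati difference equation and time-varying gains.
P = QN
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()           # gains[k] is the optimal gain at time k

# Forward simulation with u_k = -K_k x_k; the realized cost equals the
# dynamic-programming value x0' P_0 x0.
x0 = np.array([[1.0], [1.0]])
x, cost = x0, 0.0
for K in gains:
    u = -K @ x
    cost += float(x.T @ Q @ x + u.T @ R @ u)
    x = A @ x + B @ u
cost += float(x.T @ QN @ x)
```

The paper's contribution is precisely that, after two equivalent transformations, the singular stochastic problem admits a recursion of this matrix-iterative shape.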

12.
In this study, we use a generalized policy iteration approximate dynamic programming (ADP) algorithm to design an optimal controller for a class of discrete-time systems with actuator saturation. An integral function is proposed to handle the saturation nonlinearity in the actuators, and the generalized policy iteration ADP algorithm is then developed to solve the optimal control problem. Compared with other algorithms, the developed ADP algorithm includes two iteration procedures. In the present control scheme, two neural networks are introduced to approximate the control law and the performance index function. Numerical simulations illustrate the convergence and feasibility of the developed method.

13.
This study proposes a method, with experimental validation, to analyze the dynamic response of the simulator's cabin and platform with respect to the type of control used in a hexapod driving simulator. Two different forms of motion platform tracking control are compared: a classical motion cueing algorithm and a discrete-time linear quadratic regulator (LQR) motion cueing algorithm. For each case, vehicle dynamics and motion platform level data are recorded from the driving simulation software. In addition, the natural frequencies of the roll accelerations are obtained in real time using the FFT, and the data are denoised using a 1-D wavelet transform. The results show that with the discrete-time LQR algorithm, the roll acceleration amplitudes at the natural frequencies and the total roll jerk decrease at the motion platform level. Moreover, the natural frequencies increase appreciably with the discrete LQR motion cueing (1.5–2.2 Hz) compared with the classical algorithm (0.4–1.5 Hz) at the motion platform, which is an indicator of motion sickness avoidance: the literature shows that lateral motion (roll, yaw, etc.) in the frequency range 0.1–0.5 Hz induces motion sickness. Furthermore, the discrete-time LQR motion cueing algorithm halves the sensation error (between motion platform and vehicle (cabin) levels) in terms of total roll jerk. In conclusion, discrete-time LQR motion cueing reduces simulator sickness more than the classical motion cueing algorithm, according to sensory cue conflict theory. Copyright © 2013 John Wiley & Sons, Ltd.

14.
While system dynamics are usually derived in continuous time, the corresponding model-based optimal control problems can only be solved numerically, i.e., as discrete-time approximations. Thus, the performance of control methods depends on the choice of numerical integration scheme. In this paper, we present a first-order discretization of linear quadratic optimal control problems for mechanical systems that is structure preserving and hence preferable to standard methods. Our approach is based on symplectic integration schemes and thereby inherits structure from the original continuous-time problem. Starting from a symplectic discretization of the system dynamics, modified discrete-time Riccati equations are derived that preserve the Hamiltonian structure of optimal control problems in addition to the mechanical structure of the control system. The method is extended to optimal tracking problems for nonlinear mechanical systems and evaluated in several numerical examples. Compared to standard discretization, it improves the approximation quality by orders of magnitude. This enables low-bandwidth control and sensing in real-time autonomous control applications.
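The benefit of symplectic discretization is easy to see on a toy mechanical system: for the harmonic oscillator, explicit Euler inflates the energy at every step, while symplectic (semi-implicit) Euler keeps it bounded for all time. This sketch illustrates that structure-preservation argument only; it is not the paper's LQ construction:

```python
# Harmonic oscillator q' = p, p' = -q, energy E = 0.5*(q**2 + p**2).
h, steps = 0.1, 1000

qe, pe = 1.0, 0.0   # explicit Euler state
qs, ps = 1.0, 0.0   # symplectic (semi-implicit) Euler state
for _ in range(steps):
    qe, pe = qe + h * pe, pe - h * qe   # both updates use the old state
    ps = ps - h * qs                    # kick with the old position ...
    qs = qs + h * ps                    # ... drift with the NEW momentum

E_explicit = 0.5 * (qe**2 + pe**2)      # grows without bound
E_symplectic = 0.5 * (qs**2 + ps**2)    # oscillates near the true value 0.5
```

The symplectic scheme exactly conserves a perturbed "shadow" energy, which is why its error stays bounded over arbitrarily long horizons; the paper transfers the same idea to the Riccati equations of the optimal control problem.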

15.
In this study, we present an inverse optimal control approach based on the extended Kalman filter (EKF) algorithm to solve the optimal control problem for discrete-time affine nonlinear systems. The main aim of inverse optimal control is to circumvent the tedious task of solving the Hamilton-Jacobi-Bellman equation that results from the classical solution of a nonlinear optimal control problem. Here, the inverse optimal controller is based on defining an appropriate quadratic control Lyapunov function (CLF) whose parameters are estimated by the EKF equations. Whereas a classical EKF application takes the root mean square error of the system states as the observed error, here the EKF eliminates the same root mean square error defined over the parameters by generating a CLF matrix with appropriate elements. The performance and applicability of the proposed scheme are illustrated through both simulations performed on a nonlinear system model and a real-time laboratory experiment. The simulation study demonstrates the effectiveness of the proposed method in comparison with two other inverse control approaches. Finally, the proposed controller is implemented on a professional control board to stabilize a DC-DC boost converter and minimize a meaningful cost function. The experimental results show the applicability and effectiveness of the proposed EKF-based inverse optimal control even in real-time control systems with a very short time constant.
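The core trick, treating tuning parameters as EKF states with trivial dynamics, can be shown on a scalar example. The sketch estimates a single constant parameter from noisy measurements (in this scalar case the EKF reduces to recursive least squares); the paper applies the same machinery to the entries of the CLF matrix, which this toy does not reproduce:

```python
import numpy as np

# Hedged sketch: EKF estimation of a constant parameter theta from
# measurements y_k = theta * x_k + noise, modeling theta as a state
# with dynamics theta_{k+1} = theta_k. All values illustrative.

rng = np.random.default_rng(0)
theta_true = 2.5
theta_hat, P, R = 0.0, 10.0, 0.01   # initial estimate, covariance, noise var

for _ in range(200):
    x = rng.uniform(0.5, 1.5)                # known regressor
    y = theta_true * x + rng.normal(0.0, np.sqrt(R))
    H = x                                     # measurement Jacobian dy/dtheta
    K = P * H / (H * P * H + R)               # Kalman gain
    theta_hat += K * (y - theta_hat * H)      # innovation update
    P = (1.0 - K * H) * P                     # covariance update
```

In the paper, the "measurement" fed to the EKF is the error defined over the CLF parameters, so the filter shapes the CLF matrix online instead of estimating a physical quantity.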

16.
In this paper, we propose a novel approach to the linear quadratic (LQ) optimal control of unknown discrete‐time linear systems. We first describe an iterative procedure for minimizing a partially unknown static function. The procedure is based on simultaneous updates in the estimation of unknown parameters and in the optimization of controllable inputs. We then use the procedure for control optimization in unknown discrete‐time dynamic systems—we consider applications to the finite‐horizon and the infinite‐horizon LQ control of linear systems in detail. To illustrate the approach, an example of the pitch attitude control of an aircraft is considered. We also compare our proposed approach to several other approaches to finite/infinite‐horizon LQ control problems with unknown dynamics from the literature, including extremum seeking and adaptive dynamic programming/reinforcement learning. Our proposed approach is competitive with these approaches in speed of convergence and in implementation and computational complexity.

17.
The article discusses the application of the branch&bound method to a mixed-integer nonlinear optimization problem (MINLP) arising from the discretization of an optimal control problem with a partly discrete control set. The optimal control problem has its origin in automobile test-driving, where the car model involves a discrete-valued control function for the gear shift. Since the number of variables in the MINLP grows with the number of grid points used for discretization of the optimal control problem, the example from automobile test-driving may serve as a benchmark problem of scalable complexity. Reference solutions are computed numerically for two different problem sizes. A simple heuristic approach suitable for optimal control problems is suggested that reduces the computational effort considerably, though it can no longer guarantee optimality. Copyright © 2005 John Wiley & Sons, Ltd.

18.
In this two-part study, we develop a general approach to the design and analysis of exact penalty functions for various optimal control problems, including problems with terminal and state constraints, problems involving differential inclusions, and optimal control problems for linear evolution equations. This approach allows one to simplify an optimal control problem by removing some (or all) of its constraints with the use of an exact penalty function, thus reducing optimal control problems to equivalent variational problems and allowing numerical methods for, e.g., problems without state constraints to be applied to problems with such constraints. In the first part of our study, we strengthen some existing results on exact penalty functions for optimisation problems in infinite-dimensional spaces and utilise them to study exact penalty functions for free-endpoint optimal control problems, which reduce these problems to equivalent variational ones. We also prove several auxiliary results on integral functionals and Nemytskii operators that are helpful for verifying the assumptions under which the proposed penalty functions are exact.
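The defining property of an exact penalty, that for a large enough finite weight the unconstrained minimizer coincides with the constrained one, can be checked on a finite-dimensional toy problem (the paper works in infinite dimensions; this is only the guiding principle):

```python
import numpy as np

# Hedged sketch: l1 exact penalty for
#   min  x1**2 + x2**2   s.t.  x1 + x2 = 1,
# whose solution is (0.5, 0.5) with Lagrange multiplier 1. For any
# penalty weight c > 1, the unconstrained minimizer of
#   f(x) + c * |x1 + x2 - 1|
# coincides with the constrained minimizer. We verify by brute-force
# grid search (illustrative, not a practical solver).

c = 2.0
grid = np.arange(-1.0, 2.0 + 1e-9, 0.005)
X1, X2 = np.meshgrid(grid, grid)
penalized = X1**2 + X2**2 + c * np.abs(X1 + X2 - 1.0)
i, j = np.unravel_index(np.argmin(penalized), penalized.shape)
x_star = (X1[i, j], X2[i, j])   # grid minimizer, close to (0.5, 0.5)
```

The nonsmooth absolute value is essential: a smooth quadratic penalty c*(x1+x2-1)**2 is only exact in the limit c → ∞, whereas the l1 penalty is exact for every finite c above the multiplier threshold, which is the property the paper generalizes to state-constrained control problems.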

19.
Discrete mechanics and optimal control (DMOC) is a methodology that exploits variational structure to solve certain optimal control problems for mechanical systems. This paper proposes to combine a multiphase strategy with the original DMOC method, resulting in a new multiphase DMOC (MDMOC) method that makes optimal trajectory generation more efficient. The advantages of the proposed method are demonstrated mathematically, and a quadrotor unmanned aerial vehicle simulation example is presented to show its superiority over the DMOC method. Furthermore, to show its potential application, an arbitrarily chosen controller is used to track the desired trajectory generated by MDMOC. The new MDMOC methodology can also be applied to other mechanical systems such as mobile robots and underwater gliders.

20.
Optimal control problems with delays in state and control variables are studied. Constraints are imposed as mixed control–state inequality constraints. Necessary optimality conditions in the form of Pontryagin's minimum principle are established. The proof proceeds by augmenting the delayed control problem to a nondelayed problem with mixed terminal boundary conditions to which Pontryagin's minimum principle is applicable. Discretization methods are discussed by which the delayed optimal control problem is transformed into a large‐scale nonlinear programming problem. It is shown that the Lagrange multipliers associated with the programming problem provide a consistent discretization of the advanced adjoint equation for the delayed control problem. An analytical example and numerical examples from chemical engineering and economics illustrate the results. Copyright © 2008 John Wiley & Sons, Ltd.
