Similar Articles
A total of 20 similar articles were retrieved.
1.
This article presents a new area of application for Automatic Differentiation (AD): Computing parametric sensitivities for optimization problems. For an optimization problem containing parameters which are not among the optimization variables, the term parametric sensitivity refers to the derivative of an optimal solution with respect to the parameters. We treat non‐linear finite‐ and infinite‐dimensional optimization problems, in particular optimal control problems involving ordinary differential equations with control and state constraints, and compute their parametric sensitivities using AD. Particular attention is given to the generation of second‐order derivatives required in the process. Copyright © 2003 John Wiley & Sons, Ltd.
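As a loose illustration of the idea (a sketch, not the article's method or code): for an unconstrained parametric problem min_x f(x, p), the sensitivity of the optimizer x*(p) follows from the implicit function theorem applied to the stationarity condition, and all derivatives involved, including the second-order ones the abstract mentions, can be supplied by AD. The objective and all names below are illustrative assumptions.

# Hedged sketch: parametric sensitivity of an unconstrained optimum via the
# implicit function theorem, with every derivative obtained by automatic
# differentiation.  Toy objective; not the article's problem class.
import jax
import jax.numpy as jnp

def objective(x, p):
    # the parameter p is not an optimization variable
    return jnp.sum((x - p) ** 2) + 0.1 * jnp.sum(x ** 4)

def sensitivity(x_star, p):
    # Stationarity: grad_x f(x*(p), p) = 0.  Differentiating w.r.t. p gives
    #   H_xx dx*/dp + H_xp = 0   =>   dx*/dp = -H_xx^{-1} H_xp,
    # which is why second-order derivatives are needed.
    H_xx = jax.hessian(objective, argnums=0)(x_star, p)
    H_xp = jax.jacobian(jax.grad(objective, argnums=0), argnums=1)(x_star, p)
    return -jnp.linalg.solve(H_xx, H_xp)

p = jnp.array([1.0, -0.5])
x = jnp.zeros(2)
for _ in range(50):                       # plain Newton iteration to find x*(p)
    g = jax.grad(objective, argnums=0)(x, p)
    H = jax.hessian(objective, argnums=0)(x, p)
    x = x - jnp.linalg.solve(H, g)

print(sensitivity(x, p))                  # d x*(p) / d p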

2.
In this paper, a new member of the family of sequential gradient-restoration algorithms for the solution of optimal control problems is presented. This is an algorithm of the conjugate gradient type, which is designed to solve two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves minimizing a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include non-differential constraints to be satisfied everywhere along the interval of integration. The approach taken is a sequence of two-phase cycles, composed of a conjugate gradient phase and a restoration phase. The conjugate gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations; each restorative iteration is designed to force constraint satisfaction to first order, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The resulting algorithm has several properties: (i) it produces a sequence of feasible solutions; (ii) each feasible solution is characterized by a value of the functional I which is smaller than that associated with any previous feasible solution; and (iii) for the special case of a quadratic functional subject to linear constraints, the variations of the state, control, and parameter produced by the sequence of conjugate gradient phases satisfy various orthogonality and conjugacy conditions. The algorithm presented here differs from those of References 1-4, in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with References 1-4, the present algorithm is capable of handling general final conditions; therefore, it is suitable for the solution of optimal control problems with general boundary conditions. The importance of the present algorithm lies in that many optimal control problems either arise naturally in the present format or can be brought to such a format by means of suitable transformations (Reference 5). Therefore, a great variety of optimal control problems can be handled. This includes: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with state-derivative equality constraints, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, (vi) problems with state-derivative inequality constraints, and (vii) Chebyshev minimax problems. Several numerical examples are presented in Part 2 (Reference 6) in order to illustrate the performance of the algorithm associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of the present algorithm.
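A hedged, purely finite-dimensional analogue of the two-phase cycle (not the paper's algorithm, which operates on functionals, controls, and differential constraints): a descent step along the constraint tangent space, followed by a minimum-norm restoration step that re-imposes feasibility to first order. The toy problem and step size below are illustrative.

# Gradient-restoration cycle on  min f(x)  s.t.  c(x) = 0  (illustrative analogue only)
import numpy as np

def f(x):          return x[0]**2 + 2.0 * x[1]**2
def grad_f(x):     return np.array([2.0 * x[0], 4.0 * x[1]])
def c(x):          return np.array([x[0] + x[1] - 1.0])
def jac_c(x):      return np.array([[1.0, 1.0]])

x = np.array([3.0, -1.0])
for _ in range(100):
    # --- gradient phase: project the gradient onto the tangent space of c ---
    J = jac_c(x)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)   # tangent-space projector
    x = x - 0.1 * P @ grad_f(x)
    # --- restoration phase: minimum-norm correction restoring c(x) = 0 ------
    while np.linalg.norm(c(x)) > 1e-10:
        J = jac_c(x)
        x = x - J.T @ np.linalg.solve(J @ J.T, c(x))
print(x)   # converges to the constrained minimizer (2/3, 1/3)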

3.
Most distributed parameter control problems involve manipulation within the spatial domain. Such problems arise in a variety of applications including epidemiology, tissue engineering, and cancer treatment. This paper proposes an approach to solve a state‐constrained spatial field control problem that is motivated by a biomedical application. In particular, the considered manipulation over a spatial field is described by partial differential equations (PDEs) with spatial frequency constraints. The proposed optimization algorithm for tracking a reference spatial field combines three‐dimensional Fourier series, which are truncated to satisfy the spatial frequency constraints, with exploitation of structural characteristics of the PDEs. The computational efficiency and performance of the optimization algorithm are demonstrated in a numerical example. In the example, the spatial tracking error is shown to be almost entirely due to the limitation on the spatial frequency of the manipulated field. The numerical results suggest that the proposed optimal control approach has promise for controlling the release of macromolecules in tissue engineering applications.
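A one-dimensional, hedged illustration of the truncation idea (the paper itself works with three-dimensional Fourier series tied to the PDE structure): a spatial-frequency constraint on a manipulated field can be enforced by zeroing Fourier coefficients above a cutoff wavenumber, and the remaining tracking error is then due solely to that band limit. All data below are made up.

import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
reference = np.where(np.abs(x - 0.5) < 0.2, 1.0, 0.0)   # target spatial field

k_max = 8                                  # allowed spatial-frequency band
coeffs = np.fft.rfft(reference)
coeffs[k_max + 1:] = 0.0                   # truncate: keep wavenumbers 0..k_max
best_feasible = np.fft.irfft(coeffs, n)    # closest field satisfying the constraint

# Residual tracking error caused purely by the frequency limitation
print(np.linalg.norm(reference - best_feasible) / np.sqrt(n))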

4.
In this paper, we consider a class of nonlinear optimization problems that arise from the discretization of optimal control problems with bounds on both state and control variables. We are particularly interested in degenerate cases, i.e. when the linear independence constraint qualification is not satisfied. For these problems, we analyse the basic global convergence properties and the numerical behaviour of a multiplier method that updates multipliers corresponding to inequality constraints instead of dealing with multipliers associated with equality constraints. Numerical results obtained for several instances of a discretized optimal control problem governed by a semi‐linear elliptic equation are included and indicate that this method is robust on degenerate cases, compared with other nonlinear optimization solvers. Copyright © 2009 John Wiley & Sons, Ltd.
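A hedged sketch of a multiplier (augmented Lagrangian) iteration that updates multipliers for inequality constraints g(x) ≤ 0 directly, rather than first converting them to equalities; the problem data, penalty value, and iteration counts below are illustrative only and are not the paper's elliptic control example.

import numpy as np
from scipy.optimize import minimize

def f(x):  return (x[0] - 2.0)**2 + (x[1] - 2.0)**2            # objective
def g(x):  return np.array([x[0] + x[1] - 2.0, -x[0], -x[1]])  # g(x) <= 0

lam, rho = np.zeros(3), 10.0
x = np.zeros(2)
for _ in range(20):
    # inner minimization of the augmented Lagrangian in x
    def aug_lag(x):
        viol = np.maximum(0.0, lam / rho + g(x))
        return f(x) + 0.5 * rho * np.sum(viol**2) - np.sum(lam**2) / (2.0 * rho)
    x = minimize(aug_lag, x, method="BFGS").x
    # first-order multiplier update for the inequality constraints
    lam = np.maximum(0.0, lam + rho * g(x))
print(x, lam)   # expected solution near (1, 1), active constraint x0 + x1 <= 2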

5.
In this short communication we consider an approximation scheme for solving time-delayed optimal control problems with terminal inequality constraints. Time-delayed problems are characterized by variables x(t - τ) with a time-delayed argument. In our scheme we use a Padé approximation to determine a differential relation for y(t), an augmented state that represents x(t - τ). Terminal inequality constraints, if they exist, are converted to equality constraints via Valentine-type unknown parameters. The merit of this approach is that existing, well-developed optimization algorithms may be used to solve the transformed problems. Two linear/non-linear time-delayed optimal control problems are solved to establish its usefulness.
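A hedged sketch of the first-order Padé idea behind such delay removal: with e^{-τs} ≈ (1 - τs/2)/(1 + τs/2), the augmented state y(t) ≈ x(t - τ) obeys the differential relation dy/dt = (2/τ)(x - y) - dx/dt. The scalar signal below is illustrative, not one of the communication's test problems.

import numpy as np

tau, dt, T = 0.5, 1e-3, 5.0
t = np.arange(0.0, T, dt)
x = np.sin(t)                        # a known trajectory standing in for the state
dx = np.cos(t)

y = np.zeros_like(t)                 # Pade surrogate for x(t - tau)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * ((2.0 / tau) * (x[k] - y[k]) - dx[k])

mask = t >= 2 * tau                  # skip the initial transient from y(0) = 0
print(np.max(np.abs(y[mask] - np.sin(t[mask] - tau))))   # small approximation error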

6.
The second part of our study is devoted to an analysis of the exactness of penalty functions for optimal control problems with terminal and pointwise state constraints. We demonstrate that with the use of the exact penalty function method one can reduce fixed-endpoint problems for linear time-varying systems and linear evolution equations with convex constraints on the control inputs to completely equivalent free-endpoint optimal control problems, if the terminal state belongs to the relative interior of the reachable set. In the nonlinear case, we prove that a local reduction of fixed-endpoint and variable-endpoint problems to equivalent free-endpoint ones is possible under the assumption that the linearized system is completely controllable, and point out some general properties of nonlinear systems under which a global reduction to equivalent free-endpoint problems can be achieved. In the case of problems with pointwise state inequality constraints, we prove that such problems for linear time-varying systems and linear evolution equations with convex state constraints can be reduced to equivalent problems without state constraints, provided one uses the L∞ penalty term and Slater's condition holds true, while for nonlinear systems a local reduction is possible if a natural constraint qualification is satisfied. Finally, we show that the exact Lp-penalization of state constraints with finite p is possible for convex problems, if Lagrange multipliers corresponding to the state constraints belong to Lp′, where p′ is the conjugate exponent of p, and for general nonlinear problems, if the cost functional does not depend on the control inputs explicitly.

7.
In this paper, the Continuous Genetic Algorithm (CGA), previously developed by the principal author, is applied for the solution of optimal control problems. The optimal control problem is formulated as an optimization problem by the direct minimization of the performance index subject to constraints, and is then solved using CGA. In general, CGA uses smooth operators and avoids sharp jumps in the parameter values. This novel approach possesses two main advantages when compared to other existing direct and indirect methods that either suffer from low accuracy or lack of robustness. First, our method can be applied to optimal control problems without any limitation on the nature of the problem, the number of control signals, and the number of mesh points. Second, high accuracy can be achieved where the performance index is globally minimized while satisfying the constraints. The applicability and efficiency of the proposed novel algorithm for the solution of different optimal control problems is investigated. Copyright © 2010 John Wiley & Sons, Ltd.

8.
At times, the number of controlled variables equals the number of manipulated variables and the objective of the control system is to minimize the difference in the desired and predicted output trajectories subject only to constraints on the manipulated variables. If a simplified model predictive control algorithm is used for such applications, then the solution to the optimization problem can be obtained by using the slopes between the unconstrained and constrained optimums. The solution procedure is described for a two‐input–two‐output case. A comparison with a linear programming (LP) formulation showed that the computational time for the proposed solution was about 35 times less than the time for the LP solution. Copyright © 1999 John Wiley & Sons, Ltd.

9.
In this article, we propose a higher order neural network, namely the functional link neural network (FLNN), for modelling linear and nonlinear delay fractional optimal control problems (DFOCPs) with mixed control-state constraints. We consider DFOCPs using a new fractional derivative with nonlocal and nonsingular kernel that was recently proposed by Atangana and Baleanu. The derivative possesses important characteristics that are very useful in modelling. In the proposed method, a fractional Chebyshev FLNN is developed. At the first step, the delay problem is transformed to a nondelay problem, using a Padé approximation. The necessary optimality condition is stated in the form of a fractional two-point boundary value problem. By applying fractional integration by parts and by constructing an error function, we then define an unconstrained minimization problem. In the optimization problem, trial solutions for the state, co-state and control functions are utilized, where these trial solutions are constructed by using a single-layer fractional Chebyshev neural network model. We then minimize the error function using an unconstrained optimization scheme based on the gradient descent algorithm for updating the network parameters (weights and bias) associated with all neurons. To show the effectiveness of the proposed neural network, some numerical results are provided.

10.
An algorithm is proposed to solve the problem of bang–bang constrained optimal control of non‐linear systems with free terminal time. The initial and terminal states are prescribed. The problem is reduced to minimizing a Lagrangian subject to equality constraints defined by the terminal state. A solution is obtained by solving a system of non‐linear equations. Since the terminal time is free, time‐optimal control is given a special emphasis. Second‐order sufficient conditions of optimality are also stated. The algorithm is demonstrated by a detailed study of the switching structure for stabilizing the F–8 aircraft in minimum time, and other examples. Copyright © 2005 John Wiley & Sons, Ltd.
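A hedged illustration of the kind of reduction described above, on the classic time-optimal double integrator rather than the paper's F–8 study: with the bang–bang switching structure u = -1 then u = +1 fixed, the switching time t1 and the free terminal time tf are found by solving the terminal-state equations as a system of nonlinear equations.

import numpy as np
from scipy.optimize import fsolve

x0 = np.array([1.0, 0.0])       # initial position and velocity

def terminal_state(z):
    t1, tf = z
    # arc 1: u = -1 on [0, t1]
    x1 = x0[0] + x0[1] * t1 - 0.5 * t1**2
    v1 = x0[1] - t1
    # arc 2: u = +1 on [t1, tf]
    dt = tf - t1
    xf = x1 + v1 * dt + 0.5 * dt**2
    vf = v1 + dt
    return [xf, vf]              # both must vanish at the target (origin)

t1, tf = fsolve(terminal_state, [0.5, 1.5])
print(t1, tf)                    # analytic answer for this initial state: 1.0, 2.0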

11.
Controlling several, possibly independent, moving agents in order to reach global goals is a tedious task that has applications in many engineering fields such as robotics or computer animation. Together, the different agents form a whole called a swarm, which may display interesting collective behaviors. When the agents are driven by their own dynamics, controlling this swarm is known as the particle swarm control problem. In that context, several strategies based on the control of individuals using simple rules exist. This paper presents a new and original method based on a centralized approach. More precisely, we propose a framework to control several particles with constraints either expressed on a per‐particle basis, or expressed as a function of their environment. We refer to these two categories as Lagrangian and Eulerian constraints, respectively. The contributions of the paper are the following: (i) we show how to use optimal control recipes to express an optimization process over a large state space including the dynamic information of the particles; and (ii) the relation between the Lagrangian state space and Eulerian values is conveniently expressed with graph operators that make it possible to conduct all the mathematical operations required by the control process. We show the effectiveness of our approach on classical and more original particle swarm control problems. Copyright © 2013 John Wiley & Sons, Ltd.

12.
In this paper, we present a switched optimization control method for power allocation of hybrid energy storage systems (HESSs) subject to constraints on the state of charge and power split. By the energy conservation principle, a continuous‐time switching model is established to describe changes of the charge quantities of the HESS during its charging‐or‐discharging process. Then an analytic switched state feedback law with some free parameters is constructed by the concept of common control Lyapunov functions, which is used to allocate the power of storage units during the charging‐or‐discharging process. To cope with the constraints and performance functions formulating the power allocation requirements of storage units, the receding horizon control principle is used to compute the parameters of the analytic switched control law by online solving a constrained optimization problem. The results on asymptotic stability and common section region (0.5, ∞) of the switched optimization controller are established in the presence of constraints by using the properties of common control Lyapunov functions. By comparison with linear‐quadratic regulator control of the HESS, an example is used to illustrate the effectiveness and performance of the switched optimization controller presented here.

13.
This article presents an alternating direction method of multipliers (ADMM) algorithm for solving large‐scale model predictive control (MPC) problems that are invariant under the symmetric group. Symmetry is used to find transformations of the inputs, states, and constraints of the MPC problem that decompose the dynamics and cost. We prove an important property of the symmetric decomposition for the symmetric group that allows us to efficiently transform between the original and decomposed symmetric domains. This allows us to solve different subproblems of a baseline ADMM algorithm in different domains where the computations are less expensive. This reduces the computational cost of each iteration from quadratic to linear in the number of repetitions in the system. In addition, we show that the memory complexity of our ADMM algorithm is also linear in the number of repetitions in the system, rather than the typical quadratic complexity. We demonstrate our algorithm for two case studies: battery balancing, and heating, ventilation, and air conditioning. In both case studies, the symmetric algorithm reduced the computation time from minutes to seconds and memory usage from tens of megabytes to tens or hundreds of kilobytes, allowing the previously nonviable MPCs to be implemented in real time on embedded computers with limited computational and memory resources.
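For orientation, a hedged sketch of the kind of baseline ADMM splitting used for a box-constrained MPC quadratic program, min 0.5 zᵀPz + qᵀz subject to l ≤ z ≤ u, without the symmetry decomposition that gives the article its linear-in-repetitions cost; the problem data below are illustrative.

import numpy as np

def admm_box_qp(P, q, l, u, rho=1.0, iters=200):
    n = len(q)
    x = z = w = np.zeros(n)
    lhs = P + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(lhs, rho * (z - w) - q)    # unconstrained x-update
        z = np.clip(x + w, l, u)                       # projection onto the box
        w = w + x - z                                  # scaled dual update
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print(admm_box_qp(P, q, l=np.zeros(2), u=np.ones(2)))  # expected ~ [0, 0]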

14.
15.
This paper is devoted to general optimal control problems (OCPs) associated with a family of nonlinear continuous‐time switched systems in the presence of some specific control constraints. The stepwise (fixed‐level type) control restrictions we consider constitute a common class of admissible controls in many real‐world engineering systems. Moreover, these control restrictions can also be interpreted as a result of a quantization procedure applied to the inputs of a conventional dynamic system. We study control systems with an a priori given time‐driven switching mechanism in the presence of a quadratic cost functional. Our aim is to develop a practically implementable control algorithm that makes it possible to calculate approximating solutions for the class of OCPs under consideration. The paper presents a newly elaborated linear quadratic‐type optimal control scheme and also contains illustrative numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.

16.
This is the second part of a paper that studies trajectory shaping of a generic cruise missile attacking a fixed target from above. The problem is reinterpreted using optimal control theory, resulting in a minimum flight time problem; in the first part the performance index was time‐integrated altitude. The formulation entails non‐linear, two‐dimensional (vertical plane) missile flight dynamics, boundary conditions and path constraints, including pure state constraints. The focus here is on informed use of the tools of computational optimal control, rather than their development. The formulation is solved using a three‐stage approach. In stage 1, the problem is discretized, effectively transforming it into a non‐linear programming problem, and hence suitable for approximate solution with DIRCOL and NUDOCCCS. The results are used to discern the structure of the optimal solution, i.e. the type of active constraints, the times of their activation, and the switching and jump points. This qualitative analysis, employing the results of stage 1 and optimal control theory, constitutes stage 2. Finally, in stage 3, the insights of stage 2 are made precise by rigorous mathematical formulation of the relevant two‐point boundary value problems (TPBVPs), using the appropriate theorems of optimal control theory. The TPBVPs obtained from this indirect approach are then solved using BNDSCO and the results compared with the appropriate solutions of stage 1. The influence of boundary conditions on the structure of the optimal solution and the performance index is investigated. The results are then interpreted from the operational and computational perspectives. Copyright © 2007 John Wiley & Sons, Ltd.

17.
18.
We present a novel distributed primal‐dual active‐set method for model predictive control. The primal‐dual active‐set method is used for solving model predictive control problems for large‐scale systems with quadratic cost, linear dynamics, additive disturbance, and box constraints. The proposed algorithm is compared with dual decomposition and an alternating direction method of multipliers. Theoretical and experimental results show the effectiveness of the proposed approach for large‐scale systems with communication delays. The application to building control systems is thoroughly investigated. Copyright © 2016 John Wiley & Sons, Ltd.
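A hedged, centralized sketch of a primal-dual active-set iteration for a box-constrained QP, min 0.5 xᵀQx + qᵀx subject to x ≤ u, i.e. the kind of building block behind the distributed method described above; the distributed splitting and communication-delay handling are not reproduced, and the data are illustrative.

import numpy as np

def pdas(Q, q, u, c=1.0, iters=50):
    n = len(q)
    x, lam = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        active = lam + c * (x - u) > 0           # estimate of the active set
        x_new, lam_new = np.empty(n), np.zeros(n)
        x_new[active] = u[active]                # active bounds hold with equality
        free = ~active
        # stationarity Qx + q + lam = 0, with lam = 0 on the free set
        rhs = -(q[free] + Q[np.ix_(free, active)] @ u[active])
        x_new[free] = np.linalg.solve(Q[np.ix_(free, free)], rhs)
        lam_new[active] = -(Q @ x_new + q)[active]
        if np.array_equal(active, lam_new + c * (x_new - u) > 0):
            return x_new, lam_new                # active set has converged
        x, lam = x_new, lam_new
    return x, lam

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-4.0, -1.0])
u = np.array([1.5, 1.5])
print(pdas(Q, q, u))    # expected primal (1.5, 0.25), multiplier (0.875, 0)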

19.
This paper considers a dynamic pricing problem over a finite horizon where demand for a product is a time‐varying linear function of price. It is assumed that at the start of the horizon there is a fixed amount of the product available. The decision problem is to determine the optimal price at each time period in order to maximize the total revenue generated from the sale of the product. In order to obtain structural results we formulate the decision problem as an optimal control problem and solve it using Pontryagin's principle. For those problems which are not easily solvable when formulated as an optimal control problem, we present a simple convergent algorithm based on Pontryagin's principle that involves solving a sequence of very small quadratic programming (QP) problems. We also consider the case where the initial inventory of the product is a decision variable. We then analyse the two‐product version of the problem where the linear demand functions are defined in the sense of Bertrand and we again solve the problem using Pontryagin's principle. A special case of the optimal control problem is solved by transforming it into a linear complementarity problem. For the two‐product problem we again present a simple algorithm that involves solving a sequence of small QP problems and also consider the case where the initial inventory levels are decision variables. Copyright © 2006 John Wiley & Sons, Ltd.
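A hedged sketch of the structure Pontryagin's principle gives in the single-product case with linear demand d(t) = a(t) - b(t)p(t) and fixed stock C: the costate of the inventory is constant, the stationarity condition yields p*(t) = a(t)/(2b(t)) + λ/2, and λ ≥ 0 is chosen so cumulative demand does not exceed C. The data below are illustrative, not the paper's examples.

import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n)
a = 10.0 + 4.0 * t             # time-varying demand intercept
b = 2.0 + 0.0 * t              # price sensitivity
C = 3.0                        # initial inventory

def total_demand(lam):
    p = a / (2.0 * b) + lam / 2.0
    d = np.maximum(a - b * p, 0.0)
    return float(np.sum(d) * (t[1] - t[0]))   # simple quadrature of demand

# If the unconstrained revenue-maximizing prices already respect the stock,
# lam = 0; otherwise find lam by bisection on the (monotone) total demand.
if total_demand(0.0) <= C:
    lam = 0.0
else:
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_demand(mid) > C else (lo, mid)
    lam = 0.5 * (lo + hi)

price = a / (2.0 * b) + lam / 2.0
print(lam, price[:3])          # constant costate and the resulting price path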

20.
This paper presents an algorithm for the indirect solution of optimal control problems that contain mixed state and control variable inequality constraints. The necessary conditions for optimality lead to an inequality constrained two‐point BVP with index‐1 differential‐algebraic equations (BVP‐DAEs). These BVP‐DAEs are solved using a multiple shooting method where the DAEs are approximated using a single‐step linearly implicit Runge–Kutta (Rosenbrock–Wanner) method. An interior‐point Newton method is used to solve the residual equations associated with the multiple shooting discretization. The elements of the residual equations, and the Jacobian of the residual equations, are constructed in parallel. The search direction for the interior‐point method is computed by solving a sparse bordered almost block diagonal (BABD) linear system. Here, a parallel‐structured orthogonal factorization algorithm is used to solve the BABD system. Examples are presented to illustrate the efficiency of the parallel algorithm. It is shown that an ANSI C implementation of the parallel algorithm achieves significant speedup as the number of processors is increased. Copyright © 2013 John Wiley & Sons, Ltd.
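A hedged sketch of the single-step linearly implicit idea behind Rosenbrock–Wanner integrators, shown here as its simplest member (linearly implicit Euler) on a stiff test ODE; the paper's DAE treatment, parallel multiple shooting, and BABD factorization are not reproduced, and the test problem is illustrative.

import numpy as np

def f(y):   return np.array([-1000.0 * y[0] + y[1], -y[1]])   # stiff test ODE
def jac(y): return np.array([[-1000.0, 1.0], [0.0, -1.0]])

h, y = 0.01, np.array([1.0, 1.0])
for _ in range(500):
    # one linear solve per step replaces the Newton iteration of implicit Euler
    k = np.linalg.solve(np.eye(2) - h * jac(y), h * f(y))
    y = y + k
print(y)   # decays smoothly although h is far above the explicit stability limit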
