Research being undertaken by our current postgraduates:

- Professor Les Jennings

Sep 2005

Efficient algorithms for solving Hamilton-Jacobi-Bellman equations

This thesis addresses the construction of algorithms for the numerical solution of optimal feedback control problems. Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. More precisely, optimal control problems involve a dynamic system with input quantities, called controls, and a quantity, called the cost, to be minimised.

An optimal control is described by a set of differential equations giving the paths of the control variables that optimise the cost. Finding solutions to problems of this nature is considerably more difficult and computationally expensive than solving the related optimal open-loop control problem.

Moreover, stability is a major concern in feedback control: a feedback law may overcorrect errors, causing oscillations of constant or growing amplitude. An optimal feedback control depends on both the state and time variables, so its determination by numerical schemes has one serious drawback, the so-called curse of dimensionality. Efficient numerical methods are therefore needed for the accurate determination of optimal feedback controls.

There are essentially two equivalent approaches in widespread use today for solving optimal feedback control problems. In the first, often referred to as the direct approach, the problem is approximated by optimising an objective functional with respect to the control function, subject to the system dynamics and constraints on the state and control.

In the second approach, the optimal feedback control problem is transformed into a first-order terminal value problem by formulating it as a nonlinear hyperbolic partial differential equation, known as the Hamilton-Jacobi-Bellman (HJB) equation. In this thesis we consider numerical algorithms for solving the HJB equation based on Radial Basis Functions (RBFs).
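In standard notation (the abstract itself gives no formulas), for dynamics $\dot{x} = f(x,u)$, running cost $L(x,u)$ and terminal cost $\Phi(x)$, the HJB equation for the value function $V(x,t)$ is the terminal value problem

```latex
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\Bigl\{ L(x,u) + \nabla_x V(x,t)^{\top} f(x,u) \Bigr\},
\qquad V(x,T) = \Phi(x),
```

and the optimal feedback control is recovered pointwise as the minimising $u$ at each state and time $(x,t)$.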

We present a new adaptive least-squares collocation RBF method for solving an HJB equation. The method combines a least-squares fit over a set of RBFs in the space variables with the implicit backward Euler finite difference method in time, yielding an unconditionally stable solution scheme. We also present some of the more theoretical aspects of solving the HJB equation by the adaptive least-squares collocation RBF method, in particular the relevant existence, uniqueness and stability results. We demonstrate the accuracy and effectiveness of this method by performing numerical experiments on test problems with up to three state and two control variables.
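The space-time splitting described above can be sketched on a stand-in linear problem (the 1D heat equation rather than an HJB equation, with basis counts, shape parameter and step size chosen here for illustration, none of them from the thesis): Gaussian RBF collocation in space, backward Euler in time, with more collocation points than centres so each time step is a least-squares solve.

```python
import numpy as np

# Gaussian RBF and its second spatial derivative in 1D.
def rbf(r, eps=8.0):
    return np.exp(-(eps * r) ** 2)

def rbf_xx(x, c, eps=8.0):
    r = x - c
    return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

centers = np.linspace(0.0, 1.0, 15)   # RBF centres
colloc  = np.linspace(0.0, 1.0, 30)   # more collocation points -> least squares
dt, nt  = 1e-3, 100

# Basis and Laplacian matrices evaluated at the collocation points.
Phi  = rbf(colloc[:, None] - centers[None, :])
PhiL = rbf_xx(colloc[:, None], centers[None, :])

# Backward Euler for u_t = u_xx: (Phi - dt*PhiL) a^{n+1} = u^n at colloc points,
# with the first and last rows overwritten to impose u(0) = u(1) = 0.
A = Phi - dt * PhiL
A[0, :], A[-1, :] = Phi[0, :], Phi[-1, :]

u = np.sin(np.pi * colloc)            # initial condition
for _ in range(nt):
    rhs = u.copy()
    rhs[0] = rhs[-1] = 0.0
    a, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # least-squares step
    u = Phi @ a

# Compare with the exact solution exp(-pi^2 t) sin(pi x).
t = dt * nt
err = np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * colloc)))
```

The implicit step is what gives the unconditional stability noted in the abstract: no restriction couples dt to the spacing of the centres, at the price of a linear least-squares solve per step.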

Furthermore, we construct another numerical method for the HJB equation, based on domain decomposition together with a matrix inversion technique. In this method, we propose a new formula for inverting the nonsymmetric, fully dense coefficient matrix faster than classical matrix inversion techniques. We also investigate the accuracy of the numerical solution, the condition number of the system matrix, and the computational time as the number of subdomains increases. We perform numerical experiments to illustrate the usefulness and accuracy of the method.
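The abstract does not state the new inversion formula, but the block structure that a two-subdomain decomposition induces can be illustrated with the standard Schur-complement identity for inverting a dense nonsymmetric matrix by blocks (a textbook device, not the thesis's method):

```python
import numpy as np

def block_inverse(M, k):
    """Invert a dense nonsymmetric matrix via a 2x2 block partition at
    index k, using the Schur complement of the leading block. This is
    the classical identity; it only illustrates how a domain split
    turns one large inverse into two smaller ones plus couplings."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                       # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    return np.block([
        [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
        [-Sinv @ C @ Ainv,                   Sinv],
    ])

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8)) + 8 * np.eye(8)   # well-conditioned test matrix
err = np.max(np.abs(block_inverse(M, 4) @ M - np.eye(8)))
```

Only the two diagonal blocks are inverted directly, so the cost per subdomain shrinks as the decomposition is refined, which is consistent with the timing study described in the abstract.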

Most real-world problems are too complicated to permit analytic solutions. The principal contribution of this thesis is therefore the development of efficient numerical algorithms for computing approximate solutions to models formulated as constrained optimal control problems.

Assistance in statistics is available for postgraduate students by research at the UWA Centre for Applied Statistics.