Consider a linear equation,

$$ y_i = h_{i1} x_1 + h_{i2} x_2 + \dots + h_{in} x_n + v_i, $$

where $y_i$ and $h_{ij}$ are scalars and $v_i$ is the measurement noise. The noise itself is unknown, but we assume it satisfies certain statistical properties: $v_i$ and $v_j$ are independent for $i \neq j$, with mean zero and variance $\sigma^2$,

$$ E[v_i] = 0, \qquad E[v_i^2] = \sigma^2, \qquad E[v_i v_j] = 0 \ \ (i \neq j). $$
We can rewrite this as

$$ y_i = \mathbf{h}_i^T \mathbf{x} + v_i, $$

or, in matrix form,

$$ \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{bmatrix} = \begin{bmatrix} \mathbf{h}_1^T \\ \mathbf{h}_2^T \\ \vdots \\ \mathbf{h}_k^T \end{bmatrix} \mathbf{x} + \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_k \end{bmatrix}, $$

or, in short form,

$$ \mathbf{y} = H \mathbf{x} + \mathbf{v}. $$

We solve for the least-squares estimator from the optimisation problem (the norm is a squared L2 norm),

$$ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\| \mathbf{y} - H \mathbf{x} \right\|_2^2, $$

whose closed-form solution is $\hat{\mathbf{x}} = (H^T H)^{-1} H^T \mathbf{y}$.
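As a quick numerical sketch of the batch estimator in Python (this script is my own illustration; the names `H`, `y`, and `x_hat` simply mirror the symbols above):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 3, 50                       # 3 unknowns, 50 measurements
x_true = np.array([1.0, -2.0, 0.5])
H = rng.normal(size=(k, n))        # measurement matrix
v = 0.1 * rng.normal(size=k)       # zero-mean noise, sigma = 0.1
y = H @ x_true + v

# Least-squares estimate: x_hat = argmin ||y - Hx||^2
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print(x_hat)                       # close to x_true
```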
Recursive Least Squares Method
The classic least-squares estimator might not work well when the data evolve over time. The Recursive Least Squares Method emerges to deal with this discrete-time setting. Say that, at the discrete-time instant $k$, the set of measurements $\mathbf{y}_k$ follows

$$ \mathbf{y}_k = H_k \mathbf{x} + \mathbf{v}_k, $$

where $H_k$ is the measurement matrix and $\mathbf{v}_k$ is the measurement noise vector. We assume that the covariance of the measurement noise is given by

$$ R_k = E\left[ \mathbf{v}_k \mathbf{v}_k^T \right], $$

and that the noise is uncorrelated across time,

$$ E\left[ \mathbf{v}_k \mathbf{v}_j^T \right] = 0 \quad (k \neq j). $$
The recursive least-squares method has the following form in this section,

$$ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_{k-1} + K_k \left( \mathbf{y}_k - H_k \hat{\mathbf{x}}_{k-1} \right), $$

where $\hat{\mathbf{x}}_k$ and $\hat{\mathbf{x}}_{k-1}$ are the estimates of the vector $\mathbf{x}$ at the discrete-time instants $k$ and $k-1$, and $K_k$ is the gain matrix that we need to determine.

The above equation updates the estimate of $\mathbf{x}$ at the time instant $k$ on the basis of the estimate $\hat{\mathbf{x}}_{k-1}$ at the previous time instant, the measurement $\mathbf{y}_k$ obtained at the time instant $k$, and the gain matrix $K_k$ computed at the time instant $k$.
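Spelled out as code, the update is a one-line correction of the previous estimate by the gain-weighted residual. A minimal sketch, assuming the gain $K_k$ is already known (we derive it below); the function name `rls_update` is my own:

```python
import numpy as np

def rls_update(x_prev: np.ndarray, K: np.ndarray,
               y: np.ndarray, H: np.ndarray) -> np.ndarray:
    """One recursive step: correct the previous estimate by the
    gain-weighted measurement residual y - H @ x_prev."""
    return x_prev + K @ (y - H @ x_prev)
```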
Notation
The estimate at the time instant $k$ is $\hat{\mathbf{x}}_k$, which corresponds to the true vector $\mathbf{x}$.

The estimation error,

$$ \mathbf{e}_k = \mathbf{x} - \hat{\mathbf{x}}_k. $$

The gain $K_k$ is computed by minimising the sum of the variances of the estimation errors,

$$ J_k = E\left[ (x_1 - \hat{x}_{1,k})^2 \right] + \dots + E\left[ (x_n - \hat{x}_{n,k})^2 \right] = E\left[ \mathbf{e}_k^T \mathbf{e}_k \right]. $$
Next, let's show that the cost function can be represented as follows ($\operatorname{tr}$ is the trace of a matrix),

$$ J_k = \operatorname{tr}(P_k), $$

where $P_k$ is the estimation error covariance matrix defined by

$$ P_k = E\left[ \mathbf{e}_k \mathbf{e}_k^T \right]. $$

Why is that? For any vector $\mathbf{e}$, the scalar $\mathbf{e}^T \mathbf{e}$ equals the trace of the outer product $\mathbf{e} \mathbf{e}^T$, and the trace commutes with the expectation. So,

$$ J_k = E\left[ \mathbf{e}_k^T \mathbf{e}_k \right] = E\left[ \operatorname{tr}\left( \mathbf{e}_k \mathbf{e}_k^T \right) \right] = \operatorname{tr}\left( E\left[ \mathbf{e}_k \mathbf{e}_k^T \right] \right) = \operatorname{tr}(P_k). $$
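A quick numerical sanity check of this trace identity, using an arbitrary error vector (the snippet is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.normal(size=4)            # an arbitrary error vector

lhs = e @ e                       # e^T e: the sum of squared errors
rhs = np.trace(np.outer(e, e))    # tr(e e^T)
assert np.isclose(lhs, rhs)
```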
Optimisation
Let's derive the optimisation problem.

Recall $\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_{k-1} + K_k \left( \mathbf{y}_k - H_k \hat{\mathbf{x}}_{k-1} \right)$ and $\mathbf{y}_k = H_k \mathbf{x} + \mathbf{v}_k$.

So, the estimation error $\mathbf{e}_k$ would be

$$ \mathbf{e}_k = \mathbf{x} - \hat{\mathbf{x}}_k = \mathbf{e}_{k-1} - K_k \left( H_k \mathbf{x} + \mathbf{v}_k - H_k \hat{\mathbf{x}}_{k-1} \right) = \left( I - K_k H_k \right) \mathbf{e}_{k-1} - K_k \mathbf{v}_k, $$

and, since $\mathbf{e}_{k-1}$ and $\mathbf{v}_k$ are uncorrelated, the error covariance propagates as

$$ P_k = E\left[ \mathbf{e}_k \mathbf{e}_k^T \right] = \left( I - K_k H_k \right) P_{k-1} \left( I - K_k H_k \right)^T + K_k R_k K_k^T. $$

We take the F.O.C. to solve for $K_k$, by letting $\frac{\partial J_k}{\partial K_k} = \frac{\partial \operatorname{tr}(P_k)}{\partial K_k} = 0$. See the Matrix Cookbook for how to do derivatives of a trace w.r.t. a matrix; using $\frac{\partial}{\partial A} \operatorname{tr}\left( A B A^T \right) = 2 A B$ for symmetric $B$,

$$ \frac{\partial J_k}{\partial K_k} = 2 \left( I - K_k H_k \right) P_{k-1} \left( -H_k^T \right) + 2 K_k R_k = 0. $$

We solve for $K_k$,

$$ K_k = P_{k-1} H_k^T \left( H_k P_{k-1} H_k^T + R_k \right)^{-1} = P_{k-1} H_k^T S_k^{-1}, $$

where we let $S_k = H_k P_{k-1} H_k^T + R_k$, and $S_k$ has the following properties: $S_k = S_k^T$ and $S_k \succ 0$, so it is invertible.

Plug $K_k$ back into the expression for $P_k$, and it simplifies to

$$ P_k = \left( I - K_k H_k \right) P_{k-1}. $$
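We can sanity-check both the optimal gain and the simplified covariance update numerically. A minimal sketch, assuming randomly generated symmetric positive-definite $P_{k-1}$ and $R_k$ (all names below are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.normal(size=(n, n))
P_prev = A @ A.T + n * np.eye(n)     # a symmetric positive-definite P_{k-1}
H = rng.normal(size=(m, n))
B = rng.normal(size=(m, m))
R = B @ B.T + m * np.eye(m)          # a symmetric positive-definite R_k

S = H @ P_prev @ H.T + R
K = P_prev @ H.T @ np.linalg.inv(S)  # the optimal gain derived above

def J(Kmat):
    """tr(P_k) via the full covariance expression."""
    I = np.eye(n)
    P = (I - Kmat @ H) @ P_prev @ (I - Kmat @ H).T + Kmat @ R @ Kmat.T
    return np.trace(P)

# No small perturbation of the optimal gain should decrease the cost.
for _ in range(5):
    dK = 1e-3 * rng.normal(size=K.shape)
    assert J(K) <= J(K + dK)

# With the optimal gain the full expression collapses to (I - K H) P_{k-1}.
P_simple = (np.eye(n) - K @ H) @ P_prev
assert np.isclose(J(K), np.trace(P_simple))
```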
Summary
In the end, the Recursive Least Squares Method can be summarised as the following three equations.

- 1. Update the gain matrix, $K_k = P_{k-1} H_k^T \left( H_k P_{k-1} H_k^T + R_k \right)^{-1}$.
- 2. Update the estimate, $\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_{k-1} + K_k \left( \mathbf{y}_k - H_k \hat{\mathbf{x}}_{k-1} \right)$.
- 3. Propagate the estimation error covariance matrix, $P_k = \left( I - K_k H_k \right) P_{k-1} \left( I - K_k H_k \right)^T + K_k R_k K_k^T$.
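Putting the three equations together, here is a minimal sketch of the full recursion in Python; the function `rls_run` and the example setup are my own illustration, not a library API:

```python
import numpy as np

def rls_run(x0, P0, measurements, Rs):
    """Run the three-step recursion over a stream of (y_k, H_k) pairs.
    x0, P0: initial estimate and error covariance; Rs: noise covariances."""
    x_hat, P = x0, P0
    I = np.eye(len(x0))
    for (y, H), R in zip(measurements, Rs):
        # 1. Update the gain matrix.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # 2. Update the estimate.
        x_hat = x_hat + K @ (y - H @ x_hat)
        # 3. Propagate the estimation error covariance.
        P = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
    return x_hat, P

# Example: recover a constant 2-vector from a stream of scalar measurements.
rng = np.random.default_rng(3)
x_true = np.array([1.0, -0.5])
Hs = [rng.normal(size=(1, 2)) for _ in range(200)]
Rs = [np.array([[0.01]])] * 200                       # sigma = 0.1
meas = [(H @ x_true + rng.normal(scale=0.1, size=1), H) for H in Hs]
x_hat, P = rls_run(np.zeros(2), 10.0 * np.eye(2), meas, Rs)
print(x_hat)   # close to x_true
```

Note that step 3 uses the full covariance expression from the derivation rather than the simplified $P_k = (I - K_k H_k) P_{k-1}$; the two are equal at the optimal gain, and the full form is less sensitive to rounding errors.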