P.S. 3Blue1Brown (3B1B) on YouTube gives a completely new and clear understanding of matrix transformations. Highly Recommended.
For better understanding, we consider a vector as a combination of a direction and a length.
Key Points
1. Properties
1.1 Dot Product
The dot product of two vectors produces a scalar, i.e. a dimension of one: $a \cdot b = \sum_i a_i b_i$.
Also, $a \cdot b = b \cdot a$. The sequence is irrelevant (the dot product is commutative).
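A minimal sketch of both points (using NumPy, an assumption since the notes name no library):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The dot product sums element-wise products: 1*4 + 2*5 + 3*6 = 32
ab = np.dot(a, b)
ba = np.dot(b, a)

print(ab)        # 32.0 — a single number, dimension of one
assert ab == ba  # the sequence is irrelevant (commutativity)
```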
1.2 Length
A vector has two important characteristics: one is the direction and the other is the length.
$a = \|a\| \, \hat{a}$, where we define $\|a\|$ as the length, and $\hat{a}$ as a unit vector, whose length is one and which shows the direction.
The length of $a$ is coined as the norm, $\|a\| = \sqrt{a \cdot a} = \sqrt{\sum_i a_i^2}$.
The distance between two vectors is $d(a, b) = \|a - b\|$.
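These definitions can be sketched as follows (NumPy assumed):

```python
import numpy as np

a = np.array([3.0, 4.0])

# Length (norm): sqrt(3^2 + 4^2) = 5, and the unit vector a / ||a||
length = np.linalg.norm(a)
unit = a / length

# Distance between two vectors is the norm of their difference
b = np.array([0.0, 0.0])
distance = np.linalg.norm(a - b)

print(length)  # 5.0
assert np.isclose(np.linalg.norm(unit), 1.0)  # unit vector has length one
```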
1.3 Two Vectors
We can solve for the angle between two vectors: $\cos\theta = \dfrac{a \cdot b}{\|a\| \, \|b\|}$.
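A short sketch of the angle formula (NumPy assumed, with an example pair of vectors chosen for illustration):

```python
import numpy as np

a = np.array([1.0, 0.0])  # the x-axis
b = np.array([1.0, 1.0])  # the diagonal

# cos(theta) = (a . b) / (||a|| * ||b||)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)

print(np.degrees(theta))  # approximately 45 degrees
```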
2. Matrix
2.1 Matrix Multiplication
Let $C = AB$; then an element of matrix $C$ is $c_{ij} = \sum_k a_{ik} b_{kj}$, i.e. the dot product of row $i$ of $A$ with column $j$ of $B$.
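The element-wise formula can be sketched explicitly and checked against the built-in product (NumPy assumed, example matrices hypothetical):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# c_ij is the dot product of row i of A with column j of B
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(2))

assert np.array_equal(C, A @ B)  # matches NumPy's built-in product
print(C)
```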
2.2 Transpose
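The transpose swaps rows and columns, $(A^T)_{ij} = A_{ji}$; a useful identity is that the transpose of a product reverses the order, $(AB)^T = B^T A^T$. A minimal sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Transpose swaps rows and columns: (A^T)_ij = A_ji
assert A.T[0, 1] == A[1, 0]

# The transpose of a product reverses the order: (AB)^T = B^T A^T
assert np.array_equal((A @ B).T, B.T @ A.T)
```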
2.3 Solve Linear Equations & Determinant
$Ax = b$, where the matrix $A$ and the vector $b$ are known; we aim to calculate the unknown vector $x$.
With three unknowns and three equations, we can solve for the vector $x$ if $A$ is invertible. That is equivalent to saying that $A$ has a non-zero determinant.
The determinant of $A$ is, by cofactor expansion along the first row, $\det A = \sum_j (-1)^{1+j} a_{1j} M_{1j}$, where $M_{1j}$ is the determinant of the sub-matrix left after deleting row $1$ and column $j$ (for a $2 \times 2$ matrix this reduces to $ad - bc$).
Key implication of a matrix:
A matrix, viewed as a set of combined equations, can represent a linear transformation, $A$, applied to a vector, $x$. Solving $Ax = b$ is like finding an unknown vector $x$ that, after applying the linear transformation $A$, gives our target vector $b$.
The number of rows of $A$ is the number of equations, $m$, and the dimension of $x$ is the number of unknowns, $n$, we are planning to solve for. If $m = n$, the matrix is square, and we have a unique solution whenever $A$ is invertible. If $m > n$, there are more equations than unknowns (overdetermined), and an exact solution exists only if the extra equations are consistent. The last case is $m < n$ (underdetermined), in which there are free variables and, when consistent, infinitely many solutions.
The determinant is pretty much like collapsing a matrix, $A$, into dimension one, a single number.
The determinant <=> collapsing the dimension
To solve it (by Gauss-Jordan elimination):
Step 1: append the target vector $b$ to the RHS of matrix $A$, forming the augmented matrix $[A \mid b]$.
Step 2: use row operations to make matrix $A$ an identity matrix; the RHS column is then our answer $x$.
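The two steps can be sketched as follows (NumPy assumed; the example system is hypothetical, and there is no pivoting, so this is an illustration rather than a production solver):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Step 1: append the target vector b to the right-hand side of A.
M = np.hstack([A, b.reshape(-1, 1)])

# Step 2: row-reduce until the left block is the identity matrix.
n = len(b)
for i in range(n):
    M[i] = M[i] / M[i, i]                 # scale the pivot row so the pivot is 1
    for r in range(n):
        if r != i:
            M[r] = M[r] - M[r, i] * M[i]  # eliminate column i from other rows

x = M[:, -1]  # the RHS column is now the solution
print(x)
assert np.allclose(A @ x, b)
```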
2.4 Matrix Inverse
Clearly, the inverse of a matrix requires the denominator below to be non-zero, which is equivalent to having a non-zero determinant.
For a matrix $A$, the inverse is defined as $A^{-1} = \frac{1}{\det A} C^T$, where $C$ is the cofactor matrix, $C_{ij} = (-1)^{i+j} M_{ij}$, and $M_{ij}$ is the determinant of the square sub-matrix left after deleting row $i$ and column $j$.
Why do we calculate the inverse?
Because the computer is good at matrix calculation. By using the inverse, $x = A^{-1} b$, we can quickly get the solution.
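A short sketch of solving via the inverse (NumPy assumed, reusing a hypothetical system):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.inv(A) @ b  # x = A^{-1} b
assert np.allclose(A @ x, b)

# In practice np.linalg.solve is usually preferred: it avoids forming
# the inverse explicitly and is faster and numerically more stable.
assert np.allclose(x, np.linalg.solve(A, b))
```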
2.5 Orthogonal Matrix
The orthogonal matrix is defined as $Q^T Q = Q Q^T = I$, which means $Q^{-1} = Q^T$ and the columns of $Q$ are orthonormal unit vectors.
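A rotation matrix is a standard example of an orthogonal matrix; a minimal sketch (NumPy assumed):

```python
import numpy as np

theta = np.pi / 4  # a 45-degree rotation matrix is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q^T Q = I, so the inverse is simply the transpose: Q^{-1} = Q^T
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)

# Orthogonal transformations preserve lengths
v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))
```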
2.6 Eigenvalues and Eigenvectors
Recall that a matrix can be considered a linear transformation. Exactly speaking, it transforms an $N$-dimensional space.
For example,
The identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ could be considered as a plane, with two unit axes. In a two-dimensional space, the first column states the x-axis, and the second states the y-axis. The unit is one.
Similarly,
Another matrix, as shown above, could also represent a space, with the first column and the second column forming the axes of another space. The new x-axis has its own angle and length, and the new y-axis has a different angle and a length of 3.
Now, if we have a vector $x$, and we want to fit it into that space, what we do is compute $Ax$. Graphically, we may see it as a vector undergoing a linear transformation, ending up with a different length and direction.
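This "columns as new axes" view can be sketched as follows (NumPy assumed; the matrix here is a hypothetical stand-in, since the notes' original example was not preserved):

```python
import numpy as np

# Hypothetical matrix: its columns are the new axes of the space.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
x = np.array([1.0, 1.0])

# Ax is a combination of the columns, weighted by the entries of x:
# x_1 * (first column) + x_2 * (second column)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.array_equal(A @ x, combo)

# The transformed vector generally has a different length and direction
print(np.linalg.norm(x), np.linalg.norm(A @ x))
```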
2.6.1 Eigenvalue
$Av = \lambda v$. The $\lambda$ here is called the eigenvalue. ($A$ needs to be square.)
To solve it,
$(A - \lambda I)v = 0$; because $v$ is non-zero, $\det(A - \lambda I)$ must be zero for the equation to hold (otherwise $A - \lambda I$ would be invertible and only $v = 0$ would solve it). Therefore we let $\det(A - \lambda I) = 0$.
The above equation is also called the characteristic polynomial.
For example, given a concrete matrix $A$, solving $\det(A - \lambda I) = 0$ gives the eigenvalues $\lambda_1, \lambda_2, \ldots$
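Since the notes' original example matrix was not preserved, here is a hypothetical one as a sketch (NumPy assumed):

```python
import numpy as np

# Hypothetical example matrix (the notes' original example was lost).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic polynomial: det(A - lambda*I) = lambda^2 - 7*lambda + 10,
# whose roots are the eigenvalues 2 and 5.
eigenvalues = np.linalg.eigvals(A)
print(sorted(eigenvalues))
```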
Some Properties of Eigenvalues
The sum of the eigenvalues, $\sum_i \lambda_i = \operatorname{tr}(A)$, is the sum of the diagonal elements (the trace).
2.6.2 Eigenvector
The eigenvector is just the solution of our equation, $(A - \lambda I)v = 0$, once we plug $\lambda$ inside.
For example, plugging each eigenvalue $\lambda_i$ back in and solving $(A - \lambda_i I)v_i = 0$ gives the corresponding eigenvector $v_i$.
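A sketch with the same hypothetical matrix as above (NumPy assumed), verifying that each eigenvector satisfies $Av = \lambda v$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # hypothetical example matrix

# np.linalg.eig returns the eigenvalues and, as columns, the eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v_i satisfies A v_i = lambda_i v_i
for i in range(2):
    v = eigenvectors[:, i]
    assert np.allclose(A @ v, eigenvalues[i] * v)
```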
2.7 Diagonalisation and Powers of $A$
Let $S$ be a matrix whose columns are the eigenvectors of $A$.
Let $\Lambda$ be a diagonal matrix whose diagonal entries are the eigenvalues.
We define that $AS = S\Lambda$, i.e. $A = S \Lambda S^{-1}$.
The implication is that matrix $A$ could be decomposed by its eigenvalues, $\Lambda$, and its eigenvectors, $S$.
Therefore, $A^k = S \Lambda^k S^{-1}$.
This follows by recursive substitution ($A^2 = S\Lambda S^{-1} \cdot S\Lambda S^{-1} = S\Lambda^2 S^{-1}$, and so on), and is useful under certain circumstances, since powering a diagonal matrix only requires powering its diagonal entries.
For example, (by simply applying the previous matrix) we can compute any power $A^k$ from its eigenvalues and eigenvectors.
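The power formula can be sketched with the same hypothetical matrix as before (NumPy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # hypothetical example matrix

# Diagonalise: A = S Lambda S^{-1}
eigenvalues, S = np.linalg.eig(A)
Lam = np.diag(eigenvalues)
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))

# Powers: A^k = S Lambda^k S^{-1}; only the diagonal gets powered
k = 5
Ak = S @ np.diag(eigenvalues ** k) @ np.linalg.inv(S)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```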