Matrices

Matrices are a way of representing systems of equations, functions and arrays of numbers. An individual entry in a matrix is called an element.

Notation
In most UoM Physics courses, matrices are displayed as follows:

$$ \mathbf{A}_{m,n} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots  & \ddots & \vdots  \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix} $$.

Square brackets are also sometimes used,

$$ \mathbf{A}_{m,n} = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots  & \ddots & \vdots  \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix} $$,

Either notation is acceptable; the choice is generally a matter of personal taste. This article uses parentheses.

Transpose
The transpose is usually denoted by a superscript T on the matrix itself or on the symbol representing it.

$$ \mathbf{A}^{T} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}^{T} $$

Determinant
The determinant is defined for square matrices and can be written in a few ways; the main ones used are


 * $$|\mathbf{A}_{m,n}| = \text{det}(\mathbf{A}_{m,n}) = \begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{vmatrix} $$.

Matrix Addition/Subtraction
Addition and subtraction are done much as would be expected. Only matrices of identical dimensions can be added together. For a matrix which is the sum of two others, each element is simply the sum of the corresponding elements of the two matrices:



$$ \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix} + \begin{pmatrix} b_{1,1} & b_{1,2} & \cdots & b_{1,n} \\ b_{2,1} & b_{2,2} & \cdots & b_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m,1} & b_{m,2} & \cdots & b_{m,n} \end{pmatrix} = \begin{pmatrix} a_{1,1} + b_{1,1} & a_{1,2} + b_{1,2} & \cdots & a_{1,n} + b_{1,n} \\ a_{2,1} + b_{2,1} & a_{2,2} + b_{2,2} & \cdots & a_{2,n} + b_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} + b_{m,1} & a_{m,2} + b_{m,2} & \cdots & a_{m,n} + b_{m,n} \end{pmatrix} $$
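For example, with arbitrarily chosen 2x2 matrices:

$$ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix} $$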

Based on this definition, the order of addition of matrices, much like ordinary addition, makes no difference to the resulting matrix. This isn't the case for all operations however, such as multiplication.


 * $$\mathbf{A+B=B+A}$$

Transposing Matrices
Transposing a matrix swaps its rows and columns. For a square matrix, if you drew a diagonal line from the top left to the bottom right and swapped every pair of values in mirror positions across that line, the result would be the transpose of the matrix. If we take a general matrix A,

$$ \mathbf{A}_{m,n} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots  & \ddots & \vdots  \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix} $$

then the transpose of this matrix will be

$$ \mathbf{A}^{T}_{n,m} = \begin{pmatrix} a_{1,1} & a_{2,1} & \cdots & a_{m,1} \\ a_{1,2} & a_{2,2} & \cdots & a_{m,2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1,n} & a_{2,n} & \cdots & a_{m,n} \end{pmatrix} $$.

This can be applied to non-square matrices as well, for example:

$$ \begin{pmatrix} a \\ b \end{pmatrix}^{T} = \begin{pmatrix} a & b \end{pmatrix} $$.
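As a quick numerical check, numpy (assumed here purely as a convenient tool, not anything the course requires) implements the transpose directly:

```python
import numpy as np

# An arbitrary 2x3 matrix
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# .T swaps rows and columns, giving the 3x2 transpose
print(A.T)
# [[1 4]
#  [2 5]
#  [3 6]]
```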

Matrix Multiplication
Multiplication is a slightly more complicated process than addition for matrices. As with addition, there are certain requirements that need to be met for multiplication to be valid: the inner values of the dimensions must match, i.e. an m × n matrix can only be multiplied (on the right) by an n × k matrix. This forms a matrix whose size is given by the outer values of the dimensions (m × k).

To multiply two matrices of the required dimensions, you select a row from the left-hand matrix and a column from the right-hand matrix. You then take the first values of the selected row and column, multiply them, and add the product of the next two values, continuing until the end of the row and column. This is why the dimensions requirement exists: if the number of columns of the first matrix were larger or smaller than the number of rows of the second, you would run out of values to take the product of.

The element in the ith row and jth column of the matrix AB (i.e. the matrix product of A and B) is:


 * $$ (\mathbf{AB})_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^n A_{i,r}B_{r,j}$$
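As a worked example with arbitrary numbers, a 2x2 product:

$$ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix} $$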

The entire matrix product is determined by using the above to work out each element individually. In general AB ≠ BA: matrix multiplication does not commute. When multiplying an equation by a matrix, care must be taken to multiply both sides of the equation the same way. You can multiply by applying the matrix to the left-hand or the right-hand side of each expression.

So for
 * $$\mathbf{A} = \mathbf{B} + \mathbf{C}$$,

you can multiply it by a matrix D in two ways, which in general give two different results:


 * $$\mathbf{DA} = \mathbf{D}(\mathbf{B} + \mathbf{C}) = \mathbf{DB + DC} $$


 * $$\mathbf{AD} = (\mathbf{B} + \mathbf{C})\mathbf{D} = \mathbf{BD + CD}$$
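A quick numerical check of non-commutativity (values chosen arbitrarily; numpy's @ operator performs matrix multiplication):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# The two orders of multiplication give different matrices
print(A @ B)  # [[2 1]
              #  [4 3]]
print(B @ A)  # [[3 4]
              #  [1 2]]
```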

Determinant
The determinant of any 2x2 matrix is calculated as
 * 2x2 Matrix
 * $$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$.
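For instance, with arbitrary numbers:
 * $$\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2$$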

For a 3x3 matrix
 * 3x3 Matrix
 * $$\begin{pmatrix}a & b & c\\d & e & f\\g & h & i \end{pmatrix}$$

the determinant is calculated by taking any row and multiplying each element by the determinant of the 2x2 matrix created by removing that element's row and column. For example, taking the top row this becomes
 * $$a\begin{vmatrix} e & f\\h & i \end{vmatrix} - b\begin{vmatrix} d & f\\g & i \end{vmatrix} + c\begin{vmatrix} d & e\\g & h \end{vmatrix}$$

Whether each determinant enters with a + or a - sign depends on the position of the element in the matrix. It can be deduced using the following matrix
 * $$\begin{pmatrix}+ & - & +\\- & + & -\\+ & - & + \end{pmatrix}$$
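Putting the expansion and the sign pattern together, a worked example with arbitrary numbers:
 * $$\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{vmatrix} = 1\begin{vmatrix} 5 & 6 \\ 8 & 10 \end{vmatrix} - 2\begin{vmatrix} 4 & 6 \\ 7 & 10 \end{vmatrix} + 3\begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1(2) - 2(-2) + 3(-3) = -3$$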

For larger square matrices, apply the same concept as for the 3x3: multiply each element of a row by the determinant of the matrix that remains when that element's row and column are removed, with the same alternating signs.
 * Larger Matrices

i.e. for a 4x4, it gets ridiculous, and for a 5x5 someone is laughing their arse off at you.
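In practice, for anything much bigger than a 3x3 you would let a computer do the bookkeeping. A minimal numpy sketch (matrix values chosen arbitrarily):

```python
import numpy as np

# An arbitrary 4x4 matrix
A = np.array([[2, 0, 1, 3],
              [1, 4, 0, 2],
              [0, 1, 3, 1],
              [2, 2, 0, 5]])

# np.linalg.det does the full cofactor bookkeeping for you
print(np.linalg.det(A))
```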

Adjugate
The adjugate is defined for any square matrix as the transpose of the cofactor matrix. The cofactor matrix is the matrix in which each element is replaced by plus or minus the determinant of the matrix that would remain with that element's row and column removed; the sign depends on the position of the element in the same way as shown in the determinant section, i.e.,



$$ \text{adj}(\mathbf{A}) = \text{adj}\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = \begin{pmatrix} +\begin{vmatrix} e & f \\ h & i \end{vmatrix} & -\begin{vmatrix} d & f \\ g & i \end{vmatrix} & +\begin{vmatrix} d & e \\ g & h \end{vmatrix} \\ -\begin{vmatrix} b & c \\ h & i \end{vmatrix} & +\begin{vmatrix} a & c \\ g & i \end{vmatrix} & -\begin{vmatrix} a & b \\ g & h \end{vmatrix} \\ +\begin{vmatrix} b & c \\ e & f \end{vmatrix} & -\begin{vmatrix} a & c \\ d & f \end{vmatrix} & +\begin{vmatrix} a & b \\ d & e \end{vmatrix} \end{pmatrix}^{T} $$
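For a 2x2 matrix the same recipe collapses to something worth memorising:

$$ \text{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} $$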

Inverse
If a square matrix has a non-zero determinant, then there is an inverse matrix such that


 * $$\mathbf{AA^{-1} = A^{-1}A = I}$$.

The inverse matrix can easily be defined using the previous sections as


 * $$\mathbf{A}^{-1} = \frac{\text{adj}(\mathbf{A})}{\text{det}(\mathbf{A})}$$,

as tedious as it may be. If the determinant is zero, then the matrix is considered singular or degenerate, and it does not have an inverse.
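As a worked 2x2 example (the same arbitrary matrix as in the determinant section), using the adjugate and determinant from above:

 * $$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \text{det}(\mathbf{A}) = -2, \quad \mathbf{A}^{-1} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$$

The same result can be checked numerically with numpy:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# np.linalg.inv raises LinAlgError if the matrix is singular
print(np.linalg.inv(A))
# [[-2.   1. ]
#  [ 1.5 -0.5]]
```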