A Lie group is a group which is also a smooth differentiable manifold. Every Lie group has an associated tangent space called a Lie algebra. As a vector space, the Lie algebra is often easier to study than the associated Lie group and can reveal most of what we need to know about the group. This is one of the general motivations for Lie theory. A table of some common Lie groups and their associated Lie algebras can be found here. All matrix groups are Lie groups. An example of a matrix Lie group is the $n$-dimensional rotation group $SO(n)$. This group is linked to a set of $n \times n$ antisymmetric matrices which form the associated Lie algebra, usually denoted by $\mathfrak{so}(n)$. Like all Lie algebras corresponding to Lie groups, the Lie algebra $\mathfrak{so}(n)$ is characterised by a Lie bracket operation which here takes the form of commutation relations between the above-mentioned antisymmetric matrices, satisfying the formula

$$[L_{ab}, L_{cd}] = \delta_{bc}L_{ad} + \delta_{ad}L_{bc} - \delta_{bd}L_{ac} - \delta_{ac}L_{bd}$$

The link between $\mathfrak{so}(n)$ and $SO(n)$ is provided by the matrix exponential map in the sense that each point in the Lie algebra is mapped to a corresponding point in the Lie group by matrix exponentiation. Furthermore, the exponential map defines parametric paths passing through the identity element in the Lie group. The tangent vectors obtained by differentiating these parametric paths and evaluating the derivatives at the identity are the elements of the Lie algebra, showing that the Lie algebra is the tangent space of the associated Lie group manifold.

In the rest of this note I will unpack some aspects of the above brief summary without going too much into highly technical details. The Lie theory of rotations is based on a simple symmetry/invariance consideration, namely that rotations leave the scalar products of vectors invariant. In particular, they leave the lengths of vectors invariant. The Lie theory approach is much more easily generalisable to higher dimensions than the elementary trigonometric approach using the familiar rotation matrices in two and three dimensions. Instead of obtaining the familiar trigonometric rotation matrices by analysing the trigonometric effects of rotations, we will see below that they arise in Lie theory from the exponential map linking the Lie algebra $\mathfrak{so}(n)$ to the rotation group $SO(n)$, in a kind of matrix analogue of Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$.

Begin by considering rotations in $n$-dimensional Euclidean space as being implemented by multiplying vectors by an $n \times n$ rotation matrix $R$ which is a continuous function of some parameter vector $\theta$ such that $R(0) = I$. In Lie theory we regard these rotations as being infinitesimally small, in the sense that they move us away from the identity by an infinitesimally small amount. If $dx$ is the column vector of coordinate differentials, then the rotation embodied in $R$ is implemented as

$$dx' = R\,dx$$

Since we require lengths to remain unchanged after rotation, we have

$$(dx')^T dx' = (R\,dx)^T R\,dx = dx^T R^T R\,dx = dx^T dx$$

which implies

$$R^T R = I$$

In other words, the matrix $R$ must be orthogonal. Furthermore, since the determinant of a product is the product of the determinants, and the determinant of a transpose is the same as the original determinant, we can write

$$\det(R^T R) = \det(R^T)\det(R) = (\det R)^2 = \det(I) = 1$$

Therefore we must have

$$\det R = \pm 1$$

But we can exclude the case $\det R = -1$ because the set of orthogonal matrices with negative determinants produces *reflections*. For example, the orthogonal matrix

$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

has determinant $-1$ and results in a reflection in the $x$-axis when applied to a vector. Here we are only interested in rotations, which we can now define as having orthogonal transformation matrices $R$ such that $\det R = 1$. Matrices which have unit determinant are called *special*, so focusing purely on rotations means that we are dealing exclusively with the set of special orthogonal matrices of dimension $n$, denoted by $SO(n)$.

It is straightforward to verify that $SO(n)$ constitutes a group with the operation of matrix multiplication. It is closed under multiplication, has an identity element $I$, each element has an inverse (since the determinant is nonzero; in fact $R^{-1} = R^T$), and matrix multiplication is associative. Note that closure means a rotation matrix times a rotation matrix must give another rotation matrix, so this is another property $R$ needs to satisfy.

The fact that $SO(n)$ is also a differentiable manifold, and therefore a Lie group, follows in a technical way (which I will not delve into here) from the fact that $SO(n)$ is a closed subgroup of the group of all invertible real $n \times n$ matrices, usually denoted by $GL(n, \mathbb{R})$, and this itself is a manifold of dimension $n^2$. The latter fact is demonstrated easily by noting that for $n \times n$ matrices $X$, the determinant function $X \mapsto \det X$ is continuous, and $GL(n, \mathbb{R})$ is the inverse image under this function of the open set $\mathbb{R} \setminus \{0\}$. Thus, $GL(n, \mathbb{R})$ is itself an open subset of the $n^2$-dimensional linear space of all the real $n \times n$ matrices, and thus a manifold of dimension $n^2$. The matrix Lie group $SO(n)$ is a manifold of dimension $n(n-1)/2$, not $n^2$. One way to appreciate this is to observe that the condition $R^T R = I$ for every $R \in SO(n)$ means that you only need to specify $n(n-1)/2$ off-diagonal elements to specify each $R$. In other words, there are $n^2$ elements in each $R$ but the condition $R^T R = I$ means that there are $n(n+1)/2$ equations linking them (one for each element of a symmetric $n \times n$ matrix), so the number of `free' elements in each $R$ is only $n^2 - n(n+1)/2 = n(n-1)/2$. We will see shortly that $n(n-1)/2$ is also the dimension of $\mathfrak{so}(n)$, which must be the case given that $\mathfrak{so}(n)$ is to be the tangent space of the manifold $SO(n)$ (the dimension of a manifold is the dimension of its tangent space).
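
As a quick arithmetic check of this counting argument, here is a short Python sketch (the variable names are my own):

```python
# Counting the free parameters in an element of SO(n).
n = 4

# An n x n matrix has n^2 entries; the constraint R^T R = I is a
# symmetric matrix equation, so it imposes n(n+1)/2 independent conditions.
total_entries = n * n
constraints = n * (n + 1) // 2
free_parameters = total_entries - constraints

print(free_parameters)                       # 6 for n = 4
print(free_parameters == n * (n - 1) // 2)   # True
```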

If we now Taylor-expand $R(\theta)$ to first order about $\theta = 0$ we get

$$R = I + A$$

where $A$ is an infinitesimal matrix of order $d\theta$ and we will (for now) ignore terms like $A^2$ which are of second and higher order in $d\theta$. Now substituting $R = I + A$ into $R^T R = I$ we get

$$(I + A)^T(I + A) = I + A + A^T + O(A^2) = I \quad\Longrightarrow\quad A^T = -A$$

Thus, the matrix $A$ must be antisymmetric. In fact, $A$ will be a linear combination of some elementary antisymmetric basis matrices which play a crucial role in the theory, so we will explore this more. Since a sum of antisymmetric matrices is antisymmetric, and a scalar multiple of an antisymmetric matrix is antisymmetric, the set of all antisymmetric $n \times n$ matrices is a vector space. This vector space has a basis provided by some elementary antisymmetric matrices containing only two non-zero elements each, the two non-zero elements in each matrix appearing in corresponding positions either side of the main diagonal and having opposite signs (this is what makes the matrices antisymmetric). Since there are $n(n-1)/2$ distinct pairs of possible off-diagonal positions for these two non-zero elements, the basis has dimension $n(n-1)/2$ and, as will be seen shortly, this vector space in fact turns out to be the Lie algebra $\mathfrak{so}(n)$. The basis matrices will be written as $L_{ab}$ where $a$ and $b$ identify the pair of corresponding off-diagonal positions in which the two non-zero elements will appear. We will let $a$ and $b$ run through the numbers $1, 2, \ldots, n$ with $a < b$ in order, and with each pair $a$ and $b$ fixed, the element in the $i$-th row and $j$-th column of each matrix $L_{ab}$ is then given by the formula

$$(L_{ab})_{ij} = \delta_{ai}\delta_{bj} - \delta_{aj}\delta_{bi}$$

To clarify this, we will consider the antisymmetric basis matrices for $n = 2$, $n = 3$ and $n = 4$. In the case $n = 2$ we have $n(n-1)/2 = 1$ so there is a single antisymmetric basis matrix. Setting $a = 1$, $b = 2$, we get $(L_{12})_{12} = 1$ and $(L_{12})_{21} = -1$ and so the antisymmetric matrix is

$$L_{12} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$

In the case $n = 3$ we have $n(n-1)/2 = 3$ antisymmetric basis matrices corresponding to the three possible pairs of off-diagonal positions for the two non-zero elements in each matrix. Following the same approach as in the previous case, these can be written as

$$L_{12} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad L_{13} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad L_{23} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}$$

Finally, in the case $n = 4$ we have $n(n-1)/2 = 6$ antisymmetric basis matrices corresponding to the six possible pairs of off-diagonal positions for the two non-zero elements in each matrix. These can be written as

$$L_{12} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad L_{13} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad L_{14} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix}$$

$$L_{23} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad L_{24} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}, \quad L_{34} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

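The formula for $(L_{ab})_{ij}$ translates directly into code. The following Python sketch (the function name `so_basis` is my own) generates the antisymmetric basis matrices for any $n$ and reproduces the matrices displayed above:

```python
import numpy as np

def so_basis(n):
    """Return the basis matrices L_ab of so(n), keyed by (a, b) with a < b.
    By (L_ab)_ij = delta_ai delta_bj - delta_aj delta_bi, each matrix has
    a +1 in row a, column b, a -1 in row b, column a, and zeros elsewhere."""
    basis = {}
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            L = np.zeros((n, n))
            L[a - 1, b - 1] = 1.0   # entry above the diagonal
            L[b - 1, a - 1] = -1.0  # mirror entry, opposite sign
            basis[(a, b)] = L
    return basis

# For n = 3 this reproduces the three basis matrices of so(3)
for (a, b), L in so_basis(3).items():
    print((a, b))
    print(L)
```
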
So in the case of a general infinitesimal rotation in $n$-dimensional space of the form $R = I + A$, the antisymmetric matrix $A$ will be a linear combination of the antisymmetric basis matrices of the form

$$A = \sum_{a < b} \theta_{ab} L_{ab}$$

But note that using the standard matrix exponential series we have

$$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots = I + A + O(A^2)$$

This suggests

$$R = e^A$$

and in fact this relationship between rotations and the exponentials of antisymmetric matrices turns out to be exact, not just an approximation. To see this, observe that $A$ and $A^T$ commute since $AA^T = -A^2 = A^TA$. This means that

$$(e^A)^T e^A = e^{A^T} e^A = e^{A^T + A} = e^0 = I$$

(note that in matrix exponentiation $e^X e^Y = e^{X+Y}$ only if $X$ and $Y$ commute – see below). Since the diagonal elements of an antisymmetric matrix are always zero, we also have

$$\det(e^A) = e^{\mathrm{tr}(A)} = e^0 = 1$$

Thus, $e^A$ is both special and orthogonal, so it must be an element of $SO(n)$. Conversely, suppose $e^A \in SO(n)$. Then we must have

$$e^{A^T} = (e^A)^T = (e^A)^{-1} = e^{-A} \quad\Longrightarrow\quad A^T = -A$$

so $A$ is antisymmetric.
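
This two-way link is easy to check numerically. The sketch below uses a truncated Taylor series for the matrix exponential (a simple stand-in for a library routine such as `scipy.linalg.expm`, adequate for small matrices), and builds a random antisymmetric matrix as $M - M^T$:

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential e^A via the truncated series I + A + A^2/2! + ..."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M - M.T                                 # antisymmetric: A^T = -A

R = expm_series(A)
print(np.allclose(R.T @ R, np.eye(4)))      # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))    # True: R is special
```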

So we have a tight link between $SO(n)$ and $\mathfrak{so}(n)$ via matrix exponentiation. We can do a couple of things with this. First, for any real parameter $t$ and antisymmetric basis matrix $L_{ab}$, we have $e^{tL_{ab}} \in SO(n)$ and this defines a parametric path through $SO(n)$ which passes through its identity element at $t = 0$. Differentiating with respect to $t$ and evaluating the derivative at $t = 0$ we find that

$$\frac{d}{dt}\, e^{tL_{ab}} \bigg|_{t=0} = L_{ab}\, e^{tL_{ab}} \bigg|_{t=0} = L_{ab}$$

which indicates that the antisymmetric basis matrices are tangent vectors of the manifold $SO(n)$ at the identity, and that the span of the antisymmetric basis matrices forms the tangent space of $SO(n)$. Another thing we can do with the matrix exponential map is quickly recover the elementary rotation matrix in the case $n = 2$. Noting that $L_{12}^2 = -I$ and separating the exponential series into even and odd terms in the usual way we find that

$$e^{\theta L_{12}} = I\cos\theta + L_{12}\sin\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$

where the single real number $\theta$ here is the angle of rotation. This is the matrix analogue of Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ that was mentioned earlier.
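
This matrix Euler formula can be confirmed numerically; again `expm_series` is a truncated Taylor series standing in for a library matrix exponential, and the angle $0.7$ is an arbitrary choice of mine:

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential e^A via the truncated series I + A + A^2/2! + ..."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

L12 = np.array([[0.0, 1.0],
                [-1.0, 0.0]])
theta = 0.7

R = expm_series(theta * L12)
expected = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
print(np.allclose(R, expected))  # True
```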

To further elucidate how the antisymmetric basis matrices form a Lie algebra $\mathfrak{so}(n)$ which is closely tied to the matrix Lie group $SO(n)$, we will show that the commutation relation between them is closed (i.e., that the commutator of two antisymmetric basis matrices is itself antisymmetric), and that these commutators play a crucial role in ensuring the closure of the group (i.e., in ensuring that a rotation multiplied by a rotation produces another rotation). First, suppose that $A$ and $B$ are two distinct antisymmetric matrices. Then since the transpose of a product is the product of the transposes in reverse order we can write

$$[A, B]^T = (AB - BA)^T = B^TA^T - A^TB^T = BA - AB = -[A, B]$$

This shows that the commutator of two antisymmetric matrices is itself antisymmetric, so the commutator can be written as a linear combination of the antisymmetric basis matrices $L_{ab}$. Furthermore, since we can write $A = \sum_{a<b} \theta_{ab} L_{ab}$ and $B = \sum_{c<d} \phi_{cd} L_{cd}$, we have

$$[A, B] = \sum_{a<b} \sum_{c<d} \theta_{ab}\, \phi_{cd}\, [L_{ab}, L_{cd}]$$

so every commutator between antisymmetric matrices can be written in terms of the commutators of the antisymmetric basis matrices. Next, suppose we exponentiate the antisymmetric matrices $A$ and $B$ to obtain the rotations $e^A$ and $e^B$. Since $SO(n)$ is closed, it must be the case that

$$e^A e^B = e^C$$

where $e^C$ is another rotation and therefore $C$ is an antisymmetric matrix. To see the role of the commutator between antisymmetric matrices in ensuring this, we will expand both sides. For the left-hand side we get

$$e^A e^B = \left(I + A + \frac{A^2}{2!} + \cdots\right)\left(I + B + \frac{B^2}{2!} + \cdots\right) = I + A + B + \frac{A^2}{2} + AB + \frac{B^2}{2} + \cdots$$

For the right-hand side we get

$$e^C = I + C + \frac{C^2}{2!} + \cdots$$

Equating the two expansions we get

$$C = A + B + \frac{1}{2}[A, B] + \cdots$$

where the remaining terms on the right-hand side are of third and higher order in $A$ and $B$. A result known as the Baker-Campbell-Hausdorff formula shows that the remaining terms on the right-hand side of this equation are in fact all nested commutators of $A$ and $B$. The series for $C$ with a few additional terms expressed in this way is

$$C = A + B + \frac{1}{2}[A, B] + \frac{1}{12}[A, [A, B]] - \frac{1}{12}[B, [A, B]] + \cdots$$

This shows that $e^A e^B \neq e^{A+B}$ unless $A$ and $B$ commute, since only in this case do all the commutator terms in the series for $C$ vanish. Since the commutator of two antisymmetric matrices is itself antisymmetric, this result also shows that $C$ is an antisymmetric matrix, and therefore $e^C = e^A e^B$ must be a rotation.
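
The sketch below illustrates both points for a pair of non-commuting elements of $\mathfrak{so}(3)$. The scale factor $t = 0.1$ and the truncations are my own choices, made so that the third-order Baker-Campbell-Hausdorff series for $C$ is already very accurate:

```python
import numpy as np

def expm_series(X, terms=40):
    """Matrix exponential e^X via the truncated series I + X + X^2/2! + ..."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

def comm(X, Y):
    """Commutator [X, Y]."""
    return X @ Y - Y @ X

t = 0.1
A = t * np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])  # t L_12
B = t * np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])  # t L_23

lhs = expm_series(A) @ expm_series(B)
print(np.allclose(lhs, expm_series(A + B)))  # False: A and B do not commute

# BCH series for C, truncated after the third-order terms
C = A + B + comm(A, B) / 2 + comm(A, comm(A, B)) / 12 - comm(B, comm(A, B)) / 12
print(np.allclose(C, -C.T))                         # True: C is antisymmetric
print(np.allclose(expm_series(C), lhs, atol=1e-5))  # True to this tolerance
```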

Since every commutator between antisymmetric matrices can be written in terms of the commutators of the antisymmetric basis matrices, a general formula for the latter would seem to be useful. In fact, the formula given earlier, namely

$$[L_{ab}, L_{cd}] = \delta_{bc}L_{ad} + \delta_{ad}L_{bc} - \delta_{bd}L_{ac} - \delta_{ac}L_{bd}$$

completely characterises the Lie algebra $\mathfrak{so}(n)$. To conclude this note we will therefore derive this formula *ab initio*, starting from the formula

$$(L_{ab})_{ij} = \delta_{ai}\delta_{bj} - \delta_{aj}\delta_{bi}$$

for the element in the $i$-th row and $j$-th column of each matrix $L_{ab}$. We have

$$[L_{ab}, L_{cd}] = L_{ab}L_{cd} - L_{cd}L_{ab}$$

Focus on $L_{ab}L_{cd}$ first. Using the Einstein summation convention, the product of the $i$-th row of $L_{ab}$ with the $j$-th column of $L_{cd}$ is

$$(L_{ab})_{ik}(L_{cd})_{kj} = (\delta_{ai}\delta_{bk} - \delta_{ak}\delta_{bi})(\delta_{ck}\delta_{dj} - \delta_{cj}\delta_{dk})$$

$$= \delta_{ai}\delta_{bk}\delta_{ck}\delta_{dj} - \delta_{ai}\delta_{bk}\delta_{cj}\delta_{dk} - \delta_{ak}\delta_{bi}\delta_{ck}\delta_{dj} + \delta_{ak}\delta_{bi}\delta_{cj}\delta_{dk}$$

Now focus on $L_{cd}L_{ab}$. The product of the $i$-th row of $L_{cd}$ with the $j$-th column of $L_{ab}$ is

$$(L_{cd})_{ik}(L_{ab})_{kj} = (\delta_{ci}\delta_{dk} - \delta_{ck}\delta_{di})(\delta_{ak}\delta_{bj} - \delta_{aj}\delta_{bk})$$

$$= \delta_{ci}\delta_{dk}\delta_{ak}\delta_{bj} - \delta_{ci}\delta_{dk}\delta_{aj}\delta_{bk} - \delta_{ck}\delta_{di}\delta_{ak}\delta_{bj} + \delta_{ck}\delta_{di}\delta_{aj}\delta_{bk}$$

So the element in the $i$-th row and $j$-th column of $[L_{ab}, L_{cd}]$ is

$$\delta_{ai}\delta_{bk}\delta_{ck}\delta_{dj} - \delta_{ai}\delta_{bk}\delta_{cj}\delta_{dk} - \delta_{ak}\delta_{bi}\delta_{ck}\delta_{dj} + \delta_{ak}\delta_{bi}\delta_{cj}\delta_{dk}$$

$$-\, \delta_{ci}\delta_{dk}\delta_{ak}\delta_{bj} + \delta_{ci}\delta_{dk}\delta_{aj}\delta_{bk} + \delta_{ck}\delta_{di}\delta_{ak}\delta_{bj} - \delta_{ck}\delta_{di}\delta_{aj}\delta_{bk}$$

But notice that, summing over the repeated index $k$,

$$\delta_{bk}\delta_{ck} = \delta_{bc}$$

and similarly for the other Einstein summation terms. Thus, the above sum reduces to

$$\delta_{ai}\delta_{bc}\delta_{dj} - \delta_{ai}\delta_{bd}\delta_{cj} - \delta_{ac}\delta_{bi}\delta_{dj} + \delta_{ad}\delta_{bi}\delta_{cj} - \delta_{ad}\delta_{ci}\delta_{bj} + \delta_{bd}\delta_{ci}\delta_{aj} + \delta_{ac}\delta_{di}\delta_{bj} - \delta_{bc}\delta_{di}\delta_{aj}$$

But

$$\delta_{ai}\delta_{dj} - \delta_{di}\delta_{aj} = (L_{ad})_{ij}$$

Thus the element in the $i$-th row and $j$-th column of $[L_{ab}, L_{cd}]$ is

$$\delta_{bc}(L_{ad})_{ij} + \delta_{ad}(L_{bc})_{ij} - \delta_{bd}(L_{ac})_{ij} - \delta_{ac}(L_{bd})_{ij}$$

Extending this to the matrix as a whole gives the required formula:

$$[L_{ab}, L_{cd}] = \delta_{bc}L_{ad} + \delta_{ad}L_{bc} - \delta_{bd}L_{ac} - \delta_{ac}L_{bd}$$
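
As a final check, the derived commutation relation can be verified numerically over all index combinations; for illustration I take $n = 4$, and the helper names in this sketch are my own:

```python
import numpy as np

def delta(i, j):
    """Kronecker delta."""
    return 1.0 if i == j else 0.0

def L(a, b, n):
    """Basis matrix with (L_ab)_ij = delta_ai delta_bj - delta_aj delta_bi."""
    return np.array([[delta(a, i) * delta(b, j) - delta(a, j) * delta(b, i)
                      for j in range(1, n + 1)]
                     for i in range(1, n + 1)])

n = 4
holds = True
for a in range(1, n + 1):
    for b in range(1, n + 1):
        for c in range(1, n + 1):
            for d in range(1, n + 1):
                lhs = L(a, b, n) @ L(c, d, n) - L(c, d, n) @ L(a, b, n)
                rhs = (delta(b, c) * L(a, d, n) + delta(a, d) * L(b, c, n)
                       - delta(b, d) * L(a, c, n) - delta(a, c) * L(b, d, n))
                holds = holds and np.allclose(lhs, rhs)
print(holds)  # True
```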