



Euler's theorem (rotation)

From Citizendium, the Citizens' Compendium

This editable Main Article is under development and not meant to be cited; by editing it you can help to improve it towards a future approved, citable version. These unapproved articles are subject to a disclaimer.

Euler's theorem on rotation is the statement that in space a rigid motion which has a fixed point always has an axis (of rotation), i.e., a straight line of fixed points. It is named after Leonhard Euler who proved this in 1775 by an elementary geometric argument.

In terms of modern mathematics, rotations are distance and orientation preserving transformations in 3-dimensional Euclidean (affine) space which have a fixed point. Such transformations are associated with linear operators on the difference space \mathbb{R}^3 that preserve the inner product (are isometric) and preserve orientation (have unit determinant). In an orthonormal basis of \mathbb{R}^3 these operators correspond one-to-one with orthogonal 3 × 3 matrices with determinant +1. Since such a (non-identity) matrix has, up to scaling, exactly one eigenvector with eigenvalue +1, this eigenvector gives the direction of the axis.

The product of two orthogonal matrices is again orthogonal, and from the determinant rule det(AB) = det(A)det(B) it follows that the product matrix also has unit determinant. Since the matrix product is associative and the inverse of an orthogonal matrix is orthogonal, these matrices form a group of infinite order, commonly denoted by SO(3), the special (det = 1) orthogonal group in 3 dimensions. Note that the map A → det(A) is a group homomorphism: the set of determinants forms a 1-dimensional irreducible representation (the identity representation) of SO(3).
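As a sketch of this closure property, the following NumPy snippet (the particular axes and angles are illustrative choices, not from the text) checks that the product of two rotation matrices is again orthogonal with unit determinant:

```python
import numpy as np

def rot_z(phi):
    """Proper rotation by phi about the z-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(phi):
    """Proper rotation by phi about the x-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

A = rot_z(0.7)
B = rot_x(1.2)
C = A @ B

# C is orthogonal: C^T C = E, and det(C) = det(A) det(B) = 1.
assert np.allclose(C.T @ C, np.eye(3))
assert np.isclose(np.linalg.det(C), 1.0)
```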


Euler's theorem (1776)

Euler states the theorem as follows:[1]

Theorema. Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter, cuius directio in situ translato conueniat cum situ initiali.

or (in free translation):

When a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position.

To prove this, Euler considers a great circle on the sphere and the great circle to which it is transported by the movement. These two circles intersect in two (opposite) points of which one, say A, is chosen. This point lies on the initial circle and thus is transported to a point a on the second circle. On the other hand, A lies also on the translated circle, and thus corresponds to a point α on the initial circle. Now Euler considers the symmetry plane of the angle αAa (which passes through the centre C of the sphere) and the symmetry plane of the arc Aa (which also passes through C). These two planes intersect in a diameter whose endpoint O on the sphere remains fixed under the movement because the triangle OαA is transported onto the triangle OAa (since αA is mapped on Aa and the triangles have the same angles).

This also shows that the rotation of the sphere can be seen as two consecutive reflections about the two planes described above. Points in a mirror plane are invariant under reflection, and hence the points on their intersection (a line: the axis of rotation) are invariant under both the reflections, and hence under the rotation.
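Euler's two-mirror decomposition can be sketched numerically. The following NumPy snippet (the plane normals are arbitrary choices for illustration) composes two Householder reflections and checks that the result is a proper rotation fixing the planes' line of intersection:

```python
import numpy as np

def reflection(n):
    """Householder reflection through the plane (through the origin) with normal n."""
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

# Two mirror planes through the origin with these (arbitrarily chosen) normals:
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)

R = reflection(n2) @ reflection(n1)   # two consecutive reflections

# R is a proper rotation ...
assert np.isclose(np.linalg.det(R), 1.0)
# ... and the intersection line of the two planes (here the z-axis) is fixed.
axis = np.cross(n1, n2)
axis = axis / np.linalg.norm(axis)
assert np.allclose(R @ axis, axis)
```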

Matrix proof

An algebraic proof starts from the fact that a rotation is a linear map in one-to-one correspondence with a 3×3 rotation matrix R, i.e., a matrix for which


\mathbf{R}^\mathrm{T}\mathbf{R} = \mathbf{R}\mathbf{R}^\mathrm{T} = \mathbf{E},

where E is the 3×3 identity matrix and the superscript T indicates the transposed matrix. Clearly a rotation matrix has determinant ±1, since, invoking some properties of determinants, one can prove


1=\det(\mathbf{E})=\det(\mathbf{R}^\mathrm{T}\mathbf{R}) = \det(\mathbf{R}^\mathrm{T})\det(\mathbf{R})
= \det(\mathbf{R})^2 \quad\Longrightarrow \quad \det(\mathbf{R}) = \pm 1.

A matrix with positive determinant describes a proper rotation; one with negative determinant describes an improper rotation, which is equal to a reflection times a proper rotation.
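A minimal numerical illustration (NumPy; the sample rotation is an arbitrary choice): composing the inversion −E, a reflection-like improper operation, with a proper rotation yields an improper orthogonal matrix:

```python
import numpy as np

phi = 0.6
# A proper rotation about the z-axis (illustrative choice):
R = np.array([[np.cos(phi), -np.sin(phi), 0],
              [np.sin(phi),  np.cos(phi), 0],
              [0, 0, 1.0]])

S = -np.eye(3) @ R   # inversion composed with a proper rotation

assert np.isclose(np.linalg.det(S), -1.0)   # improper: det = -1
assert np.allclose(S.T @ S, np.eye(3))       # but still orthogonal
```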

It will now be shown that a rotation matrix R has at least one invariant vector n, i.e., R n = n. Note that this is equivalent to stating that the vector n is an eigenvector of the matrix R with eigenvalue λ = 1.

A proper rotation matrix R has at least one unit eigenvalue. Using the two relations:


\det(-\mathbf{R}) = (-1)^3 \det(\mathbf{R}) = - \det(\mathbf{R})
\quad\hbox{and}\quad\det(\mathbf{R}^{-1} ) = 1,

we find


\begin{align}
\det(\mathbf{R} - \mathbf{E}) =& \det\big((\mathbf{R} - \mathbf{E})^{\mathrm{T}}\big)
=\det\big((\mathbf{R}^{\mathrm{T}} - \mathbf{E})\big)
= \det\big((\mathbf{R}^{-1} - \mathbf{E})\big) = \det\big(-\mathbf{R}^{-1} (\mathbf{R} - \mathbf{E}) \big) \\
=&  -  \det(\mathbf{R}^{-1} ) \; \det(\mathbf{R} - \mathbf{E})
= - \det(\mathbf{R}  - \mathbf{E})\quad \Longrightarrow\quad  \det(\mathbf{R}  - \mathbf{E}) = 0
\end{align}

From this it follows that λ = 1 is a root (solution) of the secular equation, that is,


\det(\mathbf{R}  - \lambda \mathbf{E}) = 0\quad \hbox{for}\quad \lambda=1.

In other words, the matrix R − E is singular and has a non-zero kernel, that is, there is at least one non-zero vector, say n, for which


(\mathbf{R} - \mathbf{E}) \mathbf{n} = \mathbf{0} \quad \Longleftrightarrow \quad \mathbf{R}\mathbf{n} =  \mathbf{n}

The line μn, for real μ, is invariant under R, i.e., μn is a rotation axis. This proves Euler's theorem.
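The argument above translates directly into a numerical recipe: the axis is a null vector of R − E, i.e., an eigenvector of R with eigenvalue 1. A sketch in NumPy (the sample rotation is an arbitrary choice):

```python
import numpy as np

# A sample proper rotation, built by composing rotations about the coordinate axes:
Rz = lambda t: np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1.0]])
Rx = lambda t: np.array([[1.0, 0, 0], [0, np.cos(t), -np.sin(t)], [0, np.sin(t), np.cos(t)]])
R = Rz(0.4) @ Rx(1.1)

# det(R - E) = 0, so lambda = 1 solves the secular equation ...
assert np.isclose(np.linalg.det(R - np.eye(3)), 0.0)

# ... and an eigenvector n of R with eigenvalue 1 spans the rotation axis.
w, V = np.linalg.eig(R)
n = np.real(V[:, np.argmin(np.abs(w - 1.0))])
assert np.allclose(R @ n, n)
```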

Equivalence of an orthogonal matrix to a rotation matrix

A proper orthogonal matrix is equivalent to


\mathbf{R} \sim
\begin{pmatrix}
\cos\phi  & -\sin\phi  & 0 \\
\sin\phi  & \cos\phi  & 0 \\
0  & 0  & 1\\
\end{pmatrix}, \qquad 0\le \phi \le 2\pi.

If R has more than one (linearly independent) invariant vector, then φ = 0 and R = E. Every vector is an invariant vector of E.

Excursion into matrix theory

In order to prove the previous equation some facts from matrix theory must be recalled. Matrices over the field of complex numbers are considered.

An m×m matrix A has m orthogonal eigenvectors if and only if A is normal, that is, if A†A = AA†. [2][3]

This result is equivalent to stating that normal matrices can be brought to diagonal form by a unitary similarity transformation:


\mathbf{A}\mathbf{U} = \mathbf{U}\; \mathrm{diag}(\alpha_1,\ldots,\alpha_m)\quad \Longleftrightarrow\quad
\mathbf{U}^\dagger \mathbf{A}\mathbf{U} = \operatorname{diag}(\alpha_1,\ldots,\alpha_m),

and U is unitary, that is,


\mathbf{U}^\dagger = \mathbf{U}^{-1}.

The eigenvalues α1, ..., αm are roots of the secular equation. If the matrix A happens to be unitary (and note that unitary matrices are normal), then


\left(\mathbf{U}^\dagger\mathbf{A} \mathbf{U}\right)^\dagger = \mathrm{diag}(\alpha^*_1,\ldots,\alpha^*_m) =
\mathbf{U}^\dagger\mathbf{A}^{-1} \mathbf{U} = \mathrm{diag}(1/\alpha_1,\ldots,1/\alpha_m)

and it follows that the eigenvalues of a unitary matrix are on the unit circle in the complex plane:


\alpha^*_k = 1/\alpha_k \;\Longleftrightarrow\; \alpha^*_k\alpha_k = |\alpha_k|^2 = 1,\qquad k=1,\ldots,m.

An orthogonal (real unitary) matrix likewise has its eigenvalues on the unit circle in the complex plane. Moreover, since its secular equation (an mth order polynomial in λ) has real coefficients, its roots appear in complex conjugate pairs: if α is a root, then so is its complex conjugate α*.
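These two facts are easy to check numerically. The following NumPy sketch (with an arbitrarily chosen rotation) verifies that the eigenvalues of an orthogonal matrix have unit modulus and occur in conjugate pairs:

```python
import numpy as np

phi = 0.9
# A sample orthogonal (rotation) matrix:
R = np.array([[np.cos(phi), -np.sin(phi), 0],
              [np.sin(phi),  np.cos(phi), 0],
              [0, 0, 1.0]])

w = np.linalg.eig(R)[0]

# |alpha_k| = 1: all eigenvalues lie on the unit circle ...
assert np.allclose(np.abs(w), 1.0)
# ... and the set of roots is closed under complex conjugation.
assert np.allclose(np.sort_complex(w), np.sort_complex(np.conj(w)))
```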


After recollection of these general facts from matrix theory, we return to the rotation matrix R. Since R is real and orthogonal with det(R) = 1, its three eigenvalues lie on the unit circle, occur in complex conjugate pairs, and have product +1; hence they can be written as e^{iφ}, e^{−iφ}, and 1, so that


 \mathbf{R} \mathbf{U} = \mathbf{U}
\begin{pmatrix}
e^{i\phi} & 0           & 0  \\
0         & e^{-i\phi}  & 0   \\
0         &      0      & 1 \\
\end{pmatrix}

with the third column of the 3×3 matrix U equal to the invariant vector n. Writing u1 and u2 for the first two columns of U, this equation gives


 \mathbf{R}\mathbf{u}_1 = e^{i\phi}\, \mathbf{u}_1 \quad\hbox{and}\quad  \mathbf{R}\mathbf{u}_2 = e^{-i\phi}\, \mathbf{u}_2

If u1 has eigenvalue 1, then φ = 0 and u2 also has eigenvalue 1, which implies that in that case R = E.

Finally, the matrix equation is transformed by means of a unitary matrix,


 \mathbf{R} \mathbf{U} 
\begin{pmatrix}
\frac{1}{\sqrt{2}}  & \frac{i}{\sqrt{2}}  & 0 \\
\frac{1}{\sqrt{2}}  & \frac{-i}{\sqrt{2}}  & 0 \\
0  & 0  & 1\\
\end{pmatrix}
= \mathbf{U}
\underbrace{
\begin{pmatrix}
\frac{1}{\sqrt{2}}  & \frac{i}{\sqrt{2}}  & 0 \\
\frac{1}{\sqrt{2}}  & \frac{-i}{\sqrt{2}}  & 0 \\
0  & 0  & 1\\
\end{pmatrix}
\begin{pmatrix}
\frac{1}{\sqrt{2}}  & \frac{1}{\sqrt{2}}  & 0 \\
\frac{-i}{\sqrt{2}}  & \frac{i}{\sqrt{2}}  & 0 \\
0  & 0  & 1\\
\end{pmatrix}
}_{=\;\mathbf{E}}
\begin{pmatrix}
e^{i\phi} & 0           & 0  \\
0         & e^{-i\phi}  & 0   \\
0         &      0      & 1 \\
\end{pmatrix} 
\begin{pmatrix}
\frac{1}{\sqrt{2}}  & \frac{i}{\sqrt{2}}  & 0 \\
\frac{1}{\sqrt{2}}  & \frac{-i}{\sqrt{2}}  & 0 \\
0  & 0  & 1\\
\end{pmatrix}

which gives


\mathbf{U'}^\dagger \mathbf{R} \mathbf{U'} =  \begin{pmatrix}
\cos\phi  & -\sin\phi  & 0 \\
\sin\phi  & \cos\phi  & 0 \\
0  & 0  & 1\\
\end{pmatrix}
\quad\hbox{with}\quad \mathbf{U'}
= \mathbf{U}
\begin{pmatrix}
\frac{1}{\sqrt{2}}  & \frac{i}{\sqrt{2}}  & 0 \\
\frac{1}{\sqrt{2}}  & \frac{-i}{\sqrt{2}}  & 0 \\
0  & 0  & 1\\
\end{pmatrix} .

The columns of U′ are orthonormal. The third column is still n; the other two columns are perpendicular to n. This result implies that any proper orthogonal matrix R is equivalent to a rotation over an angle φ around an axis n.
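The equivalence can also be illustrated numerically without complex arithmetic: choose an orthonormal basis {u1, u2, n} with n along the invariant axis and change basis. A sketch in NumPy (the sample rotation and the auxiliary vector used to construct u1 are arbitrary choices):

```python
import numpy as np

Rz = lambda t: np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1.0]])
Rx = lambda t: np.array([[1.0, 0, 0], [0, np.cos(t), -np.sin(t)], [0, np.sin(t), np.cos(t)]])
R = Rx(0.8) @ Rz(0.5)            # a sample proper rotation about a tilted axis

# Invariant axis n: the eigenvector of R with eigenvalue 1.
w, V = np.linalg.eig(R)
n = np.real(V[:, np.argmin(np.abs(w - 1.0))])

# Complete {u1, u2, n} to an orthonormal, right-handed basis.
u1 = np.cross(n, [1.0, 0.0, 0.0]); u1 /= np.linalg.norm(u1)
u2 = np.cross(n, u1)
Q = np.column_stack([u1, u2, n])  # orthogonal change of basis

# In this basis R takes the canonical form: a plane rotation about n.
C = Q.T @ R @ Q
phi = np.arctan2(C[1, 0], C[0, 0])
assert np.allclose(C, Rz(phi))
```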

Equivalence classes

It is of interest to remark that the trace (sum of diagonal elements) of the real rotation matrix given above is 1 + 2cosφ. Since the trace is invariant under an orthogonal similarity transformation:

 
\mathrm{Tr}[\mathbf{A} \mathbf{R} \mathbf{A}^\mathrm{T}] =
\mathrm{Tr}[ \mathbf{R} \mathbf{A}^\mathrm{T}\mathbf{A}] = \mathrm{Tr}[\mathbf{R}]\quad\hbox{with}\quad \mathbf{A}^\mathrm{T} = \mathbf{A}^{-1},

it follows that all matrices that are equivalent to R by an orthogonal matrix transformation have the same trace. This matrix transformation is clearly an equivalence relation, that is, all equivalent matrices form an equivalence class. In fact, all matrices in SO(3) with the same trace form an equivalence class. Elements of such an equivalence class share their rotation angle, but in general rotate around different axes: if n is an eigenvector of R with eigenvalue 1, then An is an eigenvector of ARAT, also with eigenvalue 1. Unless An = n (as happens, for instance, when A = E), the axes n and An are different.
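A short NumPy check of the trace invariant (the random orthogonal transformation A is generated here purely for illustration):

```python
import numpy as np

phi = 0.75
Rz = np.array([[np.cos(phi), -np.sin(phi), 0],
               [np.sin(phi),  np.cos(phi), 0],
               [0, 0, 1.0]])

# A random orthogonal matrix via QR, made proper so that A is in SO(3):
A, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
if np.linalg.det(A) < 0:
    A[:, 0] *= -1

R = A @ Rz @ A.T   # an equivalent rotation, about the transported axis

# Same equivalence class, same trace = 1 + 2 cos(phi) ...
assert np.isclose(np.trace(R), 1 + 2 * np.cos(phi))
# ... so the rotation angle can be recovered from the trace alone.
recovered = np.arccos((np.trace(R) - 1) / 2)
assert np.isclose(recovered, phi)
```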

Notes

  1. see the bibliography subpage for the 1776 reference (p.202)
  2. The dagger symbol † stands for complex conjugation followed by transposition. For real matrices complex conjugation does nothing and daggering a real matrix is the same as transposing it.
  3. See for a proof most books on linear algebra or matrix theory. For instance, Felix R. Gantmacher, Matrizentheorie, Springer-Verlag, Berlin (1986), chapter 9.10. Here it is proved that a linear operator on a finite-dimensional inner product space is normal if and only if it has a complete set of orthonormal eigenvectors. Compare F. Ayres, Theory and Problems of Matrices, Schaum, New York (1962), p.164: A square matrix A is unitarily similar to a diagonal matrix if and only if A is normal.