Orthogonality and Orthonormal Bases
In linear algebra, "orthogonality" is a fundamental concept that deals with the perpendicularity of vectors. It carries the idea of perpendicular lines from geometry into general vector spaces. The dot product of orthogonal vectors is zero, meaning that they point in 'independent' directions. Understanding orthogonality simplifies many vector calculations and has applications in computer science, physics, and engineering.
Orthogonality
To understand orthogonality, we first need to explore the dot product (also called the inner product). For vectors in Rⁿ, the dot product combines two vectors into a scalar (a single number). If you have two vectors u and v in Rⁿ, the dot product is defined as

u • v = u₁v₁ + u₂v₂ + ... + uₙvₙ

where uᵢ and vᵢ are the components of the vectors u and v, respectively. Two vectors are called orthogonal if their dot product is zero:

u • v = 0
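This definition translates directly into code. Below is a minimal Python sketch (the helper names `dot` and `is_orthogonal` are our own) that computes the dot product and checks the orthogonality condition:

```python
def dot(u, v):
    # Dot product: sum of componentwise products u_1*v_1 + ... + u_n*v_n.
    return sum(ui * vi for ui, vi in zip(u, v))

def is_orthogonal(u, v):
    # Two vectors are orthogonal when their dot product is zero.
    return dot(u, v) == 0

# Example vectors (the same pair used in the worked example below).
u = (1, 0, -1)
v = (1, 2, 1)
print(dot(u, v))            # 0
print(is_orthogonal(u, v))  # True
```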
Visual example:
In the visualization above, the red and blue lines represent two orthogonal vectors. They meet at right angles at the origin, which is represented by the green dot.
Properties of orthogonal vectors
- Two non-zero vectors in a two-dimensional plane are orthogonal if they are at a 90-degree angle to each other.
- Orthogonal vectors in higher dimensions still have the property that their dot product is zero.
- If a set of vectors is mutually orthogonal (each pair of vectors in the set is orthogonal), then this set is called an orthogonal set.
Orthonormal vectors and bases
While orthogonal vectors are vectors that are at right angles to each other, orthonormal vectors add an additional condition: each vector must have length (magnitude) one. If every vector in an orthogonal set is a unit vector, then it becomes an orthonormal set. The key points are as follows:
- Orthonormal vectors: A group of vectors is orthonormal if the magnitude of each vector is unity and any pair of vectors in the group are orthogonal.
- Orthonormal basis: An orthonormal basis for a vector space is a basis that is also an orthonormal set.
Mathematically, for a set of vectors {e₁, e₂, ..., eₙ} to be orthonormal, the following must hold:

eᵢ • eⱼ = 1 if i = j (same vector)
eᵢ • eⱼ = 0 if i ≠ j (different vectors)
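This condition can be checked numerically. Here is a small Python sketch (the helper names `dot` and `is_orthonormal` are our own, and the tolerance is an illustrative choice for floating-point comparison):

```python
def dot(u, v):
    # Dot product: sum of componentwise products.
    return sum(ui * vi for ui, vi in zip(u, v))

def is_orthonormal(vectors, tol=1e-9):
    # e_i . e_j must equal 1 when i == j and 0 otherwise.
    for i, ei in enumerate(vectors):
        for j, ej in enumerate(vectors):
            expected = 1.0 if i == j else 0.0
            if abs(dot(ei, ej) - expected) > tol:
                return False
    return True

# The standard basis of R^2 is orthonormal.
print(is_orthonormal([(1, 0), (0, 1)]))  # True
# Orthogonal but not unit length, so not orthonormal.
print(is_orthonormal([(2, 0), (0, 2)]))  # False
```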
Visual example:
The red and blue lines above represent vectors that form an orthonormal basis for the 2D plane. They are at right angles and each has a magnitude of one.
Why orthonormal bases are useful
Having an orthonormal basis for a vector space is incredibly useful because it simplifies many calculations, such as finding the components of a vector in that space. Let {e₁, e₂, ..., eₙ} be an orthonormal basis for a vector space V. Then, for any vector v in V, we can express v as follows:

v = (v • e₁)e₁ + (v • e₂)e₂ + ... + (v • eₙ)eₙ

This is a powerful result because to find the coordinate of v along the direction of eᵢ, one only needs the dot product v • eᵢ. This simplicity arises because eᵢ • eᵢ = 1 (normalization) and eᵢ • eⱼ = 0 for i ≠ j (orthogonality), so taking the dot product of both sides with eᵢ isolates that single coordinate.
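This expansion can be sketched in Python. The helper names and the rotated basis below are illustrative choices, not part of the text above:

```python
def dot(u, v):
    # Dot product of two vectors given as sequences of numbers.
    return sum(ui * vi for ui, vi in zip(u, v))

def coordinates(v, basis):
    # In an orthonormal basis, the i-th coordinate of v is just v . e_i.
    return [dot(v, e) for e in basis]

# Illustrative orthonormal basis of R^2: the standard basis rotated 45 degrees.
s = 2 ** -0.5
e1 = (s, s)
e2 = (-s, s)

v = (3, 1)
coords = coordinates(v, [e1, e2])

# Reconstruct v from its coordinates: v = (v . e1) e1 + (v . e2) e2
recon = [coords[0] * e1[i] + coords[1] * e2[i] for i in range(2)]
print(coords)  # approximately [2.8284, -1.4142]
print(recon)   # approximately [3.0, 1.0]
```

Note that no system of linear equations had to be solved: each coordinate came from a single dot product.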
Construction of an orthonormal basis: the Gram–Schmidt process
One of the most famous procedures for converting any basis into an orthonormal basis is the Gram–Schmidt process. It works like this:

- Start with an initial set of linearly independent vectors {v₁, v₂, ..., vₙ}.
- Convert the first vector to a unit vector by setting e₁ = v₁ / |v₁|.
- For each subsequent vector vₖ, subtract its projections onto all previously obtained vectors, making it orthogonal to them.
- Normalize each resulting orthogonal vector to turn it into a unit vector.
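The steps above can be sketched in Python (a plain-list implementation for illustration, not an optimized or numerically robust one):

```python
import math

def dot(u, v):
    # Dot product: sum of componentwise products.
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    # Convert linearly independent vectors into an orthonormal set.
    basis = []
    for v in vectors:
        w = list(v)
        # Subtract the projection of v onto every basis vector found so far.
        for e in basis:
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        # Normalize the orthogonal remainder to unit length.
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

e1, e2 = gram_schmidt([(3, 1), (2, 2)])
print(e1, e2)
print(dot(e1, e2))  # approximately 0
```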
Illustration:
This illustration shows how initial vectors (red and blue) are transformed into orthogonal vectors using the Gram–Schmidt process, and ultimately, when all vectors are normalized, an orthonormal basis is formed.
Applications and examples
Orthonormal bases are used extensively in computer graphics, signal processing, and solving differential equations, as well as many other fields. They simplify calculations by allowing vector decomposition and projection in a straightforward manner.
Example calculation:
Consider the vectors u = (1, 0, -1) and v = (1, 2, 1). We want to see if they are orthogonal. The dot product is calculated as follows:

u • v = (1 × 1) + (0 × 2) + (-1 × 1) = 0

Since their dot product is zero, the vectors u and v are orthogonal.
Orthonormal bases allow efficient representation and manipulation of vector spaces. By understanding orthogonality and how to construct orthonormal sets, we can discover simple solutions to complex problems.