Why is the Fourier Transform so useful in both theoretical and applied science and engineering? In short, it is often more convenient to solve a problem in Fourier space than in the space of the problem's original formulation. One prescription for attacking such a problem is to convert its representation to Fourier space, solve it there (where presumably it is simpler to solve), and then convert back to the original basis. The Fourier Transform may be expressed as:

F(ω) = ∫ f(x) e^(-iωx) dx

But what really is this operation of performing a Fourier Transform?

**One way to look at it is that the Fourier Transform diagonalizes the convolution operator.** In other words, Fourier Transforming a function amounts to representing it in a new basis in which the convolution operator is diagonal.

What does this mean and why is it useful? It helps to understand the preceding statements by looking at a finite dimensional space. Consider a two dimensional set of coupled linear equations:

ax+by=c

dx+ey=f

This system of equations may be represented as a matrix equation:

Ax=b

Explicitly,

A = (a b; d e),  x = (x, y),  b = (c, f)

This system of equations is coupled: x and y are tangled together. Is there a way to decouple them? The term "diagonalizing" names precisely this operation of decoupling x and y. How is it accomplished?
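
To make this concrete, here is a minimal sketch (the coefficients are arbitrary example values, and NumPy is used for the linear algebra) that solves such a coupled 2x2 system directly:

```python
import numpy as np

# An arbitrary example of the coupled system Ax = b:
#   2x + 1y = 5
#   1x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Solve the coupled equations for (x, y) in one step.
xy = np.linalg.solve(A, b)
print(xy)  # [1. 3.]
```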

One way to look at the matrix equation above is as an operator equation: the matrix A is an operator that operates on the vector (x, y) and "sends" it to the vector (c, f). But consider the special situation in which the equation Ax = b assumes the form:

Ax = λx

This equation says that the matrix A operating on x preserves the direction of x; it only changes its magnitude by a factor lambda. x is said to be an eigenvector of A and lambda its associated eigenvalue. Intuitively, then, eigenvectors have something to do with diagonalizing a matrix: they are the key to decoupling the underlying variables. The eigenvectors are very special "directions" that are preserved when the matrix A acts on them, and choosing the new basis formed by the eigenvectors is what diagonalizes the matrix.

How are the eigenvectors and eigenvalues calculated? Rewrite Ax = λx as:

(A - λI)x = 0

For this equation to have a nontrivial solution x, the matrix (A - λI) must be degenerate: its columns are linearly dependent, it is singular, and therefore det(A - λI) = 0. Thus the eigenvalues lambda can be found by taking this determinant and solving the resulting characteristic equation for lambda. The eigenvectors can then be found by plugging each eigenvalue lambda back into (A - λI)x = 0 and solving for the components of x.

Once the eigenvectors are known, A can be diagonalized by choosing new coordinates using the eigenvectors as the new basis.
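
As a sketch of this procedure (using NumPy and an arbitrary example matrix), the matrix V whose columns are the eigenvectors transforms A into the diagonal matrix of eigenvalues:

```python
import numpy as np

# An arbitrary example matrix to diagonalize.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Eigenvalues lam, and eigenvectors as the columns of V.
lam, V = np.linalg.eig(A)

# Changing to the eigenvector basis diagonalizes A:
# the off-diagonal ("coupling") entries vanish.
D = np.linalg.inv(V) @ A @ V
print(np.allclose(D, np.diag(lam)))  # True
```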

How does this relate back to the Fourier transform? The above example of diagonalizing a matrix involved a finite dimensional space (here, A is 2x2). Now imagine an infinite dimensional space. For example, consider the abstract equation:

Ô f = f'

Here f and f' are different functions, and Ô (read "O hat") is called an operator. It helps to think of it acting like a matrix on the function f and "sending" it to f'. Loosely, Ô is like an infinite dimensional matrix, and the above equation is the infinite dimensional analogue of the original equation Ax = b.

Now consider a special operator, the convolution operator.

The convolution operator represents the operation of linear superposition. If you think of h(x) as the response of a linear system to a delta function, the overall response to an arbitrary input may be obtained by taking the linear superposition of all such responses, translated and scaled. This is what the convolution operator does: it smears out a function via the "response" of a linear system to a single delta function. The response to a delta function is known in the parlance of electrical engineering as an impulse response; it is a special case of a Green's function.

In the context of a filter, the more localized the response, the larger the bandwidth of the filter. Thus, a delta function response represents a filter of infinite bandwidth.

But it is easy to see that the convolution operator is not diagonal: by its very nature, it mixes up coordinates.

Is there a basis in which the convolution operator is diagonal? It happens that in Fourier space, this is true. To see this, it is not difficult to show that the complex exponential is an eigenfunction of the convolution operator.

In particular, the convolution operator acting on a function f(x) is:

(h * f)(x) = ∫ h(x - x') f(x') dx'

Now consider the convolution operator applied to the complex exponential e^(iωx). Substituting u = x - x' gives:

∫ h(x - x') e^(iωx') dx' = e^(iωx) ∫ h(u) e^(-iωu) du = H(ω) e^(iωx)

where

H(ω) = ∫ h(u) e^(-iωu) du

and H denotes the Fourier Transform of h. The complex exponential emerges unchanged, merely scaled by the number H(ω): it is an eigenfunction of the convolution operator.
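
This eigenfunction property can be checked numerically in the discrete, periodic setting, where the convolution integral becomes a circular sum. A minimal sketch (NumPy; the kernel h is an arbitrary example):

```python
import numpy as np

N = 128
n = np.arange(N)
h = np.exp(-0.5 * ((n - N // 2) / 5.0) ** 2)  # example smoothing kernel
k = 7                                          # one chosen frequency
e = np.exp(2j * np.pi * k * n / N)             # discrete complex exponential

# Circular convolution computed directly in the original basis:
#   (h * e)[m] = sum_j h[j] e[(m - j) mod N]
conv = np.array([np.sum(h * e[(m - n) % N]) for m in range(N)])

# The result is the same exponential, scaled by the eigenvalue
# H(omega) = FFT(h)[k].
H_k = np.fft.fft(h)[k]
print(np.allclose(conv, H_k * e))  # True
```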

What are the corresponding eigenvalues and eigenfunctions of the convolution operator? The brief derivation above shows that taking the Fourier Transform amounts to choosing a basis of complex exponentials. In this "Fourier space," the convolution operator does not mix up Fourier components; it acts upon each component independently. The convolution operation simply multiplies each complex exponential by a complex value (a magnitude and a phase): this is the eigenvalue for that complex exponential eigenfunction. In other words, the Fourier transform of the impulse response h itself supplies the eigenvalues of the convolution operator, indexed by each complex exponential eigenfunction (i.e., indexed by omega). In Fourier space, convolution becomes multiplication.
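
The claim that "convolution becomes multiplication" can likewise be verified numerically for the discrete transform. A sketch with arbitrary random example signals (NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
n = np.arange(N)
f = rng.standard_normal(N)   # arbitrary example signal
h = rng.standard_normal(N)   # arbitrary example kernel

# Circular convolution computed directly in the original basis.
conv = np.array([np.sum(h * f[(m - n) % N]) for m in range(N)])

# In the Fourier basis, the same operation is componentwise multiplication.
print(np.allclose(np.fft.fft(conv), np.fft.fft(h) * np.fft.fft(f)))  # True
```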

The moral of the story is that the Fourier Transform may be thought of as a change of basis. The Fourier integral projects a function onto a new coordinate system whose basis functions are the complex exponentials. In this new basis, the convolution operator is diagonal and everything is simple: it acts on each Fourier component independently, multiplying the component by an associated magnitude and phase. This magnitude and phase is given by the Fourier Transform of the impulse response. This is why, by the way, convolution in Fourier space is simple multiplication.
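
The "projection onto basis functions" reading can be seen directly in the discrete case: computing each Fourier component as an explicit inner product with a complex exponential reproduces the FFT. A sketch (NumPy; the signal is an arbitrary example tone):

```python
import numpy as np

N = 64
x = np.arange(N)
f = np.cos(2 * np.pi * 3 * x / N)  # example signal: a pure tone at bin 3

# Project f onto each complex exponential basis function by hand...
F_proj = np.array([np.sum(f * np.exp(-2j * np.pi * k * x / N))
                   for k in range(N)])

# ...and compare with NumPy's FFT, which computes the same sums.
F_fft = np.fft.fft(f)
print(np.allclose(F_proj, F_fft))  # True
```

The projection is large only at the frequencies actually present in the signal, which is exactly the change-of-basis picture described above.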

Thus, the Fourier Transform amounts to diagonalizing the convolution operator. The eigenfunctions are the complex exponentials, and the eigenvalues are the Fourier coefficients of the impulse response, or Green's function.

