     Next: Notes and References Up: Quadratic Eigenvalue Problems Z. Bai, Previous: Numerical Methods for Solving   Contents   Index

## Jacobi-Davidson Method

A possible disadvantage of the linearized approach is the doubling of the dimension of the problems; that is, a problem with $n$-dimensional matrices $M$, $C$, and $K$ is transformed to a generalized problem with $2n$-dimensional matrices $A$ and $B$. This is avoided in the Jacobi-Davidson method to be discussed in this section. In this method, the QEP is first projected onto a low-dimensional subspace, which leads to a QEP of modest dimension. This low-dimensional projected QEP can be solved with any method of choice. Expansion of the subspace is realized by a Jacobi-Davidson correction equation. For polynomial eigenproblems this technique was first suggested and discussed in [408, sec. 8].
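To make the doubling of dimension concrete, here is a small NumPy/SciPy sketch (the random test matrices and the particular first companion form are our illustration, not taken from the text) that linearizes a QEP $(\lambda^2 M + \lambda C + K)x = 0$ into a $2n$-dimensional pencil $(A, B)$:

```python
import numpy as np
from scipy.linalg import eig

# Companion linearization of the QEP (lambda^2 M + lambda C + K) x = 0:
# with z = [x; lambda x] it is equivalent to A z = lambda B z, where
#   A = [[0, I], [-K, -C]],   B = [[I, 0], [0, M]].
# The matrices below are random stand-ins, chosen well conditioned.
rng = np.random.default_rng(0)
n = 4
M = np.eye(n) + 0.2 * rng.standard_normal((n, n))
C = 0.3 * rng.standard_normal((n, n))
K = np.eye(n) + 0.3 * rng.standard_normal((n, n))

Z0, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z0, I], [-K, -C]])
B = np.block([[I, Z0], [Z0, M]])

lam, Z = eig(A, B)          # 2n eigenvalues of the linearized problem

# Each eigenpair of (A, B) yields a QEP eigenpair: x is the top block of z.
rel_res = []
for mu, z in zip(lam, Z.T):
    x = z[:n]
    res = (mu**2 * M + mu * C + K) @ x
    rel_res.append(np.linalg.norm(res) / ((1 + abs(mu))**2 * np.linalg.norm(x)))
```

Every finite eigenvalue of $(A,B)$ is an eigenvalue of the QEP, with eigenvector the top block of $z$; other companion forms distribute $M$, $C$, and $K$ differently.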

As we will see below, this method can also be applied directly to polynomial eigenproblems

$$
\psi(\lambda)x \equiv \left( \sum_{i=0}^{\ell} \lambda^i C_i \right) x = 0, \qquad (263)
$$

where the $C_i$, for $i = 0, \ldots, \ell$, are given $n \times n$ matrices, so that no transformation to a "linear" generalized eigenvalue problem is required. For simplicity, we will only present the case $\ell = 2$.

In the first part of a Jacobi-Davidson iteration step, for solving the quadratic eigenproblem

$$
\psi(\lambda)x = 0, \qquad (264)
$$

where

$$
\psi(\lambda) \equiv \lambda^2 M + \lambda C + K, \qquad (265)
$$

the projected quadratic problem

$$
\theta^2\, V^* M V s + \theta\, V^* C V s + V^* K V s = 0 \qquad (266)
$$

is solved. The columns of the $n \times m$ matrix $V$ form a basis for the search subspace. For stability reasons, the columns are constructed to be orthonormal. The projected problem is of the same order as the original one, but of much smaller dimension, typically $m \ll n$. We will assume that the solution vectors $s$ are normalized, $\|s\|_2 = 1$. First, a Ritz value $\theta$ with the desired properties, such as the largest real part or closest to some target $\tau$, is selected, with $s$ the associated eigenvector of the projected problem (266). Then the Ritz vector $u \equiv Vs$ and the residual $r \equiv \psi(\theta)u$ are computed. For the expansion of the search space, the vector $p \equiv \psi'(\theta)u$, with $\psi'(\theta) = 2\theta M + C$, is also computed.
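This first part of an iteration step can be sketched as follows (a hedged NumPy/SciPy illustration: the function name `ritz_step`, the target-based selection, and the use of a companion linearization as the inner solver are our choices; any method of choice can solve the small projected QEP):

```python
import numpy as np
from scipy.linalg import eig

def ritz_step(M, C, K, V, target):
    """First part of a Jacobi-Davidson step for the QEP
    psi(lambda) = lambda^2 M + lambda C + K (a sketch; names are ours).

    V is n-by-m with orthonormal columns spanning the search subspace.
    Returns the selected Ritz value theta, Ritz vector u = V s,
    residual r = psi(theta) u, and p = psi'(theta) u."""
    Mp, Cp, Kp = (V.conj().T @ X @ V for X in (M, C, K))
    m = V.shape[1]
    # Solve the small m-dimensional projected QEP (266) by companion
    # linearization; at dimension m << n this is cheap.
    A = np.block([[np.zeros((m, m)), np.eye(m)], [-Kp, -Cp]])
    B = np.block([[np.eye(m), np.zeros((m, m))], [np.zeros((m, m)), Mp]])
    thetas, S = eig(A, B)
    j = np.argmin(np.abs(thetas - target))   # Ritz value closest to target
    theta = thetas[j]
    s = S[:m, j]
    s = s / np.linalg.norm(s)                # normalize, ||s|| = 1
    u = V @ s                                # Ritz vector
    r = (theta**2 * M + theta * C + K) @ u   # residual psi(theta) u
    p = (2 * theta * M + C) @ u              # psi'(theta) u
    return theta, u, r, p
```

By the Galerkin condition $V^* r = 0$, the residual is orthogonal to the search subspace, and in particular $u^* r = 0$.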

In the second part of the Jacobi-Davidson iteration step, the search subspace is expanded by a vector $t \perp u$ that solves (approximately) the correction equation

$$
\left( I - \frac{p u^*}{u^* p} \right) \psi(\theta) \left( I - u u^* \right) t = -r. \qquad (267)
$$

The next column of $V$ is obtained by orthonormalizing the approximate solution $t$ of (267) against the previously computed columns of $V$.
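For small problems the correction equation can be solved exactly with one dense solve with $\psi(\theta)$, after which the basis is expanded; the following sketch does exactly that (function names are ours, and in practice (267) is solved only approximately, e.g. by a preconditioned Krylov method):

```python
import numpy as np

def correction_direction(M, C, K, theta, u, r, p):
    """Exact solution of the JD correction equation (267), a sketch.

    The left projector annihilates span(p), so (267) forces
    psi(theta) t = -r + beta p for some scalar beta; since r = psi(theta) u,
    this gives t = -u + beta y with y = psi(theta)^{-1} p, and beta is
    fixed by the constraint u* t = 0.  (The right-hand side matches -r
    because u* r = 0 by the Galerkin condition.)"""
    psi = theta**2 * M + theta * C + K
    y = np.linalg.solve(psi, p)
    beta = (u.conj() @ u) / (u.conj() @ y)
    return -u + beta * y

def expand(V, t):
    """Append t to the orthonormal basis V by modified Gram-Schmidt."""
    for q in V.T:
        t = t - (q.conj() @ t) * q
    return np.column_stack([V, t / np.linalg.norm(t)])
```

Note that the exact solution is, up to the component along $u$, a multiple of $\psi(\theta)^{-1}p$; an inexact inner solve usually suffices to expand the subspace effectively.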

This process is repeated until an eigenpair has been detected, i.e., until the residual vector is sufficiently small. The basic form of the algorithm is presented in Algorithm 9.1. We refer to [ ] for an example of a quadratic eigenvalue problem arising from an acoustic problem, which has been solved with this technique for $n$ up to 250,000. If the dimension of the search subspace becomes too large, i.e., if the number of columns of $V$ is equal to, say, $m_{\max}$, then the process can be continued with a search subspace of smaller dimension $m_{\min}$. Take for the new search subspace the space spanned by the $m_{\min}$ best Ritz pairs from the last step; that is, take $V = [u_1, \ldots, u_{m_{\min}}]$, where the $u_i$ are the Ritz vectors associated with the best available Ritz values $\theta_i$. Then apply modified Gram-Schmidt to orthonormalize $V$ and restart with this matrix. Note that the new search matrix can be expressed in terms of the old search matrix as $V_{\rm new} = V_{\rm old} Q$ for some $m_{\max} \times m_{\min}$ matrix $Q$. The transformation matrix $Q$ can be computed explicitly and can be used to update the auxiliary matrices $MV$, $CV$, and $KV$.
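The restart just described can be sketched as follows (a hedged illustration: the names `restart` and `S_keep` are ours, and we orthonormalize the short $m$-dimensional coefficient vectors rather than the long Ritz vectors, which is equivalent because $V$ has orthonormal columns):

```python
import numpy as np

def restart(V, W_M, W_C, W_K, S_keep):
    """Shrink the search space to the span of selected Ritz vectors.

    S_keep is m-by-k; its columns are the coefficient vectors s_i of the
    k Ritz vectors to keep (u_i = V s_i).  Orthonormalizing them yields
    the transformation Q with V_new = V_old Q, and the same Q updates
    the auxiliary matrices W_M = M V, W_C = C V, W_K = K V."""
    Q, _ = np.linalg.qr(S_keep)   # span(Q) = span(S_keep), Q* Q = I
    # V @ Q has orthonormal columns because V and Q both do.
    return V @ Q, W_M @ Q, W_C @ Q, W_K @ Q
```

Working with the $m_{\max} \times m_{\min}$ matrix $Q$ keeps the restart cheap: the three products $MV$, $CV$, and $KV$ are updated without any further multiplications by the large matrices.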

Eigenvectors that have already been detected can be kept in the search subspace (explicit deflation) if more eigenvectors are wanted. Keeping the eigenvectors in the search subspace prevents the process from reconstructing known eigenvectors.

In §4.7.3 and §8.4.2, we suggested using "implicit" deflation, since that approach is based on Schur forms and is therefore more stable, with better conditioned correction equations. However, in this general polynomial setting, it is not clear how to incorporate implicit deflation: the Schur form and the generalized Schur form cannot easily be generalized.
Susan Blackford 2000-11-20