

Shift-and-Invert.

None of the transformations to standard form, (8.3), (8.4), or (8.5), can be used when both $A$ and $B$ are singular or when $B$ is ill-conditioned. An attractive and popular alternative is first to apply a shift to the original problem and then to carry out the split-and-invert. This is the shift-and-invert (SI) spectral transformation, as discussed in §3.3. Specifically, let $\sigma$ be a user-selected shift such that the matrix $A-\sigma B$ is nonsingular; then the original problem (8.1) can be transformed to
\begin{displaymath}
C x = \mu x
\quad \mbox{and} \quad
z^{\ast} C = \mu z^{\ast},
\end{displaymath} (8.6)

where

\begin{displaymath}
C = (A - \sigma B)^{-1} B, \quad
\mu = \frac{1}{\lambda - \sigma}, \quad \mbox{and} \quad
z= (A - \sigma B)^{\ast} y.
\end{displaymath}

We see that the eigenvalues $\lambda$ of the problem (8.1) closest to the shift $\sigma$ are mapped to the exterior eigenvalues of the transformed standard eigenvalue problem (8.6), that is, to the eigenvalues of largest magnitude, and these are the eigenvalues that are well approximated first by iterative methods.
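As a small numerical illustration of this spectral mapping (the $3\times 3$ matrices and the shift are hypothetical, not from the text), the following Python sketch shows how an eigenvalue $\lambda$ close to $\sigma$ becomes the eigenvalue $\mu = 1/(\lambda-\sigma)$ of $C$ with the largest magnitude:

```python
import numpy as np

# Hypothetical pencil: eigenvalues of (A, B) are 1, 2, and 10.
A = np.diag([1.0, 2.0, 10.0])
B = np.eye(3)

sigma = 1.9                            # shift chosen near the wanted eigenvalue 2
C = np.linalg.solve(A - sigma * B, B)  # C = (A - sigma*B)^{-1} B

mu = np.linalg.eigvals(C)              # mu = 1/(lambda - sigma)
lam = sigma + 1.0 / mu                 # recover the original eigenvalues

# lambda = 2 is closest to sigma = 1.9, so it maps to the
# largest-magnitude mu, namely mu = 1/(2 - 1.9) = 10.
print(np.max(np.abs(mu)))              # prints 10.0 (up to rounding)
```

Iterative methods such as Arnoldi applied to $C$ therefore converge first to the eigenvalues nearest the shift.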

In practice, an effective shift selection depends on the user's preferences and on knowledge of the underlying generalized eigenproblem. A good shift not only amplifies the desired eigenvalues, but it also leads to a well-conditioned matrix $A-\sigma B$. This often makes the task of selecting good shifts a challenging one.

To apply an iterative method to the reduced standard eigenvalue problem (8.6), one needs to evaluate matrix-vector products

\begin{displaymath}
r = C q = \left( (A - \sigma B)^{-1} B \right) q
\end{displaymath}

or

\begin{displaymath}
s = C^{\ast} p = \left( (A - \sigma B)^{-1} B \right)^{\ast} p
\end{displaymath}

for given vectors $q$ and $p$. For an efficient evaluation, let
\begin{displaymath}
A - \sigma B = {\cal L} {\cal U}
\end{displaymath} (8.7)

represent some convenient factorization of $A-\sigma B$, where ${\cal L}$ and ${\cal U}$ are square matrices. Since $A-\sigma B$ is assumed to be nonsingular, the factors ${\cal L}$ and ${\cal U}$ are also nonsingular. The factorization should be chosen so that the corresponding linear systems of equations with ${\cal L}$, ${\cal L}^{\ast}$ and/or ${\cal U}$, ${\cal U}^{\ast}$ can be solved efficiently, and typically, sparse LU factorizations are used. See §10.3. Of course, one can also select ${\cal L} = A - \sigma B$ and ${\cal U} = I$ if this leads to convenient linear systems.
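As an illustration of this idea (the tridiagonal pencil below is hypothetical, not from the text), a sparse LU factorization of $A-\sigma B$ can be computed once, for instance with SciPy's SuperLU interface, and then reused for every matrix-vector product with $C$:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse pencil: tridiagonal A and diagonal B.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = sp.diags(np.linspace(1.0, 2.0, n), format="csc")

sigma = 0.5
lu = spla.splu((A - sigma * B).tocsc())  # factor A - sigma*B once

# Each product r = C q = (A - sigma*B)^{-1} (B q) is then two cheap steps:
q = np.random.default_rng(0).standard_normal(n)
r = lu.solve(B @ q)
```

The factorization cost is paid once up front; each subsequent product with $C$ costs only a sparse matrix-vector multiply and two triangular solves.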

With the above factorization, the matrix-vector product $r = Cq$ can be evaluated as follows:


(a) form $v = B q$,
(b) solve ${\cal L} w = v$ for $w$,
(c) solve ${\cal U} r = w$ for $r$.
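The three steps above might be sketched in Python as follows, assuming a dense LU factorization with partial pivoting, $A-\sigma B = P{\cal L}{\cal U}$, so the permutation $P$ is absorbed into step (b); the matrices are hypothetical test data, not from the text:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))   # hypothetical dense test matrices
B = rng.standard_normal((n, n))
sigma = 0.3

# Factor A - sigma*B = P @ L @ U (LU with partial pivoting).
P, L, U = lu(A - sigma * B)

def apply_C(q):
    v = B @ q                                     # (a) form v = B q
    w = solve_triangular(L, P.T @ v, lower=True)  # (b) solve L w = P^T v
    r = solve_triangular(U, w)                    # (c) solve U r = w
    return r

q = rng.standard_normal(n)
r = apply_C(q)   # r = (A - sigma*B)^{-1} B q
```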
Similarly, the matrix-vector product $s = C^{\ast} p$ can be evaluated in the following three steps:

(a)$^{\prime}$ solve ${\cal U}^{\ast} v = p$ for $v$,
(b)$^{\prime}$ solve ${\cal L}^{\ast} w = v$ for $w$,
(c)$^{\prime}$ form $s = B^{\ast} w$.
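Steps (a)$^{\prime}$ through (c)$^{\prime}$ might be sketched in Python as follows, assuming real data (so the adjoint is the transpose) and a dense pivoted factorization $A-\sigma B = P{\cal L}{\cal U}$, whose transpose inverse is $P({\cal L}^{T})^{-1}({\cal U}^{T})^{-1}$; the matrices are hypothetical test data:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))   # hypothetical dense test matrices
B = rng.standard_normal((n, n))
sigma = 0.3

# Factor A - sigma*B = P @ L @ U, so (A - sigma*B)^T = U^T L^T P^T.
P, L, U = lu(A - sigma * B)

def apply_C_adjoint(p):
    v = solve_triangular(U.T, p, lower=True)   # (a)' solve U^T v = p
    w = solve_triangular(L.T, v, lower=False)  # (b)' solve L^T w = v
    return B.T @ (P @ w)                       # (c)' form s = B^T (P w)

p = rng.standard_normal(n)
s = apply_C_adjoint(p)   # s = ((A - sigma*B)^{-1} B)^T p
```

Note that the permutation $P$ reappears in step (c)$^{\prime}$ here because the pivoted factors are used in reverse order.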

The SI technique is a powerful tool in the treatment of the generalized eigenvalue problem (8.1). The major problem, which often becomes a bottleneck, is to find a convenient factorization (8.7) of $A-\sigma B$ such that the associated linear systems of equations can be solved efficiently. If an accurate solution of the linear systems with $A-\sigma B$ becomes too expensive, then one may consider using inexact Cayley transformations (see §11.2) or the Jacobi-Davidson method.


Susan Blackford 2000-11-20