next up previous contents index
Next: Deflation and Restart Up: Jacobi-Davidson Method  G. Sleijpen and Previous: Jacobi-Davidson Method  G. Sleijpen and   Contents   Index

Basic Theory

As for the GHEP, we want to avoid transformation of $A{x}=\lambda B {x}$ to a standard eigenproblem. This would require the solution of some linear system, involving $B$ and/or $A$, per iteration step. Furthermore, for stability reasons we want to work with orthonormal transformations, and for this reason our goal is to compute Schur vectors for the pencil $A - \lambda B$, rather than eigenvectors. This leads to a generalization of the Jacobi-Davidson algorithm, in which we compute a partial Schur form for the pencil. This algorithm can be interpreted as a subspace iteration variant of the QZ algorithm. A consequence of this approach is that we have to work with search and test spaces that are different.

With $\lambda = \alpha/\beta$, the generalized eigenproblem $A{x}=\lambda B {x}$ is equivalent to the eigenproblem

\begin{displaymath}
(\beta A-\alpha B)\,{x}=0,
\end{displaymath} (221)

and we denote a generalized eigenvalue of the matrix pair $\{A,B\}$ as a pair $(\alpha,\beta)$. This approach is preferred, because underflow or overflow for $\lambda = \alpha/\beta$ in finite precision arithmetic may occur when $\alpha$ and/or $\beta$ are zero or close to zero, in which case the pair is still well defined [328], [425, Chap. VI], [376].
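As a concrete illustration (not part of the original text; SciPy is assumed to be available), `scipy.linalg.eig` can return generalized eigenvalues directly in this homogeneous $(\alpha,\beta)$ form. For a singular $B$ one $\beta$ is zero, so $\lambda=\alpha/\beta$ would overflow, while the pair itself remains well defined:

```python
import numpy as np
from scipy.linalg import eig

# Pencil A - lambda*B with singular B: one eigenvalue is finite (lambda = 2),
# the other is infinite (beta = 0).  The ratio alpha/beta would overflow,
# but the pair (alpha, beta) is still perfectly well defined.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# homogeneous_eigvals=True returns the eigenvalues as pairs (alpha_i, beta_i).
alpha, beta = eig(A, B, right=False, homogeneous_eigvals=True)

# Only form the ratio lambda = alpha/beta where beta is safely away from zero.
finite = [a / b for a, b in zip(alpha, beta) if abs(b) > 1e-12]
print(finite)   # the single finite eigenvalue, lambda = 2
```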

A partial generalized Schur form of dimension $k$ for a matrix pair $\{A,B\}$ is the decomposition

\begin{displaymath}
A Q_k = Z_k R^A_k, \quad B Q_k = Z_k R^B_k,
\end{displaymath} (222)

where $Q_k$ and $Z_k$ are orthogonal $n$ by $k$ matrices and $R^A_k$ and $R^B_k$ are upper triangular $k$ by $k$ matrices. A column $q_i$ of $Q_k$ is referred to as a generalized Schur vector, and we refer to a pair $((\alpha_i,\beta_i),q_i)$, with $(\alpha_i,\beta_i) = (R^A_k(i,i),R^B_k(i,i))$, as a generalized Schur pair. It follows that if $((\alpha,\beta),y)$ is a generalized eigenpair of $(R^A_k,R^B_k)$, then $((\alpha,\beta), Q_k y)$ is a generalized eigenpair of $\{A,B\}$.
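For instance (a SciPy illustration, not from the original text; note that `scipy.linalg.qz` uses the convention $A = Q\,R^A Z^\ast$, so its outputs must be mapped onto the notation of (8.9)), a full QZ decomposition yields a partial generalized Schur form for any leading block size $k$, and an eigenpair of the leading triangular pair lifts to the large pencil:

```python
import numpy as np
from scipy.linalg import qz, eig

rng = np.random.default_rng(0)
n, k = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# scipy.linalg.qz returns RA, RB, Qs, Zs with A = Qs @ RA @ Zs^*, so in the
# notation of (8.9):  Q_k = Zs[:, :k]  and  Z_k = Qs[:, :k].
RA, RB, Qs, Zs = qz(A, B, output='complex')
Qk, Zk = Zs[:, :k], Qs[:, :k]

# Partial generalized Schur form: A Q_k = Z_k R^A_k, B Q_k = Z_k R^B_k.
# This holds for the leading k columns because RA and RB are upper triangular.
assert np.allclose(A @ Qk, Zk @ RA[:k, :k])
assert np.allclose(B @ Qk, Zk @ RB[:k, :k])

# An eigenpair ((alpha, beta), y) of the reduced triangular pair lifts to
# an eigenpair of {A, B} with eigenvector Q_k y.
lam, Y = eig(RA[:k, :k], RB[:k, :k])
y = Y[:, 0]
x = Qk @ y
assert np.allclose(A @ x, lam[0] * (B @ x))
```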

We will now describe a Jacobi-Davidson method for the generalized eigenproblem (8.8). From the relations (8.9) we conclude that

\begin{displaymath}
\beta_i A q_i - \alpha_i B q_i \perp z_i,
\end{displaymath}

which suggests that we should follow a Petrov-Galerkin condition for the construction of reduced systems. In each step the approximate eigenvector $u$ is selected from a search subspace $\mbox{span}(V)$. We require that the residual $\eta Au - \zeta Bu$ is orthogonal to some other well-chosen test subspace $\mbox{span}(W)$:
\begin{displaymath}
\eta \, A u -\zeta \, B u \perp \mbox{span}(W).
\end{displaymath} (223)

Both subspaces are of the same dimension, say ${m}$. Equation (8.10) leads to the projected eigenproblem
\begin{displaymath}
(\eta\, W^\ast A V-\zeta \,W^\ast B V) \,{s}=0.
\end{displaymath} (224)

The pencil $\eta\, W^\ast A V-\zeta \,W^\ast B V$ can be reduced by the QZ algorithm (see §8.2) to generalized Schur form (note that this is an ${m}$-dimensional problem). This leads to orthogonal ${m}$ by ${m}$ matrices $S^R$ and $S^L$ and upper triangular ${m}$ by ${m}$ matrices $T^A$ and $T^B$, such that
\begin{displaymath}
(S^L)^\ast ( W^\ast A V)S^R=T^A, \qquad
(S^L)^\ast ( W^\ast B V)S^R=T^B.
\end{displaymath} (225)
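With SciPy, for example, this reduction with reordering can be carried out by `scipy.linalg.ordqz` (an illustration, not the authors' code; the sort callback and the radius used to select eigenvalues near a target $\tau$ are arbitrary choices for this sketch):

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(1)
m = 5
MA = rng.standard_normal((m, m))    # plays the role of W^* A V
MB = rng.standard_normal((m, m))    # plays the role of W^* B V
tau = 0.3                           # target (arbitrary for this illustration)

# Select the generalized eigenvalues alpha/beta within radius 1 of tau;
# ordqz moves the selected ones to the leading diagonal entries of (TA, TB).
def near_tau(alpha, beta):
    return np.abs(alpha / beta - tau) < 1.0

TA, TB, alpha, beta, SL, SR = ordqz(MA, MB, sort=near_tau, output='complex')

# (8.12): (S^L)^* (W^* A V) S^R = T^A and (S^L)^* (W^* B V) S^R = T^B,
# with T^A and T^B upper triangular.
assert np.allclose(SL.conj().T @ MA @ SR, TA)
assert np.allclose(SL.conj().T @ MB @ SR, TB)
assert np.allclose(TA, np.triu(TA)) and np.allclose(TB, np.triu(TB))
```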

The decomposition can be reordered such that the first column of $S^R$ and the $(1,1)$-entries of $T^A$ and $T^B$ represent the wanted Petrov solution [172]. With ${s}\equiv s^R_1\equiv {S^R}e_1$ and $\zeta\equiv T^A_{1,1}$, $\eta\equiv T^B_{1,1}$, the Petrov vector is defined as ${u}\equiv V {s}$ for the associated generalized Petrov value $(\zeta,\eta)$. In our algorithm we will construct orthogonal matrices $V$ and $W$, so that also $\Vert{u}\Vert _2=1$. With the decomposition in (8.12), we construct an approximate partial generalized Schur form (cf. (8.9)): $V S^R$ approximates a $Q_k$, and $W S^L$ approximates the associated $Z_k$. Since $\mbox{span}(Z_k) = \mbox{span}(A Q_k) = \mbox{span}(B Q_k)$ (cf. (8.9)), this suggests choosing $W$ such that $\mbox{span}(W)$ coincides with $\mbox{span}(\nu_0 A V+\mu_0 B V)$. With the weights $\nu_0$ and $\mu_0$ we can influence the convergence of the Petrov values. If we want eigenpair approximations for eigenvalues $\lambda$ close to a target $\tau$, then the choice

\begin{displaymath}
\nu_0=1/\sqrt{1+\vert\tau\vert^2}, \qquad \mu_0=-\tau \nu_0
\end{displaymath}

is very effective [172], especially if we want to compute eigenvalues in the interior of the spectrum of $A - \lambda B$. We will call the Petrov approximations for this choice the harmonic Petrov eigenpairs. The Jacobi-Davidson correction equation for the component ${t}\perp u$ for the pencil $\eta A -\zeta B$ becomes
\begin{displaymath}
\left( I- \frac{pp^\ast}{p^\ast p}\right)
(\eta A-\zeta B)
\left( I-uu^\ast\right) {t}=- r,
\end{displaymath} (226)

with $r\equiv \eta Au - \zeta Bu$ and $p\equiv \nu_0 Au + \mu_0 Bu$. It can be shown that if (8.13) is solved exactly, the convergence to the generalized eigenvalue will be quadratic; see [408, Theorem 3.2]. Usually, this correction equation is solved only approximately, for instance, with a (preconditioned) iterative solver. The obtained vector ${t}$ is used for the expansion $v$ of $V$ and $\nu_0 Av + \mu_0 Bv$ is used for the expansion of $W$. For both spaces we work with orthonormal bases. Therefore, the new columns are orthonormalized with respect to the current basis by a modified Gram-Schmidt orthogonalization process (see §4.7.1).
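One such expansion step can be sketched as follows (an illustration, not the authors' implementation: the toy matrices, the unpreconditioned GMRES solve via `scipy.sparse.linalg`, and the omission of the reordering toward a target are all assumptions made for brevity):

```python
import numpy as np
from scipy.linalg import qz, qr
from scipy.sparse.linalg import LinearOperator, gmres

def mgs_append(V, v):
    """Orthonormalize v against the columns of V by modified Gram-Schmidt
    and append it as a new column."""
    for i in range(V.shape[1]):
        v = v - V[:, i] * (V[:, i].conj() @ v)
    return np.column_stack([V, v / np.linalg.norm(v)])

rng = np.random.default_rng(2)
n, m = 20, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)

tau = 0.0                                  # target
nu0 = 1.0 / np.sqrt(1.0 + abs(tau) ** 2)   # harmonic weights
mu0 = -tau * nu0

# Orthonormal search space V; test space W spans nu0*A*V + mu0*B*V.
V, _ = qr(rng.standard_normal((n, m)).astype(complex), mode='economic')
W, _ = qr(nu0 * (A @ V) + mu0 * (B @ V), mode='economic')

# QZ on the projected pencil; the leading Schur pair gives the Petrov data
# (reordering toward tau, cf. (8.12), is omitted in this sketch).
TA, TB, SL, SR = qz(W.conj().T @ A @ V, W.conj().T @ B @ V, output='complex')
zeta, eta = TA[0, 0], TB[0, 0]
u = V @ SR[:, 0]                           # Petrov vector, ||u||_2 = 1
p = W @ SL[:, 0]                           # left vector, cf. (8.14)
r = eta * (A @ u) - zeta * (B @ u)         # residual; note r is orthogonal to p

# Correction equation (8.13): projected operator applied to t, with t ⟂ u.
def corr(t):
    t = t - u * (u.conj() @ t)
    y = eta * (A @ t) - zeta * (B @ t)
    return y - p * ((p.conj() @ y) / (p.conj() @ p))

op = LinearOperator((n, n), matvec=corr, dtype=complex)
t, _ = gmres(op, -r, maxiter=50)           # an approximate solve suffices

# Expand both bases, keeping them orthonormal.
V = mgs_append(V, t)
v_new = V[:, -1]
W = mgs_append(W, nu0 * (A @ v_new) + mu0 * (B @ v_new))
```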

It can be shown that, with the above choices for $p$ and $W$,

\begin{displaymath}
p= W s^L_1= W S^Le_1.
\end{displaymath} (227)
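A short sketch of why this holds (added here for completeness): by (8.12) and the triangularity of $T^A$ and $T^B$,

\begin{displaymath}
W^\ast A u = W^\ast A V s^R_1 = S^L T^A e_1 = \zeta\, s^L_1,
\qquad
W^\ast B u = \eta\, s^L_1 .
\end{displaymath}

Since $u\in\mbox{span}(V)$, the vector $p=\nu_0 Au+\mu_0 Bu$ lies in $\mbox{span}(\nu_0 AV+\mu_0 BV)=\mbox{span}(W)$, so $p = W W^\ast p = (\nu_0\zeta+\mu_0\eta)\, W s^L_1$; that is, $p$ is a scalar multiple of $W s^L_1 = W S^L e_1$, which gives (8.14) after normalization.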

The relation between the partial generalized Schur form for the given large problem and the complete generalized Schur form for the reduced problem (8.11) via right vectors ($u = Vs^R_1$) is similar to the relation via left vectors ($p = Ws^L_1$). This can also be exploited for restarts.

Susan Blackford 2000-11-20