

Algorithm

The Jacobi-Davidson algorithm is given in Algorithm 8.1. This algorithm attempts to compute the generalized Schur pairs $((\alpha,\beta),q)$ for which the ratio $\alpha/\beta$ is closest to a specified target value $\tau $ in the complex plane. The algorithm includes restarts in order to limit the dimension of the search space, and deflation with already converged left and right Schur vectors.

[Algorithm 8.1: Jacobi-Davidson QZ method for $k_{\max}$ interior eigenvalues; the pseudocode (steps (1)-(38)) is not reproduced here.]

To apply this algorithm we need to specify a starting vector $v_0$, a tolerance $\epsilon$, a target value $\tau $, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau $ should be computed. The value of ${m}_{\max}$ specifies the maximum dimension of the search subspace. If it is exceeded then a restart takes place with a subspace of dimension ${m}_{\min}$.

On completion, the $k_{\max}$ generalized eigenvalues closest to $\tau $ are delivered, together with the corresponding reduced Schur form $AQ=ZR^A$, $BQ=ZR^B$, where $Q$ and $Z$ are $n$ by $k_{\max}$ matrices with orthonormal columns and $R^A$, $R^B$ are $k_{\max}$ by $k_{\max}$ upper triangular matrices. The generalized eigenvalues are the pairs of diagonal entries of $R^A$ and $R^B$. The computed form satisfies $\Vert A q_{j}- Z R^Ae_{j}\Vert _2=O(\epsilon)$ and $\Vert B q_{j}- Z R^Be_{j}\Vert _2=O(\epsilon)$, where $q_j$ is the $j$th column of $Q$.
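These relations are easy to check numerically. The following sketch (Python with NumPy/SciPy; the random pencil is only a stand-in for actual output of Algorithm 8.1) extracts a reduced Schur form from a full QZ decomposition and evaluates the residual norms column by column.

import numpy as np
from scipy.linalg import qz

# A random pencil stands in for (A, B); Algorithm 8.1 would deliver
# Q, Z, R^A, R^B directly.  Here they are cut out of a full QZ decomposition.
rng = np.random.default_rng(0)
n, kmax = 50, 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# SciPy's convention: A = QQ @ AA @ ZZ^H and B = QQ @ BB @ ZZ^H.
AA, BB, QQ, ZZ = qz(A, B, output='complex')

# In the book's notation A Q = Z R^A, B Q = Z R^B, so Q and Z correspond to
# leading columns of ZZ and QQ, and R^A, R^B to leading triangular blocks.
Q, Z = ZZ[:, :kmax], QQ[:, :kmax]
RA, RB = AA[:kmax, :kmax], BB[:kmax, :kmax]

print(np.linalg.norm(A @ Q - Z @ RA, axis=0))  # O(machine precision) here
print(np.linalg.norm(B @ Q - Z @ RB, axis=0))
eigs = np.diag(RA) / np.diag(RB)               # generalized eigenvalues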

The accuracy of the computed reduced Schur form depends on the distance between the target value $\tau $ and the eigenvalue $(\alpha_j,\beta_j)\equiv(R^A_{j,j},R^B_{j,j})$. If we neglect terms of order machine precision and of order $\epsilon^2$, then $\Vert A q_j- Z R^Ae_j\Vert _2\leq j\gamma_A \epsilon$ and $\Vert B q_j- Z R^Be_j\Vert _2\leq j\gamma_B \epsilon$, where the constants $\gamma_A$ and $\gamma_B$ are given by

\begin{displaymath}
\gamma_A\equiv\frac{\vert\mu_0\vert}{\vert\nu_0\alpha_j+\mu_0\beta_j\vert},
\qquad
\gamma_B\equiv\frac{\vert\nu_0\vert}{\vert\nu_0\alpha_j+\mu_0\beta_j\vert}.
\end{displaymath}

If $\mu_0/\nu_0=-\tau$, as in step (1) of the algorithm, then $\gamma_A=\vert\tau\vert/\vert\alpha_j-\tau\beta_j\vert$ and $\gamma_B=1/\vert\alpha_j-\tau\beta_j\vert$. These values can be large if $\tau\approx \alpha_j/\beta_j$. In practice, accuracy of order $\epsilon$ is achieved even when $\tau $ is close to detected eigenvalues, and it can be guaranteed by performing an additional refinement step with $(\mu_0,\nu_0)=(1,\bar\tau)$.

We will now explain the successive main phases of the algorithm.

(1)
The initialization phase. The choice of the scalars $\nu_0$ and $\mu_0$ is particularly effective if $\tau $ is in the interior of the spectrum. It leads to a breakdown if $\tau $ is itself an eigenvalue (which can easily be tested).
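For illustration, one concrete choice (the normalization below is an assumption; only the ratio $\mu_0/\nu_0=-\tau$ matters):

import numpy as np

tau = 0.5 + 0.3j                        # target value in the complex plane
nu0 = 1.0 / np.sqrt(1.0 + abs(tau)**2)  # one possible normalization (assumed)
mu0 = -tau * nu0                        # so that mu0/nu0 = -tau
# Breakdown: nu0*A + mu0*B = nu0*(A - tau*B) is singular exactly when tau
# is an eigenvalue, which is easily detected during a factorization.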

(4)-(5)
The new vector $t$ is orthogonalized against the current search subspace $V$ by means of modified Gram-Schmidt. Likewise, the vector $w=(\nu_0A+\mu_0B)t$ is orthogonalized against the current test subspace $W$. For improved numerical stability, both orthogonalization processes can be replaced by a template such as Algorithm 4.14.

We expand the subspaces $V$, $V^A\equiv AV$, $V^B\equiv BV$, and $W$. Here $V$ denotes the matrix with the current basis vectors $v_i$ of the search subspace as its columns, and the other matrices are defined analogously. A sketch of these steps follows this note.
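A minimal sketch in Python/NumPy (`mgs_orthonormalize` is an illustrative helper, not part of the templates):

import numpy as np

def mgs_orthonormalize(t, V):
    """Orthogonalize t against the orthonormal columns of V by modified
    Gram-Schmidt and normalize the result.  A second sweep guards against
    severe cancellation; the template of Algorithm 4.14 refines this."""
    norm_in = np.linalg.norm(t)
    for _ in range(2):
        for j in range(V.shape[1]):
            t = t - (V[:, j].conj() @ t) * V[:, j]
        if np.linalg.norm(t) > 0.5 * norm_in:  # little cancellation: done
            break
        norm_in = np.linalg.norm(t)
    return t / np.linalg.norm(t)

# Expansion of the subspaces (each a 2-D array with basis vectors as columns):
#   v = mgs_orthonormalize(t, V);  vA = A @ v;  vB = B @ v
#   w = mgs_orthonormalize(nu0 * vA + mu0 * vB, W)
#   V, VA, VB, W are then extended with v, vA, vB, w via np.column_stack.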

(12)-(13)
The ${m}$th row and column of the matrices $M^A\equiv W^\ast A V$ and $M^B\equiv W^\ast B V$ are computed.

Note that the scalars $M_{i,{m}}^B$ can also be computed from the scalars $M_{i,{m}}^A$ and the orthogonalization constants $w_i^\ast w$ of step (10): since $w=(\nu_0A+\mu_0B)v_{m}$ before orthogonalization, we have $w_i^\ast w=\nu_0 M^A_{i,{m}}+\mu_0 M^B_{i,{m}}$.
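In code, with 0-based indexing, the update might look as follows (an illustrative helper; `MA` and `MB` would be preallocated $m_{\max}$ by $m_{\max}$ arrays):

import numpy as np

def expand_interaction(M, W, VX, m):
    """Fill in the new m-th row and column (index m-1) of the leading
    m-by-m block of M = W* X V, given the test basis W and VX = X @ V."""
    M[:m, m - 1] = W[:, :m].conj().T @ VX[:, m - 1]        # new column
    M[m - 1, :m - 1] = W[:, m - 1].conj() @ VX[:, :m - 1]  # new row
    return M

# MA = expand_interaction(MA, W, VA, m), and either the same call for MB
# with VB, or MB[i, m-1] = (c[i] - nu0 * MA[i, m-1]) / mu0 with c[i] = w_i* w
# taken from the orthogonalization in step (10).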

(15)

The QZ decomposition of the pair $(M^A, M^B)$ of ${m}$ by ${m}$ matrices can be computed by a suitable routine for dense matrix pencils from LAPACK; a sketch based on SciPy's interface to these routines follows this note.

We have chosen to compute the generalized Petrov pairs, which makes the algorithm suitable for computing $k_{\max}$ interior generalized eigenvalues of $\beta A-\alpha B$, for which $\alpha/\beta$ is close to a specified $\tau $.

For algorithms for reordering the generalized Schur form, see [448,449,171].
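With SciPy's LAPACK bindings, this step can be sketched as follows (an illustration, not the templates' own routine): `ordqz` computes the generalized Schur form, and the `sort` callable below moves the eigenvalue closest to $\tau$ to the leading position; sorting the whole spectrum by distance to $\tau$ would require repeated reordering.

import numpy as np
from scipy.linalg import ordqz

def leading_petrov_pair(MA, MB, tau):
    """Generalized Schur form of the m-by-m pair (MA, MB), reordered so
    that the eigenvalue alpha/beta closest to tau comes first."""
    def nearest(alpha, beta):
        with np.errstate(divide='ignore', invalid='ignore'):
            d = np.abs(alpha / beta - tau)
        d = np.where(np.isfinite(d), d, np.inf)  # infinite eigenvalues last
        return d == d.min()
    # MA = UL @ SA @ UR^H and MB = UL @ SB @ UR^H, SA and SB upper triangular.
    SA, SB, alpha, beta, UL, UR = ordqz(MA, MB, sort=nearest, output='complex')
    return SA, SB, alpha, beta, UL, UR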

(18)
The stopping criterion accepts a generalized eigenpair approximation as soon as the norm of the residual (for the normalized right Schur vector approximation) is below $\epsilon$; a small sketch follows this note. This means that we accept inaccuracies of order $\epsilon$ in the computed generalized eigenvalues, and inaccuracies (in angle) of $O({\epsilon})$ in the Schur vectors (provided that the eigenvalue concerned is simple and well separated from the others).

Detection of all wanted eigenvalues cannot be guaranteed; see note (13) for Algorithm 4.17.
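A minimal sketch of this test, assuming the residual has the form $r=\beta A u-\alpha B u$ for the normalized approximation $u$ (in the deflated algorithm, $r$ is additionally projected against the converged left Schur vectors):

import numpy as np

def accept_pair(A, B, u, alpha, beta, eps):
    """Step (18) stopping test: accept the approximate generalized Schur
    pair ((alpha, beta), u), with ||u||_2 = 1, once the residual norm
    drops below eps."""
    r = beta * (A @ u) - alpha * (B @ u)
    return np.linalg.norm(r) < eps, r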

(22)
After acceptance of a Petrov pair, we continue the search for the next pair, with the remaining Petrov vectors as a basis for the initial search space.

(30)
We restart when the dimension of the search space for the current eigenvector exceeds ${m}_{\max}$. The process is restarted with the subspaces spanned by the ${m}_{\min}$ left and right Ritz vectors corresponding to the generalized Ritz pairs closest to the target value $\tau $.
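A sketch of the shrinking step, assuming `SA, SB, UL, UR` come from the reordered generalized Schur form of $(M^A,M^B)$ with the pairs closest to $\tau$ leading (as in the `ordqz` sketch above):

def restart(V, W, VA, VB, SA, SB, UL, UR, m_min):
    """Reduce the search and test spaces to dimension m_min by keeping the
    Petrov vectors of the m_min generalized Schur pairs closest to tau.
    The projected matrices shrink to the leading m_min-by-m_min blocks."""
    k = m_min
    return (V @ UR[:, :k], W @ UL[:, :k],
            VA @ UR[:, :k], VB @ UR[:, :k],
            SA[:k, :k], SB[:k, :k])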

(36)
We have collected the locked (computed) right Schur vectors in ${Q}$, and the matrix $\widetilde{Q}$ is ${Q}$ expanded with the current right Schur vector approximation ${u}$. Likewise, the converged left Schur vectors have been collected in ${Z}$, and $\widetilde{Z}$ is ${Z}$ expanded with ${p}$. This yields a more compact formulation: the correction equation in step (37) of Algorithm 8.1 is equivalent to the one in equation (8.13) for the deflated pair in (8.15). The new correction ${t}$ has to be orthogonal to the columns of ${Q}$ as well as to ${u}$.

Of course, the correction equation can be solved by any suitable process, for instance, a preconditioned Krylov subspace method designed for unsymmetric systems (an unpreconditioned sketch is given below). However, because of the projections, we always need a preconditioner (which may be the identity operator if nothing else is available) that is deflated by the same skew projections, so that we obtain a mapping between $\widetilde{Q}^\perp$ and itself. Because of the occurrence of $\widetilde{Q}$ and $\widetilde{Z}$, one has to be careful with the use of preconditioners for the matrix $\eta A -\zeta B$. Preconditioners can be included as in Algorithm 8.2. Make sure that the starting vector $t_0$ for an iterative solver satisfies the orthogonality constraint $\widetilde Q^\ast t_0=0$. Note that significant savings per step can be made in Algorithm 8.2 if ${K}$ is kept the same for a few Jacobi-Davidson iterations. In that case columns of $\widehat{Z}$ can be saved from previous steps, and the matrix ${\cal M}$, as well as its ${\cal L}{\cal U}$ decomposition, can be updated from previous steps.

It is not necessary to solve the correction equation very accurately. A strategy, often used for inexact Newton methods [113], also works well here: increase the accuracy with the Jacobi-Davidson iteration step, for instance, solve the correction equation with a residual reduction of $2^{-\ell}$ in the $\ell$th Jacobi-Davidson iteration ($\ell$ is reset to 0 when a Schur vector is detected).

In the first few steps in particular, the approximate eigenvalue $\theta $ may be very inaccurate, and it then makes little sense to solve the correction equation accurately. In this stage it can be more effective to replace $\theta $ temporarily by $\tau $, or to take ${t}=-r$ for the expansion of the search subspace [335,172].
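The following sketch puts these remarks together for the unpreconditioned case $K=I$ (an illustration only; with a genuine preconditioner one deflates $K$ with the same skew projections, as in Algorithm 8.2). The zero starting vector satisfies $\widetilde Q^\ast t_0=0$, and the tolerance follows the $2^{-\ell}$ strategy; `rtol` is the keyword in recent SciPy, older versions call it `tol`.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_correction(A, B, Qt, Zt, alpha, beta, r, ell):
    """Approximately solve the deflated correction equation of step (37),
        (I - Zt Zt*) (beta*A - alpha*B) (I - Qt Qt*) t = -r,  t orthogonal to Qt,
    with unpreconditioned GMRES (K = I).  The residual r is assumed to be
    already orthogonal to the columns of Zt, as in the algorithm."""
    n = A.shape[0]

    def op(v):
        v = v - Qt @ (Qt.conj().T @ v)          # right skew projection
        y = beta * (A @ v) - alpha * (B @ v)
        return y - Zt @ (Zt.conj().T @ y)       # left skew projection

    Aop = LinearOperator((n, n), matvec=op, dtype=complex)
    # Residual reduction 2^{-ell} in the ell-th Jacobi-Davidson iteration;
    # in the first steps one may base (alpha, beta) on tau instead.
    t, _ = gmres(Aop, -r, x0=np.zeros(n, dtype=complex), rtol=2.0**(-ell))
    return t - Qt @ (Qt.conj().T @ t)           # enforce orthogonality to Qt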

For the full theoretical background of this method, as well as details on the deflation technique with Schur vectors, see [172].

[Algorithm 8.2: Approximate solution of the deflated Jacobi-Davidson correction equation; the pseudocode is not reproduced here.]

