

Transfer Residual Errors to Backward Errors.

It turns out that the computed eigenvalue and eigenvector(s) can always be interpreted as an exact eigenvalue and eigenvector(s) of a nearby matrix, i.e.,

\begin{displaymath}
(A+E)\wtd x=\wtd\lambda\wtd x,
\end{displaymath}

and

\begin{displaymath}
\wtd y^{\ast} (A+E)=\wtd\lambda\wtd y^{\ast}
\end{displaymath}

if $\wtd y$ is available, where the error matrices $E$ are generally small in norm relative to $\Vert A\Vert$. Such an interpretation serves two purposes: first, it indirectly reflects how accurately the eigenproblem has been solved; second, it can be used to derive error bounds for the computed eigenvalues and eigenvectors, as discussed below. Ideally, we would like $E$ to be zero, but this hardly ever happens in practice. Infinitely many error matrices $E$ satisfy the above equations; we seek only the optimal or nearly optimal ones, in the sense that certain norms (usually the 2-norm $\Vert\cdot\Vert _2$ or the Frobenius norm $\Vert\cdot\Vert _{F}$) are minimized over all feasible error matrices. In fact, practical purposes are served if we can determine upper bounds for the norms of these (nearly) optimal matrices. The following collection of results shows that if $r$ (and $s^{\ast}$, if available) is small, then the error matrix $E$ is small, too [425].

We distinguish two cases.

  1. Only $\wtd x$ is available, but $\wtd y$ is not. Then the optimal error matrix $E$ (in both the 2-norm and the Frobenius norm) for which $\wtd\lambda$ and $\wtd x$ are an exact eigenvalue and corresponding eigenvector of $A+E$, i.e.,
    \begin{displaymath}
(A+E)\wtd x=\wtd\lambda\wtd x,
\end{displaymath} (210)

    satisfies

    \begin{displaymath}
\Vert E\Vert _2=\Vert E\Vert _{F}=\Vert r\Vert _2.
\end{displaymath}

  2. Both $\wtd x$ and $\wtd y$ are available. Then the optimal error matrices $E_2$ (in the 2-norm) and $E_{F}$ (in the Frobenius norm) for which $\wtd\lambda$, $\wtd x$, and $\wtd y$ are an exact eigenvalue and its corresponding right and left eigenvectors of $A+E$, i.e.,
    \begin{displaymath}
(A+E)\wtd x=\wtd\lambda\wtd x
\quad\mbox{and}\quad
\wtd y^{\ast} (A+E)=\wtd\lambda\wtd y^{\ast}
\end{displaymath} (211)

    for $E=E_2$ and $E=E_{F}$, respectively, satisfy

    \begin{displaymath}
\Vert E_2\Vert _2=\max\{\Vert r\Vert _2,\Vert s^{\ast}\Vert _2\}
\end{displaymath}

    and

    \begin{displaymath}
\Vert E_{F}\Vert _{F}=\sqrt{\Vert r\Vert^2_2 +\Vert s^{\ast}\Vert^2_2
 - \vert\wtd y^{\ast} A\wtd x-\wtd\lambda\wtd y^{\ast}\wtd x\vert^2}.
\end{displaymath}

See [256,431].
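
The optimal error matrices can in fact be written down explicitly: for case 1, the rank-one matrix $E=-r\,\wtd x^{\ast}$ (with $\Vert\wtd x\Vert_2=1$) attains the bound, and for case 2 a rank-two construction in the spirit of [256] attains the Frobenius-optimal value. The following NumPy sketch illustrates both facts on a random matrix; the variable names, the seed, the perturbation size, and the normalization $\Vert\wtd x\Vert_2=\Vert\wtd y\Vert_2=1$ are assumptions made for this illustration, not part of the results above.

import numpy as np

# Sketch: verify the backward error results on a random example.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))

# Perturb an exact eigentriplet to obtain an "approximate" one.
w, V = np.linalg.eig(A)
WH = np.linalg.inv(V)                 # rows of inv(V) give left eigenvectors
lam = w[0] + 1e-8
x = V[:, 0] + 1e-8 * rng.standard_normal(n)
y = WH[0, :].conj() + 1e-8 * rng.standard_normal(n)
x /= np.linalg.norm(x)
y /= np.linalg.norm(y)

# Residuals as in the text: r = A x - lam x  and  s* = y* A - lam y*.
r = A @ x - lam * x
s = y.conj() @ A - lam * y.conj()     # the row vector s*

# Case 1: E = -r x* gives (A+E) x = lam x exactly, with
# ||E||_2 = ||E||_F = ||r||_2.
E1 = -np.outer(r, x.conj())
print(np.linalg.norm((A + E1) @ x - lam * x))      # ~ 0
print(np.linalg.norm(E1, 2), np.linalg.norm(r))    # equal

# Case 2: a rank-two construction satisfying both equations in (211)
# and attaining the Frobenius-optimal norm stated above.
yr = y.conj() @ r                     # equals y* A x - lam y* x
E2 = -np.outer(r, x.conj()) - np.outer(y, s) + yr * np.outer(y, x.conj())
print(np.linalg.norm((A + E2) @ x - lam * x))              # ~ 0
print(np.linalg.norm(y.conj() @ (A + E2) - lam * y.conj()))  # ~ 0
opt = np.sqrt(np.linalg.norm(r)**2 + np.linalg.norm(s)**2 - abs(yr)**2)
print(np.linalg.norm(E2, 'fro'), opt)              # equal

The key identity used in case 2 is $s^{\ast}\wtd x=\wtd y^{\ast}r=\wtd y^{\ast}A\wtd x-\wtd\lambda\wtd y^{\ast}\wtd x$, which makes the cross terms cancel so that $E\wtd x=-r$ and $\wtd y^{\ast}E=-s^{\ast}$ hold simultaneously.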

We say that the algorithm delivering the approximate eigenpair $(\wtd\lambda,\wtd x)$ is $\tau $-backward stable for the pair with respect to the norm $\Vert\cdot\Vert$ if $(\wtd\lambda,\wtd x)$ is an exact eigenpair of $A+E$ with $\Vert E\Vert\le\tau$; analogously, the algorithm delivering the eigentriplet $(\wtd\lambda,\wtd x,\wtd y)$ is $\tau $-backward stable for the triplet with respect to the norm $\Vert\cdot\Vert$ if it is an exact eigentriplet of $A+E$ with $\Vert E\Vert\le\tau$. With these definitions in mind, statements can be made about the backward stability of the algorithm that computes the eigenpair $(\wtd\lambda,\wtd x)$ or the eigentriplet $(\wtd\lambda,\wtd x,\wtd y)$. Conventionally, an algorithm is called backward stable if $\tau = O(\epsilon_M \Vert A\Vert)$.
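
Since case 1 shows that the optimal $\Vert E\Vert_2$ for a pair equals $\Vert r\Vert_2$, the conventional criterion can be checked numerically by comparing $\Vert r\Vert_2$ against $\epsilon_M\Vert A\Vert_2$. Below is a minimal sketch using numpy.linalg.eig as a stand-in for "the algorithm"; the constant 100 hidden in the $O(\cdot)$ is an arbitrary illustrative choice, not from the text.

import numpy as np

# Sketch: test tau = O(eps_M ||A||) for eigenpairs from numpy.linalg.eig.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))

w, V = np.linalg.eig(A)
eps_M = np.finfo(A.dtype).eps
normA = np.linalg.norm(A, 2)

for lam, x in zip(w, V.T):
    x = x / np.linalg.norm(x)
    tau = np.linalg.norm(A @ x - lam * x)  # optimal ||E||_2 for this pair
    assert tau <= 100 * eps_M * normA      # backward stable, conventionally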

