Computing s and sep
To explain s and sep, we first need to introduce the spectral projector P [94,76], and the separation sep(A, B) of two matrices A and B [94,98].
We may assume the matrix A is in Schur form, because reducing it to this form does not change the values of s and sep. Consider a cluster of m eigenvalues, counted with multiplicity. Further assume the n-by-n matrix A is partitioned as

    A = [ A_{11}  A_{12} ]
        [   0     A_{22} ]                                        (4.1)

where the eigenvalues of the m-by-m matrix A_{11} are exactly those in which we are interested. In practice, if the eigenvalues on the diagonal of A are in the wrong order, routine xTREXC can be used to put the desired ones in the upper left corner as shown.
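As a sketch of this reordering step (using SciPy's `schur` with a sort callable, which calls LAPACK's xGEES-family routines internally, rather than calling xTREXC directly), the following selects the eigenvalues with positive real part as the cluster A_{11}; the matrix and selection rule here are illustrative assumptions:

```python
# Sketch: compute a Schur form with a chosen eigenvalue cluster moved to
# the upper-left corner. SciPy's schur() with a sort callable performs
# the reordering; xTREXC is the underlying LAPACK building block for
# moving individual eigenvalues.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Select the cluster: all eigenvalues with positive real part.
T, Z, sdim = schur(A, output="complex", sort=lambda x: x.real > 0)

m = sdim                     # size of the selected cluster A_{11} = T[:m, :m]
print("cluster size m =", m)
print("leading eigenvalues:", np.diag(T)[:m])
```

After the call, `T[:m, :m]` plays the role of A_{11} in (4.1) and `T[m:, m:]` the role of A_{22}.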
We define the spectral projector, or simply projector, P belonging to the eigenvalues of A_{11} as

    P = [ I_m  R ]
        [  0   0 ]                                                (4.2)

where R satisfies the system of linear equations

    A_{11}R - RA_{22} = A_{12}.                                   (4.3)

Equation (4.3) is called a Sylvester equation. Given the Schur form (4.1), we solve equation (4.3) for R using the subroutine xTRSYL.
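A minimal sketch of this solve in SciPy (whose `solve_sylvester` uses xTRSYL internally): `solve_sylvester(A, B, Q)` solves AX + XB = Q, so equation (4.3) is obtained by passing -A_{22} as B. The block sizes and random data below are illustrative assumptions:

```python
# Sketch: solve the Sylvester equation (4.3), A_11 R - R A_22 = A_12.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
m, n = 3, 7
A11 = rng.standard_normal((m, m))
A22 = rng.standard_normal((n - m, n - m))
A12 = rng.standard_normal((m, n - m))

# solve_sylvester solves A X + X B = Q, so use B = -A_22.
R = solve_sylvester(A11, -A22, A12)

print(np.allclose(A11 @ R - R @ A22, A12))   # residual check
```

Note the solve requires A_{11} and A_{22} to have no common eigenvalue, which is exactly the condition sep(A_{11}, A_{22}) > 0 discussed below.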
We can now define s for the eigenvalues of A_{11}:

    s = 1 / ||P||_2 = 1 / sqrt(1 + ||R||_2^2).                    (4.4)

In practice we do not use this expression, since ||R||_2 is hard to compute. Instead we use the more easily computed underestimate

    1 / sqrt(1 + ||R||_F^2),                                      (4.5)

which can underestimate the true value of s by no more than a factor sqrt(min(m, n-m)). This underestimation makes our error bounds more conservative. This approximation of s is called RCONDE in xGEEVX and xGEESX.
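The two quantities (4.4) and (4.5) and the factor bounding their ratio can be checked directly. The sketch below (our own construction, not LAPACK code; variable names are ours) computes both from an R obtained by solving (4.3):

```python
# Sketch: compute s from (4.4) and its Frobenius-norm underestimate (4.5),
# and verify the underestimation factor sqrt(min(m, n-m)).
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
m, n = 2, 5
A11 = rng.standard_normal((m, m))
A22 = rng.standard_normal((n - m, n - m))
A12 = rng.standard_normal((m, n - m))
R = solve_sylvester(A11, -A22, A12)          # A_11 R - R A_22 = A_12

s_true = 1.0 / np.sqrt(1.0 + np.linalg.norm(R, 2) ** 2)      # (4.4)
s_est  = 1.0 / np.sqrt(1.0 + np.linalg.norm(R, "fro") ** 2)  # (4.5)

# (4.5) underestimates (4.4) by at most a factor sqrt(min(m, n-m)).
print(s_est <= s_true <= s_est * np.sqrt(min(m, n - m)))
```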
The separation sep(A_{11}, A_{22}) of the matrices A_{11} and A_{22} is defined as the smallest singular value of the linear map in (4.3) which takes X to A_{11}X - XA_{22}, i.e.,

    sep(A_{11}, A_{22}) = min_{X != 0} ||A_{11}X - XA_{22}||_F / ||X||_F.    (4.6)
This formulation lets us estimate sep(A_{11}, A_{22}) using the condition estimator xLACON [59,62,63], which estimates the 1-norm of a linear operator given the ability to apply the operator and its transpose quickly to arbitrary vectors x. Writing T for the map taking X to A_{11}X - XA_{22}, multiplying an arbitrary vector by T^{-1} means solving the Sylvester equation (4.3) with an arbitrary right-hand side using xTRSYL, and multiplying by T^{-T} means solving the same equation with A_{11} replaced by A_{11}^{T} and A_{22} replaced by A_{22}^{T}. Solving either equation costs at most O(n^{3}) operations, or as few as O(n^{2}) if m << n. Since the true value of sep is 1/||T^{-1}||_2 but we use 1/||T^{-1}||_1, our estimate of sep may differ from the true value by as much as a factor sqrt(m(n-m)). This approximation to sep is called RCONDV by xGEEVX and xGEESX.
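The gap between the 1-norm-based estimate and the true sep can be demonstrated on a small example by forming T explicitly as a matrix (our own construction for illustration; LAPACK never forms this matrix):

```python
# Sketch: compare the 1-norm-based estimate 1/||T^{-1}||_1 against the
# true sep = 1/||T^{-1}||_2, using the explicit matrix of the map
# X -> A_11 X - X A_22 acting on vec(X).
import numpy as np

rng = np.random.default_rng(3)
m, n = 2, 5
A11 = rng.standard_normal((m, m))
A22 = rng.standard_normal((n - m, n - m))

K = np.kron(np.eye(n - m), A11) - np.kron(A22.T, np.eye(m))
Kinv = np.linalg.inv(K)

sep_true = 1.0 / np.linalg.norm(Kinv, 2)   # = sigma_min(K)
sep_est  = 1.0 / np.linalg.norm(Kinv, 1)

# The two differ by at most a factor sqrt(m(n-m)) in either direction.
f = np.sqrt(m * (n - m))
print(sep_true / f <= sep_est <= sep_true * f)
```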
Another formulation which in principle permits an exact evaluation of sep(A_{11}, A_{22}) is

    sep(A_{11}, A_{22}) = sigma_min( I_{n-m} ⊗ A_{11} - A_{22}^{T} ⊗ I_m ),   (4.7)

where X ⊗ Y = [x_{ij}Y] is the Kronecker product of X and Y. This method is generally impractical, however, because the matrix whose smallest singular value we need is m(n-m)-dimensional, which can be as large as n^{2}/4. Thus we would require as much as O(n^{4}) extra workspace and O(n^{6}) operations, much more than the estimation method of the last paragraph.
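The Kronecker formulation (4.7) can be verified numerically on a small example (our own sketch; sizes and data are illustrative). The key identity is that, with column-major stacking vec, the matrix I ⊗ A_{11} - A_{22}^{T} ⊗ I acts on vec(X) exactly as X -> A_{11}X - XA_{22}:

```python
# Sketch: check formulation (4.7) and compute sep exactly as the smallest
# singular value of the Kronecker-product matrix.
import numpy as np

rng = np.random.default_rng(4)
m, n = 2, 6
A11 = rng.standard_normal((m, m))
A22 = rng.standard_normal((n - m, n - m))
K = np.kron(np.eye(n - m), A11) - np.kron(A22.T, np.eye(m))

# Verify the action on a random X (vec = column-major stacking).
X = rng.standard_normal((m, n - m))
vec = lambda M: M.flatten(order="F")
print(np.allclose(K @ vec(X), vec(A11 @ X - X @ A22)))

sep = np.linalg.svd(K, compute_uv=False)[-1]   # sigma_min, so (4.6) = (4.7)
print("sep =", sep)
```

For m = n/2 the matrix K is n^{2}/4-dimensional, which is what makes this exact approach so much costlier than the estimator above.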
The expression sep(A_{11}, A_{22}) measures the ``separation'' of the spectra of A_{11} and A_{22} in the following sense. It is zero if and only if A_{11} and A_{22} have a common eigenvalue, and small if there is a small perturbation of either one that makes them have a common eigenvalue. If A_{11} and A_{22} are both Hermitian matrices, then sep(A_{11}, A_{22}) is just the gap, or minimum distance, between an eigenvalue of A_{11} and an eigenvalue of A_{22}. On the other hand, if A_{11} and A_{22} are non-Hermitian, sep(A_{11}, A_{22}) may be much smaller than this gap.
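A toy example (our own, chosen for illustration) makes the last point concrete: the eigenvalue gap between the blocks below is 1, yet the large off-diagonal entry in the non-Hermitian block A_{22} drives sep many orders of magnitude smaller.

```python
# Sketch: for non-Hermitian blocks, sep can be far smaller than the
# eigenvalue gap. Here eig(A_11) = {1}, eig(A_22) = {2, 3}, so gap = 1,
# but the 1e6 entry makes the Sylvester operator nearly singular.
import numpy as np

A11 = np.array([[1.0]])
A22 = np.array([[2.0, 1e6],
                [0.0, 3.0]])

gap = min(abs(1.0 - 2.0), abs(1.0 - 3.0))   # distance between spectra = 1

K = np.kron(np.eye(2), A11) - np.kron(A22.T, np.eye(1))
sep = np.linalg.svd(K, compute_uv=False)[-1]
print(gap, sep)   # sep is roughly 2e-6, about six orders below the gap
```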
Susan Blackford
19991001