

Further Details: Error Bounds for General Linear Model Problems

In this subsection, we will summarize the available error bounds. The reader may also refer to [2,13,50,80] for further details.
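Recall that the GLM problem solved by xGGGLM is $\min_{x,y} \Vert y \Vert_2$ subject to $d = Ax + By$. As a hedged illustration of the quantities discussed below (xGGGLM itself uses a generalized QR factorization, not the explicit pseudo-inverses used here), the solution can be sketched in NumPy as:

```python
import numpy as np

def solve_glm(A, B, d):
    """Solve min ||y||_2 subject to A x + B y = d.

    Illustration via pseudo-inverses only; the LAPACK driver xGGGLM
    uses a generalized QR factorization, which is faster and more stable.
    """
    m = A.shape[0]
    G = np.eye(m) - A @ np.linalg.pinv(A)   # orthogonal projector onto range(A)^perp
    # Feasibility forces G B y = G d; the minimum-norm such y is:
    y = np.linalg.pinv(G @ B) @ d
    # Then d - B y lies in range(A), so x is recovered by:
    x = np.linalg.pinv(A) @ (d - B @ y)
    return x, y

# Example with m = 4 equations, n = 2 unknowns in x, p = 3 in y.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((4, 3))
d = rng.standard_normal(4)
x, y = solve_glm(A, B, d)
assert np.allclose(A @ x + B @ y, d)   # constraint holds to roundoff
```

The two pseudo-inverse operators in this sketch are exactly the $A^\dagger_B$ and $(GB)^\dagger$ that appear in the condition numbers defined below.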

Let $\widehat {x}$ and $\widehat {y}$ be the solutions computed by the driver routine xGGGLM (see subsection 4.6). Then $\widehat {x}$ is normwise backward stable and $\widehat {y}$ is stable in a mixed forward/backward sense [13]. Specifically, we have $\widehat {x}$ and $\widehat {y} = \bar {y} + \Delta \bar {y}$, where $\widehat {x}$ and $\bar {y}$ solve $\min\{\Vert y \Vert _2: \; (A + \Delta A)x + (B + \Delta B)y= d + \Delta d \} $, and

\begin{eqnarray*}
\Vert \Delta \bar {y} \Vert _2 & \leq & q(m,n,p)\epsilon\Vert \bar {y}\Vert _2, \qquad
\Vert \Delta d \Vert _2 \;\leq\; q(m,n,p)\epsilon\Vert d\Vert _2, \\
\Vert \Delta A \Vert _F & \leq & q(m,n,p)\epsilon\Vert A\Vert _F, \qquad
\Vert \Delta B \Vert _F \;\leq\; q(m,n,p)\epsilon\Vert B\Vert _F,
\end{eqnarray*}
and $q(m,n,p)$ is a modestly growing function of $m$, $n$, and $p$. We take $q(m,n,p) = 1$ in the code fragment above. Let $X^{\dagger}$ denote the Moore-Penrose pseudo-inverse of $X$. Let $\kappa_B(A) = \Vert A \Vert _F \Vert A^\dagger_B \Vert _2 $ (= CNDAB above) and $\kappa_A(B) = \Vert B \Vert _F \Vert (GB)^\dagger \Vert _2 $ (= CNDBA above), where $G = I - AA^\dagger$ and $A^\dagger_B = A^\dagger[I-B(GB)^\dagger]$. When $q(m,n,p)\epsilon$ is small, the errors $x-\widehat {x}$ and $y - \widehat {y}$ are bounded by

\begin{eqnarray*}
\frac{ \Vert x-\widehat {x} \Vert _2 }{ \Vert x \Vert _2 } & \leq &
q(m,n,p)\epsilon \, \bigg[ \kappa_B(A)
\left( 1 + \kappa_B(A) \frac{ \Vert B \Vert _F \Vert y \Vert _2 }{ \Vert A \Vert _F \Vert x \Vert _2 } \right)
+ \kappa_A(B) \frac{ \Vert B \Vert _F \Vert y \Vert _2 }{ \Vert A \Vert _F \Vert x \Vert _2 } \bigg] , \\
\frac{ \Vert y-\widehat {y} \Vert _2 }{ \Vert y \Vert _2 } & \leq &
q(m,n,p)\epsilon \, \bigg[ \kappa_A(B)
\left( 1 + \kappa_B(A) \frac{ \Vert y \Vert _2 }{ \Vert d \Vert _2 } \right)
+ \Vert B \Vert _F \Vert (GB)^\dagger \Vert _2 ^2 \bigg] .
\end{eqnarray*}




When $B = I$, the GLM problem reduces to the standard LS problem: $y$ is the residual vector $y = d - Ax$, and we have

\begin{displaymath}
\frac{ \Vert x-\widehat {x} \Vert _2 }{ \Vert x \Vert _2 } \leq
q(m,n)\epsilon \, \kappa(A) \Bigg( 1 + \kappa(A)
\frac{ \Vert y \Vert _2 }{ \Vert A \Vert _F \Vert x \Vert _2 } \Bigg),
\end{displaymath}

and

\begin{displaymath}
\frac{ \Vert y - \widehat {y} \Vert _2 }{ \Vert y \Vert _2 } \leq
q(m,n)\epsilon + q(m,n)\epsilon \,
\kappa(A)\frac{ \Vert y \Vert _2 }{ \Vert d \Vert _2 } ,
\end{displaymath}

where $\kappa(A) = \kappa_B(A) = \Vert A \Vert _F \Vert A^\dagger \Vert _2 $ and $\kappa_A(B) = 1$. The error bound on $x-\widehat {x}$ is the same as for the LSE problem (see section 4.6.1.1), which is essentially the same as that given in section 4.5.1. The bound on the error in $\widehat {y}$ is the same as that provided in [55, section 5.3.7].


Susan Blackford
1999-10-01