
Related Problems
  J. Demmel

As described in §2.4.7, there are several problems closely related to the SVD whose solutions we describe here. This list is not exhaustive, because the use of low rank approximations to matrices (for which the ``optimal'' one is supplied by the truncated SVD) is very widespread in applications. Depending on the application, it may be appropriate to use an SVD algorithm described here or another low rank approximation.

  1. The quotient or generalized SVD of $A$ and $B$ is defined as follows. Suppose first that $A$ is $m$ by $n$ and $B$ is $n$ by $n$ and nonsingular. Let the SVD of $AB^{-1}$ be $U \Sigma V^*$. Then we may write two equivalent decompositions $A = U \Sigma_A X$ and $B = V \Sigma_B X$, where $X$ is $n$ by $n$ and nonsingular, $\Sigma_A = {\rm diag} (\sigma_{A,1},\ldots,\sigma_{A,n})$ is $m$ by $n$, and $\Sigma_B = {\rm diag} (\sigma_{B,1},\ldots,\sigma_{B,n})$ is $n$ by $n$, with $\sigma_{A,i}^2 + \sigma_{B,i}^2 = 1$. Then $\Sigma = \Sigma_A \Sigma_B^{-1} = {\rm diag} (\sigma_1,\ldots,\sigma_n)$, and the values $\sigma_i = \sigma_{A,i} / \sigma_{B,i}$ are called the generalized singular values of $A$ and $B$.

    Software using transformation methods (i.e., suitable for dense matrices) is available in the LAPACK driver routine xGGSVD, which handles the more general case of $B$ being singular or $p$ by $n$. It is also available as the MATLAB command gsvd. No ScaLAPACK software is available. The advantage of these algorithms is accuracy: when $B$ is ill-conditioned, they can be much more accurate than explicitly forming $AB^{-1}$ and computing its SVD. But they are also rather slower, so if $B$ is well-conditioned, it is better to form $AB^{-1}$ and compute its SVD directly.

    When $A$ and $B$ are large and sparse and $B$ is square, nonsingular, and can be factored, we recommend applying SVD algorithms discussed above that only require multiplication by the product $AB^{-1}$ and its transpose.
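    As an illustrative sketch of this approach in Python/SciPy (the matrix sizes, densities, and the identity shift that keeps $B$ nonsingular are assumptions made for the example), multiplication by $AB^{-1}$ and its transpose can be wrapped in a LinearOperator backed by a sparse LU factorization of $B$, then passed to an iterative SVD routine:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, svds, LinearOperator

m, n = 60, 40
A = sp.random(m, n, density=0.1, random_state=1, format="csc")
# Shift by the identity so this illustrative B is safely nonsingular.
B = sp.random(n, n, density=0.1, random_state=2, format="csc") + sp.identity(n, format="csc")

lu = splu(B)  # sparse LU factorization of B; reused for every solve

# Multiplication by A B^{-1} and its transpose, without ever forming A B^{-1}
C = LinearOperator(
    (m, n),
    matvec=lambda x: A @ lu.solve(x),                # (A B^{-1}) x
    rmatvec=lambda y: lu.solve(A.T @ y, trans="T"),  # (A B^{-1})^T y = B^{-T} A^T y
)

# Largest few singular values of A B^{-1}, i.e., generalized singular values of (A, B)
sigma = svds(C, k=3, return_singular_vectors=False)
```

    Only solves with $B$ and products with $A$ are needed, so $AB^{-1}$ is never formed explicitly.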

    If $B$ is not square, we recommend finding the eigenvalues and eigenvectors of the generalized Hermitian eigenvalue problem $A^*A - \lambda B^*B$, using techniques from Chapter 5. This works because the decompositions above diagonalize the pencil: $X^{-*}A^*AX^{-1} = \Sigma_A^* \Sigma_A$ and $X^{-*}B^*BX^{-1} = \Sigma_B^* \Sigma_B$, so the columns of $X^{-1}$ are the eigenvectors and the eigenvalues are $\lambda_i = \sigma_{A,i}^2 / \sigma_{B,i}^2 = \sigma_i^2$.
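    As a small numerical check (a sketch in Python/NumPy; the matrix sizes and random test matrices are assumptions made for the example), the generalized singular values obtained from the SVD of $AB^{-1}$ agree with the square roots of the eigenvalues of the pencil $A^*A - \lambda B^*B$:

```python
import numpy as np
from scipy.linalg import eigh, svdvals

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))  # nonsingular with probability 1

# Generalized singular values of (A, B) via the SVD of A B^{-1}
sigma = svdvals(A @ np.linalg.inv(B))

# The pencil A^*A - lambda B^*B has eigenvalues lambda_i = sigma_i^2
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
assert np.allclose(np.sort(sigma**2), np.sort(lam))

# Recover sigma_{A,i}, sigma_{B,i} satisfying sigma_A^2 + sigma_B^2 = 1
sigma_B = 1.0 / np.sqrt(1.0 + sigma**2)
sigma_A = sigma * sigma_B
assert np.allclose(sigma_A**2 + sigma_B**2, 1.0)
```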

  2. Yet more generalized SVDs can be defined for $AB$ instead of $AB^{-1}$ (the product SVD), or indeed arbitrary products of the form $A_1^{\pm 1} A_2^{\pm 1} \cdots A_k^{\pm 1}$. No specialized software for these cases is available in LAPACK, ScaLAPACK, or MATLAB. So in the dense case, one would form these products explicitly (although algorithms have been proposed that avoid this [54,3,53]), and in the sparse case, one would multiply by them without forming them.
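    As a hypothetical sketch of the sparse approach in Python/SciPy (the factor sizes are assumptions made for the example), the leading singular values of a product $A_1 A_2$ can be computed by applying the factors one at a time, never forming the product:

```python
import numpy as np
from scipy.sparse.linalg import svds, LinearOperator

rng = np.random.default_rng(1)
A1 = rng.standard_normal((50, 30))
A2 = rng.standard_normal((30, 20))

# Apply A1 A2 and its transpose factor by factor, without forming the product
P = LinearOperator(
    (50, 20),
    matvec=lambda x: A1 @ (A2 @ x),
    rmatvec=lambda y: A2.T @ (A1.T @ y),
)
sigma = svds(P, k=3, return_singular_vectors=False)  # largest 3 singular values
```

    Inverse factors $A_i^{-1}$ would be handled the same way, replacing the corresponding products with solves.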

  3. Finding $x$ that minimizes $\Vert Ax - b \Vert _2$ is the linear least squares problem. Suppose $A$ is $m$ by $n$ with $m>n$; then we say that the least squares problem is overdetermined. If $A$ has full column rank and $A = U_n \Sigma_n V_n^*$ is the thin SVD of $A$, then the solution is $x = V_n \Sigma_n^{-1} U_n^*b$. It is generally cheaper to either solve the normal equations $A^*Ax = A^*b$ or use the QR decomposition $A = QR$ to compute $x = R^{-1}Q^*b$, but when $A$ is badly conditioned, i.e., $\sigma_n \ll \sigma_1$, the SVD can be much more accurate. In particular, when $A$ is very ill-conditioned one typically uses the truncated SVD of $A$ instead of the thin SVD: $x = V_t \Sigma_t^{-1} U_t^*b$.

    It is not necessary to compute the thin or truncated SVD of $A$ explicitly to solve the least squares problem. For dense problems, LAPACK driver routine xGELSD is the method of choice, using divide-and-conquer to solve the least squares problem quickly without forming an explicit thin SVD. In ScaLAPACK, there is only a routine for computing a thin SVD, PxGESVD, which could then be used for the least squares problem. For sparse problems, we can use the approximate factorization $A \approx U_k B_k V_k^*$ derived from equations (6.7) or (6.8) to get the approximate solution $x \approx V_k B_k^{-1} U_k^* b$.
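    A sketch in Python/NumPy of the thin-SVD and truncated-SVD solutions (the problem sizes and the truncation tolerance are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Thin SVD solution of min ||Ax - b||_2:  x = V_n Sigma_n^{-1} U_n^* b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# Agrees with the QR-based solver when A is well-conditioned
x_qr, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_svd, x_qr)

# Truncated SVD: keep only singular values above a (illustrative) tolerance
t = s > 1e-10 * s[0]
x_trunc = Vt[t].T @ ((U[:, t].T @ b) / s[t])
```

    For this well-conditioned example the truncation keeps all singular values, so all three solutions coincide; for an ill-conditioned $A$, the truncated solution discards the directions dominated by noise.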


Susan Blackford 2000-11-20