C ALGORITHM 694, COLLECTED ALGORITHMS FROM ACM.
C THIS WORK PUBLISHED IN TRANSACTIONS ON MATHEMATICAL SOFTWARE,
C VOL. 17, NO. 3, SEPTEMBER, 1991, PP. 289-305.

The MATLAB M-files on this disk accompany the revised version of the
manuscript ``A Collection of Test Matrices in MATLAB'' submitted to ACM TOMS
for consideration as an algorithm.  The M-files are self-documenting, so no
documentation is provided in this file.

The contents of the disk are as follows:

readme.doc    (this file)
whatsnew.doc  (describes changes for Version 1.2, May 30 1990 of the
               collection)

% Test matrices %
% The `.m' extension is omitted in the following lists of files. %

augment   cauchy    chebspec  chebvand  chow      circul    clement
compan    condex    cycol     dingdong  dorr      dramadah  fiedler
forsythe  frank     gallery   gear      gfpp      hadamard  hanowa
hilb      invol     ipjfact   jordan    kahan     kms       krylov
lauchli   lehmer    lotkin    minij     moler     ohess     orthog
pascal    pei       rando     randsvd   riemann   tridiag   triw
vand      wathen    wilk

% Utility routines %

bandred   chop      comp      fv        house     matrix    qmult
rq        see       seqa      seqcheb   seqm      show      skew
sparse    sparsify  sub       symm

% As a backup, and an alternative means of copying the M-files to another
% disk, an ARC (compressed archive) file is also included, together with a
% program for extracting the files.
%
% To copy all the M-files from this disk in drive A: (say), move to the drive
% and directory where the files are to be copied and type
%
%    A:PKUNPAK A:MFILES
%
% This extracts all the M-files from MFILES.ARC and copies them to the
% current DOS directory.

mfiles.arc
pkunpak.exe

--------------------------------
Nick Higham
Department of Mathematics
University of Manchester
Manchester M13 9PL
England
(Phone: 061 275 5822)
na.nhigham@na-net.stanford.edu

June 4, 1990

My paper ``A Collection of Test Matrices in MATLAB'' has been accepted for
publication in ACM Trans. Math. Soft., and in revising the paper I have
prepared a new version of the collection of M-files.  The original version is
`Version 1.0, July 4, 1989' (as returned by the command `matrix(-1)') and the
new one is `Version 1.2, May 30 1990'.  There was an intermediate version,
`Version 1.1, November 15 1989'---this was the one originally submitted to
TOMS and it was not otherwise distributed.

Here is a summary of the changes in going from Version 1.0 to Version 1.2.

(1) Numerous improvements to the documentation in the comment lines of the
    M-files, including correction of errors, improved wording, and extra
    references.

(2) A few minor bugs and inefficiencies corrected.

(3) Added a new matrix, bringing the total to 45:

%AUGMENT AUGMENT(A) is the square matrix [EYE(m) A; A' ZEROS(n)] of dimension
%        m+n, where A is m-by-n.  It is the symmetric and indefinite
%        coefficient matrix of the augmented system associated with a least
%        squares problem: minimize NORM(A*x-b).
%        Special case: if A is a scalar, n say, then AUGMENT(A) is the same
%        as AUGMENT(RAND(p,q)) where n = p+q and p = ROUND(n/2),
%        that is, a random augmented matrix of dimension n is produced.

(4) Added extra functionality to RANDSVD:

function A = randsvd(n, kappa, mode, kl, ku)
%RANDSVD Random matrices with pre-assigned singular values.
...
%        Special case: if KAPPA < 0 then a random full symmetric positive
%        definite matrix is produced with COND(A) = -KAPPA and
%        eigenvalues distributed according to MODE.
%        KL and KU, if present, are ignored.

(5) Added an extra option to ORTHOG:

function Q = orthog(n, k)
%ORTHOG Orthogonal and nearly orthogonal matrices.
%       Q = ORTHOG(N, K) selects the K'th type of matrix of order N.
...
%       K = 4:  Helmert matrix: a permutation of a lower Hessenberg matrix,
%               whose first row is ONES(1:N)/SQRT(N).

(6) Changed the second parameter of SPARSE from a character selecting the
    plot type to a tolerance such that elements smaller than the tolerance
    are regarded as zero.

(7) Changed SEE.  It now works for rectangular matrices.  Instead of the
    inverse, the pseudo-inverse is plotted in the second window.  The plot of
    the columns of A against the row index in the third window is replaced in
    this new version by a plot of the singular values of A on a logarithmic
    scale.

(8) HOUSE now avoids division by zero when it is given a zero vector
    (BANDRED is the only routine that calls HOUSE).

---Nick Higham
   Dept. of Mathematics
   University of Manchester
   na.nhigham@na-net.stanford.edu

June 4, 1990
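As a quick orientation (illustrative only, not part of the original
distribution), the following MATLAB fragment exercises the items added or
extended in Version 1.2; it assumes the M-files listed above are on the
MATLAB path.

   % Illustrative only: exercises the Version 1.2 additions described above.
   matrix(-1)                    % Version string of the collection.
   C = augment(rand(3,2));       % 5-by-5 symmetric indefinite augmented matrix.
   A = randsvd(6, -100);         % Symmetric positive definite, COND(A) = 100.
   Q = orthog(5, 4);             % Helmert matrix (new K = 4 option).
   norm(Q'*Q - eye(5))           % Of order EPS, since Q is orthogonal.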
function [A, b] = wilk(n)
%WILK   Various specific matrices devised/discussed by Wilkinson.
%       [A, b] = WILK(N) is the matrix or system of order N.
%       N = 3:  upper triangular system Ux=b illustrating inaccurate solution.
%       N = 4:  lower triangular system Lx=b, ill-conditioned.
%       N = 5:  HILB(6)(1:5,2:6)*1.8144.  Symmetric positive definite.
%       N = 21: W21+, tridiagonal.  Eigenvalue problem.
%       References:
%       J.H. Wilkinson, Error analysis of direct methods of matrix inversion,
%          J. Assoc. Comput. Mach., 8 (1961), pp. 281-330.
%       J.H. Wilkinson, Rounding Errors in Algebraic Processes, Notes on
%          Applied Science No. 32, Her Majesty's Stationery Office, London,
%          1963.
%       J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
%          Press, 1965.

if n == 3
   % Wilkinson (1961) p.323.
   A = [ 1e-10   .9   -.4
             0   .9   -.4
             0    0  1e-10];
   b = [ 0 0 1]';
elseif n == 4
   % Wilkinson (1963) p.105.
   A = [0.9143e-4  0          0          0
        0.8762     0.7156e-4  0          0
        0.7943     0.8143     0.9504e-4  0
        0.8017     0.6123     0.7165     0.7123e-4];
   b = [0.6524 0.3127 0.4186 0.7853]';
elseif n == 5
   % Wilkinson (1965), p.234.
   A = hilb(6);
   A = A(1:5, 2:6)*1.8144;
elseif n == 21
   % Taken from gallery.m.  Wilkinson (1965), p.308.
   E = diag(ones(n-1,1),1);
   m = (n-1)/2;
   A = diag(abs(-m:m)) + E + E';
else
   error('Sorry, that value of N is not available.')
end

function A = wathen(nx, ny, k)
%WATHEN A = WATHEN(NX,NY) is a random N-by-N finite element matrix
%        where N = 3*NX*NY + 2*NX + 2*NY + 1.
%        A is precisely the "consistent mass matrix" for a regular NX-by-NY
%        grid of 8-node (serendipity) elements in 2 space dimensions.
%        A is symmetric positive definite for any (positive) values of
%        the "density", RHO(NX,NY), which is chosen randomly in this routine.
%        In particular, if D = DIAG(DIAG(A)), then
%              0.25 <= EIG(INV(D)*A) <= 4.5
%        for any positive integers NX and NY and any densities RHO(NX,NY).
%        This diagonally scaled matrix is returned by WATHEN(NX,NY,1).
%        Reference: A.J. Wathen, Realistic eigenvalue bounds for the Galerkin
%        mass matrix, IMA J. Numer. Anal., 7 (1987), pp. 449-457.
%        BEWARE - this is a sparse matrix and it quickly gets large!
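%        Example (illustrative, not part of the original file): the bound
%        above can be checked numerically via
%            A = wathen(3, 3, 1);   % Diagonally scaled matrix, N = 40.
%            e = eig(A);            % All eigenvalues lie in [0.25, 4.5].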
if nargin < 2, error('Two dimensioning arguments must be specified.'), end if nargin < 3, k = 0; end e1 = [6,-6,2,-8;-6,32,-6,20;2,-6,6,-6;-8,20,-6,32]; e2 = [3,-8,2,-6;-8,16,-8,20;2,-8,3,-8;-6,20,-8,16]; e = [e1,e2;e2',e1]/45; n = 3*nx*ny+2*nx+2*ny+1; A = zeros(n); rand('uniform') RHO = 100*rand(nx,ny); for j=1:ny for i=1:nx nn(1) = 3*j*nx+2*i+2*j+1; nn(2) = nn(1)-1; nn(3) = nn(2)-1; nn(4) = (3*j-1)*nx+2*j+i-1; nn(5) = 3*(j-1)*nx+2*i+2*j-3; nn(6) = nn(5)+1; nn(7) = nn(6)+1; nn(8) = nn(4)+1; em = e*RHO(i,j); for krow=1:8 for kcol=1:8 A(nn(krow),nn(kcol)) = A(nn(krow),nn(kcol))+em(krow,kcol); end end end end if k == 1 A = diag(diag(A)) \ A; end function V = vand(m, p) %VAND V = VAND(P), where P is a vector, produces the (primal) % Vandermonde matrix based on the points P, i.e. V(i,j) = P(j)^(i-1). % VAND(M,P) is a rectangular version of VAND(P) with M rows. % Special case: If P is a scalar then P equally spaced points on [0,1] % are used. % Reference: N.J. Higham, Stability analysis of algorithms for solving % confluent Vandermonde-like systems, SIAM J. Matrix Anal. Appl., 11 (1990), % pp. 23-41. if nargin == 1, p = m; end n = max(size(p)); % Handle scalar p. if n == 1 n = p; p = seqa(0,1,n); end if nargin == 1, m = n; end p = p(:).'; % Ensure p is a row vector. V = ones(m,n); for i=2:m V(i,:) = p.*V(i-1,:); end function t = triw(n, alpha, k) %TRIW TRIW(N, ALPHA, K) is the upper triangular matrix with ones on % the diagonal and ALPHAs on the first K >= 0 superdiagonals. % N may be a 2-vector, in which case the matrix is N(1)-by-N(2) and % upper trapezoidal. % Defaults: ALPHA = -1, % K = N-1 (full upper triangle). % TRIW(N) is a matrix discussed by Wilkinson. m = n(1); % Parameter n specifies dimension: m-by-n. n = n(max(size(n))); if nargin < 3, k = n-1; end if nargin < 2, alpha = -1; end if max(size(alpha)) ~= 1 error('Second argument must be a scalar.') end t = tril( eye(m,n) + alpha*triu(ones(m,n), 1), k); function T = tridiag(n, x, y, z) %TRIDIAG TRIDIAG(X,Y,Z) is the tridiagonal matrix with subdiagonal X, % diagonal Y, and superdiagonal Z. % X and Z must be vectors of dimension one less than Y. % Alternatively TRIDIAG(N,C,D,E), where C, D, and E are all % scalars, yields the Toeplitz tridiagonal matrix of order N % with subdiagonal elements C, diagonal elements D, and superdiagonal % elements E. This matrix has eigenvalues (Todd 1978) % D + 2*SQRT(C*E)*COS(k*PI/(N+1)), k=1:N. % TRIDIAG(N) is the same as TRIDIAG(N,-1,2,-1), which is % a symmetric positive definite M-matrix (the negative of the % second difference matrix). % Reference: J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, % Birkhauser, Basel, and Academic Press, New York, 1977, p. 155. if nargin == 1, x = -1; y = 2; z = -1; end if nargin == 3, z = y; y = x; x = n; end x = x(:); y = y(:); z = z(:); % Force column vectors. if max( [ size(x) size(y) size(z) ] ) == 1 x = x*ones(n-1,1); z = z*ones(n-1,1); y = y*ones(n,1); else [nx, m] = size(x); [ny, m] = size(y); [nz, m] = size(z); if (ny - nx - 1) | (ny - nz -1) error('Dimensions of vector arguments are incorrect.') end end T = diag(x, -1) + diag(y) + diag(z, 1); function A = riemann(n) %RIEMANN A = RIEMANN(N) is a N-by-N matrix associated with the % Riemann hypothesis, which is true if and only if % DET(A) = O( N! N^(-1/2+epsilon) ) for every epsilon>0 % ('!' denotes factorial). % A = B(2:N+1, 2:N+1), where B(i,j) = i-1 if i divides j and % -1 otherwise. % Properties include, with M = N+1: % Each eigenvalue E(i) satisfies ABS(E(i)) <= M - 1/M. 
% i <= E(i) <= i+1 with at most M-SQRT(M) exceptions. % All integers in the interval (M/3, M/2] are eigenvalues. % Reference: F. Roesler, Riemann's hypothesis as an % eigenvalue problem, Linear Algebra and Appl., 81 (1986), % pp. 153-198. n = n+1; i = (2:n)'*ones(1,n-1); j = i'; A = i .* (~rem(j,i)) - ones(n-1); function A = randsvd(n, kappa, mode, kl, ku) %RANDSVD Random matrices with pre-assigned singular values. % RANDSVD(N, KAPPA, MODE, KL, KU) is a (banded) random matrix of order N % with COND(A) = KAPPA and singular values from the distribution MODE. % N may be a 2-vector, in which case the matrix is N(1)-by-N(2). % Available types: % MODE = 1: one large singular value, % MODE = 2: one small singular value, % MODE = 3: geometrically distributed singular values, % MODE = 4: arithmetically distributed singular values, % MODE = 5: random singular values with unif. dist. logarithm. % If omitted, MODE defaults to 3, and KAPPA defaults to SQRT(1/EPS). % If MODE < 0 then the effect is as for ABS(MODE) except that in the % original matrix of singular values the order of the diagonal entries is % reversed: small to large instead of large to small. % KL and KU are the lower and upper bandwidths respectively; if they are % omitted a full matrix is produced. If only KL is present, KU defaults % to KL. % Special case: if KAPPA < 0 then a random full symmetric positive % definite matrix is produced with COND(A) = -KAPPA and % eigenvalues distributed according to MODE. % KL and KU, if present, are ignored. % This routine is similar to the more comprehensive Fortran routine xLATMS % in the following reference: % J.W. Demmel and A. McKenney, A test matrix generation suite, % LAPACK Working Note #9, Courant Institute of Mathematical Sciences, % New York, 1989. if nargin < 2, kappa = sqrt(1/eps); end if nargin < 3, mode = 3; end if nargin < 4, kl = n-1; end % Full matrix. if nargin < 5, ku = kl; end % Same upper and lower bandwidths. if abs(kappa) < 1, error('Condition number must be at least 1!'), end posdef = 0; if kappa < 0, posdef = 1; kappa = -kappa; end % Special case. p = min(n); m = n(1); % Parameter n specifies dimension: m-by-n. n = n(max(size(n))); if p == 1 % Handle case where A is a vector. rand('normal') A = rand(m, n); A = A/norm(A); return end j = abs(mode); % Set up vector sigma of singular values. if j == 3 factor = kappa^(-1/(p-1)); sigma = factor.^[0:p-1]; elseif j == 4 sigma = ones(p,1) - (0:p-1)'/(p-1)*(1-1/kappa); elseif j == 5 % In this case cond(A) <= kappa. rand('uniform') sigma = exp( -rand(p,1)*log(kappa) ); elseif j == 2 sigma = ones(p,1); sigma(p) = 1/kappa; elseif j == 1 sigma = ones(p,1)./kappa; sigma(1) = 1; end % Convert to diagonal matrix of singular values. if mode < 0 sigma = sigma(p:-1:1) end sigma = diag(sigma); if posdef % Handle special case. Q = qmult(p); A = Q'*sigma*Q; A = (A + A')/2; % Ensure matrix is symmetric. return end if m ~= n sigma(m, n) = 0; % Expand to m-by-n diagonal matrix. end if kl == 0 & ku == 0 % Diagonal matrix requested - nothing more to do. A = sigma; return end % A = U*sigma*V, where U, V are random orthogonal matrices from the % Haar distribution. A = qmult(sigma'); A = qmult(A'); if kl < n-1 | ku < n-1 % Bandwidth reduction. 
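% At this point A is a full matrix with the prescribed singular values;
% BANDRED applies two-sided Householder transformations, which leave the
% singular values unchanged, to reduce A to lower bandwidth KL and upper
% bandwidth KU.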
A = bandred(A, kl, ku); end function A = rando(n, k) %RANDO A = RANDO(N, K) is a random N-by-N matrix with elements from one of % the following discrete distributions (default K=1): % K=1: A(i,j) = 0 or 1 with equal probability, % K=2: A(i,j) = -1 or 1 with equal probability, % K=3: A(i,j) = -1, 0 or 1 with equal probability. % N may be a 2-vector, in which case the matrix is N(1)-by-N(2). if nargin < 2, k = 1; end m = n(1); % Parameter n specifies dimension: m-by-n. n = n(max(size(n))); rand('uniform') if k == 1 % {0, 1} A = floor( rand(m,n) + .5 ); elseif k == 2 % {-1, 1} A = 2*floor( rand(m,n) + .5 ) - 1; elseif k == 3 % {-1, 0, 1} A = round( 3*rand(m,n) - 1.5 ); end function P = pei(n, alpha) %PEI PEI(N, ALPHA), where ALPHA is a scalar, is the symmetric matrix % ALPHA*EYE(N) + ONES(N). % If ALPHA is omitted then ALPHA = 1 is used. % The matrix is singular for ALPHA = 0, -N. % Reference: M.L. Pei, A test matrix for inversion procedures, % Comm. ACM, 5 (1962), p. 508. if nargin == 1, alpha = 1; end P = alpha*eye(n) + ones(n); function P = pascal(n, k) %PASCAL PASCAL(N) is the Pascal matrix of order N: a symmetric positive % definite matrix with integer entries, made up from Pascal's % triangle. Its inverse has integer entries. % [Conjecture by NJH: the Pascal matrix is totally positive.] % PASCAL(N,1) is the lower triangular Cholesky factor (up to signs % of columns) of the Pascal matrix. It is involutary (is its own % inverse). % PASCAL(N,2) is a transposed and permuted version of PASCAL(N,1) % which is a cube root of the identity. % J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, % Birkhauser, Basel, and Academic Press, New York, 1977, p. 172. if nargin == 1, k = 0; end P = diag( (-1).^[0:n-1] ); P(:, 1) = ones(n,1); % Generate the Pascal Cholesky factor (up to signs). for j=2:n-1 for i=j+1:n P(i,j) = P(i-1,j) - P(i-1,j-1); end end if k == 0 P = P*P'; elseif k == 2 P = P'; P = P(n:-1:1,:); for i=1:n-1 P(i,:) = -P(i,:); P(:,i) = -P(:,i); end if n/2 == round(n/2), P = -P; end end function Q = orthog(n, k) %ORTHOG Orthogonal and nearly orthogonal matrices. % Q = ORTHOG(N, K) selects the K'th type of matrix of order N. % K > 0 for exactly orthogonal matrices, K < 0 for diagonal scalings of % orthogonal matrices. % Available types: (K = 1 is the default) % K = 1: Q(i,j) = SQRT(2/(n+1)) * SIN( i*j*PI/(n+1) ) % Symmetric eigenvector matrix for second difference matrix. % K = 2: Q(i,j) = 2/SQRT(2*n+1)) * SIN( 2*i*j*PI/(2*n+1) ) % Symmetric. % K = 3: Q(r,s) = EXP(2*PI*i*(r-1)*(s-1)/n) / SQRT(n) (i=SQRT(-1)) % Unitary, the Fourier matrix. Q^4 is the identity. % K = 4: Helmert matrix: a permutation of a lower Hessenberg matrix, % whose first row is ONES(1:N)/SQRT(N). % K = -1: Q(i,j) = COS( (i-1)*(j-1)*PI/(n-1) ) % Chebyshev Vandermonde-like matrix, based on extrema of T(n-1). % K = -2: Q(i,j) = COS( (i-1)*(j-1/2)*PI/n) ) % Chebyshev Vandermonde-like matrix, based on zeros of T(n). % References: % N.J. Higham and D.J. Higham, Large growth factors in Gaussian % elimination with pivoting, SIAM J. Matrix Analysis and Appl., % 10 (1989), pp. 155-164. % P. Morton, On the eigenvectors of Schur's matrix, J. Number Theory, % 12 (1980), pp. 122-127. (Re. ORTHOG(N, 3)) % H.O. Lancaster, The Helmert Matrices, Amer. Math. Monthly, 72 (1965), % pp. 4-12. 
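%        Example (illustrative, not part of the original file): for K > 0 the
%        result is orthogonal to within rounding error, e.g.
%            Q = orthog(8, 2);  norm(Q'*Q - eye(8))   % Of order EPS.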
if nargin == 1, k = 1; end if k == 1 % E'vectors second difference matrix m = (1:n)'*(1:n) * (pi/(n+1)); Q = sin(m) * sqrt(2/(n+1)); elseif k == 2 m = (1:n)'*(1:n) * (2*pi/(2*n+1)); Q = sin(m) * (2/sqrt(2*n+1)); elseif k == 3 % Vandermonde based on roots of unity m = 0:n-1; Q = exp(m'*m*2*pi*sqrt(-1)/n) / sqrt(n); elseif k == 4 % Helmert matrix Q = tril(ones(n)); Q(1,2:n) = ones(1,n-1); for i=2:n Q(i,i) = -(i-1); end Q = diag( sqrt( [n 1:n-1] .* [1:n] ) ) \ Q; elseif k == -1 % extrema of T(n-1) m = (0:n-1)'*(0:n-1) * (pi/(n-1)); Q = cos(m); elseif k == -2 % zeros of T(n) m = (0:n-1)'*(.5:n-.5) * (pi/n); Q = cos(m); end function H = ohess(x) %OHESS H = OHESS(N) is an N-by-N real, random, orthogonal upper Hessenberg % matrix. % Alternatively, H = OHESS(X), where X is an arbitrary real N-vector % (N>1) constructs H non-randomly using the elements of X as parameters. % In both cases H is constructed via a product of N-1 Givens rotations. % Note: See Gragg (1986) for how to represent an N-by-N (complex) % unitary Hessenberg matrix with positive subdiagonal elements in terms % of 2N-1 real parameters (the Schur parametrization). % This M-file handles the real case only and is intended simply as a % convenient way to generate random or non-random orthogonal Hessenberg % matrices. % Reference: W.B. Gragg, The QR algorithm for unitary Hessenberg matrices, % J. Comp. Appl. Math., 16 (1986), pp. 1-8. if any(imag(x)), error('Parameter must be real.'), end n = max(size(x)); if n == 1 % Handle scalar x. n = x; rand('uniform') x = rand(n-1,1)*2*pi; H = eye(n); rand('normal'); H(n,n) = sign(rand); else H = eye(n); H(n,n) = sign(x(n)) + (x(n)==0); % Second term ensures H(n,n) nonzero. end for i=n:-1:2 % Apply Givens rotation through angle x(i-1). theta = x(i-1); c = cos(theta); s = sin(theta); H( [i-1 i], : ) = [ c*H(i-1,:)+s*H(i,:) -s*H(i-1,:)+c*H(i,:) ]; end function A = moler(n, alpha) %MOLER A = MOLER(N, ALPHA) is the symmetric positive definite N-by-N matrix % U'*U where U = TRIW(N, ALPHA). % For ALPHA = -1 (the default) A(i,j) = min(i,j)-2, A(i,i) = i. % A has one small eigenvalue. % Nash (1979) attributes the ALPHA = -1 matrix to Moler. % Reference: J.C. Nash, Compact Numerical Methods for Computers: Linear % Algebra and Function Minimisation, Adam Hilger, Bristol, and John Wiley, % New York, 1979. pp. 76, 210. if nargin == 1, alpha = -1; end A = triw(n, alpha)'*triw(n, alpha); function A = minij(n) %MINIJ A = MINIJ(N) is the N-by-N symmetric positive definite matrix with % A(i,j) = min(i,j). % Properties, variations: % INV(A) is tridiagonal: it is minus the second difference matrix % except its (N,N) element is 1. % 2*A-ONES(A) (Givens' matrix) has tridiagonal inverse and % eigenvalues .5*sec^2([2r-1)PI/4N], r=1:N. % (N+1)ONES(A)-A has elements max(i,j), and also has a tridiagonal % inverse. % References: % J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, % Birkhauser, Basel, and Academic Press, New York, 1977, p. 158. % D.E. Rutherford, Some continuant determinants arising in physics and % chemistry---II, Proc. Royal Soc. Edin., 63, A (1952), pp. 232-241. % (For the eigenvalues of Givens' matrix.) A = min( ones(n,1)*(1:n), (1:n)'*ones(1,n) ); function A = lotkin(n) %LOTKIN A = LOTKIN(N) is the Hilbert matrix with its first row altered to % all ones. A is unsymmetric, ill-conditioned, and has many negative % eigenvalues of small magnitude. The inverse has integer entries % and is known explicitly. % Reference: M. Lotkin, A set of test matrices, MTAC, 9 (1955), % pp. 153-161. 
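%       Example (illustrative, not part of the original file):
%           A = lotkin(6);  cond(A)   % Large, since A is ill-conditioned.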
A = hilb(n); A(1,:) = ones(1,n); function A = lehmer(n) %LEHMER A = LEHMER(N) is the symmetric positive definite N-by-N matrix with % A(i,j) = i/j for j>=i. % A is totally nonnegative. INV(A) is tridiagonal, and explicit % formulas are known for its entries. % N <= COND(A) <= 4*N*N. % References: % M. Newman and J. Todd, The evaluation of matrix inversion % programs, J. Soc. Indust. Appl. Math., 6 (1958), pp. 466-476. % Solutions to problem E710 (proposed by D.H. Lehmer): The inverse % of a matrix, Amer. Math. Monthly, 53 (1946), pp. 534-535. % J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, % Birkhauser, Basel, and Academic Press, New York, 1977, p. 154. A = ones(n,1)*(1:n); A = A./A'; A = tril(A) + tril(A,-1)'; function A = lauchli(n, mu) %LAUCHLI LAUCHLI(N, MU) is the (N+1)-by-N matrix [ONES(1,N); MU*EYE(N))]. % It is a well-known example in least squares and other problems % that indicates the dangers of forming A'*A. % MU defaults to SQRT(EPS). % Reference: P. Lauchli, Jordan-Elimination und Ausgleichung nach % kleinsten Quadraten, Numer. Math, 3 (1961), pp. 226-240. if nargin < 2, mu = sqrt(eps); end A = [ones(1,n); mu*eye(n)]; function B = krylov(A, x, j) %KRYLOV KRYLOV(A, x, j) is the Krylov matrix % [x, Ax, A^2x, ..., A^(j-1)x], where A is an n-by-n matrix and % x is an n-vector. % Defaults: x = ONES(n,1), j = n. % KRYLOV(n) is the same as KRYLOV(RAND(n)). % Reference: G.H. Golub and C.F. Van Loan, Matrix Computations, Johns Hopkins % University Press, Baltimore, Maryland, 1983, p. 224. [n, n] = size(A); if n == 1 % Handle special case A = scalar. n = A; A = rand(n); end if nargin < 3, j = n; end if nargin < 2, x = ones(n,1); end B = ones(n,j); B(:,1) = x(:); for i=2:j B(:,i) = A*B(:,i-1); end function A = kms(n, rho) %KMS A = KMS(N, RHO) is the N-by-N Kac-Murdock-Szego Toeplitz matrix with % A(i,j) = RHO^(ABS((i-j))) (for real RHO). % If RHO is complex, then the same formula holds except that elements % below the diagonal are conjugated. % RHO defaults to 0.5. % Properties: % A has an LDL' factorization with % L = INV(TRIW(N,-RHO,1)'), % D(i,i) = (1-ABS(RHO)^2)*EYE(N) except D(1,1) = 1. % A is positive definite if and only if 0 < ABS(RHO) < 1. % INV(A) is tridiagonal. % Reference: W.F. Trench, Numerical solution of the eigenvalue problem % for Hermitian Toeplitz matrices, SIAM J. Matrix Analysis and Appl., % 10 (1989), pp. 135-146 (and see the references therein). if nargin < 2, rho = 0.5; end A = (1:n)'*ones(1,n); A = abs(A - A'); A = rho .^ A; if imag(rho) A = conj(tril(A,-1)) + triu(A); end function U = kahan(n, theta) %KAHAN KAHAN(N, THETA) is an upper trapezoidal matrix % involving a parameter THETA, which has some interesting % properties regarding estimation of condition and rank. % The matrix is N-by-N unless N is a 2-vector, in which case it % is N(1)-by-N(2). THETA defaults to 0.25. % The inverse of KAHAN(N, THETA) is known explicitly: see % Higham (1987, p. 588), for example. % References: % W. Kahan, Numerical linear algebra, Canadian Math. Bulletin, % 9 (1966), pp. 757-801. % N.J. Higham, A survey of condition number estimation for % triangular matrices, SIAM Review, 29 (1987), pp. 575-596. if nargin < 2, theta = 0.25; end r = n(1); % Parameter n specifies dimension: r-by-n. n = n(max(size(n))); s = sin(theta); c = cos(theta); U = eye(n) - c*triu(ones(n), 1); U = diag(s.^[1:n])*U; if r > n C U(r,n) = 0; % Extend to an r-by-n matrix. else U = U(1:r,:); % Reduce to an r-by-n matrix. 
end function J = jordan(n, lambda) %JORDAN JORDAN(N, LAMBDA) is the N-by-N Jordan block with eigenvalue LAMBDA. % LAMBDA = 1 is the default. if nargin == 1, lambda = 1; end J = lambda*eye(n) + diag(ones(n-1,1),1); function A = ipjfact(n, k) %IPJFACT A = IPJFACT(N, K) is the matrix with % A(i,j) = (i+j)! (K=0, default) % A(i,j) = 1/(i+j)! (K=1) % Both are Hankel matrices. % Suggested by P.R. Graves-Morris. if nargin == 1, k = 0; end c = cumprod(2:n+1); d = cumprod(n+1:2*n) * c(n-1); A = hankel(c, d); if k == 1 A = ones(A)./A; end function A = invol(n) %INVOL A = INVOL(N) is an N-by-N involutory (A*A=EYE(N)) and % ill-conditioned matrix. % It is a diagonally scaled version of HILB(N). % NB. B=(EYE(N)-A)/2 and B=(EYE(N)+A)/2 are idempotent (B*B=B). % Reference: A.S. Householder and J.A. Carpenter, The singular values % of involutory and of idempotent matrices, Numer. Math. 5 (1963), % pp. 234-237. A = hilb(n); d = -n; A(:, 1) = d*A(:, 1); for i = 1:n-1 d = -(n+i)*(n-i)*d/(i*i); A(i+1, :) = d*A(i+1, :); end function H = hilb(n) %HILB HILB(N) is the N-by-N Hilbert matrix with elements 1/(i+j-1), % which is a famous example of a badly conditioned matrix. % See INVHILB (standard MATLAB routine) for the exact inverse, which % has integer entries. % HILB(N) is symmetric positive definite, totally positive, and a % Hankel matrix. % This routine is shorter and faster than the one supplied with MATLAB. % References: D.E. Knuth, The Art of Computer Programming, % Volume 1, Fundamental Algorithms, Second Edition, Addison-Wesley, % Reading, Massachusetts, 1973, p. 37. % M.-D. Choi, Tricks or treats with the Hilbert matrix, Amer. Math. % Monthly, 90 (1983), pp. 301-312. if n == 1 H = 1; else H = cauchy( (1:n) - .5); end function A = hanowa(n, d) %HANOWA HANOWA(N, d) is the N-by-N block 2x2 matrix (thus N=2M must be even) % [d*EYE(M) -DIAG(1:M) % DIAG(1:M) d*EYE(M)] % It has complex eigenvalues lambda(k) = d +/- k*i (1 <= k <= M). % Parameter d defaults to -1. % Reference: E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary % Differential Equations I: Nonstiff Problems, Springer-Verlag, % Berlin, 1987. (pp. 86-87) if nargin == 1, d = -1; end m = n/2; if round(m) ~= m error('N must be even.') end A = [ d*eye(m) -diag(1:m) diag(1:m) d*eye(m)]; function H = hadamard(n) %HADAMARD HADAMARD(N) is a Hadamard matrix of order N, that is, a matrix H % with elements 1 or -1 such that H*H' = N*EYE(N). % An N-by-N Hadamard matrix with N>2 exists only REM(N,4)=0. % This function handles only the cases where N, N/12 or N/20 % is a power of 2. % This is an expanded version of a routine supplied with MATLAB. % Reference: S.W. Golomb and L.D. Baumert, The search for Hadamard % matrices, Amer. Math. Monthly, 70 (1963) pp. 12-17. if n ~= 2 & rem(n,4) ~= 0 error('For an NxN Hadamard matrix to exist, N must be 2 or a multiple of 4.') end k2 = log(n)/log(2); k12 = log(n/12)/log(2); k20 = log(n/20)/log(2); if k2 == fix(k2) H = [1 1 1 -1]; k = k2-1; elseif k12 == fix(k12) H = toeplitz( [-1 -1 1 -1 -1 -1 1 1 1 -1 1 ], ... [-1 1 -1 1 1 1 -1 -1 -1 1 -1] ); H = [ones(1,12); ones(11,1) H]; k = k12; elseif k20 == fix(k20) H = hankel( [ -1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1], ... [1 -1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1] ); H = [ones(1,20); ones(19,1) H]; k = k20; else error('Sorry, can''t handle that value of N.') end % Kronecker product construction. 
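% Each pass of the loop below replaces H by [H H; H -H], doubling its order,
% so after the loop H has order N.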
for i=1:k H = [H H H -H]; end function A = gfpp(T, c) %GFPP GFPP(T) is a matrix of order N for which Gaussian elimination % with partial pivoting yields a growth factor 2^(N-1). % T is an arbitrary nonsingular upper triangular matrix of order N-1. % GFPP(T, C) sets all the multipliers to C (0 <= C <= 1) % and gives growth factor (1+C)^(N-1). % GFPP(N, C) (a special case) is the same as GFPP(EYE(N-1), C) and % generates the well-known example of Wilkinson. % Reference: N.J. Higham and D.J. Higham, Large growth factors in % Gaussian elimination with pivoting, SIAM J. Matrix Analysis and % Appl., 10 (1989), pp. 155-164. if T ~= triu(T) | any(~diag(T)) error('First argument must be a nonsingular upper triangular matrix.') end if nargin == 1, c = 1; end if c < 0 | c > 1 error('Second parameter must be a scalar between 0 and 1 inclusive.') end [m, m] = size(T); if m == 1 % Handle the special case T = scalar n = T; m = n-1; T = eye(n-1); else n = m+1; end d = 1+c; L = eye(n) - c*tril(ones(n), -1); U = [T (d.^[0:n-2])'; zeros(1,m) d^(n-1)]; A = L*U; theta = max(abs(A(:))); A(:,n) = (theta/norm(A(:,n),inf)) * A(:,n); function A = gear(n, i, j) %GEAR A = GEAR(N,I,J) is the N-by-N matrix with ones on the sub- and % super-diagonals, SIGN(I) in the (1,ABS(I)) position, SIGN(J) % in the (N,N+1-ABS(J)) position, and zeros everywhere else. % Defaults: I=N, j=-N. % All eigenvalues are of the form 2*COS(a) and the eigenvectors % are of the form [SIN(w+a), SIN(w+2a), ..., SIN(w+Na)]. % The values of a and w are given in the reference below. % A can have double and triple eigenvalues and can be defective. % GEAR(N) is singular. % Reference: C.W. Gear, A simple set of test matrices for eigenvalue % programs, Math. Comp., 23 (1969), pp. 119-125. if nargin == 1, i = n; j = -n; end if ~(i~=0 & abs(i)<=n & j~=0 & abs(j)<=n) error('Invalid I and J parameters') end A = diag(ones(n-1,1),-1) + diag(ones(n-1,1),1); A(1, abs(i)) = sign(i); A(n, n+1-abs(j)) = sign(j); function [A, e] = gallery(n) %GALLERY Famous, and not so famous, test matrices. % A = GALLERY(N) is an N-by-N matrix with some special property. % Only the following values of N are currently available: % N = 3 is badly conditioned. % N = 4 is the Wilson matrix. Symmetric pos def, integer inverse. % N = 5 is an interesting ei'value problem: defective, nilpotent. % N = 8 is the Rosser matrix, a classic symmetric eigenvalue problem. % [A, e] = GALLERY(8) returns the exact eigenvalues in e. % N = 21 is Wilkinson's tridiagonal W21+, another eigenvalue problem. % Original version supplied with MATLAB. Modified by N.J. Higham. % References: % J.R. Westlake, A Handbook of Numerical Matrix Inversion and Solution % of Linear Equations, John Wiley, New York, 1968. % J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University % Press, 1965. if n == 3 A = [ -149 -50 -154 537 180 546 -27 -9 -25 ]; elseif n == 4 A = [10 7 8 7 7 5 6 5 8 6 10 9 7 5 9 10]; elseif n == 5 % disp('Try to find the EXACT eigenvalues and eigenvectors.') % Matrix devised by Cleve Moler. Its Jordan form has just one block, with % eigenvalue zero. Proof: A^k is nonzero for k<5, zero for k=5. % TRACE(A)=0. No simple form for null vector. A = [ -9 11 -21 63 -252 70 -69 141 -421 1684 -575 575 -1149 3451 -13801 3891 -3891 7782 -23345 93365 1024 -1024 2048 -6144 24572 ]; elseif n == 8 A = [ 611. 196. -192. 407. -8. -52. -49. 29. 196. 899. 113. -192. -71. -43. -8. -44. -192. 113. 899. 196. 61. 49. 8. 52. 407. -192. 196. 611. 8. 44. 59. -23. -8. -71. 61. 8. 411. -599. 208. 208. -52. -43. 49. 44. 
-599. 411. 208. 208. -49. -8. 8. 59. 208. 208. 99. -911. 29. -44. 52. -23. 208. 208. -911. 99. ]; % Exact eigenvalues from Westlake (1968), p.150 (ei'vectors given too): a = sqrt(10405); b = sqrt(26); e = [-10*a, 0, 510-100*b, 1000, 1000, 510+100*b, ... 1020, 10*a]'; elseif n == 21 % W21+, Wilkinson (1965), p.308. E = diag(ones(n-1,1),1); m = (n-1)/2; A = diag(abs(-m:m)) + E + E'; else error('Sorry, that value of N is not available.') end function F = frank(n, k) %FRANK F = FRANK(N, K) is the Frank matrix of order N. It is upper % Hessenberg with determinant 1. K = 0 is the default; if K = 1 the % elements are reflected about the anti-diagonal (1,N)--(N,1). % The eigenvalues of F may be obtained in terms of the zeros of the % Hermite polynomials. They are positive and occur in reciprocal % pairs. Thus if N is odd, 1 is an eigenvalue. % F has FLOOR(N/2) ill-conditioned eigenvalues---the smaller ones. % For large N, DET(FRANK(N)') comes out far from 1---see Frank (1958) % for an explanation. % References: % W.L. Frank, Computing eigenvalues of complex matrices by determinant % evaluation and by methods of Danilewski and Wielandt, J. Soc. Indust. % Appl. Math., 6 (1958), pp. 378-392 (see pp. 385, 388). % J.H. Wilkinson, Error analysis of floating-point computation, % Numer. Math., 2 (1960), pp. 319-340 (section 8). % J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University % Press, 1965 (pp. 92-93). % G.H. Golub and J.H. Wilkinson, Ill-conditioned eigensystems and the % computation of the Jordan canonical form, SIAM Review, 18 (1976), % pp. 578-619 (section 13). % The next two references give details of the eigensystem: % P.J. Eberlein, A note on the matrices denoted by $B_n$, SIAM J. Appl. % Math., 20 (1971), pp. 87-92. % J.M. Varah, A generalization of the Frank matrix, SIAM J. Sci. Stat. % Comput., 7 (1986), pp. 835-839. if nargin == 1, k = 0; end F = min( ones(n,1)*(1:n), (1:n)'*ones(1,n) ); % Take upper Hessenberg part. F = triu(F,-1); if k == 0 p=n:-1:1; F = F(p,p)'; end function A = forsythe(n, alpha, lambda) %FORSYTHE FORSYTHE(N, ALPHA, LAMBDA) is the N-by-N matrix equal to % JORDAN(N, LAMBDA) except it has an ALPHA in the (N,1) position. % It has the characteristic polynomial % DET(A-t*EYE) = (LAMBDA-t)^N - (-1)^N ALPHA. % ALPHA defaults to SQRT(EPS) and LAMBDA to 0. if nargin < 2, alpha = sqrt(eps); end if nargin < 3, lambda = 0; end A = jordan(n, lambda); A(n,1) = alpha; function A = fiedler(c) %FIEDLER A = FIEDLER(C), where C is an n-vector, is the n-by-n symmetric % matrix with elements ABS(C(i)-C(j)). % Special case: if C is a scalar, then A = FIEDLER(1:C) % (i.e. A(i,j) = ABS(i-j)). % Properties: % FIEDLER(N) has a dominant positive eigenvalue and all the other % eigenvalues are negative (Szego 1936). % Explicit formulas for INV(A) and DET(A) are given in (Todd 1977) % and attributed to Fiedler. These indicate that INV(A) is % tridiagonal except for nonzero (1,n) and (n,1) elements. % [I think these formulas are valid only if the elements of % C are in increasing or decreasing order---NJH.] % References: G. Szego, Solution to problem 3705, Amer. Math. Monthly, % 43 (1936), pp. 246-259. % J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, % Birkhauser, Basel, and Academic Press, New York, 1977, p. 159. n = max(size(c)); % Handle scalar c. if n == 1 n = c; c = 1:n; end c = c(:).'; % Ensure c is a row vector. A = ones(n,1)*c; A = abs(A-A.'); % NB. 
array transpose function A = dramadah(n, k) %DRAMADAH An anti-Hadamard matrix A is a matrix with elements 0 or 1 for % which MU(A) := NORM(INV(A),'FRO') is maximal. % A = DRAMADAH(N, K) is an N-by-N (0,1) matrix for which MU(A) is % relatively large, although not necessarily maximal. % Available types (the default is K=1): % K = 1: A is Toeplitz, with ABS(DET(A)) = 1, and MU(A) > c(1.75)^N, % where c is a constant. % K = 2: A is upper triangular and Toeplitz. % The inverses of both types have integer entries. % Reference: R.L. Graham and N.J.A. Sloane, Anti-Hadamard matrices, % Linear Algebra and Appl., 62 (1984), pp. 113-137. if nargin < 2, k = 1; end if k == 1 % Toeplitz c = ones(n,1); for i=2:4:n m = min(1,n-i); c(i:i+m) = zeros(m+1,1); end r = zeros(n,1); r(1:4) = [1 1 0 1]; if n < 4, r = r(1:n); end A = toeplitz(c,r); else % Upper triangular and Toeplitz c = zeros(n,1); c(1) = 1; r = ones(n,1); for i = 3:2:n r(i) = 0; end A = toeplitz(c,r); end function [c, d, e] = dorr(n, theta) %DORR [C, D, E] = DORR(N, THETA) returns the vectors defining a diagonally % dominant, tridiagonal M-matrix which is ill-conditioned for small % values of the parameter THETA>=0. If only one output parameter is % supplied then C = TRIDIAG(C,D,E), i.e. the full matrix is returned. % The columns of INV(C) vary greatly in norm. THETA defaults to 0.01. % Reference: F.W. Dorr, An example of ill-conditioning in the numerical % solution of singular perturbation problems, Math. Comp., 25 (1971), % pp. 271-283. if nargin < 2, theta = 0.01; end c = zeros(n,1); e = c; d = c; % All length n for convenience. Make c,e of length n-1 later. h = 1/(n+1); m = floor( (n+1)/2 ); term = theta/h^2; i = (1:m)'; c(i) = -term*ones(i); e(i) = c(i) - (0.5-i*h)/h; d(i) = -(c(i) + e(i)); i = (m+1:n)'; e(i) = -term*ones(n-m,1); c(i) = e(i) + (0.5-i*h)/h; d(i) = -(c(i) + e(i)); c = c(2:n); e = e(1:n-1); if nargout <= 1 c = tridiag(c, d, e); end function A = dingdong(n) %DINGDONG A = DINGDONG(N) is the symmetric N-by-N Hankel matrix with % A(i,j) = 0.5/(N-i-j+1.5). % The eigenvalues of A cluster around PI/2 and -PI/2. % Invented by F.N. Ris. % Reference: J.C. Nash, Compact Numerical Methods for Computers: Linear % Algebra and Function Minimisation, Adam Hilger, Bristol, and John Wiley, % New York, 1979, p. 210. p= -2*(1:n) + (n+1.5); A = cauchy(p); function A = cycol(n, k) %CYCOL A = CYCOL([M N], K) is an M-by-N matrix of the form A = B(1:M,1:N) % where B = [C C C...] and C=RAND(M,K). Thus A's columns repeat % cyclically, and A has rank at most K. K need not divide N. % K defaults to ROUND(N/4). % CYCOL(N,K), where N is a scalar, is the same as CYCOL([N N], K). % This type of matrix can lead to underflow problems for Gaussian % elimination: see NA Digest Volume 89, Issue 3 (January 22, 1989). m = n(1); % Parameter n specifies dimension: m-by-n. n = n(max(size(n))); if nargin < 2, k = max(round(n/4),1); end A = rand(m, k); for i=2:ceil(n/k) A = [A A(:,1:k)]; end A = A(:, 1:n); function A = condex(n, k, theta) %CONDEX CONDEX(N, K, THETA) is a `counter-example' matrix to a condition % estimator. It has order N and scalar parameter THETA (default 100). % If N is not equal to the `natural' size of the matrix then % the matrix is padded out with an identity matrix to order N. 
% The matrix, its natural size, and the estimator to which it applies % are specified by K (default K = 4) as follows: % K=1: 4-by-4, LINPACK (RCOND) % K=2: 3-by-3, LINPACK (RCOND) % K=3: arbitrary, LINPACK (RCOND) % K=4: N >= 4, SONEST (Higham 1988) % (Note that in practice the K=4 matrix is not usually a % counter-example because of the rounding errors in forming it.) % References: % A.K. Cline and R.K. Rew, A set of counter-examples to three condition number % estimators, SIAM J. Sci. Stat. Comput., 4 (1983), pp. 602-611. % N.J. Higham, FORTRAN codes for estimating the one-norm of a real or complex % matrix, with applications to condition estimation, ACM Trans. Math. % Soft., 14 (1988), pp. 381-396. if nargin < 3, theta = 100; end if nargin < 2, k = 4; end if k == 1 % Cline and Rew (1983), Example B. A = [1 -1 -2*theta 0 0 1 theta -theta 0 1 1+theta -(theta+1) 0 0 0 theta]; elseif k == 2 % Cline and Rew (1983), Example C. A = [1 1-1/theta^2 -2 0 1/theta -1/theta 0 0 1]; elseif k == 3 % Cline and Rew (1983), Example D. A = triw(n,-1)'; A(n,n) = -1; elseif k == 4 % Higham (1988), p. 390. x = ones(n,3); % First col is e x(2:n,2) = zeros(n-1,1); % Second col is e(1) % Third col is special vector b in SONEST x(:, 3) = (-1).^[0:n-1]' .* ( 1 + [0:n-1]'/(n-1) ); Q = orth(x); % Q*Q' is now the orthogonal projector onto span(e(1),e,b)). P = eye(n) - Q*Q'; A = eye(n) + theta*P; end % Pad out with identity as necessary. [m, m] = size(A); if m < n for i=n:-1:m+1, A(i,i) = 1; end end function A = compan(p) %COMPAN COMPAN(P), where P is an (n+1)-vector, is the n-by-n companion matrix % whose first row is -P(2:n+1)/P(1). % Special case: if P is a scalar then COMPAN(P) is the P-by-P matrix % COMPAN(1:P+1). % References: % J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, % 1965, p. 12. % C. Kenney and A.J. Laub, Controllability and stability radii for companion % form systems, Math. Control Signals Systems, 1 (1988), pp. 239-256. % (Gives explicit formulas for the singular values of COMPAN(P).) n = max(size(p)); % Handle scalar p. if n == 1 n = p+1; p = 1:n; end p = p(:)'; % Ensure p is a row vector. % Construct matrix of order n-1. if n == 2 A = 1; else A = diag(ones(1,n-2),-1); A(1,:) = -p(2:n)/p(1); end function A = clement(n, k) %CLEMENT CLEMENT(N, K) is a tridiagonal matrix with zero diagonal entries % and known eigenvalues. It is singular if N is odd. About 64 % percent of the entries of the inverse are zero. The eigenvalues % are plus and minus the numbers N-1, N-3, N-5, ..., (1 or 0). % For K=0 (the default) the matrix is unsymmetric, while for K=1 it % is symmetric. % Similar properties hold for TRIDIAG(X,Y,Z) where Y = ZEROS(N,1). % The eigenvalues still come in plus/minus pairs but they are not % known explicitly. % Reference: P.A. Clement, A class of triple-diagonal matrices for test % purposes, SIAM Review, 1 (1959), pp. 50-52. if nargin == 1, k = 0; end n = n-1; x = n:-1:1; z = 1:n; if k == 0 A = diag(x, -1) + diag(z, 1); else y = sqrt(x.*z); A = diag(y, -1) + diag(y, 1); end function C = circul(v) %CIRCUL C = CIRCUL(V) is the circulant matrix whose first row is V. % (A circulant matrix has the property that each row is obtained % from the previous one by cyclically permuting the entries one step % forward; it is a special Toeplitz matrix in which the diagonals % `wrap round'.) % Special case: if V is a scalar then C = CIRCUL(1:V). % The eigensystem of C (N-by-N) is known explicitly. If t is an Nth % root of unity, then the inner product of V with W = [1 t t^2 ... 
t^N] % is an eigenvalue of C, and W(N:-1:1) is an eigenvector of C. % Reference: P.J. Davis, Circulant Matrices, John Wiley, 1977. n = max(size(v)); if n == 1 n = v; v = 1:n; end v = v(:).'; % Make sure v is a row vector. C = toeplitz( [ v(1) v(n:-1:2) ], v ); function A = chow(n, alpha, delta) %CHOW A = CHOW(N, ALPHA, DELTA) is a Toeplitz lower Hessenberg matrix % A = H(ALPHA) + DELTA*EYE, where H(i,j) = ALPHA^(i-j+1). % H(ALPHA) has p=FLOOR((N+1)/2) zero eigenvalues, the rest being % 4*ALPHA*COS( k*PI/(N+2) )^2, k=1:N-p. % Defaults: ALPHA = 1, DELTA = 0. % References: % T.S. Chow, A class of Hessenberg matrices with known % eigenvalues and inverses, SIAM Review, 11 (1969), pp. 391-395. % G. Fairweather, On the eigenvalues and eigenvectors of a class of % Hessenberg matrices, SIAM Review, 13 (1971), pp. 220-221. if nargin < 3, delta = 0; end if nargin < 2, alpha = 1; end A = toeplitz( alpha.^(1:n), [alpha 1 zeros(1,n-2)] ) + delta*eye(n); function C = chebvand(m,p) %CHEBVAND Vandermonde-like matrix for the Chebyshev polynomials. % C = CHEBVAND(P), where P is a vector, produces the (primal) % Chebyshev Vandermonde matrix based on the points P, % i.e. C(i,j) = T_{i-1}(P(j)), where T_{i-1} is the Chebyshev % polynomial of degree i-1. % CHEBVAND(M,P) is a rectangular version of CHEBVAND(P) with M rows. % Special case: If P is a scalar then P equally spaced points on % [0,1] are used. % Reference: N.J. Higham, Stability analysis of algorithms for solving % confluent Vandermonde-like systems, SIAM J. Matrix Anal. Appl., 11 (1990), % pp. 23-41. if nargin == 1, p = m; end n = max(size(p)); % Handle scalar p. if n == 1 n = p; p = seqa(0,1,n); end if nargin == 1, m = n; end p = p(:).'; % Ensure p is a row vector. C = ones(m,n); if m == 1, return, end C(2,:) = p; % Use Chebyshev polynomial recurrence. for i=3:m C(i,:) = 2.*p.*C(i-1,:) - C(i-2,:); end function C = chebspec(n, k) %CHEBSPEC C = CHEBSPEC(N, K) is a Chebyshev spectral differentiation % matrix of order N. K = 0 (the default) or 1. % For K=0 (`no boundary conditions'), C is nilpotent, with % C^N=0 and it has the null vector ONES(N,1). C is similar to a % Jordan block of size N with eigenvalue zero. % For K=1, C is nonsingular and well-conditioned, and its eigenvalues % have negative real parts. % For both K, the computed eigenvector matrix X from EIG is % ill-conditioned (MESH(X) is interesting). % References: % C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral % Methods in Fluid Dynamics, Springer-Verlag, Berlin, 1987; p. 69. % L.N. Trefethen, M.R. Trummer, An instability phenomenon in spectral % methods, SIAM J. Numer. Anal., 24 (1987), pp. 1008-1023. % D. Funaro, Computing the inverse of the Chebyshev collocation % derivative, SIAM J. Sci. Stat. Comput., 9 (1988), pp. 1050-1057. if nargin == 1, k = 0; end % k = 1 case obtained from k = 0 case with one bigger n. if k == 1, n = n + 1; end n = n-1; C = zeros(n+1); one = ones(n+1,1); x = cos( (0:n)' * (pi/n) ); d = ones(n+1,1); d(1) = 2; d(n+1) = 2; C = (d * (one./d)') ./ (x*one'-one*x' +eye(C)); % eye(C) avoids div by zero % Now fix diagonal and signs. C(1,1) = (2*n^2+1)/6; for i=2:n+1 if rem(i,2) == 0 C(:,i) = -C(:,i); C(i,:) = -C(i,:); end if i < n+1 C(i,i) = -x(i)/(2*(1-x(i)^2)); else C(n+1,n+1) = -C(1,1); end end if k == 1 C = C(2:n+1,2:n+1); end function C = cauchy(x, y) %CAUCHY C = CAUCHY(X,Y), where X,Y are N-vectors, is the N-by-N matrix % with C(i,j) = 1/(X(i)+Y(j)). By default, Y = X. % Special case: if X is a scalar CAUCHY(X) is the same as CAUCHY(1:X). 
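%        For example, CAUCHY((1:N) - 0.5) is the Hilbert matrix HILB(N);
%        this is how HILB in this collection is implemented.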
% Explicit formulas are known for DET(C) (which is nonzero if X and Y % both have distinct elements) and the elements of INV(C). % Reference: D.E. Knuth, The Art of Computer Programming, Volume 1, % Fundamental Algorithms, Second Edition, Addison-Wesley, Reading, % Massachusetts, 1973, p. 36. n = max(size(x)); % Handle scalar x. if n == 1 n = x; x = 1:n; end if nargin == 1, y = x; end x = x(:); y = y(:); % Ensure x and y are column vectors. if any(size(x) ~= size(y)) error('Parameter vectors must be of same dimension.') end C = x*ones(1,n) + ones(n,1)*y.'; C = ones(n) ./ C; function C = augment(A); %AUGMENT AUGMENT(A) is the square matrix [EYE(m) A; A' ZEROS(n)] of dimension % m+n, where A is m-by-n. It is the symmetric and indefinite % coefficient matrix of the augmented system associated with a least % squares problem minimize NORM(A*x-b). % Special case: if A is a scalar, n say, then AUGMENT(A) is the same % as AUGMENT(RAND(p,q)) where n = p+q and p = ROUND(n/2), % that is, a random augmented matrix of dimension n is % produced. % Reference: G.H. Golub and C.F. Van Loan, Matrix Computations, Second % Edition, Johns Hopkins University Press, Baltimore, Maryland, 1989, p. 253. [m, n] = size(A); if max(m,n) == 1 n = A; p = round(n/2); q = n - p; A = rand(p,q); m = p; n = q; end C = [eye(m) A; A' zeros(n)]; function S = symm(A) %SYMM SYMM(A) is the symmetric (Hermitian) part of A, (A+A')/2. % It is the nearest symmetric (Hermitian) matrix to A in both the % 2- and the Frobenius norms. S = (A + A')./2; function S = sub(A, i, j) %SUB Principal submatrix of A. % SUB(A,i,j) is A(i:j,i:j). % SUB(A,i) is the leading principal submatrix of order i, A(1:i,1:i), % if i>0, and the trailing principal submatrix of order ABS(i) if i<0. if nargin == 2 if i >= 0 S = A(1:i, 1:i); else n = min(size(A)); S = A(n+i+1:n, n+i+1:n); end else S = A(i:j, i:j); end function A = sparsify(A, p) %SPARSIFY S = SPARSIFY(A, P) is A with elements randomly set to zero % (S = S' if A is square and A = A', i.e. symmetry is preserved). % Each element has probability P of being zeroed. % Thus on average 100*P percent of the elements of A will be zeroed. % Default: P = 0.25. if nargin < 2, p = 0.25; end if p<0 | p>1, error('Second parameter must be between 0 and 1 inclusive.'), end % Is A square and symmetric? symm = 0; if min(size(A)) == max(size(A)) if norm(A-A',1) == 0, symm = 1; end end rand('uniform') if ~symm A = A .* (rand(A) > p); % Unsymmetric case else A = triu(A) .* (rand(A) > p); % Preserve symmetry A = (A + A')/2; end function sparse(A, tol, x, y) % SPARSE SPARSE(A) plots the sparsity pattern of a matrix A, showing % an `x' for every nonzero element. % SPARSE(A, TOL) plots the elements bigger than TOL in absolute value. % SPARSE(A,TOL,x,y) overlays the current graphics screen taking (x,y) % as the origin. if nargin <= 2 x = 1; y = 1; end if nargin == 1, tol = 0; end [m, n] = size(A); A = A(m:-1:1,:)'; [m, n] = size(A); if nargin <= 2 clg hold off axis([1, m, 1, n]) else hold on end for j=n:-1:1 plot( (j+y-1) .* [zeros(x-1,1); (abs(A(:,j))>tol)], 'wx' ); hold on end hold off function S = skew(A) %SKEW SKEW(A) is the skew-symmetric (Hermitian) part of A, (A-A')/2. % It is the nearest skew-symmetric (Hermitian) matrix to A in both the % 2- and the Frobenius norms. S = (A - A')./2; function show(x) %SHOW SHOW(X) displays X in `FORMAT +' form, that is, with `+', `-' and % blank representing positive, negative and zero elements respectively. 
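%       Example (illustrative, not part of the original file):
%           show(clement(5))   % Displays the sign pattern of the Clement
%                              % tridiagonal matrix.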
format + disp(x) format function y = seqm(a, b, n) %SEQM Generate a multiplicative sequence. % Y = SEQM(A, B, N) produces a row vector comprising N % logarithmically equally spaced numbers, starting at A~=0 % and finishing at B~=0. % If A*B < 0 and N > 2 then complex results are produced. % If N is omitted then 10 points are generated. if nargin == 2, n = 10; end if n <= 1 y = a; return end p = [0:n-2]/(n-1); r = (b/a).^p; y = [a*r, b]; function x = seqcheb(n, k) %SEQCHEB Sequence of points related to Chebyshev polynomials, T_N. % X = SEQCHEB(N, K) produces a row vector of length N. % K = 1: zeros of T_N, (the default) % K = 2: extrema of T_{N-1}. if nargin == 1, k = 1; end if k == 1 % Zeros of T_n i = 1:n; j = .5*ones(i); x = cos( (i-j) * (pi/n) ); elseif k == 2 % Extrema of T_(n-1) i = 0:n-1; x = cos( i * (pi/(n-1)) ); end function y = seqa(a, b, n) %SEQA Generate an additive sequence. % Y = SEQA(A, B, N) produces a row vector comprising N equally % spaced numbers starting at A and finishing at B. % If N is omitted then 10 points are generated. if nargin == 2, n = 10; end if n <= 1 y = a; return end y = [a+(0:n-2)*(b-a)/(n-1), b]; function see(a, k) %SEE Pictures of a matrix and its (pseudo-) inverse. % SEE(A) displays MESH(A), MESH(PINV(A)), SEMILOGY(SVD(A),'x'), % and (if A is square) FV(A) in four subplot windows. % SEE(A,1) uses the full screen for each picture, with a PAUSE in % between each one. if nargin < 2, k = 0; end [m, n] = size(a); square = (m == n); clg b = pinv(a); s = svd(a); zs = (s == zeros(s)); if any( zs ) s( zs ) = []; % Remove zeros singular values for semilogy plot. end if k == 1, mesh(a), pause mesh(b), pause % plot(a), pause semilogy(s, 'xg') hold on, semilogy(s, '--'), hold off, pause if any(zs), title('Zero(s) omitted'), end if square, fv(a); end else subplot(221) mesh(a) mesh(b) % plot(a) subplot(223) semilogy(s, 'xg') hold on, semilogy(s, '--'), hold off if any(zs), subplot(223), title('Zero(s) omitted'), subplot(224), end if square, fv(a); end % subplot end function z = rq(a,x) %RQ RQ(A,x) is the Rayleigh quotient of A and x, x'*A*x/(x'*x). % This routine is called by FV. z = x'*a*x/(x'*x); function B = qmult(A) %QMULT QMULT(A) is Q*A where Q is a random real orthogonal matrix from the % Haar distribution, of dimension the number of rows in A. % Special case: if A is a scalar then QMULT(A) is the same as % QMULT(EYE(A)). N.B. This routine sets RAND('NORMAL'). % This routine is called by RANDSVD. % Reference: G.W. Stewart, The efficient generation of random % orthogonal matrices with an application to condition estimators, % SIAM J. Numer. Anal., 17 (1980), 403-409. [n, m] = size(A); % Handle scalar A. if max(n,m) == 1 n = A; A = eye(n); end d = zeros(n); rand('normal') for k = n-1:-1:1 % Generate random Householder transformation. x = rand(n-k+1,1); s = norm(x); sgn = sign(x(1)) + (x(1)==0); % Modification for sign(1)=1. s = sgn*s; d(k) = -sgn; x(1) = x(1) + s; beta = s*x(1); % Apply the transformation to A. y = x'*A(k:n,:); A(k:n,:) = A(k:n,:) - x*(y/beta); end % Tidy up signs. for i=1:n-1 A(i,:) = d(i)*A(i,:); end A(n,:) = A(n,:)*sign(rand); B = A; function A = matrix(k, n) %MATRIX MATRIX(K, N) is the N-by-N instance of the matrix number K % in the collection [1] (including some of the matrices provided % with MATLAB), with all other parameters set to their default. % N.B. Only those matrices which take an arbitrary dimension N % are included (thus GALLERY is omitted, for example). % MATRIX(K) is a string containing the name of the K'th matrix. 
% MATRIX(0) is the number of matrices, i.e. the upper limit for K. % Thus to set A to each N-by-N test matrix in turn use a loop like % for k=1:matrix(0) % A = matrix(k, N); % Aname = matrix(k); % The name of the matrix % end % MATRIX(-1) returns the version number and date of the collection. % [1] N.J. Higham, A collection of test matrices in MATLAB, Technical Report % TR 89-1025, Department of Computer Science, Cornell University, Ithaca, % NY 14853, July 1989; also Numerical Analysis Report No. 172, Department % of Mathematics, University of Manchester, Manchester, M13 9PL, England, % 1989. To appear in ACM Trans. Math. Soft. % Matrices omitted are: gallery, hadamard, hanowa, lauchli, wathen, wilk. % Matrices provided with MATLAB that are included here: invhilb, magic. % Set up string array a few lines at a time to avoid `input buffer line % overflow'. matrices = 'augment '; matrices = [matrices 'cauchy '; 'chebspec'; 'chebvand'; 'chow '; 'circul '; ... 'clement '; 'compan '; 'condex '; 'cycol '; 'dingdong'; ... 'dorr '; 'dramadah'; 'fiedler '; 'forsythe'; 'frank '; ... 'gear '; 'gfpp '; 'hilb '; 'invhilb '; 'invol ';]; matrices = [matrices 'ipjfact '; 'jordan '; 'kahan '; 'kms '; 'krylov '; ... 'lehmer '; 'lotkin '; 'magic '; 'minij '; 'moler '; ... 'ohess '; 'orthog '; 'pascal '; 'pei '; 'rando '; ... 'randsvd '; 'riemann '; 'tridiag '; 'triw '; 'vand ';]; if nargin == 1 if k == 0 [A, temp] = size(matrices); elseif k > 0 A = matrices(k,:); else % Version number and date of collection. % A = 'Version 1.0, July 4, 1989'; % A = 'Version 1.1, November 15 1989'; A = 'Version 1.2, May 30 1990'; end else A = eval( [matrices(k,:) '(n)'] ); end function [v, beta] = house(x) %HOUSE Householder matrix. % If [v, beta] = HOUSE(x) then H = EYE - beta*v*v' is a Householder % matrix such that Hx = -sign(x(1))*norm(x)*e_1. % NB: If x = 0 then v = 0, beta = 1 is returned. % x can be real or complex. % sign(x) := exp(i*arg(x)) ( = x./abs(x) when x ~= 0). % Theory: (textbook references Golub & Van Loan 1983, 38-43; % Stewart 1973, 231-234, 262; Wilkinson 1965, 48-50). % Hx = y: (I - beta*v*v')x = -s*e_1. % Must have |s| = norm(x), v = x+s*e_1, and % x'y = x'Hx =(x'Hx)' real => arg(s) = arg(x(1)). % So take s = sign(x(1))*norm(x) (which avoids cancellation). % v'v = (x(1)+s)^2 + x(2)^2 + ... + x(n)^2 % = 2*norm(x)*(norm(x) + |x(1)|). [m, n] = size(x); if n > 1, error('Argument must be a column vector.'), end s = norm(x) * (sign(x(1)) + (x(1)==0)); % Modification for sign(0)=1. v = x; if s == 0, beta = 1; return, end % Quit if x is the zero vector. v(1) = v(1) + s; beta = 1/(s'*v(1)); % NB the conjugated s. % beta = 1/(abs(s)*(abs(s)+abs(x(1)) would guarantee beta real. % But beta as above can be non-real (due to rounding) only when x is complex. function [f, e] = fv(b, nk, thmax, noplot) %FV Field of values (or numerical range). % FV(A, NK, THMAX) evaluates and plots the field of values of the NK % largest leading principal submatrices of A, using THMAX equally spaced % angles in the complex plane. The defaults are NK = 1 and THMAX = 20. % The eigenvalues of A are displayed as `x'. % Alternative usage: [F, E] = FV(A, NK, THMAX, 1) suppresses the plot % and returns the field of values plot data in F, with A's eigenvalues % in E. % Theory: % Field of values FV(A) = set of all Rayleigh quotients. FV(A) is a % convex set containing the eigenvalues of A. When A is normal FV(A) is % the convex hull of the eigenvalues of A (but not vice versa). 
% z = x'Ax/(x'x), z' = x'A'x/(x'x) % => REAL(z) = x'Hx/(x'x), H = (A+A')/2 % so MIN(EIG(H)) <= REAL(z) <= MAX(EIG(H)) % with equality for x = corresponding eigenvectors of H. For these x, % RQ(A,x) is on the boundary of FV(A). % Reference: A.S. Householder, The Theory of Matrices in Numerical % Analysis, Blaisdell, New York, 1964, section 3.3. % Original version by A. Ruhe. Modified by N.J. Higham. % The plot is `incorrect' for skew-symmetric matrices! if nargin < 2, nk = 1; end if nargin < 3, thmax = 20; end iu = sqrt(-1); [n, p] = size(b); f = []; z = zeros(2*thmax+1,1); for m = 1:nk ns = n+1-m; a = b(1:ns, 1:ns); for i = 0:thmax th = i/thmax*pi; ath = exp(iu*th)*a; % Rotate A through angle th. h = 0.5*(ath + ath'); % Hermitian part of rotated A. [x, d] = eig(h); [lmbh, k] = sort(real(diag(d))); z(1+i) = rq(a,x(:,k(1))); % RQ's of A corr. to eigenvalues of H z(1+i+thmax) = rq(a,x(:,k(ns))); % with smallest/largest real part. end f = [f; z]; end if nargin < 4 axis('square') if norm(imag(f),1) == 0 % In case of real f (e.g. Hermitian A) choose [-1,1] for y-range. axis([ min(f) max(f) -1 1]); end plot(real(f), imag(f),'w') % Plot the field of values hold on e = eig(b); plot(real(e), imag(e), 'xg') % Plot the eigenvalues too. axis('normal'), hold off if norm(imag(f),1) == 0 % Return to auto-scaling axis([1 2 3 4]); axis; end end function C = comp(A, k) %COMP Comparison matrices. % COMP(A) is DIAG(B) - TRIL(B,-1) - TRIU(B,1), where B = ABS(A). % COMP(A, 1) is A with each diagonal element replaced by its % absolute value, and each off-diagonal element replaced by minus % the absolute value of the largest element in absolute value in % its row. However, if A is triangular COMP(A, 1) is too. % COMP(A, 0) is the same as COMP(A). % COMP(A) is often denoted by M(A) in the literature. % Reference (e.g.): N.J. Higham, A survey of condition number estimation for % triangular matrices, SIAM Review, 29 (1987), pp. 575-596. if nargin == 1, k = 0; end [m, n] = size(A); p = min(m, n); if k == 0 % This code uses less temporary storage than the `high level' definition above. C = -abs(A); for j=1:p C(j,j) = abs(A(j,j)); end elseif k == 1 C = A'; for j=1:p C(k,k) = 0; end mx = max(abs(C)); C = -mx'*ones(1,n); for j=1:p C(j,j) = abs(A(j,j)); end if all( A == tril(A) ), C = tril(C); end if all( A == triu(A) ), C = triu(C); end end function c = chop(x, t) %CHOP CHOP(X, t) is the matrix obtained by rounding the elements of X % to t significant binary places. % Default is t=23, corresponding to IEEE single precision. if nargin < 2, t = 23; end [m, n] = size(x); % Use the representation: % x(i,j) = 2^e(i,j) * .d(1)d(2)...d(s) * sign(x(i,j)) % On the next line `+(x==0)' avoids passing a zero argument to LOG, which % would cause a warning message to be generated. y = abs(x) + (x==0); e = floor(log(y)./log(2) + 1); p = (2.*ones(m,n)).^(t-e); c = round(p.*x)./p; function A = bandred(A, kl, ku) %BANDRED B = BANDRED(A, KL, KU) is a matrix orthogonally equivalent to A % with lower bandwidth KL and upper bandwidth KU % (i.e. B(i,j) = 0 if i>j+KL or j>i+KU). % The reduction is performed using Householder transformations. % If KU is omitted it defaults to KL. % This routine is called by RANDSVD. % This is a `standard' reduction. Cf. reduction to bidiagonal form % prior to computing the SVD. This code is a little wasteful in that % it computes certain elements which are immediately set to zero! if nargin == 2, ku = kl; end if kl == 0 & ku == 0 error('You''ve asked for a diagonal matrix. 
In that case use the SVD!') end % Check for special case where order of left/right transformations matters. % Easiest approach is to work on the transpose, flipping back at the end. flip = 0; if ku == 0 A = A'; temp = kl; kl = ku; ku = temp; flip = 1; end [m, n] = size(A); for j=1 : min( min(m,n), max(m-kl-1,n-ku-1) ) if j+kl+1 <= m [v, beta] = house(A(j+kl:m,j)); temp = A(j+kl:m,j:n); A(j+kl:m,j:n) = temp - beta*v*(v'*temp); A(j+kl+1:m,j) = zeros(m-j-kl,1); end if j+ku+1 <= n [v, beta] = house(A(j,j+ku:n)'); temp = A(j:m,j+ku:n); A(j:m,j+ku:n) = temp - beta*(temp*v)*v'; A(j,j+ku+1:n) = zeros(1,n-j-ku); end end if flip A = A'; end
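The following fragment (illustrative only, not part of the distribution)
shows RANDSVD and BANDRED working together: a random matrix with prescribed
condition number is reduced to tridiagonal bandwidth, and since the reduction
is by orthogonal transformations the singular values are preserved.

   A = randsvd(8, 1e3);        % Full 8-by-8 matrix with COND(A) = 1e3.
   B = bandred(A, 1, 1);       % Orthogonally reduce to bandwidths KL = KU = 1.
   max(abs(svd(A) - svd(B)))   % Of order EPS: singular values unchanged.
   cond(B)                     % Still approximately 1e3.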