[dd, aa] = balance (a) returns aa = dd \ a * dd. aa is a matrix whose row and column norms are roughly equal in magnitude, and dd = p * d, where p is a permutation matrix and d is a diagonal matrix of powers of two. This allows the equilibration to be computed without roundoff. Results of eigenvalue calculation are typically improved by balancing first.
[cc, dd, aa, bb] = balance (a, b) returns aa = cc * a * dd and bb = cc * b * dd, where aa and bb have non-zero elements of approximately the same magnitude, and cc and dd are permuted diagonal matrices as dd is for the algebraic eigenvalue problem. The eigenvalue balancing option opt is selected as follows:
"N","n"- No balancing; arguments copied, transformation(s) set to identity.
"P","p"- Permute argument(s) to isolate eigenvalues where possible.
"S","s"- Scale to improve accuracy of computed eigenvalues.
"B","b"- Permute and scale, in that order. Rows/columns of a (and b) that are isolated by permutation are not scaled. This is the default behavior.
Algebraic eigenvalue balancing uses standard LAPACK routines.
Generalized eigenvalue problem balancing uses Ward's algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).
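For example, balancing a badly scaled matrix before an eigenvalue computation (a small illustrative sketch; the exact scaling recorded in dd depends on the underlying LAPACK routine):

     a = [1, 1e6; 1e-6, 1];
     [dd, aa] = balance (a);
     eig (aa)    # same eigenvalues as eig (a), since aa = dd \ a * dd is a similarity transform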
Compute the (two-norm) condition number of a matrix.
cond (a) is defined as norm (a) * norm (inv (a)), and is computed via a singular value decomposition.
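For example, the condition number of the identity is exactly 1, while a nearly singular matrix has a large condition number:

     cond (eye (3))            => 1
     cond ([1, 0; 0, 1e-10])   => 1e+10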
Compute the determinant of a using LAPACK. Return an estimate of the reciprocal condition number if requested.
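For example,

     det ([2, 1; 1, 2])  => 3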
If a is a vector of length rows (b), return diag (a) * b (but computed much more efficiently).
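The effect is to scale each row i of b by a(i); for example, using the explicit (slower) diag (a) * b form with illustrative sample values:

     a = [2; 3; 4];
     b = ones (3, 3);
     diag (a) * b    # row i of the result is a(i) times row i of b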
Compute the dot product of two vectors. If x and y are matrices, calculate the dot product along the first non-singleton dimension. If the optional argument dim is given, calculate the dot product along this dimension.
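For example,

     dot ([1, 2, 3], [4, 5, 6])  => 32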
The eigenvalues (and eigenvectors) of a matrix are computed in a multi-step process which begins with a Hessenberg decomposition, followed by a Schur decomposition, from which the eigenvalues are apparent. The eigenvectors, when desired, are computed by further manipulations of the Schur decomposition.
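For example, for a symmetric matrix (the ordering of the returned eigenvalues may differ):

     eig ([1, 2; 2, 1])
     => -1
         3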
Return a 2 by 2 orthogonal matrix g = [c, s; -s', c] such that g * [x; y] = [*; 0] with x and y scalars. For example,

     givens (1, 1)
     =>  0.70711   0.70711
        -0.70711   0.70711
Compute the inverse of the square matrix a. Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small.
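For example (exact display formatting may differ):

     inv ([2, 0; 0, 4])
     => 0.50000   0.00000
        0.00000   0.25000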
Compute the p-norm of the matrix a. If the second argument is missing, p = 2 is assumed.

If a is a matrix:

- p = 1 - 1-norm, the largest column sum of the absolute values of a.
- p = 2 - Largest singular value of a.
- p = Inf - Infinity norm, the largest row sum of the absolute values of a.
- p = "fro" - Frobenius norm of a, sqrt (sum (diag (a' * a))).

If a is a vector or a scalar:

- p = Inf - max (abs (a)).
- p = -Inf - min (abs (a)).
- other - p-norm of a, (sum (abs (a) .^ p)) ^ (1/p).
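For example,

     norm ([3, 4])           => 5    (2-norm of a vector: sqrt (3^2 + 4^2))
     norm ([1, 2; 3, 4], 1)  => 6    (largest column sum: 2 + 4)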
Return an orthonormal basis of the null space of a.
The dimension of the null space is taken as the number of singular values of a not greater than tol. If the argument tol is missing, it is computed as
max (size (a)) * max (svd (a)) * eps
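For example (the sign of the basis vector may differ):

     null ([1, 1])
     => -0.70711
         0.70711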
Return an orthonormal basis of the range space of a.
The dimension of the range space is taken as the number of singular values of a greater than tol. If the argument tol is missing, it is computed as
max (size (a)) * max (svd (a)) * eps
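For example (again determined only up to sign),

     orth ([1, 0; 1, 0])
     => 0.70711
        0.70711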
Return the pseudoinverse of x. Singular values less than tol are ignored.
If the second argument is omitted, it is assumed that

tol = max (size (x)) * sigma_max (x) * eps,

where sigma_max (x) is the maximal singular value of x.
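For example, the pseudoinverse of a rank-deficient diagonal matrix inverts only the non-zero singular values:

     pinv ([2, 0; 0, 0])
     => 0.50000   0.00000
        0.00000   0.00000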
Compute the rank of a, using the singular value decomposition. The rank is taken to be the number of singular values of a that are greater than the specified tolerance tol. If the second argument is omitted, it is taken to be
tol = max (size (a)) * sigma(1) * eps,

where eps is machine precision and sigma(1) is the largest singular value of a.
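For example, a matrix with linearly dependent rows has rank 1:

     rank ([1, 2; 2, 4])  => 1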