22.1 The Creation and Manipulation of Sparse Matrices
22.2 Graphs and their Use with Sparse Matrices
22.3 Linear Algebra on Sparse Matrices
22.4 Iterative Techniques Applied to Sparse Matrices
22.5 Using Sparse Matrices in Oct-files
22.6 Licensing of Third Party Software
22.7 Function Reference: Documentation for the specific sparse functions
The size of mathematical problems that can be treated at any particular time is generally limited by the available computing resources. Both the speed of the computer and its available memory place limitations on the problem size.
There are many classes of mathematical problems which give rise to matrices where a large number of the elements are zero. In this case it makes sense to have a special matrix type to handle this class of problems, where only the non-zero elements of the matrix are stored. Not only does this reduce the amount of memory required to store the matrix, but it also means that operations on this type of matrix can take advantage of the a-priori knowledge of the positions of the non-zero elements to accelerate their calculations.
A matrix type that stores only the non-zero elements is generally called sparse. It is the purpose of this document to discuss the basics of the storage and creation of sparse matrices and the fundamental operations on them.
THIS DOCUMENT STILL HAS LARGE BLANKS. PLEASE FILL THEM IN. LOOK FOR THE TAG "WRITE ME"
22.1.1 Storage of Sparse Matrices
22.1.2 Creating Sparse Matrices
22.1.3 Basic Operators and Functions on Sparse Matrices
22.1.4 Finding out Information about Sparse Matrices
It is not, strictly speaking, necessary for the user to understand how sparse matrices are stored. However, such an understanding will help in appreciating the size of sparse matrices. Understanding the storage technique is also necessary for those users wishing to create their own oct-files.
There are many different means of storing sparse matrix data. What all of the methods have in common is that they attempt to reduce the complexity and storage given a-priori knowledge of the particular class of problems that will be solved. A good summary of the available techniques for storing sparse matrices is given by Saad (6). With full matrices, the position of an element within the matrix is implied by its position in the computer's memory. However, this is not the case for sparse matrices, and so the positions of the non-zero elements of the matrix must equally be stored.
An obvious way to do this is to store the elements of the matrix as triplets, with two elements giving the position of the element within the matrix (its row and column) and the third being the data itself. This is conceptually easy to grasp, but requires more storage than is strictly needed.
The storage technique used within OCTAVE is the compressed column format. In this format the row index and the data of each element are stored as before. However, if we assume that all elements in the same column are stored adjacently in the computer's memory, then we only need to store information on the number of non-zero elements in each column, rather than their column positions. Thus, assuming that the matrix has more non-zero elements than there are columns in the matrix, we win in terms of the amount of memory used.
In fact, the column index contains one more element than the number of columns, with the first element always being zero. The advantage of this is a simplification in the code, in that there is no special case for the first or last column. A short example, demonstrating this in C, is:
for (j = 0; j < nc; j++)
  for (i = cidx (j); i < cidx (j+1); i++)
    printf ("non-zero element (%i,%i) is %d\n",
            ridx(i), j, data(i));
A clearer understanding can be had by considering how the above applies to an example matrix. Consider the matrix
1  2  0  0
0  0  0  3
0  0  0  4
The non-zero elements of this matrix are
(1, 1) => 1
(1, 2) => 2
(2, 4) => 3
(3, 4) => 4
This will be stored as three vectors cidx, ridx and data, representing the column indexing, row indexing and data respectively. The contents of these three vectors for the above matrix will be
cidx = [0, 1, 2, 2, 4]
ridx = [0, 0, 1, 2]
data = [1, 2, 3, 4]
Note that in this representation the first row and column are numbered zero rather than one. Thus the number of elements in the i-th column is given by cidx (i + 1) - cidx (i).
It should be noted that compressed row formats are equally possible. However, in the context of mixed operations between sparse and dense matrices, it makes sense that the elements of the sparse matrices are stored in the same order as those of the dense matrices. OCTAVE stores dense matrices in column-major ordering, and so sparse matrices are equally stored in this manner.
A further constraint on the sparse matrix storage used by OCTAVE is that within each column the elements are stored in increasing order of their row index. This makes certain later operations faster. However, it imposes the need to sort the elements when a sparse matrix is created. Allowing unordered elements would potentially be an advantage, in that it would make operations such as concatenating two sparse matrices easier and faster, but it would add complexity and speed problems elsewhere.
There are several means to create a sparse matrix.

Several basic functions return specific sparse matrices. For example, the sparse identity matrix is a matrix that is often needed. It therefore has its own function to create it, speye (n) or speye (r, c), which creates an n-by-n or r-by-c sparse identity matrix.
Another typical sparse matrix that is often needed is a matrix of randomly distributed random elements. The functions sprand and sprandn perform this for uniformly and normally distributed elements, respectively. They have the same calling convention, sprand (r, c, d), which creates an r-by-c sparse matrix with a density of filled elements of d.
Other functions of interest that directly create sparse matrices are spdiag and its generalization spdiags, which take the definition of the diagonals of the matrix and create the corresponding sparse matrix. For example,
s = spdiag (randn(1,n), -1);
creates an (n+1)-by-(n+1) sparse matrix with a single diagonal defined.
The recommended way for the user to create a sparse matrix is to create two vectors containing the row and column indices of the data and a third vector of the same size containing the data to be stored. For example,
function x = foo (r, j)
  idx = randperm (r);
  x = ([zeros(r-2,1); rand(2,1)])(idx);
endfunction

ri = []; ci = []; d = [];
for j = 1:c
  dtmp = foo (r, j);  # Returns vector of length r with (:,j) values
  idx = find (dtmp != 0.);
  ri = [ri; idx];
  ci = [ci; j*ones(length(idx),1)];
  d = [d; dtmp(idx)];
endfor
s = sparse (ri, ci, d, r, c);
creates an r-by-c sparse matrix with a random distribution of two elements per column. The elements of the vectors do not need to be sorted in any particular order, as OCTAVE will sort them prior to storing the data. However, pre-sorting the data will make the creation of the sparse matrix faster.
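The sorting that sparse performs internally can be sketched in plain C: order the triplets column-major, then count the entries in each column to build the cidx vector. The function and type names here are hypothetical, chosen only to mirror the storage description in section 22.1.1, and are not OCTAVE's internal code.

```c
#include <stdlib.h>

/* One triplet: zero-based row and column index plus value. */
typedef struct { int r, c; double v; } triplet;

static int cmp_colmajor (const void *a, const void *b)
{
  const triplet *x = a, *y = b;
  if (x->c != y->c)
    return x->c - y->c;
  return x->r - y->r;
}

/* Convert nz triplets into compressed column vectors.
   cidx must have room for nc + 1 entries; ridx and data for nz. */
void triplets_to_csc (triplet *t, int nz, int nc,
                      int *cidx, int *ridx, double *data)
{
  qsort (t, nz, sizeof (triplet), cmp_colmajor);

  for (int j = 0; j <= nc; j++)
    cidx[j] = 0;
  for (int k = 0; k < nz; k++)
    {
      ridx[k] = t[k].r;
      data[k] = t[k].v;
      cidx[t[k].c + 1]++;          /* count entries per column */
    }
  for (int j = 0; j < nc; j++)
    cidx[j+1] += cidx[j];          /* running sum gives offsets */
}
```

Feeding it the four triplets of the storage example in any order reproduces cidx = [0, 1, 2, 2, 4]; if the input is already sorted the sort step has little to do, which is why pre-sorted data speeds up creation.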
The function spconvert takes a three- or four-column real matrix. The first two columns represent the row and column indices respectively, and the third and fourth columns the real and imaginary parts of the sparse matrix. The matrix can contain zero elements, and the elements can be sorted in any order. Adding zero elements is a convenient way to define the size of the sparse matrix. For example,
s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
=> Compressed Column Sparse (rows=4, cols=4, nnz=3)
     (1 , 1) -> 1
     (2 , 3) -> 2
     (3 , 4) -> 3
An example of creating and filling a matrix might be
k = 5;
nz = r * k;
s = spalloc (r, c, nz)
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); rand(k, 1)] (idx);
endfor
It should be noted that, due to the way the OCTAVE assignment functions are written, the assignment will reallocate the memory used by the sparse matrix at each iteration. Therefore the spalloc function ignores the nz argument and does not preassign the memory for the matrix. It is vitally important that code using the above structure be as vectorized as possible, to minimize the number of assignments and reduce the number of memory allocations.
The above problem can be avoided in oct-files. However, the construction of a sparse matrix from an oct-file is more complex than can be discussed in this brief introduction; see section 22.5 Using Sparse Matrices in Oct-files for a full description of the techniques involved.
22.1.3.1 Operators and Functions
22.1.3.2 The Return Types of Operators and Functions
22.1.3.3 Mathematical Considerations
WRITE ME
The two basic reasons to use sparse matrices are to reduce the memory usage and to avoid calculations on zero elements. The two are closely related, in that the computation time for an operator or function applied to a sparse matrix is roughly linear in the number of non-zero elements.
Therefore, there is a certain density of non-zero elements of a matrix beyond which it no longer makes sense to store it as a sparse matrix, but rather as a full matrix. For this reason, operators and functions that have a high probability of returning a full matrix will always return one. For example, adding a scalar constant to a sparse matrix will almost always make it a full matrix, and so the example
speye(3) + 0
=>  1  0  0
    0  1  0
    0  0  1
returns a full matrix, as can be seen. Additionally, all sparse functions test the amount of memory occupied by the sparse matrix to see if the amount of storage used is larger than the amount used by the full equivalent. Therefore speye (2) * 1 will return a full matrix, as the memory used is smaller for the full version than for the sparse version.
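The break-even point can be sketched numerically. Assuming the compressed column layout described in section 22.1.1, with 4-byte indices and 8-byte doubles (the actual sizes are platform-dependent, so this is only an illustration), the two storage costs are easy to compare:

```c
/* Approximate bytes used by an nr-by-nc matrix of doubles,
   stored full and stored in compressed column form with nnz
   non-zeros.  A 4-byte index width is assumed for illustration. */
long full_bytes (long nr, long nc)
{
  return nr * nc * 8;               /* one double per position */
}

long sparse_bytes (long nc, long nnz)
{
  /* nc + 1 column offsets, plus an index and a value per non-zero */
  return (nc + 1) * 4 + nnz * (4 + 8);
}
```

Under these assumptions, for speye (2) the full form needs 32 bytes while the sparse form needs 36, so returning the full result is cheaper; for speye (1000) the sparse form needs about 16 kilobytes against 8 megabytes for the full form.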
As all of the mixed operators and functions between full and sparse matrices exist, in general this does not cause any problems. However, one area where it does cause a problem is where a sparse matrix is promoted to a full matrix and subsequent operations would resparsify the matrix. Such cases are rare, but can be artificially created. For example, (fliplr(speye(3)) + speye(3)) - speye(3) gives a full matrix when it should give a sparse one. In general, where such cases occur, they impose only a small memory penalty.
There is, however, one known case where this behaviour of OCTAVE's sparse matrices will cause a problem, namely the handling of the diag function. Whether diag returns a sparse or full matrix depends on the type of its input arguments. So
a = diag (sparse([1,2,3]), -1);
should return a sparse matrix. To ensure this actually happens, the sparse function, and other functions based on it such as speye, always return a sparse matrix, even if the memory used will be larger than that of its full representation.
The attempt has been made to make sparse matrices behave in exactly the same manner as their full counterparts. However, there are certain differences, in particular differences from the sparse implementations of other products.
Firstly, the "./" and ".^" operators must be used with care. Consider what the examples
s = speye (4);
a1 = s .^ 2;
a2 = s .^ s;
a3 = s .^ -2;
a4 = s ./ 2;
a5 = 2 ./ s;
a6 = s ./ s;
will give. The first example, s raised to the power of 2, causes no problems. However, s raised element-wise to itself involves a large number of terms 0 .^ 0, which is 1. Therefore s .^ s is a full matrix. Likewise s .^ -2 involves terms like 0 .^ -2, which is infinity, and so s .^ -2 is equally a full matrix. For the "./" operator, s ./ 2 causes no problems, but 2 ./ s involves a large number of infinity terms and is equally a full matrix. The case of s ./ s involves terms like 0 ./ 0, which is NaN, and so the result is a full matrix with the zero elements of s filled with NaN values.
The above behaviour is consistent with full matrices, but is not consistent with sparse implementations in other products.
A particular problem with sparse matrices arises from the fact that, as the zeros are not stored, the sign bit of these zeros is equally not stored. In certain cases the sign bit of zero is important. For example:
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
=> -Inf   Inf
    Inf  -Inf
c = 1 ./ sparse (a)
=>  Inf   Inf
    Inf   Inf
Also discuss issues of fill-in. Discuss symamd etc, and mention randperm that is included elsewhere in the docs...
WRITE ME
Talk about the spy, spstats, nnz, spparms, etc function
WRITE ME
Someone who knows more about this than me should write this...
WRITE ME
OCTAVE includes a polymorphic solver for sparse matrices, where the exact solver used to factorize the matrix depends on the properties of the sparse matrix itself. As the cost of determining the matrix type is small relative to the cost of factorizing the matrix, the matrix type is re-determined each time it is used in a linear equation.
The selection tree for how the linear equation is solved checks the matrix structure in order: diagonal and permuted diagonal matrices are solved directly; square banded matrices are passed to the banded solvers if their band density is below spparms ("bandden"), and are otherwise treated as general matrices; triangular, permuted triangular and symmetric positive definite matrices use their specialized solvers; all remaining cases fall back on a general LU factorization.
FIXME: Detection of permuted triangular matrices not written yet, and so the code itself is not tested either
FIXME: Detection of positive definite matrices written and tested, but Cholesky factorization isn't yet written
FIXME: QR solvers not yet written.
The band density is defined as the number of non-zero values in the band divided by the total number of values in the band. The banded matrix solvers can be entirely disabled by using spparms to set bandden to 1 (i.e., spparms ("bandden", 1)).
All of the solvers above, except the banded solvers, calculate an estimate of the condition number. This can be used to detect numerical stability problems in the solution and force a minimum norm solution to be used. However, for narrow banded matrices the cost of calculating the condition number is significant, and can in fact exceed the cost of factorizing the matrix. Therefore the condition number is not calculated for banded matrices, and so, unless the factorization is exactly singular, numerical instabilities won't be detected. In cases where this might be a problem, the user is recommended to disable the banded solvers as above, at a significant cost in terms of speed.
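Taking band density to mean the fraction of positions inside the band that are non-zero, it can be computed directly from the compressed column vectors of section 22.1.1. The helper below is a hypothetical illustration of that definition, not the routine OCTAVE itself uses.

```c
/* Band density of an nr-by-nc compressed column matrix: the
   number of non-zeros divided by the number of positions lying
   inside the band spanned by the lower and upper bandwidths.  */
double band_density (int nr, int nc, const int *cidx, const int *ridx)
{
  int kl = 0, ku = 0;            /* lower and upper bandwidth */
  int nnz = cidx[nc];

  for (int j = 0; j < nc; j++)
    for (int i = cidx[j]; i < cidx[j+1]; i++)
      {
        int d = ridx[i] - j;     /* offset below (+) or above (-) */
        if (d > kl) kl = d;
        if (-d > ku) ku = -d;
      }

  long band = 0;                 /* count positions inside the band */
  for (int j = 0; j < nc; j++)
    {
      int lo = j - ku < 0 ? 0 : j - ku;
      int hi = j + kl >= nr ? nr - 1 : j + kl;
      if (hi >= lo)
        band += hi - lo + 1;
    }
  return band ? (double) nnz / band : 0.0;
}
```

For the 3-by-4 example matrix of section 22.1.1, this gives 4 non-zeros inside a band of 8 positions, a band density of 0.5.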
WRITE ME
An oct-file is a means of writing an OCTAVE function in a compilable language like C++, rather than as a script file. This results in a significant acceleration in the code. It is not the purpose of this section to discuss how to write an oct-file, or discuss what they are. There are already two (7) very good references on oct-files themselves. Users who are not familiar with oct-files are urged to read these references to fully understand this chapter. The examples discussed here assume that the oct-file is written entirely in C++.
There are three classes of sparse objects that are of interest to the user.
All of these classes inherit from the Sparse<T> template class, and so all have similar capabilities and usage. The Sparse<T> class was based on OCTAVE's Array<T> class, and so users familiar with OCTAVE's Array classes will be comfortable with the use of the sparse classes.
The sparse classes will not be entirely described in this section, due to their similarity to the existing Array classes. However, there are a few differences due to the different nature of sparse objects, and these will be described. Firstly, although it is fundamentally possible to have N-dimensional sparse objects, the OCTAVE sparse classes do not allow them at this time. So all operations of the sparse classes must be 2-dimensional. This means that SparseMatrix is in fact similar to OCTAVE's Matrix class rather than its NDArray class.
22.5.1 The Differences between the Array and Sparse Classes
22.5.2 Creating Sparse Matrices in Oct-Files
22.5.3 Using Sparse Matrices in Oct-Files
The number of elements in a sparse matrix is considered to be the number of non-zero elements rather than the product of the dimensions. Therefore
SparseMatrix sm;
...
int nel = sm.nelem ();
returns the number of non-zero elements. If the user really requires the number of elements in the matrix, including the zero elements, they should use numel rather than nelem. Note that for very large matrices, where the product of the two dimensions is larger than the representation of an unsigned int, numel can overflow. An example is speye(1e6), which will create a matrix with a million rows and columns, but only a million non-zero elements. The number of rows times the number of columns in this case is more than two hundred times the maximum value that can be represented by an unsigned int. The use of numel should therefore be avoided unless it is known that it won't overflow.
Extreme care must be taken with the elem method and the "()" operator, which perform basically the same function. The reason is that if a sparse object is non-const, then OCTAVE will assume that a request for a zero element in a sparse matrix is in fact a request to create this element so it can be filled. Therefore a piece of code like
SparseMatrix sm;
...
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << sm(i,j)
              << std::endl;
is a great way of turning the sparse matrix into a dense one, and a very slow way at that since it reallocates the sparse object at each zero element in the matrix.
An easy way of preventing the above from happening is to create a temporary constant version of the sparse matrix. Note that only the container for the sparse matrix will be copied, while the actual representation of the data will be shared between the two versions of the sparse matrix, so this is not a costly operation. For example, the above would become
SparseMatrix sm;
...
const SparseMatrix tmp (sm);
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << tmp(i,j)
              << std::endl;
Finally, as the sparse types aren't just represented as a contiguous block of memory, the fortran_vec method of the Array<T> class is not available. It is, however, replaced by three separate methods ridx, cidx and data, which access the raw compressed column format in which the OCTAVE sparse matrices are stored. Additionally, these methods can be used in a manner similar to elem, to allow the matrix to be accessed or filled. However, in that case it is up to the user to respect the sparse matrix compressed column format discussed previously.
The user has several alternatives in how to create a sparse matrix. They can first create the data as three vectors representing the row and column indexes and the data, and from those create the matrix. Or alternatively, they can create a sparse matrix with the appropriate amount of space and then fill in the values. Both techniques have their advantages and disadvantages.
An example of how to create a small sparse matrix with the first technique is
int nz = 4, nr = 3, nc = 4;

ColumnVector ridx (nz);
ColumnVector cidx (nz);
ColumnVector data (nz);

ridx(0) = 0; ridx(1) = 0; ridx(2) = 1; ridx(3) = 2;
cidx(0) = 0; cidx(1) = 1; cidx(2) = 3; cidx(3) = 3;
data(0) = 1; data(1) = 2; data(2) = 3; data(3) = 4;

SparseMatrix sm (data, ridx, cidx, nr, nc);
which creates the matrix given in section 22.1.1 Storage of Sparse Matrices. Note that the compressed matrix format is not used at the time of the creation of the matrix itself, however it is used internally.
As previously mentioned, the values of the sparse matrix are stored in increasing column-major ordering. Although the data passed by the user does not need to respect this requirement, pre-sorting the data significantly speeds up the creation of the sparse matrix.
The disadvantage of this technique of creating a sparse matrix is that there is a brief time where two copies of the data exist. Therefore, for extremely memory-constrained problems, this might not be the right technique to create the sparse matrix.
The alternative is to first create the sparse matrix with the desired number of non-zero elements and then later fill those elements in. The easiest way to do this is
int nz = 4, nr = 3, nc = 4;

SparseMatrix sm (nr, nc, nz);
sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4;
This creates the same matrix as previously. Again, although it is not strictly necessary, creation of the sparse matrix is significantly faster if the elements are added in column-major ordering. The reason is that if the elements are inserted at the end of the current list of known elements, then no element in the matrix needs to be moved to allow the new element to be inserted; only the column indices need to be updated.
There are a few further points to note about this technique of creating a sparse matrix. Firstly, it is not illegal to create a sparse matrix with fewer elements than are actually inserted in the matrix. Therefore
int nz = 4, nr = 3, nc = 4;

SparseMatrix sm (nr, nc, 0);
sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4;
is perfectly legal. However, it is a very bad idea, because each time a new element is added to the sparse matrix, the space allocated for it is increased by reallocating the memory. This is an expensive operation that will significantly slow this means of creating a sparse matrix. Furthermore, it is not illegal to create a sparse matrix with too much storage, so having nz above equal to 6 is also legal. The disadvantage is that the matrix then occupies more memory than strictly needed.
It is not always easy to know the number of non-zero elements prior to filling a matrix. For this reason, the additional storage for the sparse matrix can be removed after its creation with the maybe_compress function. Not only can maybe_compress deallocate the unused storage, it can equally remove zero elements from the matrix. The removal of zero elements is controlled by setting the argument of the maybe_compress function to 'true'. However, the cost of removing the zeros is high, because it implies re-sorting the elements. Therefore, if possible, it is better if the user doesn't add the zeros in the first place. An example of the use of maybe_compress is
int nz = 6, nr = 3, nc = 4;

SparseMatrix sm1 (nr, nc, nz);
sm1(0,0) = 1; sm1(0,1) = 2; sm1(1,3) = 3; sm1(2,3) = 4;
sm1.maybe_compress ();      // No zero elements were added

SparseMatrix sm2 (nr, nc, nz);
sm2(0,0) = 1; sm2(0,1) = 2; sm2(0,2) = 0; sm2(1,2) = 0;
sm2(1,3) = 3; sm2(2,3) = 4;
sm2.maybe_compress (true);  // Zero elements were added
The use of the maybe_compress function should be avoided if possible, as it will slow the creation of the matrices.
A third means of creating a sparse matrix is to work directly with the data in compressed column format. An example of this technique might be
octave_value arg;
...
int nz = 6, nr = 3, nc = 4;   // Assume we know the max no nz
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  {
    for (int i = 0; i < nr; i++)
      {
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          {
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          }
      }
    sm.cidx(j+1) = ii;
  }
sm.maybe_compress ();  // If don't know a-priori the final no of nz.
which is probably the most efficient means of creating the sparse matrix.
Finally, it might sometimes arise that the amount of storage initially created is insufficient to completely store the sparse matrix. Therefore, the method change_capacity exists to reallocate the sparse memory. The above example would then be modified as
octave_value arg;
...
int nz = 6, nr = 3, nc = 4;   // Assume we know the max no nz
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  {
    for (int i = 0; i < nr; i++)
      {
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          {
            if (ii == nz)
              {
                nz += 2;   // Add 2 more elements
                sm.change_capacity (nz);
              }
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          }
      }
    sm.cidx(j+1) = ii;
  }
sm.maybe_compress ();  // If don't know a-priori the final no of nz.
Note that both increasing and decreasing the number of non-zero elements in a sparse matrix is expensive, as it involves memory reallocation. Also, as parts of the matrix, though not its entirety, exist as old and new copies at the same time, additional memory is needed. Therefore, if possible, such reallocation should be avoided.
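What change_capacity has to do can be pictured with a plain-C analogue: the ridx and data arrays grow in tandem while the column offsets are untouched. The structure and function names here are hypothetical sketches, not OCTAVE's implementation.

```c
#include <stdlib.h>

typedef struct
{
  int nr, nc;       /* dimensions */
  int nz;           /* allocated element capacity */
  int *cidx;        /* nc + 1 column start offsets */
  int *ridx;        /* row index per stored element */
  double *data;     /* value per stored element */
} csc;

/* Reallocate the element storage to hold nz_new entries.  The
   column offsets are unaffected.  Returns 0 on success.  */
int csc_change_capacity (csc *m, int nz_new)
{
  int *r = realloc (m->ridx, nz_new * sizeof (int));
  if (!r)
    return -1;
  m->ridx = r;

  double *d = realloc (m->data, nz_new * sizeof (double));
  if (!d)
    return -1;
  m->data = d;

  m->nz = nz_new;
  return 0;
}
```

Each call may move both arrays, and while a move is in flight the old and new blocks coexist, which is the extra memory cost noted above; growing in chunks rather than one element at a time amortizes the number of such moves.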
Most of the operators and functions on sparse matrices that are available from within OCTAVE are equally available in oct-files. The basic means of extracting a sparse matrix from an octave_value, and of returning one as an octave_value, can be seen in the following example.
octave_value_list retval;

SparseMatrix sm = args(0).sparse_matrix_value ();
SparseComplexMatrix scm = args(1).sparse_complex_matrix_value ();
SparseBoolMatrix sbm = args(2).sparse_bool_matrix_value ();
...
retval(2) = sbm;
retval(1) = scm;
retval(0) = sm;
The conversion to an octave_value is handled by the sparse octave_value constructors, and so no special care is needed.
There are several third party software packages used by OCTAVE's sparse matrix implementation.
THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Permission is hereby granted to use, copy, modify, and/or distribute this program, provided that the Copyright, this License, and the Availability of the original version is retained on all copies and made accessible to the end-user of any code or package that includes COLAMD or any modified version of COLAMD.
Your use or distribution of UMFPACK or any modified version of UMFPACK implies that you agree to this License.
THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Permission is hereby granted to use or copy this program, provided that the Copyright, this License, and the Availability of the original version is retained on all copies. User documentation of any code that uses UMFPACK or any modified version of UMFPACK code must cite the Copyright, this License, the Availability note, and 'Used by permission.' Permission to modify the code and to distribute modified code is granted, provided the Copyright, this License, and the Availability note are retained, and a notice that the code was modified is included. This software was developed with support from the National Science Foundation, and is provided to you free of charge.
22.7.0.1 colamd | Column approximate minimum degree permutation.
22.7.0.2 colperm | Returns the column permutations such that the columns of `S (:, P)' are ordered in terms of increasing number of non-zero elements.
22.7.0.3 dmperm | Perform a Dulmage-Mendelsohn permutation on the sparse matrix S.
22.7.0.4 etree | Returns the elimination tree for the matrix S.
22.7.0.5 full | Returns a full storage matrix from a sparse one. See also: sparse
22.7.0.6 issparse | Return 1 if the value of the expression EXPR is a sparse matrix.
22.7.0.7 nnz | Returns the number of non-zero elements in SM. See also: sparse
22.7.0.8 nonzeros | Returns a vector of the non-zero values of the sparse matrix S.
22.7.0.9 nzmax | Returns the amount of storage allocated to the sparse matrix SM.
22.7.0.10 spalloc | Returns an empty sparse matrix of size R-by-C.
22.7.0.11 sparse | Create a sparse matrix.
22.7.0.12 spatan2 | Compute atan (Y / X) for corresponding sparse matrix elements of Y and X.
22.7.0.13 spconvert | Converts a simple sparse matrix format easily produced by other programs into Octave's internal sparse format.
22.7.0.14 spcumprod | Cumulative product of elements along dimension DIM.
22.7.0.15 spcumsum | Cumulative sum of elements along dimension DIM.
22.7.0.16 spdet | Compute the determinant of sparse matrix A using UMFPACK.
22.7.0.17 spdiag | Return a diagonal matrix with the sparse vector V on diagonal K.
22.7.0.18 spdiags | A generalization of the function `spdiag'.
22.7.0.19 speye | Returns a sparse identity matrix.
22.7.0.20 spfind | A sparse version of the find operator.
22.7.0.21 spfun | Compute `f(X)' for the non-zero values of X, resulting in a sparse matrix with the same structure as X.
22.7.0.22 spinv | Compute the inverse of the square sparse matrix A.
22.7.0.23 splu | Compute the LU decomposition of the sparse matrix A, using subroutines from UMFPACK.
22.7.0.24 spmax | For a vector argument, return the maximum value.
22.7.0.25 spmin | For a vector argument, return the minimum value.
22.7.0.26 spones | Replace the non-zero entries of X with ones.
22.7.0.27 spparms | Sets or displays the parameters used by the sparse solvers and factorization functions.
22.7.0.28 spprod | Product of elements along dimension DIM.
22.7.0.29 sprand | Generate a random sparse matrix.
22.7.0.30 sprandn | Generate a random sparse matrix.
22.7.0.31 spreshape | Return a sparse matrix with M rows and N columns whose elements are taken from the sparse matrix A.
22.7.0.32 spstats | Return the stats for the non-zero elements of the sparse matrix S: COUNT is the number of non-zeros in each column, MEAN is the mean of the non-zeros in each column, and VAR is the variance of the non-zeros in each column.
22.7.0.33 spsum | Sum of elements along dimension DIM.
22.7.0.34 spsumsq | Sum of squares of elements along dimension DIM.
22.7.0.35 spy | Plot the sparsity pattern of the sparse matrix X.
22.7.0.36 symamd | For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S.
22.7.0.37 symbfact | Performs a symbolic factorization analysis on the sparse matrix S.
22.7.0.38 symrcm | Returns the Reverse Cuthill-McKee reordering of the sparse matrix S.
Column approximate minimum degree permutation. p = colamd (s) returns the column approximate minimum degree permutation vector for the sparse matrix s. For a non-symmetric matrix s, s (:,p) tends to have sparser LU factors than s. The Cholesky factorization of s (:,p)' * s (:,p) also tends to be sparser than that of s' * s.
knobs is an optional two-element input vector. If s is m-by-n, then rows with more than (knobs (1)) * n entries are ignored. Columns with more than (knobs (2)) * m entries are removed prior to ordering, and ordered last in the output permutation p. If the knobs parameter is not present, then 0.5 is used instead, for both knobs (1) and knobs (2). knobs (3) controls the printing of statistics and error messages.
stats is an optional 20-element output vector that provides data about the ordering and the validity of the input matrix s. Ordering statistics are in stats (1:3). stats (1) and stats (2) are the number of dense or empty rows and columns ignored by COLAMD and stats (3) is the number of garbage collections performed on the internal data structure used by COLAMD (roughly of size 2.2 * nnz (s) + 4 * m + 7 * n integers).
Octave built-in functions are intended to generate valid sparse matrices, with no duplicate entries, with ascending row indices of the nonzeros in each column, with a non-negative number of entries in each column (!) and so on. If a matrix is invalid, then COLAMD may or may not be able to continue. If there are duplicate entries (a row index appears two or more times in the same column) or if the row indices in a column are out of order, then COLAMD can correct these errors by ignoring the duplicate entries and sorting each column of its internal copy of the matrix s (the input matrix s is not repaired, however). If a matrix is invalid in other ways then COLAMD cannot continue, an error message is printed, and no output arguments (p or stats) are returned. COLAMD is thus a simple way to check a sparse matrix to see if it's valid.
stats (4:7) provide information if COLAMD was able to continue. The matrix is OK if stats (4) is zero, or 1 if invalid. stats (5) is the rightmost column index that is unsorted or contains duplicate entries, or zero if no such column exists. stats (6) is the last seen duplicate or out-of-order row index in the column index given by stats (5), or zero if no such row index exists. stats (7) is the number of duplicate or out-of-order row indices. stats (8:20) is always zero in the current version of COLAMD (reserved for future use).
The ordering is followed by a column elimination tree post-ordering.
The authors of the code itself are Stefan I. Larimore and Timothy A. Davis (davis@cise.ufl.edu), University of Florida. The algorithm was developed in collaboration with John Gilbert, Xerox PARC, and Esmond Ng, Oak Ridge National Laboratory. (see http://www.cise.ufl.edu/research/sparse/colamd)
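The validity check described above (sorted, duplicate-free row indices within each column of a compressed-column matrix) can be sketched in a few lines. This is an illustrative pure-Python sketch, not Octave or COLAMD code; the function name and return convention are hypothetical.

```python
def check_ccs(col_ptr, row_idx):
    """Scan a compressed-column matrix for out-of-order or duplicate
    row indices, column by column (the same errors COLAMD repairs).
    Returns (ok, last_bad_col, bad_count); columns are 0-based and
    last_bad_col is -1 when every column is clean."""
    last_bad_col = -1
    bad_count = 0
    for j in range(len(col_ptr) - 1):
        rows = row_idx[col_ptr[j]:col_ptr[j + 1]]
        for a, b in zip(rows, rows[1:]):
            if b <= a:                 # duplicate (==) or out of order (<)
                last_bad_col = j
                bad_count += 1
    return (bad_count == 0, last_bad_col, bad_count)

# Column 1 contains the row index 2 twice -- an invalid matrix.
ok, bad, n = check_ccs([0, 2, 5], [0, 2, 1, 2, 2])
```

The scan mirrors the roles of stats (5) (last offending column) and stats (7) (number of duplicate or out-of-order indices) without attempting any repair.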
Returns the permutation p such that the columns of s (:, p) are ordered in terms of increasing number of non-zero elements. If s is symmetric, then p is chosen such that s (p, p) orders the rows and columns with an increasing number of non-zero elements.
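The ordering criterion is simply a sort of the column indices by non-zero count. An illustrative pure-Python sketch (not Octave code; the column-dict representation is a hypothetical stand-in for a sparse matrix):

```python
def colperm(cols):
    """Return a permutation of column indices ordered by increasing
    number of non-zero elements, as colperm does for s (:, p).
    `cols` maps a column index to its list of (row, value) non-zeros."""
    n = len(cols)
    # A stable sort keeps the original order of equally full columns.
    return sorted(range(n), key=lambda j: len(cols.get(j, [])))

# Columns 0, 1 and 2 hold 3, 1 and 2 non-zeros respectively.
p = colperm({0: [(0, 1), (1, 2), (2, 3)],
             1: [(0, 4)],
             2: [(1, 5), (2, 6)]})
```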
Returns the elimination tree for the matrix s. By default s is assumed to be symmetric and the symmetric elimination tree is returned. The argument typ controls whether a symmetric or column elimination tree is returned. Valid values of typ are 'sym' and 'col', for the symmetric or column elimination tree respectively.
Called with a second argument, etree also returns the postorder permutations on the tree.
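The symmetric elimination tree can be computed with the classic one-pass sweep (Liu's algorithm with path compression). This is an illustrative pure-Python sketch under the assumption that only the strictly lower triangle of a symmetric matrix is supplied; it is not the Octave implementation.

```python
def etree(lower):
    """Symmetric elimination tree of a sparse symmetric matrix.
    `lower` is a list where lower[i] holds the columns k < i with
    A[i, k] != 0.  Returns parent[], with -1 marking a root."""
    n = len(lower)
    parent = [-1] * n
    ancestor = [-1] * n            # virtual forest for path compression
    for i in range(n):
        for k in sorted(lower[i]):
            # Walk from k up the partially built tree towards i.
            r = k
            while ancestor[r] != -1 and ancestor[r] != i:
                t = ancestor[r]
                ancestor[r] = i    # compress the path for later walks
                r = t
            if ancestor[r] == -1:
                ancestor[r] = i
                parent[r] = i
    return parent

# A tridiagonal matrix: its elimination tree is a simple path.
parent = etree([[], [0], [1], [2]])
```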
It should be noted that this means that code like
k = 5;
nz = r * k;
s = spalloc (r, c, nz)
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); rand(k, 1)] (idx);
endfor
will reallocate memory at each step. It is therefore vitally important that code like this is vectorized as much as possible.
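The vectorized pattern is to collect every (row, column, value) triplet first and build the matrix in a single step, rather than growing it column by column. An illustrative pure-Python sketch of that pattern (a dict stands in for the sparse matrix; the function name is hypothetical):

```python
import random

def build_sparse_once(r, c, k, rng=random.Random(0)):
    """Build an r-by-c sparse matrix with k random non-zeros per
    column by accumulating all triplets first and constructing the
    matrix once -- the vectorized pattern -- instead of inserting
    into a preallocated matrix one column at a time."""
    triplets = []
    for j in range(c):
        # k distinct rows per column, each with a random value.
        for i in rng.sample(range(r), k):
            triplets.append((i, j, rng.random()))
    # One construction from the full triplet list (cf. sparse (i, j, sv)).
    return {(i, j): v for i, j, v in triplets}

s = build_sparse_once(10, 6, 3)
```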
sparse can be called in the following ways:
sv, and i or j may be scalars, in which case they are expanded so that they all have the same length.
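Two details of the triplet form are worth illustrating: scalar arguments are broadcast to the common length, and values at repeated (i, j) positions are summed. A pure-Python sketch of that behaviour (a dict stands in for the sparse matrix; this is not Octave code):

```python
def sparse_triplet(i, j, sv, m, n):
    """Build an m-by-n sparse matrix (as a dict) from triplet vectors.
    Scalar i, j or sv arguments are expanded to the common length, and
    values at repeated (i, j) positions are summed, mirroring how
    sparse (i, j, sv, m, n) behaves.  Indices here are 0-based."""
    length = max(len(x) if isinstance(x, list) else 1 for x in (i, j, sv))
    expand = lambda x: x if isinstance(x, list) else [x] * length
    i, j, sv = expand(i), expand(j), expand(sv)
    assert len(i) == len(j) == len(sv) == length
    s = {}
    for r, c, v in zip(i, j, sv):
        assert 0 <= r < m and 0 <= c < n
        s[(r, c)] = s.get((r, c), 0) + v   # duplicate entries are summed
    return s

# j is a scalar: both entries land in column 0, and the repeated
# position (1, 0) accumulates 2.0 + 3.0.
s = sparse_triplet([1, 1], 0, [2.0, 3.0], 3, 2)
```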
spdiag ([1, 2, 3], 1)
ans =

Compressed Column Sparse (rows=4, cols=4, nnz=3)
  (1, 2) -> 1
  (2, 3) -> 2
  (3, 4) -> 3
A generalization of the function spdiag. Called with a single input argument, the non-zero diagonals c of A are extracted. With two arguments the diagonals to extract are given by the vector c.

The other two forms of spdiags modify the input matrix by replacing the diagonals. They use the columns of v to replace the diagonals represented by the vector c. If the sparse matrix a is defined then the diagonals of this matrix are replaced. Otherwise a matrix of m by n is created with the diagonals given by v.

Negative values of c represent diagonals below the main diagonal, and positive values of c diagonals above the main diagonal.
For example
spdiags (reshape (1:12, 4, 3), [-1 0 1], 5, 4)
=>  5 10  0  0
    1  6 11  0
    0  2  7 12
    0  0  3  8
    0  0  0  4
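For the m >= n case shown above, the placement rule is that element v(k, j) lands at row k - c(j), column k, with entries that fall outside the matrix silently dropped. An illustrative pure-Python sketch of that rule (not the Octave implementation; 0-based indices):

```python
def spdiags(v, c, m, n):
    """Place column j of v on diagonal c[j] of an m-by-n zero matrix,
    using the m >= n placement rule: v[k][j] goes to position
    (k - c[j], k); out-of-range entries are dropped."""
    A = [[0] * n for _ in range(m)]
    for j, d in enumerate(c):
        for k in range(n):
            i = k - d
            if 0 <= i < m:
                A[i][k] = v[k][j]
    return A

v = [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]  # reshape (1:12, 4, 3)
A = spdiags(v, [-1, 0, 1], 5, 4)
```

Note how the value 9 of the superdiagonal column is dropped, reproducing the example in the text.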
Returns a sparse identity matrix. This is significantly more efficient than sparse (eye (m)) as the full matrix is not constructed.
Called with a single argument a square matrix of size m by m is created. Otherwise a matrix of m by n is created. If called with a single vector argument, this argument is taken to be the size of the matrix to create.
Compute f (x) for the non-zero values of x. This results in a sparse matrix with the same structure as x. The function f can be passed as a string, a function handle or an inline function.
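The key point is that f is applied only to the stored non-zeros, so the sparsity structure is preserved even when f(0) != 0. An illustrative pure-Python sketch (a dict stands in for the sparse matrix; not Octave code):

```python
def spfun(f, x):
    """Apply f to the stored non-zero values of a sparse matrix
    (a dict mapping (row, col) -> value), preserving its structure.
    f is *not* applied to the implicit zeros, so the result differs
    from a dense element-wise map whenever f(0) != 0."""
    return {ij: f(v) for ij, v in x.items()}

y = spfun(lambda v: v * v, {(0, 0): 2.0, (2, 1): 3.0})
```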
An optional second input argument Q is the optional pre-ordering of the matrix, such that x = inv (a (:, Q)). Q can equally be a matrix, in which case x = inv (a * Q).

If a third output argument is given then the permutations to achieve a sparse inverse are returned. It is not required that the returned column permutations Q are the same as the user-supplied permutations.
Called with two or three output arguments and a single input argument, splu is a replacement for lu, and therefore the sparsity preserving column permutations Q are not performed. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P * a * Q = l * u.

An additional input argument thres, that defines the pivoting threshold, can be given. Alternatively, the desired sparsity preserving column permutations Q can be passed. Note that Q is assumed to be fixed if there are fewer than four output arguments. Otherwise, the updated column permutations are returned as the fourth argument.

With two output arguments, splu returns the permuted forms of the upper and lower triangular matrices, such that a = l * u. With two or three output arguments, if a user-defined Q is given, then u * Q' is returned. The matrix is not required to be square.
max (max (x))

returns the largest element of x, and

max (2:5, pi)
  => 3.1416  3.1416  4.0000  5.0000

compares each element of the range 2:5 with pi, and returns a row vector of the maximum values.
For complex arguments, the magnitudes of the elements are used for comparison.
If called with one input and two output arguments, max also returns the first index of the maximum value(s). Thus,
[x, ix] = max ([1, 3, 5, 2, 5])
  => x = 5
     ix = 3
min (min (x))

returns the smallest element of x, and

min (2:5, pi)
  => 2.0000  3.0000  3.1416  3.1416

compares each element of the range 2:5 with pi, and returns a row vector of the minimum values.
For complex arguments, the magnitudes of the elements are used for comparison.
If called with one input and two output arguments, min also returns the first index of the minimum value(s). Thus,
[x, ix] = min ([1, 3, 0, 2, 5])
  => x = 0
     ix = 3
The value of individual keys can be set with spparms (key, val). The default values can be restored with the special keyword 'defaults'. The special keyword 'tight' can be used to set the mmd solvers to attempt a sparser solution at the potential cost of longer running time.
Note that the actual density may sometimes be slightly smaller than d. This is unlikely to happen for large, genuinely sparse matrices.
If called with a single matrix argument, a random sparse matrix is generated wherever the matrix S is non-zero.
Note that the actual density may sometimes be slightly smaller than d. This is unlikely to happen for large, genuinely sparse matrices.
If called with a single matrix argument, a random sparse matrix is generated wherever the matrix S is non-zero.
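The reason the actual density can fall short of d is that independently drawn positions may collide, and colliding entries collapse into one. An illustrative pure-Python sketch of this effect (hypothetical function name; not the Octave implementation):

```python
import random

def sprand_like(m, n, d, rng=random.Random(1)):
    """Generate an m-by-n random sparse matrix (as a dict) with target
    density d.  Positions are drawn independently, so a collision
    overwrites an earlier entry -- which is why the achieved density
    can come out a little below d, mostly on small matrices."""
    target = round(d * m * n)
    s = {}
    for _ in range(target):
        ij = (rng.randrange(m), rng.randrange(n))
        s[ij] = rng.random()       # collision: one entry is lost
    return s

s = sprand_like(20, 20, 0.1)
density = len(s) / 400
```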
Called with two output arguments, if s is the data and j is the bin number for the data, compute the stats for each bin. In this case, bins can contain data values of zero, whereas with spstats (s) the zeros may disappear.
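The binned form groups values by their bin number before computing the per-bin count, mean and variance, so explicit zeros survive. An illustrative pure-Python sketch, assuming the sample (n - 1) variance; this is not the Octave implementation:

```python
def spstats_binned(s, j):
    """Per-bin count, mean and variance for data values s paired with
    bin numbers j (cf. spstats called with two arguments).  Zeros in
    s are kept, because membership is decided by the bin vector, not
    by the value itself.  Sample variance (n - 1 divisor) is assumed."""
    bins = {}
    for v, b in zip(s, j):
        bins.setdefault(b, []).append(v)
    stats = {}
    for b, vals in bins.items():
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1) if n > 1 else 0.0
        stats[b] = (n, mean, var)
    return stats

# The zero value is kept: it belongs to bin 1 regardless of its value.
st = spstats_binned([0.0, 2.0, 4.0, 6.0], [1, 1, 2, 2])
```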
This is equivalent to computing

spsum (x .* spconj (x), dim)

but it uses less memory and avoids calling spconj if x is real.
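The column-wise sum of squares only needs to touch the stored non-zeros, since zeros contribute nothing. An illustrative pure-Python sketch (a dict stands in for the sparse matrix; not Octave code):

```python
def spsumsq_cols(x, n):
    """Column-wise sum of squares of a sparse matrix (a dict mapping
    (row, col) -> value), i.e. the sum of x .* conj (x) down each of
    the n columns, visiting only the stored non-zeros."""
    out = [0.0] * n
    for (i, j), v in x.items():
        if isinstance(v, complex):
            out[j] += (v * v.conjugate()).real   # |v|^2 for complex v
        else:
            out[j] += v * v                      # no conj needed if real
    return out

cols = spsumsq_cols({(0, 0): 3.0, (2, 0): 4.0, (1, 1): 2.0}, 2)
```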
For a symmetric positive definite matrix s, returns the permutation vector p such that s (p, p) tends to have a sparser Cholesky factor than s. Sometimes SYMAMD works well for symmetric indefinite matrices too. The matrix s is assumed to be symmetric; only the strictly lower triangular part is referenced. s must be square.
knobs is an optional input argument. If s is n-by-n, then rows and columns with more than knobs (1) * n entries are removed prior to ordering, and ordered last in the output permutation p. If the knobs parameter is not present, then the default of 0.5 is used instead. knobs (2) controls the printing of statistics and error messages.
stats is an optional 20-element output vector that provides data about the ordering and the validity of the input matrix s. Ordering statistics are in stats (1:3). stats (1) = stats (2) is the number of dense or empty rows and columns ignored by SYMAMD and stats (3) is the number of garbage collections performed on the internal data structure used by SYMAMD (roughly of size 8.4 * nnz (tril (s, -1)) + 9 * n integers).
Octave built-in functions are intended to generate valid sparse matrices, with no duplicate entries, with ascending row indices of the nonzeros in each column, with a non-negative number of entries in each column (!) and so on. If a matrix is invalid, then SYMAMD may or may not be able to continue. If there are duplicate entries (a row index appears two or more times in the same column) or if the row indices in a column are out of order, then SYMAMD can correct these errors by ignoring the duplicate entries and sorting each column of its internal copy of the matrix S (the input matrix S is not repaired, however). If a matrix is invalid in other ways then SYMAMD cannot continue, an error message is printed, and no output arguments (p or stats) are returned. SYMAMD is thus a simple way to check a sparse matrix to see if it's valid.
stats (4:7) provide information if SYMAMD was able to continue. The matrix is OK if stats (4) is zero, or 1 if invalid. stats (5) is the rightmost column index that is unsorted or contains duplicate entries, or zero if no such column exists. stats (6) is the last seen duplicate or out-of-order row index in the column index given by stats (5), or zero if no such row index exists. stats (7) is the number of duplicate or out-of-order row indices. stats (8:20) is always zero in the current version of SYMAMD (reserved for future use).
The ordering is followed by a column elimination tree post-ordering.
The authors of the code itself are Stefan I. Larimore and Timothy A. Davis (davis@cise.ufl.edu), University of Florida. The algorithm was developed in collaboration with John Gilbert, Xerox PARC, and Esmond Ng, Oak Ridge National Laboratory. (see http://www.cise.ufl.edu/research/sparse/colamd)
Performs a symbolic factorization analysis on the sparse matrix s.