
Sparse matrix

What is a sparse matrix in data structures?

A matrix is a two-dimensional data object made up of m rows and n columns, so it holds m x n values in total. If more of those values are zero than non-zero, the matrix is said to be a sparse matrix.

A sparse matrix can be represented in two ways:

  • Triplet Representation or Array Representation
  • Linked Representation


Triplet Representation or Array Representation

In triplet representation, only the non-zero values are stored, along with their row and column indices. The 0th row of the triplet representation stores the total number of rows, the total number of columns, and the total number of non-zero values in the sparse matrix.
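Here is a minimal sketch of the triplet (array) representation in Python. The to_triplet helper and its exact layout (a 0th row of [rows, columns, non-zero count], followed by one [row, column, value] entry per non-zero) simply follow the description above.

```python
def to_triplet(matrix):
    """Build the triplet (array) representation of a 2-D list.

    Row 0 holds [total rows, total columns, number of non-zeros];
    every following row holds [row index, column index, value]
    for one non-zero element, in row-wise order.
    """
    rows, cols = len(matrix), len(matrix[0])
    triplet = [[rows, cols, 0]]
    for i in range(rows):
        for j in range(cols):
            if matrix[i][j] != 0:
                triplet.append([i, j, matrix[i][j]])
    triplet[0][2] = len(triplet) - 1  # fill in the non-zero count
    return triplet


print(to_triplet([[0, 0, 3],
                  [0, 0, 0],
                  [5, 0, 0]]))
# [[3, 3, 2], [0, 2, 3], [2, 0, 5]]
```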


Linked Representation

In linked representation, you use a linked list data structure to represent the sparse matrix. The list uses two kinds of nodes: header nodes and element nodes. A header node is made up of 3 fields, while an element node is made up of 5 fields, as shown below.


Header node:

  • index value
  • down
  • right


Element node:

  • row
  • column
  • value
  • down
  • right
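As a rough sketch, assuming the field layouts listed above, the two node types could be declared in Python like this (the class and attribute names are illustrative, not from any particular library):

```python
class HeaderNode:
    """Header for one row or column list (3 fields)."""

    def __init__(self, index_value):
        self.index_value = index_value  # index of the row/column this header represents
        self.down = None                # first element node in this column
        self.right = None               # first element node in this row


class ElementNode:
    """One non-zero entry of the matrix (5 fields)."""

    def __init__(self, row, column, value):
        self.row = row
        self.column = column
        self.value = value
        self.down = None   # next non-zero element in the same column
        self.right = None  # next non-zero element in the same row
```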


What is the use of a sparse matrix?

Sparse matrices are very useful for large-scale computations that dense matrices simply cannot handle.

Here are three major reasons why you would want to use a sparse matrix instead of a simple matrix:

1. Memory management

A sparse matrix is made up of fewer non-zero elements than zero elements. Only the non-zero elements are stored and evaluated, which saves a very large amount of memory.


2. Computing time

When searching a sparse matrix, only the non-zero elements are traversed, instead of all the elements of the matrix. With a data structure designed to traverse only the non-zero elements, a lot of computing time is saved.


3. Computational Efficiency

Operations with sparse matrices do not bother performing unnecessary low-level arithmetic, such as additions with zero (x + 0 is always x). This leads to computational efficiency and massive improvements in execution time for programs that work with vast amounts of sparse data.

How do you know if a matrix is sparse?

A sparse matrix is a matrix where most of the elements have a value of zero. As a common rule of thumb, a matrix in which at least 2/3 of the elements are zero can be called sparse.

So, if you want to know for sure whether a matrix is sparse, count all the elements in the matrix and then count how many of those elements are zero. If the zero elements make up 2/3 or more of the total, you know that the matrix in question is a sparse matrix.
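A minimal check in Python that follows this 2/3 rule of thumb (the is_sparse helper is just an illustration):

```python
def is_sparse(matrix, threshold=2 / 3):
    """Return True if at least `threshold` of the elements are zero."""
    total = sum(len(row) for row in matrix)
    zeros = sum(row.count(0) for row in matrix)
    return zeros / total >= threshold


print(is_sparse([[0, 0, 3],
                 [0, 0, 0],
                 [5, 0, 0]]))  # True: 7 of the 9 elements are zero
```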

What are the types of sparse matrices?

In Python, the SciPy library (scipy.sparse) provides seven sparse matrix types that you can use. These are:

  • csc_matrix: Compressed Sparse Column format
  • csr_matrix: Compressed Sparse Row format
  • bsr_matrix: Block Sparse Row format
  • lil_matrix: List of Lists format
  • dok_matrix: Dictionary of Keys format
  • coo_matrix: COOrdinate format (aka IJV, triplet format)
  • dia_matrix: DIAgonal format

If you want to construct a matrix efficiently, use dok_matrix or lil_matrix. The lil_matrix class supports basic slicing and fancy indexing with a syntax similar to NumPy arrays. The coo_matrix format is also efficient for constructing matrices.

Even though they are similar to NumPy arrays, it is not a particularly great idea to use NumPy functions directly on these matrices because NumPy might fail to properly convert them for computations, which would cause unexpected (and incorrect) results.

If you really want to apply a NumPy function to one of these matrices, it is better to convert the sparse matrix to a NumPy array before applying the function.

If you want to perform manipulations like multiplication or inversion, you should convert the matrix to either CSC or CSR format first. Since the lil_matrix format is row-based, converting it to CSR is more efficient than converting it to CSC.

Conversions among the CSR, CSC, and COO formats all tend to be efficient, linear-time operations. 
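To make this concrete, here is a short sketch using standard scipy.sparse calls: build a matrix in COO format from (row, column, value) triplets, convert it to CSR, and multiply it by a vector.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Build a 3x3 sparse matrix from (row, col, value) triplets.
rows = np.array([0, 1, 2, 2])
cols = np.array([2, 0, 0, 2])
vals = np.array([3.0, 4.0, 5.0, 6.0])
A = coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Convert to CSR before doing arithmetic; the conversion is linear-time.
A_csr = A.tocsr()

x = np.array([1.0, 2.0, 3.0])
print(A_csr @ x)        # matrix-vector product: [ 9.  4. 23.]
print(A_csr.toarray())  # densify only when a NumPy function really needs it
```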


How do you store a sparse matrix?

There are several methods that you can use to store a sparse matrix, some of these methods are:

  • Compressed Row Storage (CRS)
  • Compressed Column Storage (CCS)
  • Block Compressed Row Storage (BCRS)
  • Compressed Diagonal Storage (CDS)
  • Jagged Diagonal Storage (JDS)
  • Skyline Storage (SKS)


Compressed Row Storage (CRS)

The compressed row storage format makes no assumptions about the sparsity structure of the matrix, and it doesn't store any unnecessary elements. The price of this generality is an indirect addressing step for each scalar operation in a matrix-vector product or preconditioner solve.

It puts the subsequent nonzeros of the matrix rows in contiguous memory locations. 

If you have a nonsymmetric sparse matrix A, you can create three vectors: one for floating-point numbers (val) and two for integers (col_ind, row_ptr). The val vector stores the values of the non-zero elements of the matrix as they are traversed in a row-wise fashion.

The col_ind vector stores the column indices of the elements held in val, and the row_ptr vector stores the positions in val at which each row starts. The storage savings are massive: instead of the m x n locations a dense matrix needs, CRS needs only 2nnz + m + 1 locations, where nnz is the number of non-zero elements.

If the matrix A is symmetric, we only have to store the upper (or lower) triangular portion of the matrix. The trade-off is an algorithm of greater complexity with a different pattern of data access.
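Here is a minimal sketch of CRS in plain Python, using the val, col_ind, and row_ptr names from the text (0-based indexing is assumed). The matrix-vector product shows the indirect addressing step through col_ind mentioned above.

```python
def to_crs(matrix):
    """Build the CRS vectors (val, col_ind, row_ptr) of a 2-D list."""
    val, col_ind, row_ptr = [], [], [0]
    for row in matrix:
        for j, x in enumerate(row):
            if x != 0:
                val.append(x)
                col_ind.append(j)
        row_ptr.append(len(val))  # where the next row starts in val
    return val, col_ind, row_ptr


def crs_matvec(val, col_ind, row_ptr, x):
    """Compute y = A @ x from the CRS vectors.

    Note the indirect addressing: x is read through col_ind.
    """
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += val[k] * x[col_ind[k]]
        y.append(s)
    return y


A = [[10, 0, 0],
     [0, 0, 2],
     [3, 0, 4]]
val, col_ind, row_ptr = to_crs(A)
print(val, col_ind, row_ptr)  # [10, 2, 3, 4] [0, 2, 0, 2] [0, 1, 2, 4]
print(crs_matvec(val, col_ind, row_ptr, [1, 1, 1]))  # [10, 2, 7]
```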


Compressed Column Storage (CCS)

Compressed Column Storage (CCS) is also known as the Harwell-Boeing sparse matrix format. It is almost the same as CRS; the only difference is that the columns of the matrix, rather than the rows, are stored (traversed).


Block Compressed Row Storage (BCRS)

In situations where your sparse matrix is made up of square dense blocks of non-zeros in a regular pattern, CRS or CCS can be adapted to exploit those block patterns. Block matrices typically arise from the discretization of partial differential equations in which several degrees of freedom are associated with each grid point. The matrix is partitioned into small blocks with a size equal to the number of degrees of freedom, and every block is treated as a dense matrix, even though it may contain some zeros.


Compressed Diagonal Storage (CDS)

If matrix A has a bandwidth that is fairly constant from row to row, it can be worth exploiting this structure in the storage scheme by storing the subdiagonals of the matrix in consecutive locations. This not only eliminates the vectors identifying the column and row, but also lets us pack the non-zero elements in a way that makes the matrix-vector product more efficient. This storage scheme is very useful if the matrix arises from a finite element or finite difference discretization on a tensor product grid.
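As an illustration, SciPy's dia_matrix class implements a diagonal storage scheme of this kind. A small sketch that stores a 4 x 4 tridiagonal matrix by its three diagonals:

```python
import numpy as np
from scipy.sparse import dia_matrix

# Three diagonals at offsets -1 (sub), 0 (main), and +1 (super).
data = np.array([[1, 1, 1, 1],    # subdiagonal
                 [2, 2, 2, 2],    # main diagonal
                 [3, 3, 3, 3]])   # superdiagonal
offsets = np.array([-1, 0, 1])
A = dia_matrix((data, offsets), shape=(4, 4))
print(A.toarray())
# [[2 3 0 0]
#  [1 2 3 0]
#  [0 1 2 3]
#  [0 0 1 2]]
```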

Jagged Diagonal Storage (JDS)

This format is very useful for implementing iterative methods on parallel and vector processors. Like Compressed Diagonal Storage, it gives a vector length essentially of the size of the matrix. It is more space-efficient than CDS, at the cost of a gather/scatter operation.


Skyline Storage (SKS)

This format is for skyline matrices, also known as variable band or profile matrices. A very significant advantage of solving linear systems with skyline coefficient matrices is that when pivoting is not necessary, the skyline structure is preserved during Gaussian elimination.

If the matrix is symmetric, only its lower triangular part is stored.

Thanks for reading! We hope you found this helpful.

