What is space complexity?
Space complexity is a measure of the total amount of memory an algorithm or operation needs to run, according to its input size. It is the amount of memory space the algorithm requires to solve an instance of the computational problem, expressed as a function of characteristics of the input.
In other words, it is the memory the algorithm requires from the start of execution until it fully terminates.
Space complexity is often mistaken for auxiliary space, but there is a difference between them. Auxiliary space is just one part of space complexity: it refers to the extra or temporary space that an algorithm uses. Space complexity is the total space taken up by the algorithm with respect to the input size, and it includes the auxiliary space as well as the space used by the input.
Space complexity = Auxiliary space + Input space.
Space complexity is essentially a parallel concept to time complexity. If you need to create an array of size n, this will take O(n) space. If you build a two-dimensional array of size n*n, this will need O(n^2) space.
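A minimal sketch of both cases in Python (the language choice is ours; the article itself is language-agnostic):

```python
def make_array(n):
    # A list of n elements: the extra space grows linearly, O(n).
    return [0] * n

def make_grid(n):
    # An n-by-n grid holds n*n elements: O(n^2) space.
    return [[0] * n for _ in range(n)]
```

Here `make_array(1000)` allocates a thousand cells, while `make_grid(1000)` allocates a million, which is exactly the O(n) versus O(n^2) gap described above.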
Space complexity is dependent on several factors. These include the programming language, the compiler, as well as the machine that runs the algorithm.
What does an algorithm use memory space for?
There are three purposes for which an algorithm uses memory space while executing:
Instruction space

This is the amount of space used to store the compiled version of the program's instructions.
Environmental stack

One algorithm (function) may be called from inside another. In such circumstances, the current function's variables are pushed onto the system stack, where they wait while the call to the inner function is made and executed.

Let's say a function A() calls a function B() inside it. In this situation, all the variables of A() are temporarily stored on the system stack while B() is called and executed.
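A small Python sketch of this situation, using hypothetical functions a() and b() (the names and values are illustrative only):

```python
def b():
    # While b() runs, a()'s frame, with its local variables,
    # sits lower on the call stack waiting for b() to return.
    return "result from b"

def a():
    local_value = 10       # stored in a()'s stack frame
    returned = b()         # a() is suspended here until b() finishes
    # a()'s locals are still intact after b() returns.
    return local_value, returned
```

When `a()` is called, its frame stays on the stack for the entire duration of `b()`, which is why nested calls add to an algorithm's memory footprint.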
Data space

This is the amount of space used by the variables and the constants.
When space complexity is calculated, only the data space is considered; the instruction space and the environmental stack are ignored.
How important is space complexity?
Application developers are bound by the physical memory of the systems that they intend to run the applications on. That’s why space complexity is extremely important. You do not ever want to run a function or process that goes over the amount of space the system has at any point.
Which notations are used to express space complexity measurements?
Big O Notation – O

This notation describes an asymptotic upper bound. It denotes an algorithm's scalability and performance, and essentially gives you the worst-case scenario of the algorithm's growth rate. What it tells you is that the amount of space the algorithm in question takes will not grow any faster than f(x), though it could grow at a slower pace.
Here are some examples of expressing space complexity using big-O notation, starting from the slowest space growth to the fastest.
- O(1): This signifies constant complexity. It takes the same amount of space irrespective of the input size.
- O(log n): This signifies logarithmic complexity. It takes up space proportional to the log of the input size.
- O(n): This denotes linear complexity. It takes space directly proportional to the input size.
- O(n log n): This denotes log-linear or quasilinear complexity. It is also called linearithmic. Its space complexity increases proportionally to the input size times a logarithmic factor.
- O(n^2): This denotes quadratic complexity. Space complexity increases proportionally to the square of the input size.
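As a rough Python sketch (our illustration, not from the original article), here are small functions whose extra space matches three of the classes above:

```python
def running_total(nums):
    # O(1): one accumulator variable, regardless of input length.
    total = 0
    for x in nums:
        total += x
    return total

def binary_digits(n):
    # O(log n): the binary digits of n, about log2(n) + 1 of them.
    digits = []
    while n > 0:
        digits.append(n % 2)
        n //= 2
    return digits

def duplicate_list(nums):
    # O(n): a full copy of the input.
    return list(nums)
```

Doubling the input barely changes `binary_digits`'s footprint (one extra digit), leaves `running_total`'s unchanged, and doubles `duplicate_list`'s, which is the practical meaning of the three growth rates.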
Omega Notation – Ω
The omega notation describes an asymptotic lower bound: it gives you the best-case scenario of the algorithm's growth rate. It says that the amount of space that the algorithm in question takes will not grow any slower than f(x), but it could grow at a faster pace.
Theta Notation – θ
The theta notation represents a function that lies within both an upper and a lower bound. It says that the algorithm in question will take up at least this much space (the lower bound) and no more than that much space (the upper bound).
How do you calculate space complexity?
There are two kinds of algorithms that you'd have to calculate space complexity for.
Iterative algorithms

With these, you'd need to consider the variables and data structures that are declared in your program.
As an example, declaring an array of size n adds O(n) to the space complexity; a 2D array adds O(n^2), and so on. Simple variables always add O(1) space because the size of a given datatype is constant.
User defined datatypes tend to be trickier, but most of the time, iterative algorithms tend to be straightforward to analyse in terms of space complexity.
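A sketch of such a straightforward iterative analysis in Python: this Fibonacci loop keeps only two scalar variables alive at any time, so its auxiliary space is O(1) however large n gets.

```python
def fib_iterative(n):
    # Only two variables are ever held, so auxiliary space is O(1):
    # each loop iteration overwrites them rather than allocating more.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```

Counting declarations as the paragraph above describes: two simple variables, no arrays, hence constant space.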
Recursive algorithms

These can be trickier, since on top of accounting for the variables and data structures (memory reserved for operations on the heap), you also have to account for the stack memory used by the recursive calls.
You can take the recursive Fibonacci algorithm as an example, where F(n) = F(n-1) + F(n-2) for n > 2. On tracing out the recursive calls, you will see that the program never occupies more than about n cells (stack frames) at once for a call of F(n).
If you assume that every cell in the stack reserves equal space, then you can say that this algorithm has a space complexity of O(n).
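The trace can be sketched in Python by instrumenting the naive recursion with a depth tracker (the `depth` and `tracker` parameters are our additions for illustration, not part of the algorithm):

```python
def fib_recursive(n, depth=1, tracker=None):
    # tracker records the deepest point the call stack reaches.
    # For F(n), the longest chain is F(n) -> F(n-1) -> ... -> F(2),
    # so the stack never grows past roughly n frames: O(n) space.
    if tracker is None:
        tracker = {"max_depth": 0}
    tracker["max_depth"] = max(tracker["max_depth"], depth)
    if n <= 2:
        return 1, tracker
    a, _ = fib_recursive(n - 1, depth + 1, tracker)
    b, _ = fib_recursive(n - 2, depth + 1, tracker)
    return a + b, tracker
```

Running `fib_recursive(10)` takes exponentially many calls in total, but the tracker shows the stack itself only ever holds about n frames at once, which is why the space complexity is O(n) even though the time complexity is far worse.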