## What is a Markov chain?

A Markov chain is a mathematical system that transitions from one state to another according to certain probabilistic rules. Its defining characteristic is that, no matter how the process arrived at its present state, the probabilities of the possible future states are fixed. In other words, the probability of transitioning to any particular state depends solely on the current state and the time elapsed. The state space, or set of all possible states, can be anything: letters, numbers, weather conditions, baseball scores, or stock performances.

Markov chains are a fundamental part of stochastic processes and are used extensively across a wide range of disciplines.

Essentially, a Markov chain is a stochastic process that satisfies the Markov property: the past and the future are independent once the present is known. Put simply, if you know the current state of the process, you need no additional information about its past states to make the best possible prediction of its future. This simplicity greatly reduces the number of parameters needed when studying such a process.

Markov chains may be modeled by finite state machines, and random walks provide a prolific example of their usefulness in mathematics. They arise broadly in statistical and information-theoretical contexts and are widely employed in economics, game theory, queueing (communication) theory, genetics, and finance. While it is possible to discuss Markov chains with any size of state space, the initial theory and most applications are focused on cases with a finite (or countably infinite) number of states.
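The Markov property described above can be made concrete with a small simulation. The following is a minimal sketch using a hypothetical two-state weather chain; the states and transition probabilities are illustrative, not taken from any real dataset. Note that `step` looks only at the current state, never at the history.

```python
import random

# Hypothetical two-state weather chain: the next state depends only on the
# current one (the Markov property), not on any earlier history.
states = ["sunny", "rainy"]
transitions = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    """Sample the next state using only the current state's transition row."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def walk(start, n):
    """Simulate n transitions starting from `start`."""
    path = [start]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

print(walk("sunny", 5))  # e.g. ['sunny', 'sunny', 'rainy', 'rainy', ...]
```

Because each step depends only on the current state, the whole process is captured by one transition row per state rather than by a full history of the path so far.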

## What are the types of Markov chains?

There are two types of Markov chain: discrete-time Markov chains and continuous-time Markov chains. In the first, state changes happen at discrete time steps; in the second, changes can occur at any moment in continuous time.

To classify a chain, both the system's state space and its time-parameter index need to be specified.

For countable state space:

- Discrete-time: Markov chain on a countable or finite state space
- Continuous-time: Markov process or Markov jump process

For continuous or general state space:

- Discrete-time: Markov chain on a measurable state space (for example, Harris chain)
- Continuous-time: Any continuous stochastic process with the Markov property (for example, the Wiener process)

It is possible for a Markov chain to be stationary and therefore independent of the initial state of the process. This is referred to as a steady-state Markov chain. An infinite-state Markov chain does not need to be steady state, but a steady-state Markov chain must be time-homogeneous, meaning that the transition probabilities Pi,j(n, n + 1) are independent of n.
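For a time-homogeneous chain, the steady-state distribution can be approximated by repeatedly applying the transition matrix to any starting distribution. The sketch below uses the same illustrative two-state matrix as before; the numbers are assumptions chosen so the result is easy to check by hand.

```python
# Sketch: find the stationary (steady-state) distribution of a time-homogeneous
# chain by repeatedly applying the transition matrix P (values are illustrative).
P = [
    [0.9, 0.1],   # row i gives the probability of moving from state i to each j
    [0.5, 0.5],
]

def next_dist(dist, P):
    """One step: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=1000):
    """Power iteration from a uniform starting distribution."""
    dist = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        dist = next_dist(dist, P)
    return dist

pi = stationary(P)
# For this chain pi ≈ [5/6, 1/6]; applying one more step leaves pi unchanged,
# regardless of which distribution the iteration started from.
```

That the result does not depend on the starting distribution is exactly the "independent of the initial state" property described above.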

## What are the properties of a Markov chain?

A variety of descriptions of either a specific state in a Markov chain or the entire chain allow for a better understanding of its behavior. Let P be the transition matrix of the Markov chain {X0, X1, …}.

- A state *i* has period *k* ≥ 1 if any chain starting at and returning to state *i* with positive probability must take a number of steps divisible by *k*. If *k* = 1, then the state is known as aperiodic, and if *k* > 1, the state is known as periodic. If all states are aperiodic, then the Markov chain is known as aperiodic.
- A Markov chain is known as irreducible if there exists a chain of steps between any two states that has positive probability.
- An absorbing state *i* is a state for which Pi,i = 1. Absorbing states are crucial for the discussion of absorbing Markov chains.
- A state is known as recurrent or transient depending upon whether or not the Markov chain will eventually return to it. A recurrent state is known as positive recurrent if it is expected to return within a finite number of steps, and null recurrent otherwise.
- A state is known as ergodic if it is positive recurrent and aperiodic. A Markov chain is ergodic if all its states are.

Irreducibility and periodicity both concern the locations a Markov chain could occupy at some later point in time, given where it started. Stationary distributions deal with the likelihood of the process being in a particular state at an arbitrary point in time. For Markov chains with a finite number of states, each of which is positive recurrent, an aperiodic Markov chain is the same as an irreducible Markov chain.
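Two of the properties above, absorption and irreducibility, can be checked mechanically from a transition matrix. The sketch below uses an illustrative three-state matrix (an assumption, chosen so state 2 is absorbing): absorption follows directly from the definition Pi,i = 1, and irreducibility reduces to a reachability search over positive-probability edges.

```python
# Illustrative 3-state transition matrix; state 2 can never be left.
P = [
    [0.5, 0.5, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.0, 1.0],  # P[2][2] == 1, so state 2 is absorbing
]

def absorbing_states(P):
    """A state i is absorbing exactly when P[i][i] == 1."""
    return [i for i in range(len(P)) if P[i][i] == 1.0]

def reachable(P, start):
    """States reachable from `start` along positive-probability transitions."""
    seen = {start}
    stack = [start]
    while stack:
        i = stack.pop()
        for j in range(len(P)):
            if P[i][j] > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def is_irreducible(P):
    """Irreducible iff every state can reach every other state."""
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))

print(absorbing_states(P))   # [2]
print(is_irreducible(P))     # False: nothing escapes state 2
```

A chain with an absorbing state reachable from elsewhere can never be irreducible, which the reachability check makes explicit.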

## What is a Markov chain used for?

Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues of customers arriving at an airport, currency exchange rates, and animal population dynamics. They are used to compute the probabilities of events occurring by viewing them as states transitioning into other states, or remaining in the same state as before.

Markov chains are even used in search engine algorithms, music composition and speech recognition.

In economics and finance, Markov chains are used to predict macroeconomic situations such as market crashes and cycles between recession and expansion. They are also used to predict asset and option prices and to calculate credit risks. In continuous-time financial markets, Markov chains are employed to model the randomness. They can also model the probabilities of particular financial market climates, thereby predicting the likelihood of future market conditions such as bull markets, bear markets, and stagnant markets.
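The market-regime idea can be sketched with a hypothetical three-state chain over bull, bear, and stagnant markets; the transition probabilities below are invented for illustration, not fitted to real data. Multi-step forecasts come from powers of the transition matrix: entry (i, j) of P^n is the probability of being in regime j after n periods, starting from regime i.

```python
# Hypothetical market-regime chain; probabilities are illustrative only.
labels = ["bull", "bear", "stagnant"]
P = [
    [0.7, 0.2, 0.1],   # from a bull market
    [0.3, 0.5, 0.2],   # from a bear market
    [0.4, 0.3, 0.3],   # from a stagnant market
]

def mat_mul(A, B):
    """Multiply two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """P^n: entry (i, j) is the probability of reaching j from i in n steps."""
    out = P
    for _ in range(n - 1):
        out = mat_mul(out, P)
    return out

P2 = n_step(P, 2)
# Probability of a bear market two periods after a bull market: P2[0][1] ≈ 0.27
```

Each row of P^n still sums to 1, so every power of a transition matrix is itself a valid transition matrix for the longer time step.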

Markov chains are also important in many fields, such as:

- Physics
- Chemistry
- Biology
- Testing
- Solar irradiance variability
- Speech recognition
- Information theory
- Queueing theory
- Internet applications
- Statistics
- Economics and finance
- Social sciences
- Games
- Music
- Baseball
- Markov text generators