Named after the Russian mathematician Andrey Markov, a Markov Chain is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules.
The defining characteristic of a Markov chain is that no matter how the process arrived at its present state, the probabilities of the possible future states are fixed. In other words, the probability of transitioning to any particular state depends solely on the current state and the time elapsed, not on the path taken to get there. The state space, or set of all possible states, can be anything: letters, numbers, weather conditions, health conditions or stock performances.
Markov chains arise in statistical contexts and are widely employed in health, economics, game theory, communication, genetics, and finance.
A Markov chain is a stochastic process, but it differs from a general stochastic process in that a Markov chain must be “memory-less.” That is, the probability of future actions is not dependent upon the steps that led up to the present state. This is called the Markov property. While the theory of Markov chains is important precisely because so many “everyday” processes satisfy the Markov property, there are many common examples of stochastic processes that do not satisfy it.
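The Markov property can be sketched in a few lines of code: the next state is drawn using only the current state, with no memory of earlier steps. The two-state weather chain below, and its transition probabilities, are purely illustrative assumptions.

```python
import random

# Hypothetical two-state weather chain. Each row lists the probabilities
# of moving to the next state given ONLY the current state (Markov property).
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def next_state(state, rng=random):
    """Draw the next state using only the current state."""
    r = rng.random()
    cumulative = 0.0
    for candidate, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return candidate
    return candidate  # guard against floating-point rounding

def simulate(start, steps, seed=0):
    """Simulate a path of the chain; the seed makes the run reproducible."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(next_state(path[-1], rng))
    return path
```

Note that `next_state` never looks at the history, only at `state`: that single design choice is what makes this a Markov chain rather than a general stochastic process.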
Concepts regarding Markov chains
Markov chains come in two types: discrete-time Markov chains and continuous-time Markov chains.
This means that we have one case where the changes happen at specific time steps and one where the changes can happen at any moment. In this article, we will focus on discrete-time Markov chains.
One example to explain the discrete-time Markov chain is the price of an asset where the value is registered only at the end of the day. The value of the Markov chain in discrete-time is called the state and in this case, the state corresponds to the closing price. A continuous-time Markov chain changes at any time.
A Markov chain can settle into a stationary, or steady-state, distribution, in which the probabilities of the different outcomes converge to fixed values that are independent of the initial state of the process. Not every chain reaches a steady state — an infinite-state Markov chain, for instance, need not have one — but a steady-state Markov chain must be time-homogeneous, meaning its transition probabilities do not change over time.
Application of Markov Chains
Since Markov chains can be designed to model many real-world processes, they are used in a wide variety of situations. These fields range from the mapping of animal life populations to search engine algorithms, music composition and speech recognition. In economics and finance, they are often used to predict macroeconomic situations like market crashes and cycles between recession and expansion. Other areas of application include predicting asset and option prices and calculating credit risks.
In the epidemiological and chemistry literature, Markov chains are usually called “compartmental models”. The most famous is the SIR model for infectious diseases.
The main idea is that individuals can be classified into three states: Susceptible (people that can get the disease), Infected (people that have the disease and are contagious), and Removed (people who have recovered or died and can no longer catch or spread the disease).
Almost all individuals start Susceptible, with only a handful Infected, and Susceptible individuals can then catch the disease from them.
Considering discrete time steps, such as days, some Infected individuals will either get better or die, hence becoming Removed, and some Susceptible individuals will become Infected.
The interesting part is how interactions between the compartments give rise to different models, and many infectious disease models build on this idea by adding compartments. For example, you can divide the Infected into symptomatic and asymptomatic, divide the Susceptible into vaccinated and unvaccinated, and divide the Removed into recovered and dead. A more complete description of these models can be found in Mathematical Epidemiology by Brauer.
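The day-by-day SIR dynamics described above can be sketched as follows. This is the deterministic (mean-field) version of the model, tracking population fractions rather than individual random transitions; the infection rate `beta` and removal rate `gamma` are assumed illustrative values, not fitted parameters.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of discrete-time SIR dynamics (s, i, r are population fractions).

    beta:  assumed rate at which Susceptible-Infected contact causes infection
    gamma: assumed rate at which Infected become Removed (recover or die)
    """
    new_infections = beta * s * i   # interaction between S and I compartments
    new_removals = gamma * i        # I flows into R independently of S
    return (s - new_infections,
            i + new_infections - new_removals,
            r + new_removals)

def run_sir(days, s0=0.99, i0=0.01, r0=0.0):
    """Start with almost everyone Susceptible and a handful Infected."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        history.append((s, i, r))
    return history
```

Because the three compartments only exchange people, the fractions always sum to 1 — a quick sanity check when extending the model with extra compartments such as vaccinated/unvaccinated.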
Chronic disease models
Chronic disease models do not usually have interaction between compartments — think of diabetes, which is not contagious. The main idea is that everyone starts Healthy, may then develop the disease, and finally dies.
A discrete-time Markov chain with stationary transition probabilities is often used to investigate treatment programs and health care protocols for chronic disease.
If we classify the chronic disease into distinct health states, the movement through these health states over time then represents a patient’s disease history. We can use a discrete-time Markov chain to describe such movement using the transition probabilities between the health states.
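Such a disease history can be sketched as a discrete-time Markov chain with stationary transition probabilities between health states. The three states and the numbers in the matrix below are hypothetical illustrations, not clinical estimates; note that Dead is an absorbing state (once entered, never left).

```python
# Hypothetical chronic-disease chain with stationary transition probabilities.
STATES = ["Healthy", "Sick", "Dead"]
P = [
    [0.95, 0.04, 0.01],  # from Healthy: mostly stay Healthy
    [0.10, 0.80, 0.10],  # from Sick: may recover, stay Sick, or die
    [0.00, 0.00, 1.00],  # Dead is absorbing
]

def distribution_after(years, start=(1.0, 0.0, 0.0)):
    """Probability of being in each health state after `years` transitions,
    starting (by default) from a Healthy patient."""
    dist = list(start)
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist
```

Evaluating `distribution_after(t)` for increasing `t` traces a patient’s expected disease history through the health states, which is how such chains are used to compare treatment programs and care protocols.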
Markov chains are powerful tools for modelling problems with random dynamics. Thanks to their useful properties, they are used in many fields: statistics, biology and medicine, the modelling of the evolution of biological populations, computer science, information theory, and speech recognition, where hidden Markov models are important tools, among many others. MySense utilises Markov chains in some of our models to gain deeper insights into our users’ behaviours, helping us provide the best care possible.
The huge possibilities offered by Markov chains, in terms of modelling as well as computation, go far beyond what has been presented in this modest introduction, so we encourage interested readers to read more about these tools, which deserve a place in the data scientist’s toolbox.
Source: To agent-based simulation from System Dynamics, https://www.researchgate.net/publication/224209140_To_agent-based_simulation_from_System_Dynamics