Summary for Continuous Time Markov Chains

Chapter 16.8 has focused on continuous-time processes, and reading this chapter has helped me realize that Markov chains can be applied to both continuous-time and discrete-time processes. The states of the system are still labeled with discrete values 0, 1, 2, 3, …, N. In such a Markov chain, however, the time parameter may take any value greater than or equal to zero (t ≥ 0). The random variable X(t) represents the state of the system at time *t*. Time need not be represented by the letter *t*: past values of time can be denoted by *r*, the current time by *s*, and future times by *s + t*.

In continuous time, the Markov chains considered here have stationary transition probabilities and a finite number of states. Here *T _{i}* represents the time the process spends in state *i* before moving to another state. The probability distribution of the time *remaining* in a state is the same regardless of how much time the process has already spent there. This is due to the memoryless property of the exponential distribution, which has *q* as its only parameter and a mean of *1/q*.
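As a quick numerical illustration (a sketch, with an assumed rate *q* = 2.0 that is not from the text), the memoryless property P(T > s + t | T > s) = P(T > t) can be checked directly from the exponential survival function:

```python
import math

q = 2.0  # assumed rate parameter; the mean time in the state is 1/q

def survival(t, q):
    """P(T > t) for an exponential random variable with rate q."""
    return math.exp(-q * t)

s, t = 0.5, 1.2
# Memoryless property: P(T > s + t | T > s) should equal P(T > t).
conditional = survival(s + t, q) / survival(s, q)
print(abs(conditional - survival(t, q)) < 1e-9)  # True
```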

Therefore, the random variable *T _{i}* has an exponential distribution with mean *1/q _{i}*, and the probabilities *p _{ij}* of the process transitioning from state *i* to state *j* must satisfy the following: *p _{ii}* = 0 for every *i*, and Σ_{j} *p _{ij}* = 1 for all *i*.
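These two conditions on the jump probabilities can be verified mechanically; a minimal sketch using a made-up 3×3 matrix of transition probabilities (not taken from the chapter):

```python
# Hypothetical jump-chain transition probabilities p_ij for a 3-state chain.
P = [
    [0.0, 0.7, 0.3],
    [0.4, 0.0, 0.6],
    [0.5, 0.5, 0.0],
]
# Condition 1: p_ii = 0 for every i. Condition 2: each row sums to 1.
ok = all(P[i][i] == 0.0 and abs(sum(P[i]) - 1.0) < 1e-12 for i in range(3))
print(ok)  # True
```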

Moreover, the state entered after *i* does not depend on the time spent in state *i*. Continuous-time Markov chains no longer use transition probabilities directly; instead they use transition intensities, denoted *q _{i}* and *q _{ij}*. Here *q _{i}* represents the rate of transition out of state *i*: the expected number of times the process leaves state *i* per unit of time spent in that state. Similarly, *q _{ij}* is the rate at which the process moves from state *i* to state *j*: the expected number of transitions from *i* to *j* per unit of time spent in state *i*. Thus, the reciprocal of *q _{i}* is the expected time spent in state *i*.
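A standard relationship ties these quantities together (it is implied by the definitions above, though not stated explicitly there): the rate out of state *i* decomposes over the destination states, so

```latex
q_{ij} = q_i \, p_{ij}, \qquad
q_i = \sum_{j \neq i} q_{ij}, \qquad
E[T_i] = \frac{1}{q_i}.
```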

The last part of the chapter has addressed the steady-state probabilities of a continuous-time Markov chain, denoted *π _{j}*, which are calculated using the following equations:

*π _{j} q _{j}* = Σ_{i≠j} *π _{i} q _{ij}*, for *j* = 0, 1, …, M ————————(1)

and Σ_{j} *π _{j}* = 1 ————————(2)

The left-hand side of equation (1) shows the rate at which the process leaves state *j*, while the right-hand side shows the rate of transition into *j* from the other states *i*. In summary, equation (1) asserts that in steady state the rate of leaving *j* equals the rate of entering *j*, since the left-hand side of the equation equals the right-hand side. Equation (2) is the assertion that the sum of all the probabilities must equal one.
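To make equations (1) and (2) concrete, here is a minimal sketch for a hypothetical two-state chain with assumed intensities (the values 3 and 1 are illustrative, not from the chapter); the balance equation together with the normalization condition determines the steady-state probabilities:

```python
# Hypothetical two-state continuous-time Markov chain.
q01, q10 = 3.0, 1.0  # assumed transition intensities (not from the text)

# Solving pi0 * q01 = pi1 * q10 together with pi0 + pi1 = 1 gives:
pi0 = q10 / (q01 + q10)
pi1 = q01 / (q01 + q10)

# Equation (1): rate of leaving state 0 equals rate of entering state 0.
print(abs(pi0 * q01 - pi1 * q10) < 1e-12)  # True
# Equation (2): the probabilities sum to one.
print(pi0 + pi1)  # 1.0
```

Note that the fast-exiting state 0 (intensity 3) ends up with the smaller steady-state probability, 0.25, while the slow-exiting state 1 gets 0.75.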