Title: Suite of Functions Related to Discrete-Time Discrete-State Markov Chains
Description: A series of functions which aid in both simulating and determining the properties of finite, discrete-time, discrete-state Markov Chains. Two functions (DTMC, MultDTMC) produce n iterations of a Markov Chain(s) based on transition probabilities and an initial distribution. The function FPTime determines the first passage time into each state. The function statdistr determines the stationary distribution of a Markov Chain.
Authors: William Nicholson
Maintainer: William Nicholson <[email protected]>
License: GPL (>= 2)
Version: 0.1-3
Built: 2025-02-10 02:57:20 UTC
Source: https://github.com/cran/DTMCPack
A series of functions which aid in both simulating and determining the properties of finite, discrete-time, discrete-state Markov Chains. This package may be of use to practitioners who need to simulate Markov Chains, but its primary intended audience is students in an introductory stochastic processes course studying class properties and long run behavior patterns of Markov Chains. Two functions (DTMC, MultDTMC) produce n iterations of a Markov Chain(s) based on transition probabilities and an initial distribution. The function FPTime determines the first passage time into each state. The function statdistr determines the stationary distribution of a Markov Chain. Updated 4/10/22 to maintain compatibility with R.
Package: DTMCPack
Type: Package
Version: 0.1-2
Date: 2013-05-22
License: GPL (>= 2)
LazyLoad: yes
Will Nicholson
Maintainer: <[email protected]>
Sidney Resnick, "Adventures in Stochastic Processes"
data(gr)
data(id)
DTMC(gr, id, 10, trace = FALSE)
This function simulates iterations through a discrete time Markov Chain. A Markov Chain is a discrete Markov Process with a state space that usually consists of positive integers. The advantage of a Markov process in a stochastic modeling context is that conditional dependencies over time are manageable because the probabilistic future of the process depends only on the present state, not the past. Therefore, if we specify an initial distribution as well as a transition matrix, we can simulate many periods into the future without any further information. Future transition probabilities can be computed by raising the transition matrix to higher and higher powers, but this method is not numerically tractable for large matrices. My method uses a uniform random variable to iterate a user-specified number of iterations of a Markov Chain based on the transition probabilities and the initial distribution. A graphical output is also available in the form of a trace plot.
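The uniform-random-variable idea can be sketched as follows (a minimal illustration, not the package's internal code; `simulate_dtmc` is a hypothetical name): at each step, draw a uniform random number and pick the next state from the cumulative transition probabilities of the current row.

```r
simulate_dtmc <- function(tmat, io, N) {
  m <- nrow(tmat)
  states <- integer(N + 1)
  # draw the initial state from the initial distribution io
  states[1] <- sample.int(m, 1, prob = io)
  for (i in seq_len(N)) {
    u <- runif(1)
    # next state is the first one whose cumulative row probability exceeds u
    states[i + 1] <- findInterval(u, cumsum(tmat[states[i], ])) + 1
  }
  states
}
```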
DTMC(tmat, io, N, trace)
tmat |
Transition matrix; rows must sum to 1 and the number of rows and columns must be equal. |
io |
Initial distribution: a single column that sums to 1, with length equal to the number of rows of the transition matrix. |
N |
Number of simulations. |
trace |
Optional trace plot, specify as TRUE or FALSE. |
Trace |
Trace-plot of the iterations through states (if selected) |
State |
An n x nrow(tmat) matrix detailing the iterations through each state of the Markov Chain |
Will Nicholson
"Adventures in Stochastic Processes" by Sidney Resnick
data(gr)
data(id)
DTMC(gr, id, 10, trace = TRUE) # 10 iterations through "Gambler's ruin"
This function uses the companion function MultDTMC to simulate several Markov chains to determine the first passage time into each state, i.e. the first time (after the initial iteration) that a specified state is reached in the Markov Process. First passage time can be useful both for determining class properties and for approximating the stationary/invariant distribution of large Markov Chains for which explicit matrix inversion is not computationally tractable.
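Given one simulated path (a vector of visited states), the first passage time can be read off directly; a minimal sketch with a hypothetical helper (`first_passage` is an illustrative name, not the package function):

```r
first_passage <- function(path, state) {
  # first index after the starting observation at which `state` appears;
  # NA if the state is never reached within the simulated horizon
  hit <- which(path[-1] == state)
  if (length(hit) == 0) NA_integer_ else hit[1]
}
```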
FPTime(state, nchains, tmat, io, n)
state |
State in which you want to find the first passage time. |
nchains |
Number of chains you wish to simulate. |
tmat |
Transition Matrix, must be a square matrix, rows must sum to 1. |
io |
Initial Distribution |
n |
Number of iterations to run for each Markov Chain. |
fp1 |
Vector of length nchains giving the first passage time into the specified state for each Markov Chain. |
Will Nicholson
data(gr)
data(id)
FPTime(1, 10, gr, id, 10) # First passage time into first state on Gambler's ruin
Motivating example: a random walk with absorbing boundaries on 4 states, analogous to a gambler at a casino. The 4 states represent a range of wealth. States 1 and 4 are absorbing, with state 1 = "Broke" and state 4 = "wealthy enough to walk away"; the intermediate states 2 and 3 are transitory. It is assumed that he bets all of his winnings in the intermediate states and has equal probability of winning and losing.
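Such a chain can be written down directly; a sketch of the transition matrix (the packaged `gr` dataset should be equivalent up to labeling; `gr_sketch` is an illustrative name):

```r
# states 1 and 4 absorb; from states 2 and 3 the gambler
# moves down or up with probability 1/2 each
gr_sketch <- matrix(c(
  1,   0,   0,   0,
  0.5, 0,   0.5, 0,
  0,   0.5, 0,   0.5,
  0,   0,   0,   1
), nrow = 4, byrow = TRUE)
```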
data(gr)
data(id)
DTMC(gr, id, 10, trace = FALSE)
Example Markov Chain from page 139 of Resnick. The productivity of the protagonist, basketball player "Happy Harry", fluctuates between three states (0-1 points), (2-5 points), (5 or more points), and the transitions between states can be modeled using a Markov Chain. Used as a motivating example to calculate the long run proportion of time spent in each state using the statdistr function.
Sidney Resnick "Adventures in Stochastic Processes"
data(hh)
statdistr(hh)
A starting distribution for the gambler's ruin example, which assigns equal probability of starting in each state.
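Concretely, for the 4-state gambler's ruin example such a distribution is simply (a sketch; `id_sketch` is an illustrative name, the packaged `id` dataset should be equivalent):

```r
# equal probability of starting in each of the 4 states
id_sketch <- rep(1/4, 4)
```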
data(id)
data(gr)
DTMC(gr, id, 10, trace = FALSE)
An extension of the DTMC function which enables multiple concurrent Markov Chain simulations. At this time, plotting is not enabled.
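The idea in miniature (illustrative names, not the package's internals): run one single-chain simulation repeatedly and collect the resulting paths.

```r
# simulate one chain of n steps from transition matrix tmat
# and initial distribution io
sim_one <- function(tmat, io, n) {
  s <- integer(n + 1)
  s[1] <- sample.int(nrow(tmat), 1, prob = io)
  for (i in seq_len(n)) {
    s[i + 1] <- sample.int(nrow(tmat), 1, prob = tmat[s[i], ])
  }
  s
}

# repeat the single-chain simulation nchains times;
# each column of the result is one chain's path
sim_many <- function(nchains, tmat, io, n) {
  replicate(nchains, sim_one(tmat, io, n))
}
```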
MultDTMC(nchains, tmat, io, n)
nchains |
Number of chains to simulate (integer). |
tmat |
Transition Matrix |
io |
Initial distribution |
n |
Number of iterations to run each chain. |
chains |
Returns nchains matrices of dimension nrow(tmat) by n which depict the transitions of each Markov Chain. |
Will Nicholson
data(gr)
data(id)
MultDTMC(20, gr, id, 10) # 20 chains with 10 iterations using the Gambler's ruin example.
This function computes the stationary distribution of a Markov Chain (assuming one exists) using the formula from Proposition 2.14.1 of Resnick: pi = (1,...,1)(I - P + ONE)^(-1), where I is an m x m identity matrix, P is an m x m transition matrix, and ONE is an m x m matrix whose entries are all 1. This formula works well if the number of states is small, but since it directly computes the inverse of the matrix, it is not tractable for larger matrices. For larger matrices, 1/E(FPTime(n)) is a rough approximation for the long run proportion of time spent in state n.
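The formula translates directly into R (a sketch using base R's `solve` for the matrix inverse; `stationary` is an illustrative name, not the package function):

```r
stationary <- function(P) {
  m <- nrow(P)
  ONE <- matrix(1, m, m)                 # m x m matrix of all ones
  # pi = (1,...,1) %*% (I - P + ONE)^(-1), Resnick Prop. 2.14.1
  rep(1, m) %*% solve(diag(m) - P + ONE)
}
```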
statdistr(tmat)
tmat |
Markov chain transition matrix, must be a square matrix and rows must sum to 1. |
Returns the stationary distribution, which gives the long run percentage of time spent in each state.
Will Nicholson
Resnick, "Adventures in Stochastic Processes"
data(hh)
statdistr(hh)