An example of a stationary time

Today, I want to talk a little about strong stationary times, which give another useful technique for bounding the mixing time of a Markov chain. Let {S} be a countable state space and let {\left\{X_n\right\}} be a Markov chain on {S} with stationary distribution {\pi}. A stationary time {\tau} is a random time such that {P(X_\tau=s)=\pi(s)} for every {s\in S}. In general, {\tau} and {X_\tau} are not independent. A stronger concept is a strong stationary time, for which {\tau} and {X_\tau} are independent. Explicitly, this means that {P(\tau=t,X_t=s)=P(\tau=t)\pi(s)}. Using the Markov property and the stationarity of {\pi}, one can actually show that {P(\tau \leq t, X_t=s)=P(\tau\leq t)\pi(s)}.
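For completeness, here is the short computation behind that last claim, assuming (as is standard) that {\tau} is a randomized stopping time, so that conditional on {\tau=r} and {X_r=s'} the walk continues as the chain started from {s'}; write {P^{m}(s',s)} for the {m}-step transition probability:

\displaystyle P(\tau\leq t, X_t=s)=\sum_{r\leq t}\sum_{s'}P(\tau=r,X_r=s')P^{t-r}(s',s)=\sum_{r\leq t}P(\tau=r)\sum_{s'}\pi(s')P^{t-r}(s',s)=P(\tau\leq t)\pi(s),

where the second equality uses {P(\tau=r,X_r=s')=P(\tau=r)\pi(s')} and the last equality uses the stationarity of {\pi}.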

One example of a stationary time that is not a strong stationary time is the following. Consider the simple random walk on the {n}-cycle started at {0}. Let {\tau=0} with probability {\frac{1}{n}}, and with probability {\frac{n-1}{n}} let {\tau} be the time at which the random walk hits the last new vertex, i.e. the last vertex to be visited for the first time. One interesting result is that the last new vertex is uniformly distributed over the non-zero vertices. Since the stationary distribution of the walk is uniform over all {n} vertices, this verifies that {\tau} is a stationary time. However, it is not strong, since on the event {\tau=0} we always have {X_\tau=0}.
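As a quick sanity check on the uniformity claim (which is proved below), here is a small simulation sketch in Python; the function last_new_vertex is just an illustrative name of mine, not from any library.

import random
from collections import Counter

def last_new_vertex(n):
    # Simple random walk on the n-cycle started at 0; return the
    # vertex that is visited for the first time last.
    position = 0
    unvisited = set(range(1, n))
    last = None
    while unvisited:
        position = (position + random.choice((-1, 1))) % n
        if position in unvisited:
            unvisited.remove(position)
            last = position
    return last

# Each non-zero vertex should appear with frequency close to 1/(n-1).
n, trials = 8, 100000
counts = Counter(last_new_vertex(n) for _ in range(trials))
print({v: round(counts[v] / trials, 3) for v in range(1, n)})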

I want to apply a basic martingale argument to prove this result. Recall that for a simple random walk on {\mathbb{Z}} starting from {0}, with absorbing states {-a} and {b} (where {a,b>0}), we have {P(\tau_{-a}<\tau_{b})=\frac{b}{a+b}}, where {\tau_x} is the first time the walk hits {x}.
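This is the usual optional stopping computation: the simple random walk on {\mathbb{Z}} is itself a martingale, and stopping it at {\tau_{-a}\wedge\tau_{b}}, which is almost surely finite, gives

\displaystyle 0=E\left[X_{\tau_{-a}\wedge\tau_{b}}\right]=-a\,P(\tau_{-a}<\tau_{b})+b\,\left(1-P(\tau_{-a}<\tau_{b})\right), \qquad \text{so} \qquad P(\tau_{-a}<\tau_{b})=\frac{b}{a+b}.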

Going back to the {n}-cycle random walk, let us calculate the probability that a fixed non-zero vertex {k} is the last new vertex to be hit. We can lift the {n}-cycle random walk to a random walk on {\mathbb{Z}}, where each integer {x} corresponds to the vertex {x} mod {n}; the two relevant copies of the vertex {k} are then {k-n} and {k}. Since {k} is the last new vertex hit on the {n}-cycle, the random walk on {\mathbb{Z}} must hit both {k+1-n} and {k-1} (the lifts of the two neighbours of {k}) before hitting {k} or {k-n}. Hence, the probability is

\displaystyle  P_{0}(\tau_{k+1-n}<\tau_{k-1},\tau_{k-1}<\tau_{k},\tau_{k-1}<\tau_{k-n})+P_{0}(\tau_{k+1-n}>\tau_{k-1},\tau_{k+1-n}<\tau_{k},\tau_{k+1-n}<\tau_{k-n}).

Since the walk starts from {0}, it must pass {k-1} before reaching {k} and pass {k+1-n} before reaching {k-n}, so {\tau_{k-1}<\tau_{k}} and {\tau_{k+1-n}<\tau_{k-n}} hold automatically. Hence the probability we are calculating is {P_{0}(\tau_{k+1-n}<\tau_{k-1},\tau_{k-1}<\tau_{k-n})+P_{0}(\tau_{k+1-n}>\tau_{k-1},\tau_{k+1-n}<\tau_{k})}; the two summands correspond to whether the walk reaches {k+1-n} or {k-1} first. For the first summand, observe that by the strong Markov property, once the walk hits {k+1-n} it starts afresh from there, so the summand equals {P_{0}(\tau_{k+1-n}<\tau_{k-1})P_{k+1-n}(\tau_{k-1}<\tau_{k-n})}. Using the martingale result, this is {\frac{k-1}{n-2}\cdot\frac{1}{n-1}}.
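In the notation of the martingale result, the first factor has barriers {k+1-n=-(n-1-k)} and {k-1}, i.e. {a=n-1-k} and {b=k-1}; for the second factor the walk restarts at {k+1-n}, with {k-n} one step below and {k-1} exactly {n-2} steps above. So

\displaystyle P_{0}(\tau_{k+1-n}<\tau_{k-1})=\frac{k-1}{(n-1-k)+(k-1)}=\frac{k-1}{n-2},\qquad P_{k+1-n}(\tau_{k-1}<\tau_{k-n})=\frac{1}{1+(n-2)}=\frac{1}{n-1}.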

Similarly, for the second summand, by the strong Markov property applied at the time the walk hits {k-1}, it equals {P_{0}(\tau_{k+1-n}>\tau_{k-1})P_{k-1}(\tau_{k+1-n}<\tau_{k})}. Using the martingale result again (now the walk must hit the upper barrier {k-1} first, and after restarting at {k-1} it has {k} one step above and {k+1-n} exactly {n-2} steps below), we get {\frac{n-1-k}{n-2}\cdot\frac{1}{n-1}}. Summing the two probabilities, we have that {P(k\; \text{is the last new vertex})=\frac{k-1}{n-2}\cdot\frac{1}{n-1}+\frac{n-1-k}{n-2}\cdot\frac{1}{n-1}=\frac{1}{n-1}}. This implies that the last new vertex is uniformly distributed among the non-zero vertices, completing the argument.
