Stay Focused

I read a post by Will Ma, a brilliant grad student at MIT ORC who wins lots of poker games in his spare time, reflecting on his past three years and his plans for the future as a PhD student.

One thing that impressed me is the idea of going all-in as a PhD student, that is, putting all your focus and thought into just one particular thing during your PhD years. I barely do this; I waste a lot of time doing random things and letting my mind float around, so I do not consider myself efficient or professional at this point. Mitya once told me: "I have been thinking abstractly for 10 years; that is why I am good at it." If I never truly put myself in such a state, I will never find the beauty in anything.

Stay focused.

Check out the original post

Some advice on research

I have been at Cornell for a year, studying Operations Research. In the first semester, everyone has to work hard to pass three core courses: Linear Programming, Probability, and Statistics. The second semester is more relaxed and gives you the freedom to learn things you are interested in. I also began to approach some professors in our department, asking what they work on and reading a few papers they suggested.

This summer, I will study and do research (seriously) on a small project in optimization. I have always thought that some advice on research would be helpful in guiding my study.

Here are a few links I found online:

Ravi Vakil: on how to go to seminars

Secret Blogging Seminar: by recent Berkeley Ph.D. students in mathematics, covering representation theory, algebraic geometry, and knot theory

Terry Tao: Write down what you have done; work hard

A funny one: using the analogy of a vacuum cleaner

An example of a stationary time

Today, I want to talk a little about strong stationary times, another useful technique for bounding the mixing time of a Markov chain. Let {S} be a countable state space and {\left\{X_n\right\}} a Markov chain on {S} with stationary distribution {\pi}. A stationary time {\tau} is a random time such that {P(X_\tau=s)=\pi(s)} for all {s\in S}. In general, {\tau} and {X_\tau} are not independent. A stronger concept is a strong stationary time, for which {\tau} and {X_\tau} are independent; explicitly, {P(\tau=t,X_t=s)=P(\tau=t)\pi(s)}. Using the stationarity of {\pi}, one can actually show that {P(\tau \leq t, X_t=s)=P(\tau\leq t)\pi(s)}.
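To see why, here is a short derivation (a sketch; {P^{m}(s',s)} below denotes the {m}-step transition probability, notation not used elsewhere in this post). Conditioning on the value of {\tau},

\displaystyle P(\tau\leq t, X_t=s)=\sum_{r=0}^{t}P(\tau=r)\sum_{s'\in S}\pi(s')P^{t-r}(s',s)=\sum_{r=0}^{t}P(\tau=r)\pi(s)=P(\tau\leq t)\pi(s),

where the first equality uses the independence of {\tau} and {X_\tau} together with the Markov property, and the second uses the stationarity relation {\pi P^{t-r}=\pi}.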

An example of a stationary time that is not a strong stationary time is the following. Consider the random walk on the {n}-cycle starting at {0}. Let {\tau=0} with probability {\frac{1}{n}}, and with probability {\frac{n-1}{n}} let {\tau} be the time at which the walk hits the last new vertex, i.e., the last vertex to be visited for the first time. An interesting result is that the last new vertex is uniformly distributed over the non-zero vertices. Hence {X_\tau} equals {0} with probability {\frac{1}{n}} and equals each non-zero vertex with probability {\frac{n-1}{n}\cdot\frac{1}{n-1}=\frac{1}{n}}, so {X_\tau} is uniform, which verifies that {\tau} is a stationary time. However, {\tau} is not strong, since {X_\tau=0} whenever {\tau=0}.
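As a quick numerical sanity check of the uniformity claim, here is a small simulation sketch in Python (the helper name last_new_vertex and the choices of n and trials are mine):

import random
from collections import Counter

def last_new_vertex(n):
    """Simple random walk on the n-cycle started at 0; returns the
    last vertex to be visited for the first time."""
    pos, visited, last = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + random.choice((-1, 1))) % n  # step left or right
        if pos not in visited:
            visited.add(pos)
            last = pos
    return last

n, trials = 7, 100_000
counts = Counter(last_new_vertex(n) for _ in range(trials))
for v in range(1, n):
    print(v, counts[v] / trials)  # each should be close to 1/(n-1)

For {n=7}, each non-zero vertex should appear with empirical frequency close to {\frac{1}{n-1}=\frac{1}{6}\approx 0.167}.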

I want to apply a basic martingale argument to prove this result. Recall that for a simple random walk on {\mathbb{Z}} starting from {0}, with absorbing states {-a} and {b} (where {a,b>0}), we have {P(\tau_{-a}<\tau_{b})=\frac{b}{a+b}}, where {\tau_x} is the first time the walk hits {x}.
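For completeness, here is the standard one-line proof of this formula (a sketch; it uses the optional stopping theorem, which applies here because the stopped walk is bounded). Writing {\tau=\tau_{-a}\wedge\tau_{b}}, the simple random walk is a martingale, so

\displaystyle 0=E[X_{\tau}]=-a\,P(\tau_{-a}<\tau_{b})+b\,\big(1-P(\tau_{-a}<\tau_{b})\big),

and solving gives {P(\tau_{-a}<\tau_{b})=\frac{b}{a+b}}.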

Going back to the random walk on the {n}-cycle, let us calculate the probability that a given vertex {k} (with {1\leq k\leq n-1}) is the last new vertex. We can lift the walk on the {n}-cycle to a random walk on {\mathbb{Z}}, where each integer {x} corresponds to the vertex {x} mod {n}. The absorbing states are then {k-n} and {k}. For {k} to be the last new vertex on the {n}-cycle, the walk on {\mathbb{Z}} must hit both {k+1-n} and {k-1} before hitting {k} or {k-n}. Hence, the probability is

\displaystyle P_{0}(\tau_{k+1-n}<\tau_{k-1},\tau_{k-1}<\tau_{k},\tau_{k-1}<\tau_{k-n})+P_{0}(\tau_{k+1-n}>\tau_{k-1},\tau_{k+1-n}<\tau_{k},\tau_{k+1-n}<\tau_{k-n}).

Since the walk starts from {0} and moves by {\pm 1} steps, it must pass through {k-1} before reaching {k} and through {k+1-n} before reaching {k-n}; that is, {\tau_{k-1}<\tau_{k}} and {\tau_{k+1-n}<\tau_{k-n}} always hold. So the probability we are calculating is {P_{0}(\tau_{k+1-n}<\tau_{k-1},\tau_{k-1}<\tau_{k-n})+P_{0}(\tau_{k+1-n}>\tau_{k-1},\tau_{k+1-n}<\tau_{k})}. For the first summand, the strong Markov property at the hitting time {\tau_{k+1-n}} factors it as {P_{0}(\tau_{k+1-n}<\tau_{k-1})P_{k+1-n}(\tau_{k-1}<\tau_{k-n})}. Using the martingale result, this equals {\frac{k-1}{n-2}\cdot\frac{1}{n-1}}.

Similarly, for the second summand, the strong Markov property at the hitting time {\tau_{k-1}} factors it as {P_{0}(\tau_{k+1-n}>\tau_{k-1})P_{k-1}(\tau_{k+1-n}<\tau_{k})}. Using the martingale result, this equals {\frac{n-k-1}{n-2}\cdot\frac{1}{n-1}}. Summing the two probabilities, we get {P(k\; \text{is the last new vertex})=\frac{k-1}{(n-2)(n-1)}+\frac{n-k-1}{(n-2)(n-1)}=\frac{n-2}{(n-2)(n-1)}=\frac{1}{n-1}}. Since this does not depend on {k}, the last new vertex is uniformly distributed among the non-zero vertices.

A Beginner’s Confession

Hello World!

In this blog, I want to share with you the following:

(1) Cool mathematics and computer science I am currently learning in my PhD studies

(2) The interaction between randomness and algorithms

(3) Interesting problems and expository articles.