The Entropy Mixing of Reversible Markov Chains
This three-year project aims to solve a series of closely related problems on Markov chains, including the spectral gap, the log-Sobolev constant, the Lp-mixing time, and the entropy cutoff. The ultimate goal is to investigate the cutoff phenomenon arising in nature.
The Markov chain Monte Carlo method (MCMC for short) is a strategy for sampling from probability distributions. In practice, one is interested in the expectation of a function f defined on a set S with respect to a probability π on S. In most cases, it is impossible to determine this mean value by direct computation: the state space S can be very large, and the probability π may be available only up to proportion, with the normalizing constant not computable at all. In the MCMC method, a Markov chain Xn with stationary distribution π is simulated, and the value f(Xn) is taken as a representative of π(f). The qualitative analysis of Markov chains ensures that, under mild conditions, f(Xn) converges to π(f) (in any reasonable sense) as n tends to infinity. A natural question then arises: at what time T should the simulation be stopped so that the value f(XT) is close to its limit? Such a T is known as the mixing time of Xn, and determining it falls to the quantitative analysis of Markov chains. In this three-year project, we plan to solve a series of problems related to the mixing time T, including the spectral gap and the log-Sobolev constant, the sup-Lp-mixing time, the entropy cutoff, and the cutoff for random graphs. Each of these has a close connection to the others, and all are of special interest both in theory and in practice. In the long run, we plan to explore the cutoff phenomenon on physical models.
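The MCMC strategy described above can be sketched in a few lines. The following is a minimal illustration (not part of the project itself) using a Metropolis chain on a small finite state space; the function `unnorm_p`, the uniform proposal, and the target distribution are assumptions chosen for the example. Note that the acceptance ratio uses only ratios of π, so the normalizing constant indeed never needs to be computed.

```python
import math
import random

def metropolis_estimate(unnorm_p, f, states, n_steps, seed=0):
    """Estimate pi(f) by simulating a Metropolis chain whose stationary
    distribution is pi proportional to unnorm_p, using a symmetric proposal
    (uniform over the finite state space)."""
    rng = random.Random(seed)
    x = states[0]
    total = 0.0
    for _ in range(n_steps):
        y = rng.choice(states)  # symmetric proposal
        # Accept with probability min(1, pi(y)/pi(x)); the normalizing
        # constant of pi cancels in the ratio, so pi need only be known
        # up to proportion.
        if rng.random() < min(1.0, unnorm_p(y) / unnorm_p(x)):
            x = y
        total += f(x)
    return total / n_steps

# Hypothetical example: pi(k) proportional to exp(-k) on {0, ..., 9};
# the running average of f(X_n) with f(k) = k approximates pi(f) ~ 0.58.
states = list(range(10))
est = metropolis_estimate(lambda k: math.exp(-k), lambda k: k,
                          states, 200_000)
```

How long `n_steps` must be for `est` to be trustworthy is exactly the mixing-time question this project studies.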