Forgetting Where You Started: Convergence to Equilibrium in Markov Chains
Presenter: Xiaoran Fan
Faculty Sponsor: John Pike
School: Bridgewater State University
Research Area: Mathematics and Statistics
Session: Poster Session 4, 2:15 PM - 3:00 PM, Auditorium, A67
ABSTRACT
Markov chains provide a fundamental framework for modeling stochastic systems that arise in probability theory, statistical physics, computer science, and related fields. A central problem in their study is determining when a process “forgets” its initial state and converges to a stable long-term equilibrium. Understanding this convergence behavior is essential for analyzing both theoretical models and practical algorithms.
This project investigates the structural conditions that guarantee convergence for finite, time-homogeneous Markov chains. The analysis focuses on the roles of irreducibility, aperiodicity, and positive recurrence in ensuring the existence and uniqueness of a stationary distribution. Using coupling methods and spectral analysis of transition matrices, the study establishes convergence in total variation distance and provides quantitative bounds on the rate of mixing. In particular, the relationship between eigenvalues of the transition matrix and long-term behavior is examined in detail.
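The quantities above can be computed directly for small chains. The sketch below is a minimal illustration, assuming a hypothetical 3-state transition matrix (not one from the poster): it finds the stationary distribution as the left eigenvector for eigenvalue 1 and tracks the total variation distance to equilibrium as the chain runs.

```python
import numpy as np

# Hypothetical 3-state transition matrix for illustration only;
# it is irreducible and aperiodic, so a unique stationary
# distribution pi exists and P^n converges to the rank-one matrix 1 pi^T.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

def total_variation(mu, nu):
    """Total variation distance: half the L1 distance between distributions."""
    return 0.5 * np.abs(mu - nu).sum()

# Starting from a point mass at state 0, the distribution after n steps
# is mu @ P^n; its TV distance to pi shrinks as n grows.
mu = np.array([1.0, 0.0, 0.0])
for n in [1, 5, 20]:
    dist = total_variation(mu @ np.linalg.matrix_power(P, n), pi)
    print(f"n={n:2d}  TV distance to pi = {dist:.6f}")
```

The printed distances decay geometrically, which is the quantitative content of the mixing bounds described above.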
Random walks on finite graphs and groups serve as illustrative examples, showing how structural properties such as connectivity, symmetry, and the choice of generating set directly control the rate of convergence to equilibrium.
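A standard example of such a walk, assumed here for illustration rather than taken from the poster, is the lazy simple random walk on a cycle: the holding probability of 1/2 removes periodicity, and symmetry forces the uniform distribution to be stationary.

```python
import numpy as np

# Lazy simple random walk on the cycle with n vertices (hypothetical
# illustrative example). Laziness (hold with probability 1/2) makes the
# chain aperiodic; irreducibility follows from connectivity of the cycle.
n = 8
P = np.zeros((n, n))
for v in range(n):
    P[v, v] = 0.5                # hold with probability 1/2
    P[v, (v - 1) % n] = 0.25     # step left
    P[v, (v + 1) % n] = 0.25     # step right

# P is doubly stochastic, so the uniform distribution is stationary.
uniform = np.full(n, 1.0 / n)

mu = np.zeros(n)
mu[0] = 1.0                      # start at vertex 0
tv = 0.5 * np.abs(mu @ np.linalg.matrix_power(P, 100) - uniform).sum()
print(f"TV distance to uniform after 100 steps: {tv:.2e}")
```

Changing the graph (more edges, more symmetry) changes the spectral gap and hence the mixing rate, which is exactly the structural dependence the abstract describes.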
The results demonstrate that finite, irreducible, and aperiodic chains converge exponentially fast, with the second-largest eigenvalue modulus of the transition matrix determining the speed of convergence. This work clarifies the mathematical mechanisms underlying equilibrium formation and provides a unified framework for understanding stochastic dynamics across multiple disciplines.
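The spectral statement can be checked numerically. The sketch below, again using a hypothetical transition matrix rather than one from the poster, computes the second-largest eigenvalue modulus and verifies that the ratio of successive total variation distances approaches it.

```python
import numpy as np

# Hypothetical irreducible, aperiodic transition matrix; its eigenvalues
# turn out to be exactly {1, 0.6, 0.5}, so distance to equilibrium decays
# like 0.6^n, governed by the second-largest eigenvalue modulus.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

# Sort eigenvalue moduli; the largest is always 1 for a stochastic matrix.
mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = mods[1]
print(f"second-largest eigenvalue modulus: {lambda2:.4f}")
print(f"spectral gap: {1 - lambda2:.4f}")

# Stationary distribution from the left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

def tv(k, mu=np.array([1.0, 0.0, 0.0])):
    """TV distance to pi after k steps from a point mass at state 0."""
    return 0.5 * np.abs(mu @ np.linalg.matrix_power(P, k) - pi).sum()

# For large k, TV(k+1)/TV(k) should be close to |lambda_2|.
ratio = tv(40) / tv(39)
print(f"TV(40)/TV(39) = {ratio:.4f}  (close to |lambda_2|)")
```

This is the exponential convergence claimed in the results: the decay rate observed in simulation matches the spectral prediction.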