Wednesday, May 9, 2018 - 2:00pm
Klaus 3100
Title: Markov Chains and Emergent Behavior for Problems from Discrete Geometry

Advisor: Dr. Dana Randall, School of Computer Science

Committee:
Dr. Sebastian Pokutta, School of Industrial and Systems Engineering
Dr. Andrea Richa, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
Dr. Prasad Tetali, School of Mathematics
Dr. Eric Vigoda, School of Computer Science

Reader: Dr. Eric Vigoda, School of Computer Science
SUMMARY:
The problem of generating random samples from large, complex sets is widespread across the sciences, where such samples provide one way to begin to learn about the sets' typical properties. However, samples that are unexpectedly correlated or drawn from the wrong distribution can produce misleading conclusions. One way to generate random samples is with Markov chains, which are widely used but often applied without careful analysis of their mixing time, that is, how long they must run before they are guaranteed to produce good samples. We present new mixing time bounds for two sampling problems from discrete geometry: dyadic tilings, combinatorial structures with applications in machine learning and harmonic analysis, and 3-colorings of a grid, an instance of the celebrated antiferromagnetic Potts model from statistical physics. Both of these results required the development of new techniques.
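To make local-move sampling concrete, the sketch below runs simple single-site recoloring dynamics for proper 3-colorings of an n x n grid: each step picks a vertex and recolors it with a random color not used by its neighbors. This is a generic illustration only, not the specific chain analyzed in the thesis; the grid size and the number of steps are arbitrary choices, not a proven mixing time, and the connectivity and mixing of such chains are exactly what requires careful analysis.

```python
import random

def glauber_step(coloring, n, q=3):
    """One local move: pick a random vertex and recolor it with a uniformly
    random color that differs from all of its grid neighbors (if one exists)."""
    i, j = random.randrange(n), random.randrange(n)
    neighbors = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n]
    blocked = {coloring[v] for v in neighbors}
    allowed = [c for c in range(q) if c not in blocked]
    if allowed:
        coloring[(i, j)] = random.choice(allowed)

def sample_coloring(n, steps):
    # Start from a valid proper 3-coloring: color each vertex (i + j) mod 3.
    coloring = {(i, j): (i + j) % 3 for i in range(n) for j in range(n)}
    for _ in range(steps):
        glauber_step(coloring, n)
    return coloring

if __name__ == "__main__":
    sample = sample_coloring(n=8, steps=100_000)  # step count is illustrative
```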
In addition, we use Markov chains in a novel way to address research questions in programmable matter. Here, a main goal is to understand how simple computational elements can collectively accomplish complicated system-level goals. In an abstracted setting, we show that groups of particles executing our simple processes, based on Markov chains, can accomplish various tasks. These include compression, a behavior exhibited by natural distributed systems such as fire ants and honey bees, and shortcut bridging, where the particles build bridges that optimize the same global trade-off as certain bridge-building ant colonies.
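The toy sketch below conveys the flavor of such biased local moves: a random particle on a square lattice tries to step to an adjacent empty site, and the move is accepted with a Metropolis probability that favors configurations with more adjacent particle pairs, so a bias parameter lam > 1 drives the particles to gather. This is only a hypothetical stand-in for illustration; the actual algorithms in the thesis operate on a different lattice, preserve system connectivity, and come with rigorous guarantees that this sketch does not.

```python
import random

def compression_step(particles, lam):
    """One biased local move on a toy lattice gas: a random particle proposes a
    step to an adjacent empty site, accepted with probability min(1, lam**dE),
    where dE is the change in the number of adjacent particle pairs."""
    p = random.choice(tuple(particles))
    dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    q = (p[0] + dx, p[1] + dy)
    if q in particles:
        return  # target site occupied: reject the move
    others = particles - {p}

    def degree(site):
        # Number of occupied neighbors of `site`, excluding the moving particle.
        return sum((site[0] + a, site[1] + b) in others
                   for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    dE = degree(q) - degree(p)
    if random.random() < min(1.0, lam ** dE):
        particles.remove(p)
        particles.add(q)

# Example: particles spread along a line slowly gather when lam > 1.
particles = {(i, 0) for i in range(0, 40, 2)}
for _ in range(200_000):
    compression_step(particles, lam=4.0)
```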
Throughout, a key ingredient is the interplay between global properties of Markov chains, including but not limited to mixing time, and the local moves on which they depend, that is, Markov chain transitions that change only a small part of the configuration. We call the global behavior that arises out of these local moves and their probabilities emergent behavior. In addition to understanding the relationship between local moves and mixing times in order to give sampling guarantees, our work on programmable matter harnesses this interaction between local and emergent behavior in a novel way to develop distributed algorithms.