Abstracts

Laszlo Babai, University of Chicago
Graph Isomorphism: The Emergence of the Johnson Graphs

One of the fundamental computational problems in the complexity class NP on Karp's 1973 list, the Graph Isomorphism problem asks whether two given graphs are isomorphic. While program packages exist that solve this problem remarkably efficiently in practice (McKay, Piperno, and others), for complexity theorists the problem has been notorious for its unresolved asymptotic worst-case complexity.

In this talk we outline a key combinatorial ingredient of the speaker's recent algorithm for the problem. A divide-and-conquer approach requires efficient canonical partitioning of graphs and higher-order relational structures. We shall indicate why Johnson graphs are the sole obstructions to this approach. This talk will be purely combinatorial; no familiarity with group theory will be required.
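
For concreteness, the Johnson graph J(n, k) has the k-element subsets of an n-element set as vertices, with two subsets adjacent exactly when their intersection has k - 1 elements. A minimal construction sketch (names and code are illustrative, not from the talk):

```python
from itertools import combinations
from collections import Counter

def johnson_graph(n, k):
    """Johnson graph J(n, k): vertices are the k-subsets of {0, ..., n-1};
    two subsets are adjacent iff their intersection has k - 1 elements."""
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(i, j) for i, j in combinations(range(len(vertices)), 2)
             if len(vertices[i] & vertices[j]) == k - 1]
    return vertices, edges

# J(5, 2) has C(5, 2) = 10 vertices and is k(n-k) = 6-regular.
vertices, edges = johnson_graph(5, 2)
```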

Laszlo Babai, University of Chicago
Group Theory and the Graph Isomorphism Problem

In this talk we outline the core group theoretic routine, the "Local Certificates" algorithm, underlying the new Graph Isomorphism test. The basic strategy follows Luks's group-theoretic divide-and-conquer approach (1980). We address the bottleneck of Luks's technique via local-global interaction based on a new group theoretic lemma. Undergraduate-level familiarity with the basic concepts of group theory (homomorphism, kernel, quotient group, permutation groups) will be assumed.

Nayantara Bhatnagar, University of Delaware
Subsequence Statistics in Random Mallows Permutations

The longest increasing subsequence (LIS) of a uniformly random permutation is a well-studied object. Vershik-Kerov and Logan-Shepp first showed that the typical length of the LIS is asymptotically 2 sqrt(n). This line of research culminated in the work of Baik-Deift-Johansson, who related the fluctuations of this length to the GUE Tracy-Widom distribution.
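
The 2 sqrt(n) behavior is easy to observe empirically: the LIS length of a permutation can be computed in O(n log n) time by patience sorting. A quick sketch (illustration only, not part of the talk):

```python
import bisect
import random

def lis_length(perm):
    """Length of the longest increasing subsequence via patience sorting:
    tails[i] is the smallest possible last element of an increasing
    subsequence of length i + 1.  Runs in O(n log n)."""
    tails = []
    for x in perm:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

rng = random.Random(0)
n = 10_000
perm = list(range(n))
rng.shuffle(perm)
# For a uniform permutation the typical LIS length is about 2*sqrt(n) = 200.
observed = lis_length(perm)
```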

We study the length of the LIS of random permutations drawn from the Mallows measure, introduced by Mallows in connection with ranking problems in statistics. We prove limit theorems for the LIS for different regimes of the parameter of the distribution. I will also describe some recent results on the longest common subsequence of independent Mallows permutations.

Relevant background for the talk will be introduced as needed.

Based on work with Ron Peled, Riddhi Basu and Ke Jin.

Karthik Chandrasekaran, University of Illinois at Urbana-Champaign
Global and Fixed-Terminal Cuts in Digraphs

The computational complexity of multicut-like problems may vary significantly depending on whether or not the terminals are fixed. In this talk, I will highlight this phenomenon in directed graphs. I will describe two variants of multicut-like problems in directed graphs, namely double cut and bicut. I will present approximability results for global and fixed-terminal node-weighted double cut. Next, I will exhibit a gap in the approximability of global and fixed-terminal edge-weighted bicut.

In relation to these investigations, I will also consider two problems on undirected graphs which are of independent interest. First, I will show NP-completeness and a tight inapproximability bound of 4/3 for the node-weighted 3-cut problem. Second, I will show an efficient algorithm to solve the minimum {s,t}-separating 3-cut problem.

Based on joint work with Kristof Berczi, Tamas Kiraly, Euiwoong Lee and Chao Xu.

William Cook, University of Waterloo
A Traveling Salesman Tour of ACO

Following Dantzig, Fulkerson, and Johnson, we show that a certain tour of 49,603 sites in the US is the shortest possible, measuring distance with point-to-point routes obtained from Google Maps. We highlight aspects of ACO that make the computation possible, including a cost-refinement technique to generate lower bounds on the tour length. This is joint work with Daniel Espinoza, Marcos Goycoolea, and Keld Helsgaun.
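
For context, the Dantzig-Fulkerson-Johnson lower bound on the tour length comes from the subtour-elimination linear program; a sketch of the standard formulation (not specific to this computation):

```latex
\min \sum_{e \in E} c_e x_e
\quad\text{subject to}\quad
\sum_{e \in \delta(v)} x_e = 2 \ \ (v \in V), \qquad
\sum_{e \in \delta(S)} x_e \ge 2 \ \ (\emptyset \ne S \subsetneq V), \qquad
0 \le x_e \le 1 .
```

Here \delta(S) denotes the set of edges with exactly one endpoint in S; the exponentially many subtour constraints are added as cutting planes only when violated.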

Daniel Dadush, Centrum Wiskunde and Informatica
Making Banaszczyk's Bound Constructive for the Komlos Problem

We first consider the problem of finding a low discrepancy coloring for sparse set systems where each element lies in at most t sets. We give an efficient algorithm that finds a coloring with discrepancy O((t log n)^{1/2}), matching the best known non-constructive bound for the problem due to Banaszczyk. The previous algorithms only achieved an O(t^{1/2} log n) bound. The result also extends to the more general Komlos setting, where each vector has norm at most 1, and gives an algorithmic O(log^{1/2} n) bound.
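
For concreteness, the discrepancy of a +/-1 coloring x of a set system is max over sets S of |sum_{i in S} x_i|; a minimal checker on a toy instance (illustration only, not the authors' algorithm):

```python
def discrepancy(sets, coloring):
    """Discrepancy of a +/-1 coloring: the largest absolute imbalance
    sum_{i in S} x_i over the sets S of the system."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

# Toy sparse system on 4 elements in which every element lies in t = 2 sets.
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
balanced = discrepancy(sets, [1, -1, 1, -1])   # alternating coloring
naive = discrepancy(sets, [1, 1, 1, 1])        # all-ones coloring
```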

Joint work with Nikhil Bansal and Shashwat Garg.

Cristina Fernandes, Instituto de Matematica e Estatistica
Improved Approximation Algorithms for Capacitated Fault-Tolerant k-Center

The k-center problem consists of, given a metric space V and a positive integer k, finding k elements of V, called centers, and an assignment from V to the centers that minimizes the maximum distance between an element of V and its assigned center. One of the most general variants of the problem is the capacitated \alpha-fault-tolerant k-center, where centers have a limit on the number of assigned elements and, if up to \alpha centers fail, there is a reassignment from V to non-faulty centers. Two variants are considered in the literature: the conservative one, where only elements assigned to faulty centers can be reassigned, and the non-conservative one, where every element can be reassigned. We present a new approach to tackle fault tolerance, by selecting and pre-opening a set of backup centers, and then solving the resulting residual instance. For the {0,L}-capacitated case, we give approximations with factor 6 for the non-conservative variant and 7 for the conservative one, improving on the previously best known factors of 9 and 17, respectively. Moreover, we consider the case with general capacities. Assuming \alpha is constant, our method leads to the first approximations for this case. In this talk, we will present our results for the conservative case.
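
For background, the basic (uncapacitated, non-fault-tolerant) k-center problem already admits a classic 2-approximation, Gonzalez's farthest-point greedy; a sketch for intuition (this is not the algorithm of the talk):

```python
def k_center_greedy(points, k, dist):
    """Gonzalez's farthest-point heuristic for basic k-center:
    repeatedly open the point farthest from the centers chosen so far.
    A classic 2-approximation for the uncapacitated problem."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist(p, c) for c in centers)))
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius

# Two well-separated clusters on the line; two centers cover both.
points = [0.0, 1.0, 10.0, 11.0]
centers, radius = k_center_greedy(points, 2, lambda a, b: abs(a - b))
```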

This is joint work with Samuel P. de Paula and Lehilton L. C. Pedrosa.

Cristobal Guzman, Centrum Wiskunde and Informatica
Statistical-Query Algorithms for Stochastic Convex Optimization

We study the complexity of solving stochastic convex programs by the restricted class of statistical-query (SQ) algorithms, where the algorithm has oracle access to the target distribution via estimates of the expectation for arbitrary (one-dimensional) query functions.

Our main technical contribution is deriving nearly optimal SQ algorithms for estimating the mean of a distribution over vectors in a d-dimensional space. These algorithms serve as a workhorse to provide efficient methods for solving stochastic convex programs in the standard Lipschitz and smooth settings, together with the less studied bounded-range setting.

Time permitting, I will show some further applications of our results for improved SQ versions of algorithms for learning halfspaces, differential privacy, and proving concrete lower bounds on convex relaxations for problems over distributions.

This is joint work with Vitaly Feldman and Santosh Vempala.

Penny Haxell, University of Waterloo
Stability for Matchings in Regular Tripartite Hypergraphs

We consider a hypergraph analogue of the basic fact that every regular bipartite graph has a perfect matching. A theorem of Aharoni implies that every regular tripartite hypergraph H with n vertices in each class has a matching of size at least n/2, and moreover this bound is tight. We prove a stability version of this statement, showing that if H has matching number close to n/2 then it is correspondingly close in structure to the extremal configuration.

Petr Hlineny, Masaryk University
A Short Proof of Euler-Poincaré Formula

We provide a short self-contained inductive proof of the famous Euler-Poincaré Formula for the numbers of faces of a convex polytope in every dimension. Our proof is elementary and does not use shellability of polytopes.
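
For reference, the formula in question states that a convex d-polytope with f_i faces of dimension i satisfies

```latex
\sum_{i=0}^{d-1} (-1)^i f_i \;=\; 1 - (-1)^d ,
```

which for d = 3 reduces to Euler's familiar f_0 - f_1 + f_2 = 2.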

Kamal Jain, Microsoft Research
Theoretical Analysis of Business Models

A business model is a revenue plan of a company. A good business model improves the customer experience and also provides a sustainable advantage to the company. Business models can make or break a company, but they are often discovered simply by trial and error. Theoretical analysis can provide insights behind them, and may help improve the batting average of these trial-and-error methods.

Diego Klabjan, Northwestern University
Optimization for Deep Learning

A (or perhaps the) reason for the recent buzz around artificial intelligence is deep learning. Large-scale neural network models, the proliferation of GPUs, and advances in optimization have been the main contributors to the success of deep learning. We will introduce deep learning with the focus on optimization. Algorithms and tricks-of-the-trade will be overviewed together with interesting recent advances in this space (batch normalization, optimization over activation functions, stochastic variance reduced gradient).
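
As one concrete example from the list above, the stochastic variance reduced gradient (SVRG) update keeps a periodic full gradient as a control variate. A minimal sketch on one-dimensional least squares (hyperparameters and setup are our own, for illustration only):

```python
import random

def svrg_least_squares(xs, ys, epochs=20, inner=100, lr=0.01, seed=0):
    """Fit y ~ w*x by mean squared error using SVRG: each inner step uses
    grad_i(w) - grad_i(w_snap) + full_grad(w_snap), where the snapshot's
    full gradient is recomputed once per epoch as a control variate."""
    rng = random.Random(seed)
    n = len(xs)
    grad = lambda w, i: 2.0 * (w * xs[i] - ys[i]) * xs[i]
    w = 0.0
    for _ in range(epochs):
        w_snap = w
        full = sum(grad(w_snap, i) for i in range(n)) / n
        for _ in range(inner):
            i = rng.randrange(n)
            w -= lr * (grad(w, i) - grad(w_snap, i) + full)
    return w

# Noiseless data with true slope 2; SVRG converges linearly here.
w = svrg_least_squares([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```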

Sara Krehbiel, University of Richmond
Quantitative Electroencephalography for Detecting Concussions

This project explores whether quantitative electroencephalography (qEEG) can be a useful tool for diagnosing mild traumatic brain injury (mTBI). Our data contains a large number of subjects with post-traumatic stress disorder (PTSD), so our analysis explicitly considers what role PTSD plays in our ability to detect mTBI. Our raw EEG data consists of the recorded electrical activity for each of 19 nodes placed on the scalp of each patient. The starting point of our analysis is a single [0,1] coherence value for each pair of electrodes, roughly indicating the similarities of the signals at each electrode. Our analysis treats the 19 electrodes as nodes on a complete graph and the 171 coherence values as edge weights. With the goal of identifying which patients have mTBI, we consider graph measures on each patient's complete weighted graph and also on unweighted graphs constructed by removing edges for which coherence is below a certain threshold. This talk will discuss challenges in identifying a simultaneously informative subset of our graph measures, the impacts of different thresholding strategies, and the relationship between PTSD and mTBI.
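
A hedged sketch of the thresholding step described above, on a tiny instance (function names and the sample values are our own; the real data has 19 electrodes and 171 pairs):

```python
def threshold_graph(coherence, tau):
    """coherence: dict mapping electrode pairs (i, j), i < j, to a
    coherence value in [0, 1].  Keep an unweighted edge exactly when
    the coherence is at least the threshold tau."""
    return sorted(e for e, w in coherence.items() if w >= tau)

def degree_sequence(edges, n):
    """Degree of each of the n electrodes in the thresholded graph."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

# Three electrodes, three pairwise coherence values, threshold 0.5.
coh = {(0, 1): 0.9, (0, 2): 0.5, (1, 2): 0.3}
edges = threshold_graph(coh, 0.5)
```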

Chun-Hung Liu, Princeton University
The Erdos-Posa Property

Fix a collection F of graphs. What is the maximum number of disjoint subgraphs of a given graph, each isomorphic to a member of F? And what is the minimum size of a set of vertices touching all such subgraphs? The former is called the packing problem and the latter is called the covering problem. These two problems can be formulated as integer programming problems, and they are the dual of each other. The optimal value of the covering problem is obviously lower bounded by the optimal value of the packing problem, but when can the former be upper bounded in terms of the latter?
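
The two dual integer programs can be sketched as follows (our notation: \nu for the packing value, \tau for the covering value, with H ranging over the subgraphs of the given graph isomorphic to members of F):

```latex
\nu = \max\Big\{ \sum_H y_H \;:\; \sum_{H \ni v} y_H \le 1 \ \forall v,\ \ y_H \in \{0,1\} \Big\},
\qquad
\tau = \min\Big\{ \sum_v x_v \;:\; \sum_{v \in H} x_v \ge 1 \ \forall H,\ \ x_v \in \{0,1\} \Big\} .
```

Relaxing the integrality constraints yields a pair of dual linear programs, which immediately gives \nu \le \tau.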

We say that a collection F of graphs has the Erdos-Posa property if there exists a function f such that every graph either contains k disjoint subgraphs, each isomorphic to a member of F, or contains a set of at most f(k) vertices touching all such subgraphs. Robertson and Seymour proved that the collection of graphs with an H minor has the Erdos-Posa property if and only if H is a planar graph, and they left open the same question with respect to topological containment. We will present an answer to this question in this talk. That is, we will provide a complete (yet complicated) characterization of the graphs H for which the collection of graphs topologically containing H has the Erdos-Posa property, and we will show that a simple characterization is unlikely to exist unless P=NP. We will also discuss how to relax the problem to obtain a clean characterization, and consider the problem with respect to weak immersions. Part of this talk is based on joint work with Luke Postle and Paul Wollan.

Adam Marcus, Princeton
Two Proofs of the Existence of Ramanujan Graphs

This talk will review two results showing the existence of Ramanujan graphs. While both proofs employ the method of interlacing polynomials, they are thematically quite different. The goal will be to highlight some of the commonalities and differences. This represents joint work with Dan Spielman and Nikhil Srivastava.

Aranyak Mehta, Google Research
Online Matching and Ad Allocation

In the classic online bipartite matching problem, we are given the vertices of one side of a bipartite graph, and have the vertices on the other side arriving online, one at a time, to be matched to a neighbor irrevocably upon arrival. The goal is to maximize the size of the matching thus constructed. This problem, and its generalization to the problem of online allocations, has a rich structure and leads to new algorithmic techniques, and also turns out to have an important application in the area of Internet Advertising. I will provide a brief survey of online matching algorithms and models, and describe the practical impact on Internet Ad Allocation.
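
A hedged sketch of the simplest algorithm in this model, greedy matching, which is 1/2-competitive (the optimal randomized RANKING algorithm of Karp-Vazirani-Vazirani achieves the 1 - 1/e bound; code and names below are illustrative only):

```python
def online_greedy_matching(offline, arrivals):
    """offline: the left vertices, known in advance.  arrivals: list of
    (vertex, neighbors) pairs revealed one at a time; each arriving vertex
    is matched irrevocably to an arbitrary free neighbor, if any."""
    free = set(offline)
    matching = {}
    for v, neighbors in arrivals:
        for u in neighbors:
            if u in free:
                free.remove(u)
                matching[v] = u
                break
    return matching

# Greedy can lose half the matching: matching 'x' to 'a' strands 'y',
# although the assignment x-b, y-a would have been perfect.
m = online_greedy_matching({'a', 'b'}, [('x', ['a', 'b']), ('y', ['a'])])
```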

Luke Postle, University of Waterloo
On the List Coloring Version of Reed's Conjecture

In 1998, Reed conjectured that the chromatic number of a graph is at most halfway between its trivial lower bound, the clique number, and its trivial upper bound, the maximum degree plus one. Reed also proved that the chromatic number is at most some convex combination of the two bounds. In 2012, King and Reed gave a short proof of this fact. Last year, Bonamy, Perrett and I proved that a fraction of 1/26 away from the upper bound holds for large enough maximum degree. In this talk, we show, using new techniques, that the list-coloring versions of these results hold, namely that there is such a convex combination for which the statement holds for the list chromatic number. Furthermore, we show that for large enough maximum degree, a fraction of 1/13 suffices for the list chromatic number, improving also on the bound for the ordinary chromatic number. This is joint work with Michelle Delcourt.
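
In symbols, Reed's conjecture (stated here for reference) reads

```latex
\chi(G) \;\le\; \left\lceil \frac{\Delta(G) + 1 + \omega(G)}{2} \right\rceil ,
```

with \Delta the maximum degree and \omega the clique number; the results above replace \chi by the list chromatic number in convex combinations of \Delta(G) + 1 and \omega(G).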

Amin Saberi, Stanford University
Approximating the Traveling Salesman Problem Using Algebraic Techniques

We will give an overview of the recent progress in our understanding of traveling salesman problems as well as an introduction to the underlying algebraic techniques. We will cover the following:

  • Thin trees, maximum entropy convex programs, spectral thinness and effective resistances
  • Strongly Rayleigh distributions, real stable polynomials, and interlacing
  • Effective resistance reducing convex programs.

Leonard J. Schulman, California Institute of Technology
Analysis of a Classical Matrix Preconditioning Algorithm

We study a classical iterative algorithm for balancing matrices via scaling transformations. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. We prove the first such bound: a natural L_\infty-norm variant of the algorithm converges, on n by n matrices, in O(n^3 log(n)) elementary scaling transformations. This is tight up to the log n factor. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed.
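
One elementary scaling transformation in the L_\infty variant is easy to state: pick an index i and scale row i and column i by reciprocal factors so that their largest off-diagonal entries match in absolute value. A minimal sketch (for illustration only; names and the update order are our own choices):

```python
import math

def balance_step(A, i):
    """One balancing operation at index i: scale row i by 1/f and column i
    by f (a similarity transformation, so eigenvalues are preserved), with
    f chosen so the largest off-diagonal absolute values of row i and
    column i become equal -- the L_infinity flavor of Osborne's scheme."""
    n = len(A)
    r = max(abs(A[i][j]) for j in range(n) if j != i)
    c = max(abs(A[j][i]) for j in range(n) if j != i)
    if r == 0 or c == 0:
        return
    f = math.sqrt(r / c)
    for j in range(n):
        if j != i:
            A[i][j] /= f
            A[j][i] *= f

A = [[0.0, 4.0], [1.0, 0.0]]
balance_step(A, 0)   # r = 4, c = 1, f = 2: both off-diagonals become 2
```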

Joint work with Alistair Sinclair.

Mohit Singh, Georgia Institute of Technology
Nash Social Welfare, Permanents and Inequalities on Stable Polynomials

Given a collection of items and agents, the Nash social welfare problem aims to find a fair assignment of these items to agents. The Nash social welfare objective is to maximize the geometric mean of the valuations of the agents in the assignment. In this talk, we will give a new mathematical programming relaxation for the problem and give an approximation algorithm based on a simple randomized procedure. To analyze the algorithm, we find new connections between the Nash social welfare problem and the problem of computing the permanent of a matrix. A crucial ingredient in this connection is a set of new inequalities on stable polynomials that generalize the work of Gurvits.
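
The objective being maximized, for an assignment of items to agents with additive valuations, is the geometric mean of the agents' total values. A brute-force evaluator for tiny instances (a sanity check only; the problem itself is NP-hard):

```python
import math
from itertools import product

def max_nash_welfare(valuations, num_items):
    """Brute force over all assignments of items to agents, maximizing
    the geometric mean of the agents' additive valuations.
    valuations[a][i] = value of item i to agent a.  Exponential time."""
    n = len(valuations)
    best = 0.0
    for assignment in product(range(n), repeat=num_items):
        totals = [0.0] * n
        for item, agent in enumerate(assignment):
            totals[agent] += valuations[agent][item]
        best = max(best, math.prod(totals) ** (1.0 / n))
    return best

# Each agent gets the item it values more; the geometric mean is 2.
best = max_nash_welfare([[1.0, 2.0], [2.0, 1.0]], 2)
```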

Joint work with Nima Anari, Shayan Oveis-Gharan and Amin Saberi.

Lutz Warnke, Georgia Institute of Technology
On Random Graphs Evolving Over Time

Random graphs are the basic mathematical models for large-scale disordered networks in many different fields (e.g., physics, biology, sociology).

Since many real world networks evolve over time, it is natural to study random graph processes which arise by adding edges (or vertices) step-by-step in some random way. For example, one hope is that this dynamic point of view might give some further insight into why certain phenomena happen, or how certain structures arise (e.g., how a Hamiltonian cycle or a linear-size `giant' component gradually emerges over time).
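
The emergence of the giant component in the classical Erdos-Renyi process can be watched directly with a union-find structure; a small simulation sketch (parameters and names are our own):

```python
import random

def largest_component_history(n, steps, seed=0):
    """Add `steps` uniformly random edges, one at a time, to n isolated
    vertices, tracking the largest component size with union-find."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    largest = 1
    history = []
    for _ in range(steps):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:                           # union by size
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
            largest = max(largest, size[a])
        history.append(largest)
    return history

# With n = 10^4 vertices: after 1000 edges all components are still tiny;
# after n edges (average degree 2) a linear-size giant has emerged.
h = largest_component_history(10_000, 10_000)
```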

In this talk we shall survey some methods for analyzing random graph processes. Much in the spirit of the ACO theme, these not only bring together tools and techniques from somewhat different areas (combinatorial enumeration, differential equations, discrete martingales, branching processes, etc.), but also have connections to the analysis of randomized algorithms.

Paul Wollan, University of Rome "La Sapienza"
Explicit Bounds for the Graph Minor Structure Theorem

Robertson and Seymour proved that every graph excluding a fixed H as a minor has a tree decomposition in which each bag of the decomposition nearly embeds in a surface of bounded genus. The technical definition of "nearly embedding in a surface" includes a parameter k which measures how close the near-embedding is to a genuine embedding with no crossing edges (when k = 0, a k-near embedding is the same as an embedding without crossing edges). The theorem shows that such a k-near embedding exists with the parameter k bounded by a function of H.

The original proof of Robertson and Seymour was existential and gave no explicit bound on k in terms of H. Recent work by Geelen, Huynh, and Richter gives an explicit bound for the existential step in the Robertson-Seymour proof, so that, taken together, the proof yields in principle a constructive bound on k. However, the length and technicality of the original proof meant that no explicit bound on k had ever been computed, although it had been estimated to be an iterated tower function.

We present a new and simpler proof of the graph minor structure theorem. A consequence of the shorter proof is that it is now possible to give an explicit bound on all the parameters in the theorem.

This is joint work with Ken Kawarabayashi and Robin Thomas.

Carl Yerger, Davidson College
Class 0 and Complexity Bounds for Graph Pebbling

Given a configuration of pebbles on the vertices of a connected graph G, a pebbling move removes two pebbles from some vertex and places one pebble on an adjacent vertex. The pebbling number of a graph G is the smallest integer k such that for each vertex v and each configuration of k pebbles on G, there exists a sequence of pebbling moves that places at least one pebble on v. If the pebbling number of G equals the number of vertices of G, we say the graph is Class 0. In this talk, we investigate and improve on bounds related to the minimum number of edges in a Class 0 graph via a discharging approach. We also discuss some recent results related to the complexity of the Class 0 decision problem for specific classes of graphs. This is joint work with Dan Cranston, Virginia Commonwealth University, Luke Postle, University of Waterloo, and Chenxiao Xue, Davidson College.
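
The pebbling-move mechanics defined above can be checked by brute force on small graphs; a hedged sketch deciding whether some move sequence places a pebble on a target vertex (exponential time, illustration only):

```python
def can_place_pebble(adj, config, target):
    """Search over pebble configurations: a move removes two pebbles from
    a vertex and places one on a neighbor.  Decides whether some sequence
    of moves puts a pebble on target.  Tiny graphs only."""
    seen = set()

    def search(cfg):
        if cfg[target] >= 1:
            return True
        if cfg in seen:
            return False
        seen.add(cfg)
        for u, count in enumerate(cfg):
            if count >= 2:
                for v in adj[u]:
                    nxt = list(cfg)
                    nxt[u] -= 2
                    nxt[v] += 1
                    if search(tuple(nxt)):
                        return True
        return False

    return search(tuple(config))

# Path on 3 vertices has pebbling number 4: four pebbles on one end always
# reach the far end, while three need not.
path = {0: [1], 1: [0, 2], 2: [1]}
```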

Stephen Young, University of Louisville
Combinatorial Problems in Topological Quantum Computing

One possible path forward for practical quantum computation is the development of a topological quantum phase. In principle, such a system will be more resistant to decoherence because of its inherently topological nature. We highlight progress on two combinatorial questions which arise naturally in the study of these systems. Joint work with Paul Bruillard and Kathleen Nowak.