The Entropy Method

The entropy method is a beautiful technique for proving existence results in extremal combinatorics. This blog post discusses the entropy method through the lens of minimizing combinatorial discrepancy - the playing field where this technique was first introduced.

Introduction

In order to motivate the entropy method, let’s first discuss the notion of combinatorial discrepancy. Consider a set system $(X, \mathcal{S})$, where $|X| = n$ and $|\mathcal{S}| = m$. The combinatorial discrepancy of this set system measures how well the points of $X$ can be red-blue colored so that each set is approximately balanced in both colors. In other words, the goal is to find a labelling $\sigma : X \to \{-1, +1\}$ such that \begin{equation} \max_{S \in \mathcal{S}} | \sum_{x \in S} \sigma(x) | \end{equation} is minimized. This is a classically studied problem in quasi-randomness, and several strong deterrents exist against polynomial-time algorithms that find minimum-discrepancy colorings of set systems. Therefore, a natural question to ask is: does a matching upper bound hold, i.e. is there an algorithm that computes an $O(\sqrt{n})$-discrepancy coloring for every set system, at least when $m = O(n)$? The answer to this question was unknown until recently, when it was resolved in a paper due to Bansal et al. []. However, already in 1985, Spencer devised the entropy method to show the corresponding existential result: every set system with $m = O(n)$ admits a coloring with discrepancy $O(\sqrt{n})$.
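As a concrete illustration of the definition (not part of Spencer's proof), here is a minimal Python sketch that evaluates the discrepancy of a given coloring of a small set system; the set system, the coloring, and all names are made up for the example.

```python
import random

# A toy set system: X = {0, ..., 7}, S = three subsets of X.
X = list(range(8))
S = [{0, 1, 2, 3}, {2, 3, 4, 5}, {0, 2, 4, 6}]


def discrepancy(coloring, sets):
    """Max over sets of |sum of labels|, i.e. max_{S} |sum_{x in S} sigma(x)|."""
    return max(abs(sum(coloring[x] for x in s)) for s in sets)


# A uniformly random +/-1 labelling already achieves O(sqrt(n log m))
# discrepancy with high probability (a standard Chernoff-bound argument).
sigma = {x: random.choice([-1, 1]) for x in X}
print(discrepancy(sigma, S))
```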

Outline

Spencer’s proof relies on showing the following result:

Given a set system $(X, \mathcal{S})$ with $|X| = n$ and $|\mathcal{S}| = m$, there exists a partial coloring $\sigma : X \to \{-1, 0, +1\}$ of the set system such that at least half of the vertices are colored $\pm 1$, and such that the maximum discrepancy of any of the sets over this partial coloring satisfies, for some universal constant $C$, \begin{equation} \max_{S \in \mathcal{S}} | \sum_{x \in S} \sigma(x) | \le C \sqrt{n \log m} \end{equation} The full coloring is then obtained by repeatedly applying this partial coloring lemma to the points that remain uncolored; a sketch of this recursion is given below.
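To make the outline concrete, here is a minimal Python sketch of the recursion that turns the partial coloring lemma into a full coloring. The function `partial_coloring` is a hypothetical placeholder for a routine realizing the lemma (Spencer's argument only shows that such a partial coloring exists); its name and signature are assumptions made for illustration, not part of Spencer's proof.

```python
from typing import Callable, Dict, FrozenSet, List, Set

Coloring = Dict[int, int]  # point -> {-1, 0, +1}; 0 means "not yet colored"


def full_coloring(
    points: Set[int],
    sets: List[FrozenSet[int]],
    partial_coloring: Callable[[Set[int], List[FrozenSet[int]]], Coloring],
) -> Coloring:
    """Turn a partial-coloring routine into a full +/-1 coloring.

    Each round, the (hypothetical) `partial_coloring` oracle colors at least
    half of the remaining points with +/-1 and leaves the rest 0; we fix the
    colored points and recurse on the uncolored ones.  The discrepancy of the
    final coloring is at most the sum of the per-round discrepancies.
    """
    coloring: Coloring = {x: 0 for x in points}
    remaining = set(points)
    while remaining:
        # Restrict every set to the points that are still uncolored.
        restricted = [s & frozenset(remaining) for s in sets]
        sigma = partial_coloring(remaining, restricted)
        newly_colored = {x for x, c in sigma.items() if c != 0}
        if not newly_colored:  # safety guard; the lemma guarantees progress
            break
        for x in newly_colored:
            coloring[x] = sigma[x]
        remaining -= newly_colored
    return coloring
```

Since the number of uncolored points at least halves in every round, the recursion terminates after $O(\log n)$ rounds, and the discrepancy of the final coloring is bounded by the sum of the bounds from the individual rounds.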

The entropy method

Written on June 10, 2019