Randomized Smoothing Done Right

This is one of my first projects during my AI Residency at Microsoft Research.

The usefulness of safety-critical systems is upper-bounded by their robustness under adversarial conditions. Adversarial robustness has accordingly been studied extensively in recent years, yet most attacks and defenses are caught in an escalating arms race in which defenses are broken shortly after they are proposed. A notable exception is certified defenses, which give mathematical guarantees about the behavior of the models.

Randomized smoothing is one such defense that scales to deep neural networks. It aggregates a model's predictions under noise and provides a radius, under a certain norm, within which the aggregated prediction stays constant. Nonetheless, the relationship between the noise distribution used and the certified norm remains opaque and has only been studied empirically. We study this relationship rigorously and reveal a surprising connection to a century-old concept in theoretical physics: Wulff crystals.
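For readers new to the technique, below is a minimal Python sketch of the standard Gaussian instance of randomized smoothing (in the style of Cohen et al., 2019), not the generalized noise families studied in our paper. The toy classifier, the hyperparameters, and the use of a raw Monte Carlo estimate in place of a proper confidence lower bound are illustrative assumptions.

```python
# Minimal sketch of Gaussian randomized smoothing (Cohen et al.-style).
# Illustrative only: a real certification procedure uses a statistical
# lower confidence bound on the top-class probability, not the raw estimate.
import numpy as np
from scipy.stats import norm


def toy_classifier(x: np.ndarray) -> int:
    """Stand-in base classifier: class 0 if the first coordinate is negative, else 1."""
    return 0 if x[0] < 0.0 else 1


def smoothed_predict_and_certify(x, sigma=0.5, n=1000, seed=0):
    """Estimate the smoothed prediction at x and an approximate L2 radius.

    The smoothed classifier returns the class most likely under Gaussian
    perturbations x + N(0, sigma^2 I). Given a bound p_A on the top-class
    probability, the certified L2 radius is sigma * Phi^{-1}(p_A).
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[toy_classifier(noisy)] += 1
    top = int(votes.argmax())
    # Clamp the estimate away from 1 to keep the radius finite in this toy demo.
    p_hat = min(votes[top] / n, 1.0 - 1e-6)
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius


if __name__ == "__main__":
    prediction, radius = smoothed_predict_and_certify(np.array([-1.0, 0.3]))
    print(f"smoothed prediction: {prediction}, approximate L2 radius: {radius:.3f}")
```

The key point this sketch illustrates is that the certified radius depends on both the noise scale and the noise distribution; our paper makes that dependence precise for a much broader family of distributions and norms.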

Our theoretical results not only yield state-of-the-art performance on certified defenses, but also offer insights potentially useful for differential privacy. We also analyze the fundamental limitations of current randomized smoothing in high-dimensional settings.

The paper is titled “Randomized Smoothing of All Shapes and Sizes”.

Check it out on arXiv: https://arxiv.org/abs/2002.08118. Our code is available on GitHub.
