@article{azizian2019tight,
abstract = {We consider differentiable games: multi-objective minimization problems,
where the goal is to find a Nash equilibrium. The machine learning community
has recently started using extrapolation-based variants of the gradient method.
A prime example is the extragradient, which yields linear convergence in cases
like bilinear games, where the standard gradient method fails. The full
benefits of extrapolation-based methods are not known: i) there is no unified
analysis for a large class of games that includes both strongly monotone and
bilinear games; ii) it is not known whether the rate achieved by extragradient
can be improved, e.g., by considering multiple extrapolation steps. We answer
these questions through a new analysis of the extragradient's local and global
convergence properties. Our analysis covers the whole range of settings between
purely bilinear and strongly monotone games. It reveals that extragradient
converges via different mechanisms at these extremes; in between, it exploits
the most favorable mechanism for the given problem. We then present lower
bounds on the rate of convergence for a wide class of algorithms with any
number of extrapolations. Our bounds prove that the extragradient achieves the
optimal rate in this class, and that our upper bounds are tight. Our precise
characterization of the extragradient's convergence behavior in games shows
that, unlike in convex optimization, the extragradient method may be much
faster than the gradient method.},
added-at = {2019-09-04T15:56:00.000+0200},
author = {Azizian, Waïss and Mitliagkas, Ioannis and Lacoste-Julien, Simon and Gidel, Gauthier},
biburl = {https://www.bibsonomy.org/bibtex/21fbb061440907f89b3f0b2ab3800b8e0/kirk86},
description = {[1906.05945] A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games},
interhash = {9a0d966310f5629fa8a7fa92668381c3},
intrahash = {1fbb061440907f89b3f0b2ab3800b8e0},
keywords = {game-theory optimization},
note = {cite arxiv:1906.05945},
timestamp = {2019-09-04T15:56:00.000+0200},
title = {A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games},
url = {http://arxiv.org/abs/1906.05945},
year = 2019
}