Upcoming Talk | Machine Learning and Data Science Doctoral Student Forum Series (Session 11)

Abstract: Minimax optimization problems arise ubiquitously in machine learning, with examples including matrix games, Generative Adversarial Networks, robust optimization, and reinforcement learning. If the objective function is convex-concave, Sion's minimax theorem guarantees the existence of a saddle point under certain regularity conditions; the primal-dual gap, max_y f(x̄, y) − min_x f(x, ȳ), therefore serves as a standard optimality criterion. In the nonconvex-concave case, stationarity of the primal function can be used as the optimality measure instead. Many algorithms have been proposed for solving minimax problems. First-order methods have been particularly popular because of the huge scale of many modern applications and their dimension-free convergence rates. Some of these methods are designed for problems with special structure, such as bilinear problems, discrete zero-sum games, and finite-sum problems. In this talk, we will give an overview of recent theoretical results on gradient complexity bounds of deterministic and stochastic first-order methods for smooth minimax optimization. In particular, we will also focus on the special case in which the problem has a finite-sum structure.
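As a concrete illustration of the kind of first-order method the abstract refers to (not an algorithm taken from the talk itself), the sketch below runs the classical extragradient method on the bilinear toy problem f(x, y) = x·y, whose unique saddle point is (0, 0). Plain gradient descent-ascent cycles on this objective, while the extragradient look-ahead step converges; the function name, step size, and iteration count are illustrative choices.

```python
import math

def extragradient_bilinear(x0, y0, eta=0.1, iters=2000):
    """Extragradient iterations for f(x, y) = x * y.

    For this bilinear objective, grad_x f = y and grad_y f = x,
    and the unique saddle point is (0, 0).
    """
    x, y = x0, y0
    for _ in range(iters):
        # Half step: look ahead along the current gradients.
        x_half = x - eta * y
        y_half = y + eta * x
        # Full step: update using the gradients at the look-ahead point.
        x, y = x - eta * y_half, y + eta * x_half
    return x, y

x, y = extragradient_bilinear(1.0, 1.0)
print(math.hypot(x, y))  # distance to the saddle point (0, 0); shrinks toward 0
```

On this problem the squared distance to the saddle point contracts by a factor of 1 − η² + η⁴ per iteration, which is why the look-ahead (extrapolation) step matters: without it, the iterates of simultaneous gradient descent-ascent rotate around the saddle point without approaching it.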