Academic Seminar
Title: A 2-phase augmented Lagrangian method for solving large scale convex matrix optimization
Speaker: Prof. Defeng Sun (Department of Mathematics, National University of Singapore)
On-campus contact: Liwei Zhang; Tel: 84708351-8307
Time: 8:30-11:30, Tuesday, October 14, 2014
Venue: Room 410, Research and Education Building
Abstract: We present a majorized semismooth Newton-CG augmented Lagrangian method, called SDPNAL+, for semidefinite programming (SDP) with partial or full nonnegative constraints on the matrix variable. SDPNAL+
is a much enhanced version of SDPNAL introduced by Zhao, Sun and Toh [SIAM Journal on Optimization, 20 (2010), pp. 1737-1765] for solving generic SDPs. SDPNAL works very efficiently for nondegenerate SDPs but may encounter numerical difficulty for degenerate ones. Here we tackle this numerical difficulty by employing a majorized semismooth Newton-CG augmented Lagrangian method coupled with a convergent 3-block alternating direction method of multipliers introduced recently by Sun, Toh and Yang [arXiv preprint arXiv:1404.5378, (2014)]. Numerical results for various large scale SDPs with or without nonnegative constraints show that the proposed method is not only fast but also robust in obtaining accurate solutions. It outperforms, by a significant margin, two other competitive publicly available codes based on first-order methods: (1) an alternating direction method of multipliers based solver called SDPAD by Wen, Goldfarb and Yin [Mathematical Programming Computation, 2 (2010), pp. 203-230] and (2) a two-easy-block-decomposition hybrid proximal extragradient method called 2EBD-HPE by Monteiro, Ortiz and Svaiter [Mathematical Programming Computation, (2013), pp. 1-48]. In contrast to these two codes, we are able to solve all the 95 difficult SDP problems arising from the relaxations of quadratic assignment problems tested in SDPNAL to an accuracy of 10^{-6}
efficiently, while SDPAD and 2EBD-HPE successfully solve 30 and 16 problems, respectively. In addition, SDPNAL+
appears to be the only viable method currently available to solve large scale SDPs arising from rank-1 tensor approximation problems constructed by Nie and Wang [arXiv preprint arXiv:1308.6562, (2013)]. The largest rank-1 tensor approximation problem we solved (in about 14.5 hours) is
nonsym(21,4), whose resulting SDP problem has matrix dimension 9,261 and 12,326,390 equality constraints. [This talk is based on joint work with Liuqin Yang and Kim-Chuan Toh at the National University of Singapore.]
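For context, the sketch below is not taken from the talk: it only writes down the generic SDP model with entrywise nonnegative constraints and one textbook augmented Lagrangian iteration for it. The linear map \mathcal{A}, the polyhedral set \mathcal{N}, the penalty parameter \sigma and the multiplier y are generic notation introduced here for illustration; the actual SDPNAL+ phases apply a convergent 3-block ADMM and a majorized semismooth Newton-CG augmented Lagrangian method to a refined, dual-based formulation.

\[
\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
\quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X \in \mathbb{S}^n_+, \quad X \in \mathcal{N},
\]
where \(\mathcal{N}\) encodes the partial or full entrywise nonnegativity of \(X\). Given \(\sigma > 0\) and a multiplier estimate \(y^k\), one outer iteration of a standard augmented Lagrangian method reads
\[
X^{k+1} \approx \arg\min_{X \in \mathbb{S}^n_+ \cap \mathcal{N}}
\Big\{ \langle C, X \rangle - \langle y^k, \mathcal{A}(X) - b \rangle
+ \tfrac{\sigma}{2}\,\| \mathcal{A}(X) - b \|^2 \Big\},
\qquad
y^{k+1} = y^k - \sigma \big( \mathcal{A}(X^{k+1}) - b \big).
\]

The inner subproblem only needs to be solved approximately; as the abstract indicates, the talk's 2-phase scheme couples ADMM-type iterations with semismooth Newton-CG augmented Lagrangian steps of this general flavor.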
Speaker bio: Defeng Sun is an internationally renowned expert in optimization and a professor in the Department of Mathematics at the National University of Singapore. He received his bachelor's and master's degrees from Nanjing University and his PhD from the Institute of Applied Mathematics, Chinese Academy of Sciences. He serves on the editorial boards of the top international mathematics journals Mathematical Programming and SIAM Journal on Optimization. His research focuses on continuous optimization and financial optimization, spanning fundamental theory, algorithms, and applications. He opened up the new field of matrix optimization and has made major breakthroughs in the theory and algorithms of large-scale matrix optimization.
October 11, 2014