Provably efficient algorithms for decentralized optimization

dc.contributor.authorLiu, Changxin
dc.contributor.supervisorShi, Yang
dc.date.accessioned2021-08-31T22:46:49Z
dc.date.available2021-08-31T22:46:49Z
dc.date.copyright2021en_US
dc.date.issued2021-08-31
dc.degree.departmentDepartment of Mechanical Engineeringen_US
dc.degree.levelDoctor of Philosophy Ph.D.en_US
dc.description.abstractDecentralized multi-agent optimization has emerged as a powerful paradigm with broad applications in engineering design, including federated machine learning and the control of networked systems. In these setups, a group of agents is connected via a network with general topology. Subject to communication constraints, they aim to solve a global optimization problem that is characterized collectively by their individual interests. Of particular importance are the computation and communication efficiency of decentralized optimization algorithms. Due to the heterogeneity of local objective functions, fostering cooperation among the agents over a possibly time-varying network is challenging yet necessary to achieve fast convergence to the global optimum. Furthermore, real-world communication networks are subject to congestion and bandwidth limits. To alleviate this difficulty, it is highly desirable to design communication-efficient algorithms that proactively reduce the utilization of network resources. This dissertation tackles four concrete settings in decentralized optimization and develops a provably efficient algorithm for each. Chapter 1 presents an overview of decentralized optimization, introducing some preliminaries, problem settings, and state-of-the-art algorithms. Chapter 2 introduces the notation and reviews key concepts used throughout this dissertation. In Chapter 3, we investigate non-smooth cost-coupled decentralized optimization and a special instance, namely the dual form of constraint-coupled decentralized optimization. We develop a decentralized subgradient method with double averaging that guarantees last-iterate convergence, which is crucial for solving decentralized dual Lagrangian problems with a convergence rate guarantee.
Chapter 4 studies composite cost-coupled decentralized optimization in stochastic networks, for which existing algorithms do not guarantee linear convergence. We propose a new decentralized dual averaging (DDA) algorithm to solve this problem. Under a rather mild condition on the stochastic networks, we show that the proposed DDA attains an $\mathcal{O}(1/t)$ rate of convergence in the general case and a global linear rate of convergence if each local objective function is strongly convex. Chapter 5 tackles the smooth cost-coupled decentralized constrained optimization problem. We leverage the extrapolation technique and the average consensus protocol to develop an accelerated DDA algorithm. The rate of convergence is proved to be $\mathcal{O}\left( \frac{1}{t^2}+ \frac{1}{t(1-\beta)^2} \right)$, where $\beta$ denotes the second largest singular value of the mixing matrix. To proactively reduce the utilization of network resources, a communication-efficient decentralized primal-dual algorithm based on an event-triggered broadcasting strategy is developed in Chapter 6. In this algorithm, each agent locally decides whether to broadcast by comparing a pre-defined threshold with the deviation between its current iterate and the one it most recently broadcast. Provided that the threshold sequence is summable over time, we prove an $\mathcal{O}(1/t)$ rate of convergence for convex composite objectives. For strongly convex and smooth problems, linear convergence is guaranteed if the threshold sequence diminishes geometrically. Finally, Chapter 7 provides concluding remarks and directions for future research.en_US
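The event-triggered broadcasting rule described for Chapter 6 can be sketched in a few lines; the following is a minimal toy illustration, not the dissertation's algorithm. The function names, the decay constant in the threshold sequence, and the stand-in local update are all assumptions made for demonstration only.

```python
import numpy as np

def should_broadcast(x_current, x_last_broadcast, threshold):
    """Transmit only if the local iterate has drifted farther than the
    current threshold from the value the neighbors last received."""
    return np.linalg.norm(x_current - x_last_broadcast) > threshold

def threshold(t, c=1.0):
    # A summable threshold sequence (sum of c/t^2 over t is finite),
    # the condition the abstract states suffices for the O(1/t) rate.
    return c / t ** 2

# Toy single-agent simulation: a geometrically converging iterate.
x = np.ones(3)
x_last = x.copy()          # last value broadcast to neighbors
broadcasts = 0
for t in range(1, 200):
    x = 0.9 * x            # stand-in for the agent's local optimization update
    if should_broadcast(x, x_last, threshold(t)):
        x_last = x.copy()  # event triggered: broadcast and refresh the copy
        broadcasts += 1
```

Because the iterate converges geometrically while the threshold decays only polynomially, the trigger eventually stops firing, so only finitely many of the 199 steps generate network transmissions.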
dc.description.scholarlevelGraduateen_US
dc.identifier.bibliographicCitationChangxin Liu, Zirui Zhou, Jian Pei, Yong Zhang, and Yang Shi. Decentralized composite optimization in stochastic networks: A dual averaging approach with linear convergence. arXiv: 2106.14075, 2021.en_US
dc.identifier.bibliographicCitationChangxin Liu, Huiping Li, and Yang Shi. Resource-aware exact decentralized optimization using event-triggered broadcasting. IEEE Transactions on Automatic Control, 66(7): 2961-2974, 2020.en_US
dc.identifier.bibliographicCitationChangxin Liu, Yang Shi, and Huiping Li. Accelerated decentralized dual averaging. arXiv: 2007.05141, 2020.en_US
dc.identifier.bibliographicCitationChangxin Liu, Huiping Li, and Yang Shi. A unitary distributed subgradient method for multi-agent optimization with different coupling sources. Automatica, 114: 108834, 2020.en_US
dc.identifier.urihttp://hdl.handle.net/1828/13350
dc.languageEnglisheng
dc.language.isoenen_US
dc.rightsAvailable to the World Wide Weben_US
dc.subjectConvex optimizationen_US
dc.subjectDistributed optimizationen_US
dc.subjectMulti-agent systemsen_US
dc.subjectDual averaging algorithmsen_US
dc.subjectDistributed primal-dual algorithmsen_US
dc.titleProvably efficient algorithms for decentralized optimizationen_US
dc.typeThesisen_US

Files

Original bundle
Name: Liu_Changxin_PhD_2021.pdf
Size: 1.68 MB
Format: Adobe Portable Document Format