The paper studies a general class of distributed dictionary learning (DL) problems in which the learning task is distributed over a multi-agent network with (possibly) time-varying, non-symmetric connectivity. This setting is relevant, for instance, when massive amounts of data are not collocated but are collected and stored at different spatial locations. We develop a unified distributed algorithmic framework for this class of non-convex problems and establish its asymptotic convergence. The new method hinges on Successive Convex Approximation (SCA) techniques, combined with a novel broadcast protocol that disseminates information and distributes the computation over the network; the protocol requires neither doubly stochastic consensus matrices nor knowledge of the graph sequence to be implemented. To the best of our knowledge, this is the first distributed scheme with provable convergence for DL (and, more generally, bi-convex) problems over (time-varying) digraphs.
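The broadcast protocol's key property, averaging over a digraph without doubly stochastic consensus matrices, is in the spirit of push-sum consensus, where each node only needs its own out-degree. A minimal push-sum sketch under that assumption (illustrative only, not the paper's D2L algorithm; all names are hypothetical):

```python
import numpy as np

def push_sum_average(values, adjacency, num_iters=200):
    """Average consensus via push-sum over a (possibly non-symmetric) digraph.

    adjacency[i][j] == 1 means there is an edge j -> i.  The mixing matrix
    only needs to be column-stochastic, so each node splits its mass among
    itself and its out-neighbors -- no doubly stochastic weights and no
    knowledge of the global graph are required.
    """
    n = len(values)
    x = np.asarray(values, dtype=float)  # value estimates
    w = np.ones(n)                       # push-sum weights
    A = np.asarray(adjacency, dtype=float) + np.eye(n)  # add self-loops
    A = A / A.sum(axis=0, keepdims=True)                # column-stochastic
    for _ in range(num_iters):
        x = A @ x
        w = A @ w
    # On a strongly connected digraph, each ratio converges to mean(values).
    return x / w

# Non-symmetric, strongly connected digraph on 3 nodes:
# edges 0 -> 1, 1 -> 2, 2 -> 0, and 0 -> 2 (node 0 has out-degree 2).
n = 3
adj = [[0] * n for _ in range(n)]
for src, dst in [(0, 1), (1, 2), (2, 0), (0, 2)]:
    adj[dst][src] = 1

print(push_sum_average([1.0, 2.0, 3.0], adj))
```

All nodes' ratios converge to the network-wide average (here 2.0) even though the mixing matrix is only column-stochastic, which is what makes this style of protocol implementable over directed, time-varying topologies.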
2017, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Pages 4084-4088
D2L: Decentralized dictionary learning over dynamic networks (04b Conference paper in proceedings)
Daneshmand A., Sun Y., Scutari G., Facchinei F.
Research group: Continuous Optimization