
Scaling Local Learning for Supervised and Self-supervised Learning

Patel, Adeetya (2023) Scaling Local Learning for Supervised and Self-supervised Learning. Masters thesis, Concordia University.

Text (application/pdf): Patel_MCompSc_S2023.pdf - Accepted Version, 3MB. Available under License Spectrum Terms of Access.

Abstract

Traditional neural network training methods optimize a monolithic objective function jointly for all components of a model. This limits the extent to which training can be parallelized. Local learning is an approach to model parallelism that removes the standard end-to-end learning setup and uses local objective functions to permit parallel learning among the components of a deep network. Recent work has demonstrated that variants of local learning can lead to efficient training of modern deep networks. However, in terms of how much computation can be distributed, these approaches are typically limited by the number of layers in a network. Hence, the first study explores how local learning can be applied at the level of splitting layers or modules into sub-components, adding a notion of width-wise modularity to the depth-wise modularity already associated with local learning. We investigate local-learning penalties that permit such models to be trained efficiently. Our experiments on various datasets demonstrate that introducing width-level modularity can lead to computational advantages over existing methods and opens new opportunities for improved model-parallel distributed training. The second study focuses on adapting existing local-learning frameworks to self-supervised learning tasks, specifically the SimCLR method. However, existing local-learning frameworks underperform in this setting because task-relevant information collapses in early layers. To address this issue, we propose modifying the local objective functions layerwise so that problem difficulty increases gradually with depth. We found that our method maintains performance similar to that of the end-to-end trained model while also increasing parallelization.
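To make the depth-wise local-learning setup described above concrete, the following is a minimal PyTorch sketch, not the thesis implementation: the block sizes, the auxiliary classification heads, and the training loop are illustrative assumptions. Each block is updated by its own local objective, and a detach at the block boundary stops end-to-end gradient flow, which is what allows blocks to be trained independently.

# Minimal illustrative sketch of depth-wise local learning (assumed setup,
# not the thesis code): every block carries an auxiliary head that supplies
# its local objective, and activations are detached between blocks so
# gradients never cross block boundaries.
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(),
        )
        # Auxiliary classifier providing the per-block (local) loss.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def forward(self, x):
        h = self.body(x)
        return h, self.aux_head(h)

blocks = nn.ModuleList([
    LocalBlock(3, 32, 10),
    LocalBlock(32, 64, 10),
    LocalBlock(64, 128, 10),
])
optimizers = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)       # dummy batch (hypothetical shapes)
y = torch.randint(0, 10, (8,))

h = x
for block, opt in zip(blocks, optimizers):
    h, logits = block(h)
    loss = criterion(logits, y)      # local objective for this block only
    opt.zero_grad()
    loss.backward()                  # gradient stays inside the block
    opt.step()
    h = h.detach()                   # no end-to-end gradient flow

Because each block's update depends only on its own (detached) input and local loss, the per-block updates can in principle be dispatched to separate workers; the thesis extends this idea by also splitting blocks width-wise and by adapting the local objectives for self-supervised (SimCLR-style) training.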

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type: Thesis (Masters)
Authors: Patel, Adeetya
Institution: Concordia University
Degree Name: M. Comp. Sc.
Program: Computer Science
Date: 7 February 2023
Thesis Supervisor(s): Belilovsky, Eugene
Keywords: Local learning, Model parallelism
ID Code: 991800
Deposited By: Adeetya Patel
Deposited On: 21 Jun 2023 14:42
Last Modified: 21 Jun 2023 14:42
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
