
Greedy sparse recovery algorithms: from weighted generalizations to deep unrolling


Mohammad-Taheri, Sina (2025) Greedy sparse recovery algorithms: from weighted generalizations to deep unrolling. PhD thesis, Concordia University.

MohammadTaheri_PhD_F2025.pdf - Accepted Version (PDF, 25MB). Available under License Spectrum Terms of Access.

Abstract

Sparse recovery is a clear manifestation of Occam’s razor in the mathematics of data thanks to its ability to favor the simplest explanation. A prominent example is the reconstruction of a sparse
vector (i.e., one with mostly zero or negligible coefficients) from linear measurements, possibly corrupted by noise. This problem arises in numerous applications across various domains, including medical imaging and high-dimensional function approximation. Greedy sparse recovery algorithms approach this problem by recursively solving local optimization problems and have proven to be efficient alternatives to convex methods.
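For concreteness, the noisy sparse recovery problem described above is commonly stated as follows; this standard formulation and its notation are assumed here for illustration, not quoted from the thesis:

\[
\min_{z \in \mathbb{R}^N} \|z\|_0 \quad \text{subject to} \quad \|Az - y\|_2 \le \eta, \qquad \text{where } y = Ax + e, \ \|e\|_2 \le \eta,
\]

with A the m-by-N measurement matrix (m much smaller than N), x the sparse vector to be recovered, e the noise, and \|z\|_0 the number of nonzero entries of z.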
In this dissertation, we first develop weighted generalizations of Orthogonal Matching Pursuit (OMP), one of the most popular greedy sparse recovery algorithms, based on two distinct weighting strategies aimed at incorporating a priori knowledge about the signal structure. These generalizations feature explicit and theoretically justified greedy index selection rules. The first strategy establishes a novel connection between greedy sparse recovery and convex relaxation methods,
particularly the LASSO family, resulting in new OMP-based algorithms suited to a wide range of loss functions. Numerical results show that these greedy variants inherit key traits from their ancestor convex decoders. For the second strategy, we provide optimal recovery guarantees for rescaling-based weighted OMP under the weighted Restricted Isometry Property (wRIP), extending the classical RIP-based analysis to the weighted setting.
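As an illustration of the rescaling-based weighting idea, here is a minimal sketch of a weighted OMP iteration in Python; the function name, the exact selection rule, and the stopping criterion are assumptions made for exposition and may differ from the algorithms analyzed in the thesis.

import numpy as np

def weighted_omp(A, y, w, max_iter, tol=1e-6):
    # Rescaling-based weighted OMP (illustrative sketch): the correlation of
    # each column with the residual is divided by its weight w[j], so larger
    # weights make an index less likely to be selected.
    m, N = A.shape
    support, x = [], np.zeros(N)
    residual = y.copy()
    for _ in range(max_iter):
        # Greedy index selection with weighted correlations.
        j = int(np.argmax(np.abs(A.T @ residual) / w))
        if j not in support:
            support.append(j)
        # Least-squares update on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) <= tol:
            break
    return x, support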
The second part of the thesis addresses the non-differentiability of greedy algorithms caused by the (arg)sorting operator by employing "softsorting", enabling differentiable, neural-network-compatible versions of these algorithms. We provide a rigorous theoretical analysis and numerically demonstrate that these "soft" algorithms reliably approximate their original counterparts. We then link our weighted greedy solvers to neural architectures by embedding their iterations into neural networks using the "algorithm unrolling" paradigm, treating weights as meaningful trainable parameters. This approach not only advances the development of data-driven methods for sparse
recovery, but also marks a significant step toward building safer and more trustworthy neural networks. Finally, we extend this framework to deterministic compressed sensing and to learning additional parameters beyond weights.
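The "softsorting" idea replaces the hard, non-differentiable index selection with a smooth surrogate. The exact operator used in the thesis is not reproduced here; as a hedged illustration, a common softmax-based relaxation of argmax (the single-index special case) can be sketched as follows.

import numpy as np

def soft_argmax(scores, tau=0.1):
    # Softmax relaxation of argmax: returns a probability vector that
    # concentrates on the largest entry as the temperature tau -> 0,
    # keeping the selection step differentiable.
    z = (scores - scores.max()) / tau   # shift by the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# In an unrolled, trainable version of a greedy solver, such soft selections
# replace hard argmax/argsort steps so that gradients can flow through the
# iterations and quantities such as the weights become trainable parameters.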

Divisions: Concordia University > Faculty of Arts and Science > Mathematics and Statistics
Item Type: Thesis (PhD)
Authors: Mohammad-Taheri, Sina
Institution: Concordia University
Degree Name: Ph.D.
Program: Mathematics
Date: 14 July 2025
Thesis Supervisor(s): Brugiapaglia, Simone
ID Code: 996047
Deposited By: Sina MohammadTaheri
Deposited On: 04 Nov 2025 17:12
Last Modified: 04 Nov 2025 17:12
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.


