Mohammad-Taheri, Sina (2025) Greedy sparse recovery algorithms: from weighted generalizations to deep unrolling. PhD thesis, Concordia University.
Text (application/pdf), 25MB: MohammadTaheri_PhD_F2025.pdf - Accepted Version. Available under License Spectrum Terms of Access.
Abstract
Sparse recovery is a clear manifestation of Occam’s razor in the mathematics of data, thanks to its ability to favor the simplest explanation. A prominent example is the reconstruction of a sparse vector (i.e., one with mostly zero or negligible coefficients) from linear measurements, possibly corrupted by noise. This problem arises in numerous applications across various domains, including medical imaging and high-dimensional function approximation. Greedy sparse recovery algorithms approach this problem by recursively solving local optimization problems and have proven to be efficient alternatives to convex methods.
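As a concrete reference point for the greedy approach described above, here is a minimal NumPy sketch of standard (unweighted) OMP: greedily select the column most correlated with the residual, then orthogonally project onto the selected support. This is a generic textbook sketch with illustrative names and data, not the weighted variants developed in the thesis.

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover an (approximately) `sparsity`-sparse x from y ≈ A @ x."""
    _, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # Greedy index selection: column most correlated with the residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf           # never reselect an index
        support.append(int(np.argmax(correlations)))
        # Orthogonal step: least-squares fit on the current support.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(n)
    x_hat[support] = coeffs
    return x_hat

# Illustrative example: a 3-sparse vector from 40 noisy Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(omp(A, y, 3))[0])                # expect indices 5, 17, 42
```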
In this dissertation, we first develop weighted generalizations of Orthogonal Matching Pursuit (OMP), one of the most popular greedy sparse recovery algorithms, based on two distinct weighting strategies aimed at incorporating a priori knowledge about the signal structure. These generalizations feature explicit and theoretically justified greedy index selection rules. The first strategy establishes a novel connection between greedy sparse recovery and convex relaxation methods, particularly the LASSO family, resulting in new OMP-based algorithms suited to a wide range of loss functions. Numerical results show that these greedy variants inherit key traits from the convex decoders they descend from. For the second strategy, we provide optimal recovery guarantees for rescaling-based weighted OMP under the weighted Restricted Isometry Property (wRIP), extending the classical RIP-based analysis to the weighted setting.
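As a hedged illustration of how a priori weights could enter the greedy step, the snippet below rescales the correlation scores by a weight vector before the argmax. Both the `weights` prior and this particular rescaling rule are assumptions made for illustration; the exact selection rules and their wRIP analysis are developed in the thesis.

```python
import numpy as np

def weighted_selection(A, residual, weights, support):
    # Hypothetical rescaling-based rule: prior weights rescale the
    # correlation scores before the greedy argmax (smaller weight = favored index).
    scores = np.abs(A.T @ residual) / weights
    scores[list(support)] = -np.inf               # never reselect chosen indices
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
residual = rng.standard_normal(20)
weights = np.ones(50)
weights[[3, 7, 11]] = 0.5                         # assumed prior on the support
print(weighted_selection(A, residual, weights, support=[]))
```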
The second part of the thesis addresses the non-differentiability of greedy algorithms caused by the (arg)sorting operator by employing "softsorting", enabling differentiable, neural-network-compatible versions of these algorithms. We provide a rigorous theoretical analysis and numerically demonstrate that these "soft" algorithms reliably approximate their original counterparts. We then link our weighted greedy solvers to neural architectures by embedding their iterations into neural networks using the "algorithm unrolling" paradigm, treating weights as meaningful trainable parameters. This approach not only advances the development of data-driven methods for sparse recovery, but also marks a significant step toward building safer and more trustworthy neural networks. Finally, we extend this framework to deterministic compressed sensing and to learning additional parameters beyond weights.
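The "softsorting" idea can be pictured with a standard temperature-controlled softmax, which smoothly approximates the hard argmax used for greedy index selection and is therefore amenable to backpropagation inside unrolled iterations. This is a generic surrogate shown for illustration only, not the specific soft operator analyzed in the thesis.

```python
import numpy as np

def soft_argmax(scores, temperature=0.1):
    """Probability vector that concentrates on the argmax as temperature -> 0."""
    z = scores / temperature
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

scores = np.array([0.2, 1.5, 0.7])
print(soft_argmax(scores, temperature=1.0))    # smooth, spread-out weights
print(soft_argmax(scores, temperature=0.05))   # nearly one-hot at index 1
```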
| Field | Value |
|---|---|
| Divisions: | Concordia University > Faculty of Arts and Science > Mathematics and Statistics |
| Item Type: | Thesis (PhD) |
| Authors: | Mohammad-Taheri, Sina |
| Institution: | Concordia University |
| Degree Name: | Ph. D. |
| Program: | Mathematics |
| Date: | 14 July 2025 |
| Thesis Supervisor(s): | Brugiapaglia, Simone |
| ID Code: | 996047 |
| Deposited By: | Sina MohammadTaheri |
| Deposited On: | 04 Nov 2025 17:12 |
| Last Modified: | 04 Nov 2025 17:12 |