
Learning Linear Programs: Inverse Optimization as a Form of Machine Learning

Tan, Yingcong (2021) Learning Linear Programs: Inverse Optimization as a Form of Machine Learning. PhD thesis, Concordia University.

Text (application/pdf)
Tan_PhD_S2021.pdf - Accepted Version
Available under License Spectrum Terms of Access.
36MB

Abstract

Conventionally, in an optimization problem, we aim to determine the values of the decision variables to minimize the objective function subject to the constraints. We refer to this problem as the forward optimization problem (FOP). In an inverse optimization (IO) problem, the goal is to determine the coefficients of an FOP such that an observed solution becomes an optimal solution of the learned FOP model.
In this dissertation, we focus on the inverse linear optimization problem, in which the FOP is a linear program.
We adopt an interdisciplinary approach, leveraging concepts and methods from machine learning to address a very general form of this problem.
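To make the FOP/IO relationship concrete, the sketch below solves a toy forward LP by vertex enumeration and then searches a handful of candidate cost vectors for one that makes an observed solution optimal. The polytope, the candidate costs, and the observed point are illustrative assumptions, not instances from the dissertation.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # unit square

def solve_fop(c):
    """Forward problem: a bounded LP attains its optimum at a vertex,
    so on this tiny polytope we can solve by enumerating vertices."""
    return min(VERTICES, key=lambda x: dot(c, x))

def inverse_fit(x_obs, candidates):
    """Inverse problem: return a candidate cost under which x_obs is optimal."""
    for c in candidates:
        if solve_fop(c) == x_obs:
            return c
    return None

x_obs = (1.0, 0.0)  # the solution we observed (hypothetical data)
c_learned = inverse_fit(x_obs, [(1.0, 1.0), (-1.0, 1.0), (1.0, -1.0)])
# c_learned == (-1.0, 1.0): under this cost, the observed vertex is optimal
```

Real IO methods search a continuous space of coefficients rather than a finite candidate list; the point here is only the direction reversal: the forward problem maps costs to solutions, the inverse problem maps an observed solution back to costs.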

Firstly, we study the general form of the inverse linear optimization problem, that is, learning all model coefficients individually or jointly, where the unknown coefficients may or may not depend on exogenous parameters. We are the first to cast the IO problem as a form of deep learning and to solve it with a gradient-based algorithm. To compute the gradients, we differentiate through the steps of an optimization process, in particular, the barrier interior-point method. We develop new sets of benchmark instances and show good performance of our algorithm on three IO tasks: (1) learning the cost vector of a linear program; (2) learning the cost vector and constraints of a linear program jointly; (3) learning unknown parameters in the objective and constraints of a parametric linear program.
To the best of our knowledge, this algorithm is the first IO approach in the literature able to handle all three types of tasks.
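The dissertation differentiates through the steps of a barrier interior-point solve; as a loose, self-contained stand-in, the sketch below replaces the LP over the probability simplex with an entropy-regularized surrogate whose solution is a smooth softmin of the costs, and fits the cost vector to an observed solution by gradient descent with finite-difference gradients. The instance, the regularization strength, and the step size are all illustrative assumptions, not the thesis's method or data.

```python
import math

T = 2.0  # regularization strength; larger T tracks the true argmin more closely

def soft_solve(c):
    """Smoothed forward solve: entropy-regularized LP over the simplex,
    whose optimum is the softmin distribution x_i proportional to exp(-T*c_i)."""
    w = [math.exp(-T * ci) for ci in c]
    s = sum(w)
    return [wi / s for wi in w]

def loss(c, x_obs):
    """Squared error between the smoothed solution and the observation."""
    return sum((xi - oi) ** 2 for xi, oi in zip(soft_solve(c), x_obs))

def grad(c, x_obs, eps=1e-6):
    """Finite-difference gradient of the loss with respect to the costs."""
    g = []
    for i in range(len(c)):
        cp, cm = list(c), list(c)
        cp[i] += eps
        cm[i] -= eps
        g.append((loss(cp, x_obs) - loss(cm, x_obs)) / (2 * eps))
    return g

x_obs = [0.7, 0.2, 0.1]   # observed solution (hypothetical data)
c = [0.0, 0.0, 0.0]       # initial guess for the unknown cost vector
for _ in range(2000):     # plain gradient descent on the cost vector
    c = [ci - 0.2 * gi for ci, gi in zip(c, grad(c, x_obs))]

x_fit = soft_solve(c)     # solution of the learned forward model
```

The smoothing is what makes the solution map differentiable; unrolling an interior-point method plays the analogous role in the dissertation, at the cost of a much more involved backward pass.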


Secondly, we formulate the inverse linear optimization problem as a bilevel optimization problem and explicitly encode constraints in the outer problem to ensure that observed solutions remain feasible for the learned FOP. Again leveraging a machine learning perspective on inverse linear optimization, we develop a general-purpose framework for solving the bilevel model with gradient-based algorithms. We investigate different methods for differentiating through an optimization problem and specialize them to linear programs. Additionally, we focus on an objective-based loss function and derive a closed-form expression for computing its gradients.
Experimental results show that our framework can solve synthetic parametric linear program and multi-commodity flow problem instances that could not previously be solved by methods in the IO literature. Additionally, we show that our closed-form expression is orders of magnitude faster than other methods for computing the gradients.
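For an objective-based (suboptimality) loss l(c) = c^T x_obs - min_x c^T x, a standard closed-form subgradient is x_obs - x*(c), by Danskin's theorem: no differentiation through the solver is needed, which is what makes such expressions fast. The sketch below applies this subgradient on a toy polytope; it illustrates the general idea under assumed data, not the thesis's specific derivation or instances.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # unit square

def solve_fop(c):
    """Forward solve by vertex enumeration (valid for this bounded toy LP)."""
    return min(VERTICES, key=lambda x: dot(c, x))

x_obs = (1.0, 0.0)   # observed solution (hypothetical data)
c = [1.0, 1.0]       # initial cost guess; x_obs is suboptimal under it

for _ in range(100):
    x_star = solve_fop(c)
    subopt = dot(c, x_obs) - dot(c, x_star)   # objective-based loss l(c)
    if subopt <= 1e-9:
        break                                 # x_obs is now optimal under c
    g = [o - s for o, s in zip(x_obs, x_star)]  # closed-form subgradient
    c = [ci - 0.4 * gi for ci, gi in zip(c, g)]
```

Each iteration costs one forward solve plus a vector subtraction, compared with a full backward pass through the solver for unrolling-based gradients.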

Finally, we focus on the special case of learning the objective only. We present four methods for solving this problem: three mathematical formulations and one general gradient-based algorithm. We test all four methods on synthetic parametric linear program and multi-commodity flow problem instances, and show that all four can successfully solve every experimental instance. The three mathematical models are solved with a commercial optimization solver and, due to their specialized nature, outperform the more generic gradient-based algorithm in runtime. Additionally, we show that the parametric linear programs learned from the KKT-based and strong-duality-based formulations produce the best predictions on testing data.
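In KKT- and strong-duality-based formulations, "the observed solution is optimal for the learned LP" is encoded with auxiliary dual variables: for min c^T x s.t. Ax >= b, x >= 0, a pair (x_obs, y) with x_obs primal-feasible, y dual-feasible (A^T y <= c, y >= 0), and zero duality gap (c^T x_obs = b^T y) certifies optimality. The sketch below checks such a certificate on a tiny hand-built instance; the instance and the chosen c and y are illustrative assumptions, not the thesis's formulations themselves.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def certifies_optimality(A, b, c, x_obs, y, tol=1e-9):
    """True iff (x_obs, y) satisfy primal feasibility, dual feasibility,
    and strong duality for  min c^T x  s.t.  A x >= b, x >= 0."""
    primal_ok = (all(dot(row, x_obs) >= bi - tol for row, bi in zip(A, b))
                 and all(xi >= -tol for xi in x_obs))
    At_y = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(c))]
    dual_ok = (all(v <= cj + tol for v, cj in zip(At_y, c))
               and all(yi >= -tol for yi in y))
    zero_gap = abs(dot(c, x_obs) - dot(b, y)) <= tol
    return primal_ok and dual_ok and zero_gap

A = [[1.0, 1.0]]     # one constraint: x1 + x2 >= 1
b = [1.0]
x_obs = [1.0, 0.0]   # observed solution to be explained
c = [1.0, 2.0]       # a learned cost under which x_obs is optimal...
y = [1.0]            # ...as this dual certificate confirms
print(certifies_optimality(A, b, c, x_obs, y))  # True
```

An inverse optimization formulation turns this check around: it treats c (and y) as decision variables and imposes these feasibility and zero-gap conditions as constraints, which a commercial solver can then handle directly.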

In summary, the main contribution of this dissertation is the development of a general gradient-based framework which is able to solve problems that were previously not tackled in the IO literature. In addition, we extend and unify models for the special case of learning the objective only, which has been the focus of previous IO work. This dissertation unifies machine learning and inverse optimization both at the modelling and algorithmic levels.

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Mechanical, Industrial and Aerospace Engineering
Item Type: Thesis (PhD)
Authors: Tan, Yingcong
Institution: Concordia University
Degree Name: Ph.D.
Program: Industrial Engineering
Date: 15 January 2021
Thesis Supervisor(s): Terekhov, Daria and Delong, Andrew
Keywords: Inverse Optimization, Linear Program, Deep Learning
ID Code: 988038
Deposited By: Yingcong Tan
Deposited On: 29 Jun 2021 20:56
Last Modified: 29 Jun 2021 20:56
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
