OPTIMIZATION AND DECISION SCIENCE
Channel 1
LAURA PALAGI
Course program
1. Prescriptive/predictive/descriptive models in decision theory
a. From data to models; endogenous/exogenous parameters;
b. Mathematical Programming Models (MP);
c. Classification of MP: unconstrained nonlinear optimization, constrained optimization (nonlinear/linear, continuous/integer)
2. Convex and concave optimization
a. Convex set
b. Convex functions; characterization of convex function; convex optimization
c. Properties of the optimal solutions of convex problems:
Theorem [Absence of non-global local solutions of a convex problem] with proof (restated after this section)
d. Concave function and concave optimization
e. Properties of the optimal solutions of concave problems:
Theorem [No interior solution for concave problems] with proof
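For item c., a compact statement and proof sketch of the first theorem; the phrasing below is a standard one and may differ from the teaching notes.

```latex
% Standard formulation (phrasing assumed, not taken from the teaching notes).
\begin{theorem}[Absence of non-global local solutions]
Let $C \subseteq \mathbb{R}^n$ be a convex set and $f : C \to \mathbb{R}$ a
convex function. Then every local minimizer $x^*$ of $f$ on $C$ is a global
minimizer.
\end{theorem}
\begin{proof}[Sketch]
Suppose $f(\bar{x}) < f(x^*)$ for some $\bar{x} \in C$. For every
$\lambda \in (0, 1]$, convexity of $f$ and of $C$ gives
\[
  f\bigl((1-\lambda)\,x^* + \lambda\,\bar{x}\bigr)
  \le (1-\lambda)\, f(x^*) + \lambda\, f(\bar{x}) < f(x^*),
\]
so feasible points arbitrarily close to $x^*$ (let $\lambda \to 0$) have a
strictly smaller value, contradicting local optimality.
\end{proof}
```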
3. Graphical solution of MP problems in two variables (plot of the feasible region, level curves in 2D)
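For item 3, a minimal plotting sketch of the graphical method; the sample problem, its bounds, and the code below are illustrative assumptions, not material taken from the course.

```python
# Illustrative 2D problem:  min x1 + 2*x2  s.t.  x1 + x2 >= 1,  0 <= x1, x2 <= 2.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2, 400)
X1, X2 = np.meshgrid(t, t)
feasible = X1 + X2 >= 1                                     # constraint mask on the box
Z = X1 + 2 * X2                                             # objective f(x) = x1 + 2*x2

plt.contourf(X1, X2, feasible, levels=[0.5, 1], alpha=0.3)  # shade the feasible region
cs = plt.contour(X1, X2, Z, levels=10)                      # level curves of the objective
plt.clabel(cs)
plt.xlabel("x1"); plt.ylabel("x2")
plt.title("Feasible region and level curves")
plt.show()
```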
4. Unconstrained optimization
a. Examples of unconstrained problems:
i. parametric predictive models: linear regression and the linear least-squares problem;
ii. non-parametric predictive models: training of a shallow neural network (NN).
b. Algorithmic viewpoint of optimality conditions:
i. descent directions; a sufficient condition for identifying descent directions;
ii. the special case of convex functions; the special case of linear functions;
c. Derivation of the first-order necessary conditions for unconstrained optimality; the special case of convex/linear functions
d. Algorithmic use of optimality conditions: the gradient direction and the gradient method; the backpropagation algorithm (vanilla gradient method) for training NNs (see the sketch after this section)
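Referenced in item d., a minimal sketch of the gradient method applied to the linear least-squares problem of item a.i, min_w ||Xw - y||^2; the synthetic data and the fixed step size are illustrative choices, not the course's examples.

```python
# Gradient method for linear least squares: min_w 0.5 * ||X w - y||^2,
# with gradient X^T (X w - y). Data and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # synthetic design matrix (assumption)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)

w = np.zeros(3)
alpha = 1.0 / np.linalg.norm(X.T @ X, 2)      # step <= 1/L, L = largest eigenvalue
for _ in range(500):
    grad = X.T @ (X @ w - y)                  # first-order information
    if np.linalg.norm(grad) < 1e-8:           # stationary point: optimal (convexity)
        break
    w -= alpha * grad                         # step along the negative gradient

print(w)                                      # close to the true coefficients
```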
5. Constrained continuous optimization:
a. Examples of constrained problems:
i. nonlinear models: the (constrained) design model, (quadratic programming) Support Vector Machines for supervised classification
ii. linear models:
The assignment problem,
The transportation problem,
The blending problem,
The revenue management problem,
The production problem with stock,
Modelling piecewise linear convex objective functions;
b. Algorithmic viewpoint of optimality conditions in the case of linear constraints: absence of feasible descent directions
c. Characterization of feasible directions in the special case of linear constraints
d. Algorithmic use of optimality conditions:
i. the conditional gradient (Frank-Wolfe) method (sketched after this section)
e. Derivation of the optimality conditions in the case of only equality linear constraints:
Theorem [Lagrangian first-order optimality conditions over Ax = b] with proof
f. Derivation of the optimality conditions for linearly inequality-constrained problems:
Theorem [Karush-Kuhn-Tucker conditions on a polyhedron] with proof
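The conditional gradient sketch referenced in item d.i, specialized to the unit simplex so that the linear subproblem has a closed-form vertex solution; the objective and the step-size rule are illustrative assumptions.

```python
# Frank-Wolfe over the unit simplex {x >= 0, sum(x) = 1}: the linear subproblem
# min_s <grad, s> is solved by the vertex e_i with i = argmin_i grad_i.
import numpy as np

def frank_wolfe(grad, x0, iters=200):
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        i = int(np.argmin(g))                 # linear minimization oracle: best vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (k + 2)                 # classic diminishing step size
        x += gamma * (s - x)                  # stays feasible: convex combination
    return x

# Illustrative use: project p onto the simplex, i.e. min ||x - p||^2.
p = np.array([0.9, 0.4, -0.2])
x = frank_wolfe(lambda x: 2 * (x - p), np.ones(3) / 3)
print(x, x.sum())                             # nonnegative, sums to 1
```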
6. The special case of minimizing/maximizing a linear function over a polyhedron (the Linear Programming problem, LP)
a. The fundamental theorem of LP and characterization of vertices
b. Graphical solution of 2D LP problems
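A small solver-based check of this section (anticipating item 7.f): on the illustrative 2D LP below, the optimum returned by SciPy's linprog sits at a vertex of the polyhedron, as the fundamental theorem predicts. The problem data are assumptions for illustration.

```python
# Solve  min -x1 - 2*x2   s.t.  x1 + x2 <= 4,  x1 <= 3,  x1, x2 >= 0.
from scipy.optimize import linprog

res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
              bounds=[(0, None)] * 2)
print(res.x, res.fun)                         # optimum at the vertex (0, 4), value -8
```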
7. Duality theory for LP
a. The KKT conditions for LP: strong duality theorem with proof
b. Weak duality theorem with proof (derivation of bounds on the optimal value)
c. Sensitivity analysis: how changes in the model parameters affect the solution
d. Shadow (marginal) prices for active constraints
e. A solution algorithm for the continuous knapsack problem based on its dual (see the sketch at the end of this section)
f. Use of off-the-shelf solvers and interpretation of the solution
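For item e., a sketch of the greedy procedure that the dual argument justifies for the continuous knapsack problem: fill items by decreasing value/weight ratio, splitting at most one. The data are illustrative.

```python
# Continuous knapsack: max sum(v_i * x_i)  s.t.  sum(w_i * x_i) <= W, 0 <= x_i <= 1.
def continuous_knapsack(values, weights, W):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x, used = [0.0] * len(values), 0.0
    for i in order:
        take = min(1.0, (W - used) / weights[i])   # fraction that still fits
        x[i] = take
        used += take * weights[i]
        if used >= W:
            break
    return x

print(continuous_knapsack([60, 100, 120], [10, 20, 30], 50))  # -> [1.0, 1.0, 2/3]
```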
8. Integer programming
a. Examples of ILP: capital budgeting, the knapsack problem, power generation, network design
b. Example of IQP: the SVM problem with feature selection
c. Use of binary variables for modelling: disjunctive constraints, piecewise concave objective functions
d. The Branch & Bound (B&B) method for ILP
e. The solution of the binary knapsack problem using the B&B algorithm (sketched below)
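A minimal sketch of items d. and e.: Branch & Bound for the binary knapsack, using the continuous relaxation of section 7 as the upper bound at each node. Code organization and data are illustrative assumptions.

```python
# Branch & Bound for the 0/1 knapsack; the node bound is the LP relaxation
# (greedy fill with one fractional item on the remaining items).
def knapsack_bb(values, weights, W):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, cap, val):
        # optimistic value of the node: LP relaxation on items order[k:]
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def branch(k, cap, val):
        nonlocal best
        best = max(best, val)                 # every node is a feasible partial choice
        if k == len(order) or bound(k, cap, val) <= best:
            return                            # prune: bound cannot beat the incumbent
        i = order[k]
        if weights[i] <= cap:
            branch(k + 1, cap - weights[i], val + values[i])  # take item i
        branch(k + 1, cap, val)               # skip item i

    branch(0, W, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```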
Prerequisites
Linear algebra, vectors, partial derivatives
Books
Teaching notes
Frequency
In person
Exam mode
The assessment consists of a written test with multiple-choice and open-ended questions, aimed at assessing methodological skills.
The oral exam is optional.
Lesson mode
In person
- Lesson code: 10616523
- Academic year: 2025/2026
- Course: Electrical Engineering
- Curriculum: Electrical Engineering for Digital Transition and Sustainable Power Systems
- Year: 1st year
- Semester: 2nd semester
- SSD: MAT/09
- CFU: 6