Deep Learning and Applied Artificial Intelligence

Course objectives

General goals: familiarity with advanced machine learning techniques, both supervised and unsupervised; the ability to model complex problems using deep learning techniques and to apply them in diverse application settings.

Specific goals: topics include deep neural networks, their training and the interpretation of their results; convolutional networks and prominent architectures; theory of deep learning and convergence; programming frameworks for implementing advanced machine learning techniques; autoencoders; adversarial attacks.

Knowledge and understanding: how neural networks work and their mathematical interpretation as universal approximators; understanding the limits and potential of advanced machine learning models.

Applying knowledge and understanding: design, implementation, deployment, and analysis of deep learning architectures addressing complex problems in several application areas.

Critical and judgmental abilities: evaluating the performance of different architectures and assessing their generalization capabilities.

Communication skills: communicating clearly how to formulate an advanced machine learning problem, how to implement it, its applicability in realistic settings, and specific architectural and regularization choices.

Ability to learn: understanding alternative and more complex techniques such as generative models based on optimal transport, scattering transforms, and the energetic profile of neural networks; implementing existing techniques efficiently, robustly, and reliably.

Channel 1
Lecturer: Emanuele Rodolà


Course program
- Data, features, and embeddings: data awareness; modeling prior knowledge; the curse of dimensionality; task-driven features and invariances
- Recap of linear algebra: vector spaces, bases; linear maps; matrix notation and matrix algebra; tensors and tensor operations
- Parametric models and regression: linear and polynomial regression; convexity and Lp norms; underfitting and overfitting; cross validation; logistic regression
- Optimization: gradient descent; stochastic gradient descent; learning rate, decay, momentum, batch size; forward- and reverse-mode automatic differentiation
- Deep neural networks: multilayer perceptron; backpropagation; universal approximation theorems; autograd and modules; invariance, equivariance, compositionality; convolutional neural networks; pooling; double descent; regularization (weight penalty, early stopping, dropout, batch normalization)
- Generative models: PCA; manifolds and the manifold hypothesis; representation learning; autoencoders (variational, contractive, denoising); generative adversarial networks
- Adversarial learning: decision boundaries; black-box and white-box attacks; adversarial perturbations (universal and one-pixel); adversarial training
- Geometric deep learning: learning on graphs and point clouds; learning on surfaces; generative models of structured data; adversarial surfaces
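To give a flavor of the optimization unit above, here is a minimal sketch of plain gradient descent (a hypothetical illustration in pure Python, not course material), minimizing f(w) = (w - 3)^2, whose gradient is f'(w) = 2(w - 3):

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Iterate the update w <- w - lr * grad(w) for a fixed number of steps."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Minimize f(w) = (w - 3)^2; the iterates approach the minimizer w = 3.
w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

With this learning rate each step shrinks the distance to the minimizer by a constant factor (1 - 2*lr), which is the kind of convergence behavior studied in the lectures.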
Prerequisites
Important: calculus; linear algebra; fundamentals of machine learning. Mandatory: fundamentals of programming in Python. The course will cover the basics of calculus, linear algebra, and machine learning needed to fully understand the lectures.
Books
Due to the highly dynamic nature of this advanced course, classes will not follow a specific textbook. Various sources will be provided throughout the course in the form of scientific papers and book chapters. For your own reference, the following material may be useful:
- Deep Learning. Ian Goodfellow, Yoshua Bengio, Aaron Courville. MIT Press, 2016.
- Deep Learning with PyTorch. Vishnu Subramanian. Packt, 2018.
Teaching mode
The course is held entirely in the classroom. Classes follow a hybrid format, covering both the theoretical and the more technical aspects of advanced machine learning: frequent practical demos are interleaved throughout the lectures. These tutorials are run live and make up about 40% of the course. The deep learning framework PyTorch is used in these sessions. All students are expected to engage actively in the tutorials, working on the notebooks on their own computers.
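As a taste of what these PyTorch sessions involve, a minimal training-loop cell (a hypothetical sketch, not an actual course notebook) might fit a linear model by stochastic gradient descent:

```python
import torch

torch.manual_seed(0)

# Synthetic data: y = 2x + 1 plus a little noise.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2.0 * x + 1.0 + 0.05 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()        # reset accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # reverse-mode autodiff (backpropagation)
    optimizer.step()             # gradient descent update
```

The same four-line loop structure (zero gradients, forward, backward, step) carries over unchanged to the deep architectures covered later in the course.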
Frequency
Lectures are held in person; attendance is not mandatory.
Exam mode
Evaluation consists of the following steps:
1. A written midterm, acting as a self-evaluation test, which does not count toward the final grade.
2. A project (not necessarily individual).
3. An optional oral exam, which can add or subtract up to 3 points from the final score.
Together, these steps evaluate technical and theoretical skills, the ability to work in a group, knowledge of the literature, and the ability to formulate deep learning problems and set up experiments.
Bibliography
- Deep Learning. Ian Goodfellow, Yoshua Bengio, Aaron Courville. MIT Press, 2016.
- Deep Learning with PyTorch. Vishnu Subramanian. Packt, 2018.
  • Lesson code: 10593236
  • Academic year: 2025/2026
  • Course: Computer Science
  • Curriculum: Single curriculum
  • Year: 1st year
  • Semester: 2nd semester
  • SSD: INF/01
  • CFU: 6