THREE-DIMENSIONAL MODELING
Channel 1
SIMONE SCARDAPANE
Lecturers' profile
Program - Frequency - Exams
Course program
The first part of the course focuses on standard topics, including:
- Introduction to the course (objectives, textbook, exam)
- Preliminaries: Linear algebra for neural networks, linear regression and classification, fully-connected neural networks, automatic differentiation of programs (forward and backward modes, Jacobian-vector products, checkpointing); a short autodiff example is sketched after this program
- Convolutional neural networks
- Modern convolutional networks
- Attention-based models for sequences, multi-head attention, positional embeddings
- Recurrent models for variable-length sequences
The second part will introduce a number of research topics at different levels of depth:
- Graph neural networks
- Continual learning
- Generative models
- Self-supervised models
The course also includes several practical lab sessions, ranging from PyTorch to JAX.
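To give a flavour of the autodiff preliminaries above, here is a minimal sketch in JAX (one of the lab frameworks); the toy function f and all values are illustrative only, not course material:

import jax
import jax.numpy as jnp

def f(x):
    # Toy vector-valued function from R^3 to R^2.
    return jnp.array([x[0] * x[1], jnp.sin(x[2])])

x = jnp.array([1.0, 2.0, 3.0])
v = jnp.array([1.0, 0.0, 0.0])   # tangent vector

# Forward mode: Jacobian-vector product J(x) @ v.
y, jvp_out = jax.jvp(f, (x,), (v,))

# Reverse mode: vector-Jacobian product u^T @ J(x), as used in backpropagation.
u = jnp.array([1.0, 1.0])        # cotangent vector
y2, vjp_fun = jax.vjp(f, x)
vjp_out, = vjp_fun(u)

print(jvp_out)   # [2. 0.]: directional derivative of f along v
print(vjp_out)   # [2. 1. cos(3)]: pullback of u through the Jacobian

Forward mode computes one column of the Jacobian per pass and reverse mode one row; for scalar losses with many parameters, reverse mode (backpropagation) is the economical choice.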
Prerequisites
The student is expected to have prior exposure to and understanding of the following topics, which will be briefly reviewed during the initial lectures:
Linear algebra, including: vectorizing operations, common matrix decompositions, spectral analysis.
Probability, including: common probability distributions (Bernoulli, Gaussian, …), conditional probabilities, expectations and moments of the distributions.
Numerical optimization, including: minima and maxima of an optimization problem, gradients, optimizing functions through gradient descent (a short refresher sketch follows this list).
Machine learning, including: supervised learning, unsupervised learning, linear classification and regression.
The student should also have strong experience in programming with Python; experience with NumPy or similar libraries is advised but not mandatory.
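As a refresher on the gradient-descent prerequisite, the sketch below minimizes a least-squares loss in NumPy; the data, step size, and iteration count are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # design matrix
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.01                                  # step size
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad

print(w)  # should be close to w_true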
Books
Dive Into Deep Learning (https://d2l.ai/), freely available online. Slides and materials are released on the course website.
Frequency
Classroom lectures; attendance is not mandatory.
Exam mode
The course will include one or more homework assignments (of varying difficulty), so that students can self-evaluate the knowledge acquired up to that point. The exam consists of a practical coding project (requiring the partial re-implementation of a recent scientific paper) and an oral evaluation.
Lesson mode
In-person lectures, with possible video recordings or remote lectures depending on the Faculty's regulations at the time of the course. All news will be shared on a Google Classroom page.
DANILO COMMINIELLO
Lecturers' profile
Program - Frequency - Exams
Course program
INTRODUCTION TO NEURAL NETWORKS. Course overview, objectives, textbooks and material, exams. Neural network preliminaries. Linear algebra for neural networks. Linear regression and classification. Fully-connected neural networks. Automatic differentiation of programs (forward and backward, Jacobian-vector products, checkpointing). (Ref: [1: Ch. 1, 2, 3, 4, 5, 11, 18])
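A minimal sketch of the fully-connected networks listed above, written in PyTorch (layer sizes and data are placeholders, not course material):

import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(20, 64),   # input features -> hidden units
    nn.ReLU(),
    nn.Linear(64, 3),    # hidden units -> class logits
)
x = torch.randn(8, 20)                       # a batch of 8 examples
labels = torch.randint(0, 3, (8,))           # fake class labels
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()                              # reverse-mode autodiff fills .grad
print(loss.item())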
CONVOLUTIONAL NEURAL NETWORKS. Convolutional layers. Deep learning architectures based on blocks (VGG). Residual and dense networks. Advanced convolutional neural network architectures. Application examples. (Ref: [1: Ch. 6, 7])
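The residual networks mentioned above are built from blocks with skip connections; a minimal sketch (assuming equal input and output channels) might look like:

import torch
from torch import nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)   # skip connection: the block learns a residual

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])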
TRANSFORMER NETWORKS. Attention-based models for sequences. Multi-head attention. Positional embeddings. (Ref: [1: Ch. 11])
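At the core of these models is scaled dot-product attention; a minimal sketch (tensor shapes are illustrative):

import math
import torch

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k). Scores are scaled by sqrt(d_k)
    # so their variance stays stable as the dimension grows.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)   # each row sums to 1
    return weights @ v

q = k = v = torch.randn(2, 5, 16)   # self-attention over 5 tokens
print(attention(q, k, v).shape)     # torch.Size([2, 5, 16])

Multi-head attention runs several such maps in parallel on learned projections and concatenates the results; PyTorch packages this as torch.nn.MultiheadAttention.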
RECURRENT NEURAL NETWORKS. Recurrent layers. Long short-term memory networks. Gated recurrent units. Encoder-decoder architectures. Application examples. (Ref: [1: Ch. 8, 9, 10])
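A minimal sketch of a recurrent layer in PyTorch (sizes are arbitrary): an LSTM consuming a batch of sequences and returning per-step hidden states:

import torch
from torch import nn

lstm = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)
x = torch.randn(4, 7, 10)        # 4 sequences, 7 time steps, 10 features
output, (h_n, c_n) = lstm(x)     # output: hidden state at every step
print(output.shape, h_n.shape)   # torch.Size([4, 7, 32]) torch.Size([1, 4, 32])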
GENERATIVE MODELS. Generative modeling. Variational inference. Variational autoencoder models. Generative adversarial networks. Diffusion models. Application examples. (Ref: [1: Ch. 20])
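As one concrete instance, the variational autoencoder is trained by minimizing the negative evidence lower bound (ELBO); a minimal sketch with placeholder linear encoder and decoder:

import torch
from torch import nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 20)     # outputs mean and log-variance of q(z|x)
dec = nn.Linear(20, 784)         # maps latent codes back to inputs

x = torch.rand(8, 784)           # fake batch with values in [0, 1]
mu, logvar = enc(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # reparameterization trick
x_hat = torch.sigmoid(dec(z))

recon = F.binary_cross_entropy(x_hat, x, reduction="sum")     # -log p(x|z)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
loss = (recon + kl) / x.size(0)  # negative ELBO, averaged over the batch
loss.backward()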
ADVANCED LEARNING FRAMEWORKS. Graph neural networks. Continual learning. Self-supervised learning. (Additional Material)
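For a flavour of the graph neural networks listed above, here is a minimal message-passing step in which each node averages its neighbours' features and applies a learned transformation (the graph, sizes, and normalization are illustrative assumptions):

import torch
from torch import nn

n_nodes, d_in, d_out = 5, 8, 16
A = (torch.rand(n_nodes, n_nodes) < 0.4).float()   # random adjacency matrix
A = A + torch.eye(n_nodes)                         # add self-loops
A = A / A.sum(dim=1, keepdim=True)                 # row-normalize: mean aggregation

H = torch.randn(n_nodes, d_in)                     # one feature vector per node
W = nn.Linear(d_in, d_out)
H_next = torch.relu(W(A @ H))                      # aggregate, transform, activate
print(H_next.shape)                                # torch.Size([5, 16])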
Throughout the course, lab sessions in Python will be arranged, including the implementation of the main neural network models in different application contexts.
Prerequisites
The student is expected to have prior exposure to and understanding of the following topics, which will be briefly reviewed during the initial lectures:
- Linear algebra, including: vectorizing operations, common matrix decompositions, spectral analysis.
- Probability, including: common probability distributions (Bernoulli, Gaussian, …), conditional probabilities, expectations and moments of the distributions.
- Numerical optimization, including: minima and maxima of an optimization problem, gradients, optimizing functions through gradient descent.
- Machine learning, including: supervised learning, unsupervised learning, linear classification and regression.
The student should also have strong experience in programming with Python; experience with NumPy or similar libraries is recommended but not mandatory.
Books
Main textbook:
- Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
Supplementary material provided by the instructor (course slides, papers) is available on the course website and on the Classroom page of the course.
Teaching mode
The course is based on regular classroom lessons, which also include Python exercises on practical problems. Exercises may be carried out in small teams of students.
Frequency
Course attendance is strongly recommended but not mandatory. Non-attending students will have all the information and teaching materials necessary to take the exam under the same conditions and in the same way as attending students.
Exam mode
The course will include one or more homework exercises (of varying difficulty), so that students can self-evaluate the knowledge acquired up to that point. The exam consists of a practical coding project (requiring the partial re-implementation of a recent scientific paper) and an oral evaluation.
Bibliography
[1] Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”. Elsevier, 2020.
[2] Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”. The MIT Press, 2012.
[3] Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
[4] Sebastian Raschka and Vahid Mirjalili, “Python Machine Learning” (3rd ed.). Packt Publishing, 2019.
[5] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”. The MIT Press, 2016.
[6] Christopher M. Bishop, “Pattern Recognition and Machine Learning”. Springer, 2006.
- Academic year: 2025/2026
- Course: Electrical Engineering
- Curriculum: Electrical Engineering for Digital Transition and Sustainable Power Systems
- Year: 2nd year
- Semester: 1st semester
- SSD: ING-IND/31
- CFU: 6