Course program
INTRODUCTION TO MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP). Introduction to signals, time series and possible representations. Introduction to deep learning computation. Optimization algorithms for deep learning. Propagation, computational graphs and automatic differentiation. Layers, blocks and activations. Practical examples using typical MLSP data and scenarios.
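As a taste of the computational-graph and automatic-differentiation material in this module, the following is a minimal sketch of scalar reverse-mode autodiff in pure Python. The `Value` class and its method names are illustrative choices for this sketch, not the API of any particular library.

```python
class Value:
    """A node in a computational graph holding a scalar and its gradient."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # upstream nodes in the graph
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()

        def build(node):
            if id(node) not in visited:
                visited.add(id(node))
                for p in node._parents:
                    build(p)
                topo.append(node)

        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            for parent, local in zip(node._parents, node._local_grads):
                parent.grad += local * node.grad


# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Value(3.0), Value(4.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Frameworks used in the labs build the same graph implicitly while evaluating tensor expressions; this sketch only makes the bookkeeping visible.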
MODERN DEEP LEARNING FOR IMAGE SIGNALS. Convolutional layers. Deep learning architectures based on blocks (VGG). Residual and dense networks. Graph convolutional networks. Applied examples for images, biomedical signals, and signals represented as images (e.g., spectrograms).
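The convolutional-layer topic above reduces, at its core, to a sliding dot product. A minimal sketch of a valid-mode 2D cross-correlation (what deep learning calls "convolution"), with illustrative function and variable names:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list by a 2D kernel."""
    H, W = len(image), len(image[0])
    kH, kW = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kH + 1):          # slide the kernel over every
        row = []                         # position where it fully fits
        for j in range(W - kW + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kH) for v in range(kW))
            row.append(s)
        out.append(row)
    return out


# A tiny image with a vertical edge, filtered by a horizontal difference
# kernel: the response is nonzero exactly where the edge sits.
image = [[1, 1, 0],
         [1, 1, 0],
         [1, 1, 0]]
kernel = [[1, -1]]
print(conv2d(image, kernel))  # [[0, 1], [0, 1], [0, 1]]
```

A learned convolutional layer is the same operation with the kernel entries treated as trainable parameters, plus a bias and a nonlinearity.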
MODERN DEEP LEARNING FOR TIME-VARYING SIGNALS. Recurrent layers. Encoder-decoder architectures. Attention mechanisms. Transformers. Applied examples for audio, speech, and other time-varying signals.
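The attention mechanism at the heart of this module can be sketched in a few lines: scaled dot-product attention computes softmax(QKᵀ/√d_k)·V. The sketch below uses plain lists and illustrative names; real implementations are batched tensor operations.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of the query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)
        # Weighted average of the values.
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out


# One query aligned with the first of two keys: the output is pulled
# toward the first value vector.
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention([[1.0, 0.0]], K, V)
```

Because the weights come from a softmax, each output row is a convex combination of the value vectors; here the two output components sum to 10.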
FAST AND EFFICIENT LEARNING. Low-complexity machine learning approaches. Sparsification techniques. Model compression and acceleration. Quantized and efficient architectures. Applied examples.
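As an illustration of the quantization topic above, here is a minimal sketch of symmetric uniform quantization of a list of weights to signed 8-bit codes; function names and the scale convention are illustrative assumptions, not a specific framework's API.

```python
def quantize(weights, num_bits=8):
    """Symmetric uniform quantization of floats to signed integer codes."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]   # integer codes in [-qmax, qmax]
    return q, scale


def dequantize(q, scale):
    """Map integer codes back to approximate float weights."""
    return [qi * scale for qi in q]


w = [0.5, -1.0, 0.25, 0.75]
q, scale = quantize(w, num_bits=8)
w_hat = dequantize(q, scale)
# Each reconstructed weight is within half a quantization step
# (scale / 2) of the original.
```

Storing `q` as int8 instead of `w` as float32 cuts memory by 4x, at the cost of a bounded rounding error per weight.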
GENERATIVE DEEP LEARNING. Energy-based models. Variational autoencoders. Generative adversarial networks. Applied examples for images, biomedical signals, audio, speech, and music signals.
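Two small pieces of the variational-autoencoder topic above fit in a few lines: the reparameterization trick and the closed-form KL term of the VAE loss for a diagonal Gaussian posterior. This is a sketch with illustrative names; in practice these operate on tensors inside a training loop.

```python
import math
import random


def reparameterize(mu, log_var, rng=random):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1),
    so that gradients can flow through mu and log_var."""
    sigma = math.exp(0.5 * log_var)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps


def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), one term of the VAE loss."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)


# The KL term vanishes exactly when the approximate posterior
# matches the standard-normal prior.
print(kl_to_standard_normal(0.0, 0.0))  # 0.0
```

The reparameterization moves the randomness into an external noise variable, which is what makes the encoder trainable by backpropagation.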
Throughout the course, lab sessions in Python will be arranged, covering the implementation of the main machine learning methods in different signal processing application contexts (e.g., audio and speech, images, biomedical signals, music signals, multichannel signals from heterogeneous sensors, and predictive maintenance processes, among others).
Prerequisites
Most of the prerequisites will be briefly recalled in class. However, basic knowledge of linear algebra, signal theory, and stochastic processes is strongly recommended, as are basic programming skills.
Books
Main textbook:
- Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
Alternative textbooks:
- Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”. Elsevier, 2020.
- Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”. The MIT Press, 2021.
- Christopher M. Bishop, “Pattern Recognition and Machine Learning”. Springer, 2006.
Supplementary material provided by the instructor (course slides, papers) is available on the website http://danilocomminiello.site.uniroma1.it and on the Classroom page of the course.
Teaching mode
The course is based on regular classroom lessons, which also include Python exercises on practical problems. Exercises may be performed in small teams of students.
Attendance
Course attendance is strongly recommended but not mandatory. Non-attending students will have all the information and teaching materials necessary to perform the exam under the same conditions and in the same way as the attending students.
Exam mode
Exam grades (on a 30-point scale) will be based on homework assignments (30%) and a final project (70%). Final projects will be assigned to small teams of students. The exam evaluates both the theoretical skills acquired by the student and the ability to apply and implement a specific methodology in a practical problem.
Bibliography
[1] Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”. Elsevier, 2020.
[2] Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”. The MIT Press, 2021.
[3] Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
[4] Sebastian Raschka and Vahid Mirjalili, “Python Machine Learning” (3rd ed.). Packt Publishing, 2019.
[5] Ian Goodfellow, Yoshua Bengio, Aaron Courville, “Deep Learning”. The MIT Press, 2016.
[6] Christopher M. Bishop, “Pattern Recognition and Machine Learning”. Springer, 2006.