MACHINE LEARNING FOR SIGNAL PROCESSING

Course objectives

GENERAL
The course “Machine Learning for Signal Processing” aims to provide a solid understanding of machine learning techniques applied to signal processing. Starting from foundational concepts in supervised, unsupervised, and generative learning, students are guided through the analysis, modeling, and synthesis of complex signals using neural and probabilistic models. The course covers classical architectures such as multilayer perceptrons, convolutional and recurrent networks, and progresses to advanced generative models such as autoencoders, GANs, and diffusion models, with applications to audio, biomedical, and time-series signals. Through a mix of theoretical lectures and hands-on Python notebooks, the course enables students to design intelligent systems for signal analysis and to critically evaluate their outcomes.

SPECIFIC
• Knowledge and understanding: Understand the principles of machine learning and how they apply to signal processing tasks.
• Applying knowledge and understanding: Be able to implement and adapt machine learning models across various signal domains (audio, biomedical, temporal).
• Making judgements: Justify the selection of suitable methods based on the nature of the signal and the problem at hand.
• Communication skills: Effectively communicate approaches, results, and implications of signal analysis using machine learning techniques.
• Learning skills: Develop autonomy in studying new techniques and applying models to novel datasets.

Channel 1
DANILO COMMINIELLO

Program - Attendance - Exams

Course program
INTRODUCTION TO MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP). Introduction to signals, time series and possible representations. Introduction to deep learning computation. Optimization algorithms for deep learning. Propagation, computational graphs and automatic differentiation. Layers, blocks and activations. Practical examples using typical MLSP data and scenarios.

MODERN DEEP LEARNING FOR IMAGE SIGNALS. Convolutional layers. Deep learning architectures based on blocks (VGG). Residual and dense networks. Graph convolutional networks. Applicative examples for images, biomedical signals, and signals represented as images (e.g., spectrograms).

MODERN DEEP LEARNING FOR TIME-VARYING SIGNALS. Recurrent layers. Encoder-decoder architectures. Attention mechanism. Transformers. Applicative examples for audio, speech and time-varying signals.

FAST AND EFFICIENT LEARNING. Low-complexity machine learning approaches. Sparsification techniques. Model compression and acceleration. Quantized and efficient architectures. Applicative examples.

GENERATIVE DEEP LEARNING. Energy-based models. Variational autoencoders. Generative adversarial networks. Applicative examples for images, biomedical signals, audio, speech and music signals.

Throughout the course, lab sessions in Python will be arranged, including the implementation of the main machine learning methods in different application contexts of signal processing (e.g., audio and speech, images, biomedical signals, music signals, multichannel signals from heterogeneous sensors, and predictive maintenance processes, among others). An illustrative sketch of this kind of lab exercise follows.
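As a purely illustrative sketch of what such a lab session might look like, the short Python excerpt below turns a synthetic one-dimensional signal into a spectrogram (a signal represented as an image) and runs a single optimization step of a small convolutional classifier. PyTorch, the toy data, and the tiny network are assumptions made only for this example; the program above does not prescribe a specific framework or dataset.

# Illustrative only: synthetic signals -> spectrograms -> small CNN, one training step.
# PyTorch is assumed here; the course does not mandate a specific framework.
import torch
import torch.nn as nn

# Toy batch: 8 one-second signals sampled at 8 kHz, with 2 dummy classes.
signals = torch.randn(8, 8000)
labels = torch.randint(0, 2, (8,))

# Short-time Fourier transform magnitudes: the "signal represented as an image".
spec = torch.stft(signals, n_fft=256, hop_length=128,
                  window=torch.hann_window(256), return_complex=True).abs()
spec = spec.unsqueeze(1)  # add a channel dimension -> (batch, 1, freq, time)

# Minimal convolutional classifier treating the spectrogram as an image.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)

# One step of a typical optimization loop (cf. the optimization unit above).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(spec), labels)
loss.backward()
optimizer.step()
print(f"toy cross-entropy loss: {loss.item():.3f}")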
Prerequisites
Most of the prerequisites will be briefly recalled in class. However, basic knowledge of linear algebra, signal theory and stochastic processes is strongly recommended, as well as basic programming skills.
Books
Main textbook:
- Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
Alternative textbooks:
- Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”. Elsevier, 2020.
- Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”. The MIT Press, 2021.
- Christopher M. Bishop, “Pattern Recognition and Machine Learning”. Springer, 2006.
Supplementary material provided by the instructor (course slides, papers) is available on the website http://danilocomminiello.site.uniroma1.it and on the Classroom page of the course.
Teaching mode
The course is based on regular classroom lessons, which also include Python exercises on practical problems. Exercises may be carried out in small teams of students.
Attendance
Course attendance is strongly recommended but not mandatory. Non-attending students will have access to all the information and teaching materials necessary to take the exam under the same conditions and in the same way as attending students.
Exam mode
Exam grades (on the Italian 30-point scale) will be based on homework assignments (30%) and a final project (70%). Final projects will be assigned to small teams of students. The assessment evaluates both the theoretical skills acquired by the student and the ability to apply and implement a specific methodology in a practical problem.
Bibliography
References
[1] Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”. Elsevier, 2020.
[2] Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”. The MIT Press, 2021.
[3] Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, “Dive Into Deep Learning”. Amazon, 2020.
[4] Sebastian Raschka and Vahid Mirjalili, “Python Machine Learning”, 3rd ed. Packt Publishing, 2019.
[5] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”. The MIT Press, 2016.
[6] Christopher M. Bishop, “Pattern Recognition and Machine Learning”. Springer, 2006.
  • Lesson code: 1056158
  • Academic year: 2025/2026
  • Course: Electronics Engineering
  • Curriculum: Ingegneria Elettronica (track also valid for obtaining the Italian-US or Italian-French double degree)
  • Year: 1st year
  • Semester: 2nd semester
  • SSD: ING-IND/31
  • CFU: 6