Critical Data Analysis
Course objectives
Presentation

Critical Data Studies is an interdisciplinary subject taught by professors from three disciplines: computing, philosophy, and law. The course trains students to address issues of personal data protection, as well as issues arising from two broad applications of intelligent systems: data-driven decision support and automated decision making. About half of the sessions cover computing technologies for, e.g., anonymizing data or detecting and mitigating algorithmic bias. The other half study different conceptualizations of power around data processing pipelines, analyze bias and discrimination in computer systems from a moral philosophy perspective, and give an overview of the relevant legal frameworks for data processing.

The course includes 12 theory sessions for delivering and discussing the main concepts and methods. Optionally, students can attend 6 seminar sessions for case studies (not graded) and 6 practice sessions to receive help with the data analysis assignments. The evaluation is based on a mid-term exam and a final exam (on the theory part), assignments (data analysis), and a report (algorithmic audit project).

The scope of the course is issues of fairness, accountability, and transparency in data processing from an ethical, legal, and technological perspective:

1. Personal data processing: privacy, confidentiality, surveillance, recourse, data collection, and power differentials
2. Data-driven decision support: biases and transparency in data processing, data-rich communication, and data visualization
3. Automated decision making: conceptualizations of power and discrimination in scenarios with different degrees of automation
4. External algorithmic auditing in practice: data collection, metrics definition, metric boundaries, reporting

Associated skills

CB8. Students are able to integrate knowledge and face the complexity of making judgments on the basis of information that is incomplete or limited, including reflecting on the social and ethical responsibilities associated with the application of their knowledge and judgment.

CE1. Apply models and algorithms in machine learning, autonomous systems, natural language interaction, mobile robotics, and/or web intelligence to a well-identified problem of intelligent systems.

Learning outcomes (E1)

1. Solves problems related to interactive intelligent systems. Specifically, students can solve the problem of detecting and mitigating biases in such a system.
2. Identifies the appropriate models and algorithms to solve a specific problem in the field of interactive intelligent systems. Specifically, students can identify data processing methods to reduce disclosure risk and to mitigate biases.
3. Evaluates the result of applying a model or algorithm to a specific problem. Specifically, students can use standard metrics of algorithmic fairness (see the sketch below) and, at the same time, understand the limitations of such metrics.
4. Presents the result of the application of a model or algorithm to a specific problem according to scientific standards. Specifically, students can present in writing the results of an external audit (without the collaboration of the auditee), performed over an existing dataset or an existing online service.

Sustainable Development Goals

• SDG5 - Gender equality
• SDG10 - Reduced inequalities
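The fairness metrics mentioned in learning outcome 3 can be made concrete with a short sketch. The snippet below computes the statistical parity difference, one of the standard group-fairness metrics; it is an illustrative sketch only, and the function name, toy data, and binary encoding of the protected attribute are assumptions introduced here, not course material.

```python
# Minimal sketch of a group-fairness metric: statistical parity difference.
# All names and data below are hypothetical, for illustration only.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred : array of 0/1 predictions from a classifier
    group  : array of 0/1 membership in a protected group
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_group1 = y_pred[group == 1].mean()  # positive rate for group 1
    rate_group0 = y_pred[group == 0].mean()  # positive rate for group 0
    return rate_group1 - rate_group0

# Hypothetical usage with toy data
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A value near zero means both groups receive positive predictions at similar rates; as the course stresses, such a single number does not by itself establish or rule out discrimination.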
- Lesson code: 10610027
- Academic year: 2025/2026
- Course: Artificial Intelligence
- Curriculum: Single curriculum
- Year: 2nd year
- Semester: 1st semester
- SSD: ING-INF/05
- CFU: 6